CVSweb with nginx

Anyone using FreeBSD, NetBSD, or OpenBSD must have at least once opened their respective online source repository browser – CVSweb. CVSweb is basically the CVS version of hgwebdir and co – a web application that gives CVS repositories a web interface, letting you browse the contents of a repository remotely without a command line. You can see CVSweb in action here (OpenBSD), here (NetBSD), and here (FreeBSD).

One day, I wanted to install CVSweb on one of my virtual machines. Instead of using Apache, I decided to try installing it on nginx. The obvious problem with installing CVSweb on nginx is nginx’s non-existent CGI support. Well, for starters, CGI is considered slow, etc., but that doesn’t help in this case.

Then we have fcgiwrap. Simply put, it’s a program that provides CGI support over FastCGI, which nginx does speak. Installing it is relatively easy, and to run it you’ll need either the sample launcher provided on its website, spawn-fcgi, or supervisord. Note that under FreeBSD (and most likely anything other than Linux) you have to be, um, kind of creative – it uses autoconf, and the configure script will fail when searching for the FastCGI library. There is a way to skip autoconf entirely, though. Simply fetch fcgiwrap.c and compile it with this command:

cc -Wall -Werror -O2 fcgiwrap.c -o fcgiwrap -lfcgi -L/my/fcgi/lib -I/my/fcgi/include -static

The -static option lets you remove the FastCGI library from the system and still run fcgiwrap. Copy the compiled fcgiwrap somewhere you like and configure your FastCGI launcher. Under supervisord it looks like this:

[fcgi-program:fcgiwrap]
process_name=%(program_name)s_%(process_num)02d
command=/Applications/fcgiwrap/bin/fcgiwrap
socket=unix:///tmp/%(program_name)s.sock
socket_mode=0777
numprocs=3
autorestart=true
user=fcgiwrap

Adjust the paths, etc. Make sure not to run it as root – you don’t want to take any risk.
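If you’d rather skip supervisord, spawn-fcgi can do the same job from one command. The paths, user name, and worker count below are just placeholders matching the config above – adjust them to your system:

```shell
# spawn 3 fcgiwrap workers on a unix socket, dropping privileges to the fcgiwrap user
spawn-fcgi -s /tmp/fcgiwrap.sock -M 0777 \
  -u fcgiwrap -g fcgiwrap -F 3 \
  -P /var/run/fcgiwrap.pid -- /Applications/fcgiwrap/bin/fcgiwrap
```

Same caveat as above: don’t run it as root.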

Once fcgiwrap is set up correctly, you then need to install CVSweb itself. I’m not going to pretend anything here: I only know how to install it from (FreeBSD’s) ports. A manual install involves fulfilling dependencies, which is not quite easy to do. It’s also available on Ubuntu and probably some other distros. After installation, locate cvsweb.cgi and the static data and copy them somewhere you like (I usually use /srv/http/cgi/cvsweb, btw). It goes something like this:

  • cvsweb.cgi (and its config – cvsweb.conf): /srv/http/cgi/cvsweb/root
  • css, icons: /srv/http/cgi/cvsweb/public
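For reference, installing from ports is the usual dance – the port name here is from memory and may differ on your tree:

```shell
# build and install CVSweb from the FreeBSD ports tree
cd /usr/ports/www/cvsweb3
make install clean
```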

Before continuing to the nginx config, rename and move cvsweb.cgi to whatever name and location you want, relative to the CVSweb path you want in the address. Examples:

  • https://myconan.net/cvsweb: rename to cvsweb
  • https://myconan.net/cgi-bin/cvsweb.cgi: put cvsweb.cgi in /srv/http/cgi/cvsweb/root/cgi-bin/
  • https://myconan.net/some/where: rename cvsweb.cgi to where and put in /srv/http/cgi/cvsweb/root/some/

You get the idea.

After that, it’s relatively simple in nginx:

  location = /cvsweb {
    rewrite ^ /cvsweb/ permanent;
  }
  location /cvsweb/ {
    alias /srv/http/cgi/cvsweb/public/;
    try_files $uri @cvsweb;
  }
  location @cvsweb {
    include fastcgi_params;
    fastcgi_pass unix:/tmp/fcgiwrap.sock;
    root /srv/http/cgi/cvsweb/root;
  }

And done.

Or better yet, use ViewVC instead of CVSweb and have nginx proxy to its standalone server.

Windows 7 Timeline

I think it went like this:

  • Windows 98 released: 95’s successor; isn’t really useful until Second Edition. Relatively unstable.
  • Windows 2000 released: NT4’s successor, targeted at workstation use (not general use). Some hardware support carried over from NT4 (not quite smoothly). Driver (and application) problems got solved as time went on.
  • Windows ME released: adds some features; too bad it’s still based on 98. Even more unstable than 98.
  • Windows XP released: can use most drivers from 2000 (guess why it was easier to accept), and actually targeted at general use. Highly preferred since ME was crashier than ever and 98 was getting old – and XP is much more stable, being from the NT line. And, repeating myself, it enjoyed the driver availability from 2000.
  • …a few years went by without any new Windows release:
    • Most available programs assumed everyone is an administrator
  • Windows Vista released: architectural change. Most drivers broke, and most applications assuming “user is administrator” broke in unfunny ways (including WinRAR, Foxit Reader, etc.).
  • Most people stayed on Windows XP, but most new applications actually got fixed to stop assuming “user is administrator”. Drivers for Vista kept coming.
  • Windows 7 released: most applications are fixed, drivers from Vista are usable. “It’s much better than Vista.” Everyone is happy.

In summary: without Windows 2000, Windows XP probably wouldn’t have succeeded. In the same spirit, without Windows Vista, Windows 7 probably wouldn’t have succeeded. The latter failure would have been much worse had it actually happened, though. And if you look closely, 2000 -> XP is 5.0 -> 5.1 and Vista -> 7 is 6.0 -> 6.1. See a pattern there?

In other news, Compiz’s alt-tab (and the switcher, etc.) is still as crappy as ever. Who cares about fancy animations? Give me a non-crappy alt-tab please!

Hayate no Gotoku ch. 272

Fanservice chapter.

The following page: “Just take the entire dresser!!” LOL

And there’s this one gem: “But, whatever it’s like, I’ll be fine with it as long as it has an internet connection.”

I noticed, though, that Maria is using something resembling an Xperia X10. Technology sure advances much faster in the HnG world.

On Seagate’s 3 TB

So, Seagate confirmed a 3 TB disk. It would be able to store a lot of stuff.

Or a few videos.

Well, IMO, the only benefit of this is that prices of lower-capacity disks will drop. I probably won’t get one anytime soon. No. None.

Why? It’s really simple: it’s a dangerous thing to do. You see, having 3 TB of data on one freaking disk, without backup, is not good for one’s health. Yes really, you don’t want to do that. Just imagine when it becomes inaccessible due to reaching its MTBF or plain bad luck (it will). If you store all of your data there, I can see you crying.

Now if you put it in a RAID 1 (mirror), it’d take about 10 hours at best to complete resilvering on a restore. Assuming no other disk access and a full sustained 80 MiB/s, that is. Good thing USB 3 is getting common (no, it isn’t yet here – in fact I’ve never seen one).
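For the curious, that estimate is simple arithmetic – a quick sketch assuming the vendor’s decimal 3 TB and a sustained 80 MiB/s:

```ruby
# rough best-case resilver time for a full 3 TB mirror rebuild
disk_bytes  = 3 * 10**12        # 3 TB as disk vendors count it (decimal)
write_speed = 80 * 1024 * 1024  # 80 MiB/s sustained, no other disk access
hours = disk_bytes.fdiv(write_speed) / 3600
puts hours.round(1) # => 9.9
```

In practice a resilver never gets an idle disk and a flat transfer rate, so the real number is worse.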

Got them covered? Now imagine the one time one of your files gets silently corrupted – especially in case of an accidental power-down, or just a bad sector. How would you find the bad file? Scan every single one of them? All 3 TB? Ha.

The only plausible use case I can think of is a personal NAS – one running (Open)Solaris, Nexenta, or FreeBSD (or NetBSD once it’s ready). Oh, and probably Linux with its Btrfs, which will probably be ready just before Hurd becomes usable. Even then I wouldn’t use it unless it had a RAID-6-like mode. Never mind, I probably wouldn’t trust anything other than ZFS anytime soon. And which filesystem are you going to use for it on Windows? NTFS? Lol.

In fact, I’d hoped those resources went into any of these instead:

  • improvements in disk access time, especially for random and parallel read/write operations,
  • smaller physical disk size, or
  • cheaper and bigger SSDs,

not into some “hey look, we’ve created the biggest disk ever!”

On the bright side, though, lower-capacity disks will get cheaper thanks to the new higher-capacity one.

</random whatever>

On waix.dl.sourceforge.net

I’m placing a permanent ban on waix.dl.sourceforge.net – from now on, on EVERY SINGLE SYSTEM I manage I’ll put this one magic line into the hosts file (/etc/hosts on *nix, %WINDIR%\System32\drivers\etc\hosts on Windows):

127.0.0.1 waix.dl.sourceforge.net

That one mirror has been giving me headaches – it’s a completely irrelevant mirror for anyone not on WAIX’s network, which is ONLY AVAILABLE IN AUSTRALIA. Everyone else gets crappy speed – crappy even by my crappy connection’s standards (something like 4 KB/s). Why SourceForge still serves it to everyone else is a mystery to me. Their solution is not feasible for me. Seriously, changing the mirror by visiting their site on every single machine? Why the heck can’t they just disable that fail mirror for everyone outside Australia? Or better, why don’t they just remove waix from the list of mirrors – public mirrors are supposed to have a good connection to everyone, not just some small area in a corner of the world.

Oh, in case I’m not being clear enough, in summary, I’m saying:

FUCK YOU WAIX

Epic Bad Luck

Not the quality, but the quantity – the sheer number of fails that happened today and yesterday.

  • FreeBSD plus ZFS on an external hard disk is a disaster combination: it likes to break itself. Crashed/kernel-panicked twice in 18 hours.
  • I got “caught” infiltrating a certain place. Worse, I have to report the “incident” to a certain person.
  • My attempt to renew my driving license turned out to be a failure due to my lateness – note that the place I tried is supposed to stay open for 6 more hours, but the form was no longer available.
  • I forgot to order a certain device that would enable me to do online banking. The pickup place in the area where I’m currently working is a little bit out of my reach.
  • It turns out the English version of my academic transcript, which I ordered 6 months ago, is still not available due to incomplete data I provided. And there I was assuming it had been done months ago.
  • My professor forgot to hand a certain document I need (ASAP) to the administration dept.
  • And the last one: my printer’s ink ran out and I can’t print a document I must print.

Great.

ed2k hash using ruby

I’ve created an ed2k hashing implementation in Ruby. It’s not too slow and only uses a core library (openssl).

require 'openssl' # MD4 comes from the openssl core library

def file_ed2k(file_name, output_mode = "hash")
  ed2k_block = 9500 * 1024 # ed2k block size is 9500 KiB
  ed2k_hash = ""
  file = File.open(file_name, 'rb')
  file_size = file.stat.size # while at it, fetch the size of the file
  while (block = file.read(ed2k_block)) do
    ed2k_hash << OpenSSL::Digest::MD4.digest(block) # the ed2k hash is built from concatenated per-block MD4 digests
  end
  ed2k_hash << OpenSSL::Digest::MD4.digest("") if file_size % ed2k_block == 0 # when the size is an exact multiple of the block size, append the MD4 of an empty string
  file.close
  ed2k_hash = if file_size < ed2k_block
    ed2k_hash.unpack("H*").first # a single-block file is hashed directly, so just hex-encode its digest
  else
    OpenSSL::Digest::MD4.hexdigest(ed2k_hash) # finally, MD4 over the concatenated digests
  end
  case output_mode # there are 2 modes: just the hash, or a complete link
  when "hash"
    ed2k_hash
  when "link"
    "ed2k://|file|#{File.basename(file_name)}|#{file_size}|#{ed2k_hash}|"
  end
end

You can then call the file_ed2k method (or whatever you name it) to calculate a file’s ed2k hash. The link generation mode is built in to reduce the amount of IO involved – the file size needed for the link is fetched while the file is already being read for hashing.

ef complete translation by nnl!

Yes, it’s out! As mentioned on their website, there are two parts: an installer file and a data file. The installer is available as a DDL linked on their website, and the data file is available via torrent and XDCC somewhere.

Since I was the one who pointed/arranged folks at [Commie] to provide the XDCC, aside from the official(?) nnl channel on irchighway it’s also available in the Commie channel on Rizon.

File data (as of released 1 May 2010):

  • installer:
    • filename: ef_Lite_Installer_[nnl].exe
    • filesize: 5737984 bytes (5.74 MB / 5.47 MiB)
    • md5: 04e0304bc2efbc3f298af188f1a4fb46
    • sha256: 4945c1563d743284853f87619a0b5a5a2a8ef2ae602198083ecfd7398afc3703
  • data file:
    • filename: ef_Lite_Data_[nnl].cnd
    • filesize: 2315609468 bytes (2.32 GB / 2.16 GiB)
    • md5: d6a9ab3fc37ec715f8169990514b1e66
    • sha256: 6a20bb48a4787144e0f83ea4ca3ad397aece5380f5c09d9b85d8d9a716468ef6

(information as of today, may become irrelevant after several days/months/years)

XDCC:

[ nnl | Installer | Data file torrent ]

(yes, this post is another experiment to get free traffic, lololol)

(derp, WordPress broke my link. No wonder no one clicked on the torrent)