Archive for February, 2006

More work on xulrunner

Xulrunner finally reached unstable on Sunday, and it already needed some adjustments. I had forgotten a conflict, misnamed the libsmjs-dev package, the hppa assembler code somehow got duplicated, which made xulrunner fail to build on hppa, and some preference files needed to be moved between packages.

It also fails to build on alpha, but that's not my fault.

I also changed the way we identify a Debian-built xulrunner in the user agent. I replaced the "Gecko/yyyymmdd" string with "Gecko/Debian/x.y.z.t-r". That has several advantages over the previous "Gecko/yyyymmdd Debian/x.y.z.t-r" string:

  • It shortens the already long user agent string, so that additions such as the product name are not painful,
  • it removes pointless information (the date in the original string indicates the date of the build, not that of the API),
  • it keeps the "Gecko" string (which some sites might want, seeing how Apple and Konqueror added a "like Gecko" string),
  • and finally, it avoids confusion with other Debian release information that may be present in the product-specific part (I think Galeon adds some, for instance).

That needed two updates, because I realized in between that while you can set the general.useragent.product and general.useragent.productSub preferences, they are not actually taken into account until you change them at least once... The HTTP protocol initialization in xulrunner forces them to the respective values Gecko and yyyymmdd... I just removed that part of the code in xulrunner entirely.
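For reference, the two preference names involved are the real Mozilla preferences mentioned above; this sketch shows how a distributor default-prefs file might set them. The file name and the version value are illustrative, not what the package actually ships:

```javascript
// Illustrative distributor pref file (e.g. a debian.js under the
// application's defaults/pref directory -- path is an assumption).
// The pref names are the real Mozilla preferences discussed above.
pref("general.useragent.product", "Debian");
pref("general.useragent.productSub", "1.8.0.1-2");
```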

I took advantage of this needed fix to work some more on the package: enabling the typeaheadfind module, needed by galeon and epiphany, removing some old perl code that does nothing except produce useless errors during the build, and, last but not least, enabling the "flat style" chrome, meaning that instead of being stuck in .jar files, everything sits in a standard directory tree.

It appears upstream provides a way to build such trees (the --enable-chrome-format=flat option to configure), but it's useless as-is, because it doesn't install the trees when running make install. On top of that, the configure script still requires zip even when it doesn't use it, except for some obscure .jar files in libnss, which are not installed either. So after disabling the build of these useless files, dropping the strict zip requirement when building flat chrome, and fixing the script responsible for installing the chrome files, everything got much better.
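The configure option named above would typically be enabled through a mozconfig fragment; this is a sketch of that one line (the --enable-chrome-format=flat option comes from the post, the mozconfig mechanism is standard Mozilla build practice, and any other flags you would put around it are up to your build):

```shell
# mozconfig fragment: build chrome as a flat directory tree instead of
# packing it into .jar files. The option name is the one discussed above.
ac_add_options --enable-chrome-format=flat
```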

These changes have been applied to firefox as well, so stay tuned, it's for next upload.

Having hijacked spidermonkey, I did some bug triage there as well. It turned out all the bugs could be closed now that libmozjs0d is there. I still need to triage my patches to xulrunner and send upstream those that I haven't sent yet, if they are of any interest there.

Apart from this xulrunner stuff, I also did some bug clean-up on libxml2 and libxslt, which I had been somewhat neglecting for some time.

2006-02-22 21:06:21+0900

p.d.o, xulrunner | 6 Comments »

Is libpng crap? And php?

As I mentioned before, I've been working for a few days on a way to display Japanese characters on a web page when you don't have Japanese fonts locally. That involves taking Japanese characters from a font file and turning them into a PNG image. So I just wrote a small and simple program using libpng and freetype. Which works quite well. But that's not the point.

During this "development", I lost 2 or 3 days trying to understand what was wrong with my code, which worked properly from a shell command but failed halfway through when wrapped in a php script. I still don't know what's wrong with php, but the program works fine with a perl cgi wrapper, and it still fails if I raise the php memory limit. Interestingly, other libpng tools (such as pngtopnm) fail in the same way. It wouldn't surprise me if it were libpng's fault.

The first thing I did to understand why it failed was to run the program under valgrind. And what I saw was not encouraging: a lot of "Conditional jump or move depends on uninitialised value(s)". I tried different combinations of code, using png_write_image or png_write_row, with large or small buffers...

It turns out libpng can't write a png file with a width smaller than 148 pixels without those "Conditional jump or move depends on uninitialised value(s)" errors, which happen, as the stack trace shows, in zlib, called from png_write_row, but I doubt it's a zlib issue. You would think it's a buffer size issue, but even if you use a larger buffer for each line you pass to png_write_row, you still get the errors.
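The program in question is C with libpng, but purely to illustrate what png_write_row ultimately hands to zlib (a filter-type byte prepended to every scanline, the whole stream deflated into an IDAT chunk), here is a self-contained sketch in Python that writes a minimal 8-bit grayscale PNG by hand. It is an illustration of the file format, not the author's code:

```python
import struct
import zlib

def chunk(tag, data):
    # Each PNG chunk: 4-byte big-endian length, 4-byte tag, payload,
    # then a CRC computed over tag + payload.
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def write_png(path, rows):
    # rows: list of equal-length bytes objects, one per scanline,
    # 8-bit grayscale (one byte per pixel).
    height = len(rows)
    width = len(rows[0])
    # IHDR: width, height, bit depth 8, color type 0 (grayscale),
    # compression 0, filter 0, interlace 0.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    # Every scanline gets a filter-type byte (0 = "None") before the
    # whole stream is deflated -- this is the data png_write_row feeds
    # into zlib, row by row.
    raw = b"".join(b"\x00" + r for r in rows)
    png = (b"\x89PNG\r\n\x1a\n"
           + chunk(b"IHDR", ihdr)
           + chunk(b"IDAT", zlib.compress(raw))
           + chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)
```

Nothing in this sketch depends on the image width, which is part of why the width-dependent valgrind warnings in libpng look suspicious: the format itself treats a narrow scanline no differently from a wide one.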

If anyone has a clue, please leave a comment.

Update: Thanks Mark for your comment, though I don't get how it can be safe to have such accesses to uninitialized memory. Deuce, the php issue can't be a timeout issue, the program runs in fractions of a second.

2006-02-19 14:21:32+0900

miscellaneous, p.d.o | 3 Comments »

How-to: switch your partitions to LVM and lose everything

(almost)

I've had problems with disk space for a while, so much so that I was "creating space" by building a 5GB LVM logical volume over 3 files (through loop devices) on 3 different partitions (/usr, /var, and /home) that each had less than 5GB free. But I needed 5GB free.

So, instead of continuing to rely on such brain-damaged solutions, I decided to just switch everything to LVM, which, incidentally, would allow me to use LVM snapshots for pbuilder or similar tools (pbuilder doesn't support LVM snapshots, as far as I know). Since my / partition is quite small (so small that I sometimes have space problems at kernel upgrade time), I decided to keep it and just transform my /usr, /var, and /home partitions into one big LVM physical volume.

The challenge was to move all the files without resorting to external devices. Yes, I have other devices on other PCs, but they are all in about the same shape: full as hell. No way I could possibly free 40GB for backups. I couldn't even burn some data: my DVD burner died a few weeks ago and I still haven't bought a new one. I could have waited, but well, I wanted to do it.

So, the weekend before last, I decided to go ahead and do it the painful way. I used my swap partition to move /usr and /var around, so that I could free the space from these 2 partitions to create the beginnings of the LVM physical volume. First trick: there was not enough space in the swap partition, so I just deleted a bunch of files (mostly /usr/share/doc/ and /usr/share/locale files) so that everything would fit. After moving it all around, I would just unpack all installed debian packages so that the files would be there again.

Then, I managed to move everything in chunks of 4~5GB, which was laborious. It involved creating temporary LVM physical volumes on temporary partitions and then moving the logical volumes' extents from one physical volume to another on the same disk. Brain damage.
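The per-chunk shuffling described above follows a standard LVM pattern; a sketch with the stock tools, where the device names are hypothetical and this is the general shape of one iteration, not the exact commands used:

```shell
# One round of the shuffle: turn a freed partition into a temporary PV,
# add it to the volume group, migrate extents onto it, then retire the
# old PV. (Device names are made up for illustration.)
pvcreate /dev/hda6
vgextend vg0 /dev/hda6
# pvmove relocates the logical volumes' extents off /dev/hda5,
# while the volumes stay online.
pvmove /dev/hda5 /dev/hda6
vgreduce vg0 /dev/hda5
```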

Anyway, the big fright came when I had the stupid idea to delete the partitions and create them again. Why? Because neither fdisk (no surprise) nor parted can resize a partition. Well, parted can, but apparently not without playing around with the underlying filesystem.

So, I deleted and recreated the partitions. And guess what? They didn't end up on the same sectors... The second stupid idea was to run pvcreate on the newly created, not-quite-the-same partitions... I thought I had just lost everything. Pictures, personal repositories (especially 6 months' worth of work on xulrunner; well, 6 months of real time, not of work time, but still), everything.

It's actually not the first time I've (almost) lost everything. I once did rm -rf / in a chroot... in which /home was bind-mounted... and wrote a program to scan the file-system for the precious files (that's the story of ext3rminator, which I still have to release once and for all). That was 3 years ago.

First, calm down. Second, the bits are still on the disk (except maybe the bits overwritten by pvcreate). Third, calm down. Fourth, think.

All I obviously had to do was to find the correct location of the partitions, and pray that pvcreate hadn't blown everything away. Which was almost the case. I could find 2 of the 3 LVM volumes, while the third one seemed in pretty bad shape. A bit of googling on another computer pointed me to vgcfgrestore. So I ran pvcreate once more over this broken volume, and used vgcfgrestore to get the volume group right. It seemed okay. I could mount the volume. That was close.
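The pvcreate-then-vgcfgrestore recovery described above can be sketched like this. The commands and flags are the real LVM2 ones; the VG name, device, and UUID are placeholders, and in practice the PV UUID comes from the metadata backup itself:

```shell
# Recreate the clobbered PV with its original UUID, taken from the
# automatic metadata backup LVM keeps under /etc/lvm/backup/.
pvcreate --uuid 56ogEk-... --restorefile /etc/lvm/backup/vg0 /dev/hda5
# Restore the volume group metadata from that same backup,
# then reactivate the volumes.
vgcfgrestore -f /etc/lvm/backup/vg0 vg0
vgchange -ay vg0
```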

Well, to be sure, let's run a fsck... ouch... inconsistencies in some inodes. Okay, correct them. Once, twice... a couple of times. Wow, is it that fucked up? It can't be that fucked up.

So I took another look and discovered that the third volume was offset by one sector. 512 bytes. Ouch, letting fsck correct things was definitely not a good idea. Correcting the partition table again, pvcreating and vgcfgrestoring once more, fsck, and let's rock and roll.

Conclusion(s):
- I only lost 150MB of MP3. I don't care.
- Never ever use parted again, and never ever use fdisk in basic mode again.
- Avoid tricky procedures with stuff you are trying for the first time.
- Backups are for the weakest.
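One way to dodge the recreated-partitions-on-different-sectors trap entirely: dump the exact table with sfdisk before touching anything. A sketch, with a hypothetical device name, using sfdisk's real dump/restore mode:

```shell
# Save the partition layout, sector-accurate, before any repartitioning.
sfdisk -d /dev/hda > hda-partition-table.txt
# If a recreation goes wrong, put the original table back verbatim.
sfdisk /dev/hda < hda-partition-table.txt
```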

2006-02-14 21:56:19+0900

ext3rminator, miscellaneous | Comments Off on How-to: switch your partitions to LVM and lose everything

Font packages

I'm currently working on a way to display Japanese characters on web pages on a computer where Japanese fonts are not installed. It involves a server-side image generator for glyphs, for which I needed the .ttf files for appropriate fonts, like kochi-mincho.

So, I just apt-get installed ttf-kochi-mincho, and it told me it wanted to install a bunch of stuff I don't care about. I ended up getting the .ttf files directly from the sourceforge site... I don't want to clutter my server with X packages.

The question is: why do font packages need to depend on xutils (not all of them, but quite a few) and defoma (all of them, apparently)? Couldn't they just recommend defoma and run the font registration thingy only if it is present? If it's not present, nothing would be lost, since defoma's installation would trigger font registration. On the other hand, I don't see why some of them need xutils at all...
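The conditional registration suggested above could look something like this in a font package's maintainer script. This is a sketch under assumptions: defoma-font is a real defoma tool, but the exact subcommand arguments, category, and font path shown here are illustrative:

```shell
# Sketch of a postinst fragment: register the font with defoma only if
# defoma happens to be installed; otherwise do nothing, since installing
# defoma later would trigger registration anyway.
if command -v defoma-font > /dev/null 2>&1; then
    defoma-font register truetype \
        /usr/share/fonts/truetype/kochi/kochi-mincho.ttf
fi
```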

2006-02-14 20:50:53+0900

debian | 2 Comments »

Gripes with Firefox

Martin said:

Firefox and OpenOffice.org seem to be two flagship products of the F/OSS world. At the same time, they are probably the two best examples of programmes orthogonal to the Unix philosophy. Does that mean that the Unix philosophy is the wrong way? Holy Shit!

No, Martin, that doesn't mean anything about the Unix philosophy. It only means Windows developers don't have a clue about Unix.

2006-02-14 19:14:05+0900

firefox | 2 Comments »

xulrunner 1.8.0.1-2

You know, nothing is ever perfect the first time, so I had to fix a few issues in a new upload. Well, for you, it makes no difference, since 1.8.0.1-1 never reached the archive, but these changes were needed so that epiphany and friends could properly build against xulrunner.

I also started filing wishlist bugs with patches to build some packages against xulrunner as soon as it gets into the archive. Galeon's is under way. The next one might be kazehakase. Stay tuned.

Update: Patches for galeon, kazehakase, and devhelp sent.

2006-02-08 23:49:38+0900

xulrunner | 4 Comments »

xulrunner 1.8.0.1-1

Checking Signature on .changes
(...)
Good signature on /home/mh/pbuilder/sid/result/xulrunner_1.8.0.1-1_i386.changes.
Checking Signature on .dsc
(...)
Good signature on /home/mh/pbuilder/sid/result/xulrunner_1.8.0.1-1.dsc.
Uploading via ftp xulrunner_1.8.0.1-1.dsc: done.
Uploading via ftp xulrunner_1.8.0.1.orig.tar.gz: done.
Uploading via ftp xulrunner_1.8.0.1-1.diff.gz: done.
Uploading via ftp libnspr4-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp libmozjs-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp libsmjs1_1.8.0.1-1_all.deb: done.
Uploading via ftp libsmjs1-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp libxul-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp libnss3-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp xulrunner_1.8.0.1-1_i386.deb: done.
Uploading via ftp xulrunner-gnome-support_1.8.0.1-1_i386.deb: done.
Uploading via ftp libnspr4-0d_1.8.0.1-1_i386.deb: done.
Uploading via ftp libmozjs0d_1.8.0.1-1_i386.deb: done.
Uploading via ftp spidermonkey-bin_1.8.0.1-1_i386.deb: done.
Uploading via ftp libxul0d_1.8.0.1-1_i386.deb: done.
Uploading via ftp libnss3-0d_1.8.0.1-1_i386.deb: done.
Uploading via ftp xulrunner_1.8.0.1-1_i386.changes: done.
Successfully uploaded packages.

I wonder how long it will take the ftp-masters to accept it. It seems the NEW queue is not as fast as it used to be. I've been waiting a week for libxml2 and libxslt to get through, and they still haven't made it. If enough people are interested in the package before it comes out of NEW, I'll probably upload it to my repository.

I have a patch for epiphany-browser that I'll send soon to the BTS. I'll come up with a patch for yelp as well. Then, bye bye mozilla-browser.

I'm also going to build an unofficial firefox on top of xulrunner, and maybe upload it to experimental, so that we can get some feedback. I'll see about that with Eric.

2006-02-07 21:21:03+0900

xulrunner | 2 Comments »