Author Archive

Is libpng crap? And php?

As I mentioned before, I've been working for a few days on a way to display Japanese characters on a web page when you don't have Japanese fonts installed locally. That involves taking Japanese glyphs from a font file and turning them into a PNG image. So I wrote a small and simple program using libpng and FreeType, which works quite well. But that's not the point.
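
For the curious, here is a minimal sketch of what such a program looks like. It is not my actual code: the font path, the character, and the pixel size are made up for the example.

    /* Sketch of a glyph renderer: render one character with FreeType and
     * write the resulting bitmap as an 8-bit grayscale PNG with libpng.
     * Build: gcc -o glyph2png glyph2png.c `freetype-config --cflags --libs` -lpng
     */
    #include <stdio.h>
    #include <png.h>
    #include <ft2build.h>
    #include FT_FREETYPE_H

    int main(void)
    {
        FT_Library library;
        FT_Face face;
        FT_Bitmap *bitmap;
        FILE *fp;
        png_structp png;
        png_infop info;
        int y;

        /* Render 日 (U+65E5) at 48 pixels from a TrueType font. */
        if (FT_Init_FreeType(&library)
            || FT_New_Face(library, "kochi-mincho.ttf", 0, &face)
            || FT_Set_Pixel_Sizes(face, 0, 48)
            || FT_Load_Char(face, 0x65E5, FT_LOAD_RENDER))
            return 1;
        bitmap = &face->glyph->bitmap;

        fp = fopen("glyph.png", "wb");
        if (!fp)
            return 1;
        png = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
        info = png_create_info_struct(png);
        if (setjmp(png_jmpbuf(png)))
            return 1;
        png_init_io(png, fp);
        png_set_IHDR(png, info, bitmap->width, bitmap->rows, 8,
                     PNG_COLOR_TYPE_GRAY, PNG_INTERLACE_NONE,
                     PNG_COMPRESSION_TYPE_DEFAULT, PNG_FILTER_TYPE_DEFAULT);
        png_write_info(png, info);
        /* FreeType bitmap rows are bitmap->pitch bytes apart. */
        for (y = 0; y < bitmap->rows; y++)
            png_write_row(png, bitmap->buffer + y * bitmap->pitch);
        png_write_end(png, info);
        png_destroy_write_struct(&png, &info);
        fclose(fp);
        return 0;
    }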

During this "development", I lost 2 or 3 days trying to understand what was wrong with my code: it worked properly from a shell command, but failed halfway through when wrapped in a php script. I still don't know what's wrong with php; the program works fine with a perl CGI wrapper, and it still fails even if I raise the php memory limit. Interestingly, other libpng-based tools (such as pngtopnm) fail the same way. It wouldn't surprise me if it were libpng's fault.

The first thing I did to try to understand the failure was to run the program under valgrind. And what I saw was not encouraging: a lot of "Conditional jump or move depends on uninitialised value(s)" warnings. I tried different combinations of code, using png_write_image or png_write_row, with large or small buffers...

It turns out libpng can't write a PNG file with a width smaller than 148 pixels without triggering those "Conditional jump or move depends on uninitialised value(s)" warnings. As the stack trace shows, they happen inside zlib, called from png_write_row, but I doubt it is a zlib issue. You would think it's a buffer size issue, but even if you use a larger buffer for each line you pass to png_write_row, you still get the warnings.
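
Here is a reconstruction of the kind of minimal test case I mean (not my exact code): everything is initialised, and the width is the only thing that matters.

    /* Minimal test case sketch: the image is narrower than 148 pixels and
     * the row buffer is fully initialised, yet "valgrind ./pngtest" still
     * reports the uninitialised-value jumps from inside zlib.
     * Build: gcc -o pngtest pngtest.c -lpng
     */
    #include <stdio.h>
    #include <string.h>
    #include <png.h>

    #define WIDTH  147   /* anything below 148 shows the warnings */
    #define HEIGHT 16

    int main(void)
    {
        png_byte row[WIDTH];
        FILE *fp;
        png_structp png;
        png_infop info;
        int y;

        memset(row, 0x80, sizeof(row));   /* nothing uninitialised here */

        fp = fopen("test.png", "wb");
        if (!fp)
            return 1;
        png = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
        info = png_create_info_struct(png);
        if (setjmp(png_jmpbuf(png)))
            return 1;
        png_init_io(png, fp);
        png_set_IHDR(png, info, WIDTH, HEIGHT, 8, PNG_COLOR_TYPE_GRAY,
                     PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                     PNG_FILTER_TYPE_DEFAULT);
        png_write_info(png, info);
        for (y = 0; y < HEIGHT; y++)
            png_write_row(png, row);
        png_write_end(png, info);
        png_destroy_write_struct(&png, &info);
        fclose(fp);
        return 0;
    }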

If anyone has a clue, please leave a comment.

Update: Thanks, Mark, for your comment, though I don't get how such accesses to uninitialized memory can be safe. Deuce, the php issue can't be a timeout: the program runs in a fraction of a second.

2006-02-19 14:21:32+0900

miscellaneous, p.d.o | 3 Comments »

How-to: switch your partitions to LVM and lose everything

(almost)

I've had disk space problems for a while, so much so that I was "creating space" with a 5GB LVM logical volume spread over 3 files (through loop devices) on 3 different partitions (/usr, /var, and /home), each of which had less than 5GB free. But I needed 5GB free.
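
For the record, the trick goes roughly like this (file names, sizes, and volume names are all made up for the example):

    # Back a PV with a file on each full-ish partition, via loop devices.
    dd if=/dev/zero of=/usr/.lvm0 bs=1M count=1800
    dd if=/dev/zero of=/var/.lvm1 bs=1M count=1800
    dd if=/dev/zero of=/home/.lvm2 bs=1M count=1800
    losetup /dev/loop0 /usr/.lvm0
    losetup /dev/loop1 /var/.lvm1
    losetup /dev/loop2 /home/.lvm2
    pvcreate /dev/loop0 /dev/loop1 /dev/loop2
    vgcreate scratch /dev/loop0 /dev/loop1 /dev/loop2
    lvcreate -L 5G -n space scratch
    mkfs.ext3 /dev/scratch/space
    mount /dev/scratch/space /mnt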

So, instead of continuing to rely on such brain-damaged solutions, I decided to switch everything to LVM, which, incidentally, would allow me to use LVM snapshots for pbuilder or similar tools (pbuilder doesn't support LVM snapshots, as far as I know). Since my / partition is quite small (so small that I sometimes run out of space on kernel upgrades), I decided to keep it as it is and just turn my /usr, /var, and /home partitions into one big LVM physical volume.

The challenge was to move all the files without resorting to external devices. Yes, I have other devices on other PCs, but they are all in about the same shape: full as hell. There was no way I could free 40GB for a backup. I couldn't even burn some of the data: my DVD burner died a few weeks ago and I still haven't bought a new one. I could have waited, but well, I wanted to do it.

So, the weekend before last, I decided to go ahead and do it the painful way. I used my swap partition to move /usr and /var around, so that I could free those 2 partitions and create the beginnings of the LVM physical volume. First trick: there was not enough space in the swap partition, so I deleted a bunch of files (mostly under /usr/share/doc/ and /usr/share/locale/) so that everything would fit. After moving it all around, I would simply unpack all the installed debian packages again so that the deleted files would come back.
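
Getting the deleted files back then boils down to reinstalling everything, with something like this hypothetical one-liner:

    # Hypothetical sketch: reinstall every installed package so that the
    # files deleted under /usr/share/doc and /usr/share/locale come back.
    dpkg --get-selections | awk '$2 == "install" { print $1 }' \
        | xargs apt-get -y --reinstall install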

Then I managed to move everything in batches of 4-5GB, which was laborious. It involved creating temporary LVM physical volumes on temporary partitions, then moving the logical volumes' extents from physical volume to physical volume on the same disk. Brain damage.
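
Each round was, roughly, the standard pvmove dance (device and volume names made up):

    # Drain a loop-device PV onto a temporary-partition PV on the same
    # disk, then drop the emptied PV from the volume group.
    pvcreate /dev/hda5
    vgextend vg0 /dev/hda5
    pvmove /dev/loop0 /dev/hda5
    vgreduce vg0 /dev/loop0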

Anyway, the big fright came when I had the stupid idea of deleting the partitions and creating them again. Why? Because neither fdisk (no surprise) nor parted can resize a partition. Well, parted can, but apparently not without also playing with the underlying filesystem.

So, I deleted and recreated the partitions. And guess what? They didn't end up on the same sectors... The second stupid idea was to run pvcreate on the newly created, not-quite-identical partitions... I thought I had just lost everything. Pictures, personal repositories (especially 6 months' worth of work on xulrunner (6 months of real time, not of work time, but still)), everything.

It's actually not the first time I've (almost) lost everything. I once ran rm -rf / in a chroot... in which /home was bind-mounted... and wrote a program to scan the filesystem for the precious files (that's the story of ext3rminator, which I still have to release once and for all). That was 3 years ago.

First, calm down. Second, the bits are still on the disk (except maybe the bits overwritten by pvcreate). Third, calm down. Fourth, think.

All I had to do, obviously, was find the correct location of the partitions and pray that pvcreate hadn't blown everything away. It was almost okay: I could find 2 of the 3 LVM physical volumes, while the third one seemed in pretty bad shape. A bit of searching with my friend google on another computer pointed me to vgcfgrestore. So I ran pvcreate once more over the broken volume and used vgcfgrestore to restore the volume group metadata. It seemed okay. I could mount the volume. That was close.
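
For reference, the standard recovery procedure goes roughly like this; the PV UUID and the metadata backup both come from /etc/lvm/backup/ (VG name, device, and UUID are made up here):

    # Recreate the PV with its old UUID, then restore the VG metadata.
    # The UUID to reuse can be found in the backup file itself.
    pvcreate --uuid "56ogEk-xxxx-xxxx" --restorefile /etc/lvm/backup/vg0 /dev/hda7
    vgcfgrestore -f /etc/lvm/backup/vg0 vg0
    vgchange -ay vg0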

Well, just to be sure, let's run a fsck... ouch... inconsistencies in some inodes. Okay, correct them. Once, twice... a couple of times. Wow, is it that fucked up? It can't be that fucked up.

So I took another look and discovered that the third volume was off by one sector: 512 bytes. Ouch, letting fsck correct things was definitely not a good idea. I corrected the partition table once more, ran pvcreate and vgcfgrestore again, fsck'd, and let's rock and roll.

Conclusion(s):
- I only lost 150MB of MP3s. I don't care.
- Never ever use parted again, and never ever use fdisk in basic mode again.
- Avoid tricky procedures with tools you are trying for the first time.
- Backups are for the weak.

2006-02-14 21:56:19+0900

ext3rminator, miscellaneous | Comments Off on How-to: switch your partitions to LVM and lose everything

Font packages

I'm currently working on a way to display Japanese characters on web pages on computers where Japanese fonts are not installed. It involves a server-side image generator for glyphs, for which I needed the .ttf files of appropriate fonts, like kochi-mincho.

So, I just apt-get installed ttf-kochi-mincho, and it told me it wanted to install a bunch of stuff I don't care about. I ended up getting the .ttf files directly from the sourceforge site... I don't want to clutter my server with X packages.

The question is: why do font packages need to depend on xutils (not all of them, but quite a few) and defoma (all of them, apparently)? Couldn't they just recommend defoma and run the font registration thingy only if it is present? Nothing would be lost if it weren't: installing defoma later would trigger the font registration anyway. As for xutils, I don't understand why some of them need it at all...
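
Something like this hypothetical snippet in a font package's postinst would do, making a Recommends on defoma sufficient (path and defoma invocation are my guess, not what the packages actually do):

    # Hypothetical postinst snippet: register the font only when defoma
    # happens to be installed; installing defoma later re-registers it.
    if [ -x /usr/bin/defoma-font ]; then
        defoma-font register truetype /usr/share/fonts/truetype/kochi-mincho.ttf
    fi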

2006-02-14 20:50:53+0900

debian | 2 Comments »

Gripes with Firefox

Martin said:

Firefox and OpenOffice.org seem to be two flagship products of the F/OSS world. At the same time, they are probably the two best examples of programmes orthogonal to the Unix philosophy. Does that mean that the Unix philosophy is the wrong way? Holy Shit!

No, Martin, that doesn't mean anything about the Unix philosophy. It only means Windows developers don't have a clue about Unix.

2006-02-14 19:14:05+0900

firefox | 2 Comments »

xulrunner 1.8.0.1-2

You know, nothing is perfect the first time, so I had to fix a few issues in a new upload. For you, it makes no difference, since 1.8.0.1-1 never reached the archive, but these changes were needed so that epiphany and friends could properly build against xulrunner.

I also started filing wishlist bugs with patches to build some packages against xulrunner as soon as it gets into the archive. Galeon is under way. The next one might be kazehakase. Stay tuned.

Update: Patches for galeon, kazehakase, and devhelp sent.

2006-02-08 23:49:38+0900

xulrunner | 4 Comments »

xulrunner 1.8.0.1-1

Checking Signature on .changes
(...)
Good signature on /home/mh/pbuilder/sid/result/xulrunner_1.8.0.1-1_i386.changes.
Checking Signature on .dsc
(...)
Good signature on /home/mh/pbuilder/sid/result/xulrunner_1.8.0.1-1.dsc.
Uploading via ftp xulrunner_1.8.0.1-1.dsc: done.
Uploading via ftp xulrunner_1.8.0.1.orig.tar.gz: done.
Uploading via ftp xulrunner_1.8.0.1-1.diff.gz: done.
Uploading via ftp libnspr4-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp libmozjs-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp libsmjs1_1.8.0.1-1_all.deb: done.
Uploading via ftp libsmjs1-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp libxul-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp libnss3-dev_1.8.0.1-1_all.deb: done.
Uploading via ftp xulrunner_1.8.0.1-1_i386.deb: done.
Uploading via ftp xulrunner-gnome-support_1.8.0.1-1_i386.deb: done.
Uploading via ftp libnspr4-0d_1.8.0.1-1_i386.deb: done.
Uploading via ftp libmozjs0d_1.8.0.1-1_i386.deb: done.
Uploading via ftp spidermonkey-bin_1.8.0.1-1_i386.deb: done.
Uploading via ftp libxul0d_1.8.0.1-1_i386.deb: done.
Uploading via ftp libnss3-0d_1.8.0.1-1_i386.deb: done.
Uploading via ftp xulrunner_1.8.0.1-1_i386.changes: done.
Successfully uploaded packages.

I wonder how long it will take the ftp-masters to accept it. It seems the NEW queue is not as fast as it used to be: I've been waiting for a week for libxml2 and libxslt to get through, and they still haven't made it. If enough people are interested in the package before it comes out of NEW, I'll probably upload it to my repository.

I have a patch for epiphany-browser that I'll soon send to the BTS. I'll come up with a patch for yelp as well. Then, bye bye mozilla-browser.

I'm also going to build an unofficial firefox on top of xulrunner, and maybe upload it to experimental, so that we can get some feedback on it. I'll discuss that with Eric.

2006-02-07 21:21:03+0900

xulrunner | 2 Comments »

Deep Blue, Deep Blitz, what’s next?

People still seem to want to build big computers to beat chess masters, even though Kasparov was already beaten by the upgraded Deep Blue, nicknamed Deeper Blue, back in 1997. But really, what's the point?
I mean, all it demonstrates is that brute force beats humans. And now that humans have been beaten, where's the challenge?

So, let's move on a bit and go for a real challenge: beating humans at Go. You know, that simple game with simple rules, at which no computer has yet beaten even average players. I'm not even talking about advanced players or masters, because computer play is so boring and lame that strong players prefer to play against humans.

And if they want to apply brute force the way they did for chess, well, they'd need a computer next to which Deep(er) Blue would look like an abacus.

More seriously, though, why isn't there more AI research on Go, now that chess is more or less mastered? Another thing that puzzles me: how come gnugo is so good compared to other Go programs, while free software chess programs are so lame?

2006-01-27 19:54:28+0900

miscellaneous, p.d.o | 1 Comment »

What is your Perfect Major ?

You scored as Engineering. You should be an Engineering major!

Engineering: 92%
Chemistry: 75%
Mathematics: 75%
Philosophy: 67%
Linguistics: 67%
Psychology: 67%
Sociology: 58%
Biology: 58%
Theater: 42%
English: 42%
Anthropology: 33%
Art: 33%
Journalism: 25%
Dance: 17%

What is your Perfect Major?
created with QuizFarm.com

2006-01-13 18:09:41+0900

me, p.d.o | 1 Comment »

Meme time


Your Inner European is French!

Smart and sophisticated. You have the best of everything - at least, *you* think so.

Who's Your Inner European?

2006-01-13 18:04:17+0900

me, p.d.o | 1 Comment »

Web 1.0 vs Web 2.0

How can you tell the difference? Here is the answer.

2006-01-01 08:52:27+0900

miscellaneous, p.d.o | Comments Off on Web 1.0 vs Web 2.0