Archive for the 'p.d.o' Category

OpenGL performance and the importance of configuration

Until recently, I wasn't really bothered by the poor OpenGL performance of my laptop. Though it has a Radeon Mobility 9200, the OpenGL "experience" was really bad; but since I wasn't using any GL-enabled applications, that wasn't an issue.

Then I tried Google Earth and thought "well, it's slow, but it's huge, I don't know, maybe it's normal". A few days later I heard about neverball and gave it a try. It was even slower than Google Earth, which was pretty hard to believe this time.

Thanks to clues from people on #d-d, I figured out I didn't have the libgl1-mesa-dri package installed. It's quite simple to check whether you are in the same situation: first, grep for "Direct rendering" in your X.org log file to make sure DRI is enabled on the server side. Then check for "direct rendering" in glxinfo's output to see whether it's enabled on the client side. If not, try to apt-get install libgl1-mesa-dri.
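
In shell form, the whole check looks something like this (the log file name may differ depending on your setup):

grep "Direct rendering" /var/log/Xorg.0.log    # server side
glxinfo | grep "direct rendering"              # client side: should say "Yes"
apt-get install libgl1-mesa-dri                # if it says "No"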

Trying neverball again was delightful: the game was playable.

I thought I was done with my OpenGL settings until yesterday, when, for some reason, I read the X.org log and noticed the AGP mode was 1x. I don't know exactly what AGP speeds are current nowadays, but I was pretty sure 4x had been reached at some point, and that the chip in my laptop might support it.

My friend google helped me find something about "xorg.conf radeon 9200 mobility agp", and gave me hints about three parameters to add to the xorg.conf file that might improve GL performance:

Option "AGPMode" "8"
Option "AGPFastWrite" "on"
Option "EnablePageFlip" "true"

Glxgears is not a benchmark, but it still helps for before/after comparisons on the same machine. Before: ~260fps. After: ~1350fps. Wow! Now Google Earth is really fast.

I do have a strange bug, though, which appears whenever an X window (popup, balloon help, etc.) gets on top of the 3D frame: only the rightmost part appears... on the left. It happens with neverball, glxgears, and Google Earth, but NOT with foobillard... Moving or resizing the window usually fixes the display.

Update: When switching to the console and back to X, the glxgears performance drops back to ~260fps...

Update 2: It seems my "before" score was not as bad as claimed. It's actually ~950fps. Still a good improvement, though. The reason I got ~260fps is that I almost never reboot my laptop or restart X, and hibernate instead of shutting down the system. Which means I'm in the "switched back from console" case most of the time.

Update 3: Thanks to Robert Hart for pointing me to bug #363995. The patch from there works perfectly \o/. (If you can't wait and want a patched .deb, send me a mail.)

2006-06-25 21:37:36+0900

miscellaneous, p.d.o | 2 Comments »

Anti comment spam measures

In the past months, I was getting more and more comment spam, even though the configuration was set to moderate comments containing a link. That was a good filtering measure at the beginning, but it became less and less manageable. By the time I decided to act, I was getting more than a hundred comment spams in my "Awaiting moderation" list. Per day. Maybe even more, I can't remember. And about the same amount actually got past the moderation filtering by putting links not in the comment itself, but in the homepage field of the comment form.

While WordPress's comment moderation interface is pretty efficient at deleting a lot of spam, the comment management interface just sucks, even in "massive editing" mode. So, after spending quite some time in this sucky interface, I decided I didn't want to resort to it any more.

First I had to remove all these spam comments. I had to run the SQL delete command myself, since WordPress is useless for that. I basically deleted all comments posted after the last real one I saw. Sorry if someone posted one I didn't see. The tricky part is that the comment count for articles is kept in a field of the wp_posts table, which means there was a difference between the actual comment count and the count displayed. For those who'd want to do the same at home, here is the magical SQL query to refresh the comment count:
update wp_posts set comment_count = (select count(*) from wp_comments where comment_post_ID = id and comment_approved = '1');
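
For the record, the deletion step itself was along these lines; a sketch, where the hypothetical cutoff date stands for the date of the last legitimate comment:

delete from wp_comments where comment_date > '2006-06-01 00:00:00';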

The next step was to avoid getting more spam. I didn't want to use captchas or any other Turing tests, because they basically all fail to be accessible in some way. So I took a balanced decision. While I appreciate getting comments, I can no longer stand the spam that gets into posts as old as the blog itself. The best thing to do, I think, was to allow comments on recent posts only. Sorry for those who'd like to comment on old stuff, but being able to comment on the newer posts is still better than nothing. I also kept the link moderation policy, which seemed to be helpful at the beginning.

As a side note, WordPress was again not very helpful, so I had to resort to an SQL query to close comments on all the posts.
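
That query is nothing fancy; a sketch, relying on the comment_status field of the wp_posts table:

update wp_posts set comment_status = 'closed';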

It's been 3 weeks or so now since I switched to this new policy. I've had only 1 spam to moderate and none directly in the comments. Let's hope it lasts.

2006-06-25 20:27:18+0900

p.d.o, website | 6 Comments »

Targeted Spam

This morning, I got one of those standard cialis/viagra/xanax/valium/whatever spams you get numerous times a day, but with a "test xul" subject. Well, that's targeted.

2006-06-05 10:13:23+0900

miscellaneous, p.d.o | Comments Off on Targeted Spam

The Windows way

Imagine an application that writes to its application directory, and, if it fails to write there, writes to a user profile directory.

Imagine a widespread operating system whose default settings allow any user to write almost anywhere on the file system.

Enjoy the result (Comment #7 gives a good overview of the problem).

2006-03-22 16:54:01+0900

firefox | Comments Off on The Windows way

What kind of language is that?

Consider the following code:


for (var i = 0; i < 20; i++) {
    var j = i;
}
alert(i + ' ' + j);

That's JavaScript. It gives "20 19". Would you expect that from a decent language?
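
The reason is that var is function-scoped, not block-scoped, and declarations are hoisted, so the snippet effectively behaves as if it were written like this:

var i, j;               // both declarations hoisted out of the block
for (i = 0; i < 20; i++) {
    j = i;
}
alert(i + ' ' + j);     // "20 19": both variables survive the loop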

Update: Same result with a var j = 0; before the for loop. No surprise, actually.

Update 2: It seems what I'm complaining about has not been well received ;). I'm not complaining about the values, I'm complaining about the fact that there are values...

2006-03-11 19:20:42+0900

p.d.o, website | 7 Comments »

More work on xulrunner

Xulrunner finally reached unstable on Sunday, and it already needed some adjustments. I forgot a conflict, misnamed the libsmjs-dev package, the hppa assembler code somehow got duplicated, making xulrunner fail to build on hppa, and some preference files needed to be moved between the packages.

It also fails to build on alpha, but that's not my fault.

I also changed the way we identify a Debian-built xulrunner in the user agent. I replaced the "Gecko/yyyymmdd" string with "Gecko/Debian/x.y.z.t-r". That has several advantages over the previous "Gecko/yyyymmdd Debian/x.y.z.t-r" string:

  • It shortens the already long user agent string, so that additions such as the product name are not painful,
  • removes pointless information (the date in the original string indicates the date of the build, not that of the API),
  • keeps the "Gecko" string (which some sites might want, seeing how Apple and Konqueror put a "like Gecko" string in theirs),
  • and finally avoids confusion with other Debian release information that may be present in the product-specific part (I think Galeon puts one, for instance).

That needed 2 updates, because I realized in between that while you can set the general.useragent.product and general.useragent.productSub preferences, they are not actually taken into account until you change them at least once... The HTTP protocol initialization in xulrunner forces the respective values "Gecko" and "yyyymmdd"... I just completely removed that part of the code in xulrunner.
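
In preference terms, the idea boils down to something like this (a sketch; I'm assuming productSub simply replaces the yyyymmdd part after "Gecko/", and x.y.z.t-r stands for whatever version gets filled in at build time):

pref("general.useragent.product", "Gecko");
pref("general.useragent.productSub", "Debian/x.y.z.t-r");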

I took advantage of this needed fix to work even more on the package: enabling the typeaheadfind module, needed by galeon and epiphany, removing some old perl code that does nothing except produce useless errors during the build, and, last but not least, enabling the "flat style" chrome, meaning that instead of being stuck in .jar files, everything is just there in a standard tree.

It appears upstream provides a way to build such trees (the --enable-chrome-format=flat option to configure), but it's useless as-is, because it doesn't install the trees when running make install. In addition to that, the configure script still strictly requires zip even when it doesn't use it, except for some obscure .jar files in libnss, which are not installed either. So after disabling the build of these useless files, removing the strict zip requirement when building flat chrome, and fixing the script responsible for installing the chrome files, everything got much better.
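
For reference, the upstream switch itself is just a configure flag; the install-time fixes described above are what make it actually usable:

./configure --enable-chrome-format=flat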

These changes have been applied to firefox as well, so stay tuned, it's coming in the next upload.

Having hijacked spidermonkey, I did some bug triage there as well. It turned out they could all be closed, now that libmozjs0d is there. I still need to do some triage on my patches to xulrunner and send upstream those I haven't sent yet, if they are of any interest there.

Apart from this xulrunner stuff, I also did some bug clean-up on libxml2 and libxslt, which I had been somewhat neglecting for some time.

2006-02-22 21:06:21+0900

p.d.o, xulrunner | 6 Comments »

Is libpng crap? And php?

As I said before, I've been working for a few days on a way to display Japanese characters on a web page when you don't have Japanese fonts locally. That involves taking Japanese characters from a font file and turning them into a PNG image. So I just wrote a small and simple program using libpng and freetype, which works quite well. But that's not the point.

During this "development", I lost 2 or 3 days trying to understand what was wrong with my code that made it work properly from a shell command and fail halfway when wrapped in a php script. I still don't know what's wrong with php, but it works well with a perl cgi wrapper, and it still fails if I raise the php memory limit. Interestingly, other libpng tools (such as pngtopnm) fail in the same way. It wouldn't surprise me if it was libpng's fault.

The first thing I did to understand why it failed was to run the program under valgrind. And what I saw was not encouraging: a lot of "Conditional jump or move depends on uninitialised value(s)". I tried different combinations of code, using png_write_image or png_write_row, with large or small buffers...

It turns out libpng can't write a png file with a width smaller than 148 pixels without those "Conditional jump or move depends on uninitialised value(s)" errors, which happen, as the stack trace shows, in zlib, called from png_write_row, but I doubt it's a zlib issue. You would think it was a buffer size issue, but even if you use a larger buffer for each line you call png_write_row on, you still get the errors.
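
For anyone wanting to reproduce this, a minimal writer looks something like the following; this is a sketch of the kind of code involved, not my actual program, with error handling reduced to the bare minimum:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <setjmp.h>
#include <png.h>

/* Write a width x height 8-bit grayscale PNG, one png_write_row() call
   per line. With width < 148, valgrind complains about uninitialised
   values down in zlib, even though the row is fully initialised. */
int write_gray_png(const char *filename, int width, int height)
{
    FILE *fp = fopen(filename, "wb");
    if (!fp)
        return 1;

    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING,
                                              NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    if (setjmp(png_jmpbuf(png))) {      /* libpng reports errors via longjmp */
        png_destroy_write_struct(&png, &info);
        fclose(fp);
        return 1;
    }

    png_init_io(png, fp);
    png_set_IHDR(png, info, width, height, 8, PNG_COLOR_TYPE_GRAY,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                 PNG_FILTER_TYPE_DEFAULT);
    png_write_info(png, info);

    png_bytep row = malloc(width);
    int y;
    for (y = 0; y < height; y++) {
        memset(row, y & 0xff, width);   /* every byte initialised */
        png_write_row(png, row);
    }
    free(row);

    png_write_end(png, info);
    png_destroy_write_struct(&png, &info);
    fclose(fp);
    return 0;
}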

If anyone has a clue, please leave a comment.

Update: Thanks, Mark, for your comment, though I don't get how such accesses to uninitialized memory can be safe. Deuce, the php issue can't be a timeout issue; the program runs in a fraction of a second.

2006-02-19 14:21:32+0900

miscellaneous, p.d.o | 3 Comments »

How-to: switch your partitions to LVM and lose everything

(almost)

I've had problems with disk space for a while, so much so that I was "creating space" by building a 5GB LVM logical volume over 3 files (through loop devices) on 3 different partitions (/usr, /var, and /home) that each had less than 5GB free. But I needed 5GB free.

So, instead of continuing to rely on such brain-damaged solutions, I decided to just switch everything to LVM, which, incidentally, would allow me to use LVM snapshots for pbuilder or similar tools (pbuilder doesn't support LVM snapshots, as far as I know). Since my / partition is quite small (so small that I sometimes have space problems at kernel upgrades), I decided to keep it and just transform my /usr, /var, and /home partitions into one big LVM physical volume.

The challenge was to move all the files without resorting to external devices. Yes, I have other devices on other PCs, but they are all in about the same shape: full as hell. There was no way I could possibly free 40GB for a backup. I couldn't even burn some data: my DVD burner died a few weeks ago and I still haven't bought a new one. I could have waited, but well, I wanted to do it.

So, the weekend before last, I decided to go ahead and do it the painful way. I used my swap partition to move /usr and /var around, so that I could free the space from these 2 partitions to create the beginnings of the LVM physical volume. First trick: there was not enough space in the swap partition, so I just deleted a bunch of files (mostly /usr/share/doc and /usr/share/locale files) so that everything would fit; after moving it all around, I'd simply unpack all installed Debian packages so that the files would be there again.

Then I managed to move everything in batches of 4~5GB, which was laborious. It involved creating temporary LVM physical volumes on temporary partitions and then moving the logical volumes' extents from one physical volume to another on the same disk. Brain damage.
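
Each round looked roughly like this; a sketch with hypothetical device and volume group names, pvmove being the command that shuffles extents between physical volumes:

pvcreate /dev/hda7          # turn a freshly freed partition into a PV
vgextend vg /dev/hda7       # add it to the volume group
pvmove /dev/hda6            # move all extents off the temporary PV
vgreduce vg /dev/hda6       # drop it, so its partition can be reused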

Anyway, the big fright came when I had the stupid idea of deleting the partitions and creating them again. Why? Because neither fdisk (no surprise) nor parted can resize a partition. Well, parted can, but apparently not without playing around with the underlying filesystem.

So, I deleted and recreated the partitions. And guess what? They didn't end up on the same sectors... The second stupid idea was to run pvcreate on the newly created, not-quite-the-same partitions... I thought I had just lost everything. Pictures, personal repositories (especially 6 months' worth of work on xulrunner (well, 6 months of real time, not of work time, but still)), everything.

It's actually not the first time I've (almost) lost everything. I once did rm -rf / in a chroot... in which /home was bind-mounted... and wrote a program to scan the filesystem for the precious files (that's the story of ext3rminator, which I still have to release once and for all). That was 3 years ago.

First, calm down. Second, the bits are still on the disk (except maybe the bits overwritten by pvcreate). Third, calm down. Fourth, think.

All I obviously had to do was find the correct location of the partitions and pray that pvcreate hadn't blown everything away. Which was almost the case. I could find 2 of the 3 LVM volumes, while the third one seemed to be in pretty bad shape. A bit of searching with my friend google on another computer pointed me to vgcfgrestore. So I pvcreated once more over the broken volume, and used vgcfgrestore to get the volume group back in order. It seemed okay. I could mount the volume. That was close.
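
The recovery boils down to two commands; a sketch with hypothetical names, relying on the metadata backups LVM keeps under /etc/lvm/backup:

# recreate the PV with the UUID the metadata expects
pvcreate --uuid <old-pv-uuid> --restorefile /etc/lvm/backup/vg /dev/hda7
# restore the volume group metadata from the automatic backup
vgcfgrestore -f /etc/lvm/backup/vg vg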

Well, to be sure, let's run a fsck... ouch... inconsistencies in some inodes. Okay, correct them. Once, twice... a couple of times. Wow, is it that fucked up? It can't be that fucked up.

So I took another look and discovered that the third volume was offset by a sector. 512 bytes. Ouch, letting fsck correct things was definitely not a good idea. Correcting the partition table again, pvcreating and vgcfgrestoring once more, fsck, and let's rock and roll.

Conclusion(s):
- I only lost 150MB of MP3. I don't care.
- Never ever use parted again, and never ever use fdisk in basic mode again.
- Avoid tricky procedures with stuff you are trying for the first time.
- Backups are for the weakest.

2006-02-14 21:56:19+0900

ext3rminator, miscellaneous | Comments Off on How-to: switch your partitions to LVM and lose everything

Font packages

I'm currently working on a way to display Japanese characters on web pages on a computer where Japanese fonts are not installed. It involves a server-side image generator for glyphs, for which I needed the .ttf files of appropriate fonts, like kochi-mincho.

So, I just apt-get installed ttf-kochi-mincho, and it told me it wanted to install a bunch of stuff I don't care about. I ended up getting the .ttf files directly from the SourceForge site... I don't want to clutter my server with X packages.

The question is: why do font packages need to depend on xutils (not all, but quite a few) and defoma (all of them, apparently)? Couldn't they just recommend defoma and run the font registration thingy only if it is present, as in the sketch below? If defoma is not present, nothing would be lost, since defoma's installation would trigger font registration. As for xutils, I don't understand or see why some packages need it at all...
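
Something like this hypothetical postinst guard is what I have in mind (defoma-font and the hints file path are assumptions about the mechanism, not something I tested):

#!/bin/sh
# hypothetical: register fonts only when defoma happens to be installed
if [ -x /usr/bin/defoma-font ]; then
    defoma-font register-all /etc/defoma/hints/ttf-kochi-mincho.hints
fi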

2006-02-14 20:50:53+0900

debian | 2 Comments »

Gripes with Firefox

Martin said:

Firefox and OpenOffice.org seem to be two flagship products of the F/OSS world. At the same time, they are probably the two best examples of programmes orthogonal to the Unix philosophy. Does that mean that the Unix philosophy is the wrong way? Holy Shit!

No, Martin, that doesn't mean anything about the Unix philosophy. It only means Windows developers don't have a clue about Unix.

2006-02-14 19:14:05+0900

firefox | 2 Comments »