Archive for the 'miscellaneous' Category

Long live the battery

After playing around with gnome-power-manager so that it can use spicctrl (support that actually belongs in hal) to adjust the LCD brightness according to whether the AC adapter is plugged in or not, I wanted to go a bit further in my quest for battery life. To be honest, I was already adjusting the LCD brightness with a custom acpid event script, but it's still nicer to have it work out of the box.

Anyway, the main factor I remembered for battery life, apart from the LCD backlight and Wi-Fi, is CPU power states.

When on battery, the CPU in my laptop can handle 4 such states, from C1 to C4 (according to the ACPI docs there's actually a C0 state too, which would make 5), but it never goes past C2. I did some basic checking a long time ago by removing modules: after removing FireWire, PCMCIA, and USB support, the CPU would go into the C3 and C4 states. While this definitely saves battery, since it turns off parts of the computer, I wanted to know exactly what was keeping the CPU out of the deeper states, and to keep at least USB, which I use more than the rest.
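By the way, the quickest way I know to watch which C-states the CPU actually reaches is to read /proc/acpi (assuming a 2.6 kernel; the CPU0 directory name may differ on your machine):

# usage counters for each C-state, refreshed every second
watch -n1 cat /proc/acpi/processor/CPU0/power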

After selectively removing modules, it turns out the uhci_hcd (USB 1.1) support is somehow responsible. Which is sad, because my USB mouse is USB 1.1. But it also turns out there's always a peripheral connected to the USB 1.1 hub on my laptop: the internal Bluetooth adapter, which is activated at the same time as Wi-Fi, is wired internally through USB 1.1. If I turn off Wi-Fi, it disconnects the BT USB device, and in that configuration, having uhci_hcd loaded has no influence on the CPU: it still reaches the C3 and C4 levels. Problem is, I don't want to turn off Wi-Fi.

So, with the help of my friend Google, I found out that it's possible to unbind drivers from devices, and successfully unbound the USB port the BT device is connected to, without deactivating all the other (external) ports. The sad thing is that doing so also automagically disables the IRQ of the Wi-Fi device (an ipw2200), but modprobing ipw2200 again enables it without re-enabling the BT device.

The next step was to find a way to have this setup applied automatically at boot time. I tried adding the following to my udev rules:
ACTION=="add", SUBSYSTEM=="pci", ID=="0000:00:1d.2", ENV{UDEV_START}==1, OPTIONS="ignore_device"
which I'd have expected to do what I want, but it doesn't. I also tried:
ACTION=="add", DRIVER=="uhci_hcd", RUN+="/bin/echo -n 0000:00:1d.2 > /sys/bus/pci/drivers/uhci_hcd/unbind"
which doesn't work either (udev runs RUN programs directly, without a shell, so the redirection to /sys is never interpreted). But:
ACTION=="add", DRIVER=="uhci_hcd", RUN+="/some/script/containing/the/previous/echo"
does. Well, it works when you restart udev, but not when you reboot. Unfortunately, the initramfs (built with initramfs-tools) loads a lot of modules, including the USB ones, which means udev isn't the one loading them and never gets to apply the rule. Unless I write my own list of modules to load at boot time, initramfs-tools won't produce a useful initramfs for my case. Yaird, on the other hand, did what I wanted, except that it still lacks support for resuming from suspend-to-disk.

In the end, I put a script in /etc/rcS.d. This script unbinds the uhci_hcd driver from the BT device port and re-modprobes ipw2200.
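For the record, the script boils down to roughly this (a sketch; 0000:00:1d.2 is the right controller on my laptop, adjust for yours, and whether the driver reload is needed at that point in the boot may vary):

#!/bin/sh
# unbind uhci_hcd from the USB controller the internal BT
# device hangs off, leaving the external ports alone
echo -n 0000:00:1d.2 > /sys/bus/pci/drivers/uhci_hcd/unbind
# the unbind also disables the ipw2200 IRQ, so reload the
# driver; this brings Wi-Fi back without the BT device
modprobe -r ipw2200 2>/dev/null
modprobe ipw2200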

I got a bad surprise, though. After a fresh startup, the CPU would be stuck in the C2 state again. Remember how the OpenGL performance would drop when I switched to the console and back to X? It turns out doing that also makes the CPU able to reach the C3 and C4 states. There seems to be something odd with the radeon driver...

2006-07-02 09:53:01+0900

miscellaneous, p.d.o | Comments Off on Long live the battery

I hate soccer

And hooting like crazy until late at night when France somehow beats Brazil is not going to change my mind. Dumbass.

2006-07-02 09:10:33+0900

miscellaneous | 1 Comment »

OpenGL performance and the importance of configuration

Until recently, I wasn't really bothered by the poor OpenGL performance of my laptop. Though it has a Radeon Mobility 9200, the OpenGL "experience" was really bad. But since I wasn't using GL-enabled applications, it wasn't an issue.

Then I tried Google Earth and thought "well, it's slow, but it's huge, I don't know, maybe it's normal". A few days later I heard about neverball and gave it a try. It was even slower than Google Earth, which this time seemed pretty hard to believe.

Thanks to clues from people on #d-d, I figured out I didn't have the libgl1-mesa-dri package installed. It's quite simple to check whether you're in the same situation: first, grep for "Direct rendering" in your X.org log file to be sure DRI is enabled on the server side. Then check for "direct rendering" in glxinfo's output to know whether it's enabled on the client side. If not, try to apt-get install libgl1-mesa-dri.
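Concretely, the checks and the fix boil down to this (the log file name may differ depending on your setup):

# server side: the log should say "Direct rendering enabled"
grep -i "direct rendering" /var/log/Xorg.0.log
# client side: glxinfo should say "direct rendering: Yes"
glxinfo | grep "direct rendering"
# if it says "No", this may be all that's missing
apt-get install libgl1-mesa-dri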

Trying neverball again was delightful: the game was playable.

I thought I was done with my OpenGL settings until yesterday, when, for some reason, I read the X.org log and noticed the AGP mode was 1x. I didn't know exactly what the current standard was, but I was pretty sure 4x had been reached at some point, and that the chip in my laptop might support it.

My friend Google helped me find something about "xorg.conf radeon 9200 mobility agp", and gave me hints about three parameters to add to the xorg.conf file that might improve GL performance:

Option "AGPMode" "8"
Option "AGPFastWrite" "on"
Option "EnablePageFlip" "true"

glxgears is not a benchmark, but it still helps for before/after comparisons on the same machine. Before: ~260 fps. After: ~1350 fps. Wow! Now Google Earth is really fast.

I do have a strange bug, though, which appears whenever an X window (popup, balloon help, etc.) gets on top of the 3D frame: only the rightmost part appears... on the left. It happens with neverball, glxgears, and Google Earth, but NOT with foobillard... Moving or resizing the window usually fixes the display.

Update: When switching to the console and back to X, the glxgears performance goes back to ~260 fps...

Update 2: It seems my "before" score was not as bad as claimed; it was actually ~950 fps. Still a good improvement, though. The reason I got ~260 fps is that I almost never reboot my laptop or restart X, and hibernate instead of shutting down the system, which means I'm in the "switched back from console" case most of the time.

Update 3: Thanks to Robert Hart for pointing me to bug #363995. The patch from there works perfectly \o/. (If you can't wait and want a patched .deb, send me a mail.)

2006-06-25 21:37:36+0900

miscellaneous, p.d.o | 2 Comments »

Targeted Spam

This morning, I got one of those standard cialis/viagra/xanax/valium/whatever spams you get numerous times a day, but with a "test xul" subject. Well, that's targeted.

2006-06-05 10:13:23+0900

miscellaneous, p.d.o | Comments Off on Targeted Spam

Is libpng crap? And PHP?

As I said before, I've been working for a few days on a way to display Japanese characters on a web page when you don't have Japanese fonts locally. That involves taking Japanese characters from a font file and turning them into a PNG image. So I just wrote a small and simple program using libpng and FreeType, which works quite well. But that's not the point.

During this "development", I lost 2 or 3 days trying to understand what was wrong with my code that made it work properly from a shell command but fail halfway through when wrapped in a PHP script. I still don't know what's wrong with PHP: the program works fine with a Perl CGI wrapper, and it still fails even if I raise the PHP memory limit. Interestingly, other libpng tools (such as pngtopnm) fail the same way. It wouldn't surprise me if it were libpng's fault.

The first thing I did to understand why it'd fail was to run the program under valgrind. And what I saw was not encouraging: a lot of "Conditional jump or move depends on uninitialised value(s)" warnings. I tried different combinations of code, using png_write_image or png_write_row, with large or small buffers...
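The runs looked something like this (mkpng is a made-up name for my little program, and the arguments are illustrative; the flag just deepens the stack traces):

# deeper stack traces help locate where the warnings come from
valgrind --num-callers=20 ./mkpng font.ttf out.png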

It turns out libpng can't write a PNG file with a width smaller than 148 pixels without triggering those "Conditional jump or move depends on uninitialised value(s)" warnings. As the stack trace shows, they happen in zlib, called from png_write_row, but I doubt it's a zlib issue. You would think it's a buffer size issue, but even if you use a larger buffer for each line you pass to png_write_row, you still get the errors.

If anyone has a clue, please leave a comment.

Update: Thanks Mark for your comment, though I don't get how such accesses to uninitialized memory can be safe. Deuce, the PHP issue can't be a timeout issue: the program runs in a fraction of a second.

2006-02-19 14:21:32+0900

miscellaneous, p.d.o | 3 Comments »

How-to: switch your partitions to LVM and lose everything

(almost)

I've had problems with disk space for a while, so much so that I was "creating space" with a 5GB LVM logical volume spread over 3 files (through loop devices) on 3 different partitions (/usr, /var, and /home) that each had less than 5GB free. But I needed 5GB free.
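For the curious, that monstrosity looked roughly like this (paths and sizes are illustrative):

# one ~1.7GB backing file on each partition with some space left
dd if=/dev/zero of=/usr/.blob bs=1M count=1700
dd if=/dev/zero of=/var/.blob bs=1M count=1700
dd if=/dev/zero of=/home/.blob bs=1M count=1700
# attach them to loop devices...
losetup /dev/loop0 /usr/.blob
losetup /dev/loop1 /var/.blob
losetup /dev/loop2 /home/.blob
# ...and stitch the three loops into a single 5GB volume
pvcreate /dev/loop0 /dev/loop1 /dev/loop2
vgcreate scratch /dev/loop0 /dev/loop1 /dev/loop2
lvcreate -L 5G -n space scratch
mkfs.ext3 /dev/scratch/space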

So, instead of continuing to rely on such brain-damaged solutions, I decided to switch everything to LVM, which, incidentally, would allow me to use LVM snapshots for pbuilder or similar tools (pbuilder doesn't support LVM snapshots, as far as I know). Since my / partition is quite small (so small that I sometimes run into space problems during kernel upgrades), I decided to keep it as is and just turn my /usr, /var, and /home partitions into a big LVM physical volume.

The challenge was to move all the files without resorting to any external device. Yes, I have other devices on other PCs, but they're all in about the same shape: full as hell. No way I could possibly free 40GB for a backup. I couldn't even burn some data: my DVD burner died a few weeks ago and I still haven't bought a new one. I could've waited, but well, I wanted to do it.

So, the weekend before last, I decided to go ahead and do it the painful way. I used my swap partition to move /usr and /var around, so that I could free the space from these 2 partitions to create the beginnings of the LVM physical volume. First trick: there wasn't enough space in the swap partition, so I just deleted a bunch of files (mostly /usr/share/doc and /usr/share/locale files) so that everything would fit. After moving it all around, I'd just unpack all the installed Debian packages so the files would be there again.

Then I managed to move everything in batches of 4-5GB, which was laborious. It involved creating temporary LVM physical volumes on temporary partitions and then moving the logical volumes' chunks from one physical volume to another on the same disk. Brain damage.
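Each round of the shuffle was essentially this (device and volume group names are illustrative):

# add a freshly freed partition to the volume group...
pvcreate /dev/hda7
vgextend vg0 /dev/hda7
# ...migrate all extents off the physical volume to reclaim...
pvmove /dev/hda6 /dev/hda7
# ...and drop the now empty volume so its partition can be reused
vgreduce vg0 /dev/hda6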

Anyway, the big fright came when I had the stupid idea to delete partitions and create them again. Why? Because neither fdisk (no surprise) nor parted can resize a partition. Well, parted can, but apparently not without playing around with the underlying filesystem.

So, I deleted and recreated the partitions. And guess what? They didn't end up on the same sectors... The second stupid idea was to run pvcreate on the newly created, not-quite-the-same partitions... I thought I had just lost everything. Pictures, personal repositories (especially 6 months worth of work on xulrunner; 6 months of real time, not of work time, but still), everything.

It's actually not the first time I've (almost) lost everything. I once did rm -rf / in a chroot... in which /home was bind-mounted... and wrote a program to scan the filesystem for the precious files (that's the story of ext3rminator, which I still have to release once and for all). That was 3 years ago.

First, calm down. Second, the bits are still on the disk (except maybe the bits overwritten by pvcreate). Third, calm down. Fourth, think.

All I obviously had to do was find the correct location of the partitions and pray that pvcreate hadn't blown everything away. Which was almost the case. I could find 2 of the 3 LVM volumes, while the third one seemed in pretty bad shape. A bit of searching with my friend Google on another computer pointed me to vgcfgrestore. So I ran pvcreate once more over the broken volume and used vgcfgrestore to get the volume group back in order. It seemed okay. I could mount the volume. That was close.
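The recovery, roughly, assuming the metadata archive in /etc/lvm/archive survived (the UUID, file and volume names here are illustrative):

# recreate the physical volume with its old UUID, as recorded
# in the archived metadata...
pvcreate --uuid "<old-pv-uuid>" \
         --restorefile /etc/lvm/archive/vg0_00042.vg /dev/hda8
# ...then restore the volume group layout from the same archive
vgcfgrestore -f /etc/lvm/archive/vg0_00042.vg vg0
vgchange -ay vg0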

Well, to be sure, let's run a fsck... ouch... inconsistencies in some inodes. Okay, correct them. Once, twice... a couple of times. Wow, is it really that fucked up? It can't be that fucked up.

So I took another look and discovered that the third volume was offset by one sector: 512 bytes. Ouch, letting fsck correct things was definitely not a good idea. Correcting the partition table again, pvcreating and vgcfgrestoring again, fsck, and let's rock and roll.
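The lesson I should have applied beforehand: dump the partition table with exact sector positions before touching anything, so it can be replayed identically. sfdisk does that:

# save the partition layout, in sectors
sfdisk -d /dev/hda > hda-partitions.dump
# replay it later: partitions end up on the very same sectors
sfdisk /dev/hda < hda-partitions.dump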

Conclusion(s):
- I only lost 150MB of MP3. I don't care.
- Never ever use parted again, and never ever use fdisk in basic mode again.
- Avoid tricky procedures with stuff you are trying for the first time.
- Backups are for the weakest.

2006-02-14 21:56:19+0900

ext3rminator, miscellaneous | Comments Off on How-to: switch your partitions to LVM and lose everything

Deep Blue, Deep Blitz, what’s next ?

People still seem to want to build big computers to beat chess masters, even though Kasparov was beaten by the upgraded Deep Blue, nicknamed Deeper Blue, back in 1997. But what's the point, actually?
I mean, all it demonstrates is that raw brute force beats humans. And now that humans have been beaten, where's the challenge?

So, let's move on a bit and go for a real challenge: beating humans at Go. You know, that simple game with simple rules at which no computer has yet beaten even average players. I'm not even talking about advanced players or masters, because computer play is so boring and lame that strong players prefer to play humans.

And if they happen to want to use brute force as they did for chess, well, they'd need a computer compared to which Deep(er) Blue would be an abacus.

More seriously, though, why isn't there more AI research on Go, now that chess is more or less mastered? Another thing that puzzles me is how come gnugo is so good compared to other Go programs, while free software chess programs are so lame?

2006-01-27 19:54:28+0900

miscellaneous, p.d.o | 1 Comment »

Web 1.0 vs Web 2.0

How can you tell the difference? Here is the answer.

2006-01-01 08:52:27+0900

miscellaneous, p.d.o | Comments Off on Web 1.0 vs Web 2.0

Quote of the day

If you’re selling anything, there are three kinds of people out there: those who will buy from you, those who might buy from you, and those who will never buy from you. It’s not cost-effective to try to shut down the third group, and there’s a word for unpaid use by the second group: “marketing”.

From On Selling Art, by Tim Bray.

2006-01-01 08:45:44+0900

miscellaneous, p.d.o | Comments Off on Quote of the day

Happy New Year !

Bonne Année !
明けましておめでとう!
Feliz año nuevo !
Feliç any nou !

2006-01-01 07:59:10+0900

miscellaneous, p.d.o | 1 Comment »