Author Archive

Laptop “upgrade” prices

I'm currently evaluating different laptops to decide which one to buy to replace my current one. While doing so, I came to the conclusion that you are often better off buying the cheapest model of a line and upgrading it yourself, at least if you're considering a Mac or a Vaio.

If you customize a MacBook on the French Apple web store, it costs you 140€ to replace the default 2 x 512MiB SO-DIMM DDR2 PC5300 modules with 2 x 1GiB, or 810€ for 2 x 2GiB. Such upgrades on other models apparently fall in the same price ranges.

On the other hand, the most expensive 1GiB module I can find on materiel.net (a French web store selling computer parts, and not necessarily the cheapest around) is a Kingston priced at 29.89€. That's a whopping 60€ to upgrade to 2 x 1GiB (instead of 140€), and you keep the original 2 x 512MiB! And a 2GiB module is "only" 104.95€, which puts the whole 4GiB at roughly 210€ instead of 810€. And again, you keep the original 2 x 512MiB.

I don't really know what Kingston memory modules are worth, but are the Apple ones made of gold? As far as I know, they're not even ECC, which could have justified the difference.

Now, on the same MacBook, it would cost you 140€ to replace the standard 80GB SATA 5400rpm hard drive with a 160GB one, and 280€ for a 250GB disk. On the same materiel.net site, I see a Western Digital 160GB 5200rpm SATA drive at 102.99€, and a 250GB one for 152.49€. So again, it's cheaper to buy the standard model and upgrade it yourself. And you keep the original disk as a bonus!

It works equally well on the American and Japanese Apple stores. And the same applies to Sony (though I couldn't find how to customize a Vaio laptop on the French site, I checked it holds true on the Japanese and American web stores).

On the other hand, Dell and Lenovo seem to have much more reasonable upgrade prices.

What the fsck?

Update: The more I look at it, the less the memory pricing makes sense to me. If it actually is ECC, why don't they advertise it? That would be more important (and would justify the amazing pricing) than the memory being SO-DIMM DDR2 PC5300 (which they do advertise).

Update 2: Googling for dmidecode output makes it clear that Apple and Sony laptops, at least the ones I checked prices for, don't use ECC memory.
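Incidentally, if you have shell access to one of these machines, dmidecode can tell you directly; the ECC status shows up in the Physical Memory Array section:

# Requires root; non-ECC machines report "Error Correction Type: None".
sudo dmidecode --type memory | grep 'Error Correction Type'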

2007-11-16 21:07:56+0900

miscellaneous, p.d.o | 3 Comments »

“Free software” Firefox builds

As MJ Ray reports, ftp.mozilla.org now has "free software" Firefox tarballs. While they are truly free and contain neither the non-free logo nor Talkback, they are not Firefox either: only the tarballs and the executables are named Firefox. The product itself is "Bon Echo", and that name is going to change at every major release. This is why we went for a "static" name instead.

2007-11-08 20:44:05+0900

firefox | 6 Comments »

Gobuntu and Firefox

You may remember that, a while ago, Mark Shuttleworth announced there would be a 100% free version of Ubuntu Gutsy Gibbon:

Ubuntu 7.10 will feature a new flavour - as yet unnamed - which takes an ultra-orthodox view of licensing: no firmware, drivers, imagery, sounds, applications, or other content which do not include full source materials and come with full rights of modification, remixing and redistribution.

Later, we learned it would be named Gobuntu.

Well, they didn't quite keep their promise: Gobuntu includes Firefox, making it a pretty useless, failed attempt.

By the way, I'm still amazed so many people believe it was all about the trademarks. For them, I'll quote something I wrote a year ago:

Trademark and copyright are different things. Mozilla® has unnecessarily given a non-free license to “clarify” the trademark situation, but that is not required. To make it clear: Debian thinks the logos are not free because they are not free. Period.

I'm glad at least Mark Pilgrim got it right.

Update: And as seen on Planet Mozilla, Robert Sayre obviously still hasn't understood the issue.

2007-10-19 07:43:57+0900

miscellaneous, p.d.o | 8 Comments »

Adding some VCS information in bash prompt

I don't spend a lot of time customizing my "working" environment these days, like enhancing my vim configuration or tweaking the shell. But when I read MadCoder's zsh git-enabled prompt, I thought it was too convenient not to have something like it. Except I don't work with git only (sadly, but that's changing), I don't like colours in my prompt, and a 2-line prompt is too much.

Anyway, since I have a bunch of directories in my $HOME containing svk, svn, mercurial, or git working trees, I thought it would be nice to have some information about all of them in my prompt.

After a few iterations, here are the sample results:

mh@namakemono:~/dd/packages$
mh@namakemono:(svn)~/dd/packages/iceape[trunk:39972]debian/patches$
mh@namakemono:(svk)~/dd/packages/libxml2[trunk:1308]include/libxml$
mh@namakemono:(hg)~/moz/cvs-trunk-mirror[default]uriloader/exthandler$
mh@namakemono:(git)~/git/webkit[debian]JavaScriptCore/wtf$

The script follows, with a bit of explanation intertwined.

_bold=$(tput bold)
_normal=$(tput sgr0)

tput is a tool I only discovered recently; it avoids the need to know the escape codes by heart. There are also options for cursor placement, colours, etc. It lives in the ncurses-bin package, if you want to play with it.
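A few examples of what it can do (assuming a colour-capable terminal described by terminfo):

tput setaf 1    # switch the foreground colour to red
tput setaf 4    # ... or blue
tput cup 0 0    # move the cursor to the top-left corner
tput sgr0       # reset all attributes to normal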

__vcs_dir() {
  local vcs base_dir sub_dir ref
  sub_dir() {
    local sub_dir
    sub_dir=$(readlink -f "${PWD}")
    sub_dir=${sub_dir#$1}
    echo ${sub_dir#/}
  }

We declare as much as possible as local (even the helper functions are defined inside __vcs_dir), so that we avoid cluttering the whole environment. sub_dir is used in several places below, which is why we make it a function: it outputs the current directory, relative to the directory given as its argument.
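For instance, using the iceape checkout from the sample prompts above:

# Illustration (hypothetical invocation; sub_dir is only defined inside
# __vcs_dir): from ~/dd/packages/iceape/debian/patches, with the checkout
# top level at ~/dd/packages/iceape:
sub_dir "$HOME/dd/packages/iceape"    # outputs: debian/patches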

  git_dir() {
    base_dir=$(git-rev-parse --show-cdup 2>/dev/null) || return 1
    base_dir=$(readlink -f "$base_dir/..")
    sub_dir=$(git-rev-parse --show-prefix)
    sub_dir=${sub_dir%/}
    ref=$(git-symbolic-ref -q HEAD || git-name-rev --name-only HEAD 2>/dev/null)
    ref=${ref#refs/heads/}
    vcs="git"
  }

This is the first of the functions that detect a working tree, this one for git. Each of these functions sets the 4 variables we declared earlier: vcs, base_dir, sub_dir and ref. They are, respectively, the VCS type, the top-level directory of the working tree, the current directory relative to base_dir, and the branch, revision or reference in the repository, depending on the VCS in use. Each function returns 1 if the current directory is not in a working tree of the VCS it handles.
The base directory of a git working tree can be deduced from the output of git-rev-parse --show-cdup, which gives the way up to the top-level directory, relative to the current directory; readlink -f then yields the canonical top-level directory. The current directory, relative to the top-level one, is simply given by git-rev-parse --show-prefix.
git-name-rev --name-only HEAD gives a nice reference for the current HEAD, especially if you're on a detached head. But it can turn out to do a lot of work, introducing a slight lag the first time you cd into a git working tree, while most of the time the HEAD is just a symbolic ref. This is why we first try git-symbolic-ref -q HEAD.
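To illustrate with the webkit tree from the sample prompts above, this is roughly what those two commands output (note the trailing slash on the prefix, which the function strips):

$ cd ~/git/webkit/JavaScriptCore/wtf
$ git-rev-parse --show-cdup
../../
$ git-rev-parse --show-prefix
JavaScriptCore/wtf/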

  svn_dir() {
    [ -d ".svn" ] || return 1
    base_dir="."
    while [ -d "$base_dir/../.svn" ]; do base_dir="$base_dir/.."; done
    base_dir=$(readlink -f "$base_dir")
    sub_dir=$(sub_dir "${base_dir}")
    ref=$(svn info "$base_dir" | awk '/^URL/ { sub(".*/","",$0); r=$0 } /^Revision/ { sub("[^0-9]*","",$0); print r":"$0 }')
    vcs="svn"
  }

Detecting an svn working tree is easier: it contains a .svn directory, whether at the top level or in a subdirectory. We find the top-level directory by looking for the last directory containing a .svn subdirectory on the way up. This obviously doesn't work if you check out under another svn working tree, but I don't do such things.
For the ref, I wanted something like the name of the directory that was checked out at the top level (usually "trunk" or a branch name), followed by the revision number.
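To make the awk one-liner above more concrete: it picks the URL and Revision lines out of the svn info output, which looks something like this (URL purely illustrative):

$ svn info ~/dd/packages/iceape | grep -E '^(URL|Revision)'
URL: svn://svn.example.org/pkg-mozilla/iceape/trunk
Revision: 39972

The last path component of the URL and the revision number are then glued together, giving the trunk:39972 seen in the sample prompt.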

  svk_dir() {
    [ -f ~/.svk/config ] || return 1
    base_dir=$(awk '/: *$/ { sub(/^ */,"",$0); sub(/: *$/,"",$0); if (match("'${PWD}'", $0"(/|$)")) { print $0; d=1; } } /depotpath/ && d == 1 { sub(".*/","",$0); r=$0 } /revision/ && d == 1 { print r ":" $2; exit 1 }' ~/.svk/config) && return 1
    ref=${base_dir##*
}
    base_dir=${base_dir%%
*}
    sub_dir=$(sub_dir "${base_dir}")
    vcs="svk"
  }

svk doesn't keep repository files in the working tree, so we would have to ask svk itself whether the current directory is in a working tree. Unfortunately, svk is quite slow at that (not that it takes several seconds, but it induces a noticeable delay before the prompt is displayed), so we parse its config file ourselves. We avoid running awk twice by outputting both pieces of information we are looking for, separated by a newline, and then doing some tricks with bash variable expansion.
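For those not fluent in these expansion tricks, here they are in isolation (values borrowed from the libxml2 sample prompt above): ${var%%pattern} removes the longest matching suffix and ${var##pattern} the longest matching prefix, so with a newline in the pattern they split a two-line value:

# awk prints two lines: the depot base directory, then "name:revision".
out=$'/home/mh/dd/packages/libxml2\ntrunk:1308'
echo "${out%%$'\n'*}"    # /home/mh/dd/packages/libxml2 (before the newline)
echo "${out##*$'\n'}"    # trunk:1308 (after the newline)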

  hg_dir() {
    base_dir="."
    while [ ! -d "$base_dir/.hg" ]; do base_dir="$base_dir/.."; [ $(readlink -f "${base_dir}") = "/" ] && return 1; done
    base_dir=$(readlink -f "$base_dir")
    sub_dir=$(sub_dir "${base_dir}")
    ref=$(< "${base_dir}/.hg/branch")
    vcs="hg"
  }

I don't use mercurial much, but I happen to have exactly one working tree (a clone of http://hg.mozilla.org/cvs-trunk-mirror/), so I display some basic information for it. We can't ask mercurial itself for information, as it is too slow for that (the main culprit being the python interpreter startup), so we take what information we can from the .hg directory (and since I don't know much about mercurial, that's really basic). Note that if you're deep in the directory tree, but not in a mercurial working tree, the while loop may be slow. I didn't bother looking for a better solution.

  git_dir ||
  svn_dir ||
  svk_dir ||
  hg_dir ||
  base_dir="$PWD"

Here we just run all these functions one by one, stopping at the first one that matches. Adding more for other VCSes would be easy; see the sketch below.
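For instance, a Bazaar detector could look something like this (a hypothetical, untested sketch modeled on hg_dir: it walks up to a .bzr directory, and, since running bzr itself would be too slow for a prompt, settles for the directory name as the ref):

  bzr_dir() {
    base_dir="."
    # Walk up until a .bzr directory is found, giving up at /.
    while [ ! -d "$base_dir/.bzr" ]; do
      base_dir="$base_dir/.."
      [ "$(readlink -f "${base_dir}")" = "/" ] && return 1
    done
    base_dir=$(readlink -f "$base_dir")
    sub_dir=$(sub_dir "${base_dir}")
    # Calling bzr here would be too slow; use the branch directory
    # name as a cheap stand-in for a real branch nick.
    ref=${base_dir##*/}
    vcs="bzr"
  }

It would then just need its own bzr_dir || line in the chain above.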

  echo "${vcs:+($vcs)}${_bold}${base_dir/$HOME/~}${_normal}${vcs:+[$ref]${_bold}${sub_dir}${_normal}}"
}
PS1='${debian_chroot:+($debian_chroot)}\u@\h:$(__vcs_dir)\$ '

Finally, we set up the prompt, so that it looks nice with all the gathered information.
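If you want to use it, a simple way (assuming you save the whole script as ~/.bash_vcs_prompt, a name I just made up) is to source it from your ~/.bashrc:

# Source the prompt script in interactive shells, if present.
[ -f ~/.bash_vcs_prompt ] && . ~/.bash_vcs_prompt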

Update: made the last lines of the script a little better and factorized.

2007-10-14 13:24:44+0900

miscellaneous, p.d.o | 2 Comments »

Dead battery

A little while ago, my laptop's battery started behaving strangely during charges, and the time the laptop would run on battery dropped significantly. Now, the battery seems to be just dead:

$ cat /proc/acpi/battery/BAT1/state
present: no

The sad thing is that this battery is only 3 years old, while the battery in my other laptop, which is more than 6 years old, is still alive (though it empties in half an hour).

2007-09-23 10:06:08+0900

miscellaneous, p.d.o | 2 Comments »

WebKit and the Acid test

Someone in the "Why no Opera?" thread on the debian-devel list mentioned tkhtml/hv3, and how it passed the Acid test (though without mentioning whether that was the first or the second Acid test).

While it is a known fact that Mozilla doesn't pass the second Acid test yet (you have to use a 1.9 alpha for that), it is also known that Safari has been passing it for more than 2 years, and Konqueror since version 3.5.2. So, just to be sure, I gave it a try with WebKit (the one currently in unstable), and the results are... well, see for yourself.

This is what QtLauncher displays when the window is quite large, which is just perfect.
QtLauncher showing Second Acid Test

Now, this is what you get when the window is not so large, but still large enough for the whole thing to be displayed.
QtLauncher showing Second Acid Test #2

And what you get when you shrink the window more and more.
QtLauncher showing Second Acid Test #3 QtLauncher showing Second Acid Test #4

It goes further down when you shrink even more.

Sadly, the Gtk port is not as good.
GdkLauncher showing Second Acid Test

It also does the "going down when shrinking" thing.

Update: Apparently, the "going down when shrinking" thing is a known "feature" of the Acid test.

Update 2: The reason the Gtk port doesn't fully pass the test is that, while there is a KIOslave for the data: URL scheme, curl doesn't support it.

2007-09-10 20:54:55+0900

webkit | 2 Comments »

Machine-readable copyright files

GyrosGeier has briefly talked about machine-readable copyright files in Debian. I won't say much more about it, except to point at the current proposal, and at my own first implementation, in the webkit package.

Note that, huge as the source is, I didn't feel like listing copyright information file by file, but rather set of files by set of files, grouping all files under the same licensing terms. Even so, the copyright file is still more than 600 lines long. And well, obviously, the ftp-masters didn't reject it.
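To give an idea of what such grouping looks like, here is a rough sketch of one stanza in the proposed format (field names as in the proposal; the file pattern, holders and license are picked for illustration only, not quoted from the actual webkit copyright file):

Files: JavaScriptCore/*
Copyright: 1991-2007, various upstream authors
License: LGPL-2.1
 [full license text, or a pointer to /usr/share/common-licenses]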

2007-09-10 20:26:32+0900

debian | Comments Off on Machine-readable copyright files

Javascript performance in browsers

Ars Technica recently posted an article about the new Opera alpha release, with some Javascript benchmark results showing it is quite a bit faster than version 9.23. The article also compares with Firefox and IE7, but omits some other, not so unimportant, browsers. I think the main reason is that they only tested Windows browsers. Sure, Safari was recently released on Windows, but it is still quite marginal there.

Anyway, I was wondering how all this was going under Linux, and also how (well?) WebKit would perform compared to the others.

So, I tried the same Javascript speed tests on various browsers under Linux on my laptop, which happens to be a Pentium M 1.5GHz.

And the winner is...

(lower is better)
Test                    Iceweasel 2.0.0.6   Epiphany 2.18.3/libxul 1.8.1.6   GdkWebKit   Opera 9.23   Opera 9.50 alpha 1
Try/Catch with errors                  80                               81          41           18                   22
Layer movement                        250                              214          76           53                   47
Random number engine                  280                              190          57           72                   68
Math engine                           343                              274          82          101                   91
DOM speed                             205                              225          18           41                   54
Array functions                        97                               97          72           82                   44
String functions                       14                               12          12           46                   52
Ajax declaration                      178                              127          16           21                   17
Total                                1447                             1220         374          434                  395

So, it seems the speed gain Opera got on Windows doesn't materialize much on Linux.

An interesting result is that Iceweasel, with a bunch of extensions installed, is slower than Epiphany, despite both using the same rendering engine and Javascript library. Running Iceweasel in safe mode makes it as fast as Epiphany, though. So extensions not only clutter the UI, they actually have an impact on how fast the Javascript code in web pages runs.

And well, WebKit is the fastest on this testcase overall, though it stays behind Opera on some specific tests.

2007-09-07 21:44:19+0900

firefox, iceape, webkit, xulrunner | 2 Comments »

WebKit in unstable

Thanks to whichever ftp-master did the review and approval, WebKit is now in unstable. It has not yet been built on all architectures, but several FTBFSes have already shown up:

  • on arm, because #if defined(arm) doesn't seem to do much with our toolchain, and because gcc doesn't pack a simple struct such as struct { short x; } on arm, while it obviously does by default on all other architectures,
  • on hppa, apparently because of a kernel bug,
  • on s390, maybe fixable by using gcc 4.2.

I already fixed the arm issue in our git tree, but I'm waiting for the last buildds to catch up before uploading a new release, in case some other architecture fails to build. I'd be very thankful if some people with alpha, x86_64, ia64, mips, or powerpc machines could do some basic testing with /usr/lib/WebKit/GdkLauncher and /usr/lib/WebKit/QtLauncher and report any problems (BTS preferred).

Again, interested people are invited to subscribe to the pkg-webkit-maintainers mailing list.

2007-08-26 10:22:32+0900

webkit | 4 Comments »

Problems and expectations

What would you expect software such as VMware ESX Server, in its latest version, to do when it technically can't do what you would like it to do?

Well, I, for one, would expect it, at the very least, to tell me... but it doesn't. If you don't have VT enabled on a server with an Intel 64-bit processor, and want to run a 64-bit OS in a VM configured to host a 64-bit guest, it doesn't tell you. All you have to stare at is an error message from the guest OS saying that the processor doesn't support 64-bit instructions. You have to gather from this message alone that all it takes is enabling the VT extensions.
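For what it's worth, on Linux you can at least check whether the processor advertises the virtualization extensions at all (whether the BIOS has them enabled is a separate question, which is precisely the problem here):

# vmx = Intel VT-x, svm = AMD-V. A match means the CPU supports it;
# it can still be disabled in the BIOS, as in the case above.
grep -E 'vmx|svm' /proc/cpuinfo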

Now, if you're not very familiar with these technical details, what would your first test be on such a server? I'd say, most probably, trying to run the 64-bit OS on the bare hardware... which would succeed, indeed, leaving the user thoroughly confused.

Note that in the "processor" part of the configuration panel in the Virtual Infrastructure Client, while there is information about whether Hyperthreading is enabled, there is nothing about VT.

2007-08-22 20:38:35+0900

miscellaneous, p.d.o | Comments Off on Problems and expectations