Archive for the 'p.d.o' Category

Adding some VCS information in bash prompt

I don't spend a lot of time customizing my "working" environment nowadays, like enhancing vim configuration, or tweaking the shell. But when I read MadCoder's zsh git-enabled prompt, I thought it was too convenient not to have something like that. Except I don't work with git only (sadly, but that's changing), and I don't like colours in my prompt (and a 2-line prompt is too much).

Anyways, since I have a bunch of directories in my $HOME that contain either svk, svn, mercurial, or git working trees, I thought it would be nice to have some information about all this on my prompt.

After a few iterations, here are the sample results:

mh@namakemono:~/dd/packages$
mh@namakemono:(svn)~/dd/packages/iceape[trunk:39972]debian/patches$
mh@namakemono:(svk)~/dd/packages/libxml2[trunk:1308]include/libxml$
mh@namakemono:(hg)~/moz/cvs-trunk-mirror[default]uriloader/exthandler$
mh@namakemono:(git)~/git/webkit[debian]JavaScriptCore/wtf$

The script follows, with a bit of explanation intertwined.

_bold=$(tput bold)
_normal=$(tput sgr0)

tput is a tool I only discovered recently; it avoids the need to know the escape codes. There are also options for cursor placement, colours, etc. It lives in the ncurses-bin package, if you want to play with it.
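For instance, a quick sketch of what those two tput calls produce (this assumes a terminfo entry exists for your $TERM):

```shell
# Fall back to xterm if TERM is unset, so tput can find a terminfo entry.
TERM=${TERM:-xterm}; export TERM
bold=$(tput bold)     # escape sequence that enters bold mode
normal=$(tput sgr0)   # escape sequence that resets all attributes
printf '%s\n' "plain ${bold}bold${normal} plain"
```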

__vcs_dir() {
  local vcs base_dir sub_dir ref
  sub_dir() {
    local sub_dir
    sub_dir=$(readlink -f "${PWD}")
    sub_dir=${sub_dir#$1}
    echo ${sub_dir#/}
  }

We declare as much as possible as local (even functions), so that we avoid cluttering the whole environment. sub_dir is going to be used in several places below, which is why we declare it as a function. It outputs the current directory, relative to the directory given as argument.
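The prefix-stripping expansions doing the work here can be seen in isolation; the paths below are made up for illustration:

```shell
base=/home/mh/git/webkit
cur=/home/mh/git/webkit/JavaScriptCore/wtf
sub=${cur#$base}   # strip the base prefix: /JavaScriptCore/wtf
sub=${sub#/}       # strip the leading slash: JavaScriptCore/wtf
echo "$sub"
```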

  git_dir() {
    base_dir=$(git-rev-parse --show-cdup 2>/dev/null) || return 1
    base_dir=$(readlink -f "$base_dir/..")
    sub_dir=$(git-rev-parse --show-prefix)
    sub_dir=${sub_dir%/}
    ref=$(git-symbolic-ref -q HEAD || git-name-rev --name-only HEAD 2>/dev/null)
    ref=${ref#refs/heads/}
    vcs="git"
  }

This is the first function to detect a working tree, for git this time. Each of these functions sets the four variables we declared earlier: vcs, base_dir, sub_dir and ref. They are, respectively, the VCS type, the top-level directory of the working tree, the current directory relative to base_dir, and the branch, revision or reference in the repository, depending on the VCS in use. These functions return 1 if the current directory is not in a working tree of the VCS being considered.
The base directory of a git working tree can be deduced from the result of git-rev-parse --show-cdup, which gives the way up to the top-level directory, relative to the current directory. readlink -f then gives the canonical top-level directory. The current directory, relative to the top-level, is simply given by git-rev-parse --show-prefix.
git-name-rev --name-only HEAD gives a nice reference for the current HEAD, especially if you're on a detached head. But it can turn out to do a lot of work, introducing a slight lag the first time you cd into the git working tree, while most of the time the HEAD is just a symbolic ref. This is why we first try git-symbolic-ref -q HEAD.
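To see the two rev-parse options in action, here is a throwaway sketch (it assumes git is installed; the hyphenated git-rev-parse spelling works the same as git rev-parse):

```shell
# Create a scratch repository and cd two levels deep into it.
tmp=$(mktemp -d)
git init -q "$tmp/repo"
mkdir -p "$tmp/repo/a/b"
cd "$tmp/repo/a/b"
git rev-parse --show-cdup    # ../../  (the way up to the top level)
git rev-parse --show-prefix  # a/b/    (current dir relative to the top level)
```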

  svn_dir() {
    [ -d ".svn" ] || return 1
    base_dir="."
    while [ -d "$base_dir/../.svn" ]; do base_dir="$base_dir/.."; done
    base_dir=$(readlink -f "$base_dir")
    sub_dir=$(sub_dir "${base_dir}")
    ref=$(svn info "$base_dir" | awk '/^URL/ { sub(".*/","",$0); r=$0 } /^Revision/ { sub("[^0-9]*","",$0); print r":"$0 }')
    vcs="svn"
  }

Detecting an svn working tree is easier: it contains a .svn directory, whether at the top level or in a subdirectory. We find the top-level directory by looking for the last directory containing a .svn subdirectory on the way up. This obviously doesn't work if you check out under another svn working tree, but I don't do such things.
For the ref, I wanted something like the name of the directory that has been checked out at the top-level directory (usually "trunk" or a branch name), followed by the revision number.
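The awk extraction can be seen in isolation by feeding it canned `svn info` output (the URL and revision below are made up):

```shell
# Simulate two lines of `svn info` output and extract "lastdir:revision".
printf 'URL: svn://svn.example.org/repo/trunk\nRevision: 39972\n' |
awk '/^URL/ { sub(".*/","",$0); r=$0 } /^Revision/ { sub("[^0-9]*","",$0); print r":"$0 }'
# -> trunk:39972
```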

  svk_dir() {
    [ -f ~/.svk/config ] || return 1
    base_dir=$(awk '/: *$/ { sub(/^ */,"",$0); sub(/: *$/,"",$0); if (match("'${PWD}'", $0"(/|$)")) { print $0; d=1; } } /depotpath/ && d == 1 { sub(".*/","",$0); r=$0 } /revision/ && d == 1 { print r ":" $2; exit 1 }' ~/.svk/config) && return 1
    ref=${base_dir##*
}
    base_dir=${base_dir%%
*}
    sub_dir=$(sub_dir "${base_dir}")
    vcs="svk"
  }

svk doesn't keep repository files in the working tree, so we would have to ask svk itself whether the current directory is a working tree. Unfortunately, svk is quite slow at that (not that it takes several seconds, but it induces a noticeable delay before the prompt is displayed), so we parse its config file ourselves. We avoid running awk twice by outputting both pieces of information we are looking for, separated by a newline, and then do some tricks with bash variable expansion.
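That expansion trick, as a standalone sketch with a made-up two-line value:

```shell
# Two values packed into one variable, separated by a literal newline.
out="/home/mh/dd/packages/libxml2
trunk:1308"
ref=${out##*
}
base_dir=${out%%
*}
echo "$ref"       # everything after the last newline
echo "$base_dir"  # everything before the first newline
```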

  hg_dir() {
    base_dir="."
    while [ ! -d "$base_dir/.hg" ]; do base_dir="$base_dir/.."; [ $(readlink -f "${base_dir}") = "/" ] && return 1; done
    base_dir=$(readlink -f "$base_dir")
    sub_dir=$(sub_dir "${base_dir}")
    ref=$(< "${base_dir}/.hg/branch")
    vcs="hg"
  }

I don't use mercurial much, but I happen to have exactly one working tree (a clone of http://hg.mozilla.org/cvs-trunk-mirror/), so I wanted some basic information for it. There is no way we can ask mercurial itself for information: it is too slow for that (the main culprit being the python interpreter startup), so we take what information we can (and since I don't know much about mercurial, that's really basic). Note that if you're deep in the VFS tree, but not in a mercurial working tree, the while loop may be slow. I didn't bother looking for a better solution.
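As an aside, one possible variant of that loop (my sketch, not what the script above uses) canonicalizes the path once and then trims it with string expansion, avoiding a readlink call per iteration:

```shell
# Walk up from $PWD looking for a .hg directory; print the root if found.
find_hg_root() {
  local d
  d=$(readlink -f "$PWD")
  while [ "$d" != "/" ] && [ ! -d "$d/.hg" ]; do
    d=${d%/*}              # drop the last path component
    [ -z "$d" ] && d=/     # ${d%/*} of "/foo" yields "", meaning the root
  done
  [ -d "$d/.hg" ] && echo "$d"
}
```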

  git_dir ||
  svn_dir ||
  svk_dir ||
  hg_dir ||
  base_dir="$PWD"

Here we just run all these functions one by one, stopping at the first that matches. Adding some more for other VCS would be easy.

  echo "${vcs:+($vcs)}${_bold}${base_dir/$HOME/~}${_normal}${vcs:+[$ref]${_bold}${sub_dir}${_normal}}"
}
PS1='${debian_chroot:+($debian_chroot)}\u@\h:$(__vcs_dir)\$ '

Finally, we set up the prompt, so that it looks nice with all the gathered information.

Update: made the last lines of the script a little better and factorized.

2007-10-14 13:24:44+0900

miscellaneous, p.d.o | 2 Comments »

Dead battery

A little while ago, my battery started behaving strangely during charges, and the time the laptop would run on battery dropped significantly. Now, it seems to be simply dead:

$ cat /proc/acpi/battery/BAT1/state
present: no

The sad thing is that this battery is (only) 3 years old, while the battery in my other laptop, more than 6 years old, is still alive (though it empties in half an hour).

2007-09-23 10:06:08+0900

miscellaneous, p.d.o | 2 Comments »

WebKit and the Acid test

Someone in the "Why no Opera?" thread on the debian-devel list mentioned tkhtml/hv3, and how it passed the Acid Test (though he didn't mention whether it was the first or the second Acid Test).

While it is a known fact that Mozilla doesn't pass the Second Acid Test yet (you have to use 1.9 alpha for that), it is also known that Safari has been passing it for more than 2 years, and Konqueror since version 3.5.2. So just to be sure, I gave it a try with WebKit (the one currently in unstable), and the results are... well, see for yourself.

This is what QtLauncher displays when the window is quite large, which is just perfect.
QtLauncher showing Second Acid Test

Now, this is what you get when the window is not so large, but still large enough for the whole thing to be displayed.
QtLauncher showing Second Acid Test #2

And what you get when you shrink the window more and more.
QtLauncher showing Second Acid Test #3 QtLauncher showing Second Acid Test #4

It goes further down when you shrink even more.

Sadly, the Gtk port is not as good.
GdkLauncher showing Second Acid Test

It also does the "going down when shrinking" thing.

Update: Apparently, the "going down when shrinking" thing is a known "feature" of the Acid Test.

Update: The reason the Gtk port is not fully passing the test is that while there is a KIOslave for the data URL scheme, curl doesn't support it.

2007-09-10 20:54:55+0900

webkit | 2 Comments »

Machine-readable copyright files

GyrosGeier has briefly talked about machine-readable copyright files in Debian. I won't say much more about it, except to point out the current proposal, and my own first implementation in the webkit package.

Note that, huge as the source is, I didn't feel like listing copyright information file by file, but rather set of files by set of files, grouping all files under the same licensing terms. Even doing so, the copyright file is still more than 600 lines long. And well, obviously, the ftp masters didn't reject it.

2007-09-10 20:26:32+0900

debian | Comments Off on Machine-readable copyright files

Javascript performance in browsers

Ars Technica recently posted an article about the new Opera alpha release, with some Javascript benchmark results showing it is quite a bit faster than version 9.23. It also compares with Firefox and IE7, but omits some other not-so-unimportant browsers. I think the main reason is that they only tested Windows browsers. Sure, Safari was recently released on Windows, but it is still quite marginal.

Anyways, I was wondering how all this was going under Linux, and also, how (good?) WebKit would perform compared to others.

So, I tried the same Javascript speed tests on various browsers under Linux on my laptop, which happens to be a Pentium M 1.5GHz.

And the winner is...

Test                  | Iceweasel 2.0.0.6 | Epiphany 2.18.3/libxul 1.8.1.6 | GdkWebKit | Opera 9.23 | Opera 9.50 alpha 1
----------------------|-------------------|--------------------------------|-----------|------------|-------------------
Try/Catch with errors |                80 |                             81 |        41 |         18 |                 22
Layer movement        |               250 |                            214 |        76 |         53 |                 47
Random number engine  |               280 |                            190 |        57 |         72 |                 68
Math engine           |               343 |                            274 |        82 |        101 |                 91
DOM speed             |               205 |                            225 |        18 |         41 |                 54
Array functions       |                97 |                             97 |        72 |         82 |                 44
String functions      |                14 |                             12 |        12 |         46 |                 52
Ajax declaration      |               178 |                            127 |        16 |         21 |                 17
Total                 |              1447 |                           1220 |       374 |        434 |                395

So, it seems the speed gain Opera got on Windows doesn't happen much on Linux.

An interesting result is that Iceweasel, with a bunch of extensions installed, is slower than Epiphany, despite both using the same rendering engine and Javascript library. Running Iceweasel in safe mode makes it as fast as Epiphany, though. So extensions not only clutter the UI, but actually have an impact on how fast the Javascript code in web pages runs.

And well, WebKit is the fastest for this testcase, though it stays behind Opera on some specific tests.

2007-09-07 21:44:19+0900

firefox, iceape, webkit, xulrunner | 2 Comments »

WebKit in unstable

Thanks to whichever ftp-master did the review and approval, WebKit is now in unstable. It has not yet been built on all architectures, but several FTBFSes have already shown up:

  • on arm, because #if defined(arm) doesn't seem to do much with our toolchain, and because gcc doesn't pack a simple struct such as struct { short x; } on arm, while it obviously does by default on all other architectures,
  • on hppa, apparently because of a kernel bug,
  • on s390, maybe fixable by using gcc 4.2.

I already fixed the arm issue in our git tree, but am waiting for the last buildds to catch up before uploading a new release, in case some other architecture fails to build. I'd be very thankful if people with alpha, x86_64, ia64, mips, or powerpc machines could do some basic testing with /usr/lib/WebKit/GdkLauncher and /usr/lib/WebKit/QtLauncher and report any problems (BTS preferred).

Again, interested people are invited to subscribe to the pkg-webkit-maintainers mailing list.

2007-08-26 10:22:32+0900

webkit | 4 Comments »

Problems and expectations

What would you expect a software product such as VMware ESX Server, in its latest version, to do when it technically can't do what you would like it to do?

Well, I, for one, would expect it at least to tell me... but it doesn't. If you don't have VT enabled on a server with a 64-bit Intel processor, and want to run a 64-bit OS in a VM configured to host a 64-bit guest, it doesn't tell you. All you have to stare at is an error message from the guest OS saying that the processor doesn't support 64-bit instructions. You have to gather from this message that the VT extensions merely need to be enabled.

Now, if you're not very familiar with these technical details, what would be your first test on such a server? I'd say, most probably, trying to run the 64-bit OS on the bare hardware... which would succeed, indeed, leaving the user quite confused.

Note that in the "processor" part of the configuration panel in the Virtual Infrastructure Client, while there is information about whether Hyperthreading is enabled, there is nothing about VT.
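As an aside (my addition, not something the post covers): on Linux, a quick way to see whether the CPU exposes the hardware virtualization extensions is to check the flags in /proc/cpuinfo; on Intel CPUs, an absent vmx flag can mean either no support at all or VT disabled in the BIOS.

```shell
# vmx is the Intel VT flag, svm its AMD counterpart.
if grep -qE '\b(vmx|svm)\b' /proc/cpuinfo; then
  echo "hardware virtualization flags present"
else
  echo "no vmx/svm flag (unsupported, or disabled in the BIOS)"
fi
```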

2007-08-22 20:38:35+0900

miscellaneous, p.d.o | Comments Off on Problems and expectations

New WebKit snapshot (almost) in unstable

Along with a new epiphany release using it as a backend, I prepared a new WebKit snapshot, which is now waiting in NEW for some ftp-master attention. Unfortunately, while webkit now has the necessary symbols for the back and forward buttons, they seem not to work properly. Scrollbars are also not displayed yet. I'll have to take a look at these some day, if upstream doesn't do it before me.

I also set up a git repository to hold the debian branch, following the already existing git repository tracking upstream. Note the filtered branch, which keeps the debian branch from containing what we don't ship, and reduces the download size from more than 100MB to roughly 16MB. I'll write more about this filtering in a few days.

Also, if you're interested in webkit and/or want to give a hand, you can subscribe to the pkg-webkit-maintainers mailing list. Everything is ready for team maintenance, so, don't hesitate ;).

If you want to track changes on the debian repo, there is also a pkg-webkit-commits mailing list where the post-receive hook sends the commit messages.

2007-08-19 20:33:06+0900

webkit | Comments Off on New WebKit snapshot (almost) in unstable

WebKit (almost) in unstable

I finally uploaded the first release of WebKit to unstable. It will obviously need to go through NEW first, which might take some time.

There is still work to do, the first item being to correctly set up the git repositories. For the moment, there is only a git repository following upstream svn available on git.d.o. The branches are a bit messy, though; I have to figure out why git-svn insists on randomly recreating the master branch... I had supposedly removed the master branch to have the svn branch follow the git-svn remote.

Anyways, once that is sorted out, I'll set up a special branch to create our tarballs, from which we'd derive the debian branch (or not, this is not decided yet). This special branch will be a copy of the upstream branch with some stuff removed (see debian/copyright in the current package source for a list).

Speaking of the package source, since packages in NEW are not available, I made the package sources and binaries available on gluck.

An important note: the version I uploaded is made from revision 24735 of the upstream svn repository, which is from July 27. Unfortunately, to build the first version of epiphany that includes webkit embedding (2.19.6) as is, webkit_gtk_page_can_go_forward and webkit_gtk_page_can_go_backward are needed, and these, while available in the API headers, only appeared in the source code on July 30.

However, I built a hacked epiphany with the calls to these functions removed (which means the back and forward buttons won't work properly), and made it available on gluck too. Be aware that this version requires glib and gtk from experimental, which, I've been told, make all gtk/glib warnings fatal. That means all applications that usually fill your .xsession-errors log file are likely to crash with these versions.

You'll note the integration is not yet perfect, the biggest misfeature being the missing scrollbars, along with some glitches such as the user agent (it is hardcoded in WebKit :-/), and the about window still saying it is based on Gecko ;)

I'll try to push a new version of WebKit soon enough.

Stay tuned.

2007-08-15 17:40:38+0900

webkit | 5 Comments »

Buildd network for QA or experimentations ?

I often see people posting on planet.d.o, or on some list, about how they rebuilt the whole archive to test X or Y. Don't you think it would be useful to have some sort of buildd network to do such experiments or QA testing? Or at least some infrastructure that would make it easy to do?

For instance, I would like to do a build test of the whole archive, with a hooked dh_strip and whatever else would be necessary, so that it can be determined how many packages follow the recommendations of policy 10.1 about debugging symbols. The problem is that I am far from having enough resources to do this...

Update: Actually, it would probably be enough to check the results of builds with DEB_BUILD_OPTIONS=nostrip.
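As an illustration of what such a check could look at (my sketch, not the author's tooling): file reports whether an ELF binary still carries its symbol table, which is exactly what a nostrip audit would inspect; /bin/ls below merely stands in for a binary produced by a rebuilt package.

```shell
# A binary built with DEB_BUILD_OPTIONS=nostrip should report "not stripped".
bin=/bin/ls
if file "$bin" | grep -q 'not stripped'; then
  echo "$bin: debugging symbols present"
else
  echo "$bin: stripped"
fi
```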

2007-08-05 09:37:45+0900

debian | 1 Comment »