
Some more about the progressive decompression experiment

There are a few things I forgot to mention in my previous post about the progressive decompression experiment, so here they are.

Implicit section grouping

As I mentioned, GCC may now generate function sections with names starting with .text.hot., .text.unlikely., etc. It also expects the linker to group each category together. GNU ld does it, but Gold doesn't, yet.

While grouping these can be interesting for runtime performance, it can also hurt startup performance, or only go half-way.

The .text.unlikely. sections being grouped together means that code that is not unlikely is also grouped together, which helps reduce cache pressure at runtime. But a function that is unlikely to be called is not a function that is never called, and a lot of unlikely functions are called during startup. Fortunately, GNU ld places them first.

The .text.startup. sections being grouped together means that static initializers are all stacked together. This is good, because it helps a lot with the awful I/O patterns that can be observed during startup without this grouping. But it only goes half-way, because functions called by these static initializers (think class constructors, for example) are not marked as .text.startup. and can still be placed anywhere.

Profiling the right thing

Like with hardware-specific code paths, there are various code paths that can be triggered at startup during the profiling phase that may or may not be triggered in real-world scenarios. Having or not having a pre-existing profile, having or not having extensions installed, opening the default home page or opening a URL, etc. all have an impact on which code paths are exercised during startup.

Anything that happens in actual use cases that weren't part of the profiling scenario may call code that was not covered by any reordering, undermining the startup time improvements progressive decompression can get us. This is why, more than any kind of binary layout tweak, being able to uncompress any random part of the binary at any time is critical to bring startup improvements to everyone, in every case.

2011-11-02 08:46:10+0900

p.m.o | No Comments »

Improving binary layout for progressive decompression

As I mentioned in my previous post, we recently built a prototype that would only uncompress the bits we do require from our libraries, as opposed to the complete library. Or at least try to.

The idea

The original idea was to do selective decompression of blocks as they are required: instead of uncompressing the whole file and then using it, we'd only uncompress the blocks we need, at the moment we need them. So, for example, instead of uncompressing 32KB and then using the first and last 4KB (completely ignoring the 24KB in the middle), we'd only uncompress and use the first and last 4KB.

Unfortunately, compressed streams in ZIP files are not seekable, so with the compressed data we have currently, we just can't do that. Which is why we're going to work on figuring out how to best do it with an index, and how the zlib library will allow us to do so. Fortunately, one of the zlib authors already has some working code doing something similar, though it's probably not usable as-is.
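Until then, here's a rough sketch in Python of the checkpointing idea (the zlib example code mentioned above does this more efficiently in C): periodically snapshot the decompressor state with Decompress.copy(), then serve a random read by resuming from the nearest snapshot. The chunk and checkpoint sizes, and the API, are illustrative.

```python
import zlib

CHUNK = 1024

def build_index(compressed, every=4096):
    """Checkpoint decompressor state roughly every `every` output bytes."""
    d = zlib.decompressobj()
    index = [(0, 0, d.copy())]          # (input offset, output offset, state)
    out_pos, next_mark = 0, every
    for in_pos in range(0, len(compressed), CHUNK):
        out_pos += len(d.decompress(compressed[in_pos:in_pos + CHUNK]))
        if out_pos >= next_mark:
            index.append((in_pos + CHUNK, out_pos, d.copy()))
            next_mark = out_pos + every
    return index

def read_at(compressed, index, offset, size):
    """Read `size` uncompressed bytes at `offset` via the nearest checkpoint."""
    in_pos, out_pos, state = max(
        (e for e in index if e[1] <= offset), key=lambda e: e[1])
    d = state.copy()                    # keep the index reusable
    buf = bytearray()
    while out_pos + len(buf) < offset + size and in_pos < len(compressed):
        buf += d.decompress(compressed[in_pos:in_pos + CHUNK])
        in_pos += CHUNK
    buf += d.flush() if in_pos >= len(compressed) else b""
    return bytes(buf[offset - out_pos:offset - out_pos + size])
```

A real index would also have to cope with the ZIP container around the deflate stream, which this sketch ignores.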

Anyway, as we needed to put a prototype together quite rapidly and had experience with binary reordering in the past, we figured a way to get the same kind of benefit would be to improve the binary layout so that we can uncompress subsets of the file progressively, as they are used. So, for example, if the order in which we need blocks looks like [10, 11, 7, 13, 17], we'd first uncompress up to the 10th block, then the 11th, then use the 7th, which was uncompressed earlier, then uncompress up to the 13th, and so on.

In fact, the prototype worked by having a background thread do the decompression while other thread(s) did the actual work; when one of them hit a block that hadn't been uncompressed yet, it would wait for the background thread to reach the required data. The implementation was however probably racy, which led to multiple random crashes. There may also be some issues with the signal handler being used. These are details that we expect to figure out in time.
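A minimal sketch of that scheme, assuming a plain zlib stream and leaving out the signal-handler machinery the real prototype used: a background thread decompresses sequentially, and readers block until the block they ask for has been reached.

```python
import threading
import zlib

class ProgressiveImage:
    """Decompress a zlib stream in a background thread; readers wait
    until the block they need has been produced."""
    BLOCK = 4096   # illustrative block size
    CHUNK = 1024   # how much compressed input to feed at a time

    def __init__(self, compressed):
        self.out = bytearray()
        self.done = False
        self.cond = threading.Condition()
        threading.Thread(
            target=self._decompress, args=(compressed,), daemon=True).start()

    def _decompress(self, compressed):
        d = zlib.decompressobj()
        for i in range(0, len(compressed), self.CHUNK):
            data = d.decompress(compressed[i:i + self.CHUNK])
            with self.cond:
                self.out += data
                self.cond.notify_all()
        with self.cond:
            self.out += d.flush()
            self.done = True
            self.cond.notify_all()

    def block(self, n):
        """Return block n, blocking until the background thread reaches it."""
        end = (n + 1) * self.BLOCK
        with self.cond:
            while len(self.out) < end and not self.done:
                self.cond.wait()
            return bytes(self.out[n * self.BLOCK:end])
```

With the [10, 11, 7, 13, 17] order from above, the call to block(7) returns immediately since block 10 has already been reached.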

Relocated data

The first obstacle is that when a library is loaded, relocations are performed. Relocations are basically what allows pointers in a library to point to the right place in memory. They usually apply to data sections. They can apply to code sections when the code is not PIC (Position Independent Code), but this is fortunately not the case anymore on Firefox for Android. So they apply to data sections, and data sections are at the end of libraries. Which means, under our progressive decompression scheme as explained above, we'd still have to uncompress almost the entire file.

In the long term, we should be able to apply these relocations as we uncompress each block, but that requires our linker to handle elfhack relocations itself, and some more tweaks to access the appropriate relocations without having to scan the entire list. But even then, data is accessed during startup. Thus, until we can seek in the compressed data stream, data needs to be placed before code.

As I was using the gold linker for various other reasons, and as gold doesn't have much flexibility for these things, I added a little something to elfhack to rewrite the ELF file such that data sections come before code sections in the file, but still after them when loaded in memory. Nice trick, but unfortunately, a lot of tools are even less happy with it than with what elfhack already does.

Reordering functions

Once data is dealt with by being placed before code, we need to make sure code required during startup is grouped together. The linker, however, is not able to reorder functions without some special treatment when compiling the source to object files: it requires the -ffunction-sections option to be given to GCC.

Normally, when compiling a source file to an object file, GCC creates a unique .text section containing all the code. If in a given source file, there are two functions, foo and bar, and foo calls bar, the function call from foo to bar will be hard-coded to use the distance from the call location to the start of bar.

When using -ffunction-sections, each function is given a separate .text.function_name section, and the compiler places relocations for calls between them. Simply put, instead of recording the distance from the call location to the start of bar, we'd have a relocation applying to the call location, with symbolic information telling that the destination is the start of bar.

The linker will then apply the relocations, and the resulting code will be using the distance between the functions, as without -ffunction-sections. The difference comes from the fact that the linker now is able to move these function sections independently and adjust the various calls accordingly.
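As a toy model of what this enables, here is the arithmetic: function sections are laid out per an order list, then relative "call" relocations are resolved against the final addresses. This is not real ELF handling, and real call relocations encode the distance with instruction-specific adjustments that the sketch ignores.

```python
def link(sections, relocs, order):
    """Toy linker: `sections` maps function name to raw bytes, `relocs`
    is a list of (function, offset_in_function, target) call sites, and
    `order` is the section order list. Returns the image and layout."""
    addr, layout = 0, {}
    for name in order:                  # place each section per the order list
        layout[name] = addr
        addr += len(sections[name])
    image = bytearray(b"".join(sections[n] for n in order))
    for site_fn, offset, target in relocs:
        site = layout[site_fn] + offset
        rel = layout[target] - site     # the distance the call will encode
        image[site:site + 4] = rel.to_bytes(4, "little", signed=True)
    return image, layout
```

Reordering is just a different `order` list; the relocations make every call distance come out right regardless.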

Unfortunately, there are several parts that -ffunction-sections doesn't affect, and thus can't be reordered without some effort:

  • Assembler source
  • (Static) system libraries (e.g. libgcc, libstlport)

For the former, the various inline assembly and separate assembler sources must be annotated with individual sections for each function defined. For the latter, the static libraries can be rebuilt with -ffunction-sections.

Another problem that was spotted was that the javascript engine wasn't compiled with -ffunction-sections.

Thumb-2/ARM trampolines

ARM processors we target with Firefox for Android have two instruction sets: ARM and Thumb-2. Switching from one to the other is called interworking. While we build Firefox for Android with the Thumb-2 instruction set, system libraries may use the ARM instruction set, so calls to system library functions must be able to interwork.

Technically, interworking can be triggered from Thumb-2 or ARM code, to call either one. However, the Procedure Linkage Table, which contains code that either calls the linker to resolve symbols or calls the system library functions, contains ARM code. Theoretically, in a program or library that is entirely Thumb-2, it should be possible to use Thumb-2 code there, but neither GNU ld nor gold does that. They do, however, currently behave differently from each other.

GNU ld generates PLT entries that start as Thumb-2, switch to ARM and then call whatever they need to call.

Gold generates PLT entries that are fully ARM, but adds trampolines into which Thumb-2 code jumps to interwork and jump back to the ARM PLT.

These trampolines are unfortunately "randomly" placed in the resulting binary. They're not actually randomly positioned, but the result is that these trampolines end up in places that are definitely not near the beginning of the file. In my Firefox for Android builds, they were usually grouped in various places, some of which were at the end of the code section, which went completely against our goal. For the purpose of making the prototype quickly available, we had to compile Firefox for Android as ARM, which effectively makes the binary larger (ARM instructions are larger than Thumb-2 instructions), thus longer to uncompress.

Identical Code Folding

Identical Code Folding is one of the features we were using gold for. It allows the linker to replace various identical implementations of functions with a single occurrence. This is particularly interesting on C++ code, where templates are heavily used. For example, a templated function used for bools and for ints will likely result in the same machine code.

In some other cases, simple enough functions can end up completely identical at the machine code level despite not looking as such in C++.

The problem is that gold handles function (section) reordering after ICF. Once the ICF pass is done, only one of the original functions is known to gold for function reordering (even if in the end it does expose all the symbols appropriately). And it may well be the variant that does not appear in the function order list.

The current workaround is to link libxul.so once, with an option printing the functions being folded together, and adding all of them in the order list, at the position where it makes most sense. For instance, if foo and bar are folded and bar appears in the order list, foo and bar would be put in place of bar in the list.
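That workaround can be mechanized. A sketch, assuming we already parsed the list of folded groups out of the link with the printing option (the data structures are made up):

```python
def expand_order(order, folded_groups):
    """Rewrite an order list so that every function gold may fold a listed
    function into also appears, at the listed function's position.
    `folded_groups` is a list of lists of function names folded together."""
    group_of = {}
    for group in folded_groups:
        for fn in group:
            group_of[fn] = group
    out, seen = [], set()
    for fn in order:
        # replace fn by its whole fold group (or just itself)
        for name in group_of.get(fn, [fn]):
            if name not in seen:
                seen.add(name)
                out.append(name)
    return out
```

So if foo and bar are folded and only bar is in the list, both end up at bar's position, whichever variant gold keeps.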

Section names

As it may have transpired from above explanations, gold doesn't actually do function reordering. It does section reordering. In the simple cases, sections for functions are just named .text.function_name. However, there are also other variants, such as .text.startup.function_name, .text.hot.function_name or .text.unlikely.function_name. Even more subtle, there can also be .text.function_name.part.number sections, but for these, the .part.number is actually part of the function name as seen in the symbols table.

Tools that do the kind of profiling required to do function ordering like we wanted to do will give function names, not section names, because the section names are only present in the object files, not in the final libraries and programs. Thus after getting a list of function names as they are called during startup, cross-referencing with the object files to get the corresponding section names is required.

Valgrind

We used Valgrind to get the order in which functions are called during startup. Our icegrind plugin would have allowed that, but various changes in Valgrind in the past year apparently broke it. Moreover, ARM support for Valgrind is a work in progress based on current trunk, so we needed a recent build, which excludes using icegrind unless we update it.

But it also turns out Valgrind has a built-in feature that more or less allows getting the order in which functions are called: it can show a trace of each basic block the first time it sees it executed (--trace-flags=10000000, and to be on the safe side, --vex-guest-chase-thresh=0). Basic blocks are subsets of functions, so Valgrind will likely report the same function several times, but it's pretty easy to filter out those duplicates.
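Filtering those duplicates down to a first-seen order list could look like this; the trace line pattern below is purely illustrative, and would have to be adjusted to the actual output format.

```python
import re

# Illustrative pattern: a Valgrind-style "==pid==" prefix followed
# somewhere by "fn=<name>". The real trace output differs.
FN_RE = re.compile(r"==\d+==.*?\b(?:Fn|fn)=([\w:<>~.]+)")

def first_seen(lines, pattern=FN_RE):
    """Return function names in the order they first appear in the trace."""
    seen, order = set(), []
    for line in lines:
        m = pattern.search(line)
        if m and m.group(1) not in seen:
            seen.add(m.group(1))
            order.append(m.group(1))
    return order
```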

Unfortunately, we encountered a couple of problems with this built-in tracing: it doesn't report names for some functions when the binary is compiled as Thumb-2, and doesn't report at all some functions that we know are being called when the binary is compiled as ARM (it may also be the case for Thumb-2; I haven't verified). In the end, the order list was updated manually for the prototype.

Hardware-specific code paths

There are various places in the Mozilla code base where different implementations of the same functions are available for different kinds of hardware, and the most suitable one is chosen at runtime, depending on the hardware detected. On x86, we have some implementations using SSE2 instructions, and fall-back implementations in plain x86. On ARM, there are implementations using NEON instructions, and fall-back implementations in plain ARM or Thumb-2. Sometimes, there aren't even fall-backs.

This means that which function is going to be used at runtime will very much depend on the hardware it will be executed on. So if you profile Firefox for Android on a tablet with a Tegra-2 CPU (which doesn't have NEON), you won't be exercising the same code paths as when running the same Firefox for Android build on a Nexus S (which CPU does have NEON).

If we get the order of functions called at startup on the Tegra-2 tablet, we won't be getting any of the NEON-specific functions in the functions list, and the resulting reordered binary may have the NEON-specific functions at an inconvenient position for progressive decompression. So the runtime benefits we'd see on the Tegra-2 tablet would go away on the Nexus S. And vice-versa.
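For illustration, here's the shape of such runtime dispatch in miniature (function names and the feature check are made up): whichever implementation the check selects is the only one that shows up in a profile taken on that device, which is exactly why profiles from different hardware diverge.

```python
def detect_features():
    """Crude stand-in for CPU feature detection (reads /proc/cpuinfo)."""
    try:
        with open("/proc/cpuinfo") as f:
            return set(f.read().lower().split())
    except OSError:
        return set()

def scale_generic(values, factor):
    # fall-back implementation, always available
    return [v * factor for v in values]

def scale_neon(values, factor):
    # stand-in for a NEON-accelerated implementation
    return [v * factor for v in values]

# Chosen once at startup: only one of the two ever appears in a profile.
scale = scale_neon if "neon" in detect_features() else scale_generic
```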

Callgraph-based reordering

The GCC google branch has patches adding an interesting new feature: callgraph-based reordering. It makes the compiler add call graph information, extracted from PGO/FDO profile data, to object files, and makes the linker use that information to reorder the binary. Theoretically, this would allow us to reorder our binaries without relying on Valgrind or something similar, and even take advantage of the fact that we already do PGO/FDO (except we currently don't, for Android builds).

The main drawback, however, is exactly the same as for the -ffunction-sections option: assembler sources and static (system) libraries are not going to be covered. And unlike with -ffunction-sections, it's hard to work around.

It also most likely suffers the same problem as section ordering w.r.t identical code folding, and also has the same problem with ARM/Thumb-2 trampolines and hardware-specific code paths.

Conclusion

We're still a long way from having efficiently reordered binaries for Android builds. It involves toolchain fixes and tweaks, and build engineering tricks to allow running parts of the build on a device, as part of the cross-compiled build. We should however be able to benefit from incremental decompression with seekable compressed streams well before that. Stay tuned.

2011-10-31 18:53:23+0900

p.m.o | 7 Comments »

Making Firefox start faster on Android

On my Nexus S, the time line when starting up current nightly looks like this:

0ms Icon tapped
550ms (+550ms) The application entry point in Java code is reached
1350ms (+800ms) The application entry point in native code is reached
1700ms (+350ms) Start creating the top level window (i.e. no XUL has been loaded yet)
2500ms (+800ms) Something is painted on screen

That's quite a good scenario, currently. On the very first run after installation, even more time is spent extracting a bunch of files and creating the profile. And when opening a URL, it's also slower because of the time needed to initialize the content process: tapping the icon brings up about:home, which doesn't start a separate content process.
The Nexus S also isn't exactly a slow device, even if there are much better devices nowadays. This means there are devices out there where startup is even slower than that.

Considering Android devices have a much more limited amount of memory than most Desktop computers, and that as a result switching between applications is very likely to get some background applications stopped, this makes startup time a particularly important issue on Android.

The Mobile team is working on making most of the last 800ms go away. The Performance team is working on the other bits.

Reaching Java code

550ms to reach our code sounds both outrageous and impossible to solve: we control neither how nor when our code is called by Android after the user has tapped on our icon. In fact, we actually have some control.

Not all Firefox builds are made equal, and it turns out some builds don't have the outrageous 550ms overhead, but a 300ms overhead instead. The former builds contain multiple languages; the latter only contain English.

Android applications come in the form of APK files, which really are ZIP files. It turns out Android takes a whole lot of time to read the entire list of files in the APK and/or handle the corresponding manifest file at startup, and the more files there are, the more time is spent. A multi-language build contains 3651 files. A single-language build contains "only" 1428 files.

We expect to get down to about 100ms by packing chrome, components and some other resources together in a single file, effectively storing a ZIP archive (omni.jar) inside the APK. We (obviously) won't compress at both levels.
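A sketch of that packing with Python's zipfile (names and member lists are illustrative, not the real packaging code): the inner archive is deflated, so it is stored, not deflated again, in the outer APK.

```python
import io
import zipfile

def pack_omni_apk(members):
    """Build APK bytes with all given members grouped into one inner
    omni.jar, which is stored uncompressed in the APK so the data
    isn't compressed at both levels."""
    inner = io.BytesIO()
    with zipfile.ZipFile(inner, "w", zipfile.ZIP_DEFLATED) as jar:
        for name, data in members.items():
            jar.writestr(name, data)
    apk = io.BytesIO()
    with zipfile.ZipFile(apk, "w", zipfile.ZIP_DEFLATED) as z:
        # contents are already deflated inside omni.jar: store it as-is
        z.writestr("omni.jar", inner.getvalue(), zipfile.ZIP_STORED)
    return apk.getvalue()
```

The startup win comes from the outer APK now listing one entry instead of thousands.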

Reaching native code

Native code, like on other platforms, comes in the form of libraries. These libraries are stored in the APK, and uncompressed in memory when Firefox starts. Not so long ago, when space permitted, Firefox would actually store the uncompressed libraries on the internal flash, but as this wasn't a clear win in actual use cases (on top of making first startup abysmally slow), it was disabled. We however still have the possibility to enable it for use with various tools such as valgrind or oprofile, which don't support the way we load libraries directly off the APK.

These 800ms between reaching Java code and reaching native code are spent uncompressing the libraries, applying relocations and running static initialization code. Most of these 800ms are actually spent on uncompressing the main library file (libxul.so) alone.

But we don't actually need all that code from the very start. We don't even need all that code to display the user interface and the web page we first load. The analysis I conducted last year on desktop Firefox very much applies on mobile too.

By only loading the pieces that are needed, we expect to cut the loading time at least in half, and even more on multi-core devices. Last week, we built a prototype with promising results, but the experience also uncovered difficulties. More details will follow in a subsequent post.

Initializing Gecko

Another significant part of the startup phase is initializing Gecko. This includes, but is not limited to:

  • Initializing the Javascript engine
  • Registering and initializing components
  • Initializing the various subsystems using sqlite databases

The switch to a native Android user interface is going to significantly alter how filesystem accesses (most notably sqlite) are going to happen. While we have tested asynchronous I/O for sqlite (and having it break Places), we can't currently go much further until native UI reaches a certain level of maturity.

We however need to identify what particular pieces of initialization are contributing the most to the 350ms between reaching native code and starting to create the top level window.

2011-10-24 15:59:09+0900

p.m.o | 5 Comments »

Firefox Mobile debugging on Android got a bit easier

Two changes landed recently that are going to make Firefox Mobile debugging on Android a bit easier:

  • Debug info files are now available for nightly builds since Friday, aurora builds since Saturday, and beta builds starting from 8.0b3. Until now, debugging required using custom builds, as the debug info wasn't available anywhere.
  • Our custom Android linker now makes GDB happy and properly reports our libraries to GDB, without requiring them to be extracted. This means you don't need to run with the debug intent to be able to attach and use GDB. This is only on mozilla-central for now. Update: unfortunately, it also only works on "debuggable" builds, so only custom builds :(

For convenience, I modified the fetch-symbols script to allow downloading the debug info files for Android builds (note this version of the script also solves the problem of getting debug info files for x86 builds on an x86-64 host). Instead of giving it the Firefox directory like for desktop builds, give it the APK file:

$ python fetch-symbols.py fennec-10.0a1.en-US.android-arm.apk http://symbols.mozilla.org/ /path/for/symbols

2011-10-10 10:51:12+0900

p.m.o | 3 Comments »

Building a custom kernel for the Nexus S

There are several reasons why someone would want to build a custom kernel for their Android phone. In my case, this is because I wanted performance counters (those used by the perf tool that comes with the kernel source). In Julian Seward's case, he wanted swap support to overcome the limited memory amount on these devices in order to run valgrind. In both cases, the usual suspects (AOSP, CyanogenMod) don't provide the wanted features in prebuilt ROMs.

There are also several reasons why someone would NOT want to build a complete ROM for their Android phone. In my case, the Nexus S is what I use to work on Firefox Mobile, but it is also my actual mobile phone. It's quite a painful and long process to create a custom ROM, and another long (but arguably less painful, thanks to ROM manager) process to back up the phone data, install the ROM, and restore the phone data. And if you happen to like or use the proprietary Google Apps that don't come with the AOSP sources, you need to add more steps.

There are however tricks that allow building a custom kernel for the Nexus S and using it with the system already on the phone. Please note that the following procedure has only been tested on two Nexus S phones with a 2.6.35.7-something kernel (one with a stock but unlocked ROM, and another with an AOSP build). Also please note that there are various ways to achieve many of the steps in this procedure, but I'll only mention one (or two in a few cases). Finally, please note some steps rely on your device being rooted. There may be ways to do without, but I'm pretty sure it requires an unlocked device at the very least. This post covers neither rooting nor unlocking.

Preparing a build environment

To build an Android kernel, you need a cross-compiling toolchain. Theoretically, any will do, provided it targets ARM. I just used the one coming in the Android NDK:

$ wget http://dl.google.com/android/ndk/android-ndk-r6b-linux-x86.tar.bz2
$ tar -jxf android-ndk-r6b-linux-x86.tar.bz2
$ export ARCH=arm
$ export CROSS_COMPILE=$(pwd)/android-ndk-r6b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-

For the latter, you need to use a directory path containing prefixed versions (such as arm-eabi-gcc or arm-linux-androideabi-gcc), and include the prefix, but not "gcc".

You will also need the adb tool coming from the Android SDK. You can install it this way:

$ wget http://dl.google.com/android/android-sdk_r12-linux_x86.tgz
$ tar -zxf android-sdk_r12-linux_x86.tgz
$ android-sdk-linux_x86/tools/android update sdk -u -t platform-tool
$ export PATH=$PATH:$(pwd)/android-sdk-linux_x86/platform-tools

Building the kernel

For the Nexus S, one needs to use the Samsung Android kernel tree, which happens to be unavailable at the time of writing due to the kernel.org outage. Fortunately, there is a clone used for the B2G project, which also happens to contain the necessary cherry-picked patch adding support for the PMU registers on the Nexus S CPU, needed for the performance counters.

$ git clone -b devrom-2.6.35 https://github.com/cgjones/samsung-android-kernel
$ cd samsung-android-kernel

You can then either start from the default kernel configuration:

$ make herring_defconfig

or use the one from the B2G project, which enables interesting features such as oprofile:

$ wget -O .config https://raw.github.com/cgjones/B2G/master/config/kernel-nexuss4g

From then, you can use the make menuconfig or similar commands to further configure your kernel.

One of the problems you'd first encounter when booting such a custom kernel image is that the bcm4329 driver module that is shipped in the system partition (and not in the boot image) won't match the kernel, and won't be loaded. The unfortunate consequence is the lack of WiFi support.

One way to overcome this problem is to overwrite the kernel module in the system partition, but I didn't want to have to deal with switching modules when switching kernels.

There is however a trick allowing the existing module to be loaded by the kernel: compile a kernel with the same version string as the one already on the phone. Please note this only really works if the kernel is really about the same. If there are differences in the binary interface between the kernel and the modules, it will fail in possibly dangerous ways.

To use that trick, you first need to know what kernel version is running on your device. Settings > About phone > Kernel version will give you that information on the device itself. You can also retrieve that information with the following command:

$ adb shell cat /proc/version

With my stock ROM, this looks like the following:

Linux version 2.6.35.7-ge382d80 (android-build@apa28.mtv.corp.google.com) (gcc version 4.4.3 (GCC) ) #1 PREEMPT Thu Mar 31 21:11:55 PDT 2011

In the About phone information, it looks like:

2.6.35.7-ge382d80
android-build@apa28

The important part above is -ge382d80, and that is what we will be using in our kernel build. Make sure the part preceding -ge382d80 does match the output of the following command:

$ make kernelversion

The trick is to write that -ge382d80 in a .scmversion file in the kernel source tree (obviously, you need to replace -ge382d80 with whatever your device has):

$ echo -ge382d80 > .scmversion

The kernel can now be built:

$ make -j$(($(grep -c processor /proc/cpuinfo) * 3 / 2))

The -j... part is the general rule I use when choosing the number of parallel processes make can use at the same time. You can pick whatever suits you better.

Before going further, we need to get back to the main directory:

$ cd ..

Getting the current boot image

The Android boot image on the device doesn't contain only a kernel. It also contains a ramdisk with a few scripts and binaries that start the system initialization. As we will be reusing the ramdisk coming with the existing kernel, we need to get it from the device's flash memory:

$ adb shell cat /proc/mtd | awk -F'[:"]' '$3 == "boot" {print $1}'

The above command will print the mtd device name corresponding to the "boot" partition. On the Nexus S, this should be mtd2.

$ adb shell
$ su
# dd if=/dev/mtd/mtd2 of=/sdcard/boot.img bs=4096
2048+0 records in
2048+0 records out
8388608 bytes transferred in x.xxx secs (xxxxxxxx bytes/sec)
# exit
$ exit

In the above command sequence, replace mtd2 with whatever the previous command did output for you. Now, you can retrieve the boot image:

$ adb pull /sdcard/boot.img

Creating the new boot image

We first want to extract the ramdisk from that boot image. There are various tools to do so, but for convenience, I took unbootimg, on github, and modified it slightly to seamlessly support the page size on the Nexus S. For convenience as well, we'll use mkbootimg even though fastboot is able to create boot images.

Building unbootimg, as well as the other tools, relies on the Android build system, but since I didn't want to go through setting it up, I figured out a minimalistic way to build the tools:

$ git clone https://github.com/glandium/unbootimg.git
$ git clone git://git.linaro.org/android/platform/system/core.git

The latter is a clone of git://android.git.kernel.org/platform/system/core.git, which is down at the moment.

$ gcc -o unbootimg/unbootimg unbootimg/unbootimg.c core/libmincrypt/sha.c -Icore/include -Icore/mkbootimg
$ gcc -o mkbootimg core/mkbootimg/mkbootimg.c core/libmincrypt/sha.c -Icore/include
$ gcc -o fastboot core/fastboot/{protocol,engine,bootimg,fastboot,usb_linux,util_linux}.c core/libzipfile/{centraldir,zipfile}.c -Icore/mkbootimg -Icore/include -lz

Once the tools are built, we can extract the various data from the boot image:

$ unbootimg/unbootimg boot.img
section sizes incorrect
kernel 1000 2b1b84
ramdisk 2b3000 22d55
second 2d6000 0
total 2d6000 800000
...but we can still continue

Don't worry about the error messages about incorrect section sizes if it tells you "we can still continue". The unbootimg program creates three files:

  • boot.img-mk, containing the mkbootimg options required to produce a working boot image,
  • boot.img-kernel, containing the kernel image,
  • boot.img-ramdisk.cpio.gz, containing the gzipped ramdisk, which we will reuse as-is.

All that is left to do is to generate the new boot image:

$ eval ./mkbootimg $(sed s,boot.img-kernel,samsung-android-kernel/arch/arm/boot/zImage, boot.img-mk)

Booting the image

There are two ways you can use the resulting boot image: a one-time boot, or flashing. If you want to go for the latter, it is best to actually do both, starting with the one-time boot, to be sure you won't be leaving your phone unusable (though recovery is there to the rescue; it is not covered here).

First, you need to get your device in the "fastboot" mode, a.k.a. boot-loader:

$ adb reboot bootloader

Alternatively, you can power it off, and power it back on while pressing the volume up button.

Once you see the boot-loader screen, you can test the boot image with a one-time boot:

$ ./fastboot boot boot.img
downloading 'boot.img'...
OKAY [ 0.xxxs]
booting...
OKAY [ 0.xxxs]
finished. total time: 0.xxxs

As a side note, if fastboot sits "waiting for device", it either means your device is not in fastboot mode (or is not connected), or that you have permissions issues on the corresponding USB device in /dev.

Your device should now be starting up, and eventually be usable under your brand new kernel (and WiFi should be working, too). Congratulations.

If you want to use that kernel permanently, you can now flash it after going back in the bootloader:

$ adb reboot bootloader
$ ./fastboot flash boot boot.img
sending 'boot' (2904 KB)...
OKAY [ 0.xxxs]
writing 'boot'...
OKAY [ 0.xxxs]
finished. total time: 0.xxxs
$ ./fastboot reboot

Voilà.

2011-09-14 09:23:47+0900

p.d.o, p.m.o | 8 Comments »

Initial VMFS 5 support

Today I added initial VMFS 5 support to vmfs-tools. For the most part, VMFS 5 is VMFS 3 with new features added, so the tools should just work as before; but this initial support is very limited:

  • Unified 1MB File Block Size - Nothing has been changed here, so file size is still limited to 256 GB with 1MB File Block Size.
  • Large Single Extent Volumes - This is not supported yet. So the 2TB extent limitation still exists.
  • Smaller Sub-Block - This actually doesn't change anything to the on-disk format, but is only really the tuning of an existing value in the format. This should be handled out of the box by vmfs-tools.
  • Small File Support - VMFS 5 now stores files smaller than 1KB in the inode itself instead of allocating a Sub-Block. Support for this has been added on master.
  • Increased File Count - Like smaller Sub-Blocks, this was supported by the on-disk format, and the change is only about tuning, so this should just work out of the box.

In related news, while the git repository here is kept alive, I also pushed it to github. The main reason I did so is the issue tracker.

Update: It turns out the small file support makes vmfs-tools crash when accessing files bigger than 256GB, because an assumption made during reverse engineering was wrong and clashes with how files bigger than 256GB are implemented. It also turns out that large single extent volumes may already work, as it looks like this too was only about tuning an existing value, like the smaller sub-block and increased file count.

Update 2: Latest master now supports small files without crashing on files bigger than 256GB.

2011-09-09 17:34:17+0900

vmfs-tools | 23 Comments »

Extreme tab browsing

I have a pathological use of browser tabs: I use a lot of them. A lot is probably an understatement; you could say I use them as bookmarks for things I need to track. A couple of weeks ago, I said I had around two hundred tabs open. I now actually have many more.

It affected startup until I discovered that setting the browser.sessionstore.max_concurrent_tabs pref to 0 made things much better by only loading tabs when they are selected. This preference has been, or will be, renamed browser.sessionstore.restore_on_demand. However, since I only start my main browser once a day, while other applications start and I begin to read email, I hadn't noticed that it was still heavily affecting startup time: about:startup tells me reaching the sessionRestored state takes seven seconds, even on a warm startup.
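For reference, the on-demand behavior can be forced from a user.js file in your profile directory. This is a sketch; which of the two preference names applies depends on your Firefox version, and a preference your version doesn't know is simply ignored:

```
// Only load tabs when they are selected, instead of at session restore.
user_pref("browser.sessionstore.max_concurrent_tabs", 0);  // older name
user_pref("browser.sessionstore.restore_on_demand", true); // newer name
```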

It also affects memory usage: even when tabs are only loaded on demand, there is quite a big overhead for each tab.

And more importantly, it gets worse with time. And I think the user interface is actively making it worse.

So, to get an idea of how bad things were in my session, I wrote a little restartless extension. After installing it, you can go to the about:tabs url to see the damage on your session. Please note that the number of groups is currently wrong until you open the tab grouping interface.

This is what the extension has to say about my session 2 days ago, right after a startup:

  • 556 tabs across 4 groups in 1 window
  • 1 tab has been loaded
  • 444 unique addresses
  • 105 unique hosts
  • 9 empty tabs
  • 210 http:
  • 319 https:
  • 14 ftp:
  • 2 about:
  • 2 file:
  • 55 addresses in more than 1 tab
  • 39 hosts in more than 1 tab

The first thing to note is that when I filed the memory bug 4 days earlier, I had a bit more than 470 tabs in that session. As you can see, 4 days later, I now have 555 tabs (excluding the about:tabs tab).

The second thing to note is something I suspected because it's so easy to get there: a lot of the tabs are opened on the same address. Since Firefox 4.0, if I'm not mistaken, the awesomebar has a great feature that allows you to jump to an existing tab matching what you type in the urlbar. It is very useful, and I use it a lot. However, there are a lot of cases where it's not as useful as it could be.

One of the addresses I visit a lot is http://buildd.debian.org/iceweasel. It gives me the build status of the latest iceweasel package I uploaded to Debian unstable. That url is particularly prominent in my browsing history, and is the first hit when I type "buildd" in the urlbar (actually, even typing "b" brings it up first). Unfortunately, that url redirects to https://buildd.debian.org/status/package.php?p=iceweasel through an HTTP redirection. I say unfortunately because when I type "buildd" in the urlbar, I get 6 suggestions for urls of the form http://buildd.debian.org/package (I also watch other packages' build status), and the suggestion to switch to the existing tab for what the first hit would get me to comes 7th. Guess what? The suggestion list only shows 6 items; you have to scroll to see the 7th.

The result is that I effectively have fifteen tabs open on that url.

I also keep a lot of bugzilla.mozilla.org bugs open in different tabs. The extension tells me there are 255 of them... for 166 unique bugs. Mostly, the duplicate bug tabs come from having a bug open in some tab and then reaching the same bug from somewhere else, usually a dependent bug or TBPL. I also have 5 tabs opened on my request queue. I usually get there by going to the bugzilla home page and clicking on the "My Requests" link. And I have several tabs opened on the same bug lists, for the same reason.

When I started using tab groups, I split my tabs into very distinct groups: basically, one for Mozilla, one for Debian, one for stuff I want to follow (usually blog posts I want to follow comments on), and one for the rest. While I kept up with grouping at the beginning, I don't anymore, and the result is that each group is now a real mess.

Firefox has hundreds of millions of users. It's impossible to create a user experience that works for everyone, but one thing is sure: it doesn't work for me. My usage is probably very wrong at different levels, but I don't feel my browser is encouraging me to use it better; instead, it lets my number of opened tabs explode to an unmanageable level (I already have 30 tabs more than when I started writing this post 2 days ago).

There are a few other things I would like to know about my usage that my extension hasn't told me yet, either because it doesn't tell, or because I haven't looked:

  • How many tabs end up loaded at the end of a typical day?
  • How many tabs do I close?
  • How many duplicate tabs do I open and close?
  • How long has it been since I looked at a given tab?
  • How do the number of tabs and duplicates evolve with time?

Reflecting on my usage patterns, I think a few improvements, either in the stock browser, or through extensions, could make my browsing easier:

  • Auto-grouping tabs: When I click on a link to an url under mozilla.org, I most likely want it in the Mozilla group. An url under debian.org would most likely go in the Debian group.
  • Switch to an existing tab when following a link to an already opened url: This might not be very useful as a general rule, but at least for some domains, it would seem useful for the browser to switch to an existing tab not only through the urlbar, but also when following links in a page. If I'm reading a bug, click on a bug it depends on, and that bug is already opened in another tab, get me there. There would be a history problem to solve, though (e.g. where should back and forward lead?).

Maybe these exist as extensions, I don't know. It's hard to find very specific things like that through an add-on search (though I haven't searched very hard). [Looks like there is an experiment for the auto tab grouping part]

I think it would also be interesting to have something like Test Pilot, but for users that want to know the answer to "How do I use my browser?". As I understand it, Test Pilot can show individual user data, but only if such data exists, and you can't get data for past studies you didn't take part in.

In my case, I'm not entirely sure that, apart from the pinned tabs, I use the tab bar a lot. And even for pinned tabs, most of the time I use keyboard shortcuts. I'm not using the menu button that much either. I already removed the url and search bar (most of the time) with LessChrome HD. Maybe I could go further and use the full window for web browsing.

2011-08-29 09:27:55+0900

firefox, p.m.o | 47 Comments »

No wonders with PGO on Android

I got Profile Guided Optimization (a.k.a. Feedback Directed Optimization) to work for Android builds, using GCC 4.6.1 and Gold 2.21.53.

Getting such a build is not difficult, just a bit time consuming.

  • Apply the patches from bug 632954
  • Get an instrumented build with the following command:

    $ make -f client.mk MOZ_PROFILE_GENERATE=1 MOZ_PROFILE_BASE=/sdcard/mozilla-pgo

  • Create a Fennec Android package:

    $ make -C $objdir package

    If you get an elfhack error during this phase, make sure to update your tree, the corresponding elfhack bug has been fixed.

  • Install the package on your device:

    $ adb install -r $objdir/dist/fennec-8.0a1.en-US.android-arm.apk

  • Open Fennec on your device, and do some things in your browser, so that execution data is collected. For my last build, I installed the Zippity Test Harness add-on and ran the V8, Sunspider and PageLoad tests.
  • Collect the execution data:

    $ adb pull /sdcard/mozilla-pgo /

  • Clean-up the build tree:

    $ make -f client.mk clean

  • Build using the execution data:

    $ make -f client.mk MOZ_PROFILE_USE=1

  • Create the final Fennec Android package, install and profit:

    $ make -C $objdir package
    $ adb install -r $objdir/dist/fennec-8.0a1.en-US.android-arm.apk

As the title indicates, though, this actually leads to some disappointment. On my Nexus S, the resulting build is actually slightly slower on Sunspider than the corresponding nightly. It is however much faster on V8 (down to around 1200 from around 1800), but... is just as fast as a non-PGO/FDO build with GCC 4.6. Even sadder, the non-PGO/FDO build with GCC 4.6 is faster on Sunspider than the PGO/FDO build, and on par with the GCC 4.4-built nightly.

So, my experiments suggest that switching to GCC 4.6 would give us some nice speed-ups, but enabling PGO/FDO wouldn't add to that.

If you want to test and compare my builds on different devices, please go ahead, with the following builds:

The former will install as "Nightly", while the two others will install as "Fennec".

The sizes are also interesting: while the PGO build is bigger than the Nightly build, the plain GCC 4.6 build is smaller.

2011-08-04 14:50:50+0900

p.m.o | 10 Comments »

Building an Android NDK with recent GCC and binutils

As of writing, the latest Native-code Development Kit for Android (r6) comes with gcc 4.4.3 and binutils 2.19 for ARM. This combination makes for a quite old toolchain that lacks various novelties, such as properly working Profile Directed Optimization (a.k.a. Profile Guided Optimization) or Identical Code Folding.

The first thing needed to rebuild a custom NDK is the NDK itself.

$ wget http://dl.google.com/android/ndk/android-ndk-r6-linux-x86.tar.bz2
$ tar -xjf android-ndk-r6-linux-x86.tar.bz2
$ cd android-ndk-r6

Next, you need to get the NDK source (this can take a little while and requires git, but see further below if you want to skip this part):

$ ./build/tools/download-toolchain-sources.sh src

Rebuilding the NDK toolchain binaries is quite simple:

$ ./build/tools/build-gcc.sh $(pwd)/src $(pwd) arm-linux-androideabi-4.4.3

But this doesn't get you anything modern. It only rebuilds what you already have.

The GCC 4.4.3 that comes with the NDK is actually quite heavily patched. Fortunately, only a few patches are required for gcc 4.6.1 to work with the NDK (corresponding upstream bug).

In order to build a newer GCC and binutils, you first need to download the sources for GCC (I took 4.6.1) and binutils (I took the 2.21.53 snapshot, see further below), as well as GMP, MPFR and MPC. The latter was not a requirement to build GCC 4.4. GMP and MPFR come with the NDK toolchain sources, but the versions available there are too old for GCC 4.6.

All the sources must be placed under src/name, where name is gcc, binutils, mpc, mpfr, or gmp. The sources for MPC, MPFR and GMP need to remain as tarballs, but the sources for GCC and binutils need to be extracted (don't forget to apply the patch linked above to GCC). In the end you should have the following files/directories:

  • src/gcc/gcc-4.6.1/
  • src/binutils/binutils-2.21.53/
  • src/gmp/gmp-5.0.2.tar.bz2
  • src/mpc/mpc-0.9.tar.gz
  • src/mpfr/mpfr-3.0.1.tar.bz2

If you skipped the NDK toolchain source download above, you will also need the gdb sources. NDK comes with gdb 6.6, so you should probably stay with that one. The source needs to be extracted like GCC and binutils, so you'll have a src/gdb/gdb-6.6/ directory. Another part you will need is the NDK build scripts, available on git://android.git.kernel.org/toolchain/build.git. They should be put in a src/build/ directory. For convenience, you may directly download a tarball.
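The layout above can be prepared with a few commands. This is only a sketch: it assumes the tarballs were already downloaded into the NDK top directory, with file names matching the versions listed above (adjust the binutils name to the actual snapshot you took), and it skips any file that is missing:

```shell
# Create the source layout expected by build-gcc.sh.
mkdir -p src/gcc src/binutils src/gdb src/gmp src/mpc src/mpfr
# GCC, binutils and gdb sources must be extracted.
for t in gcc-4.6.1 binutils-2.21.53 gdb-6.6; do
  if [ -f $t.tar.bz2 ]; then tar -xjf $t.tar.bz2 -C src/${t%%-*}; fi
done
# GMP, MPC and MPFR must remain as tarballs.
for t in gmp-5.0.2.tar.bz2 mpc-0.9.tar.gz mpfr-3.0.1.tar.bz2; do
  if [ -f $t ]; then cp $t src/${t%%-*}/; fi
done
```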

You then need to edit the build/tools/build-gcc.sh script to add support for MPC:

Add the following lines somewhere around similar lines in the script:

MPC_VERSION=0.8.1
register_var_option "--mpc-version=<version>" MPC_VERSION "Specify mpc version"

And add the following to the configure command in the script:

--with-mpc-version=$MPC_VERSION

If you want to use gold by default instead of GNU ld, you can also add, at the same place:

--enable-gold=default

If you want a GNU libstdc++ compiled as Position Independent Code (note that by default, the NDK won't use GNU libstdc++, but its own), you can add, at the same place:

--with-pic

Once all this preparation is done, you can build your new NDK toolchain with the following command:

$ ./build/tools/build-gcc.sh --gmp-version=5.0.2 --mpfr-version=3.0.1 --mpc-version=0.9 --binutils-version=2.21.53 $(pwd)/src $(pwd) arm-linux-androideabi-4.6.1

If you're running a 64-bit system on x86-64, you can also add the --try-64 option to the above command, which will give you a 64-bit toolchain to cross-build ARM binaries, instead of the 32-bit toolchain you get by default.

When building Firefox with this new toolchain, you need to use the following in your .mozconfig:

ac_add_options --with-android-toolchain=/path/to/android-ndk-r6/toolchains/arm-linux-androideabi-4.6.1/prebuilt/linux-x86

Or the following for the 64-bit toolchain:

ac_add_options --with-android-toolchain=/path/to/android-ndk-r6/toolchains/arm-linux-androideabi-4.6.1/prebuilt/linux-x86_64

Note that currently, elfhack doesn't support the resulting binaries very well, so you will need to also add the following to your .mozconfig:

ac_add_options --disable-elf-hack

Or, if you don't want to build it yourself, you can get the corresponding pre-built NDK (32-bit) (thanks to Brad Lassey for the temporary hosting). Please note it requires libstdc++ from gcc 4.5 or higher.

Here is a list of things you may need to know if you want to try various combinations of versions, and that I had to learn the hard way:

  • GCC 4.6.1 doesn't build with binutils 2.19 (GNU assembler lacks support for a few opcodes it uses)
  • GNU ld >= 2.21.1 has a crazy bug that leads to a crash of Firefox during startup. There is also a workaround.
  • Gold fails to build with gcc 4.1.1 (I was trying to build in the environment we use on the buildbots) because of warnings (it uses -Werror) in some versions, and because of an Internal Compiler Error with other versions.
  • When building with a toolchain that is not in the standard directory and that is newer than the system toolchain (like, in my case, gcc 4.5 in /tools/gcc-4.5 instead of the system gcc 4.1.1), gold may end up with a libstdc++ dependency that the system libstdc++ doesn't satisfy. In that case, the NDK toolchain build fails with the error message "Link tests are not allowed after GCC_NO_EXECUTABLES.", which isn't exactly helpful in understanding what is wrong.
  • At some point, I was getting the same error as above when the build was occurring in parallel, and adding -j1 to the build-gcc.sh command line solved it. It hasn't happened to me in my recent attempts, though.
  • Gold 2.21.1 crashes when using Identical Code Folding. This is fixed on current binutils HEAD (which is why I took 2.21.53).

2011-08-01 17:48:16+0900

p.m.o | 19 Comments »

-feliminate-dwarf2-dups FAIL

DWARF-2 is a format for storing debugging information. It is used on many ELF systems such as GNU/Linux. With the way things are compiled, there is a lot of redundant information in the DWARF-2 sections of an ELF binary.

Fortunately, there is an option to gcc that helps deal with the redundant information and downsizes the DWARF-2 sections of ELF binaries. This option is -feliminate-dwarf2-dups.

Unfortunately, it doesn't work with C++.

With -g alone, libxul.so is 468 MB. With -g -feliminate-dwarf2-dups, it is... 1.5 GB. FAIL.

The good news is that, as stated in the message linked above, -gdwarf-4 does indeed help reduce debugging information size. libxul.so, built with -gdwarf-4, is 339 MB. This however requires gcc 4.6 and a pretty recent gdb.

2011-07-30 11:21:01+0900

p.d.o, p.m.o | 1 Comment »