Archive for the 'p.d.o' Category

What is a Web App?

Is it this, this or that?

2012-07-20 11:04:59+0200

p.d.o, p.m.o | 7 Comments »

Comment spam

Three weeks ago, I slightly modified the comment system on this blog for an experiment. This blog is a standard WordPress installation, where the HTML form normally directs comments to the wp-comments-post.php script. What I did is:

  • Create a comments-post.php wrapper script that simply includes wp-comments-post.php (this allows things to keep working after WordPress upgrades),
  • Point the HTML form at that comments-post.php script,
  • Add a usedForm=1 parameter to the HTML form action, so that comments-post.php is supposed to always be called with it,
  • Add a small piece of JavaScript that adds a hasJS=1 parameter to the HTML form action when the page is loaded, and a Submit=1 parameter when the form is submitted.

During the past three weeks, this blog received 7170 comments, only 8 of which were actual comments. The other 7162 (~99.9%) were spam.

  • 3165 spams (~44.1%) were sent to the original WordPress comment handler (wp-comments-post.php) from 1589 unique IP addresses.
  • No spam was sent to the new comment handler without a query string (comments-post.php), but 1 was sent with an empty query string (comments-post.php?).
  • 18 spams were sent to the new comment handler with a lowercased query string (comments-post.php?usedform=1) from 6 unique IP addresses.
  • 3971 spams (~55.4%) were sent to the new comment handler with the form query string (comments-post.php?usedForm=1) from 1153 unique IP addresses.
  • 7 spams (~0.1%) were sent to the new comment handler with the full query string, including what is added through javascript (comments-post.php?usedForm=1&hasJS=1&Submit=1) from 5 unique IP addresses.

This means a large portion of spammers don’t bother checking the actual comment form and just use the standard WordPress URL, and another large portion don’t run JavaScript in their bots, though a very few do.

2012-07-15 11:35:54+0200

p.d.o, p.m.o, website | 1 Comment »

Attempting to close a LinkedIn account

Following the trend, I attempted to close my LinkedIn account. Closing a LinkedIn account involves confirming and confirming and confirming again. Once it’s all done, you’d expect to, well, be done with it.

I’m outraged at the result:

  • My public profile is still there. I can’t be sure but I guess people with a connection to me can still see the full profile.
  • I’m still receiving LinkedIn connection emails (you know, those “Learn about xxxxxxx, your new connection…” emails; I must have had pending outgoing invitations).
  • I can still reset my password.
  • I can still login.

The only upside is that after I log in, all I can see is a page saying “Your LinkedIn account has been temporarily restricted”. “Contact our customer service team to get this resolved as soon as possible.”

Update: After contacting their customer service, the account was closed and the public profile is now unavailable.

2012-06-09 14:52:53+0200

p.d.o, p.m.o | 8 Comments »

Iceweasel ESR in squeeze-backports

In case this went unnoticed, Iceweasel ESR has been available in squeeze-backports for a few weeks now. I highly recommend anyone using Iceweasel on the Debian stable release to upgrade to that version, at the very least. Even newer versions are available through the pkg-mozilla archive.

2012-06-02 11:33:08+0200

firefox | 8 Comments »

Announcing vmfs-tools version 0.2.5

The last release of vmfs-tools (0.2.1) was almost 2 years ago. It was about time to bring some of the changes that have been available in the git repository into an official tarball. So here it is.

It brings some limited VMFS 5 support and experimental extent removal, as well as some deep changes to the debugvmfs tool and various fixes.

The next release (0.2.6) will have a fixed fsck which, while it still won’t fix file system inconsistencies, should at least report actual inconsistencies (which is far from true currently). I won’t give any estimate as to when this will happen, though.

2012-03-25 11:02:03+0200

vmfs-tools | 11 Comments »

libgcc.a symbol visibility considered harmful

I recently got to rebuild an Android NDK with a fresh toolchain again, and hit an interesting problem. I had actually hit it before, but only this time did I fully analyze what’s going on. [As a side note, if you build such an NDK, don’t use mpfr 3.1.0, as there is a bug in the libtool it ships.]

Linking an application or a library pulls in many things that aren’t part of the code being built. One of these many things is the libgcc static library. Part of libgcc consists of an implementation of the platform ABI. On Android systems, this means the ARM EABI. GCC, when compiling some instructions, will generate ABI calls. For example, integer divisions may call __aeabi_idiv.

Consider the following minimized real world scenario:

$ echo "int foo(int a) { return 42 % a; }" > foo.c
$ arm-linux-androideabi-gcc -o libfoo.so -shared foo.c -mandroid

GCC will emit a call to __aeabi_idivmod for the % operation. With GCC 4.6.3, this function is in _divsi3.o under libgcc.a. That function itself calls __aeabi_idiv0, which lives in _dvmd_lnx.o under libgcc.a.

When statically linking, ld will thus include foo.o, _divsi3.o and _dvmd_lnx.o, meaning it will include all functions from these object files. That is, foo, __divsi3, __aeabi_idiv, __aeabi_idivmod, __aeabi_idiv0 and __aeabi_ldiv0. And more than being included, these functions are exported, because symbol visibility in libgcc.a is the default. So while we only expect to export foo from our library, we’re actually exporting much more, including functions that just happened to be near the ones that our code (indirectly) uses.
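Incidentally, this leakage is what controlling symbol visibility avoids. A minimal sketch (the function names here are hypothetical, purely for illustration; this is not the actual libgcc fix): a helper marked hidden stays callable within the shared object but never shows up in its dynamic symbol table, unlike default-visibility functions.

```c
/* Sketch: visibility("hidden") keeps a helper out of the dynamic symbol
 * table of a shared object, while default visibility exports it.
 * Names are hypothetical, for illustration only. */

/* Internal helper: linked in and callable within the library, but not
 * exported when built into a shared object. */
__attribute__((visibility("hidden")))
int helper_div10(int a) {
    return a / 10;   /* on ARM, this division itself may pull in __aeabi_idiv */
}

/* Default visibility: this is the only symbol we mean to export. */
int foo(int a) {
    return helper_div10(42 % a);
}
```

Building this with -shared and inspecting the result with objdump -T would show foo but not helper_div10; the __aeabi_* objects in libgcc.a carry no such annotation, which is why they leak out.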

Now, let’s say we want to build another library, using that foo function from libfoo:

$ cat > bar.c <<EOF
extern int foo(int a);
long long bar(long long a) { return foo(a) % a; }
EOF
$ arm-linux-androideabi-gcc -o libbar.so -shared bar.c -mandroid

(The code above has absolutely no meaning, it just triggers the same function calls as what I was getting in the actual real world case)

When compiling the above code, GCC will generate a call to __aeabi_ldivmod, which calls __aeabi_ldiv0, and many other things, directly or indirectly. When linking as above, nothing particularly nasty happens. However, linking as above is actually wrong: the resulting library has an undefined reference to the foo symbol, but doesn’t depend on libfoo. At runtime, if libfoo wasn’t already loaded somehow, loading libbar would fail.

The proper way to link is the following:

$ arm-linux-androideabi-gcc -o libbar.so -shared bar.c -mandroid -L. -lfoo

A feature of ELF static linking is that when resolving undefined symbols, the linker uses the first occurrence of a symbol it finds among the objects and libraries given on its command line. So with the command line above, for each __aeabi_* symbol, it will look in libfoo before falling back to libgcc.a. And while __aeabi_ldivmod is not in libfoo, __aeabi_ldiv0 is (see above).

So instead of including the code for __aeabi_ldiv0 from libgcc.a, it will call the copy from libfoo.

This wouldn’t be so much of a problem if __aeabi_ldiv0 wasn’t a weak symbol.

Enter faulty.lib. In the real world case, libfoo is loaded by the system dynamic linker, and libbar by faulty.lib. When resolving symbols for libbar, faulty.lib has to resolve libfoo symbols with the system linker, using dlsym(). On Android, dlsym() returns NULL for weak (defined) symbols, so faulty.lib can’t resolve __aeabi_ldiv0.

The real world case wasn’t a problem with GCC 4.4.3 from the vanilla Android NDK because in that GCC version, __aeabi_ldivmod doesn’t call __aeabi_ldiv0.

This wouldn’t happen if shared libraries didn’t expose random platform-ABI-specific bits depending on what they use and on what other symbols happen to be in the same object files.

A similar issue happened a little while ago on Debian powerpc because a shared library was exporting ABI specific bits. Even worse, the toolchain was assuming the symbols would come from libgcc.a and generated wrong relocations for these symbols.

Update: Interestingly, the __aeabi_* symbols are hidden, in libgcc.a as provided on the Debian armel port.

2012-03-06 17:19:34+0200

faulty.lib, p.d.o, p.m.o | 5 Comments »

Introducing faulty.lib

TL;DR link: faulty.lib is on github.

Android applications are essentially zip archives, with the APK extension instead of ZIP. While most of Android is Java, and Java classes are usually loaded from a ZIP archive (usually with the JAR extension), Android applications using native code need to have their native libraries on the file system. These native libraries are found under /data/data/$appid/lib, where $appid is the package name, as defined in the AndroidManifest.xml file.

So, when Android installs an application, it puts that APK file under /data/app. Then, if the APK contains native libraries under a lib/$ABI subdirectory (where $ABI is armeabi, armeabi-v7a or x86), it also decompresses the files and places them under /data/data/$appid/lib. This means native libraries are actually stored twice on internal flash: once compressed and once decompressed.

This is why Chrome for Android takes almost 50MB of internal flash space after installation.

Firefox for Android used to have that problem, and we decided we should stop doing that. Michael Wu thus implemented a custom dynamic linker, which would load most of the Firefox libraries directly off the APK. This involves decompressing the zipped data somewhere in memory, and doing ld.so’s job to make the library usable (please note that on Android, ld.so is actually named linker). There were initially circumstances under which we would decompress into a file and reuse it the next time Firefox starts, but we subsequently removed that possibility (except for debugging purposes) because it ended up being slower than decompressing each time (thanks to internal flash being so slow).

Anyway, in order to do ld.so’s job, our custom linker was directly derived from Android’s system linker, with many tweaks. This custom linker did its job quite well for some time, but was recently replaced; see further below.

Considering Firefox can’t do anything useful involving Gecko until its libraries are loaded, in practice, this means Firefox can’t display a web page faster than completely decompressing the libraries. Or can it?

Just don’t sit down ’cause I’ve moved your chair

We know that a lot of code and data is not used during Firefox startup. Based on that knowledge, we started working on only loading the necessary bits. The core of the idea is, when a library is requested to be loaded, to reserve anonymous memory for its decompressed size, and… that’s all. That memory is protected such that any access to it triggers a segmentation fault. When a segmentation fault happens, the required bits are decompressed, and execution is resumed where it was before the segmentation fault.

The original prototype decompressed from a normal zip deflated stream, which means it was impossible to seek in it. So, if an access was made at the end of the library, the whole library had to be decompressed. With some nasty binary reordering, and some difficulty, it was possible to avoid accessing the end of the library, but the approach is very fragile. It only takes an unexpected code path to make things much slower than they should be.

Consequently, for the past few months, I’ve been working on improving the original idea and, with some assistance from Julian Seward, implemented the scheme with seekable compressed streams. Instead of letting the zip archive creation tool deflate libraries, we store specially crafted files. Essentially, files are cut into small chunks, and each chunk is compressed individually. This means less efficient compression, but it also means random access to chunks is possible.
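The random access that chunking buys comes down to a little offset arithmetic. A sketch (the chunk size here is an arbitrary assumption, not the one Firefox actually uses):

```c
#include <stddef.h>

/* Assumed uncompressed chunk size; each chunk is compressed independently. */
#define CHUNK_SIZE 16384

/* Map an offset in the uncompressed library to the chunk holding it and the
 * offset within that chunk. Only that one chunk needs inflating to serve an
 * access, instead of everything up to the offset in a plain deflate stream. */
void locate_chunk(size_t uncompressed_off, size_t *chunk, size_t *within) {
    *chunk = uncompressed_off / CHUNK_SIZE;
    *within = uncompressed_off % CHUNK_SIZE;
}
```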

However, instead of stacking on top of our existing custom linker, I started over, from the ground up. First, because it needed a serious cleanup: a good part of linker.c was leftovers from the Android linker that we didn’t use, and APKOpen.cpp was a messy mix of JNI stubs, library decompression handling (itself also a mess) and Gecko initialization code. Most importantly, because it relied on some Android system linker internals and thus required binary compatibility with the system linker. Which, according to Google engineers who contacted us a few months ago, was going to break in what we now know will be called Android Jelly Bean.

The benefit of the clean slate approach is that the new code is not tied to Gecko at all and was designed to work on Android as well as on desktop Linux systems (which made debugging much much easier). We’re thus releasing the code as a separate project: faulty.lib. It is licensed under the Mozilla Public License version 2.0. Please feel free to try, contribute, and/or fork it.

This dynamic linker is not meant to completely follow standard ELF rules (most notably for symbol resolution), and as a result makes some assumptions. It’s also still a work in progress, with some obvious optimizations pending (like avoiding resolving the same symbols again and again during relocations), and some features missing (for example, symbol versioning).

The next blog post will give some information about how to build Firefox for Android to benefit from on-demand decompression. I will also detail a few of the tricks involved in this dynamic linker in subsequent blog posts.

2012-03-01 17:10:11+0200

faulty.lib, p.d.o, p.m.o | 5 Comments »

Fun with weak symbols

Consider the following foo.c source file:

extern int bar() __attribute__((weak));
int foo() {
  return bar();
}

And the following bar.c source file:

int bar() {
  return 42;
}

Compile both sources:

$ gcc -o foo.o -c foo.c -fPIC
$ gcc -o bar.o -c bar.c -fPIC

In the resulting object for foo.c, we have an undefined symbol reference to bar. That symbol is marked weak.

In the resulting object for bar.c, the bar symbol is defined and not weak.

What we expect from linking both objects is that the weak reference is fulfilled by the existence of the bar symbol in the second object, and that in the resulting binary, the foo function calls bar.

$ ld -shared -o foobar.so foo.o bar.o

And indeed, this is what happens.

$ objdump -T foobar.so | grep "\(foo\|bar\)"
0000000000000260 g    DF .text  0000000000000007 foo
0000000000000270 g    DF .text  0000000000000006 bar

What do you think happens if the bar.o object file is embedded in a static library?

$ ar cr libbar.a bar.o
$ ld -shared -o foobar.so foo.o libbar.a
$ objdump -T foobar.so | grep "\(foo\|bar\)"
0000000000000260 g    DF .text  0000000000000007 foo
0000000000000000  w   D  *UND*  0000000000000000 bar

The bar function is now undefined and there is a weak reference for the symbol. Calling foo will crash at runtime.

This is apparently a feature of the linker. If anyone knows why, I would be interested to hear about it. Good to know, though.
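The practical consequence is that code holding a weak reference must not assume it resolved. A common defensive idiom (a sketch; foo_or_default is a made-up name) is to test the symbol’s address before calling through it, since an unfulfilled weak reference resolves to NULL:

```c
/* An unfulfilled weak reference resolves to address 0, so calling bar()
 * unconditionally would crash, exactly as described above. Testing the
 * address first turns the missing symbol into a graceful fallback. */
extern int bar(void) __attribute__((weak));

int foo_or_default(void) {
    if (bar)          /* non-NULL only if something actually defined bar */
        return bar();
    return -1;        /* fallback when bar is absent */
}
```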

2012-02-23 10:46:50+0200

p.d.o, p.m.o | 10 Comments »

Debian Mozilla news

Here are the few noteworthy news about Mozilla packages in Debian:

  • Iceape 2.7 made its way to unstable. This is a huge jump from the previously available 2.0.14, and finally happened because Iceape had made it to the top of my TODO list.
  • Iceape 2.7 is also available for Squeeze users, on the Debian Mozilla team APT archive.
  • Localization is now part of Iceweasel uploads, which means that upgrades won’t break localization anymore. It also means the Debian Mozilla team APT archive now also ships Iceweasel locales.

2012-02-18 09:37:10+0200

firefox, iceape | 5 Comments »

How to waste a lot of space without knowing

const char *foo = "foo";

This was recently mentioned on bugzilla, and the problem is usually underestimated, so I thought I would give some details about what is wrong with the code above.

The first common mistake here is to believe foo is a constant. It is a pointer to a constant. In practical ELF terms, this means the pointer lives in the .data section, and the string constant in .rodata. The following code defines a constant pointer to a constant:

const char * const foo = "foo";

The above code will put both the pointer and the string constant in .rodata. But keeping a constant pointer to a constant string is pointless. In the above examples, the string itself is 4 bytes (3 characters and a zero termination). On 32-bit architectures, a pointer is 4 bytes, so storing the pointer and the string takes 8 bytes: a 100% overhead. On 64-bit architectures, a pointer is 8 bytes, putting the total at 12 bytes: a 200% overhead.
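The difference between the pointer form and the array form is easy to see with sizeof (a sketch; the pointer size naturally depends on the ABI):

```c
/* The pointer form stores an extra object on top of the string itself;
 * the array form *is* the string. */
const char *foo_ptr = "foo";    /* pointer (in .data) + "foo" (in .rodata) */
const char foo_arr[] = "foo";   /* just the 4 bytes of "foo", directly */

/* sizeof(foo_arr) == 4: three characters plus the terminating NUL.
 * sizeof(foo_ptr) == sizeof(char *): 4 or 8 more bytes depending on the
 * architecture, on top of the 4 bytes of the string it points to. */
```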

The overhead is always the same size, though, so the longer the string, the smaller the overhead relative to the string size.

But there is another, not well known, hidden overhead: relocations. When loading a library in memory, its base address varies depending on how many other libraries were loaded beforehand, or depending on the use of address space layout randomization (ASLR). This also applies to programs built as position independent executables (PIE). For pointers embedded in the library or program image to point to the appropriate place, they need to be adjusted to the base address where the program or library is loaded. This process is called relocation.

The relocation process requires information that is stored in .rel.* or .rela.* ELF sections. Each pointer needs one relocation. The relocation overhead varies depending on the relocation type and the architecture. REL relocations use 2 words, and RELA relocations use 3 words, where a word is 4 bytes on 32-bit architectures and 8 bytes on 64-bit architectures.
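Those word counts correspond directly to the relocation structures declared in <elf.h> on Linux systems; this snippet just surfaces their sizes:

```c
#include <elf.h>
#include <stddef.h>

/* REL entries carry an offset and an info word; RELA entries add an
 * explicit addend, hence 2 words vs 3 words per relocation. */
size_t rel32_size(void)  { return sizeof(Elf32_Rel);  }  /* 2 * 4 = 8 bytes  */
size_t rela32_size(void) { return sizeof(Elf32_Rela); }  /* 3 * 4 = 12 bytes */
size_t rel64_size(void)  { return sizeof(Elf64_Rel);  }  /* 2 * 8 = 16 bytes */
size_t rela64_size(void) { return sizeof(Elf64_Rela); }  /* 3 * 8 = 24 bytes */
```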

On x86 and ARM, to mention the most popular 32-bit architectures nowadays, REL relocations are used, which makes a relocation weigh 8 bytes. This puts the pointer overhead for our example string at 12 bytes, or 300% of the string size.

On x86-64, RELA relocations are used, making a relocation weigh 24 bytes! This puts the pointer overhead for our example string at 32 bytes, or 800% of the string size!

Another hidden cost of using a pointer to a constant is that every time it is used in the code, there will be a pointer dereference. A function as simple as

const char *bar() { return foo; }

weighs one instruction more when foo is defined as const char *. On x86, that instruction weighs 2 bytes. On x86-64, 3 bytes. On ARM, 4 bytes (or 2 in Thumb). That weight can vary depending on the additional instructions required, but you get the idea: using a pointer to a constant also adds overhead to the code, both in time and space. Also, if the string is defined as a constant instead of being used as a literal in the code, chances are it’s used several times, multiplying the number of such instructions. Update: Note that in the case of const char * const, the compiler will optimize these instructions and avoid reading the pointer, since it’s never going to change.

The symbol for foo is also exported, making it available from other libraries or programs, which might not be required, but also adds its own overhead: an entry in the symbol table (5 words), an entry in the string table for the symbol name (strlen("foo") + 1 bytes) and an entry in the symbol hash chain table (4 bytes if only one type of hash table (SysV or GNU) is present, 8 if both are present), and possibly an entry in the symbol hash bucket table, depending on the other exported symbols (4 or 8 bytes, like the chain table). It can also affect the size of the bloom filter table in the GNU symbol hash table.

So here we are, with a seemingly tiny 3 character string possibly taking 64 bytes or more! Now imagine what happens when you have an array of such tiny strings. This also doesn’t only apply to strings, it applies to any kind of global pointer to constants.

In conclusion, using a definition like

const char *foo = "foo";

is almost never what you want. Instead, you want to use one of the following forms:

  • For a string meant to be exported:

    const char foo[] = "foo";

  • For a string meant to be used in the same source file:

    static const char foo[] = "foo";

  • For a string meant to be used across several source files for the same library:

    __attribute__((visibility("hidden"))) const char foo[] = "foo";

2012-02-18 09:17:21+0200

p.d.o, p.m.o | 12 Comments »