Improving libxul startup I/O by hacking the ELF format
If you have been following this blog during the past months, you might have gathered that I've been focusing on the I/O patterns of library initialization, more specifically those of libxul.so. There is not much to be done about the patterns themselves, except enduring them, or hacking ld.so and/or the toolchain. The latter is not necessarily easy, and won't benefit everyone until all Linux distros ship an improved glibc and/or toolchain. It could take years before any actual result reaches end users.
But with a little creativity, one can overcome these limitations and improve the situation today. Here is a summary of my recent hacks.
Backwards static initializers
As Taras showed in the past, static initializers are executed in reverse order. This turns out to be an unfortunate design choice in gcc, and it also turns out not to impact all ELF binaries.

As far as I know, there are currently two different implementations of the static initializers call loop: one provided by gcc, and one provided by ld.so.
In the gcc case, the compiler creates a .ctors section, starting with 0xffffffff, ending with 0x00000000, and filled with pointers to the various static initializer functions, in ascending order. The compiler also injects an initializer function, called from .init, going through this .ctors section, starting from the end (__do_global_ctors_aux in crtend.o). A similar .dtors section is created for functions to run when the library is unloaded, and the pointers in it are called in ascending order. The direction in which .ctors and .dtors are each executed doesn't actually matter in itself. All that matters is that they are executed in reverse order from each other, because that limits the risks of bad ordering. It is only unfortunate that .ctors was chosen to be the one going backwards.
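To make that loop concrete, here is a minimal sketch of a __do_global_ctors_aux-style walk over .ctors; this is an illustration of the mechanism, not the actual crtstuff code:

    typedef void (*ctor_fn)(void);

    /* .ctors layout: { (ctor_fn)-1, ctor_a, ctor_b, ..., (ctor_fn)0 } */
    extern ctor_fn __CTOR_LIST__[];

    static void do_global_ctors(void)
    {
        /* Skip the leading 0xffffffff marker and find the terminating 0, */
        ctor_fn *p = __CTOR_LIST__ + 1;
        while (*p)
            p++;
        /* then call the initializers in descending address order. */
        while (--p != __CTOR_LIST__)
            (*p)();
    }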
In the ld.so case, the compiler creates a .init_array section, filled with pointers to the various static initializer functions, in ascending order. It also stores the location and the size of this section in the PT_DYNAMIC ELF header. At runtime, ld.so itself reads this information and executes the static initializers in ascending order. A similar .fini_array section is created for functions to run when the library is unloaded, and ld.so goes through it in descending order.
The .init_array section is newer, and for backwards compatibility reasons, it is not used on systems that have been using the .ctors approach forever. On the other hand, some newer ABIs, such as ARM EABI, use the new facility, which means they don't suffer from backwards static initialization. I wonder, however, whether that backwards compatibility is still needed, considering how long .init_array support has been around (more than 11 years, according to the glibc git repository).
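For comparison, here is a simplified sketch of what ld.so does with the DT_INIT_ARRAY and DT_INIT_ARRAYSZ entries it finds in PT_DYNAMIC; this is illustrative (the function name is mine), not the actual glibc code:

    #include <stddef.h>

    /* ld.so gets the array location and size from the DT_INIT_ARRAY and
       DT_INIT_ARRAYSZ entries of PT_DYNAMIC, then walks it forward. */
    typedef void (*init_fn)(int, char **, char **);

    static void run_init_array(init_fn *array, size_t size_bytes,
                               int argc, char **argv, char **envp)
    {
        size_t n = size_bytes / sizeof(init_fn);
        for (size_t i = 0; i < n; i++)      /* ascending order */
            array[i](argc, argv, envp);
        /* .fini_array is walked the same way, but in descending order,
           when the library is unloaded. */
    }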
While we could obviously bug the gcc maintainers to reverse their .ctors execution order, or ditch .ctors support on glibc systems, neither of which would bring any immediate global benefit, we can also just reverse the .ctors section content ourselves. Please note the source code out there doesn't take care of reversing the .dtors section accordingly, because it is currently empty in libxul.so.
Relocations
As I mentioned in an earlier post, relocations are what make it possible to load libraries at arbitrary locations. Unfortunately, a great number of relocations means awful I/O patterns: ld.so reads and applies them one by one, and while the kernel reads ahead in 128 KB chunks, when your relocation section exceeds that amount by more than an order of magnitude, I/O goes back and forth a lot between the relocation section and the places where the relocations are applied.
There are two classes of relocations: REL and RELA. In both cases, there are several types of relocations, some of which require an addend. The difference between the two classes is that in REL relocations, the addend is stored at the offset where the relocation takes place, while in RELA relocations, the addend is stored in the relocation entry itself. This means RELA relocations effectively waste space: the addend ends up being stored twice, as it is also present at the offset where the relocation takes place, like with REL relocations. It could technically be nulled out there, but in any case, the space is wasted.

REL relocations thus take 2 words of storage, and RELA relocations 3. On x86, the ELF ABI only uses REL relocations, and words are 32 bits, making each relocation take 8 bytes; on x86-64, the ELF ABI only uses RELA relocations, and words are 64 bits, thus making each relocation take 24 bytes, 3 times the size of an x86 relocation. On ARM, both may be used, but apparently the compiler and the linker only use REL relocations.
These words of storage hold 3 or 4 pieces of information:

- r_offset: the offset at which the relocation takes place
- r_info: both the relocation type and a symbol reference for the relocation. On 32 bits systems, the symbol reference is stored on 24 bits and the relocation type on 8 bits. On 64 bits systems, the symbol reference is stored on 32 bits and the relocation type on 32 bits (which means at least 24 of those bits will never be used: no current ABI defines more than 256 relocation types).
- r_addend: the addend, only in RELA relocations
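For reference, here is how these entries are laid out; the structures and macros below are copied from glibc's <elf.h> (the base typedefs such as Elf32_Addr also come from there):

    /* From <elf.h>: a REL entry takes 2 words, a RELA entry 3. */
    typedef struct {
        Elf32_Addr r_offset;    /* where the relocation takes place */
        Elf32_Word r_info;      /* 24-bit symbol index + 8-bit type */
    } Elf32_Rel;                /* 8 bytes on x86 */

    typedef struct {
        Elf64_Addr   r_offset;  /* where the relocation takes place */
        Elf64_Xword  r_info;    /* 32-bit symbol index + 32-bit type */
        Elf64_Sxword r_addend;  /* the explicit addend */
    } Elf64_Rela;               /* 24 bytes on x86-64 */

    /* How r_info is packed: */
    #define ELF32_R_SYM(i)  ((i) >> 8)
    #define ELF32_R_TYPE(i) ((i) & 0xff)
    #define ELF64_R_SYM(i)  ((i) >> 32)
    #define ELF64_R_TYPE(i) ((i) & 0xffffffff)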
The linker splits relocations into two main categories: those used for dynamic linking at startup and those used for procedure linkage table (PLT) resolution at runtime. Here is what the repartition looks like in libxul.so (taken from the mozilla-central nightly of Nov 22nd):
| Architecture | dynamic relocations: number | dynamic relocations: size (bytes) | PLT relocations: number | PLT relocations: size (bytes) |
|---|---|---|---|---|
| x86 | 231,836 | 1,854,688 | 3,772 | 30,176 |
| x86-64 | 235,893 | 5,661,432 | 3,773 | 90,528 |
So, PLT relocations are quite negligible, and dynamic relocations are just huge, especially on x86-64. Compared to the binary size, it's pretty scary:

| Architecture | libxul.so size (bytes) | relocations size (bytes) | % |
|---|---|---|---|
| x86 | 21,869,684 | 1,884,864 | 8.61% |
| x86-64 | 29,629,040 | 5,751,984 | 19.41% |
Let me remind you that not only do we need to read these relocations at startup (at least the dynamic ones; the PLT ones are negligible), but we read them in chunks, and need to read other data in between.
Of the many existing relocation types, only a few are actually used in dynamic libraries. Here is a summary of those in use in libxul.so:
| x86 relocation type | number | x86-64 relocation type | number |
|---|---|---|---|
| dynamic relocations: | | | |
| R_386_TLS_DTPMOD32 | 1 | R_X86_64_DTPMOD64 | 1 |
| R_386_GLOB_DAT | 213 | R_X86_64_GLOB_DAT | 238 |
| R_386_32 | 27,643 | R_X86_64_64 | 27,611 |
| R_386_RELATIVE | 203,979 | R_X86_64_RELATIVE | 208,043 |
| PLT relocations: | | | |
| R_386_JUMP_SLOT | 3,772 | R_X86_64_JUMP_SLOT | 3,773 |
The above types can be separated into two distinct categories:

- Requiring symbol resolution (R_*_GLOB_DAT, R_*_JUMP_SLOT, R_386_32, R_X86_64_64)
- Not requiring symbol resolution (R_*_RELATIVE, R_386_TLS_DTPMOD32, R_X86_64_DTPMOD64)
In the second category, all the bits for the symbol reference in each entry are wasted: 24 on x86 and 32 on x86-64. Unfortunately, the vast majority of the relocations in libxul.so fall into that category. At this point, this is how much waste we have identified:

| description | num. entries | waste per entry (bytes) | total (bytes) | % of relocation size |
|---|---|---|---|---|
| x86: | | | | |
| Relocation without symbol ref. | 203,980 | 3 | 611,937 | 32.46% |
| x86-64: | | | | |
| Duplicated addend | 239,666 | 8 | 1,917,328 | 33.33% |
| Relocation type encoded on 32 bits | 239,666 | 3 | 718,998 | 12.49% |
| Relocation without symbol ref. | 208,044 | 4 | 832,176 | 14.46% |
| Total (x86-64) | | | 3,468,502 | 60.30% |
On top of that, one could argue that we don't need 24 or 32 bits for the symbol reference (which is an index into the symbol table), and that we're really far from requiring 64 bits for r_offset on 64 bits systems, as libxul.so is far from being bigger than 4 GB. Anyway, this means there is room for improvement here, and that the ELF format, especially on 64 bits systems with RELA relocations, really doesn't help big programs or libraries with a lot of relocations. (For that matter, the Mach-O format used on OS X has somewhat similar issues; I don't know about the DLL format.)
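To see where those bytes get used at startup, here is a simplified sketch of the per-entry work ld.so has to do for the x86-64 relocation types listed above; this is illustrative, not glibc's actual code, and resolve_symbol is a hypothetical stand-in for the real symbol lookup:

    #include <elf.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-in for ld.so's real symbol lookup. */
    extern uintptr_t resolve_symbol(uint32_t symidx);

    static void apply_rela(uintptr_t base, const Elf64_Rela *r, size_t n)
    {
        for (size_t i = 0; i < n; i++, r++) {
            uint64_t *where = (uint64_t *)(base + r->r_offset);
            switch (ELF64_R_TYPE(r->r_info)) {
            case R_X86_64_RELATIVE:    /* no symbol needed: B + A */
                *where = base + r->r_addend;
                break;
            case R_X86_64_GLOB_DAT:    /* symbol value: S */
            case R_X86_64_JUMP_SLOT:
                *where = resolve_symbol(ELF64_R_SYM(r->r_info));
                break;
            case R_X86_64_64:          /* symbol + addend: S + A */
                *where = resolve_symbol(ELF64_R_SYM(r->r_info)) + r->r_addend;
                break;
            }
        }
    }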
Another interesting data point is how the relocations' r_offset values follow each other once the relocations are reordered (to avoid too much symbol resolution, they are actually grouped by symbol). It turns out 206,739 relocations have an r_offset exactly 4 bytes after the previous one on x86 (i.e. strictly following it), and 210,364 are 8 bytes after the previous one on x86-64. The main reason this happens is that most relocations occur in pointer tables, and libxul.so has lots of them:
- That's what you get with virtual methods in C++.
- That's also what you get in .ctors and .dtors sections, which are relocated as well (but not in .init_array and .fini_array sections).
- That's what you get when you use function tables.
- Obviously, that's what you get when you use pointer tables.
And when I say libxul.so has lots of them, that's not an understatement: there are 184,320 relocations due to vtables (the function tables you get when using virtual methods in C++) on x86 and 184,098 on x86-64. That's respectively 78.23% and 76.81% of the total number of relocations. Interestingly, while most of these relocations are R_*_RELATIVE, 26,853 are R_386_32 and 26,829 are R_X86_64_64. Except for a very few, these non-relative relocations are for the __cxa_pure_virtual symbol, which is a dummy function that does nothing, used for pure virtual methods, such as the following:
    class Abstract {
    public:
        virtual void pure_virtual() = 0;
    };
As for the remaining relative relocations, a few pointer tables are responsible for several thousand relocations each, such as JSString::length2StringTable, gEntries, GkAtoms_info, Html5Atoms_info, kUnixCharsets, nsIDOMCSS2Properties_properties, etc.
Packing relocations
While they are not detrimental to most binaries, we saw that relocations can have a bad effect on startup I/O for big binaries. Reducing the space they take would directly reduce that startup I/O. One way to do so is obviously to reduce the number of relocations, but as we saw above, the vast majority are due to vtables, and while reducing vtables would be a nice goal, it will be a long while before that can significantly reduce the number of relocations. So another possible way is to stop wasting so much space in the relocations themselves.
The main problem is that relocations are applied by ld.so, and, as mentioned in the introduction, while we can hack ld.so to do what we want, it won't work everywhere for a long time. Moreover, it's difficult to test various implementations, and binary compatibility needs to be guaranteed once something lands in the upstream libc.
The idea I started testing a few weeks ago is to effectively apply relocations by hand instead of relying on ld.so to do it. Considering that the first thing ld.so does with a library after applying relocations is to call its DT_INIT function, the idea was to replace the dynamic relocations we can handle with an injected DT_INIT function that applies them itself, from a more efficient storage format. The injected function obviously needs to call the original DT_INIT function once done, and some parts of the PT_DYNAMIC section also need to be adjusted accordingly.
The first iteration I tested a few weeks ago helped validate the idea, but didn't change much of the binary: the dynamic relocation section still took all the space it took before, though its content was mostly empty (we'll see further below how efficient even the current dumb format is). Once the theory was confirmed that an injected DT_INIT function could apply the relocations without breaking everything, I went on to implement a program that actually strips the ELF binary. I won't get too much into the gory details, but it required PT_LOAD splitting, and thus program header growth, which means moving the following sections; then removing a huge part of the relocations, shrinking the relocation section, injecting our code and the packed relocations as sections, and moving the remaining sections after these new ones, to avoid empty space while still respecting the proper offsets for both PT_LOAD and the code itself.
So far, I have only taken care of the R_*_RELATIVE relocations. Some, but not all, other relocations could be handled too, but anything involving symbol resolution from other libraries would be really difficult to deal with from the injected code. The relocations are currently stored in a quite dumb format:

- a 32 bits word containing r_offset
- a 32 bits word containing the number of relocations to apply consecutively
As we only take care of one type of relocation, we can skip any information about the type. Also, the addend already being stored at r_offset, we just strip it from RELA relocations; and as the binary is far from 4 GB big, 32 bits is enough for r_offset on 64 bits systems too. We also saw that a lot of relocations directly follow the previous one, so instead of storing r_offset for each one, storing some kind of range is more efficient. As I've so far been focusing on getting something that actually works rather than on getting something really efficient, I have only tested this format, but the results are already nice (see the sketch after the table for what applying this format looks like):
| | x86 | x86-64 |
|---|---|---|
| before: | | |
| dynamic relocations (bytes) | 1,854,688 | 5,661,432 |
| libxul.so size (bytes) | 21,869,684 | 29,629,040 |
| after: | | |
| dynamic relocations (bytes) | 222,856 | 668,400 |
| injected code size (bytes) | 129 | 84 |
| packed relative relocations (bytes) | 224,816 | 228,568 |
| libxul.so size (bytes) | 20,464,836 (-6.42%) | 24,865,520 (-16.07%) |
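As promised above, here is roughly what applying the dumb packed format amounts to in the injected DT_INIT code. This is only a sketch under the assumptions of this post; the struct and function names are made up for illustration:

    #include <stddef.h>
    #include <stdint.h>

    /* One entry of the packed format described above (names made up). */
    struct packed_rel {
        uint32_t offset;  /* r_offset of the first relocation of the run */
        uint32_t count;   /* number of consecutive word-sized relocations */
    };

    static void apply_packed_relative(uintptr_t base,
                                      const struct packed_rel *rels, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            uintptr_t *where = (uintptr_t *)(base + rels[i].offset);
            for (uint32_t j = 0; j < rels[i].count; j++)
                where[j] += base;  /* the addend is already stored in place */
        }
        /* The injected DT_INIT then calls the original DT_INIT. */
    }

Storing runs is what makes this compact: the relocations that strictly follow each other, as measured earlier, collapse into a single entry per contiguous group.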
Getting rid of the __cxa_pure_virtual relocations would help bring this down even further, and a better storage format could be used, too. Gathering new startup coverage data should be interesting as well.
The code is available via mercurial. Please note the Makefile deeply sucks, and the code is in the middle of its third refactoring. Anyway, once you get the code to compile, you just need to run test-x86 or test-x64, depending on your architecture, from the build directory, and give it the libxul.so file as an argument. It will create a libxul.so.new file that you can use to replace libxul.so. You may successfully use the tool on other libraries, but chances are you'll bump into an assert(), or, in the worst case, end up with a broken library. The tool might work on PIE executables, but most probably not on other executables. If you wonder: Chrome is not PIE. It's not even relocatable (which means ASLR won't work with it); or at least the x86-64 binaries I just got aren't.
More possible ELF hacking
The code I wrote to pack relocations allows various tricks to be played on ELF files, and it opens various opportunities, such as:

- Aligning .bss. As I wrote on this blog a while ago, ld.so fills the last page of the .data section with zeroes, starting at the offset of the .bss section. Unfortunately, this means reading ahead at most 128 KB of data nobody needs, with the corresponding disk seek. Inserting a section full of zeroes between .data and .bss would get rid of that, while wasting minimal space (less than 4 KB in a 20 MB+ binary).
- Trying to move the PT_DYNAMIC section. It is among the first data to be read from programs and libraries, yet it sits near the end of the binary.
- Replacing .ctors/.dtors with .init_array/.fini_array (thus removing the corresponding relocations, though there are only around 300 of them).
- Replacing the __cxa_pure_virtual relocations with relative relocations pointing to an injected dummy function (it could even just point to any ret instruction in any existing function); see the sketch after this list.
- Replacing some internal symbol resolutions with relative relocations (which should be equivalent to building with -Bsymbolic).
- etc.
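As an illustration of the __cxa_pure_virtual item above, here is a hypothetical sketch of that relocation rewriting on x86-64, assuming dummy_offset is the offset of the injected dummy function from the load base (the helper name and signature are mine, not part of the actual tool):

    #include <elf.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helper: turn R_X86_64_64 entries against the
       __cxa_pure_virtual symbol into relative relocations pointing at a
       dummy function located at dummy_offset from the load base. */
    static void rewrite_pure_virtual(Elf64_Rela *rels, size_t n,
                                     uint32_t pure_virtual_symidx,
                                     uint64_t dummy_offset)
    {
        for (size_t i = 0; i < n; i++) {
            if (ELF64_R_TYPE(rels[i].r_info) == R_X86_64_64 &&
                ELF64_R_SYM(rels[i].r_info) == pure_virtual_symidx) {
                rels[i].r_info = ELF64_R_INFO(0, R_X86_64_RELATIVE);
                rels[i].r_addend = (Elf64_Sxword)dummy_offset;  /* B + A */
            }
        }
    }

This works because a relative relocation computes base + addend, so pointing the addend at the dummy function removes the need for any symbol resolution.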
Some of the above could probably be done with linker scripts, but using linker scripts is unnecessarily complicated: as soon as you start using a linker script to fiddle with sections, you need to reimplement the entire default linker script (the one you can see with ld --verbose). And it's not guaranteed to work (Taras told me he tried and failed to align the .bss section this way). And as far as I know, you can't necessarily use the same script with both ld and gold (AIUI, gold doesn't support the whole ld linker script syntax).
2010-11-23 15:44:19+0900
2010-11-23 16:23:18+0900
Does this work impact the size and execution speed of Fennec (now called Firefox) on Android?
2010-11-23 16:30:08+0900
I haven’t implemented the hack for ARM, yet.
2010-11-23 18:39:08+0900
It’s worth remembering that the SVR4 pre-emption rules require a lot of values to be potentially relocated in a shared object even when that will be the only definition. Careful use of visibility constraints in your source code can reduce the number of classes exported to only those that really do form an interface to your library, and thereby make a significant saving in dynreloc space.
2010-11-23 18:53:43+0900
Richard: Visibility constraints are already used all over the place, though a few relocations could be spared by using them even more carefully; -Bsymbolic would effectively have the same effect on those relocations. Unfortunately, doing only that won’t spare much, as the biggest contenders are vtable-related relocations.
2010-11-23 20:08:59+0900
Hardcore awesomeness! Or maybe that’s awesome hardcoreness! In any event, great work! I especially look forward to the benefits trickling down to Thunderbird; with any luck we won’t need to fix our application logic because the platform will just be so fast…
2010-11-23 20:34:08+0900
__cxa_pure_virtual terminates the program. It’s just an error trap. We could just replace it with NULL if that buys us performance.
2010-11-23 20:41:44+0900
roc: this is interestingly true. I don’t know where I got the idea that it was a nop. Bad memory, certainly.
For the record, here is what the x86-64 version looks like:
2010-11-23 20:55:26+0900
In each of the matching pairs (.ctors and .dtors; .init_array and .fini_array) you said that one gets read forward (.dtors or .init_array) and one gets read backward (.ctors or .fini_array). How about taking advantage of this by using .init_array and .dtors as a matching pair where both get read forward?
2010-11-23 21:28:05+0900
The point is to keep going in opposite directions for both. Going in the same direction for both is possibly dangerous (though most probably harmless)
2010-11-23 21:39:11+0900
Going in the same direction for both is probably deadly…
2010-11-23 21:52:27+0900
roc: in practice, constructors that have a corresponding destructor register it from the constructor with __cxa_atexit (which is why .dtors is empty in libxul.so).

2010-11-24 00:38:51+0900
This is great work, Mike. I think we should look at upstream fixes too, to help other projects, and it’s possible we could get them into the distro toolchains relatively promptly, since those are often patched quite heavily.
2010-11-24 19:27:22+0900
Could you give some more information about how you measured the number of relocations and the space used by them? My naive use of objdump gives me inconsistent numbers.
2010-11-25 10:09:51+0900
Wow this is a very informative post, thanks a lot.
And please don’t hesitate to contribute to the toolchain. Even if it takes a few years to arrive in mainstream distros, it is surely worth the hassle. From what I have read, relocations are also quite a problem for C/GTK programs; it would be nice to gain a sizeable binary size cut and startup performance improvement for the whole desktop.
2010-11-25 10:50:09+0900
Alessandro: a careful use of readelf -r is enough to get the necessary information about relocations. readelf -S can be used to know the size of the relocation sections.

2010-11-27 18:06:52+0900
I believe this is to initialize static libraries prior to the main binary.
2010-11-27 18:08:22+0900
… and also, new GCC will put constructors into a separate section (.text.startup). This should eliminate most of the extra I/O.
2010-11-29 06:27:59+0900
If .ctors must be read one way, and .dtors the opposite, how about reversing .ctors so it reads forwards in memory address space, and then *not* reversing .dtors so it also reads forwards, and thus get the best possible use of readahead — or did I misunderstand?
2010-11-29 09:57:20+0900
Simon B.: if .ctors goes forward in address space, .dtors needs to go downwards. BTW, .dtors only runs when closing the library, at which time readahead doesn’t matter so much. And chances are the .dtors code (if any; in many cases there isn’t any) has already been read when reading surrounding code.
2010-12-05 00:58:15+0900
Simon: I think .ctors is reversed to get libraries initialized first, before the main binary. The sections are created by the linker by concatenating the sections of the objects as it processes them. It first processes objects, and then it gets more from the libraries, so the library constructors appear later in the .ctors section…
Sure, the linker could be smarter here, but it is not…
2010-12-14 13:34:45+0900
So the linker is getting smarter now :)
http://gcc.gnu.org/ml/gcc-bugs/2010-12/msg01527.html
Concerning the actual ELF rewriting, it seems to me that this is working around the fact that Firefox really should be a binary instead of a tiny wrapper loading a single library. That would make those IP-relative relocations for vtables disappear completely.
It might be interesting to tie relocation in better with the kernel, applying relocations during demand paging to avoid reading the whole data segment at startup.
Adding both REL+RELA into the x86-64 ABI to imitate what EABI does might actually be possible – I will have to check. The problem here is that the x86-64 ABI allows 64-bit displacements, and GCC might use them in codegen even if your binary fits in 4GB. For example, when it needs the constant &char_array + 12345678910 just to save arithmetic when someone writes char_array[index + 12345678910], etc.
Honza
2010-12-14 14:19:54+0900
> Concerning the actual ELF rewriting, it seems to me that this is getting around
> the fact that Firefox really should be a binary instead of tiny wrapper loading
> single library. That would make those IP relative relocations for vtables to
> disappear completely.
They wouldn’t disappear if the binary were PIE, and without PIE a binary doesn’t take advantage of ASLR.
2021-05-28 13:26:56+0900
Hey, I read this many years ago, when I did not know the ELF toolchain well :)
Now I come back with quite a bit of experience in this area.
The backward order of .ctors isn’t bad. It models the “dependency order” (ELF spec) requirement.
It does, however, provide a stronger guarantee than C++ [basic.start.dynamic] requires.
HP-UX .init_array was modeled after the legacy .init (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770#c50).
Regarding reversing .ctors: in ld.lld, --shuffle-sections .init_array=-1 (https://reviews.llvm.org/D98679) can be used to reverse .init_array.
Regarding REL: I think using REL for linked images on aarch64/x86-64/other 64-bit arches is very safe.
It will be difficult for GNU ld and glibc to support that, because they don’t implement good abstractions.
We are using small code models – however, the Linux kernel doesn’t give us a way to use ELFCLASS32 on 64-bit arches.
(Why let the great EM_X86_64/ELFCLASS32 be restricted to the ILP32 ABI?)
That would cut off a lot of waste.
Regarding relative relocations, I filed https://sourceware.org/bugzilla/show_bug.cgi?id=27924 today.
I knew ChromeOS had a glibc patch and I re-checked – there was no feature request or upstreaming intention :(
(I don’t blame them – dealing with glibc upstream isn’t easy)
Android has another packed relocation format (magic: ‘APS2’). I think it is over-engineered.
What should be done is to make more symbols in shared objects non-preemptible
https://gcc.gnu.org/pipermail/gcc/2021-May/236063.html
(“ELF world with default -Bsymbolic-non-weak-functions”)
Regarding vtable pointers: clang -fexperimental-relative-c++-abi-vtables can use PC-relative relocations. If the Fuchsia experiment goes well, we (Linux) can think of porting the feature.
2021-10-16 10:06:03+0900
I created a glibc DT_RELR patch: https://sourceware.org/pipermail/libc-alpha/2021-October/131768.html
Probably only with distro support can I push the patch forward, so I filed a bunch of feature requests: Gentoo: https://bugs.gentoo.org/818376, Fedora: https://bugzilla.redhat.com/show_bug.cgi?id=2014699, Arch Linux: https://bugs.archlinux.org/task/72433, Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=996598