
Using C++ templates to prevent some classes of buffer overflows

I recently found a small buffer overflow in Firefox's SOCKS support, and came up with a nice-ish way, using some template magic, to turn that kind of mistake into a C++ compilation error.

A simplified form of the problem looks like this:

#include <cstddef>
#include <cstdint>

class nsSOCKSSocketInfo {
public:
    nsSOCKSSocketInfo() : mData(new uint8_t[BUFFER_SIZE]) , mDataLength(0) {}
    ~nsSOCKSSocketInfo() { delete[] mData; }

    void WriteUint8(uint8_t aValue) { mData[mDataLength++] = aValue; }

    void WriteV5AuthRequest() {
        mDataLength = 0;
        WriteUint8(0x05);
        WriteUint8(0x01);
        WriteUint8(0x00);
    }
private:
    uint8_t* mData;
    uint32_t mDataLength;
    static const size_t BUFFER_SIZE = 2;
};

Here, the problem is more or less obvious: the third WriteUint8() call in WriteV5AuthRequest() will write beyond the allocated buffer size. (The real buffer size was much larger than that, and it was a different method overflowing, but you get the point.)

While growing the buffer size fixes the overflow, that doesn't do much to prevent a similar overflow from happening again if the code changes. That got me thinking that there has to be a way to do some compile-time checking of this. The resulting solution, at its core, looks like this:

template <size_t Size> class Buffer {
public:
    Buffer() : mBuf(nullptr) , mLength(0) {}
    Buffer(uint8_t* aBuf, size_t aLength=0) : mBuf(aBuf), mLength(aLength) {}

    Buffer<Size - 1> WriteUint8(uint8_t aValue) {
        static_assert(Size >= 1, "Cannot write that much");
        *mBuf = aValue;
        Buffer<Size - 1> result(mBuf + 1, mLength + 1);
        mBuf = nullptr;
        mLength = 0;
        return result;
    }

    size_t Written() { return mLength; }
private:
    uint8_t* mBuf;
    size_t mLength;
};

WriteV5AuthRequest() is then replaced with the following:

void WriteV5AuthRequest() {
    mDataLength = Buffer<BUFFER_SIZE>(mData)
        .WriteUint8(0x05)
        .WriteUint8(0x01)
        .WriteUint8(0x00)
        .Written();
}

So, how does this work? The Buffer class is templated by size. The first thing we do is to create an instance for the complete size of the buffer:

Buffer<BUFFER_SIZE>(mData)

Then call the WriteUint8 method on that instance, to write the first byte:

.WriteUint8(0x05)

The result of that call is a Buffer<BUFFER_SIZE - 1> (in our case, Buffer<1>) instance pointing to &mData[1] and recording that 1 byte has been written.
Then we call the WriteUint8 method on that result, to write the second byte:

.WriteUint8(0x01)

The result of that call is a Buffer<BUFFER_SIZE - 2> (in our case, Buffer<0>) instance pointing to &mData[2] and recording that 2 bytes have been written so far.
Then we call the WriteUint8 method on that new result, to write the third byte:

.WriteUint8(0x00)

But this time, the Size template parameter is 0, so the Size >= 1 static assertion doesn't hold, and the build fails.
If we modify BUFFER_SIZE to 3, then the instance we run that last WriteUint8 call on is a Buffer<1> and we don't hit the static assertion.

Interestingly, this also makes the compiler emit more efficient code than the original version.
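For illustration, the same trick extends to multi-byte writes. A hypothetical WriteUint16 method on the Buffer template, not part of the code above, could consume two bytes of the compile-time budget at once (the big-endian byte order is an arbitrary choice for the example):

Buffer<Size - 2> WriteUint16(uint16_t aValue) {
    static_assert(Size >= 2, "Cannot write that much");
    // Store in big-endian order.
    mBuf[0] = (aValue >> 8) & 0xff;
    mBuf[1] = aValue & 0xff;
    Buffer<Size - 2> result(mBuf + 2, mLength + 2);
    // Invalidate this instance, like WriteUint8 does.
    mBuf = nullptr;
    mLength = 0;
    return result;
}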

Check the full patch for more about the complete solution.

2014-12-05 18:45:56+0900


Logging Firefox memory allocations

A couple years ago, when I was actively working on integrating jemalloc 3 in the Firefox build and started investigating some memory usage regressions compared to our old fork, I came up with a replace-malloc library for Firefox that logs all allocations and allows replaying them in a more consistent (and faster) way in a separate program, so that testing different configurations of jemalloc with the same workload can be streamlined.

A couple weeks ago, I refreshed that work, and made it work on all the tier-1 Firefox desktop platforms. That work is now in the tree instead of on my hard drive, and will allow us to test the effects of jemalloc changes in a better way.

Using this feature boils down to the following:

  • Start Firefox with the following environment variables:
    • on Linux:

      LD_PRELOAD=/path/to/memory/replace/logalloc/liblogalloc.so

    • on Mac OS X:

      DYLD_INSERT_LIBRARIES=/path/to/memory/replace/logalloc/liblogalloc.dylib

    • on Windows:

      MOZ_REPLACE_MALLOC_LIB=/path/to/memory/replace/logalloc/logalloc.dll

    • on all the above:

      MALLOC_LOG=/path/to/log-file

  • Play your workload in Firefox, then close it.
  • Run the following command to prepare the log file for replay:

    python /source/path/to/memory/replace/logalloc/replay/logalloc_munge.py < /path/to/log-file > /path/to/replay.log

  • Replay the logged allocations with the following command:

    /path/to/memory/replace/logalloc/replay/logalloc-replay < /path/to/replay.log
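Putting the first two steps together on Linux, for example, a logging session boils down to (same placeholder paths as above):

LD_PRELOAD=/path/to/memory/replace/logalloc/liblogalloc.so \
MALLOC_LOG=/path/to/log-file firefox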

More information and implementation details can be found in the README accompanying the code for that functionality.

2014-12-03 03:22:57+0900


Mozilla Build System: past, present and future

The Mozilla Build System has, for most of its history, not changed much. But, for a couple years now, we've been, slowly and incrementally, modifying it in quite extensive ways. This post summarizes the progress so far, and my personal view on where we're headed.

Recursive make

The Mozilla Build System has, all along, been implemented as a set of recursively traversed Makefiles. The way it has been working for a very long time looks like the following:

  • For each tier (group of source directories) defined at the top-level:
    • For each subdirectory in current tier:
      • Build the export target recursively for each subdirectory defined in Makefile.
      • Build the libs target recursively for each subdirectory defined in Makefile.
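In heavily simplified Make terms, with hypothetical tier and directory names (the real logic lived in rules.mk and was considerably more involved), that traversal could be sketched as:

tier_platform_dirs := xpcom netwerk
tier_app_dirs := browser

default::
        $(MAKE) tier_platform
        $(MAKE) tier_app

# For each tier, recurse into every subdirectory for export, then
# again for libs.
tier_%:
        @for dir in $(tier_$*_dirs); do $(MAKE) -C $$dir export; done
        @for dir in $(tier_$*_dirs); do $(MAKE) -C $$dir libs; done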

The typical limitation of the above is that a compiled test in a given directory could require a library that isn't linked until after that directory is recursed into, so another target (tools) was later added on top of that.

There was not much room for parallelism, except in individual directories, where multiple sources could be built in parallel, but never would sources from multiple directories be built at the same time. So, for a bunch of directories where it was possible, special rules were added to allow that to happen, which led to interesting recursions:

  • For each of export, libs, and tools:
    • Build the target in the subdirectories that can be built in parallel.
    • Build the target in the current directory.
    • Build the target in the remaining subdirectories.

This ensured some extra fun with dependencies between (sub)directories.

Apart from the way things were recursed, all sorts of custom build rules had piled up, some of which relied on things in other directories having happened beforehand, and the build system implementation itself relied on some quite awful things (remember allmakefiles.sh?)

Gradual Overhaul

Around two years ago, we started a gradual overhaul of the build system.

One of the goals was to move away from Makefiles. For various reasons, we decided to go with our own kind-of-declarative (but really, sandboxed python) format (moz.build) instead of using e.g. gyp. The more progress we make on the build system, the more I think this was the right choice.

Anyways, while we've come a long way and converted a lot of Makefiles to moz.build, we're not quite there yet.

One interesting thing to note is that we've also been reducing the overall number of moz.build files we use, by consolidating some declarations. For example, some moz.build files now declare source or test files from their subdirectories directly, instead of having one file per directory declare sources and test files local to their own directory.

Pseudo derecursifying recursive Make

Neologism aside, one of the ideas to help with the process of converting the build system to something that can be parallelized more massively was to reduce the depth of recursion we do with Make, so that instead of a sequence like this:

  • Entering directory A
    • Entering directory A/B
    • Leaving directory A/B
    • Entering directory A/C
      • Entering directory A/C/D
      • Leaving directory A/C/D
      • Entering directory A/C/E
      • Leaving directory A/C/E
      • Entering directory A/C/F
      • Leaving directory A/C/F
    • Leaving directory A/C
    • Entering directory A/G
    • Leaving directory A/G
  • Leaving directory A
  • Entering directory H
  • Leaving directory H

We would have a sequence like this:

  • Entering directory A
  • Leaving directory A
  • Entering directory A/B
  • Leaving directory A/B
  • Entering directory A/C
  • Leaving directory A/C
  • Entering directory A/C/D
  • Leaving directory A/C/D
  • Entering directory A/C/E
  • Leaving directory A/C/E
  • Entering directory A/C/F
  • Leaving directory A/C/F
  • Entering directory A/G
  • Leaving directory A/G
  • Entering directory H
  • Leaving directory H

For each directory there would be a directory-specific target per top-level target, such as A/B/export, A/B/libs, etc. Essentially those targets are defined as:

%/$(target):
        $(MAKE) -C $* $(target)

And each top-level target is expressed as a set of dependencies, such as, in the case above:

A/B/libs: A/libs
A/C/libs: A/B/libs
A/C/D/libs: A/C/libs
A/C/E/libs: A/C/D/libs
A/C/F/libs: A/C/E/libs
A/G/libs: A/C/F/libs
H/libs: A/G/libs
libs: H/libs

That dependency list, instead of being declared manually, is generated from the "traditional" recursion declaration we had in moz.build, through the various *_DIRS variables. I'll skip the gory details about how this replicated the weird subdirectory orders I mentioned above when you had both PARALLEL_DIRS and DIRS. They are irrelevant today anyways.
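That "traditional" recursion declaration boils down to DIRS lists in moz.build files. With the hypothetical directory names from the sequences above, the moz.build in A would contain something like:

DIRS += ['B', 'C', 'G']

with the moz.build in A/C similarly listing ['D', 'E', 'F'].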

You'll also note that this new scheme removed the notion of tier, and that the entirety of the tree is dealt with for each target, instead of iterating all targets for each group of directories.

[But note that, confusingly, for practical reasons related to the amount of code changes required and to how things were incrementally set in place, those targets are now called tiers.]

From there, parallelizing parts of the build only involves reorganizing those Make dependencies such that Make can go in several subdirectories at once, the ultimate goal being:

libs: A/libs A/B/libs A/C/libs A/C/D/libs A/C/E/libs A/C/F/libs A/G/libs A/H/libs

But then, things in various directories may need other things built in different directories first, such that additional dependencies can be needed, for example:

A/B/libs A/C/libs: A/libs
A/H/libs: A/G/libs

The trick is that those dependencies can in many cases be deduced from the contents of moz.build. That's where we are today, for example, for everything related to compilation: we added a compile target that deals with everything related to C/C++ compilation, and we have dependencies between directories building objects and directories building libraries and executables that depend on those objects. You can have a taste yourself:

$ mach clobber
$ mach configure
$ mach build export
$ mach build -C / toolkit/library/target

[Note, the compile target is actually composed of two sub-targets, target and host.]

The third command runs the export target on the whole tree, because that's a prerequisite that is not expressed as a make dependency yet (it would be too broad of a dependency anyways).

The last command runs the toolkit/library/target target at the top level (as opposed to mach build toolkit/library/target, which runs the target target in the toolkit/library directory). That will build libxul, and everything needed to link it, but nothing else unrelated.

All the other top-level targets have also received this treatment to some extent:

  • The export target was made entirely parallel, although it keeps some cross-directory dependencies derived from the historical traversal data.
  • The libs target is still entirely sequential, because of all the horrible rules in it that may or may not depend on the historical traversal order.
  • A parallelized misc target was recently created to receive all the things currently done as part of the libs target that are actually safe to parallelize (another reason for creating this new target is that libs is now a misnomer, since everything related to libraries is now part of compile). Please feel free to participate in the effort to move libs rules there.
  • The tools target is still sequential, but see below.

Some interdependencies that can't be derived yet are, however, currently hardcoded in config/recurse.mk.

And since the mere fact of entering a directory, figuring out if there's anything to do at all, and leaving a directory takes some time on builds when only a couple things changed, we also skip some directories entirely by making them not have a directory/target target at all:

  • export and libs only traverse directories where there is a Makefile.in file, or where the moz.build file sets variables that do require something to be done in those targets.
  • compile only traverses directories where there is something to compile or link.
  • misc only traverses directories with an explicit HAS_MISC_RULE = True when they have a Makefile.in, or with a moz.build setting variables affecting the misc target.
  • tools only traverses directories that contain a Makefile.in containing tools:: (there's actually a regexp, but in essence, that's it).

Those skipping rules mean that we only traverse (taking numbers from a local Linux build, as of a couple weeks ago):

  • 186 directories during export,
  • 472 directories during compile,
  • 161 directories during misc,
  • 382 directories during libs,
  • 2 directories during tools.

instead of 850 for each.

In the near future, we may want to change export and libs to opt-ins, like misc, instead of opt-outs.

Alternative build backends

With more and more declarations in moz.build files, we've been able to build up some alternative build backends for Eclipse and Microsoft Visual Studio. While they are still considered experimental, they seem to work well enough for people to take advantage of them in some useful ways. They do, however, still rely on the "traditional" Make backend to build various things (to the best of my knowledge).

Ultimately, we would like to support entirely different build systems such as ninja or tup, but at the moment, many things still heavily rely on the "traditional" Make backend. We're getting close to having everything related to compilation available from the moz.build declarations (ignoring third-party code, but see further below), but there is still a long way to go for other things.

In the near future, we may want to implement hybrid build backends where compilation would be driven by ninja or tup, and the rest of the build would be handled by Make. However, my feeling is that the Make backend is fast enough for compilation, and that ninja doesn't bring enough beyond performance to make a ninja backend worth investing in. Tup is different, because it does solve some of the problems with incremental builds.

Speaking of being close to having everything related to compilation available from moz.build declarations: closing in on that goal would, even more than enabling such hybrid build systems, allow better integration of tools such as static code analyzers.

Unified C/C++ sources

Compiling C code, and even more so compiling C++ code, involves reading the same headers a large number of times. In C++, that usually also means instantiating the same templates and compiling the same inline methods numerous times.

So we've worked around this by creating "unified sources" that just #include the actual source files, grouping them 16 by 16 (except when specified otherwise).
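A generated unified source is really nothing more than a list of #includes. As a sketch, with made-up file names:

// UnifiedSource0.cpp, as generated by the build backend.
#include "SocketTransport.cpp"
#include "SocketTransportService.cpp"
// ... and so on, up to 16 source files per generated file.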

Unifying sources this way reduced build times drastically, and, interestingly, reduced the size of DWARF debugging symbols as well. It does have a couple downsides, though. It allows #include impurity in the code, such that e.g. changes to the file groups can lead to subtle build failures. It also makes incremental builds significantly slower in parts of the tree where compiling one file is already somehow slow, so doing 16 at a time can be a drag for people working on that code. This is why we're considering decreasing the number for the javascript engine in the near future. We should probably investigate where else building one unified source is slow enough to be a concern. I don't think we can rely on people actively complaining about it; they've become too used to slow build times to bother filing bugs about it.

Relatedly, our story with #include is suboptimal, to say the least; several of the worst tangles have been untangled, but there's still a long tail. It's a hard problem to solve, even with tools like IWYU.

Fake libraries

For a very long time, the build system built intermediate static libraries, and then linked them together to form shared libraries. This is how the main components were built in the old days before libxul (remember libgklayout?), and how libxul was built when it was created. For various reasons, ranging from disk space waste to linker inefficiencies, this was replaced by building fake libraries that only reference the objects that would normally be contained in the static library that used to be built. This was later changed to use a more complex system allowing more flexibility.

Fast-forward to today: with all the knowledge from moz.build, one of the use cases of that more complex system was made moot, and in the future, those fake libraries could be generated as build backend files instead of being created during the build.

Specialized incremental builds

When iterating on C/C++ patches, one usually needs to (re)compile often. With the overhead the build system has, a rebuild with no changes takes many seconds (it was around a minute for a long time on my machine and is now around half that; sadly, I had gotten it down to 20 seconds, but that regressed recently, damn libs rules). So we also added a special rule that only handles header changes and rebuilding objects, libraries and executables.

That special rule is invoked as:

$ mach build binaries

It takes about 3.5s on my machine when there are no changes. It used to be faster thanks to clever tricks, but that was regressed on purpose. That's a trade-off: linking libxul, which most code changes require, takes much longer than that anyways. If deemed necessary, the same clever tricks could be restored.

While we work to improve the overall build experience, in the near future we should have one or more special rules for non-compilation use-cases. For Firefox frontend developers, the following command may do part of the job:

$ mach build -C / chrome

but we should have better and more complete commands for them and for e.g. Firefox for Android developers.

We also currently have a build option that allows skipping everything compilation-related entirely (--disable-compile-environment), but it is currently broken and only really useful in a few use cases. In the near future, we need build modes that allow using e.g. nightly builds as if they were the result of compiling the C++ source. This would let some classes of developers skip compilation altogether, which is an unnecessary overhead for them at the moment, since they currently need to compile at least once (and with all the auto-clobbers we have, it's much more than that).

Localization

Related to the above, the experience of building locale packs and repacks for Firefox is dreadful. Even worse than that, it also relies on a big pile of awful Make rules. It probably is, along with the code related to the creation of Firefox tarballs and installers, the most horrifying part of the build system. Its entanglement with release automation also makes improving the situation unnecessarily difficult.

While there are some sorts of tests running on every push, there are many occasions where those tests fail to catch regressions that lead to broken localized builds for nightlies, or worse, on beta or release (which, you'll have to admit, is a sadly late moment to find such regressions).

Something really needs to be done about localization, and hopefully the discussions we'll have this week in Portland will lead to improvements in the short to medium term.

Install manifests

The build system copies many files during the build. From the source directory to the "object" directory. Sometimes in $(DIST)/somedir, sometimes elsewhere. Sometimes in both or more. On non-Windows systems, copies are replaced by symbolic links. Sometimes not. There are also files that are preprocessed during the build.

All those used to be handled by Make rules invoking $(NSINSTALL) on every build. Even when the files hadn't changed. Most of these were replaced by some Makefile magic, but many are now covered with so-called "install manifests".

Others, defined in jar.mn files, used to be added to jar files during the build. While those jars are not created anymore because of omni.ja, the corresponding content is still copied/symlinked and defined in jar.mn.

In the near future, all those should be switched to install manifests somehow, and that is greatly tied to solving part of the localization problem: currently, localization relies on Make overrides that moz.build can't know about, preventing install manifests being created and used for the corresponding content.

Faster configure

One of the very first things the build system does when a build starts from scratch is to run configure. That's a part of the build system that is based on the antiquated autoconf 2.13, with 15+ years of accumulated linear m4 and shell gunk. That's what detects what kind of compiler you use, how broken it is, how broken its headers are, what options you requested, what application you want to build, etc.

Topping that, it also invokes the configure scripts of third-party software that happens to live in the tree, like ICU or jemalloc 3. Those are also based on autoconf, but on more recent versions than 2.13. They are also third-party, so we essentially only import them, as opposed to actively growing them as we do ours.

While it doesn't necessarily look that bad when running on e.g. Linux, the time it takes to run this whole pile of shell scripts is painfully horrible on Windows (taking more than 5 minutes on automation). While there's still a lot to do, various improvements were recently made:

  • Some classes of changes (such as modifying configure.in) make the build system re-run configure. This used to make every configure run again, but now only a relevant subset is re-run.
  • They used to all run sequentially, but apart from the top-level one, which still needs to run before all the others, they now all run in parallel. This cut configure times almost in half on Windows clobber builds on automation.

In the future, we want to get rid of autoconf 2.13 and use smart lazy python code to only run the tests that are relevant to the configure options. How this would all exactly work has, as of writing, not been determined. It's been on my list of things to investigate for a while, but hasn't reached the top. In the near future, though, I would like to move all our autoconf code related to the build toolchain (compiler and linker) to some form of python code.

Zaphod beeblebuild

There are essentially two main projects in the mozilla-central repository: Firefox/Gecko and the Javascript engine. They use the same build system in many ways. But for a very long time, they actually relied on different copies of the same build system files, like config/rules.mk or build/autoconf/*.m4. And we had a check-sync-dirs script verifying that both projects were indeed using the same file contents. Countless times, landings forgot to synchronize the files, leading to a check-sync-dirs error during the build. I plead guilty to having landed such things multiple times, and so did many other people.

Those days are now long gone, but we currently rely on dirty tricks that still keep the Firefox/Gecko and Javascript engine build systems half separate. So we kind of replaced a conjoined-twins system with a biheaded system. In the future, and this is tied to the section above, both build systems would be completely merged.

Build system interface

Another goal of the build system changes was to make the build and test experience better. Running tests, especially, was not exactly the most pleasant experience.

A single entry point to the build system was created in the form of the mach tool. It simplifies and self-documents many of the workflows that required arcane knowledge.

In the future, we will deprecate the historical build system entry points, or replace their implementation to call mach. This includes client.mk and testing/testsuite-targets.mk.

moz.build

Yet another goal of the build system changes was to improve the experience developers have when adding code to the tree. Again, while there is still a lot to be done on the subject, there have been a lot of changes in the past year that I hope have made developers' lives easier.

As an example, adding new code to libxul previously required:

  • Creating a Makefile.in file
    • Defining a LIBRARY_NAME.
    • Defining which sources to build with CPPSRCS, CSRCS, CMMSRCS, SSRCS or ASFILES, using the right variable name for each source type (C++, C, Obj-C, or assembly; by the way, did you know there was a difference between SSRCS and ASFILES?).
  • Adding something like SHARED_LIBRARY_LIBS += $(call EXPAND_LIBNAME_PATH,libname,$(DEPTH)/path) to toolkit/library/Makefile.in.

Now, adding new code to libxul requires:

  • Creating a moz.build file
    • Defining which sources to build with SOURCES, whether they are C, C++ or other.
    • Defining FINAL_LIBRARY to 'xul'.
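Concretely, such a moz.build contains little more than the following (source file name made up for the example):

SOURCES += [
    'MyComponent.cpp',
]

FINAL_LIBRARY = 'xul'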

This is only a simple example, though. There are more things that should have gotten easier, especially since support for templates landed. Templates allow hiding details such as dependencies on the right combination of libxul, libnss, libmozalloc and others when building programs or XPCOM components. Combined with syntactic sugar and recent changes to how moz.build data is handled by build backends, we could, in the future, allow defining multiple targets in a single directory. Currently, if you want to build e.g. a library and a program, or multiple libraries, in the same directory, well, essentially, you can't.

Relatedly, moz.build currently suffers from how it grew out of simply moving definitions over from Makefile.in, and from how those definitions were partly tied to how Make, config.mk and rules.mk work. Consolidating CPPSRCS, CSRCS and other variables into a single SOURCES variable is the kind of thing that should be done more broadly, and we should bring more consistency to how things are defined (for example NO_PGO vs. no_pgo depending on the context, etc.). Incidentally, I think those changes can be made in a way that simplifies the build backend python code.

Multipass

Some build types, while unusual for developers to do locally on their machine, happen regularly on automation, and are done in awful or inefficient ways.

First, Profile Guided Optimized (PGO) builds. The core idea for those builds is to build once with instrumentation, run the resulting instrumented binary against a profile, and rebuild with the data gathered from that. In our build system, this is what actually happens on Linux:

  • Build everything with instrumentation.
  • Run instrumented binary against profile.
  • Remove half the object directory, including many non-compiled code things that are generated during a normal build.
  • Rebuild with optimizations guided by the collected data.

Yes, the last step repeats things from the first that don't need to be repeated.

Second, Mac universal builds, which happen in the following manner:

  • Build everything for i386.
  • Build everything for x86-64.
  • Merge the result of both builds.

Yes, "everything" in both "Build everything" includes the same non-compiled files. The third step then checks that those non-compiled files actually match (with special treatment for buildconfig.html) and merges the i386 and x86-64 binaries into Mach-O fat binaries. Not only is this inefficient, but the code behind it is terrible, although it got better with the new packager code. And reproducing universal builds locally is not an easy task.
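The binary-merging part of that third step essentially boils down to lipo invocations along these lines (paths are illustrative):

$ lipo -create obj-i386/dist/bin/firefox obj-x86_64/dist/bin/firefox \
       -output dist/universal/firefox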

In the future, the build system should be able to compile binaries for different targets in a way that doesn't require jumping through hoops like the above. It could even allow building e.g. a javascript shell for the build machine during a cross-compilation for Android, without involving wrapper scripts to handle the situation.

Third party code

Building Firefox involves building several third party libraries. In some cases, they use gyp, and we convert their gyp files to moz.build at import time (angle, for instance). In other cases, they use gyp, and we just use those gyp files through moz.build rules, such that the gyp processing is done during configure instead of at import time (webrtc). In yet other cases, they use autoconf and automake, but we use a moz.build file to build them, while still running their configure script (freetype and jemalloc). In all those cases, the sources are handled as if they had been Mozilla code all along.

But in the case of NSPR, NSS and ICU, we don't necessarily build them in ways their respective build systems (were meant to) allow, and rely on hacks around their build systems to do our bidding. This is especially true for NSS (don't look at config/external/nss/Makefile.in if you care about your sanity). On top of using atrocious hacks, that makes the build depend on Make for compilation, and in inefficient ways, at that.

In the future, we would stop relying on the NSPR, NSS and ICU build systems, and would build them as if they were Mozilla code, like the others. We need to find ways to allow that while limiting the cost of updates to new versions. This is especially true for ICU, which is entirely third party. For NSPR and NSS, we have some kind of foothold. Although it is highly unlikely we can make them switch to moz.build (and in the current state of moz.build, being very tied to Gecko, that's not possible without significant changes), we can probably come up with schemes that would allow us to e.g. easily generate moz.build files from their Makefiles and/or manifests. I expect this to be somewhat manageable for NSS and NSPR. ICU is an entirely different story, sadly.

And more

There are many other aspects of the build system that I'm not mentioning here, but you'll excuse me, as this post is already long enough (and took much longer to write than it really should have).

2014-12-03 02:01:43+0900


Building a Firefox Debian package

It's actually been possible for some time, but I made that simpler recently, and I figured I should mention it.

  • Grab the iceweasel source
    $ apt-get source iceweasel
  • Install its build dependencies
    $ apt-get build-dep iceweasel
  • Build it
    $ cd iceweasel-*
    $ PRODUCT_NAME=firefox dpkg-buildpackage -rfakeroot

2014-11-11 11:26:38+0900


No PIE for you!

You are a software vendor. You distribute software on multiple operating systems. Let's say your software is a mildly popular internet browser. Let's say its logo represents an animal and a globe.

Now, because you care about the security of your users, let's say you would like the entire address space of your application to be randomized, including the main executable portion of it. That would be neat, wouldn't it? And there's even a feature for that: Position independent executables.

You get that working on (almost) all the operating systems you distribute software on. Great.

Then a Gnome user (or an Ubuntu user, for that matter) comes, and tells you they downloaded your software tarball, unpacked it, and tried opening your software, but all they get is a dialog telling them:

Could not display "application-name"
There is no application installed for "shared library" files

Because, you see, a Position independent executable, in ELF terms, is actually a (position independent) shared library that happens to be executable, instead of being an executable that happens to be position independent.

And nautilus (the file manager in Gnome and Ubuntu's Unity) usefully knows to distinguish between executables and shared libraries. And will happily refuse to execute shared libraries, even when they have the file-system-level executable bit set.

You'd think you could get around this by using a .desktop file, but the Exec field in those files requires a full path. (No, ./ doesn't work unless the executable is in the nautilus process's current working directory, as in, the path nautilus was run from.)

Dear lazyweb, please prove me wrong and tell me there's a way around this.

2014-10-03 18:00:03+0900


So, hum, bash…

So, I guess you heard about the latest bash hole.

What baffles me is that the following still is allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for "echo", "xterm" and "echo this is a test" are left as an exercise to the reader.

Update: Another thing that bugs me: Why is this feature even enabled in posix mode (the mode you get from bash --posix, or, more importantly, when running bash as sh)? After all, export -f is a bashism.
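That is, assuming sh is bash, the same trigger also works in posix mode:

env echo='() { xterm;}' sh -c "echo this is a test"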

2014-09-25 09:43:14+0900


Firefox and Gtk+ 3

Folks from Collabora and Red Hat have been working on making Firefox on Gtk+ 3 a thing. See Emilio's blog post for a recent update. But getting Firefox to build and run locally is unfortunately not the whole story.

I've been working on getting Gtk+ 3 Firefox builds going on Mozilla build infrastructure, and I'm proud to announce today that those builds are now going through Mozilla continuous integration on a project branch: Elm, and receive the same automated testing as mozilla-central.

And when I said getting Firefox to build and run was unfortunately not the whole story, I meant it: if you click on the Elm link above, you'll notice that there's a lot of orange, when it should be all green.

So, yes, Firefox on Gtk+ 3 is a thing, and it now has continuous integration. But there's still a whole bunch of things to fix. So if you're interested in making those builds work better, you can hop in, there are many things you can do:

  • check the Gtk+ 3 tracking bug and its dependencies for a list of known issues or improvements to be made.
  • download one of the builds from the elm branch, test it, and file bugs if you find some that aren't currently tracked. There aren't nightlies, but you can get the latest builds for 32-bits and 64-bits systems.
  • and if you have level 1 commit access, you can test patches on the Try server, provided you pull from the elm branch or apply this patch on top of the tree you push there.

2014-07-02 08:24:25+0900


FileVault 2 + Mavericks upgrade = massive FAIL

Today, since I was using my Macbook Pro, I figured I'd upgrade OS X. Haha. What a mistake.

So. My Macbook Pro was running Mountain Lion with FileVault 2 enabled. Before that, it was running Lion, and if my recollection is correct, it was using FileVault 2 as well, so the upgrade to Mountain Lion preserved that properly.

The upgrade to Mavericks didn't.

After the installation and the following reboot, and after a few seconds with the Apple logo and the throbber, I would be presented with the infamous slashed circle.

I tried various things, but one of the most important pieces of information I got came from booting in verbose mode (hold Command+V when turning the Mac on; it took me a while to stumble on a page that mentions this one), which repeatedly told me: "Still waiting for root device".

What bugged me the most is that it did ask for the CoreStorage password before failing to boot, and it did complain when I purposefully typed the wrong password.

In Recovery mode (hold Command+R when turning the Mac on), the Disk Utility would show me the partition holding the data, but greyed out, and without a name. In the terminal, running diskutil list displayed something like this:

/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *240.1 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                  Apple_HFS                         59.8 GB    disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
   4:       Microsoft Basic Data Windows HD              59.8 GB    disk0s4
   5:                  Apple_HFS Debian                  9.5 MB     disk0s5
   6:                  Linux LVM                         119.6 GB   disk0s6

(Yes, I have a triple-boot setup)

I wasn't convinced Apple_HFS was the right thing for disk0s2 (where the FileVault storage is supposed to be), so I took a USB disk and created an encrypted HFS volume on it with the Disk Utility. And sure enough, the GPT type for that one was not Apple_HFS, but Apple_CoreStorage.

Having no idea how to change that under OS X, I booted under Debian, and ran gdisk:

# gdisk /dev/sda
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: hybrid
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with hybrid MBR; using GPT.

Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 3C08CA5E-92F3-4474-90F0-88EF0023E4FF
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 468862094
Partitions will be aligned on 8-sector boundaries
Total free space is 4054 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40          409639   200.0 MiB   EF00  EFI System Partition
   2          409640       117219583   55.7 GiB    AF00  Macintosh HD
   3       117219584       118489119   619.9 MiB   AB00  Recovery HD
   4       118491136       235284479   55.7 GiB    0700  Microsoft basic data
   5       235284480       235302943   9.0 MiB     AF00  Apple HFS/HFS+
   6       235304960       468862078   111.4 GiB   8E00  Linux LVM

And changed the type:

Command (? for help): t
Partition number (1-6): 2
Current type is 'Apple HFS/HFS+'
Hex code or GUID (L to show codes, Enter = 8300): af05
Changed type of partition to 'Apple Core Storage'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

After a reboot under OS X, it was still failing to boot, with more erratic behaviour. On the other hand, the firmware boot chooser wasn't displaying "Macintosh HD" as a choice, but "Mac OS X Base System". After rebooting into Recovery again, I opened the Startup Disk dialog and chose "Macintosh HD" there.

Rebooted again, and victory was mine, I finally got the Apple logo, and then the "Completing installation" dialog.

I write this in the hope it may help people hitting the same problem in the future. If you know how to change the GPT type from the command line in Recovery (that is, without booting Linux), that would be valuable information to add in a comment below.

2014-06-04 11:48:24+0900


More memory allocator flexibility now enabled by default, and jemalloc 3.6

A year and a half ago (already), I landed replace-malloc, a feature that allows more memory allocator flexibility in Firefox. It made building tools such as the dark matter detector (aka DMD) easier. It also allows replacing our default allocator, (moz)jemalloc.

Until now, you had to explicitly enable the feature with --enable-replace-malloc. As of writing, it is enabled by default on all builds except Windows builds with jemalloc disabled (but that's due to change too). Note that Windows debug builds, as well as local Windows builds, have jemalloc disabled by default.

It currently is only on mozilla-inbound, but should propagate quickly to other branches. It won't, however, ride the trains: it will stay disabled on aurora, beta, release, esr.

Relatedly, two years ago (already), I landed jemalloc 3.0.0, and kept updating it until 3.2.0 six months later. It was, and still is, disabled by default. Sadly, it hasn't seen an update since then. The recent increase in activity around improving the memory footprint of our own fork of jemalloc (dating back to before version 1.0) made me want to update it at last.

This is now done, and the tree contains a (slightly patched) copy of jemalloc 3.6.0. And combined with replace-malloc, it is now possible to test it on nightly builds (well, starting from the one after the next mozilla-inbound merge to mozilla-central) with the following:

  • On GNU/Linux:
    $ LD_PRELOAD=/path/to/libreplace_jemalloc.so firefox
  • On OS X:
    $ DYLD_INSERT_LIBRARIES=/path/to/libreplace_jemalloc.dylib firefox
  • On Windows:
    $ MOZ_REPLACE_MALLOC_LIB=drive:\path\to\replace_jemalloc.dll firefox
  • On Android:
    $ am start -a android.activity.MAIN -n org.mozilla.fennec/.App --es env0 MOZ_REPLACE_MALLOC_LIB=/path/to/libreplace_jemalloc.so

No, you don't need to rebuild Firefox to test jemalloc 3.6.0 this way. The relevant library is now shipped in the nightly builds. Except on Android, as I haven't figured out where to put it, but you can take the .so file from a local build and use it with a nightly build.

I would appreciate it if several people could start using jemalloc 3.x this way. There is still work to do to make it the default. In fact, the list of dependencies of the tracking bug still has the same bugs I filed a long time ago. Hopefully, the ease of use will make someone want to scratch those itches. Please ping me if you want to take one of those bugs.

2014-05-23 04:46:37+0900


Don’t ever use in-tree mozconfigs

I just saw two related gists about how some people are building Firefox.

Both make the same mistake, which is not really surprising, since one is based on the other. As I'm afraid people might pick that up, I'm posting this:

Don't ever use in-tree mozconfigs

If your mozconfig contains something like

. $topsrcdir/something

Then remove it. Now.

Those mozconfigs are for use in automated builds. They make many assumptions about the build environment, namely that it is the one on the build slaves. Local developers shouldn't need anything but minimalistic, self-contained mozconfigs. If there are things that can be changed in the build system to accommodate developers, file bugs (I could certainly see the .noindex thing automatically added to MOZ_OBJDIR by default on Mac).
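For reference, a perfectly workable developer mozconfig can be as small as the following (objdir name and options are just examples):

# Self-contained mozconfig; nothing sourced from the tree.
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-firefox.noindex
ac_add_options --enable-debug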

Corollary: if you can't build Firefox without a mozconfig (for a reason other than your build environment missing build requirements), file a bug.

2014-05-14 02:27:50+0900
