July 2nd, 2014

Firefox and Gtk+ 3

Folks from Collabora and Red Hat have been working on making Firefox on Gtk+ 3 a thing. See Emilio’s blog post for a recent update. But getting Firefox to build and run locally is unfortunately not the whole story.

I’ve been working on getting Gtk+ 3 Firefox builds going on Mozilla build infrastructure, and I’m proud to announce today that those builds are now going through Mozilla continuous integration on a project branch: Elm, and receive the same automated testing as mozilla-central.

And when I said getting Firefox to build and run was unfortunately not the whole story, I meant it: if you click on the Elm link above, you’ll notice that there’s a lot of orange, when it should be all green.

So, yes, Firefox on Gtk+ 3 is a thing, and it now has continuous integration. But there’s still a whole bunch of things to fix. So if you’re interested in making those builds work better, you can hop in, there are many things you can do:

  • check the Gtk+ 3 tracking bug and its dependencies for a list of known issues or improvements to be made.
  • download one of the builds from the elm branch, test it, and file bugs if you hit issues that aren’t currently tracked. There are no nightlies, but you can get the latest builds for 32-bit and 64-bit systems.
  • and if you have level 1 commit access, you can test patches on the Try server, provided you pull from the elm branch or apply this patch on top of the tree you push there.

2014-07-02 08:24:25+0200

p.d.o, p.m.o | 5 Comments »

June 4th, 2014

FileVault 2 + Mavericks upgrade = massive FAIL

Today, since I was using my Macbook Pro, I figured I’d upgrade OS X. Haha. What a mistake.

So. My Macbook Pro was running Mountain Lion with FileVault 2 enabled. Before that, it was running Lion, and if my recollection is correct, it was using FileVault 2 as well, so the upgrade to Mountain Lion preserved that properly.

The upgrade to Mavericks didn’t.

After the installation and the following reboot, and after a few seconds with the Apple logo and the throbber, I would be presented with the infamous slashed circle.

I tried various things, but one of the most important pieces of information I got came from booting in verbose mode (hold Command+V when turning the Mac on; it took me a while to stumble on a page that mentions this one), which told me, repeatedly, “Still waiting for root device”.

What bugged me the most is that it did ask for the CoreStorage password before failing to boot, and it did complain when I purposely typed a wrong password.

In Recovery mode (hold Command+R when turning the Mac on), the Disk Utility would show me the partition that was holding the data, but greyed out, and without a name. In the terminal, typing the diskutil list command displayed something like this:

/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *240.1 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                  Apple_HFS                         59.8 GB    disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
   4:       Microsoft Basic Data Windows HD              59.8 GB    disk0s4
   5:                  Apple_HFS Debian                  9.5 MB     disk0s5
   6:                  Linux LVM                         119.6 GB   disk0s6

(Yes, I have a triple-boot setup)

I wasn’t convinced Apple_HFS was the right type for disk0s2 (where the FileVault storage is supposed to be), so I took a USB disk and created an encrypted HFS volume on it with Disk Utility. And sure enough, the GPT type for that one was not Apple_HFS, but Apple_CoreStorage.

Having no idea how to change that under OS X, I booted under Debian, and ran gdisk:

# gdisk /dev/sda
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: hybrid
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with hybrid MBR; using GPT.

Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 3C08CA5E-92F3-4474-90F0-88EF0023E4FF
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 468862094
Partitions will be aligned on 8-sector boundaries
Total free space is 4054 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40          409639   200.0 MiB   EF00  EFI System Partition
   2          409640       117219583   55.7 GiB    AF00  Macintosh HD
   3       117219584       118489119   619.9 MiB   AB00  Recovery HD
   4       118491136       235284479   55.7 GiB    0700  Microsoft basic data
   5       235284480       235302943   9.0 MiB     AF00  Apple HFS/HFS+
   6       235304960       468862078   111.4 GiB   8E00  Linux LVM

And changed the type:

Command (? for help): t
Partition number (1-6): 2
Current type is 'Apple HFS/HFS+'
Hex code or GUID (L to show codes, Enter = 8300): af05
Changed type of partition to 'Apple Core Storage'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

After a reboot under OS X, it still was failing to boot, with more erratic behaviour. On the other hand, the firmware boot chooser wasn’t displaying “Macintosh HD” as a choice, but “Mac OS X Base System”. After rebooting under Recovery again, I opened the Startup Disk dialog and chose “Macintosh HD” there.

Rebooted again, and victory was mine: I finally got the Apple logo, and then the “Completing installation” dialog.

I hope this may help people hitting the same problem in the future. If you know how to change the GPT type from the command line in Recovery (that is, without booting Linux), that would be valuable information to add in a comment below.
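(For what it’s worth, the Recovery environment’s Terminal does ship a BSD gpt tool, so something along the following lines might do the trick, although I haven’t verified it from Recovery myself. The index, start sector and size must match what gpt show reports, and the type GUID is the Apple Core Storage one.)

$ diskutil unmountDisk disk0
$ gpt -r show disk0
# note the start sector and size of index 2, then re-create that entry with
# the Apple_CoreStorage type (the values below are the ones from this disk)
$ gpt remove -i 2 disk0
$ gpt add -i 2 -b 409640 -s 116809944 -t 53746F72-6167-11AA-AA11-00306543ECAC disk0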

2014-06-04 11:48:24+0200

me, p.m.o | 1 Comment »

May 23rd, 2014

More memory allocator flexibility now enabled by default, and jemalloc 3.6

A year and a half ago (already), I landed replace-malloc, a feature that allows more memory allocator flexibility in Firefox. It made building tools such as the dark matter detector (aka DMD) easier. It also allows replacing our default allocator, (moz)jemalloc.

Until now, you had to explicitly enable the feature with --enable-replace-malloc. As of writing, it is enabled by default on all builds except Windows builds with jemalloc disabled (but that’s due to change too). Note that Windows debug builds, as well as local Windows builds, have jemalloc disabled by default.

It currently is only on mozilla-inbound, but should propagate quickly to other branches. It won’t, however, ride the trains: it will stay disabled on aurora, beta, release, esr.

Relatedly, two years ago (already), I landed jemalloc 3.0.0, and updated it until 3.2.0 six months later. It was and still is disabled by default. Sadly, it hasn’t seen an update since then. The recent increase in activity around improving the memory footprint of our own fork of jemalloc (dating back to before version 1.0) made me want to update it at last.

This is now done, and the tree contains a (slightly patched) copy of jemalloc 3.6.0. And combined with replace-malloc, it is now possible to test it on nightly builds (well, starting from the one after the next mozilla-inbound merge to mozilla-central) with the following:

  • On GNU/Linux:
    $ LD_PRELOAD=/path/to/libreplace_jemalloc.so firefox
  • On OSX:
    $ DYLD_INSERT_LIBRARIES=/path/to/libreplace_jemalloc.dylib firefox
  • On Windows:
    $ MOZ_REPLACE_MALLOC_LIB=drive:\path\to\replace_jemalloc.dll firefox
  • On Android:
    $ am start -a android.activity.MAIN -n org.mozilla.fennec/.App --es env0 MOZ_REPLACE_MALLOC_LIB=/path/to/libreplace_jemalloc.so

No, you don’t need to rebuild Firefox to test it with jemalloc 3.6.0. The relevant library is now shipped in the nightly builds. Except on Android, as I haven’t figured out where to put it, but you can take the .so file from a local build and use it with a nightly build.
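To double-check that the replacement library was actually picked up, a quick way on GNU/Linux is to look at the mappings of the running process, along these lines (a rough sketch, assuming a single firefox process):

$ LD_PRELOAD=/path/to/libreplace_jemalloc.so firefox &
# the library should show up in the process mappings
$ grep libreplace_jemalloc /proc/$(pgrep -n firefox)/maps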

I would appreciate it if several people could start using jemalloc 3.x this way. There is still work to do to make it the default. In fact, the list of dependencies of the tracking bug still has the same bugs I filed a long time ago. Hopefully, the ease of use will make someone want to scratch those itches. Please ping me if you want to take one of those bugs.

2014-05-23 04:46:37+0200

p.m.o | 2 Comments »

May 14th, 2014

Don’t ever use in-tree mozconfigs

I just saw two related gists about how some people are building Firefox.

Both make the same mistake, which is not really surprising, since one is based on the other. As I’m afraid people might pick that up, I’m posting this:

Don’t ever use in-tree mozconfigs

If your mozconfig contains something like

. $topsrcdir/something

Then remove it. Now.

Those mozconfigs are meant for automated builds. They make many assumptions about the build environment, namely that it is the one on the build slaves. Local developers shouldn’t need anything but minimalistic, self-contained mozconfigs. If there are things that could be changed in the build system to accommodate developers, file bugs (I could certainly see the .noindex thing being automatically added to MOZ_OBJDIR by default on Mac).
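For reference, a perfectly workable local mozconfig can be as small as the following sketch (the objdir path and the options are just examples; pick what you actually need):

# Keep the object directory out of the source directory
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-firefox
# Add only the options you actually care about, e.g. a debug build
ac_add_options --enable-debug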

Corollary: if you can’t build Firefox without a mozconfig (for a reason other than your build environment missing build requirements), file a bug.

2014-05-14 02:27:50+0200

p.m.o | 4 Comments »

May 8th, 2014

Faster compilations for everyone?

If you’re following this blog, you may be aware of the recent work on shared compilation cache. This has been deployed with great results on Mozilla’s try server for all platforms (except a few build types, like ASAN or valgrind), and is being tested for Linux/Android builds on b2g-inbound (more on that in subsequent posts).

A side effect of the work to make it run on all platforms is that it can now be used to build Firefox on Windows, although it requires a specific setup. And as of recently, it’s also possible to use it with local storage instead of S3. This means we now have a (basic) ccache for Windows that can build Firefox.

If you wish to try it, here is what you need to do:

  • Clone the repository from github:

    $ git clone https://github.com/glandium/sccache

  • Add the following to your mozconfig:

    ac_add_options "--with-compiler-wrapper=python2.7 path/to/sccache/sccache.py"
    export _DEPEND_CFLAGS='-deps$(MDDEPDIR)/$(@F).pp'
    mk_add_options "export CC_WRAPPER="
    mk_add_options "export CXX_WRAPPER="
    mk_add_options "export COMPILE_PDB_FLAG="
    mk_add_options "export HOST_PDB_FLAG="
    mk_add_options "export MOZ_DEBUG_FLAGS=-Z7"

    Update: Currently, path/to/sccache/sccache.py needs to be a Windows-like path (as opposed to an msys/cygwin path), with forward slashes.

  • Then set the SCCACHE_DIR environment variable to some local directory.
  • And build happily (see the example right after this list).
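For example, a session from an msys shell might look like this (the cache directory is just an example, and depending on your setup a Windows-style path may be safer here too):

$ export SCCACHE_DIR=c:/sccache
$ make -f client.mk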

A few things to note:

  • As of writing, sccache doesn’t support cleaning up the storage directory, so it will grow indefinitely (until you clean it up yourself).
  • Because the MSVC preprocessor is not exactly fast, and because sccache doesn’t have a direct mode like ccache, it doesn’t make as much difference as ccache does.
  • It also works on non-Windows platforms, which don’t need all the mozconfig changes above; only the --with-compiler-wrapper line is required.

Play with it and feel free to fork it on github, and improve it. Pull requests encouraged.

2014-05-08 09:36:24+0200

p.m.o | 1 Comment »

April 5th, 2014

怒り、失望、ストレス発散 (anger, disappointment, stress relief)

I started learning Japanese calligraphy a few months ago, with no prior experience with a brush and ink. It is an interesting endeavour. For various reasons, I had to skip class for a few weeks, but after the past ten days, I needed some stress relief on paper.

怒り (anger)
失望 (disappointment)

スッキリしました。 (That felt good.)

2014-04-05 11:21:58+0200

me, p.d.o, p.m.o | 2 Comments »

March 4th, 2014

Linux and Android try builds, now up to twice as fast

(Taras told me to use sensationalist titles to draw more attention, so here we are)

Last week, I brought up the observable build time improvements on Linux try builds with the use of the shared cache. I want to revisit those results now that there have been more builds, and to look at the first results of the switch for Android try builds, which are now also using the shared cache.

Here is a comparison of the distribution of build times from last time (about ten days of try pushes, starting from the moment the shared cache was enabled) vs. build times for the past ten days (which start, roughly, where the previous data set stopped):

As expected, the build times are still improving overall thanks to the cache being fuller. The slowest build times are now slightly lower than the slowest build times we were getting without the shared cache. There is a small “regression” in the number of builds taking between 15 and 20 minutes, but that’s likely related to changes in the tree creating more cache misses. To summarize the before/after:

Unified:
             shared (after 10 days)   shared (initially)   ccache
  Average    17:11                    17:11                29:19
  Median     13:03                    13:30                30:10

Non-unified:
             shared (after 10 days)   shared (initially)   ccache
  Average    31:00                    30:58                57:08
  Median     22:07                    22:27                60:57

[Note: I’m not providing graphs for non-unified builds; they look boringly similar, just with different values, of which the averages and medians above should give a good idea.]

Android try builds also got faster with shared cache. The situation looks pretty similar to what we observed after the first ten days of Linux try shared cache builds:

[Note: I removed two builds without shared cache from those stats; both took more than an hour, for some reason I haven’t investigated.]

As with the Linux builds, the fastest shared cache builds are slower than the fastest ccache builds, and so are the slowest ones, but as we can see above, those slowest builds get faster as the cache fills up. And as I wrote last week, work is under way to make the fastest builds faster.

This is what the average and median look like for Android try builds:

Unified:
             shared    ccache
  Average    17:14     24:08
  Median     13:52     24:57

Non-unified:
             shared    ccache
  Average    27:49     43:00
  Median     20:35     47:17

2014-03-04 08:33:38+0200

p.m.o | 2 Comments »

February 25th, 2014

Analyzing shared cache on try

As mentioned in the previous post, the shared cache is now effective on try for linux and linux64, opt and debug builds, provided the push has changeset a62bde1d6efe in its history. The unknown in that equation was how long it takes for landings on mozilla-inbound or mozilla-central to propagate to try pushes.
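(As an aside, a quick way to check from a local tree whether that changeset is in the history you’re about to push is a revset query along these lines; it prints nothing if the changeset isn’t an ancestor of your working copy:)

$ hg log -r "a62bde1d6efe and ancestors(.)" --template "{node|short} {desc|firstline}\n"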

So I took a period of about 8 days and observed, over a sliding 24-hour window, the percentage of pushes containing that changeset. To see whether the dev-tree-management post had an impact, I also looked at a random mozilla-central changeset, 339f0d450d46. This is what it looks like:

So it takes about two and a half days for a mozilla-central changeset to propagate to most try pushes, and it looks like my dev-tree-management post (which was cross-posted on dev-platform) didn’t have an impact, although 339f0d450d46 is close enough to the announcement that it could still be benefiting from it. I’ll revisit this with future unannounced changes.

The drop that can be seen on February 16 is due to there being fewer pushes overall over the weekend, which somehow made pushes without changeset a62bde1d6efe more prominent. Maybe contributors pushing on the weekend are more likely to push old trees.

Now, let’s see what effect shared cache had on try build times. I took about the last two weeks of successful try build logs for linux and linux64, opt and debug, and analyzed them to extract the following data:

  • Where they were built (in-house vs AWS),
  • Whether they were built with unified sources or not (this significantly changes build times),
  • Whether they used the shared cache or not,
  • Whether they are PGO builds or not,
  • How long the “compile” step took (which, really, is “make -f client.mk”, so this includes more than compilation, like configure and copying many files).

There sadly weren’t enough PGO builds to plot anything about them, so I just excluded them. Then, since the shared cache is only enabled on AWS builds, and since AWS and in-house build times are so different, I excluded in-house builds. Further looking at the build times for linux opt, linux64 opt, linux debug and linux64 debug, they all looked similar enough that they didn’t need to be split into different buckets.

Update: I should mention that I also excluded my own try pushes because I tended to do multiple rebuilds on them, with all of them getting near 100% cache hit and best build times.

Sorting all that data by build time, the following are graphs showing how many builds took less than a given build time.

For unified sources builds (870 builds with ccache, 1111 builds with shared cache):

For non-unified sources builds (302 builds with ccache, 332 builds with shared cache):

The first thing to note is that this does include the very first try pushes with shared cache, which probably skews the slowest builds. It should also be noted that linux debug builds are (still) currently non-unified by default for some reason.

That being said, for unified sources builds, about 3.25% of the builds end up slower with the shared cache than with ccache, and 5.2% for non-unified builds. Most of that is at the best-build-times end, where builds with the shared cache can take twice the time they would with ccache. I’m currently working on changes that should make the difference slimmer (more on that in a subsequent post). Anyway, that still leaves more than 90% of builds faster with the shared cache, which makes for a big improvement in build times on average:

Unified:
             shared    ccache
  Average    17:11     29:19
  Median     13:30     30:10

Non-unified:
             shared    ccache
  Average    30:58     57:08
  Median     22:27     60:57

Interestingly, a few of the fastest non-unified builds with the shared cache were significantly faster than the others, and it looks like what they have in common is that they were built in the US-East-1 region instead of the US-West-2 region. I haven’t looked into why those particular builds were much faster.

2014-02-25 03:57:20+0200

p.m.o | 2 Comments »

February 13th, 2014

Testing shared cache on try

After some success with the shared cache experiment (read about it, and some more), the next step was to get it to work on the Mozilla continuous integration infrastructure, and that turned out to reveal a couple of issues.

The first issue is that the DNS server for the AWS build slaves we use is not the AWS DNS, but our in-house DNS. Which has two consequences:

  • whatever geolocation S3 does at the DNS level may end up giving an S3 endpoint IP that is not optimal for the AWS region we’re in, because it was correlated with the location of our in-house DNS
  • the roundtrip to the in-house DNS server was around 80ms, and because every compilation is an independent process, each one does a DNS request, so each one takes that 80ms hit. Note that while suboptimal, doing a DNS request for each compilation also allows getting different S3 endpoints, because of both the DNS round robin and the geolocation S3 uses, which give very different IPs every so often.

The consequence of this is that build times were very unstable, ranging from 11 minutes, as during my experiments, up to 45 minutes for a 99% cache hit build! After importing a DNS resolver into the shared cache script and making it use the AWS DNS, build times became much more stable, between 11 and 12 minutes. (We actually do need to use the in-house DNS for normal operations on the build slaves, so it’s not possible to just switch /etc/resolv.conf.)
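(For the record, that kind of per-lookup latency is easy to measure with dig; the resolver addresses below are placeholders, not our actual ones:)

# round trip to the in-house resolver vs. the AWS one, for the same name
$ dig +noall +stats @<in-house-dns-ip> s3.amazonaws.com | grep 'Query time'
$ dig +noall +stats @<aws-dns-ip> s3.amazonaws.com | grep 'Query time'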

The second issue is that the US Standard region for S3 can have quite high latency depending on the region you’re connecting to it from. Our build slaves are located in Oregon and Northern Virginia, and while the slaves in Northern Virginia could reach S3 US Standard within 3ms, those in Oregon could only reach it within 90ms. Those numbers were unfortunately obtained with the in-house DNS, so geolocation may have had an impact on them, but after switching DNS, the build times on Oregon slaves were still way higher than on Northern Virginia slaves (~21 minutes vs. ~11 minutes). This led us to use an S3 bucket per region.

With those issues dealt with, we’re now ready for more widespread testing, and as such I’ve turned the shared cache on for Linux opt, Linux debug, Linux64 opt and Linux64 debug builds, on try only, and only if the push contains the relevant setup, which landed in changeset a62bde1d6efe.
See my post on dev-tree-management for a few more details, notably if you hit bugs.

Please note this is only the beginning. More platforms will use the cache soon, including some that aren’t currently using ccache. I also got some timing numbers during the initial tests on try that hint at which of the script’s performance issues most immediately need addressing. So you can expect builds to get faster and faster as the cache populates, and as the script is improved with feedback from past experiments and the current deployment (I’ll be collecting data from your try pushes). Relatedly, I’m also working on build system improvements that should make the ‘libs’ step much faster.

2014-02-13 10:18:45+0200

p.m.o | 1 Comment »

February 5th, 2014

Efficiency of incremental builds on inbound

Unlike try, most other branches, like inbound, don’t start builds from an empty tree. They start from the result of the previous build for the same branch on the same slave. But sometimes that doesn’t work well, so we need to clobber (which means we remove the old build tree and start from scratch again). When that happens, we usually trigger a clobber on all subsequent builds for the branch. Or sometimes we just declare a slave too old and do a periodic clobber. Or sometimes a slave just doesn’t have a previous build tree.

As I mentioned in the previous post about ccache efficiency, the fact that so many builds run on different slaves may hinder those incremental builds. Let’s get numbers.

Taking the same sample of builds as before (spanning 10 days after the holidays), I gathered some numbers for linux64 opt and macosx64 opt builds, based on the number of files ccache built: when starting from a previous build, ccache is not invoked as much (or so we would like), and that shows up in its stats.
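(Concretely, those stats are ccache’s own counters; wrapping a build roughly like this shows how many files were actually recompiled:)

$ ccache -z          # zero the statistics counters before the build
$ make -f client.mk  # do the (incremental) build
$ ccache -s          # compare the "cache hit" and "cache miss" counts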

The sample is 408 pushes, including a total of 1454 changesets. Of those pushes:

  • 344 had a linux64 opt build, 2 of which were retriggered because of a failure, for a total of 346 builds
  • 377 had a macosx64 opt build, 12 of which were retriggered because of a failure, and 6 more were retriggered for some other reason, for a total of 397 builds. This doesn’t line up because 2 pushes had their build retriggered twice.

It’s interesting to see how many builds we actually skip, most probably because of coalescing. I’d argue this is too many, but I haven’t looked exactly how many of those are legitimate “no need to build this because it is android only” or similar patterns.

Armed with an AWS linux builder, I replayed those 408 pushes in an optimal setup: no clobbers besides those requested by the build system itself, and all pushes built on the same machine, in the order they landed. I didn’t, however, skip builds like the actual slaves do, but that really doesn’t matter since the slaves aren’t building consecutive pushes anyway. Note that configure was rerun for every push because of how my builder handles pulling from mercurial. We don’t do that on build slaves, but I’d argue we should: it would avoid plenty of build-system-level clobbers, and many “fun” build failures.

Of those 408 pushes, 6 requested a clobber at the build system level. But the numbers are very different on build slaves:

  • On linux, out of 346 builds:
    • 19 had a clobber by the build system
    • 8 had a forced clobber (when using the clobberer)
    • 1 had a periodic clobber
    • 162 (!) had no previous build tree at all for whatever reason (purged previously, or new slave)
    • for a total of 190 builds ending up starting with no previous build tree (54.9%)
  • On mac, out of 397 builds:
    • 23 had a clobber by the build system
    • 31 had a forced clobber
    • 34 had no previous build tree at all
    • for a total of 88 builds with no previous build tree (22.2%)

(Note: the difference in the numbers of build system clobbers and forced clobbers between the two platforms is due to many of them being masked, on linux, by the lack of a previous build tree.)

As with ccache efficiency, the bigger build slave pool used for linux builds hurts, making those builds start from scratch more often than not, which doesn’t help with build turnaround times.

But even on the remaining non-clobber builds, if the source tree is significantly different, we may end up rebuilding as much as if we had clobbered in the first place. Sometimes it only takes a change to one file to do that (for example, add an AC_DEFINE in configure.in, and it will rebuild almost everything), but sometimes it can be an accumulation of changes. This is where the ccache stats get useful again.

A few preliminary observations:

  • There are always at least around 1.5% of files rebuilt on ideal linux builds (which needs investigating), but a lot of the builds rebuilt around 5% because of bug 959519.
  • The number of source files can vary across pushes, but I used a more or less appropriate constant value for all builds, so some near-100% values may actually be 100%.
  • Mac builds surprisingly sometimes build the same files more than once. I filed bug 967976.

The first thing to note on the above graph is that about 42% of mac builds and about 75% of linux builds are either clobbers or, as I like to call them, near-clobbers (incremental builds that end up rebuilding everything). Near-clobbers thus account for as much as 20% of overall builds on both platforms, or about 50% (!) of non-clobber builds on linux and about 25% of non-clobber builds on mac.

I can’t stress enough how the build slave pool sizes are hurting our turnaround times.

It can be noted that there are a few plateaus around 82% and 69% of files built, which are likely due to central headers being changed, triggering that many files to be rebuilt. This is the kind of thing that efforts like include-what-you-use help with, and we’ve made progress on that in the past months.

Overall, with our current setup, we are in a vicious circle. Adding more build types (like, recently, ASAN, Root analysis, Valgrind, etc.), or landing more stuff, requires more slaves. More slaves make builds slower, for the reasons given here and in previous posts. Slower builds require more slaves to keep up with landings. Rinse, repeat. We need to break the feedback loop.

(Fun fact: while gathering the ideal linux numbers involved nothing more than mercurial updates and building the tree (so no make package, no make check, etc.), it only took about a day. For 10 days’ worth of inbound pushes. With one machine.)

2014-02-05 03:57:49+0200

p.m.o | 2 Comments »