Archive for the 'p.m.o' Category

No PIE for you!

You are a software vendor. You distribute software on multiple operating systems. Let’s say your software is a mildly popular internet browser. Let’s say its logo represents an animal and a globe.

Now, because you care about the security of your users, let’s say you would like the entire address space of your application to be randomized, including the main executable portion of it. That would be neat, wouldn’t it? And there’s even a feature for that: Position independent executables.

You get that working on (almost) all the operating systems you distribute software on. Great.

Then a Gnome user (or an Ubuntu user, for that matter) comes along and tells you they downloaded your software tarball, unpacked it, and tried opening your software, but all they get is a dialog telling them:

Could not display “application-name”
There is no application installed for “shared library” files

Because, you see, a Position independent executable, in ELF terms, is actually a (position independent) shared library that happens to be executable, instead of being an executable that happens to be position independent.
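You can see the distinction in the ELF header: a PIE reports its type as DYN, just like a shared library, rather than EXEC. A quick check (the path is only an example):

$ readelf -h ./firefox | grep 'Type:'
  Type:                              DYN (Shared object file)

A traditional, non-PIE executable would show “EXEC (Executable file)” there instead, and that difference is presumably what file type detection ends up keying on, hence the “shared library” wording in the dialog above.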

And nautilus (the file manager in Gnome and Ubuntu’s Unity) helpfully knows how to distinguish between executables and shared libraries, and will happily refuse to execute shared libraries, even when they have the file-system-level executable bit set.

You’d think you could get around this by using a .desktop file, but the Exec field in those files requires a full path. (No, ./ doesn’t work unless the executable is in the nautilus process’s current working directory, that is, the path nautilus was run from.)
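For reference, here is roughly what such a launcher would look like; a hypothetical sketch with placeholder name and paths, where the Exec line is precisely the problem, since you can’t know in advance where the user unpacked the tarball:

[Desktop Entry]
Type=Application
Name=Animal Globe Browser
Exec=/home/user/browser/browser
Icon=/home/user/browser/icon.png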

Dear lazyweb, please prove me wrong and tell me there’s a way around this.

2014-10-03 18:00:03+0200

p.d.o, p.m.o | 8 Comments »

So, hum, bash…

So, I guess you heard about the latest bash hole.

What baffles me is that the following is still allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for “echo“, “xterm” and “echo this is a test” are left as an exercise for the reader.

Update: Another thing that bugs me: why is this feature even enabled in POSIX mode (the mode you get from bash --posix, or, more importantly, when running bash as sh)? After all, export -f is a bashism.
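For the record, this is what the legitimate feature looks like: a function exported with export -f travels to child bash processes through the environment (a minimal sketch; the exact encoding of the environment variable depends on the bash version):

$ myfunc() { echo "hello from myfunc"; }
$ export -f myfunc
$ bash -c myfunc
hello from myfunc

The hole is that any environment variable whose value starts with “() {” gets imported the same way, whether it was put there by export -f or not.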

2014-09-25 09:43:14+0200

p.d.o, p.m.o | 8 Comments »

Firefox and Gtk+ 3

Folks from Collabora and Red Hat have been working on making Firefox on Gtk+ 3 a thing. See Emilio’s blog post for a recent update. But getting Firefox to build and run locally is unfortunately not the whole story.

I’ve been working on getting Gtk+ 3 Firefox builds going on Mozilla build infrastructure, and I’m proud to announce today that those builds are now going through Mozilla continuous integration on a project branch, Elm, and receive the same automated testing as mozilla-central.

And when I said getting Firefox to build and run was unfortunately not the whole story, I meant it: if you click on the Elm link above, you’ll notice that there’s a lot of orange, when it should be all green.

So, yes, Firefox on Gtk+ 3 is a thing, and it now has continuous integration. But there’s still a whole bunch of things to fix. So if you’re interested in making those builds work better, you can hop in, there are many things you can do:

  • check the Gtk+ 3 tracking bug and its dependencies for a list of known issues or improvements to be made.
  • download one of the builds from the elm branch, test it, and file bugs if you find some that aren’t currently tracked. There aren’t nightlies, but you can get the latest builds for 32-bit and 64-bit systems.
  • and if you have level 1 commit access, you can test patches on the Try server, provided you pull from the elm branch or apply this patch on top of the tree you push there.
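For local builds, the toolkit is selected from the mozconfig; if I recall correctly, something along these lines is enough to get a Gtk+ 3 build (a sketch; the exact toolkit value may evolve with the port):

ac_add_options --enable-default-toolkit=cairo-gtk3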

2014-07-02 08:24:25+0200

p.d.o, p.m.o | 4 Comments »

FileVault 2 + Mavericks upgrade = massive FAIL

Today, since I was using my Macbook Pro, I figured I’d upgrade OS X. Haha. What a mistake.

So. My Macbook Pro was running Mountain Lion with FileVault 2 enabled. Before that, it was running Lion, and if my recollection is correct, it was using FileVault 2 as well, so the upgrade to Mountain Lion preserved that properly.

The upgrade to Mavericks didn’t.

After the installation and the following reboot, and after a few seconds with the Apple logo and the throbber, I would be presented with the infamous slashed circle.

I tried various things, but one of the most important pieces of information I got came from booting in verbose mode (hold Command+V when turning the Mac on; it took me a while to stumble on a page that mentions this one), which told me, repeatedly, “Still waiting for root device”.

What bugged me the most is that it did ask for the CoreStorage password before failing to boot, and it did complain when I purposely typed a wrong password.

In Recovery mode (hold Command+R when turning the Mac on), the Disk Utility would show me the partition that was holding the data, but greyed out, and without a name. In the terminal, the diskutil list command displayed something like this:

/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *240.1 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                  Apple_HFS                         59.8 GB    disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
   4:       Microsoft Basic Data Windows HD              59.8 GB    disk0s4
   5:                  Apple_HFS Debian                  9.5 MB     disk0s5
   6:                  Linux LVM                         119.6 GB   disk0s6

(Yes, I have a triple-boot setup)

I wasn’t convinced Apple_HFS was the right thing for disk0s2 (where the FileVault storage is supposed to be), so I took a USB disk and created an Encrypted HFS on it with the Disk Utility. And sure enough, the GPT type for that one was not Apple_HFS, but Apple_CoreStorage.
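In hindsight, diskutil can probably show the same information directly, without the USB disk detour; something like the following, using the disk0s2 identifier from the listing above:

$ diskutil info disk0s2 | grep 'Partition Type'
   Partition Type:           Apple_HFS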

Having no idea how to change that under OS X, I booted under Debian, and ran gdisk:

# gdisk /dev/sda
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
  MBR: hybrid
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with hybrid MBR; using GPT.

Command (? for help): p
Disk /dev/sda: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 3C08CA5E-92F3-4474-90F0-88EF0023E4FF
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 468862094
Partitions will be aligned on 8-sector boundaries
Total free space is 4054 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40          409639   200.0 MiB   EF00  EFI System Partition
   2          409640       117219583   55.7 GiB    AF00  Macintosh HD
   3       117219584       118489119   619.9 MiB   AB00  Recovery HD
   4       118491136       235284479   55.7 GiB    0700  Microsoft basic data
   5       235284480       235302943   9.0 MiB     AF00  Apple HFS/HFS+
   6       235304960       468862078   111.4 GiB   8E00  Linux LVM

And changed the type:

Command (? for help): t
Partition number (1-6): 2
Current type is 'Apple HFS/HFS+'
Hex code or GUID (L to show codes, Enter = 8300): af05
Changed type of partition to 'Apple Core Storage'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

After a reboot under OS X, it was still failing to boot, with more erratic behaviour. On the other hand, the firmware boot chooser wasn’t displaying “Macintosh HD” as a choice, but “Mac OS X Base System”. After rebooting under Recovery again, I opened the Startup Disk dialog and chose “Macintosh HD” there.

Rebooted again, and victory was mine: I finally got the Apple logo, and then the “Completing installation” dialog.

I hope this helps people hitting the same problem in the future. If you know how to change the GPT type from the command line in Recovery mode (that is, without booting Linux), that would be valuable information to add in a comment below.

2014-06-04 11:48:24+0200

me, p.m.o | 1 Comment »

More memory allocator flexibility now enabled by default, and jemalloc 3.6

A year and a half ago (already), I landed replace-malloc, a feature that allows more memory allocator flexibility in Firefox. It made it easier to build tools such as the dark matter detector (aka DMD). It also allows replacing our default allocator, (moz)jemalloc.

Until now, you had to explicitly enable the feature with --enable-replace-malloc. As of writing, it is enabled by default on all builds except Windows builds with jemalloc disabled (but that’s due to change too). Note that Windows debug builds, as well as local Windows builds, have jemalloc disabled by default.

It is currently only on mozilla-inbound, but should propagate quickly to other branches. It won’t, however, ride the trains: it will stay disabled on aurora, beta, release and esr.

Relatedly, two years ago (already), I landed jemalloc 3.0.0, and kept updating it, up to 3.2.0 six months later. It was and still is disabled by default. Sadly, it hasn’t seen an update since then. The recent increase in activity around improving the memory footprint of our own fork of jemalloc (which dates back to before version 1.0) made me want to update it at last.

This is now done, and the tree contains a (slightly patched) copy of jemalloc 3.6.0. And combined with replace-malloc, it is now possible to test it on nightly builds (well, starting from the one after the next mozilla-inbound merge to mozilla-central) with the following:

  • On GNU/Linux:
    $ LD_PRELOAD=/path/to/libreplace_jemalloc.so firefox
  • On OSX:
    $ DYLD_INSERT_LIBRARIES=/path/to/libreplace_jemalloc.dylib firefox
  • On Windows:
    $ MOZ_REPLACE_MALLOC_LIB=drive:\path\to\replace_jemalloc.dll firefox
  • On Android:
    $ am start -a android.intent.action.MAIN -n org.mozilla.fennec/.App --es env0 MOZ_REPLACE_MALLOC_LIB=/path/to/libreplace_jemalloc.so
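To double-check that the replacement library is actually being used, on GNU/Linux you can grep the process mappings; a quick sketch, reusing the library name from the list above:

$ grep libreplace_jemalloc /proc/$(pidof firefox)/maps   # assumes a single running firefox process

If that prints mapping lines, the library was loaded.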

No, you don’t need to rebuild Firefox to test it with jemalloc 3.6.0. The relevant library is now shipped in the nightly builds. Except on Android, as I haven’t figured out where to put it, but you can take the .so file from a local build and use it with a nightly build.

I would appreciate it if several people could start using jemalloc 3.x this way. There is still work to do to make it the default. In fact, the list of dependencies of the tracking bug still has the same bugs I filed a long time ago. Hopefully, the ease of use will make someone want to scratch those itches. Please ping me if you want to take one of those bugs.

2014-05-23 04:46:37+0200

p.m.o | 2 Comments »

Don’t ever use in-tree mozconfigs

I just saw two related gists about how some people are building Firefox.

Both make the same mistake, which is not really surprising, since one is based on the other. As I’m afraid people might pick it up, I’m posting this:

Don’t ever use in-tree mozconfigs

If your mozconfig contains something like

. $topsrcdir/something

Then remove it. Now.

Those mozconfigs are for use in automated builds. They make many assumptions about the build environment, namely that it is the one on the build slaves. Local developers shouldn’t need anything but minimalistic, self-contained mozconfigs. If there are things that could be changed in the build system to accommodate developers, file bugs (I could certainly see the .noindex suffix being automatically added to MOZ_OBJDIR by default on Mac).
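To give an idea, a perfectly workable local mozconfig can be as small as this (a sketch for a debug build; adjust the options to taste):

# Build in a separate object directory
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-firefox
# Debug build options
ac_add_options --enable-debug
ac_add_options --disable-optimize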

Corollary: if you can’t build Firefox without a mozconfig (for a reason other than your build environment missing build requirements), file a bug.

2014-05-14 02:27:50+0200

p.m.o | 3 Comments »

Faster compilations for everyone?

If you’re following this blog, you may be aware of the recent work on the shared compilation cache. It has been deployed with great results on Mozilla’s try server for all platforms (except a few build types, like ASAN or valgrind), and is being tested for Linux/Android builds on b2g-inbound (more on that in subsequent posts).

A side effect of the work to make it run on all platforms is that it can now be used to build Firefox on Windows, although that requires a specific setup. And as of recently, it’s also possible to use it with local storage instead of S3. This means we now have a (basic) ccache equivalent for Windows that works for building Firefox.

If you wish to try it, here is what you need to do:

  • Clone the repository from github:

    $ git clone https://github.com/glandium/sccache

  • Add the following to your mozconfig:

    ac_add_options "--with-compiler-wrapper=python2.7 path/to/sccache/sccache.py"
    export _DEPEND_CFLAGS='-deps$(MDDEPDIR)/$(@F).pp'
    mk_add_options "export CC_WRAPPER="
    mk_add_options "export CXX_WRAPPER="
    mk_add_options "export COMPILE_PDB_FLAG="
    mk_add_options "export HOST_PDB_FLAG="
    mk_add_options "export MOZ_DEBUG_FLAGS=-Z7"

    Update: Currently, path/to/sccache/sccache.py needs to be a Windows-style path (as opposed to an msys/cygwin path) with forward slashes.

  • Then set the SCCACHE_DIR environment variable to some local directory.
  • And build happily (see the example after this list).
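For instance, from an msys shell, that last part could look like this (the cache path is just an example; a Windows-style path is probably safest here too):

$ export SCCACHE_DIR=c:/sccache
$ ./mach build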

A few things to note:

  • As of writing, sccache doesn’t support cleaning up the storage directory, so it will grow indefinitely (until you clean it up yourself).
  • Because the MSVC preprocessor is not exactly fast, and because sccache doesn’t have a direct mode like ccache, it doesn’t make as much difference as ccache does.
  • It also works on non-Windows platforms, where it doesn’t require all the mozconfig changes above, only the --with-compiler-wrapper line (see the sketch after this list).
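That is, on Linux or Mac, the mozconfig part boils down to something like this (the path is an example):

ac_add_options "--with-compiler-wrapper=python2.7 /path/to/sccache/sccache.py"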

Play with it and feel free to fork it on github, and improve it. Pull requests encouraged.

2014-05-08 09:36:24+0200

p.m.o | No Comments »

怒り、失望、ストレス発散 (Anger, disappointment, stress relief)

I started learning Japanese calligraphy a few months ago, with no prior experience with a brush and ink. It is an interesting endeavour. For various reasons, I had to skip class for a few weeks, but after the past ten days, I needed some stress relief on paper.

怒り (anger)
失望 (disappointment)

スッキリしました。(I feel refreshed.)

2014-04-05 11:21:58+0200

me, p.d.o, p.m.o | 1 Comment »

Linux and Android try builds, now up to twice as fast

(Taras told me to use sensationalist titles to draw more attention, so here we are)

Last week, I brought up the observable build time improvements on Linux try builds from the use of the shared cache. I want to revisit those results now that there have been more builds, and look at the first results of the switch for Android try builds, which are now also using the shared cache.

Here is a comparison between the distribution of build times from last time (about ten days of try pushes, starting from the moment the shared cache was enabled) and the build times for the past ten days (which, roughly, start at the point where the previous data set stopped):

As expected, the build times are still improving overall thanks to the cache being fuller. The slowest build times are now slightly lower than the slowest build times we were getting without the shared cache. There is a small “regression” in the number of builds taking between 15 and 20 minutes, but that’s likely related to changes in the tree creating more cache misses. To summarize the before/after:

                              Unified                                        Non-unified
          shared (10 days)  shared (initially)  ccache    shared (10 days)  shared (initially)  ccache
Average   17:11             17:11               29:19     31:00             30:58               57:08
Median    13:03             13:30               30:10     22:07             22:27               60:57

[Note: I'm not providing graphs for non-unified builds; they look boringly similar, just with different values, which the average and median figures should give a sense of.]

Android try builds also got faster with shared cache. The situation looks pretty similar to what we observed after the first ten days of Linux try shared cache builds:

[Note: I removed two builds without shared cache from those stats, both of which took more than an hour for some reason I haven't investigated.]

As with the Linux builds, the fastest shared cache builds are slower than the fastest ccache builds, and so are the slowest ones, but as we can see above, those slowest builds get faster as the cache fills up. And as I wrote last week, work is under way to make the fastest builds faster.

This is what the average and median look like for Android try builds:

                  Unified               Non-unified
          shared    ccache        shared    ccache
Average   17:14     24:08         27:49     43:00
Median    13:52     24:57         20:35     47:17

2014-03-04 08:33:38+0200

p.m.o | 1 Comment »

Analyzing shared cache on try

As mentioned in the previous post, the shared cache is now effective on try for linux and linux64, opt and debug builds, provided the push has changeset a62bde1d6efe in its history. The unknown in that equation was how long it takes for landings in mozilla-inbound or mozilla-central to propagate to try pushes.
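As an aside, if you want to check whether the tree you are about to push already contains it, something like this should do the trick (a sketch; it prints the changeset if it is an ancestor of your working directory parent, and errors out if your clone doesn’t know about it at all):

$ hg log -r 'ancestors(.) and a62bde1d6efe' --template '{node|short}\n'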

So I took a period of about 8 days and observed, over a sliding 24-hour window, the percentage of pushes containing that changeset; and to see whether the dev-tree-management post had an impact, I also looked at a random mozilla-central changeset, 339f0d450d46. This is what it looks like:

So it takes about two and a half days for a mozilla-central changeset to propagate to most try pushes, and it looks like my dev-tree-management post (which was cross-posted on dev-platform) didn’t have an impact, although 339f0d450d46 is close enough to the announcement that it could still be benefiting from it. I’ll revisit this with future unannounced changes.

The drop that can be seen on February 16 is due to there being fewer pushes overall over the weekend, which somehow made pushes without changeset a62bde1d6efe more prominent. Maybe contributors pushing on the weekend are more likely to push old trees.

Now, let’s see what effect shared cache had on try build times. I took about the last two weeks of successful try build logs for linux and linux64, opt and debug, and analyzed them to extract the following data:

  • Where they were built (in-house vs AWS),
  • Whether they were built with unified sources or not (this significantly changes build times),
  • Whether they used the shared cache or not,
  • Whether they are PGO builds or not,
  • How long the “compile” step took (which, really, is “make -f client.mk”, so this includes more than compilation, like configure and copying many files),

There sadly weren’t enough PGO builds to plot anything about them, so I just excluded them. Then, since the shared cache is only enabled on AWS builds, and since AWS and in-house build times are so different, I excluded in-house builds. Looking further at the build times for linux opt, linux64 opt, linux debug and linux64 debug, they all looked similar enough that they didn’t need to be split into different buckets.

Update: I should mention that I also excluded my own try pushes because I tended to do multiple rebuilds on them, with all of them getting near 100% cache hit and best build times.

Sorting all that data by build time, the following are graphs showing how many builds took less than a given build time.

For unified sources builds (870 builds with ccache, 1111 builds with shared cache):

For non-unified sources builds (302 builds with ccache, 332 builds with shared cache):

The first thing to note is that this does include the very first try pushes with shared cache, which probably skews the slowest builds. It should also be noted that linux debug builds are (still) currently non-unified by default for some reason.

With that being said, about 3.25% of unified builds end up slower with the shared cache than with ccache, and about 5.2% of non-unified builds. Most of that is at the fast end, where builds with the shared cache can take twice as long as they would with ccache. I’m currently working on changes that should make the difference slimmer (more on that in a subsequent post). Anyway, that still leaves more than 90% of builds faster with the shared cache, and makes for a big improvement in build times on average:

                  Unified               Non-unified
          shared    ccache        shared    ccache
Average   17:11     29:19         30:58     57:08
Median    13:30     30:10         22:27     60:57

Interestingly, a few of the fastest non-unified builds with the shared cache were significantly faster than the others, and it looks like what they have in common is that they were built in the US-East-1 region instead of the US-West-2 region. I haven’t looked into more detail as to why those particular builds were much faster.

2014-02-25 03:57:20+0200

p.m.o | 1 Comment »