For a reminder of the situation, see my previous posts on the subject.
I finally had time at work to implement a solution to the problem. The result is a bit under 400 lines of code, which I hope to be able to make free-as-speech. I’ll probably have tons of paperwork to do with my employer before that can happen…
The program implements a FUSE filesystem that you feed with a raw device image file (though I’m willing to implement more image file formats), and that exposes the individual partitions as separate files, named “partN.fstype”. These files can then be mounted via loopback devices in the standard way (which was not possible on vmfs, for the reason explained in my first post on the subject), with the benefit of not requiring offset adjustments (as you would need when mounting partitions from a disk image), or some loopback device hack.
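To illustrate the offset arithmetic that mounting a partition straight out of a flat image normally requires (and that this filesystem makes unnecessary), here is a small, hypothetical sketch in Python that reads the four primary entries of an MBR partition table and computes their byte offsets. The function name and layout are mine, not the actual tool’s code, which uses libparted instead of parsing the table by hand:

```python
import struct

SECTOR_SIZE = 512

def read_mbr_partitions(data):
    """Parse the four primary partition entries from a 512-byte MBR sector.

    Returns a list of (partition_number, byte_offset, byte_length, type_id)
    tuples for the non-empty entries. byte_offset is what you would pass to
    a loopback device's offset option when mounting from a whole-disk image.
    """
    if len(data) < 512 or data[510:512] != b"\x55\xaa":
        raise ValueError("not a valid MBR: missing 0x55AA signature")
    parts = []
    for i in range(4):
        # Each 16-byte entry starts at offset 446; byte 4 is the type ID,
        # bytes 8-11 the starting LBA, bytes 12-15 the sector count.
        entry = data[446 + 16 * i : 446 + 16 * (i + 1)]
        type_id = entry[4]
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        if type_id != 0 and num_sectors > 0:
            parts.append((i + 1, lba_start * SECTOR_SIZE,
                          num_sectors * SECTOR_SIZE, type_id))
    return parts
```

With the FUSE filesystem, none of this is needed on the user side: each “partN.fstype” file already starts at the right offset, so a plain loopback mount works.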
Additionally, it has an internal copy-on-write mechanism, so it is possible to mount “dirty” ext3 filesystems (e.g. a snapshot of a mounted filesystem) without modifying the original disk image. Note that there is no way, yet, to keep these writes after unmounting the FUSE filesystem.
It uses libparted to read the partition table, which means it can handle any disk label type parted supports, such as BSD disklabels, Sun partition tables, and so on. It doesn’t support LVM, though.
Unfortunately, VMware ESX Server 3.0 is based on Red Hat Enterprise Linux 3, so only an ancient version of libparted is available there. Incidentally, this version (1.6.3) had a pretty bad bug that made it unusable on regular files instead of devices: it tried to do a BLKGETSIZE (or was it HDIO_GETGEO?) ioctl on them. To work around this, I implemented my own PedArchitecture. That turned out to be something of a revelation, because with the same mechanism, I can implement support for different disk image types :).
Anyway, there have been quite a few libparted API changes since then, so some updates to the code will be needed…
In other news, I started working on ext3rminator again. There may be a new release in a few weeks.
Update: Thinking again about the ioctl issue, I’m now wondering whether vmfs couldn’t actually be responsible (again) for the failure… I’ll check that on Monday.