Playing evil with VMWare ESX Server, part 2

I finally gave my perverse solution a try.
Using unionfs led to a kernel oops on the service console (which means it brought VMs down, too).
funionfs, being a FUSE filesystem, couldn't bring the kernel down like that, but I ran into some issues with it:

  • the result of lseek() was stored in an int variable even though _FILE_OFFSET_BITS is set to 64... making it impossible to seek past 2GB of data, which ext3 recovery needed to do (see the sketch after this list)...
  • when a file is opened read/write (which the loopback device does), funionfs also attempts to open the original file in the read-only directory read/write, which leads to EBUSY when that file is already open elsewhere (i.e. in VMWare ESX Server).
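For reference, the first bug boils down to something like the following. This is a minimal standalone illustration, not funionfs's actual code: with _FILE_OFFSET_BITS set to 64, lseek() returns a 64-bit off_t, and storing that in an int silently truncates any offset past 2GB.

```c
#define _FILE_OFFSET_BITS 64
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;

    int bad = lseek(fd, 0, SEEK_END);    /* bug: 64-bit off_t truncated to int */
    off_t good = lseek(fd, 0, SEEK_END); /* fix: keep the full 64-bit offset */

    /* On a 3GB file, "bad" wraps around to a negative value while
     * "good" holds the real size. */
    printf("int: %d, off_t: %lld\n", bad, (long long)good);
    close(fd);
    return 0;
}
```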

Once these issues were solved, it... still doesn't work. The problem is that funionfs doesn't handle partial writes to files: it creates a new file in the r/w directory and writes exactly what was requested, creating a sparse file. Subsequent reads then get their data from that sparse file, which means everything except the data that was actually written reads back as zeroes.
I'm afraid there's nothing better to do than pick a somewhat arbitrary chunk size, write to the sparse file in whole chunks (reading the missing data from the original file when necessary), and keep a bitmap of which chunks have been written to the new file... (sketched below)
I should check what the original unionfs does with this.
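Here's roughly what I mean, as a standalone sketch rather than actual funionfs code; the struct and function names are made up, and it assumes one read-only and one read-write descriptor per open file:

```c
#define _XOPEN_SOURCE 500       /* for pread()/pwrite() */
#define _FILE_OFFSET_BITS 64
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK_SIZE 65536        /* the "somewhat arbitrary" chunk size */

struct cow_file {
    int ro_fd;                  /* original file in the read-only directory */
    int rw_fd;                  /* sparse copy in the r/w directory */
    uint8_t *bitmap;            /* one bit per chunk copied to the new file */
};

static int chunk_copied(const struct cow_file *f, off_t c)
{
    return f->bitmap[c / 8] & (1 << (c % 8));
}

/* Copy a whole chunk from the original before it gets partially
 * overwritten, so the untouched bytes in that chunk stay correct. */
static int copy_up_chunk(struct cow_file *f, off_t c)
{
    char buf[CHUNK_SIZE];
    ssize_t n = pread(f->ro_fd, buf, CHUNK_SIZE, c * CHUNK_SIZE);
    if (n < 0)
        return -1;
    if (n > 0 && pwrite(f->rw_fd, buf, n, c * CHUNK_SIZE) != n)
        return -1;
    f->bitmap[c / 8] |= 1 << (c % 8);
    return 0;
}

/* Write path: copy up every chunk the write overlaps, then let the
 * write go through to the r/w copy as before. */
ssize_t cow_write(struct cow_file *f, const char *buf, size_t len, off_t off)
{
    if (len == 0)
        return 0;
    off_t last = (off + len - 1) / CHUNK_SIZE;
    for (off_t c = off / CHUNK_SIZE; c <= last; c++)
        if (!chunk_copied(f, c) && copy_up_chunk(f, c) < 0)
            return -1;
    return pwrite(f->rw_fd, buf, len, off);
}

/* Read path: copied chunks come from the r/w copy, everything else
 * from the original. (Reads straddling both kinds would have to be
 * split; omitted here for brevity.) */
ssize_t cow_read(struct cow_file *f, char *buf, size_t len, off_t off)
{
    int fd = chunk_copied(f, off / CHUNK_SIZE) ? f->rw_fd : f->ro_fd;
    return pread(fd, buf, len, off);
}
```

The bitmap would also have to be persisted somewhere alongside the copy for this to survive a remount.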

2006-12-19 22:51:29+0900

diskimgfs


2 Responses to “Playing evil with VMWare ESX Server, part 2”

  1. textshell Says:

    If you can get the disk image as a readable loopback device, maybe you can do the rest with device mapper snapshotting.
    Just a thought…

  2. glandium Says:

    textshell: I’m not sure device mapper snapshotting is stable enough on 2.4…