# Playing evil with VMWare ESX Server, part 2

*December 19, 2006*

I finally gave [my perverse solution](http://web.glandium.org/blog/?p=105) a try.
Using [unionfs](http://www.am-utils.org/project-unionfs.html) led to a kernel oops on the service console (which means it brought the VMs down, too).
[funionfs](http://funionfs.apiou.org/), being a FUSE filesystem, could not cause the same kind of crash, but I had some issues with it:

- The result of `lseek()` was stored in an `int` variable even though `_FILE_OFFSET_BITS` is set to 64, making it impossible to seek past 2 GB of data, which ext3 recovery needed to do.
- When a file is opened read/write (which the loopback device does), funionfs also attempts to open the original file read/write on the read-only directory, which fails with `EBUSY` when the file is already open elsewhere (i.e. by VMWare ESX Server).

Once these issues were solved, it still didn't work. The problem is that funionfs doesn't handle files that are only partially written to: it creates a new file in the read/write directory and writes exactly what was requested, producing a sparse file. Subsequent reads are then served from that sparse file, which means everything except the data that was actually written reads back as zeroes.

I'm afraid there's nothing better to do than pick a somewhat arbitrary chunk size, write to the new file in whole chunks (reading any missing data from the original file when necessary), and keep a bitmap of the chunks that have been written to the new file...

I should check what the original unionfs does in this case.