<h1>Don't trust btrfs-show</h1>
<p><em>glandium.org, 2010-09-25</em></p>
<p>I recently started using btrfs, in both raid0 and raid1 setups, and migrated some data there.</p>
<p>Once all the data was copied, I wanted to check how much overhead moving to btrfs meant, space-wise.</p>
<blockquote><p><code># df -h /mnt/share<br/>
Filesystem            Size  Used Avail Use% Mounted on<br/>
/dev/sdb3             932G  682G  250G  74% /mnt/share</code></p>
<p><code># btrfs-show /dev/sdb3<br/>
Label: none  uuid: e06484de-fb18-4564-b6e6-adbaed8bebec<br/>
    Total devices 2 FS bytes used 681.61GB<br/>
    devid    2 size 465.66GB used 427.13GB path /dev/sdb3<br/>
    devid    1 size 465.66GB used 427.15GB path /dev/sda3<br/>
</code></p></blockquote>
<p>The filesystem was freshly created, in a raid0 fashion, so both 465GB partitions are used to form a (roughly) 930GB volume.<br/>
So, I had filled it with 682GB of data, and <code>df</code> was telling me 250GB were left, which is not unexpected. What I didn't expect, though, is <code>btrfs-show</code> telling me 427GB were used on each partition.</p>
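A quick sanity check of those numbers (a sketch; the figures are copied from the <code>df</code> and <code>btrfs-show</code> outputs above, rounded to whole GB):

```shell
# Back-of-the-envelope overhead estimate, using the rounded figures above.
allocated=$((427 + 427))   # GB reported "used" by btrfs-show on devid 1 and 2
data=681                   # GB actually stored, per "FS bytes used"
overhead=$(( (allocated - data) * 100 / data ))
echo "${allocated}GB allocated for ${data}GB of data: ${overhead}% overhead"
```

Shell integer arithmetic truncates, but it still lands on 25%.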
<p>Again, the filesystem was freshly created and the data had just been copied over, so I wasn't expecting these numbers to be off.</p>
<p>That meant 854GB were being allocated on disk for 681GB of data: 25% overhead!</p>
<p>Curious to know how much free space that really left, I tried a stupid thing: filling the filesystem as much as btrfs permits:</p>
<blockquote><p><code># dd if=/dev/zero of=/mnt/share/zero bs=1M<br/>
dd: writing `/mnt/share/zero': No space left on device<br/>
253359+0 records in<br/>
253358+0 records out<br/>
265665118208 bytes (266 GB) copied, 1144.41 s, 232 MB/s<br/>
# du -sh /mnt/share/zero<br/>
248G    /mnt/share/zero<br/>
</code></p></blockquote>
<p>It turns out I could actually write the amount of free space <code>df</code> was claiming was available (modulo GiB-GB conversions).</p>
<blockquote><p><code># df -h /mnt/share<br/>
Filesystem            Size  Used Avail Use% Mounted on<br/>
/dev/sdb3             932G  930G  2.0G 100% /mnt/share</code></p>
<p><code># btrfs-show /dev/sdb3<br/>
Label: none  uuid: e06484de-fb18-4564-b6e6-adbaed8bebec<br/>
    Total devices 2 FS bytes used 929.36GB<br/>
    devid    2 size 465.66GB used 465.13GB path /dev/sdb3<br/>
    devid    1 size 465.66GB used 465.15GB path /dev/sda3<br/>
</code></p></blockquote>
<p>So, this time <code>df</code> is slightly off, and <code>btrfs-show</code> somehow agrees the filesystem is full.</p>
<p>But the surprises don't stop there:</p>
<blockquote><p><code># rm /mnt/share/zero</code></p>
<p><code># df -h /mnt/share<br/>
Filesystem            Size  Used Avail Use% Mounted on<br/>
/dev/sdb3             932G  682G  250G  74% /mnt/share</code></p>
<p><code># btrfs-show /dev/sda3<br/>
Label: none  uuid: e06484de-fb18-4564-b6e6-adbaed8bebec<br/>
    Total devices 2 FS bytes used 929.36GB<br/>
    devid    2 size 465.66GB used 465.13GB path /dev/sdb3<br/>
    devid    1 size
465.66GB used 465.15GB path /dev/sda3<br/>
</code></p></blockquote>
<p>That's right: even after the file removal, as far as <code>btrfs-show</code> is concerned, everything is still in use, even after several syncs, or several hours later.</p>
<p>All in all, just don't trust what <code>btrfs-show</code> says.</p>
<p>NB: one also needs to know that <code>df</code> reports wrong sizes for raid1 setups, but that's another story.</p>
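As for the "modulo GiB-GB conversions" caveat in the <code>dd</code> experiment above: <code>dd</code> reports decimal gigabytes while <code>du -h</code> and <code>df -h</code> report binary gibibytes, so the 266 GB and 248G figures are the same amount of data. A small sketch using the byte count <code>dd</code> printed:

```shell
# Same byte count, expressed in dd's unit (GB) and du/df's unit (GiB).
bytes=265665118208              # total bytes dd reported writing
gb=$(( bytes / 1000000000 ))    # decimal GB, truncated
gib=$(( bytes / 1073741824 ))   # binary GiB, truncated
echo "${gb} GB is about ${gib} GiB"
```

The tools round rather than truncate, which is why they showed 266 GB and 248G respectively.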