On Jan 23, 2018, at 1:44 AM, Mark H Weaver <m...@netris.org> wrote:
> Andreas Dilger <adil...@dilger.ca> writes:
>
>> On Jan 20, 2018, at 5:06 PM, Mark H Weaver <m...@netris.org> wrote:
>>> Yes, on Btrfs I reliably see (st_blocks == 0) on a recently written,
>>> mostly sparse file with size > 8G, using linux-libre-4.14.14.  More
>>> specifically, the "storing sparse files > 8G" test in tar's test suite
>>> reliably fails on my system:
>>>
>>> 140: storing sparse files > 8G  FAILED (sparse03.at:29)
>>
>> I'd consider this a bug in Btrfs.
>
> On what basis?  Can you formulate a precise rule regarding 'st_blocks'
> that is worth defending, that would enable this optimization, and that
> Btrfs is violating here?
We considered it a bug in ext4 and Lustre on the basis that it broke
existing tools (tar, and AFAIR cp) that were working fine when delayed
allocation and inline data features were not enabled.  Since we were in
a position to fix the filesystems faster than other tools (potentially
those beyond tar/cp that we were not aware of), we decided to return an
approximation for the st_blocks value to ensure that userspace tools do
not behave incorrectly.

>> As mentioned previously, we had the same problem with ext4 (twice) and
>> Lustre, and in both cases fixed this by adding in the (dirty page
>> cache pages/512) if the current block count is zero:
>
> Would you like to propose a fix to the Btrfs developers?

I don't use Btrfs and don't know anything about the code, so am not
really in a position to do that, but would be happy to discuss it with
them if you CC me on a thread/bugzilla related to the issue.

Cheers, Andreas