On Fri, 4 Sep 2009, Andrew Deason wrote:
<snip>
The discrepancy we're hitting can easily be seen by simply doing this
on a ZFS filesystem with default settings:
dd if=/dev/urandom of=somefile bs=1024 count=1024
sleep 10
dd if=/dev/urandom of=somefile bs=1 count=1
sleep 10
stat somefile | grep Size
Size: 1 Blocks: 261 IO Block: 131072 regular file
So, a file that was 1M and then truncated down to 1 byte still takes
up 130k-ish of disk space (261 blocks * 512 bytes).
<snap>
I'm not sure what to do about this. Does anyone reading this know enough
about ZFS internals to shed some light on this? I've got a few potential
directions to go in, though:
<snappeti>
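For reference, here's roughly the same reproduction in C. This is an
untested sketch; the path is just a placeholder for a file on a ZFS
filesystem, and the sizes and sleeps simply mirror the dd example. It
mostly makes it easy to see what stat() reports at each step:

/* truncblocks.c - untested sketch mirroring the dd reproduction above. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>

static void report(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0) {
        perror("stat");
        exit(1);
    }
    /* st_blocks is in 512-byte units, so 261 blocks is roughly 130 KiB */
    printf("size=%lld bytes  blocks=%lld (%lld bytes on disk)\n",
           (long long)st.st_size, (long long)st.st_blocks,
           (long long)st.st_blocks * 512);
}

int main(void)
{
    const char *path = "/tank/somefile";   /* placeholder ZFS path */
    char buf[1024];
    int fd, i;

    memset(buf, 0xab, sizeof(buf));

    /* write 1 MiB, like the first dd */
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    for (i = 0; i < 1024; i++)
        write(fd, buf, sizeof(buf));
    close(fd);
    sleep(10);
    report(path);

    /* rewrite it as a 1-byte file, like the second dd */
    fd = open(path, O_WRONLY | O_TRUNC);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, buf, 1);
    close(fd);
    sleep(10);
    report(path);   /* size=1, but blocks still ~261 in the report above */

    return 0;
}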
Some additional (probably stupid) thoughts:
- What happens if you truncate the file by opening it with O_TRUNC?
  Or are we doing that already, or will it just make the cache go
  berserk? (There's a sketch of this and the fstat() idea after the
  list.)
- I was going to suggest doing fstat() on the fd, but it probably
  won't help, since st_blocks only reflects blocks actually allocated
  _on disk_; anything still hanging in memory pending a disk flush
  won't be counted.
- Or just use memcache...
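To make the first two bullets a bit more concrete, here's another
untested sketch. The path is the same placeholder as above, and it
assumes the file has already been grown past one record as in the dd
example. It truncates via open(O_TRUNC), sets the size explicitly
with ftruncate(), and then asks the same open fd via fstat(), before
and after an fsync():

/* trunc_compare.c - untested sketch for the O_TRUNC / fstat() ideas above. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>

static void report_fd(const char *tag, int fd)
{
    struct stat st;

    if (fstat(fd, &st) != 0) {
        perror("fstat");
        return;
    }
    /* st_blocks counts 512-byte units allocated on disk, so it can lag
     * behind st_size until pending changes have been flushed. */
    printf("%s: size=%lld blocks=%lld\n", tag,
           (long long)st.st_size, (long long)st.st_blocks);
}

int main(void)
{
    const char *path = "/tank/somefile";   /* placeholder */
    int fd;

    /* Variant 1: truncate as a side effect of open(O_TRUNC). */
    fd = open(path, O_WRONLY | O_TRUNC);
    if (fd < 0) { perror("open"); return 1; }
    report_fd("after O_TRUNC", fd);

    /* Variant 2: set the size explicitly with ftruncate() on the same
     * fd (back up to 1 byte, to match the dd example). */
    if (ftruncate(fd, 1) != 0)
        perror("ftruncate");
    report_fd("after ftruncate", fd);

    /* Force the change out and look again. */
    if (fsync(fd) != 0)
        perror("fsync");
    report_fd("after fsync", fd);

    close(fd);
    return 0;
}

If the blocks count is still ~261 after the fsync() (as it was after
the 10-second sleep in the dd example), then the extra space isn't
just data waiting in memory, and reading st_blocks from the fd or the
path won't get us a smaller number.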
/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se | [email protected]
---------------------------------------------------------------------------
Gravity is a law. Lawbreakers will be brought down!
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=