Mark Stapper wrote:
Christian Walther wrote:
2009/7/29 grarpamp <grarp...@gmail.com>:
One week old build...

# df -i .
Filesystem   1K-blocks      Used Avail Capacity iused ifree %iused  Mounted on
ram01/mnt1 239465344 239465344     0   100%   13163     0  100%   /mnt1
# ls -aliT zero
20797 -rw-r--r--  1 user user  43515904 Jul 28 23:20:57 2009 zero
# rm -f zero
rm: zero: No space left on device
# :> zero
cannot create zero: File exists
# cp /dev/null zero
overwrite zero? (y/n [n]) y
# ls -aliT zero
20797 -rw-rw-rw-  1 root  wheel  0 Jul 28 23:25:17 2009 zero
# rm -f zero
[gone]
this is a known problem with the current version of ZFS. Because of the
way ZFS handles access to the data it stores, even an rm causes a
write, which requires some additional disk space up front: instead of
simply unlinking what should be removed, ZFS creates another tree
without the removed data. Only once this new tree has been entirely
written to disk is the old information removed. This is a rather rough
explanation and probably not entirely correct, but I hope it suffices.
Only hope: make sure that not all disk space is ever used.
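A quick way to keep an eye on how close a dataset is getting to that
point (using the pool and filesystem names from the transcript above;
adjust for your own setup) could be:

# zpool list ram01
# zfs list -o name,used,avail ram01/mnt1

As long as the AVAIL column stays comfortably above zero, rm should
still be able to allocate the blocks it needs for the new tree.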

Christian
Indeed, if by some coincidence (like a growing logfile) every single byte
is used, even the copy action might fail...
To prevent this you could set the maximum size of all filesystems in your
pool so that the sum of them is smaller than the size of your pool.
Just a thought.
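For instance (the second filesystem name and the quota values here are
only illustrative, not taken from the posts above):

# zfs set quota=100G ram01/mnt1
# zfs set quota=50G ram01/data
# zfs get quota ram01/mnt1 ram01/data

As long as the quotas add up to less than the pool size, the
filesystems cannot fill the pool even if they all hit their limits.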
Greetz,
Mark

Wouldn't that be counterproductive?
One advantage of ZFS is that you don't need to manage space, and that free space is shared. Setting sizes like this would reintroduce the hassle of free space management. Wouldn't the simplest approach be to set a reservation on an unused (or known non-growing) filesystem, and/or quotas on the filesystems susceptible to growing out of control?
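Something along these lines, perhaps (the dataset names and sizes are only for illustration, not from the setup above):

# zfs create -o reservation=1G ram01/spare
# zfs set quota=20G ram01/logs

The first dataset stays empty; its reservation simply keeps 1 GB of the pool free, so removals always have room to write their new metadata. The second line caps a filesystem that tends to grow (logs, say) so it cannot eat the whole pool on its own.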

Arnaud
