On Sep 18, 2009, at 7:36 AM, Andrew Deason wrote:
On Thu, 17 Sep 2009 18:40:49 -0400
Robert Milkowski <mi...@task.gda.pl> wrote:
If you created a dedicated dataset for your cache and set a quota on it,
then instead of tracking disk space usage for each file you could easily
check how much disk space is being used in the dataset. Would that
suffice for you?
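(For illustration, with invented pool/dataset names, that approach is just:

    # zfs create tank/cache
    # zfs set quota=50G tank/cache
    # zfs list -o name,used,avail,quota tank/cache

one "zfs list" -- or a df on the mountpoint -- then reports how much of the
cache's budget is consumed, with no per-file bookkeeping.)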
No. We need to be able to tell how close to full we are, to determine
when to start/stop removing things from the cache before we can add new
items to the cache again.
The transactional nature of ZFS may work against you here.
Until the data is committed to disk, it is unclear how much space
it will consume. Compression clouds the crystal ball further.
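To sketch what that means in practice (dataset name invented, and assuming
compression is enabled on it):

    # dd if=/dev/zero of=/tank/cache/bigfile bs=1M count=64
    # zfs list -o used,avail tank/cache
    # sync
    # zfs list -o used,avail,compressratio tank/cache

Until the pending transaction group commits, the new 64MB may not show up
in "used" at all; and with compression on, a file of zeros like this may
end up charged as almost nothing, so the logical size tells you little
about the space it will actually consume.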
I'd also _like_ not to require a dedicated dataset for it, but it's not
like it's difficult for users to create one.
Use delegation. Users can create their own datasets, set parameters,
etc. For this case, you could consider changing recordsize, if you
really are so worried about 1k. IMHO, it is easier, and less expensive
in process and pain, to just buy more disk when needed.
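Roughly, with made-up user and pool names, the admin delegates once:

    # zfs allow aduser create,mount,quota,recordsize tank

and then the user manages their own dataset:

    $ zfs create -o recordsize=8k tank/cache
    $ zfs set quota=50G tank/cache

so nothing requires root or a pre-provisioned dataset from the admin.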
-- richard