>On 2012-04-26 11:27, Fred Liu wrote:
>> "zfs 'userused@' properties" and "'zfs userspace' command" are good
>> enough to gather usage statistics.
>...
>> Since no one is focusing on enabling default user/group quota now, the
>> temporary remedy could be a script which traverses all the
>> users/groups in the directory tree. Though it is not so decent.
>
>find /export/home -type f -uid 12345 -exec du -ks '{}' \; | summing-script
>
>I think you could use some prefetch of dirtree traversal, like a "slocate"
>database, or roll your own (perl script).
>But yes, it does seem like stone age compared to ZFS ;)
>

Thanks for the hint. By "traverse all the users/groups in the directory 
tree" I meant getting all user/group information from a naming service such 
as NIS/LDAP for a specific file system. For each user or group found, we can 
then run "zfs set userquota@..." / "zfs set groupquota@..." to apply the 
default value.
As for usage accounting, the 'userused@' properties and the 'zfs userspace' 
command are good enough. We can also use a script to do the summing by 
traversing all the pools/filesystems -- see the sketch below.

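Something along these lines is what I have in mind. Rough sketch only: the 
dataset name "tank/home", the 10G default, and the awk summing are 
placeholders, and it assumes the naming service can be enumerated through 
getent (LDAP setups often restrict that).

  #!/bin/sh
  FS=tank/home            # placeholder dataset
  QUOTA=10G               # placeholder "default" quota

  # Enumerate users from the naming service (files/NIS/LDAP via nsswitch)
  # and apply a per-user quota as a poor man's default.
  getent passwd | cut -d: -f1 | while read u; do
      zfs set userquota@"$u"="$QUOTA" "$FS"
  done

  # Usage accounting: sum the per-user numbers across all file systems.
  zfs list -H -o name -t filesystem | while read fs; do
      zfs userspace -Hp -o name,used "$fs"
  done | awk '{sum[$1] += $2} END {for (u in sum) print u, sum[u]}'
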
>> Currently, dedup/compression is pool-based right now,
>
>Dedup is pool-wide, compression is dataset-wide, applied to individual blocks.
>Even deeper, both settings apply to new writes after the corresponding
>dataset's property was set (i.e. a dataset can have files with mixed
>compression levels, as well as both deduped and unique files).
>
>
>> they don't have
>> the granularity on file system or user or group level.
>> There is also a lot of improving space in this aspect.
>
>This particular problem was discussed a number of times back on OpenSolaris
>forum. It boiled down to what you actually want to have accounted and
>perhaps billed - the raw resources spent by storage system, or the logical
>resources accessed and used by its users?
>
>Say, you provide VMs with 100Gb of disk space, but your dedup is lucky
>enough to use 1TB overall for say 100 VMs. You can bill 100 users for full
>100Gb each, but your operations budget (and further planning, etc.) has only
>been hit for 1Tb.
>

The ideal situation would be to know both the logical and the physical usage 
per user/group exactly. That is not achievable today, and even if we knew it 
we still could not predict the physical usage in advance, because the 
savings from dedup/compression vary with the usage pattern.

Yes, we do get a bonus from dedup/compression, but from my side there is no 
good way to fold that bonus into a budget plan.

>HTH,
>//Jim
>

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
