On 10/17/13 1:03 AM, Matthew Ahrens wrote:
> On Wed, Oct 16, 2013 at 5:00 PM, Steven Hartland wrote:
>     What about the case where the admin has deliberately set a smaller
>     zfs_arc_max to keep ZFS / ARC memory requirements down because they
>     want the memory for other uses, and there is no L2ARC?
> 
>     In this case, sizing the hash based on the machine's physmem could
>     counteract this and hence cause a problem, could it not?
> 
>     I know it's extreme, but take for example a machine with 256GB of
>     RAM but zfs_arc_max set to 1GB: you'd be allocating 256MB of that as
>     the hash size, which is surely a massive waste, as you wouldn't need
>     256MB of hash for just 1GB of ARC buffers?
> 
>     Am I still barking up the wrong tree?
> 
> 
> They can dynamically change arc_c_max after boot, which could leave
> the hash table much too small if it was sized based on what
> zfs_arc_max was at boot time.
> 
> I'd say keep it simple until we see a problem.

+1.
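For anyone following along, here is a minimal sketch of the trade-off
being discussed. The 1/1024 ratio and the helper names are assumptions
chosen only to reproduce the 256GB-physmem / 256MB-hash arithmetic from
Steven's mail; they are not the actual buf_init() constants, and the
real sizing logic differs.

#include <stdio.h>
#include <stdint.h>

#define GB (1ULL << 30)
#define MB (1ULL << 20)

/*
 * Hypothetical sizing helpers, assuming the 1/1024 ratio implied by
 * the quoted example (256GB physmem -> 256MB hash).  This only
 * illustrates the scale of the mismatch, not the real ZFS code.
 */
static uint64_t
hash_size_from_physmem(uint64_t physmem_bytes)
{
	return (physmem_bytes / 1024);
}

static uint64_t
hash_size_from_arc_max(uint64_t arc_max_bytes)
{
	return (arc_max_bytes / 1024);
}

int
main(void)
{
	uint64_t physmem = 256 * GB;	/* the extreme machine in the mail */
	uint64_t arc_max = 1 * GB;	/* admin-tuned zfs_arc_max */

	printf("hash sized from physmem:     %llu MB\n",
	    (unsigned long long)(hash_size_from_physmem(physmem) / MB));
	printf("hash sized from zfs_arc_max: %llu MB\n",
	    (unsigned long long)(hash_size_from_arc_max(arc_max) / MB));

	/*
	 * 256 MB vs 1 MB: the waste Steven describes.  But sizing from
	 * zfs_arc_max alone breaks if arc_c_max is raised after boot,
	 * which is Matthew's counterpoint.
	 */
	return (0);
}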

-- 
Saso
