On 10/15/13 3:34 AM, Richard Yao wrote:
> On 10/12/2013 07:21 PM, Saso Kiselkov wrote:
>> The current implementation is hardcoded guesswork as to the correct hash
>> table size. In principle, it works by taking the amount of physical
>> memory and dividing it by a 64k block size; the result is the number of
>> hash buckets that will be created. On 64-bit machines this comes out to
>> roughly 128kB of hash table for each 1 GB of physical memory. This
>> approach has obvious flaws:
> 
> What happens if you export the pool on one system and then try to import
> it on a system with less memory? SAS switches make that scenario more
> likely than most would expect.

The hash table is sized by each system individually when it loads the
zfs kernel module (it's a purely in-memory data structure), so one
system will have a larger hash table than the other. See buf_init() in
arc.c.
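
For anyone following along without the source handy, below is a rough
sketch of the sizing logic described above. It is not the actual arc.c
code, just an illustration of how the bucket count falls out of physical
memory and the 64k average block size; the names (physmem_bytes,
arc_hash_buckets) and the round-up to a power of two are assumptions
made for the example:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch, not the real buf_init(): size the hash table to
 * one bucket per 64K of physical memory, rounded up to a power of two.
 */
static uint64_t
arc_hash_buckets(uint64_t physmem_bytes)
{
	uint64_t hsize = 1ULL << 12;	/* start at 4096 buckets */

	/* Double until the table covers physmem at one bucket per 64K. */
	while (hsize * 65536 < physmem_bytes)
		hsize <<= 1;
	return (hsize);
}

int
main(void)
{
	uint64_t physmem_bytes = 1ULL << 30;	/* 1 GB, for the example */
	uint64_t buckets = arc_hash_buckets(physmem_bytes);

	/* 16384 buckets * 8-byte pointers = 128 KB of table per 1 GB. */
	printf("%llu buckets, %llu KB of hash table\n",
	    (unsigned long long)buckets,
	    (unsigned long long)(buckets * sizeof (void *) / 1024));
	return (0);
}

Since each system derives its own table size at module load from its own
physical memory, a box with less memory simply ends up with a smaller
(and more heavily chained) table after import; the on-disk pool is not
affected.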

Cheers,
-- 
Saso
