On 10/18/13 10:16, Saso Kiselkov wrote:
> On 10/18/13 4:36 PM, Prakash Surya wrote:
>> Other than what Matt brought up, it would be nice to have a way to
>> override the default values without recompiling. But we can add that to
>> the Linux port as needed. I don't really know how you guys support that
>> sort of thing.
>
> On the Linux port I think the guys add sysctl's to change these values.
> In Illumos we simply use mdb and modify the in-core value at runtime
> (that's essentially what /etc/system does).

Under Linux, kernel module options are used. They can all be set at module load time, and some may also be changed at runtime through the sysfs virtual filesystem.
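
For reference, a minimal sketch of how such a tunable might be exposed as a
module parameter (the wiring below is illustrative only; the variable in the
patch is a uint64_t, and the actual port would use whatever the SPL provides):

    #include <linux/module.h>

    /* One ARC hash lock per 2^shift bytes of memory (illustrative default). */
    static unsigned long zfs_arc_ht_lock_shift = 32;
    module_param(zfs_arc_ht_lock_shift, ulong, 0644);
    MODULE_PARM_DESC(zfs_arc_ht_lock_shift,
        "Allocate one ARC hash lock per 2^shift bytes of memory");

With 0644 permissions the value also shows up under
/sys/module/<module>/parameters/, which is what makes reading it, and for
parameters that tolerate it changing it, possible at runtime.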

Additional comments:

235  * We want to allocate one hash lock for every 4GB of memory with a minimum
236  * of MIN_BUF_LOCKS.
237  */
238 uint64_t zfs_arc_ht_lock_shift = 32;
239 #define MIN_BUF_LOCKS   256

Have you considered allocating more hash locks by default? I've suspected for a while now that this relatively small number of locks can cause a noticeable performance penalty for highly parallel workloads. Perhaps increase the minimum to 1024; we're only talking about a few extra KB of space.
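
To make that concrete, here's a rough sketch of the sizing logic I have in
mind, assuming a hypothetical physmem_bytes value (however the platform
reports physical memory) and that the lock count stays a power of two so it
can be used as a mask; this is not the code from the patch:

    #define MIN_BUF_LOCKS   1024            /* proposed larger floor */

    static uint64_t
    buf_hash_lock_count(uint64_t physmem_bytes, uint64_t shift)
    {
            uint64_t want = physmem_bytes >> shift;
            uint64_t n = MIN_BUF_LOCKS;

            /* Round up to the next power of two, never below the floor. */
            while (n < want)
                    n <<= 1;

            return (n);
    }

At the default shift of 32 the computed count only exceeds 1024 on machines
with more than 4TB of memory, so the larger floor is what most systems would
actually see.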

748 static inline uint64_t

The 'inline' keyword was added as part of C99. GNU C has supported it as an extension forever, so this isn't an issue for Linux, but it is something to be aware of. Do you actually see any impact on performance when passing this hint?

888         kmem_free(buf_hash_table.ht_locks, sizeof (struct ht_lock) *
889             buf_hash_table.ht_num_locks);
....
986         buf_hash_table.ht_locks = kmem_zalloc(sizeof (struct ht_lock) *
987             buf_hash_table.ht_num_locks, KM_SLEEP);

I'm all for keeping this code simple, but please use vmem_zalloc() / vmem_free() so we can use this code unmodified on Linux. On 64-bit systems there's plenty of virtual address space for this, and since it's a one-time allocation we can afford the cost of setting up the mapping.
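
In other words, roughly the following (under the SPL, vmem_zalloc() and
vmem_free() take the same size and flag arguments as the kmem_* calls above,
so it's a drop-in substitution):

    buf_hash_table.ht_locks = vmem_zalloc(sizeof (struct ht_lock) *
        buf_hash_table.ht_num_locks, KM_SLEEP);
    ...
    vmem_free(buf_hash_table.ht_locks, sizeof (struct ht_lock) *
        buf_hash_table.ht_num_locks);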

Aside from the above it looks good to me. It would be great if you could propose an identical change for the dbuf hash table, which suffers from the exact same issues.

Thanks,
Brian