On 10/16/13 8:57 PM, Prakash Surya wrote:
> Well, we're just increasing the size of the hash. So, given enough RAM,
> the problem of having enough buffers in the cache to cause large chains
> still exists. Right? Although, "enough RAM" might not be practically
> achievable.

It would only be a problem if our relative resource usage changed with
the amount of the resource present, i.e. if we used 0.1% of physmem at
1GB but 1% at 1TB and 10% at 1PB. That kind of growth gets out of hand
and doesn't scale. In this case, though, the relative proportion
*doesn't* change: it stays at 0.1% regardless of the actual amount of
physical memory present.
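
For illustration, the sizing logic in question is just a fixed fraction
of physmem. A minimal userland sketch (the constants and names are
hypothetical, not the actual ARC code):

/*
 * Hypothetical sketch: size a hash table as a fixed fraction of
 * physical memory, so the relative footprint stays constant no
 * matter how much RAM is present.
 */
#include <stdint.h>
#include <stdio.h>

/* Assume one 8-byte bucket pointer per 8KB of physmem (~0.1%). */
#define	BYTES_PER_BUCKET	(8 * 1024)

static uint64_t
hash_table_bytes(uint64_t physmem_bytes)
{
	uint64_t nbuckets = physmem_bytes / BYTES_PER_BUCKET;
	uint64_t hsize = 1;

	/* Round up to a power of two so lookups can mask, not modulo. */
	while (hsize < nbuckets)
		hsize <<= 1;
	return (hsize * sizeof (void *));
}

int
main(void)
{
	uint64_t mem;

	/* 1GB, 1TB, 1PB: the table stays at ~0.1% of physmem. */
	for (mem = 1ULL << 30; mem <= 1ULL << 50; mem <<= 10)
		printf("physmem %llu -> hash table %llu bytes\n",
		    (unsigned long long)mem,
		    (unsigned long long)hash_table_bytes(mem));
	return (0);
}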

> I'm no VM expert either, but speaking from Linux's perspective, I don't
> think virtual address space fragmentation is much of an issue, whether
> you're doing a 1M vmalloc or a 1G vmalloc. The kernel will allocate
> non-contiguous physical pages and present them as a contiguous virtual
> region, so you just need enough free pages on the system to satisfy
> the request.
> 
> I should try to prod behlendorf about this, since he has much more
> experience on the subject than I do.
> 

Please do, I'd be very happy to simplify the code.
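
For reference, the vmalloc pattern Prakash describes looks roughly like
this in a kernel module (a sketch with a hypothetical size, not our
actual code):

/*
 * Sketch: vmalloc()/vzalloc() map physically scattered pages into a
 * virtually contiguous range, so a large table needs enough free
 * pages, not contiguous physical memory.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/vmalloc.h>

static void *hash_table;

static int __init demo_init(void)
{
	/* 1GB of buckets; contiguous only in virtual address space. */
	hash_table = vzalloc(1UL << 30);
	if (hash_table == NULL)
		return -ENOMEM;
	return 0;
}

static void __exit demo_exit(void)
{
	vfree(hash_table);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");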

Cheers,
-- 
Saso