On Thu, 26 Apr 2007, Nick Piggin wrote:

> But I maintain that the end result is better than the fragmentation
> based approach. A lot of people don't actually want a bigger page
> cache size, because they want efficient internal fragmentation as
> well, so your radix-tree based approach isn't really comparable.

Me? Radix tree based approach? That approach is already in the kernel. Do
not create a solution where there is no problem. If we do not want to
support large block sizes then let's be honest and say so, instead of
redefining what a block is. The current approach is fine if one is
satisfied with scatter-gather and the VM overhead that comes with
handling all of these pages. I fail to see what anything you are
proposing would add to that.
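
To make that overhead concrete, here is a rough user-space sketch (not
kernel code; lookup_page and the other names are made up stand-ins, with
lookup_page playing the role of the find_get_page radix tree walk) of
what backing one 64k block with 4k pages costs:

#include <stdio.h>
#include <stdlib.h>

/*
 * Hypothetical sketch, not kernel code: every name here is made up.
 * Models one 64k filesystem block backed by 4k page cache pages.
 */
#define PAGE_SIZE_4K	4096UL
#define BLOCK_SIZE	65536UL
#define NPAGES		(BLOCK_SIZE / PAGE_SIZE_4K)

static unsigned long lookups;	/* counts simulated radix tree walks */

/* Stand-in for the page cache lookup (find_get_page in the kernel). */
static void *lookup_page(unsigned long index)
{
	(void)index;			/* unused in this toy model */
	lookups++;
	return malloc(PAGE_SIZE_4K);	/* pretend the page is present */
}

int main(void)
{
	void *sg[NPAGES];	/* scatter-gather list the caller manages */
	unsigned long i;

	/* One 64k block read: sixteen lookups, sixteen fragments. */
	for (i = 0; i < NPAGES; i++)
		sg[i] = lookup_page(i);

	printf("%lu lookups and a %lu-entry sg list for one %lu byte block\n",
	       lookups, NPAGES, BLOCK_SIZE);

	for (i = 0; i < NPAGES; i++)
		free(sg[i]);
	return 0;
}

With a native 64k page size the same read would be a single lookup
returning one contiguous buffer.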

Let's be clear here: a bigger page cache size, if it is the only one, is
not useful. A 4k page size is a good size for many files on the system,
and changing it would break the binary format. I just do not want it to
be the only one, because different usage scenarios may require different
page sizes for optimal application performance.
