Re: zfs + uma

2010-09-24 Thread Andriy Gapon
on 22/09/2010 10:25 Andriy Gapon said the following: 2. patch that attempts to implement Jeff's three suggestions; I've tested per-CPU cache size adaptive behavior, works well, but haven't tested per-CPU cache draining yet: http://people.freebsd.org/~avg/uma-2.diff Now I've fully tested this
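A toy user-space model of the adaptive part of that idea is sketched below: it shrinks a per-CPU free-item cache by however many items sat unused over an observation window (the classic magazine-layer heuristic). The structure names, thresholds, and adaptation rule are illustrative only and are not taken from uma-2.diff.

    /*
     * Toy model of an adaptive per-CPU free-item cache.  All items in one
     * cache are assumed to be the same size, as in a UMA zone.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define CACHE_MAX	128

    struct pcpu_cache {
    	void	*items[CACHE_MAX];
    	int	 count;		/* items currently cached */
    	int	 limit;		/* current adaptive ceiling */
    	int	 lowmark;	/* lowest count seen since last trim */
    };

    /* Periodically give back items that sat idle for the whole period. */
    static void
    cache_adapt(struct pcpu_cache *c)
    {
    	int excess = c->lowmark;

    	while (excess-- > 0 && c->count > 0)
    		free(c->items[--c->count]);
    	if (c->limit > c->count && c->limit > 8)
    		c->limit--;		/* slowly lower the ceiling, too */
    	c->lowmark = c->count;		/* restart the observation window */
    }

    static void *
    cache_alloc(struct pcpu_cache *c, size_t size)
    {
    	if (c->count > 0) {
    		void *p = c->items[--c->count];
    		if (c->count < c->lowmark)
    			c->lowmark = c->count;
    		return (p);
    	}
    	return (malloc(size));
    }

    static void
    cache_free(struct pcpu_cache *c, void *p)
    {
    	if (c->count < c->limit)
    		c->items[c->count++] = p;
    	else
    		free(p);
    }

    int
    main(void)
    {
    	struct pcpu_cache c = { .limit = CACHE_MAX };
    	void *p = cache_alloc(&c, 4096);

    	cache_free(&c, p);
    	cache_adapt(&c);
    	printf("cached %d, limit %d\n", c.count, c.limit);
    	return (0);
    }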

Re: zfs + uma

2010-09-22 Thread Andriy Gapon
on 21/09/2010 19:16 Alan Cox said the following: Actually, I think that there is a middle ground between per-cpu caches and directly from the VM that we are missing. When I've looked at the default configuration of ZFS (without the extra UMA zones enabled), there is an incredible amount of

Re: zfs + uma

2010-09-21 Thread Andriy Gapon
on 19/09/2010 11:42 Andriy Gapon said the following: on 19/09/2010 11:27 Jeff Roberson said the following: I don't like this because even with very large buffers you can still have high enough turnover to require per-cpu caching. Kip specifically added UMA support to address this issue in

Re: zfs + uma

2010-09-21 Thread Andriy Gapon
on 19/09/2010 01:16 Jeff Roberson said the following: Additionally we could make a last ditch flush mechanism that runs on each cpu in How would you qualify a last ditch trigger? Would this be called from the standard vm_lowmem hook or would there be some extra check for even more severe memory
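The trigger under discussion is FreeBSD's vm_lowmem eventhandler. A minimal kernel-module sketch of hooking it is below; it only calls uma_reclaim(), which on FreeBSD of this vintage drains the zones' global free lists but not the per-CPU buckets, which is exactly the gap a last-ditch per-CPU flush would fill. The module name and handler priority are arbitrary.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/module.h>
    #include <sys/errno.h>
    #include <sys/eventhandler.h>
    #include <vm/uma.h>

    static eventhandler_tag lowmem_tag;

    static void
    lastditch_lowmem(void *arg __unused, int flags __unused)
    {
    	/*
    	 * Hand cached items back to the VM.  Note: this drains the zone
    	 * free lists only; per-CPU buckets are untouched.
    	 */
    	uma_reclaim();
    }

    static int
    lastditch_modevent(module_t mod, int type, void *data)
    {
    	switch (type) {
    	case MOD_LOAD:
    		lowmem_tag = EVENTHANDLER_REGISTER(vm_lowmem,
    		    lastditch_lowmem, NULL, EVENTHANDLER_PRI_LAST);
    		return (0);
    	case MOD_UNLOAD:
    		EVENTHANDLER_DEREGISTER(vm_lowmem, lowmem_tag);
    		return (0);
    	default:
    		return (EOPNOTSUPP);
    	}
    }

    static moduledata_t lastditch_mod = {
    	"lastditch_flush", lastditch_modevent, NULL
    };
    DECLARE_MODULE(lastditch_flush, lastditch_mod, SI_SUB_KLD, SI_ORDER_ANY);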

Re: zfs + uma

2010-09-21 Thread Andriy Gapon
on 21/09/2010 09:39 Jeff Roberson said the following: I'm afraid there is not enough context here for me to know what 'the same mechanism' is or what solaris does. Can you elaborate? This was in my first post: [[[ There is this good book:

Re: zfs + uma

2010-09-21 Thread Andriy Gapon
on 21/09/2010 09:35 Jeff Roberson said the following: On Tue, 21 Sep 2010, Andriy Gapon wrote: on 19/09/2010 01:16 Jeff Roberson said the following: Additionally we could make a last ditch flush mechanism that runs on each cpu in How would you qualify a last ditch trigger? Would this be

Re: zfs + uma

2010-09-21 Thread Jeff Roberson
On Tue, 21 Sep 2010, Andriy Gapon wrote: on 19/09/2010 01:16 Jeff Roberson said the following: Additionally we could make a last ditch flush mechanism that runs on each cpu in How would you qualify a last ditch trigger? Would this be called from the standard vm_lowmem hook or would there be some

Re: zfs + uma

2010-09-21 Thread Jeff Roberson
On Tue, 21 Sep 2010, Andriy Gapon wrote: on 19/09/2010 11:42 Andriy Gapon said the following: on 19/09/2010 11:27 Jeff Roberson said the following: I don't like this because even with very large buffers you can still have high enough turnover to require per-cpu caching. Kip specifically

Re: zfs + uma

2010-09-21 Thread Alan Cox
On Tue, Sep 21, 2010 at 1:39 AM, Jeff Roberson jrober...@jroberson.net wrote: On Tue, 21 Sep 2010, Andriy Gapon wrote: on 19/09/2010 11:42 Andriy Gapon said the following: on 19/09/2010 11:27 Jeff Roberson said the following: I don't like this because even with very large buffers you can

Re: zfs + uma

2010-09-20 Thread Andriy Gapon
on 19/09/2010 11:27 Jeff Roberson said the following: On Sun, 19 Sep 2010, Andriy Gapon wrote: on 19/09/2010 01:16 Jeff Roberson said the following: Additionally we could make a last ditch flush mechanism that runs on each cpu in turn and flushes some or all of the buckets in per-cpu

Re: zfs + uma

2010-09-19 Thread Andriy Gapon
on item size seems to work rather well too, as my testing with zfs+uma shows. I will also try to add code to completely bypass the per-cpu cache for really huge items.
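A sketch of that size-based cap, assuming the goal is simply to keep huge items from pinning megabytes of idle memory per CPU. The thresholds and entry counts below are invented for illustration and are not the values from the actual patch.

    #include <stddef.h>
    #include <stdio.h>

    /* Map an item size to the number of entries a per-CPU bucket may hold. */
    static int
    bucket_entries_for_item(size_t item_size)
    {
    	if (item_size >= 64 * 1024)
    		return (0);		/* bypass the per-CPU cache entirely */
    	if (item_size >= 16 * 1024)
    		return (1);
    	if (item_size >= 4 * 1024)
    		return (4);
    	if (item_size >= 1024)
    		return (16);
    	return (128);			/* small items keep a large default */
    }

    int
    main(void)
    {
    	printf("512 B -> %d entries, 128 KB -> %d entries\n",
    	    bucket_entries_for_item(512),
    	    bucket_entries_for_item(128 * 1024));
    	return (0);
    }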

Re: zfs + uma

2010-09-19 Thread Andriy Gapon
on 19/09/2010 11:27 Jeff Roberson said the following: I don't like this because even with very large buffers you can still have high enough turnover to require per-cpu caching. Kip specifically added UMA support to address this issue in zfs. If you have allocations which don't require

Re: zfs + uma

2010-09-19 Thread Jeff Roberson
anyone who wishes to undertake it. FWIW, the approach of simply limiting maximum bucket size based on item size seems to work rather well too, as my testing with zfs+uma shows. I will also try to add code to completely bypass the per-cpu cache for really huge items. I don't like this because even

Re: zfs + uma

2010-09-19 Thread Robert N. M. Watson
be happy to review patches from anyone who wishes to undertake it. FWIW, the approach of simply limiting maximum bucket size based on item size seems to work rather well too, as my testing with zfs+uma shows. I will also try to add code to completely bypass the per-cpu cache for really huge items

Re: zfs + uma

2010-09-19 Thread Robert N. M. Watson
On 19 Sep 2010, at 09:42, Andriy Gapon wrote: on 19/09/2010 11:27 Jeff Roberson said the following: I don't like this because even with very large buffers you can still have high enough turnover to require per-cpu caching. Kip specifically added UMA support to address this issue in zfs.

Re: zfs + uma

2010-09-18 Thread Robert Watson
On Fri, 17 Sep 2010, Andre Oppermann wrote: Although keeping free items around improves performance, it does consume memory too. And the fact that that memory is not freed on lowmem condition makes the situation worse. Interesting. We may run into related issues with excessive mbuf

Re: zfs + uma

2010-09-18 Thread Andriy Gapon
on 18/09/2010 14:23 Robert Watson said the following: I've been keeping a vague eye out for this over the last few years, and haven't spotted many problems in production machines I've inspected. You can use the umastat tool in the tools tree to look at the distribution of memory over

Re: zfs + uma

2010-09-18 Thread Robert N. M. Watson
On 18 Sep 2010, at 12:27, Andriy Gapon wrote: on 18/09/2010 14:23 Robert Watson said the following: I've been keeping a vague eye out for this over the last few years, and haven't spotted many problems in production machines I've inspected. You can use the umastat tool in the tools tree

Re: zfs + uma

2010-09-18 Thread Fabian Keil
Robert Watson rwat...@freebsd.org wrote: On Fri, 17 Sep 2010, Andre Oppermann wrote: Although keeping free items around improves performance, it does consume memory too. And the fact that that memory is not freed on lowmem condition makes the situation worse. Interesting. We

Re: zfs + uma

2010-09-18 Thread Andriy Gapon
on 18/09/2010 14:30 Robert N. M. Watson said the following: Those issues are closely related, and in particular, wanted to point Andre at umastat since he's probably not aware of it.. :-) I didn't know about the tool either, so thanks! But I perceived the issues as quite opposite: small items vs

Re: zfs + uma

2010-09-18 Thread Robert N. M. Watson
On 18 Sep 2010, at 13:35, Fabian Keil wrote: Doesn't build for me on amd64: f...@r500 /usr/src/tools/tools/umastat $make Warning: Object directory not changed from original /usr/src/tools/tools/umastat cc -O2 -pipe -fno-omit-frame-pointer -std=gnu99 -fstack-protector -Wsystem-headers

Re: zfs + uma

2010-09-18 Thread Garrett Cooper
On Sat, Sep 18, 2010 at 6:52 AM, Robert N. M. Watson rwat...@freebsd.org wrote: On 18 Sep 2010, at 13:35, Fabian Keil wrote: Doesn't build for me on amd64: f...@r500 /usr/src/tools/tools/umastat $make Warning: Object directory not changed from original /usr/src/tools/tools/umastat cc -O2

Re: zfs + uma

2010-09-18 Thread pluknet
On 18 September 2010 17:52, Robert N. M. Watson rwat...@freebsd.org wrote: On 18 Sep 2010, at 13:35, Fabian Keil wrote: Doesn't build for me on amd64: f...@r500 /usr/src/tools/tools/umastat $make Warning: Object directory not changed from original /usr/src/tools/tools/umastat cc -O2

Re: zfs + uma

2010-09-18 Thread Marcin Cieslak
FWIW, kvm_read taking the second argument as unsigned long instead of void* seems a bit inconsistent: I think it was done on purpose, since an address in the kernel address space has nothing to do with pointers for mere userland mortals. We shouldn't bother the compiler with aliasing and other stuff in
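For context, this is the call in question. A small stand-alone example of the usual libkvm pattern: the kernel address comes back from kvm_nlist() as a numeric n_value and is handed to kvm_read() as that unsigned long. The symbol read here (nswbuf, a plain kernel int) is arbitrary and only for illustration; build with -lkvm and run with enough privilege to read /dev/mem.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <kvm.h>
    #include <limits.h>
    #include <nlist.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
    	char errbuf[_POSIX2_LINE_MAX];
    	struct nlist nl[2];
    	kvm_t *kd;
    	int value;

    	memset(nl, 0, sizeof(nl));
    	nl[0].n_name = "_nswbuf";	/* arbitrary kernel int, demo only */
    	nl[1].n_name = NULL;

    	kd = kvm_openfiles(NULL, NULL, NULL, O_RDONLY, errbuf);
    	if (kd == NULL) {
    		fprintf(stderr, "kvm_openfiles: %s\n", errbuf);
    		return (1);
    	}
    	if (kvm_nlist(kd, nl) != 0 || nl[0].n_value == 0) {
    		fprintf(stderr, "symbol lookup failed\n");
    		kvm_close(kd);
    		return (1);
    	}
    	/* The second argument is a kernel virtual address, not a pointer. */
    	if (kvm_read(kd, nl[0].n_value, &value, sizeof(value)) !=
    	    sizeof(value)) {
    		fprintf(stderr, "kvm_read: %s\n", kvm_geterr(kd));
    		kvm_close(kd);
    		return (1);
    	}
    	printf("nswbuf = %d\n", value);
    	kvm_close(kd);
    	return (0);
    }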

Re: zfs + uma

2010-09-18 Thread Jeff Roberson
On Sat, 18 Sep 2010, Robert Watson wrote: On Fri, 17 Sep 2010, Andre Oppermann wrote: Although keeping free items around improves performance, it does consume memory too. And the fact that that memory is not freed on lowmem condition makes the situation worse. Interesting. We may run

zfs + uma

2010-09-17 Thread Andriy Gapon
I've been investigating interaction between zfs and uma for a while. You might remember that there is a noticeable fragmentation in zfs uma zones when uma use is not enabled for actual data/metadata buffers. I also noticed that when uma use is enabled for data/metadata buffers (zio.use_uma=1
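For readers unfamiliar with the tunable: with zio.use_uma enabled, ZFS backs its I/O buffers with per-size UMA zones instead of plain kernel allocations, which is what brings the per-CPU caches discussed here into play. The module below creates one such zone (for 128 KB buffers) purely as a demonstration; the zone name and size are illustrative, this is not the ZFS code.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/module.h>
    #include <sys/errno.h>
    #include <sys/malloc.h>
    #include <vm/uma.h>

    static uma_zone_t demo_zio_buf_128k;

    static int
    ziozone_modevent(module_t mod, int type, void *data)
    {
    	void *buf;

    	switch (type) {
    	case MOD_LOAD:
    		/* One zone per buffer size; ZFS keeps a whole array of them. */
    		demo_zio_buf_128k = uma_zcreate("demo_zio_buf_131072",
    		    128 * 1024, NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
    		buf = uma_zalloc(demo_zio_buf_128k, M_WAITOK | M_ZERO);
    		uma_zfree(demo_zio_buf_128k, buf);
    		return (0);
    	case MOD_UNLOAD:
    		uma_zdestroy(demo_zio_buf_128k);
    		return (0);
    	default:
    		return (EOPNOTSUPP);
    	}
    }

    static moduledata_t ziozone_mod = { "demo_ziozone", ziozone_modevent, NULL };
    DECLARE_MODULE(demo_ziozone, ziozone_mod, SI_SUB_KLD, SI_ORDER_ANY);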

Re: zfs + uma

2010-09-17 Thread Andriy Gapon
on 17/09/2010 15:30 Andre Oppermann said the following: Having a general solution for that is appreciated. Maybe the size of the free per-cpu buckets should be specified when setting up the UMA zone. Of certain frequently re-used elements we may want to cache more, of others less. This kind of
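No such per-zone parameter exists in UMA; purely as a sketch of what Andre's suggestion could look like, a hypothetical variant of uma_zcreate() with one extra knob is shown below. The function name and the added argument are invented for illustration and are not a real or proposed API.

    #include <vm/uma.h>

    /*
     * HYPOTHETICAL: uma_zcreate_cached() does not exist in FreeBSD.  It only
     * illustrates letting the consumer pick the per-CPU cache depth up front,
     * alongside the real uma_zcreate() arguments.
     */
    uma_zone_t uma_zcreate_cached(const char *name, size_t size,
        uma_ctor ctor, uma_dtor dtor, uma_init uminit, uma_fini fini,
        int align, uint32_t flags, int pcpu_cache_items);

    /* A consumer of large buffers might then ask for a very shallow cache: */
    static uma_zone_t demo_buf_zone;

    static void
    demo_zone_setup(void)
    {
    	demo_buf_zone = uma_zcreate_cached("demo_buf_131072", 128 * 1024,
    	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0, 2);
    }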

Re: zfs + uma

2010-09-17 Thread Andre Oppermann
On 17.09.2010 10:14, Andriy Gapon wrote: I've been investigating interaction between zfs and uma for a while. You might remember that there is a noticeable fragmentation in zfs uma zones when uma use is not enabled for actual data/metadata buffers. I also noticed that when uma use is enabled