on 22/09/2010 10:25 Andriy Gapon said the following:
2. a patch that attempts to implement Jeff's three suggestions; I've tested
the adaptive per-CPU cache size behavior, which works well, but haven't tested
per-CPU cache draining yet:
http://people.freebsd.org/~avg/uma-2.diff
Now I've fully tested this
on 21/09/2010 19:16 Alan Cox said the following:
Actually, I think that there is a middle ground between per-cpu caches and
allocating directly from the VM that we are missing. When I've looked at the
default configuration of ZFS (without the extra UMA zones enabled), there is
an incredible amount of
on 19/09/2010 11:42 Andriy Gapon said the following:
on 19/09/2010 11:27 Jeff Roberson said the following:
I don't like this because even with very large buffers you can still have
high enough turnover to require per-cpu caching. Kip specifically added UMA
support to address this issue in
on 19/09/2010 01:16 Jeff Roberson said the following:
Additionally we could make a last ditch flush mechanism that runs on each cpu in
How would you qualify a last ditch trigger?
Would this be called from the standard vm_lowmem hook or would there be some
extra check for even more severe memory
on 21/09/2010 09:39 Jeff Roberson said the following:
I'm afraid there is not enough context here for me to know what 'the same
mechanism' is or what solaris does. Can you elaborate?
This was in my first post:
[[[
There is this good book:
on 19/09/2010 11:27 Jeff Roberson said the following:
On Sun, 19 Sep 2010, Andriy Gapon wrote:
on 19/09/2010 01:16 Jeff Roberson said the following:
Additionally we could make a last ditch flush mechanism that runs on each cpu
in turn and flushes some or all of the buckets in per-cpu
FWIW, the approach of simply limiting maximum bucket size based on item size
seems to work rather well too, as my testing with zfs+uma shows.
I will also try to add code to completely bypass the per-cpu cache for really
huge items.
--
Andriy Gapon
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org
on 19/09/2010 11:27 Jeff Roberson said the following:
I don't like this because even with very large buffers you can still have high
enough turnover to require per-cpu caching. Kip specifically added UMA support
to address this issue in zfs. If you have allocations which don't require [...]
be happy to review patches from anyone who wishes to undertake it.
On Fri, 17 Sep 2010, Andre Oppermann wrote:
Although keeping free items around improves performance, it does consume
memory too. And the fact that that memory is not freed on lowmem condition
makes the situation worse.
Interesting. We may run into related issues with excessive mbuf
on 18/09/2010 14:23 Robert Watson said the following:
I've been keeping a vague eye out for this over the last few years, and
haven't spotted many problems in production machines I've inspected. You can
use the umastat tool in the tools tree to look at the distribution of memory over
on 18/09/2010 14:30 Robert N. M. Watson said the following:
Those issues are closely related, and in particular, I wanted to point Andre
at umastat since he's probably not aware of it. :-)
I didn't know about the tool either, so thanks!
But I perceived the issues as quite opposite: small items vs
On 18 Sep 2010, at 13:35, Fabian Keil wrote:
Doesn't build for me on amd64:
f...@r500 /usr/src/tools/tools/umastat $make
Warning: Object directory not changed from original
/usr/src/tools/tools/umastat
cc -O2 -pipe -fno-omit-frame-pointer -std=gnu99 -fstack-protector -Wsystem-headers
FWIW, kvm_read taking the second argument as unsigned long instead of
void* seems a bit inconsistent:
I think it was done on purpose, since an address in the kernel address space
has nothing to do with pointers for mere userland mortals. We shouldn't
bother the compiler with aliasing and other stuff in
I've been investigating interaction between zfs and uma for a while.
You might remember that there is a noticeable fragmentation in zfs uma zones
when uma use is not enabled for actual data/metadata buffers.
I also noticed that when uma use is enabled for data/metadata buffers
(zio.use_uma=1
on 17/09/2010 15:30 Andre Oppermann said the following:
Having a general solution for that is appreciated. Maybe the size
of the free per-cpu buckets should be specified when setting up the
UMA zone. Of certain frequently re-used elements we may want to
cache more, of others less.
This kind of