On Tue, Nov 12, 2013 at 12:09 PM, Pawel Jakub Dawidek <[email protected]>wrote:

> On Tue, Nov 12, 2013 at 05:57:47PM +0200, Alexander Motin wrote:
> > On 12.11.2013 17:45, Pawel Jakub Dawidek wrote:
> > > On Tue, Nov 12, 2013 at 05:38:43PM +0200, Alexander Motin wrote:
> > >> Hi.
> > >>
> > >> While doing some performance tests I've found that LZ4 compression in
> > >> ZFS on FreeBSD allocates the hash memory directly from VM on every
> > >> call, which on a multi-core system under significant load may consume
> > >> more CPU time than the compression itself. On 64-bit illumos that
> > >> memory is allocated on the stack, but FreeBSD's kernel stack is
> > >> smaller (16K) and has insufficient space. I've made a quite simple
> > >> patch to reduce the allocation overhead by creating an allocation
> > >> cache, the same as is done for ZIO. While on 64-bit illumos this patch
> > >> is a no-op, smaller architectures may still benefit from it, just as
> > >> FreeBSD does.
> > >>
> > >> Any comments about it: http://people.freebsd.org/~mav/lz4_alloc.patch ?
> > >
> > > Isn't compression done using dedicated ZFS-only ZIO threads? Why can't
> > > we just increase stack size for those threads?
> >
> > Hmm. I hadn't thought of it that way. Maybe we could, except there are
> > two layers of wrappers around the illumos and FreeBSD taskqueue(9) KPIs
> > in between, which have no idea about stack size. And going that way we
> > will always want some more.
>
> From what I see the stack consumption isn't related to I/O size, so we
> know exactly how much additional stack space we need here. Of course, if
> we cannot measure the difference between using the stack and UMA, then UMA
> is the clear choice. If we can measure the difference, however, then I'd be
> for growing the stack, as LZ4 is all about speed.
>
>
FYI, on illumos there was no measurable performance difference.  And it
could run out of stack space even with our 40KB stacks (yikes!).

--matt
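
For context, the allocation-cache approach Alexander describes would look
roughly like the sketch below, written in illumos-style kernel C. The names
lz4_cache, lz4_alloc_init() and lz4_alloc_fini() are illustrative
assumptions, not necessarily what lz4_alloc.patch uses; struct refTables is
the LZ4 hash-table state from lz4.c, and the kmem_cache KPI is backed by UMA
on FreeBSD.

#include <sys/kmem.h>

/* Hypothetical cache for the LZ4 hash-table state (struct refTables). */
static kmem_cache_t *lz4_cache;

void
lz4_alloc_init(void)
{
	/* One cache object per in-flight compression. */
	lz4_cache = kmem_cache_create("lz4_cache", sizeof (struct refTables),
	    0, NULL, NULL, NULL, NULL, NULL, 0);
}

void
lz4_alloc_fini(void)
{
	if (lz4_cache != NULL) {
		kmem_cache_destroy(lz4_cache);
		lz4_cache = NULL;
	}
}

static int
real_LZ4_compress(const char *source, char *dest, int isize, int osize)
{
	void *ctx;
	int result;

	/* Take the hash table from the cache instead of the stack or raw VM. */
	ctx = kmem_cache_alloc(lz4_cache, KM_NOSLEEP);
	if (ctx == NULL)
		return (0);

	memset(ctx, 0, sizeof (struct refTables));
	result = LZ4_compressCtx(ctx, source, dest, isize, osize);

	kmem_cache_free(lz4_cache, ctx);
	return (result);
}

The point of the cache is to amortize the per-call cost of going to VM for
the hash table while still keeping that state off the 16K FreeBSD kernel
stack.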