Re: Zerocopy NBD

2001-05-30 Thread Marcelo Tosatti


On Wed, 30 May 2001, Steve Whitehouse wrote:

> Hi,
> 
> > 
> > On Wed, 30 May 2001, Steve Whitehouse wrote:
> > >
> [info about NBD patch deleted] 
> > >
> > Cool. 
> > 
> > Are you seeing performance improvements with the patch?
> >  
> 
> Yes, but my testing is not in any way complete yet. The only network device
> I have which is supported by zerocopy is loopback, and there appear to be
> problems with deadlocks when using NBD over loopback. So what I did was to
> modify the NBD server (the userland one from Pavel Machek's web site)
> so that it doesn't actually do any disk I/O. It still copies the data from
> the network into a buffer on write, and it returns zeroed buffers on read
> (not that that's important, as only the write path is affected by the patch).
> 
> I could then test using dd, which is a bit artificial in that it creates
> large requests, giving probably much more data per NBD request than would
> be usual under a filesystem load, and hence also showing the zerocopy
> patch to better advantage. A timed dd with 10 blocks of 1k spent 1.2 secs
> of system time to do the write with NBD in 2.4.5 and 0.8 secs with my patch.

Copying bunches of sequential data with 'dd' is OK for testing this:
you're trying to measure only device speed, not fs speed. 

> Also it may well be possible to adjust the network stack's memory management
> to give better performance. I upped the values in tcp_[r|w]mem but I've
> not checked what different values would do to those figures.
> 
> I want to do some more testing though in case I've made an error somewhere
> in the method. I'd be particularly interested to hear from someone who
> has any results for real hardware. If I have time I'll look into whether
> the eepro100 or SysKonnect GigE cards could be made to support zerocopy
> as they are the ones I have here,





Re: Zerocopy NBD

2001-05-30 Thread Steve Whitehouse

Hi,

> 
> On Wed, 30 May 2001, Steve Whitehouse wrote:
> >
[info about NBD patch deleted] 
> >
> Cool. 
> 
> Are you seeing performance improvements with the patch?
>  

Yes, but my testing is not in any way complete yet. The only network device
I have which is supported by zerocopy is loopback, and there appear to be
problems with deadlocks when using NBD over loopback. So what I did was to
modify the NBD server (the userland one from Pavel Machek's web site)
so that it doesn't actually do any disk I/O. It still copies the data from
the network into a buffer on write, and it returns zeroed buffers on read
(not that that's important, as only the write path is affected by the patch).
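 
Schematically, all the write side of the patch tries to do is hand each
buffer_head's page to the socket rather than copying the data into an skb
first. Roughly like the sketch below; this is a simplification for
illustration only, not code lifted from the patch, and the function name,
error handling and fallback are made up:

    /* Simplified illustration only, not the actual patch code. */
    #include <linux/fs.h>       /* struct buffer_head, bh_offset() */
    #include <linux/net.h>      /* struct socket, struct proto_ops */
    #include <linux/errno.h>

    static int nbd_send_bh_zerocopy(struct socket *sock, struct buffer_head *bh)
    {
            struct page *page = bh->b_page;
            int offset = bh_offset(bh);  /* offset of the data within its page */

            if (sock->ops->sendpage == NULL)
                    return -EOPNOTSUPP;  /* caller falls back to the copying path */

            /* Hand the page itself to the network stack; with a
             * zerocopy-capable driver it is never copied again. */
            return sock->ops->sendpage(sock, page, offset, bh->b_size, 0);
    }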

I could then test using dd, which is a bit artificial in that it creates
large requests, giving probably much more data per NBD request than would
be usual under a filesystem load, and hence also showing the zerocopy
patch to better advantage. A timed dd with 10 blocks of 1k spent 1.2 secs
of system time to do the write with NBD in 2.4.5 and 0.8 secs with my patch.

Also it may well be possible to adjust the network stack's memory management
to give better performance. I upped the values in tcp_[r|w]mem but I've
not checked what different values would do to those figures.

I want to do some more testing though in case I've made an error somewhere
in the method. I'd be particularly interested to hear from someone who
has any results for real hardware. If I have time I'll look into whether
the eepro100 or SysKonnect GigE cards could be made to support zerocopy
as they are the ones I have here,

Steve.




Re: Zerocopy NBD

2001-05-30 Thread Marcelo Tosatti



On Wed, 30 May 2001, Steve Whitehouse wrote:

> Hi,
> 
> Attached is a patch I came up with recently to add zerocopy support to
> NBD for writes. I'm not intending that this should go into the kernel
> before at least 2.5; I'm just sending it here in case it is useful to anyone.
> 
> I wrote it as a simple way to experiment with the new zerocopy code
> rather than in a bid to improve the efficiency of NBD dramatically. I'm
> currently preparing a paper for the UKUUG Linux Conference which will
> present some results obtained with the patch and discuss it in more
> detail. The paper will be available on the web too nearer the time.

Cool. 

Are you seeing performance improvements with the patch?
 





Re: Zerocopy NBD

2001-05-30 Thread Jens Axboe

On Wed, May 30 2001, Steve Whitehouse wrote:
> +        if (PageHighMem(page))
> +                offset = (int)bh->b_data;
> +        else
> +                offset = (int)bh->b_data - (int)page_address(page);

Side note:

offset = bh_offset(bh);

will handle this nicely for you. No need for (nasty) casting and
checking for highmem pages.
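 
(For reference, bh_offset() is, if memory serves, defined in
include/linux/fs.h in 2.4 as

    #define bh_offset(bh)   ((unsigned long)(bh)->b_data & ~PAGE_MASK)

which yields the in-page offset whether or not the page is highmem, so the
whole hunk above collapses to a single line.)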

-- 
Jens Axboe



