Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-30 Thread Raghavendra G
On Thu, Sep 29, 2016 at 11:11 AM, Raghavendra G 
wrote:

>
>
> On Wed, Sep 28, 2016 at 7:37 PM, Shyam  wrote:
>
>> On 09/27/2016 04:02 AM, Poornima Gurusiddaiah wrote:
>>
>>> W.r.t. Samba consuming this, it requires a great deal of code change in
>>> Samba. Currently Samba has no concept of getting a buffer from the
>>> underlying file system; the file system comes into the picture only at the
>>> last layer (the gluster plugin), where system calls are replaced by
>>> libgfapi calls. Hence, this is not readily consumable by Samba, and I
>>> think the same will be the case with NFS-Ganesha; I will let the Ganesha
>>> folks comment on that.
>>>
>>
>> This is exactly my reservation about the nature of change [2] that is
>> done in this patch. We expect all consumers to use *our* buffer management
>> system, which may not be possible all the time.
>>
>> Of the consumers that I know of, other than what Sachin stated as an
>> advantage for CommVault, none of the others can use the gluster buffers at
>> the moment (Ganesha, SAMBA, qemu). (I would like to understand how CommVault
>> can use gluster buffers in this situation without copying data into them,
>> just for clarity.)
>>
>
> +Jeff Cody, for comments on QEMU
>
>
>>
>> This is the reason I posted the comments at [1], stating we should copy
>> out the buffer, when Gluster needs it preserved, but use application
>> provided buffers as long as we can.
>>
>
> My concerns here are:
>
> * We are just moving the copy from gfapi layer to write-behind. Though I
> am not sure what percentage of writes that hit write-behind are
> "written-back", I would assume it to be a significant percentage (otherwise
> there is no benefit in having write-behind). However, we can try this
> approach and get some perf data before we make a decision.
>
> * Buffer management. All gluster code uses iobuf/iobrefs to manage the
> buffers of relatively large size. With the approach suggested above, I see
> two concerns:
> a. write-behind has to differentiate between iobufs that need copying
> (write calls through gfapi layer) and iobufs that can just be refed (writes
> from fuse etc) when "writing-back" the write. This adds more complexity.
> b. For the case where write-behind chooses to not "write-back" the
> write, we need a way of encapsulating the application buffer into
> iobuf/iobref. This might need changes in iobuf infra.
>
>
>> I do see the advantages of zero-copy, but not when gluster api is
>> managing the buffers, it just makes it more tedious for applications to use
>> this scheme, IMHO.
>>
>
Another point we can consider here is gfapi (and the gluster internal xlator
stack) providing both behaviors, as mentioned below:
1. Making the Glusterfs xlator stack use application buffers.
2. Forcing applications to use only gluster-managed buffers if they want
zero copy.

Let the applications choose which interface to use, based on their
use cases (as there is a trade-off in terms of performance, code changes,
legacy applications that are resistant to change, etc.).
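
For illustration, a minimal sketch of the two call patterns from the
application's side. glfs_write() is the existing gfapi call; glfs_get_buffer(),
glfs_zero_write() and glfs_free_buffer() are the entry points proposed in
http://review.gluster.org/#/c/14784/, and the signatures used for them here are
assumptions, not the final API:

    /* Sketch only: contrasts the two interfaces discussed above.
     * glfs_write() is the existing gfapi call; the glfs_get_buffer()/
     * glfs_zero_write()/glfs_free_buffer() signatures are assumed. */
    #include <glusterfs/api/glfs.h>
    #include <string.h>

    /* 1. Application-owned buffer: gluster copies it internally whenever it
     *    needs to preserve the data (e.g. for write-behind). */
    static ssize_t
    write_app_buffer (glfs_fd_t *fd, const void *data, size_t len)
    {
            return glfs_write (fd, data, len, 0);
    }

    /* 2. Gluster-managed buffer: zero copy inside the stack, but the
     *    application has to adopt gluster's buffer lifecycle. */
    static ssize_t
    write_gluster_buffer (glfs_t *fs, glfs_fd_t *fd,
                          const void *data, size_t len)
    {
            void    *buf = glfs_get_buffer (fs, len);     /* assumed */
            ssize_t  ret;

            memcpy (buf, data, len);                      /* app fills it */
            ret = glfs_zero_write (fd, buf, len);         /* assumed */
            glfs_free_buffer (fs, buf);                   /* assumed */
            return ret;
    }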


>> Could we think and negate (if possible) thoughts around using the
>> application passed buffers as is? One caveat here seems to be when using
>> RDMA (we need the memory registered if I am not wrong), as that would
>> involve a copy to RDMA buffers when using application passed buffers.
>
>
> Actually RDMA is not a problem in the current implementation (ruling out
> suggestions by others to use a pre-registered iobufs  for managing io-cache
> etc). This is because, in current implementation the responsibility of
> registering the memory region lies in transport/rdma. In other words
> transport/rdma doesn't expect pre-registered buffers.
>
>
> What are the other pitfalls?
>>
>> [1] http://www.gluster.org/pipermail/gluster-devel/2016-August/050622.html
>>
>> [2] http://review.gluster.org/#/c/14784/
>>
>>
>>>
>>> Regards,
>>> Poornima
>>>
>>
>
>
>
> --
> Raghavendra G
>



-- 
Raghavendra G

Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-30 Thread Raghavendra G
On Wed, Sep 28, 2016 at 7:37 PM, Shyam  wrote:

> On 09/27/2016 04:02 AM, Poornima Gurusiddaiah wrote:
>
>> W.r.t. Samba consuming this, it requires a great deal of code change in
>> Samba. Currently Samba has no concept of getting a buffer from the
>> underlying file system; the file system comes into the picture only at the
>> last layer (the gluster plugin), where system calls are replaced by
>> libgfapi calls. Hence, this is not readily consumable by Samba, and I
>> think the same will be the case with NFS-Ganesha; I will let the Ganesha
>> folks comment on that.
>>
>
> This is exactly my reservation about the nature of change [2] that is done
> in this patch. We expect all consumers to use *our* buffer management
> system, which may not be possible all the time.
>
> Of the consumers that I know of, other than what Sachin stated as an
> advantage for CommVault, none of the others can use the gluster buffers at
> the moment (Ganesha, SAMBA, qemu). (I would like to understand how CommVault
> can use gluster buffers in this situation without copying data into them,
> just for clarity.)
>

+Jeff Cody, for comments on QEMU


>
> This is the reason I posted the comments at [1], stating we should copy
> out the buffer, when Gluster needs it preserved, but use application
> provided buffers as long as we can.
>

My concerns here are:

* We are just moving the copy from the gfapi layer to write-behind. Though I
am not sure what percentage of writes that hit write-behind are
"written back", I would assume it is a significant percentage (otherwise
there is no benefit in having write-behind). However, we can try this
approach and get some perf data before we make a decision.

* Buffer management. All gluster code uses iobufs/iobrefs to manage
relatively large buffers. With the approach suggested above, I see two
concerns:
a. write-behind has to differentiate between iobufs that need copying
(write calls through the gfapi layer) and iobufs that can simply be ref'd
(writes from fuse, etc.) when "writing back" the write. This adds more
complexity.
b. For the case where write-behind chooses not to "write back" the
write, we need a way of encapsulating the application buffer in an
iobuf/iobref. This might need changes in the iobuf infra.
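
To make concern (a) concrete, a rough sketch of the branching write-behind
would need when it takes ownership of a write's payload. iobuf_ref() is the
existing libglusterfs refcounting helper; the from_app_buffer flag and
wb_copy_into_iobuf() are hypothetical names used purely for illustration:

    /* Illustration only: how write-behind might have to treat the two kinds
     * of incoming buffers. iobuf_ref() exists in libglusterfs; the flag and
     * wb_copy_into_iobuf() are hypothetical. */
    #include "glusterfs.h"
    #include "iobuf.h"

    static struct iobuf *
    wb_take_payload (struct iobuf *iobuf, gf_boolean_t from_app_buffer)
    {
            if (from_app_buffer) {
                    /* write came through gfapi with an application-owned
                     * buffer: it must be copied before we unwind, since the
                     * application may reuse the memory immediately. */
                    return wb_copy_into_iobuf (iobuf); /* hypothetical */
            }

            /* buffer already lives in gluster's iobuf pool (fuse, etc.):
             * bumping the refcount is enough, no copy needed. */
            return iobuf_ref (iobuf);
    }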


> I do see the advantages of zero-copy, but not when gluster api is managing
> the buffers, it just makes it more tedious for applications to use this
> scheme, IMHO.
>
> Could we think and negate (if possible) thoughts around using the
> application passed buffers as is? One caveat here seems to be when using
> RDMA (we need the memory registered if I am not wrong), as that would
> involve a copy to RDMA buffers when using application passed buffers.


Actually, RDMA is not a problem in the current implementation (ruling out
suggestions by others to use pre-registered iobufs for managing io-cache,
etc.). This is because, in the current implementation, the responsibility of
registering the memory region lies with transport/rdma. In other words,
transport/rdma doesn't expect pre-registered buffers.
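
For context, "registering the memory region" here is the verbs-level
registration that transport/rdma performs on whatever buffer it is handed; a
minimal sketch of that step with the standard libibverbs call (the protection
domain and buffer are assumed to already exist):

    /* Sketch: the per-buffer registration that transport/rdma performs
     * internally; 'pd' and 'buf' are assumed to already exist. */
    #include <stddef.h>
    #include <infiniband/verbs.h>

    static struct ibv_mr *
    register_for_rdma (struct ibv_pd *pd, void *buf, size_t len)
    {
            /* Without pre-registered buffers this happens per I/O, on
             * whichever buffer arrives from the layers above. */
            return ibv_reg_mr (pd, buf, len,
                               IBV_ACCESS_LOCAL_WRITE |
                               IBV_ACCESS_REMOTE_READ |
                               IBV_ACCESS_REMOTE_WRITE);
    }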


What are the other pitfalls?
>
> [1] http://www.gluster.org/pipermail/gluster-devel/2016-August/050622.html
>
> [2] http://review.gluster.org/#/c/14784/
>
>
>>
>> Regards,
>> Poornima
>>
>



-- 
Raghavendra G

Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-28 Thread Shyam

On 09/27/2016 04:02 AM, Poornima Gurusiddaiah wrote:

W.r.t. Samba consuming this, it requires a great deal of code change in Samba.
Currently Samba has no concept of getting a buffer from the underlying file
system; the file system comes into the picture only at the last layer (the
gluster plugin), where system calls are replaced by libgfapi calls. Hence, this
is not readily consumable by Samba, and I think the same will be the case with
NFS-Ganesha; I will let the Ganesha folks comment on that.


This is exactly my reservation about the nature of the change [2] made in
this patch. We expect all consumers to use *our* buffer management system,
which may not be possible all the time.


Of the consumers that I know of, other than what Sachin stated as an
advantage for CommVault, none of the others can use the gluster buffers at
the moment (Ganesha, SAMBA, qemu). (I would like to understand how CommVault
can use gluster buffers in this situation without copying data into them,
just for clarity.)


This is the reason I posted the comments at [1], stating we should copy
out the buffer when Gluster needs it preserved, but use application-provided
buffers as long as we can.


I do see the advantages of zero-copy, but not when the gluster API is
managing the buffers; it just makes it more tedious for applications to
use this scheme, IMHO.


Could we think through (and rule out, if possible) the objections to using
the application-passed buffers as is? One caveat here seems to be when using
RDMA (we need the memory registered, if I am not wrong), as that would
involve a copy into RDMA buffers when using application-passed buffers.
What are the other pitfalls?


[1] http://www.gluster.org/pipermail/gluster-devel/2016-August/050622.html

[2] http://review.gluster.org/#/c/14784/




Regards,
Poornima



Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-27 Thread Niels de Vos
On Tue, Sep 27, 2016 at 09:25:40AM +0300, Ric Wheeler wrote:
> On 09/27/2016 08:53 AM, Raghavendra Gowdappa wrote:
> > 
> > - Original Message -
> > > From: "Ric Wheeler" <rwhee...@redhat.com>
> > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Saravanakumar 
> > > Arumugam" <sarum...@redhat.com>
> > > Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Ben Turner" 
> > > <btur...@redhat.com>, "Ben England"
> > > <bengl...@redhat.com>
> > > Sent: Tuesday, September 27, 2016 10:51:48 AM
> > > Subject: Re: [Gluster-devel] libgfapi zero copy write - application in 
> > > samba, nfs-ganesha
> > > 
> > > On 09/27/2016 07:56 AM, Raghavendra Gowdappa wrote:
> > > > +Manoj, +Ben turner, +Ben England.
> > > > 
> > > > @Perf-team,
> > > > 
> > > > Do you think the gains are significant enough, so that smb and 
> > > > nfs-ganesha
> > > > team can start thinking about consuming this change?
> > > > 
> > > > regards,
> > > > Raghavendra
> > > This is a large gain but I think that we might see even larger gains (a 
> > > lot
> > > depends on how we implement copy offload :)).
> > Can you elaborate on what you mean "copy offload"? If it is the way we 
> > avoid a copy in gfapi (from application buffer), following is the workflow:
> > 
> > 
> > 
> > Work flow of zero copy write operation:
> > --
> > 
> > 1) Application requests a buffer of specific size. A new buffer is
> > allocated from iobuf pool, and this buffer is passed on to application.
> > Achieved using "glfs_get_buffer"
> > 
> > 2) Application writes into the received buffer, and passes that to
> > libgfapi, and libgfapi in turn passes the same buffer to underlying
> > translators. This avoids a memcpy in glfs write
> > Achieved using "glfs_zero_write"
> > 
> > 3) Once the write operation is complete, Application must take the
> > responsibilty of freeing the buffer.
> > Achieved using "glfs_free_buffer"
> > 
> > 
> > 
> > Do you've any suggestions/improvements on this? I think Shyam mentioned an 
> > alternative approach (for zero-copy readv I think), let me look up at that 
> > too.
> > 
> > regards,
> > Raghavendra
> 
> Both NFS and SMB support a copy offload that allows a client to produce a
> new copy of a file without bringing data over the wire. Both, if I remember
> correctly, do a ranged copy within a file.
> 
> The key here is that since the data does not move over the wire from server
> to client, we can shift the performance bottleneck to the storage server.
> 
> If we have a slow (1GB) link between client and server, we should be able to
> do that copy as if it happened just on the server itself. For a single NFS
> server (not a clustered, scale out server), that usually means we are as
> fast as the local file system copy.
> 
> Note that there are also servers that simply "reflink" that file, so we have
> a very small amount of time needed on the server to produce that copy.  This
> can be a huge win for say a copy of a virtual machine guest image.
> 
> Gluster and other distributed servers won't benefit as much as a local
> server would I suspect because of the need to do things internally over our
> networks between storage server nodes.
> 
> Hope that makes my thoughts clearer?
> 
> Here is a link to a brief overview of the new Linux system call:
> 
> https://kernelnewbies.org/Linux_4.5#head-6df3d298d8e0afa8e85e1125cc54d5f13b9a0d8c
> 
> Note that block devices or pseudo devices can also implement a copy offload.
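
For reference, the Linux 4.5 system call linked above is copy_file_range(); a
minimal sketch of using it so the data movement is offloaded to the
kernel/filesystem instead of being pumped through the caller's buffers:

    /* Sketch: offloaded copy via copy_file_range() (Linux >= 4.5). The raw
     * syscall is used in case the glibc wrapper is not available. */
    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    static ssize_t
    offload_copy (int fd_in, int fd_out, size_t len)
    {
            size_t left = len;

            while (left > 0) {
                    ssize_t n = syscall (SYS_copy_file_range, fd_in, NULL,
                                         fd_out, NULL, left, 0);
                    if (n <= 0)
                            return n;  /* error or EOF: caller can fall back
                                          to an ordinary read+write loop */
                    left -= (size_t) n;
            }

            return (ssize_t) len;
    }

On backends that support reflinks, the same request can complete as a
metadata-only clone, which is the near-instant case mentioned above for
virtual machine images.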

Last week I shared an idea about doing server-side-copy with a few
developers. I plan to send out a bit more details to the devel list
later this week. Feedback by email, or in person at the Gluster Summit
next week would be welcome.

A first iteration would trigger a server-side-copy to a normal gfapi
application running as a service inside the storage environment. This
service will just do a read+write from 'localhost' to whatever bricks
contain the destination file. Further optimizations by reflinking and
other techniques should be possible to add later on.
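
A minimal sketch of that first iteration, assuming a plain gfapi client running
inside the trusted storage pool; it uses only existing libgfapi calls, and the
buffer size and error handling are illustrative:

    /* Sketch of the read+write approach described above: data moves
     * brick -> service -> brick inside the storage network, never over
     * the client's link. Only existing gfapi calls are used. */
    #include <glusterfs/api/glfs.h>
    #include <fcntl.h>

    static int
    copy_within_volume (glfs_t *fs, const char *src, const char *dst)
    {
            char       buf[128 * 1024];        /* illustrative chunk size */
            ssize_t    n   = 0;
            glfs_fd_t *in  = glfs_open (fs, src, O_RDONLY);
            glfs_fd_t *out = glfs_creat (fs, dst, O_WRONLY | O_TRUNC, 0644);

            if (!in || !out)
                    n = -1;

            while (n >= 0 && (n = glfs_read (in, buf, sizeof (buf), 0)) > 0)
                    if (glfs_write (out, buf, n, 0) != n)
                            n = -1;

            if (in)
                    glfs_close (in);
            if (out)
                    glfs_close (out);

            return (n < 0) ? -1 : 0;
    }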

This is a preview of what I currently have in an etherpad:
  https://public.pad.fsfe.org/p/gluster-server-side-copy?useMonospaceFont=true

Niels


> 
> Regards,
> 
> Ric
> 
> > 
> > > Worth looking at how we can make 

Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-27 Thread Ric Wheeler

Hi Poornima,

I think that the goal would be to add support for the SMB-specific copy offload
command in Samba. NFS also has a protocol-specific one. This is a matter of
protocol support.


On the Samba server side (or NFS Ganesha side), we would get a client copy 
offload request that we could handle without shipping data over the wire to the 
client.


Under Samba or NFS Ganesha on the server side, using copy offload for the local 
XFS file system is a different problem I think.


Regards,
Ric


On 09/27/2016 11:02 AM, Poornima Gurusiddaiah wrote:

W.r.t. Samba consuming this, it requires a great deal of code change in Samba.
Currently Samba has no concept of getting a buffer from the underlying file
system; the file system comes into the picture only at the last layer (the
gluster plugin), where system calls are replaced by libgfapi calls. Hence, this
is not readily consumable by Samba, and I think the same will be the case with
NFS-Ganesha; I will let the Ganesha folks comment on that.


Regards,
Poornima

- Original Message -

From: "Ric Wheeler" <ricwhee...@gmail.com>
To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Ric Wheeler" 
<rwhee...@redhat.com>
Cc: "Ben England" <bengl...@redhat.com>, "Gluster Devel" <gluster-devel@gluster.org>, 
"Ben Turner"
<btur...@redhat.com>
Sent: Tuesday, September 27, 2016 2:25:40 AM
Subject: Re: [Gluster-devel] libgfapi zero copy write - application in samba, 
nfs-ganesha

On 09/27/2016 08:53 AM, Raghavendra Gowdappa wrote:

- Original Message -

From: "Ric Wheeler" <rwhee...@redhat.com>
To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Saravanakumar Arumugam"
<sarum...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Ben Turner"
<btur...@redhat.com>, "Ben England"
<bengl...@redhat.com>
Sent: Tuesday, September 27, 2016 10:51:48 AM
Subject: Re: [Gluster-devel] libgfapi zero copy write - application in
samba, nfs-ganesha

On 09/27/2016 07:56 AM, Raghavendra Gowdappa wrote:

+Manoj, +Ben turner, +Ben England.

@Perf-team,

Do you think the gains are significant enough, so that smb and
nfs-ganesha
team can start thinking about consuming this change?

regards,
Raghavendra

This is a large gain but I think that we might see even larger gains (a
lot
depends on how we implement copy offload :)).

Can you elaborate on what you mean "copy offload"? If it is the way we
avoid a copy in gfapi (from application buffer), following is the
workflow:



Work flow of zero copy write operation:
--

1) Application requests a buffer of specific size. A new buffer is
allocated from iobuf pool, and this buffer is passed on to application.
 Achieved using "glfs_get_buffer"

2) Application writes into the received buffer, and passes that to
libgfapi, and libgfapi in turn passes the same buffer to underlying
translators. This avoids a memcpy in glfs write
 Achieved using "glfs_zero_write"

3) Once the write operation is complete, Application must take the
responsibilty of freeing the buffer.
 Achieved using "glfs_free_buffer"



Do you've any suggestions/improvements on this? I think Shyam mentioned an
alternative approach (for zero-copy readv I think), let me look up at that
too.

regards,
Raghavendra

Both NFS and SMB support a copy offload that allows a client to produce a new
copy of a file without bringing data over the wire. Both, if I remember
correctly, do a ranged copy within a file.


Yup, this is also referred to as server-side copy; Niels is working on having
this for Gluster.


The key here is that since the data does not move over the wire from server
to
client, we can shift the performance bottleneck to the storage server.

If we have a slow (1GB) link between client and server, we should be able to
do
that copy as if it happened just on the server itself. For a single NFS
server
(not a clustered, scale out server), that usually means we are as fast as the
local file system copy.

Note that there are also servers that simply "reflink" that file, so we have
a
very small amount of time needed on the server to produce that copy.  This
can
be a huge win for say a copy of a virtual machine guest image.

Gluster and other distributed servers won't benefit as much as a local server
would I suspect because of the need to do things internally over our networks
between storage server nodes.

Hope that makes my thoughts clearer?

Here is a link to a brief overview of the new Linux system call:

https://kernelnewbies.org/Linux_4.5#head-6df3d298d8e0afa8e85e1125cc54d5f13b9a0d8c

Note that block devices or pseudo devices can also implement a copy offload.

Regards,

Ric


Worth looking at how we can make use of it.

thanks!

Ric


- Original Message -

From: "Saravanakumar Arumugam" 

Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-27 Thread Poornima Gurusiddaiah
W.r.t. Samba consuming this, it requires a great deal of code change in Samba.
Currently Samba has no concept of getting a buffer from the underlying file
system; the file system comes into the picture only at the last layer (the
gluster plugin), where system calls are replaced by libgfapi calls. Hence, this
is not readily consumable by Samba, and I think the same will be the case with
NFS-Ganesha; I will let the Ganesha folks comment on that.
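
To illustrate the "last layer" point: the gluster plugin essentially swaps a
POSIX call for the corresponding libgfapi call, so the write buffer has already
been allocated by the SMB layers above it. The function below is a simplified
illustration of that shape, not the actual vfs_glusterfs source:

    /* Illustration only: the shape of the last-layer write hook where
     * "system calls are replaced by libgfapi calls". Not actual Samba code. */
    #include <glusterfs/api/glfs.h>

    static ssize_t
    gluster_plugin_pwrite (glfs_fd_t *glfd, const void *data,
                           size_t n, off_t offset)
    {
            /* 'data' was allocated by layers far above this plugin, so the
             * proposed glfs_get_buffer()/glfs_zero_write() path cannot be
             * used here without restructuring those upper layers. */
            return glfs_pwrite (glfd, data, n, offset, 0);
    }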


Regards,
Poornima

- Original Message -
> From: "Ric Wheeler" <ricwhee...@gmail.com>
> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Ric Wheeler" 
> <rwhee...@redhat.com>
> Cc: "Ben England" <bengl...@redhat.com>, "Gluster Devel" 
> <gluster-devel@gluster.org>, "Ben Turner"
> <btur...@redhat.com>
> Sent: Tuesday, September 27, 2016 2:25:40 AM
> Subject: Re: [Gluster-devel] libgfapi zero copy write - application in samba, 
> nfs-ganesha
> 
> On 09/27/2016 08:53 AM, Raghavendra Gowdappa wrote:
> >
> > - Original Message -
> >> From: "Ric Wheeler" <rwhee...@redhat.com>
> >> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Saravanakumar Arumugam"
> >> <sarum...@redhat.com>
> >> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Ben Turner"
> >> <btur...@redhat.com>, "Ben England"
> >> <bengl...@redhat.com>
> >> Sent: Tuesday, September 27, 2016 10:51:48 AM
> >> Subject: Re: [Gluster-devel] libgfapi zero copy write - application in
> >> samba, nfs-ganesha
> >>
> >> On 09/27/2016 07:56 AM, Raghavendra Gowdappa wrote:
> >>> +Manoj, +Ben turner, +Ben England.
> >>>
> >>> @Perf-team,
> >>>
> >>> Do you think the gains are significant enough, so that smb and
> >>> nfs-ganesha
> >>> team can start thinking about consuming this change?
> >>>
> >>> regards,
> >>> Raghavendra
> >> This is a large gain but I think that we might see even larger gains (a
> >> lot
> >> depends on how we implement copy offload :)).
> > Can you elaborate on what you mean "copy offload"? If it is the way we
> > avoid a copy in gfapi (from application buffer), following is the
> > workflow:
> >
> > 
> >
> > Work flow of zero copy write operation:
> > --
> >
> > 1) Application requests a buffer of specific size. A new buffer is
> > allocated from iobuf pool, and this buffer is passed on to application.
> > Achieved using "glfs_get_buffer"
> >
> > 2) Application writes into the received buffer, and passes that to
> > libgfapi, and libgfapi in turn passes the same buffer to underlying
> > translators. This avoids a memcpy in glfs write
> > Achieved using "glfs_zero_write"
> >
> > 3) Once the write operation is complete, Application must take the
> > responsibilty of freeing the buffer.
> > Achieved using "glfs_free_buffer"
> >
> > 
> >
> > Do you've any suggestions/improvements on this? I think Shyam mentioned an
> > alternative approach (for zero-copy readv I think), let me look up at that
> > too.
> >
> > regards,
> > Raghavendra
> 
> Both NFS and SMB support a copy offload that allows a client to produce a new
> copy of a file without bringing data over the wire. Both, if I remember
> correctly, do a ranged copy within a file.
> 

Yup, this is also referred to as server-side copy; Niels is working on having
this for Gluster.

> The key here is that since the data does not move over the wire from server
> to
> client, we can shift the performance bottleneck to the storage server.
> 
> If we have a slow (1GB) link between client and server, we should be able to
> do
> that copy as if it happened just on the server itself. For a single NFS
> server
> (not a clustered, scale out server), that usually means we are as fast as the
> local file system copy.
> 
> Note that there are also servers that simply "reflink" that file, so we have
> a
> very small amount of time needed on the server to produce that copy.  This
> can
> be a huge win for say a copy of a virtual machine guest image.
> 
> Gluster and other distributed servers won't benefit as much as a local server
> would I suspect because of the need to do things internally over our networks
> between storage server nodes.
> 
> Hope that makes my thoughts clearer?
> 
> Here is a link to a brief overview of the new Linux 

Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-27 Thread Raghavendra G
+sachin.

On Tue, Sep 27, 2016 at 11:23 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

>
>
> - Original Message -
> > From: "Ric Wheeler" <rwhee...@redhat.com>
> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Saravanakumar
> Arumugam" <sarum...@redhat.com>
> > Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Ben Turner" <
> btur...@redhat.com>, "Ben England"
> > <bengl...@redhat.com>
> > Sent: Tuesday, September 27, 2016 10:51:48 AM
> > Subject: Re: [Gluster-devel] libgfapi zero copy write - application in
> samba, nfs-ganesha
> >
> > On 09/27/2016 07:56 AM, Raghavendra Gowdappa wrote:
> > > +Manoj, +Ben turner, +Ben England.
> > >
> > > @Perf-team,
> > >
> > > Do you think the gains are significant enough, so that smb and
> nfs-ganesha
> > > team can start thinking about consuming this change?
> > >
> > > regards,
> > > Raghavendra
> >
> > This is a large gain but I think that we might see even larger gains (a
> lot
> > depends on how we implement copy offload :)).
>
> Can you elaborate on what you mean "copy offload"? If it is the way we
> avoid a copy in gfapi (from application buffer), following is the workflow:
>
> 
>
> Work flow of zero copy write operation:
> --
>
> 1) Application requests a buffer of specific size. A new buffer is
> allocated from iobuf pool, and this buffer is passed on to application.
>Achieved using "glfs_get_buffer"
>
> 2) Application writes into the received buffer, and passes that to
> libgfapi, and libgfapi in turn passes the same buffer to underlying
> translators. This avoids a memcpy in glfs write
>Achieved using "glfs_zero_write"
>
> 3) Once the write operation is complete, Application must take the
> responsibilty of freeing the buffer.
>Achieved using "glfs_free_buffer"
>
> 
>
> Do you've any suggestions/improvements on this? I think Shyam mentioned an
> alternative approach (for zero-copy readv I think), let me look up at that
> too.
>
> regards,
> Raghavendra
>
> >
> > Worth looking at how we can make use of it.
> >
> > thanks!
> >
> > Ric
> >
> > >
> > > - Original Message -
> > >> From: "Saravanakumar Arumugam" <sarum...@redhat.com>
> > >> To: "Gluster Devel" <gluster-devel@gluster.org>
> > >> Sent: Monday, September 26, 2016 7:18:26 PM
> > >> Subject: [Gluster-devel] libgfapi zero copy write - application in
> samba,
> > >>nfs-ganesha
> > >>
> > >> Hi,
> > >>
> > >> I have carried out "basic" performance measurement with zero copy
> write
> > >> APIs.
> > >> Throughput of zero copy write is 57 MB/sec vs default write 43 MB/sec.
> > >> ( I have modified Ben England's gfapi_perf_test.c for this. Attached
> the
> > >> same
> > >> for reference )
> > >>
> > >> We would like to hear how samba/ nfs-ganesha who are libgfapi users
> can
> > >> make
> > >> use of this.
> > >> Please provide your comments. Refer attached results.
> > >>
> > >> Zero copy in write patch: http://review.gluster.org/#/c/14784/
> > >>
> > >> Thanks,
> > >> Saravana
> >
> >
>



-- 
Raghavendra G

Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-26 Thread Raghavendra Gowdappa


- Original Message -
> From: "Ric Wheeler" <rwhee...@redhat.com>
> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Saravanakumar Arumugam" 
> <sarum...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Ben Turner" 
> <btur...@redhat.com>, "Ben England"
> <bengl...@redhat.com>
> Sent: Tuesday, September 27, 2016 10:51:48 AM
> Subject: Re: [Gluster-devel] libgfapi zero copy write - application in samba, 
> nfs-ganesha
> 
> On 09/27/2016 07:56 AM, Raghavendra Gowdappa wrote:
> > +Manoj, +Ben turner, +Ben England.
> >
> > @Perf-team,
> >
> > Do you think the gains are significant enough, so that smb and nfs-ganesha
> > team can start thinking about consuming this change?
> >
> > regards,
> > Raghavendra
> 
> This is a large gain but I think that we might see even larger gains (a lot
> depends on how we implement copy offload :)).

Can you elaborate on what you mean by "copy offload"? If it is the way we avoid
a copy in gfapi (from the application buffer), the following is the workflow:



Workflow of the zero-copy write operation:
------------------------------------------

1) The application requests a buffer of a specific size. A new buffer is
allocated from the iobuf pool, and this buffer is handed to the application.
   Achieved using "glfs_get_buffer"

2) The application writes into the received buffer and passes it to
libgfapi, which in turn passes the same buffer to the underlying
translators. This avoids the memcpy done in glfs_write.
   Achieved using "glfs_zero_write"

3) Once the write operation is complete, the application must take
responsibility for freeing the buffer.
   Achieved using "glfs_free_buffer"
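
Put together, the three steps look roughly like this from the application side
(the signatures of the three new calls are taken from the description above and
are assumptions, not the final API):

    /* Sketch of the three-step zero-copy write described above; the
     * glfs_get_buffer()/glfs_zero_write()/glfs_free_buffer() signatures
     * are assumed from the description. */
    #include <glusterfs/api/glfs.h>
    #include <string.h>

    static int
    zero_copy_write (glfs_t *fs, glfs_fd_t *fd, const char *payload, size_t len)
    {
            void    *buf;
            ssize_t  ret;

            /* 1) request a buffer from the iobuf pool */
            buf = glfs_get_buffer (fs, len);              /* assumed */
            if (!buf)
                    return -1;

            /* 2) fill it and hand the same buffer down the xlator stack */
            memcpy (buf, payload, len);
            ret = glfs_zero_write (fd, buf, len);         /* assumed */

            /* 3) the application frees the buffer once the write completes */
            glfs_free_buffer (fs, buf);                   /* assumed */

            return (ret == (ssize_t) len) ? 0 : -1;
    }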



Do you have any suggestions/improvements on this? I think Shyam mentioned an
alternative approach (for zero-copy readv, I think); let me look that up too.

regards,
Raghavendra

> 
> Worth looking at how we can make use of it.
> 
> thanks!
> 
> Ric
> 
> >
> > - Original Message -
> >> From: "Saravanakumar Arumugam" <sarum...@redhat.com>
> >> To: "Gluster Devel" <gluster-devel@gluster.org>
> >> Sent: Monday, September 26, 2016 7:18:26 PM
> >> Subject: [Gluster-devel] libgfapi zero copy write - application in samba,
> >>nfs-ganesha
> >>
> >> Hi,
> >>
> >> I have carried out "basic" performance measurement with zero copy write
> >> APIs.
> >> Throughput of zero copy write is 57 MB/sec vs default write 43 MB/sec.
> >> ( I have modified Ben England's gfapi_perf_test.c for this. Attached the
> >> same
> >> for reference )
> >>
> >> We would like to hear how samba/ nfs-ganesha who are libgfapi users can
> >> make
> >> use of this.
> >> Please provide your comments. Refer attached results.
> >>
> >> Zero copy in write patch: http://review.gluster.org/#/c/14784/
> >>
> >> Thanks,
> >> Saravana
> 
> 


Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-26 Thread Ric Wheeler

On 09/27/2016 07:56 AM, Raghavendra Gowdappa wrote:

+Manoj, +Ben turner, +Ben England.

@Perf-team,

Do you think the gains are significant enough, so that smb and nfs-ganesha team 
can start thinking about consuming this change?

regards,
Raghavendra


This is a large gain but I think that we might see even larger gains (a lot 
depends on how we implement copy offload :)).


Worth looking at how we can make use of it.

thanks!

Ric



- Original Message -

From: "Saravanakumar Arumugam" 
To: "Gluster Devel" 
Sent: Monday, September 26, 2016 7:18:26 PM
Subject: [Gluster-devel] libgfapi zero copy write - application in samba,   
nfs-ganesha

Hi,

I have carried out "basic" performance measurement with zero copy write APIs.
Throughput of zero copy write is 57 MB/sec vs default write 43 MB/sec.
( I have modified Ben England's gfapi_perf_test.c for this. Attached the same
for reference )

We would like to hear how samba/ nfs-ganesha who are libgfapi users can make
use of this.
Please provide your comments. Refer attached results.

Zero copy in write patch: http://review.gluster.org/#/c/14784/

Thanks,
Saravana




Re: [Gluster-devel] libgfapi zero copy write - application in samba, nfs-ganesha

2016-09-26 Thread Raghavendra Gowdappa
+Manoj, +Ben Turner, +Ben England.

@Perf-team,

Do you think the gains are significant enough that the smb and nfs-ganesha teams
can start thinking about consuming this change?

regards,
Raghavendra
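
(For reference, the numbers quoted below work out to 57/43, roughly 1.33, i.e.
about a 33% improvement in write throughput for this test.)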

- Original Message -
> From: "Saravanakumar Arumugam" 
> To: "Gluster Devel" 
> Sent: Monday, September 26, 2016 7:18:26 PM
> Subject: [Gluster-devel] libgfapi zero copy write - application in samba, 
> nfs-ganesha
> 
> Hi,
> 
> I have carried out "basic" performance measurement with zero copy write APIs.
> Throughput of zero copy write is 57 MB/sec vs default write 43 MB/sec.
> ( I have modified Ben England's gfapi_perf_test.c for this. Attached the same
> for reference )
> 
> We would like to hear how samba/ nfs-ganesha who are libgfapi users can make
> use of this.
> Please provide your comments. Refer attached results.
> 
> Zero copy in write patch: http://review.gluster.org/#/c/14784/
> 
> Thanks,
> Saravana
> 
> 
> 