Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-23 Thread Soumya Koduri



On 09/23/2016 11:48 AM, Poornima Gurusiddaiah wrote:



- Original Message -

From: "Niels de Vos" <nde...@redhat.com>
To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Wednesday, September 21, 2016 3:52:39 AM
Subject: Re: [Gluster-devel] review request - Change the way client uuid is 
built

On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:

Hi all,

[1] might have implications across different components in the stack. Your
reviews are requested.



rpc : Change the way client uuid is built

Problem:
Today the main users of the client uuid are the protocol layers, locks, and leases.
The protocol layers require each client uuid to be unique, even across
connects and disconnects. Locks and leases on the server side also use
the same client uuid, which changes across graph switches and across
file migrations. This makes graph switches and file migrations
tedious for locks and leases.
As of today, lock migration across a graph switch is client driven,
i.e. when a graph switches, the client re-associates all the locks (which
were associated with the old graph's client uuid) with the new graph's
client uuid. This means a flood of fops to get and set locks for each fd.
File migration across bricks becomes even more difficult, as the
client uuid for the same client is different on the other brick.

The same set of issues exists for leases as well.

Hence the solution:
Make the migration of locks and leases during graph switches and file
migrations server driven instead of client driven. This can be achieved by
changing the format of the client uuid.

Client uuid currently:
%s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume
count/reconnect count)

Proposed Client uuid:
"CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
-  CTX_ID: This will be constant per client.
-  GRAPH_ID, PID, HOST, PC_NAME (protocol client name), RECON_NO (setvolume
count)
remain the same.

With this, the first part of the client uuid, CTX_ID+GRAPH_ID, remains
constant across file migration, thus making the migration easier.

Locks and leases store only the first part, CTX_ID+GRAPH_ID, as their
client identification. This means that, when the new graph connects,


Can we assume that CTX_ID+GRAPH_ID shall be unique across clients all
the time? If not, wouldn't we get into issues of clientB's locks/leases
not conflicting with clientA's locks/leases?



the locks and leases xlators should walk through their databases
to update the client id with the new GRAPH_ID. Thus the graph switch
is made server driven and saves a lot of network traffic.


What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
applications? This would be important for NFS-Ganesha failover where one
NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
to another NFS-Ganesha server.


Sharing it across multiple gfapi applications is currently not supported.
Do you mean setting the CTX_ID+GRAPH_ID at the init of the other client,
or during the replay of locks during the failover?
If it's the former, we need an API in gfapi that takes the CTX_ID+GRAPH_ID as
an argument, among other things.

Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
would allow us to add a configuration option to NFS-Ganesha and have the
whole NFS-Ganesha cluster use the same locking/leases.

Ah, ok. The whole cluster will have the same CTX_ID(+GRAPH_ID?), but then
the cleanup logic will not work, as the disconnect cleanup happens as soon as
one of the NFS-Ganesha servers disconnects?


Yes. If we have a uniform ID (CTX_ID+GRAPH_ID?) across clients, we should
keep locks/leases as long as even one client is connected and not clean
them up as part of fd cleanup during disconnects.
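
One way to express that (a minimal sketch of the idea only, not code from
the patch; the struct and function names below are made up for illustration)
would be to reference-count the connections sharing a CTX_ID and release
locks/leases only when the count drops to zero:

#include <stdbool.h>

/* Hypothetical per-CTX_ID bookkeeping: every connected client sharing
 * the same CTX_ID bumps the count; cleanup of its locks/leases is
 * deferred until the last such connection goes away. */
struct ctx_ref {
        char ctx_id[64];
        int  nconns;
};

static void
on_connect (struct ctx_ref *ref)
{
        ref->nconns++;
}

/* Returns true only when it is safe to drop the locks/leases held
 * under this CTX_ID, i.e. no other client is still connected. */
static bool
on_disconnect (struct ctx_ref *ref)
{
        if (ref->nconns > 0)
                ref->nconns--;
        return (ref->nconns == 0);
}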


Thanks,
Soumya



This patch doesn't eliminate the migration that is required during a graph switch;
it is still necessary, but it can be server driven instead of client driven.


Thanks,
Niels




Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
BUG: 1369028
Signed-off-by: Poornima G <pguru...@redhat.com>
Signed-off-by: Susant Palai <spa...@redhat.com>



[1] http://review.gluster.org/#/c/13901/10/

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-23 Thread Poornima Gurusiddaiah


- Original Message -
> From: "Niels de Vos" <nde...@redhat.com>
> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>
> Sent: Wednesday, September 21, 2016 3:52:39 AM
> Subject: Re: [Gluster-devel] review request - Change the way client uuid is 
> built
> 
> On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:
> > Hi all,
> > 
> > [1] might have implications across different components in the stack. Your
> > reviews are requested.
> > 
> > 
> > 
> > rpc : Change the way client uuid is built
> > 
> > Problem:
> > Today the main users of the client uuid are the protocol layers, locks, and
> > leases. The protocol layers require each client uuid to be unique, even
> > across connects and disconnects. Locks and leases on the server side also
> > use the same client uuid, which changes across graph switches and across
> > file migrations. This makes graph switches and file migrations
> > tedious for locks and leases.
> > As of today, lock migration across a graph switch is client driven,
> > i.e. when a graph switches, the client re-associates all the locks (which
> > were associated with the old graph's client uuid) with the new graph's
> > client uuid. This means a flood of fops to get and set locks for each fd.
> > File migration across bricks becomes even more difficult, as the
> > client uuid for the same client is different on the other brick.
> >
> > The same set of issues exists for leases as well.
> >
> > Hence the solution:
> > Make the migration of locks and leases during graph switches and file
> > migrations server driven instead of client driven. This can be achieved by
> > changing the format of the client uuid.
> > 
> > Client uuid currently:
> > %s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume
> > count/reconnect count)
> > 
> > Proposed Client uuid:
> > "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
> > -  CTX_ID: This will be constant per client.
> > -  GRAPH_ID, PID, HOST, PC_NAME (protocol client name), RECON_NO (setvolume
> > count)
> > remain the same.
> >
> > With this, the first part of the client uuid, CTX_ID+GRAPH_ID, remains
> > constant across file migration, thus making the migration easier.
> >
> > Locks and leases store only the first part, CTX_ID+GRAPH_ID, as their
> > client identification. This means that, when the new graph connects,
> > the locks and leases xlators should walk through their databases
> > to update the client id with the new GRAPH_ID. Thus the graph switch
> > is made server driven and saves a lot of network traffic.
> 
> What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
> applications? This would be important for NFS-Ganesha failover where one
> NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
> to another NFS-Ganesha server.
> 
Sharing it across multiple gfapi applications is currently not supported.
Do you mean setting the CTX_ID+GRAPH_ID at the init of the other client,
or during the replay of locks during the failover?
If it's the former, we need an API in gfapi that takes the CTX_ID+GRAPH_ID as
an argument, among other things.
> Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
> would allow us to add a configuration option to NFS-Ganesha and have the
> whole NFS-Ganesha cluster use the same locking/leases.
Ah, ok. The whole cluster will have the same CTX_ID(+GRAPH_ID?), but then
the cleanup logic will not work, as the disconnect cleanup happens as soon as
one of the NFS-Ganesha servers disconnects?

This patch doesn't eliminate the migration that is required during a graph switch;
it is still necessary, but it can be server driven instead of client driven.
> 
> Thanks,
> Niels
> 
> 
> > 
> > Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
> > BUG: 1369028
> > Signed-off-by: Poornima G <pguru...@redhat.com>
> > Signed-off-by: Susant Palai <spa...@redhat.com>
> > 
> > 
> > 
> > [1] http://review.gluster.org/#/c/13901/10/
> > 
> > regards,
> > Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-22 Thread Raghavendra Gowdappa
+Poornima

- Original Message -
> From: "Niels de Vos" <nde...@redhat.com>
> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>
> Sent: Wednesday, September 21, 2016 1:22:39 PM
> Subject: Re: [Gluster-devel] review request - Change the way client uuid is 
> built
> 
> On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:
> > Hi all,
> > 
> > [1] might have implications across different components in the stack. Your
> > reviews are requested.
> > 
> > 
> > 
> > rpc : Change the way client uuid is built
> > 
> > Problem:
> > Today the main users of the client uuid are the protocol layers, locks, and
> > leases. The protocol layers require each client uuid to be unique, even
> > across connects and disconnects. Locks and leases on the server side also
> > use the same client uuid, which changes across graph switches and across
> > file migrations. This makes graph switches and file migrations
> > tedious for locks and leases.
> > As of today, lock migration across a graph switch is client driven,
> > i.e. when a graph switches, the client re-associates all the locks (which
> > were associated with the old graph's client uuid) with the new graph's
> > client uuid. This means a flood of fops to get and set locks for each fd.
> > File migration across bricks becomes even more difficult, as the
> > client uuid for the same client is different on the other brick.
> >
> > The same set of issues exists for leases as well.
> >
> > Hence the solution:
> > Make the migration of locks and leases during graph switches and file
> > migrations server driven instead of client driven. This can be achieved by
> > changing the format of the client uuid.
> > 
> > Client uuid currently:
> > %s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume
> > count/reconnect count)
> > 
> > Proposed Client uuid:
> > "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
> > -  CTX_ID: This will be constant per client.
> > -  GRAPH_ID, PID, HOST, PC_NAME (protocol client name), RECON_NO (setvolume
> > count)
> > remain the same.
> >
> > With this, the first part of the client uuid, CTX_ID+GRAPH_ID, remains
> > constant across file migration, thus making the migration easier.
> >
> > Locks and leases store only the first part, CTX_ID+GRAPH_ID, as their
> > client identification. This means that, when the new graph connects,
> > the locks and leases xlators should walk through their databases
> > to update the client id with the new GRAPH_ID. Thus the graph switch
> > is made server driven and saves a lot of network traffic.
> 
> What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
> applications? This would be important for NFS-Ganesha failover where one
> NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
> to another NFS-Ganesha server.
> 
> Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
> would allow us to add a configuration option to NFS-Ganesha and have the
> whole NFS-Ganesha cluster use the same locking/leases.
> 
> Thanks,
> Niels
> 
> 
> > 
> > Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
> > BUG: 1369028
> > Signed-off-by: Poornima G <pguru...@redhat.com>
> > Signed-off-by: Susant Palai <spa...@redhat.com>
> > 
> > 
> > 
> > [1] http://review.gluster.org/#/c/13901/10/
> > 
> > regards,
> > Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-21 Thread Niels de Vos
On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:
> Hi all,
> 
> [1] might have implications across different components in the stack. Your 
> reviews are requested.
> 
> 
> 
> rpc : Change the way client uuid is built
> 
> Problem:
> Today the main users of the client uuid are the protocol layers, locks, and
> leases. The protocol layers require each client uuid to be unique, even
> across connects and disconnects. Locks and leases on the server side also
> use the same client uuid, which changes across graph switches and across
> file migrations. This makes graph switches and file migrations
> tedious for locks and leases.
> As of today, lock migration across a graph switch is client driven,
> i.e. when a graph switches, the client re-associates all the locks (which
> were associated with the old graph's client uuid) with the new graph's
> client uuid. This means a flood of fops to get and set locks for each fd.
> File migration across bricks becomes even more difficult, as the
> client uuid for the same client is different on the other brick.
>
> The same set of issues exists for leases as well.
>
> Hence the solution:
> Make the migration of locks and leases during graph switches and file
> migrations server driven instead of client driven. This can be achieved by
> changing the format of the client uuid.
> 
> Client uuid currently:
> %s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume 
> count/reconnect count)
> 
> Proposed Client uuid:
> "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
> -  CTX_ID: This will be constant per client.
> -  GRAPH_ID, PID, HOST, PC_NAME (protocol client name), RECON_NO (setvolume
> count)
> remain the same.
>
> With this, the first part of the client uuid, CTX_ID+GRAPH_ID, remains
> constant across file migration, thus making the migration easier.
>
> Locks and leases store only the first part, CTX_ID+GRAPH_ID, as their
> client identification. This means that, when the new graph connects,
> the locks and leases xlators should walk through their databases
> to update the client id with the new GRAPH_ID. Thus the graph switch
> is made server driven and saves a lot of network traffic.

What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
applications? This would be important for NFS-Ganesha failover where one
NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
to another NFS-Ganesha server.

Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
would allow us to add a configuration option to NFS-Ganesha and have the
whole NFS-Ganesha cluster use the same locking/leases.
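
For illustration only: glfs_set_ctx_id() below does not exist in libgfapi
today; it is just a hypothetical sketch of what such a knob might look like
if every NFS-Ganesha head in the cluster were configured with the same
identifier:

#include <glusterfs/api/glfs.h>

/* HYPOTHETICAL: glfs_set_ctx_id() is not a real libgfapi call. It only
 * illustrates the kind of API being asked for here. */
int glfs_set_ctx_id (glfs_t *fs, const char *ctx_id);

static int
setup_export (const char *volname, const char *server,
              const char *shared_ctx_id)
{
        glfs_t *fs = glfs_new (volname);
        if (!fs)
                return -1;

        /* Standard libgfapi initialisation ... */
        glfs_set_volfile_server (fs, "tcp", server, 24007);

        /* ... plus the hypothetical shared-CTX_ID knob, e.g. fed from a
         * new NFS-Ganesha configuration option. */
        glfs_set_ctx_id (fs, shared_ctx_id);

        return glfs_init (fs);
}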

Thanks,
Niels


> 
> Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
> BUG: 1369028
> Signed-off-by: Poornima G 
> Signed-off-by: Susant Palai 
> 
> 
> 
> [1] http://review.gluster.org/#/c/13901/10/
> 
> regards,
> Raghavendra


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] review request - Change the way client uuid is built

2016-09-20 Thread Raghavendra Gowdappa
Hi all,

[1] might have implications across different components in the stack. Your 
reviews are requested.



rpc : Change the way client uuid is built

Problem:
Today the main users of the client uuid are the protocol layers, locks, and leases.
The protocol layers require each client uuid to be unique, even across
connects and disconnects. Locks and leases on the server side also use
the same client uuid, which changes across graph switches and across
file migrations. This makes graph switches and file migrations
tedious for locks and leases.
As of today, lock migration across a graph switch is client driven,
i.e. when a graph switches, the client re-associates all the locks (which
were associated with the old graph's client uuid) with the new graph's
client uuid. This means a flood of fops to get and set locks for each fd.
File migration across bricks becomes even more difficult, as the
client uuid for the same client is different on the other brick.

The same set of issues exists for leases as well.

Hence the solution:
Make the migration of locks and leases during graph switches and file
migrations server driven instead of client driven. This can be achieved by
changing the format of the client uuid.

Client uuid currently:
%s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume count/reconnect 
count)

Proposed Client uuid:
"CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
-  CTX_ID: This will be constant per client.
-  GRAPH_ID, PID, HOST, PC_NAME (protocol client name), RECON_NO (setvolume count)
remain the same.

With this, the first part of the client uuid, CTX_ID+GRAPH_ID, remains
constant across file migration, thus making the migration easier.
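
As a rough sketch (not the actual patch code; the helper name is
illustrative), the new identifier could be assembled like this, where only
ctx_id stays fixed for the lifetime of the client:

#include <stdio.h>
#include <unistd.h>

/* Illustrative only: build the proposed client uuid layout. ctx_id is
 * generated once per client context; graph_id, host, pc_name and
 * recon_no vary exactly as they do today. */
static int
build_client_uid (char *buf, size_t len, const char *ctx_id, int graph_id,
                  const char *host, const char *pc_name, const char *recon_no)
{
        return snprintf (buf, len,
                         "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s",
                         ctx_id, graph_id, (int) getpid (), host, pc_name,
                         recon_no);
}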

Locks and leases store only the first part, CTX_ID+GRAPH_ID, as their
client identification. This means that, when the new graph connects,
the locks and leases xlators should walk through their databases
to update the client id with the new GRAPH_ID. Thus the graph switch
is made server driven and saves a lot of network traffic.
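
A rough sketch of that server-driven walk (simplified placeholders only,
not the real posix-locks/leases structures) could look like:

#include <stdio.h>
#include <string.h>

/* Simplified stand-in for a server-side lock/lease record. */
struct lk_entry {
        char             client_id[256]; /* "CTX_ID:<uuid>-GRAPH_ID:<n>" */
        struct lk_entry *next;
};

/* On reconnect of a known CTX_ID with a new graph id, rewrite the
 * GRAPH_ID part of every matching record in place, so the client never
 * has to replay getlk/setlk for each fd over the network. */
static void
relabel_entries (struct lk_entry *head, const char *ctx_id, int new_graph_id)
{
        char prefix[256];

        snprintf (prefix, sizeof (prefix), "CTX_ID:%s-GRAPH_ID:", ctx_id);

        for (struct lk_entry *e = head; e; e = e->next) {
                if (strncmp (e->client_id, prefix, strlen (prefix)) != 0)
                        continue; /* belongs to some other client */
                snprintf (e->client_id, sizeof (e->client_id),
                          "CTX_ID:%s-GRAPH_ID:%d", ctx_id, new_graph_id);
        }
}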

Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
BUG: 1369028
Signed-off-by: Poornima G 
Signed-off-by: Susant Palai 



[1] http://review.gluster.org/#/c/13901/10/

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel