Hi Kotresh,

 

Thanks for the response!

 

After running more tests with this specific geo-replication configuration, I
noticed that the file extended attributes trusted.gfid and
trusted.gfid2path.*** are synced as well during geo-replication.

I'm concerned about the trusted.gfid attribute because its value has to be
unique within a glusterfs cluster.

But this is not the case in my tests: files on the master and slave volumes
have the same trusted.gfid attribute.
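For example, the attribute can be compared directly on the brick backends
(a sketch of how I checked; the brick paths and file name below are
placeholders from my setup, not literal values):

```shell
# Read the trusted.gfid xattr of the same file on the master and slave
# bricks (run as root on the respective nodes; paths are examples).
getfattr -n trusted.gfid -e hex /bricks/master-vol/brick1/testfile
getfattr -n trusted.gfid -e hex /bricks/slave-vol/brick1/testfile
# In my tests both commands report the same 16-byte hex value.
```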

To handle this issue, the geo-replication configuration option sync-xattrs =
false was tested on glusterfs version 3.12.3.

After changing the option from true to false, geo-replication was stopped, the
volume was stopped, glusterd was stopped and started again, the volume was
started, and geo-replication was started again.
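For reference, the sequence looked roughly like this (the volume and slave
names are placeholders; note that the CLI may spell the option sync_xattrs):

```shell
# Stop geo-replication, change the option, bounce the volume and glusterd,
# then bring everything back up (names are examples from my setup).
gluster volume geo-replication master-vol slave-host::slave-vol stop
gluster volume geo-replication master-vol slave-host::slave-vol config sync_xattrs false
gluster volume stop master-vol
systemctl stop glusterd
systemctl start glusterd
gluster volume start master-vol
gluster volume geo-replication master-vol slave-host::slave-vol start
```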

It had no effect on the syncing of trusted.gfid.

 

How critical is it to have duplicated gfids? Can volume data be corrupted in
this case somehow?

 

Best regards,

 

Viktor Nosov 

 

From: Kotresh Hiremath Ravishankar [mailto:[email protected]] 
Sent: Tuesday, January 16, 2018 7:59 PM
To: Viktor Nosov
Cc: Gluster Users; [email protected]
Subject: Re: [Gluster-users] Deploying geo-replication to local peer

 

Hi Viktor,

Answers inline

 

On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov <[email protected]> wrote:

Hi,

I'm looking for a glusterfs feature that can be used to transform data between
volumes of different types provisioned on the same nodes.
It could be, for example, a transformation from a disperse to a distributed
volume.
One possible option is to invoke geo-replication between the volumes. It seems
to work properly.
But I'm concerned about this requirement from the Administration Guide for Red
Hat Gluster Storage 3.3 (10.3.3. Prerequisites):

"Slave node must not be a peer of the any of the nodes of the Master trusted
storage pool."

     This doesn't limit the geo-rep feature in any way. It's a recommendation.
You can go ahead and use it.

 

Is this restriction set to limit usage of geo-replication to disaster-recovery
scenarios only, or is there a problem with data synchronization between the
master and slave volumes?

Anybody has experience with this issue?

Thanks for any information!

Viktor Nosov


_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users




-- 

Thanks and Regards,

Kotresh H R

