Re: [Gluster-users] Bitrot strange behavior

2018-04-17 Thread Sweta Anandpara

Hi Cedric,

Any file is picked up for signing by the bitd process only after a 
predetermined wait of 120 seconds. This default value is captured in the 
volume option 'features.expiry-time' and is configurable - in your case, 
it can be set to 0 or 1.
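
For example, assuming the volume name 'vol1' from your output below, the
wait can be shortened with the usual volume-set command (the value is in
seconds) and verified afterwards:

  gluster volume set vol1 features.expiry-time 1
  gluster volume get vol1 features.expiry-time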


Point 2 is correct. A file corrupted before its bitrot signature is 
generated will not be detected by the scrubber. Recovering such a file 
requires admin/manual intervention to explicitly heal it.
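
For completeness, the scrubber's view can be checked on the volume itself;
in the corrupted-before-signing case described above it should report
nothing, which matches what you observed (volume name taken from your
'gluster volume info' output below):

  gluster volume bitrot vol1 scrub status     # list scrubbed files and any flagged corruption
  gluster volume bitrot vol1 scrub ondemand   # start a scrub run now instead of waiting for the hourly schedule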


-Sweta

On 04/16/2018 10:42 PM, Cedric Lemarchand wrote:

Hello,

I am playing around with the bitrot feature and have some questions:

1. When a file is created, the "trusted.bit-rot.signature" attribute
only appears approximately 120 seconds after the file's creation
(the cluster is idle and there is only one file living on it). Why?
Is there a way to have this attribute generated at the same time as
the file is created?

2. Corrupting a file (adding a 0 locally on a brick) before the
"trusted.bit-rot.signature" attribute is created does not produce any
warning: its signature differs from the two other copies on the other
bricks. Starting a scrub did not show up anything. I would have thought
that Gluster compares signatures between bricks for this particular use
case, but it seems the check is only local, so a file corrupted
before its bitrot signature is created stays corrupted, and thus could
be served to clients with bad data?

Gluster 3.12.8 on Debian Stretch, bricks on ext4.

Volume Name: vol1
Type: Replicate
Volume ID: 85ccfaf2-5793-46f2-bd20-3f823b0a2232
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster-01:/data/brick1
Brick2: gluster-02:/data/brick2
Brick3: gluster-03:/data/brick3
Options Reconfigured:
storage.build-pgfid: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
features.bitrot: on
features.scrub: Active
features.scrub-throttle: aggressive
features.scrub-freq: hourly

Cheers,

Cédric
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] bit-rot resolution

2017-02-28 Thread Sweta Anandpara
Bitrot has a scrub process which detects corrupted files. Once a 
corruption is detected, it is up to the user to follow the sequence of 
steps [1] to trigger a heal. Having said that, client access to the file 
is not impacted, as Gluster continues to serve data from the good copy.

Steps remain the same irrespective of a replica 2 or replica 3 volume.

[1] 
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/ch20s03.html
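
In outline, the documented recovery in [1] boils down to removing the bad
copy from the affected brick and letting self-heal rebuild it from a good
replica. A rough sketch - the brick path, file name and volume name below
are purely illustrative, so follow the linked guide for the exact steps:

  # on the node holding the corrupted copy, note its gfid
  getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file
  # remove the bad copy and its gfid hardlink under .glusterfs
  rm /bricks/brick1/path/to/file
  rm /bricks/brick1/.glusterfs/<first two>/<next two>/<full gfid>
  # trigger a heal so the file is recreated from a healthy replica
  gluster volume heal VOLNAME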


Thanks,
Sweta

On 02/28/2017 06:28 PM, Gandalf Corvotempesta wrote:

In a replica 3, what happens in case of bit-rot detection on a file?
Is Gluster smart enough to detect this and automatically heal the
corrupted file from the other replicas?
What about a replica 2? How do you know which copy is right,
server1 or server2, without a quorum?

What if the underlying FS (like ZFS) returns an error in case of
bit-rot? ZFS should return an error if a file is corrupted (and there
is no RAID to recover from), so Gluster should see the file as
missing/corrupted and automatically trigger self-heal from the other
replicas?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] How to read / write in geo-replication slave

2015-07-14 Thread Sweta Anandpara
Hi Milos,

If I get this correctly, this is your configuration below:

* Geo-rep master - Europe
* Geo-rep slave - Asia
* The same web application is installed at both sites; users from Asia 
are redirected to the slave (in Asia), and users from the rest of the 
world are directed to the master (in Europe)

And you want read/write access to be available on the slave as well.

YES, this is possible.
BUT it defeats the purpose of geo-replication. As per the concept of 
disaster recovery, reads/writes should be carried out only at the master, 
and the slave (if used at all) should be limited to a read-only workload. 
In your case, the very first write to the slave will render your 
geo-rep setup out-of-sync, and you would eventually have to delete all 
the (new) content at the slave to bring the master and slave back in sync.

1. If you are okay with this and would still like to go ahead with 
writes at both master and slave, all you would have to do is get the 
slave volume online.
2. Alternative: if you are okay with writes happening at only one of the 
sites at a time, then you could alternate between making Asia the 
master (and Europe the slave) and Asia the slave (and Europe the master), 
following the steps in this section of the Admin guide (see the rough 
sketch after the link):
http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Storage/3/html-single/Administration_Guide/index.html#Promoting_a_Slave_to_Master
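
To give a feel for what that involves, here is a rough sketch of the kind
of commands the guide walks through - the session, host and volume names
below are purely illustrative, so do follow the documented procedure for
the exact sequence on your version:

  # stop the existing geo-rep session between the current master and slave
  gluster volume geo-replication mastervol asia-node::slavevol stop
  # on the (old) slave, enable the options it needs to act as a master
  gluster volume set slavevol geo-replication.indexing on
  gluster volume set slavevol changelog.changelog on

After failing over you would set up a session in the opposite direction to
bring the old master back in sync before switching roles again.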

To reiterate: (2) is fine; (1) is NOT recommended.
Do reach out for any other queries.

Thanks,
Shweta




On 07/14/2015 07:57 PM, Milos Cuculovic - MDPI wrote:
 Hi All,

 I have two GlusterFS servers, one master and one slave.
 They are doing geo-replication.

 I need to read and write on both of them.

 Here are some details:

 2 data centers (One in Europe and one in Asia)
 The same web application is installed on both data centers
 Users from Asia are redirected to the Asian DC
 Users from the rest of the world are redirected to Europe
 The application reads/writes some files on the GlusterFS storage server
 The master is in Europe, and the r/w is fast
 I need people from the Asian DC to also r/w directly in Asia

 Is this possible? If so, any help please?

 I appreciate your help.

 Milos
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster peer status disconnected

2015-04-19 Thread Sweta Anandpara

Hi Shyam,

Were the peers in a connected state anytime *before*?

If yes, then these are the things you can do to troubleshoot and get the 
cluster back to normal (some of which Niels has already mentioned):


1. Check if the nodes are reachable: 'ping <ip-address>'
2. Check if the glusterd daemon is active on all the nodes: 
'service glusterd status'
3. Rule out a firewall issue - flush iptables on the node that is 
disconnected: 'iptables -F' (see the combined sequence below)
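
Run on the disconnected node, those checks would look something like this
(hostname is illustrative):

  ping -c 3 gluster1
  service glusterd status   # or: systemctl status glusterd
  iptables -F               # flushes all rules - re-apply your firewall config afterwards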


Do let us know if this helps.

Regarding *how* you ended up in this situation, as Atin mentioned, it 
would be better to be a little more specific about the things that you 
did at your end.


Thanks,
Shweta

On 04/19/2015 06:45 PM, Niels de Vos wrote:

On Sun, Apr 19, 2015 at 03:04:15PM +0530, Shyam Deshmukh wrote:

Hi,

I am getting following status and my cluster is down due to the same.
Please help.

gluster@gluster1:~$ sudo gluster peer status
Number of Peers: 3

Hostname: gluster3
Uuid: c6f6574b-9779-4635-a7c2-06185a9ae973
State: Peer in Cluster (Disconnected)

Hostname: gluster4
Uuid: c0e2ad61-da65-4e4c-a863-e9e8c4e84710
State: Peer in Cluster (Disconnected)

Hostname: gluster2
Uuid: 45ac276d-9e65-4e32-8abe-51f8ba0241ce
State: Peer in Cluster (Disconnected)
gluster@gluster1:~$

You can probably start by verifying that the hostnames, exactly as listed
above, resolve to their correct IP addresses. If that works fine, check
whether you can connect to port 24007 on those systems.
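
For instance, something along these lines from the node reporting the
disconnects (hostname illustrative; use telnet if nc is not available):

  getent hosts gluster3   # confirm the name resolves to the expected IP
  nc -zv gluster3 24007   # check that glusterd's port is reachable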

This might be a network issue, firewall or glusterd is not running
(anymore) on those systems.

Sometimes it also helps to check on the other systems and see if the
output of 'gluster peer status' is different there.

Good luck!


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

