[Gluster-users] gluster0:group1 not matching up with mounted directory

2016-10-17 Thread Cory Sanders
I have volumes set up like this:
gluster> volume info

Volume Name: machines0
Type: Distribute
Volume ID: f602dd45-ddab-4474-8308-d278768f1e00
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster4:/data/brick1/machines0

Volume Name: group1
Type: Distribute
Volume ID: cb64c8de-1f76-46c8-8136-8917b1618939
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/brick1/group1

Volume Name: backups
Type: Replicate
Volume ID: d7cb93c4-4626-46fd-b638-65fd244775ae
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster3:/data/brick1/backups
Brick2: gluster4:/data/brick1/backups

Volume Name: group0
Type: Distribute
Volume ID: 0c52b522-5b04-480c-a058-d863df9ee949
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster0:/data/brick1/group0

My problem is that when I do a disk free, group1 is filled up:

root@node0:~# df -h
Filesystem            Size  Used  Avail  Use%  Mounted on
udev                   10M     0    10M    0%  /dev
tmpfs                 3.2G  492K   3.2G    1%  /run
/dev/mapper/pve-root   24G   12G    11G   52%  /
tmpfs                 5.0M     0   5.0M    0%  /run/lock
tmpfs                 6.3G   56M   6.3G    1%  /run/shm
/dev/mapper/pve-data   48G  913M    48G    2%  /var/lib/vz
/dev/sda1             495M  223M   248M   48%  /boot
/dev/sdb1             740G  382G   359G   52%  /data/brick1
/dev/fuse              30M   64K    30M    1%  /etc/pve
gluster0:group0       740G  382G   359G   52%  /mnt/pve/group0
16.xx.xx.137:backups  1.9T  1.6T   233G   88%  /mnt/pve/backups
node4:machines0       7.3T  5.1T   2.3T   70%  /mnt/pve/machines0
gluster0:group1       740G  643G    98G   87%  /mnt/pve/group1
gluster2:/var/lib/vz  1.7T  182G   1.5T   11%  /mnt/pve/node2local

When I do a du -h in the respective directories, this is what I get. The
numbers don't match up with what df -h shows: gluster0:group0 shows the right
amount of disk free, but gluster0:group1 is far too fat and does not
correspond to what is actually in /mnt/pve/group1.

root@node0:/mnt/pve/group0# du -h -d 2
0       ./images/2134
0       ./images/8889
6.3G    ./images/134
56G     ./images/140
31G     ./images/153
9.9G    ./images/144
0       ./images/166
29G     ./images/141
9.9G    ./images/152
22G     ./images/142
0       ./images/155
0       ./images/145
18G     ./images/146
25G     ./images/148
24G     ./images/151
0       ./images/156
11G     ./images/143
0       ./images/157
0       ./images/158
0       ./images/159
0       ./images/160
0       ./images/161
0       ./images/162
0       ./images/164
0       ./images/9149
0       ./images/7186
0       ./images/9150
9.7G    ./images/149
29G     ./images/150
0       ./images/9100
0       ./images/9145
17G     ./images/147
51G     ./images/187
12G     ./images/9142
0       ./images/186
0       ./images/184
0       ./images/9167
0       ./images/102
0       ./images/99102
30G     ./images/9153
382G    ./images
0       ./template/iso
0       ./template
0       ./dump
382G    .

root@node0:/mnt/pve/group1/images# du -h -d 2
2.7G    ./9153
9.7G    ./162
9.9G    ./164
11G     ./166
9.6G    ./161
0       ./146
9.8G    ./155
9.8G    ./156
9.9G    ./157
9.7G    ./159
9.9G    ./160
9.9G    ./158
21G     ./185
11G     ./165
0       ./153
11G     ./154
0       ./9167
11G     ./168
11G     ./169
11G     ./167
0       ./9165
11G     ./171
0       ./9171
182G    .

root@node0:/data/brick1# du -h -d2
382G    ./group0/.glusterfs
8.0K    ./group0/images
0       ./group0/template
0       ./group0/dump
382G    ./group0
0       ./group1/.glusterfs
0       ./group1/images
0       ./group1/template
0       ./group1/dump
0       ./group1
382G    .
root@node0:/data/brick1#
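
If it helps the diagnosis: as far as I understand, df on a FUSE mount reflects
the filesystem backing the brick on whichever peer actually hosts it (gluster1
for group1, going by the volume info above), regardless of the server name
used in the mount command. I can cross-check per-brick usage with something
like:

gluster volume status group0 detail
gluster volume status group1 detail

and compare the "Disk Space Free" lines there against the df output above.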

gluster> peer status
Number of Peers: 3

Hostname: 10.0.0.137
Uuid: 92071298-6809-49ff-9d6c-3761c01039ea
State: Peer in Cluster (Connected)

Hostname: 10.0.0.138
Uuid: 040a3b67-c516-4c9b-834b-f7f7470e8dfd
State: Peer in Cluster (Connected)

Hostname: gluster1
Uuid: 71cbcefb-0aea-4414-b88f-11f8954a8be2
State: Peer in Cluster (Connected)
gluster>


gluster> pool list
UUID                                    Hostname    State
92071298-6809-49ff-9d6c-3761c01039ea    10.0.0.137  Connected
040a3b67-c516-4c9b-834b-f7f7470e8dfd    10.0.0.138  Connected
71cbcefb-0aea-4414-b88f-11f8954a8be2    gluster1    Connected
398228da-2300-4bc9-8e66-f4ae06a7c98e    localhost   Connected
gluster>


There are 5 nodes in a ProxMox cluster.

Node0 has a 900GB RAID1 and is primarily responsible for running VMs from 
gluster0:group0   /mnt/pve/group0
Node1 has a 900GB RAID1 and is primarily responsible for running VMs from 
gluster0:group1  /mnt/pve/group1
Node2 is a development machine: gluster2:/var/lib/vz   /mnt/pve/node2local
Node3 has backups: /mnt/pve/backups
Node4 has backups and also is supposed to mirror gluster0:group0 and group1

I think things are off on the configs.


Thanks, I'm a bit of a newbie at gluster.  Wanting to learn.



[Gluster-users] Network bonding

2016-10-17 Thread Thing
Hi,

Is there any performance gain (or can you even do it?) in bonding 2 x 1Gb?
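
For context, this is the kind of setup I have in mind; an untested sketch for
Debian-style ifupdown with the ifenslave package, interface names and the
address being placeholders (802.3ad/LACP also needs switch support):

auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

My understanding is that a single TCP stream still tops out at 1Gb with most
modes, but aggregate throughput across multiple streams can improve.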

regards

Steven
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] opportunist for outreachy

2016-10-17 Thread Ms ms
Hi,

As a part of my initial contribution to the organization, I have done the
following:

   1. Installed gluster on 2 nodes on digitalocean
   2. Specs of the nodes: 2 GB Memory / 40 GB Disk / NYC2 - Ubuntu 16.04.1
   x64
   3. Set up a volume and mounted it
   4. Set up a client machine, and ran the benchmark tests from there (rough
   commands sketched below).
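
A rough sketch of the commands I used (IPs, paths, and the volume name are
placeholders; gluster may need "force" appended when a brick sits on the root
filesystem, as it does on these droplets):

# on node1, after installing glusterfs-server on both nodes
gluster peer probe <node2-ip>
gluster volume create testvol <node1-ip>:/data/brick/testvol <node2-ip>:/data/brick/testvol force
gluster volume start testvol
# on the client machine
mount -t glusterfs <node1-ip>:/testvol /mnt/testvol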

After the setup of the nodes and the client machine, I ran the benchmark
tests from [1]. I found the following issues while creating the benchmark:

   - there were no instructions on the latest ppa for gluster, so I had to
   search on launchpad for Ubuntu
   - mount_point is a required variable which is not accepted properly
   - no instructions on setting up IOZone.
   - IOZone configuration file isn't generated properly for IOZone3.
   - smallfile is configured on all clients; however, the check for
   smallfile is done on localhost, so even though it's already installed it
   gets installed every time
   - error handling is weak on tests
   - the mounted volume name is hard coded for smallfile

I would like to improve the benchmark script. Also, could I make a pull
request for the same?

On fixing the errors and after some research I was able to generate the
following results for the benchmark:

Sequential Writes 64 record size: 549,826 kBps
Sequential Reads 64 record size: 919,270 kBps
Random Writes 64 record size: 278,564 kBps
Random Reads 64 record size: 81,914 kBps
Smallfile Creates 64 file size: 236 files/sec
Smallfile Reads 64 file size: 627 files/sec
Smallfile ls -l 64 file size: 10,693 files/sec

[1] https://github.com/gluster/gbench/tree/master/bench-tests/bt--0001

Cheers,
Soumya.

On 14 October 2016 at 21:27, Shyam  wrote:

> On 10/14/2016 10:48 AM, Manikandan Selvaganesh wrote:
>
>> Hi Soumya,
>>
>> Welcome to the community.
>>
>> Here[1] is the link to the Gluster documentation. I would suggest you
>> google and read a bit about GlusterFS, then get started with the "Quick
>> Start Guide"[2]. Once you have done your setup and played around a bit
>> with the installation and configuration, move on to the "Developers
>> Guide"[3].
>>
>> If you want to get started with code contributions, pick some EasyFix
>> bugs, which can be found here[4]. After that you should have a minimal
>> idea and can explore in more depth and pick the project/component that
>> interests you most. We also have a list of projects[5] already; check
>> whether anything there interests you, and feel free to bring your own
>> ideas as well. These pointers are quite generic for anyone new to the
>> community; if you want to know specifically about Outreachy, someone in
>> the community will surely respond to you shortly.
>>
>
> Let me take the Outreachy part up.
>
> There are 2 projects there, one relating to the documentation, for which
> Manikandan has filled in some links and thoughts. The other being the
> instrumentation tooling around performance.
>
> For the latter, I would suggest that you get a gluster volume up and
> running, and attempt the GlusterBench.py [6] against it, and start with
> reporting the results. Again, Manikandan has covered getting gluster up and
> running. For any questions, or things that you get stuck on when running
> the bench script, post back here and we will help as needed.
>
>
>> If you have queries, please mail us back. Also, we are always available
>> on #gluster-dev
>> and #gluster-meeting in Freenode.
>>
>> All the best :-)
>>
>> [1] https://gluster.readthedocs.io/en/latest/
>>
>> [2] https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/
>>
>> [3] https://gluster.readthedocs.io/en/latest/Developer-guide/Developers-Index/
>>
>> [4] https://gluster.readthedocs.io/en/latest/Developer-guide/Easy-Fix-Bugs/
>>
>> [5] https://gluster.readthedocs.io/en/latest/Developer-guide/Projects/
>>
>
> [6] GlusterBench.py : https://github.com/gluster/gbench/tree/master/bench-tests/bt--0001
>
>
>>
>> --
>> Cheers,
>> Manikandan Selvaganesh.
>>
>> On Fri, Oct 14, 2016 at 7:56 PM, Ms ms wrote:
>>
>> Hi,
>>
>> I'm a research student pursuing my Masters in IIIT-Hyderabad. I am
>> keen on working on Gluster's Outreachy project.
>>
>> I have prior experience in configuring, maintaining and managing
>> systems in an MHRD project. I have completed the required course
>> credits towards my degree and am currently working on my thesis. It
>> would be a great opportunity for me to learn and contribute to the
>> project as well.
>>
>> As I am a bit new to the community, it would be nice if someone could
>> point me to a few useful resources to get me started.
>>
>> Thanks and regards,
>> Soumya
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org

Re: [Gluster-users] [Gluster-devel] Need help in understanding IOZone config file

2016-10-17 Thread Ashish Pandey
++Bhaskar 

- Original Message -

From: "Ashish Pandey"  
To: "Menaka Mohan"  
Cc: "Gluster Users"  
Sent: Monday, October 17, 2016 4:15:02 PM 
Subject: Re: [Gluster-users] [Gluster-devel] Need help in understanding IOZone 
config file 


Keeping Bhaskar in loop as he has done testing on glusterfs with iozone. 


- Original Message -

From: "Menaka Mohan"  
To: gluster-de...@gluster.org 
Sent: Tuesday, October 11, 2016 1:18:13 AM 
Subject: [Gluster-devel] Need help in understanding IOZone config file 



Hi,

I am Menaka M. I am new to this open source world. Kindly help me with the
following query.

I have set up the Gluster development environment with two servers and one
client. I am trying to run the basic bench test on the Gluster cluster from
this GitHub repo . I also have IOZone installed. In that, how do I generate
the clients.ioz file (prerequisite 3)? Does that refer to the file containing
(client_name work_dir path_to_IOZone_on_client)?
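
If so, I guess a minimal clients.ioz would look something like this
(hypothetical hostnames and paths, for iozone's distributed "-+m" mode):

client1 /mnt/glusterfs/iozone-work /usr/bin/iozone
client2 /mnt/glusterfs/iozone-work /usr/bin/iozone

to be passed to iozone via "-+m clients.ioz", if I understand the IOZone
docs correctly.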




I have read multiple blogs on how to analyze the IOZone results and also the
performance testing section in the docs. Kindly help me resolve this
confusion. If I have asked a very basic thing, apologies. I will quickly
learn.
Regards, 

Menaka M 

___ 
Gluster-devel mailing list 
gluster-de...@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel 


___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Unable to reset gluster node after crash as filesystem ran out of space

2016-10-17 Thread Atin Mukherjee
On Tue, Oct 11, 2016 at 11:52 AM, Abeer Mahendroo  wrote:

> Hi all.
>
> We had a strange issue with Gluster 3.8.4 under RHEL 7.2.
>
>
>
> Initially, the partition storing the Gluster bricks ran out of space. We
> tried recovering after expanding the underlying partition. Eventually we
> decided to 'reset' Gluster and create the volume again from scratch. I tried
> purging gluster by running something like:
>
> yum remove -y 'glusterfs*'
> rm -rf /var/lib/glusterd
> rm -rf /etc/gluster*
>
> Reinstalling gluster:
>
> yum install -y glusterfs-server
> systemctl start glusterd
>
> Now a simple peer probe operation crashes the daemon:
>
> $ gluster peer probe 
>
> Connection failed. Please check if gluster daemon is operational.
>

This typically indicates that glusterd is not running. Could you check if the
glusterd instance is running on this node? If not, is there any error message
in the glusterd log file?
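
For example (the log file name and location can vary a bit across versions,
but it is typically under /var/log/glusterfs/):

systemctl status glusterd
journalctl -u glusterd
less /var/log/glusterfs/glusterd.log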


>
> Looks like there is something I missed in the filesystem.
>
> On a clean gluster install on another host,
>
> $ gluster peer probe 
>
> peer probe: failed: Probe returned with Transport endpoint is not connected
>

Same question here: is glusterd running on the host you are trying to probe?
Are the firewalld/iptables rules clean?


>
> Which is expected.
>
> So somehow my clean install is not clean any more. This host I can
> rebuild, but it would be good to know the issue if this occurs on a host
> that cannot easily be rebuilt.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 

--Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Multiple disks per brick

2016-10-17 Thread Angelo Scagnetti
Hi all, I would like to set up 2 Gluster nodes
to use with a VMware server via NFS. With Gluster
I can set up only one volume to share over NFS, so
I would like to create a single Gluster volume
spanning 3 disks per node. Is there a way to achieve
this directly with Gluster? If the answer
is yes, what syntax do I have to use? If the answer
is no, should I go with LVM?

Each Gluster node has 3 disks of 3 TB each.
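
In case it clarifies what I am after, this is roughly what I imagine (an
untested sketch with hypothetical brick paths, mirroring each disk across
the two nodes):

gluster volume create vmstore replica 2 \
  node1:/data/disk1/brick node2:/data/disk1/brick \
  node1:/data/disk2/brick node2:/data/disk2/brick \
  node1:/data/disk3/brick node2:/data/disk3/brick

As far as I understand, with "replica 2" gluster groups consecutive bricks
into mirrored pairs, so the ordering above pairs each disk with its
counterpart on the other node.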

Thank you in advance.

Cheers,
  Angelo
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS and NFS issues

2016-10-17 Thread deZillium
I got the NFS mounts to work, but I can't remember what fixed them. It 
might have something to do with name resolution on one of the gluster 
servers, I changed too many things to remember :-)


NFS mounts are working beautifully now, thanks all.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Check the possibility to incorporate DEBUG info permanently in build

2016-10-17 Thread ABHISHEK PALIWAL
Hi Vijay,

It is quite difficult to provide exact instances, but below are the two most
frequently occurring cases:

1. We get duplicate peer entries in the 'peer status' command.
2. We lose sync between two boards because the gluster mount point is not
present on one of the boards.
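
(For what it's worth, the way we raise log levels today is per volume, for
example:

gluster volume set <volname> diagnostics.brick-log-level DEBUG
gluster volume set <volname> diagnostics.client-log-level DEBUG

and it is exactly this step that is not feasible for us to keep enabled all
the time.)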


Regards,
Abhishek

On Mon, Oct 17, 2016 at 6:40 AM, Vijay Bellur  wrote:

> On 10/14/2016 04:30 AM, ABHISHEK PALIWAL wrote:
>
>> Hi Team,
>>
>> We are seeing many issues in gluster, and we are failing to address
>> most of them due to a lack of information for fault analysis.
>>
>> For many issues, the initial gluster logs unfortunately give very limited
>> information, from which it is not possible to find the root cause or
>> conclude the issue. Enabling the log level to DEBUG every time is not
>> feasible, and a few of the cases are seen very rarely.
>>
>> Hence, I request you to check whether there is a possibility to
>> incorporate the debug information in the build, or whether it is possible
>> to introduce a new debug level that can always be activated.
>>
>> Please come back on this!
>>
>
> Abhishek - please provide specific instances of the nature of logs that
> could have helped you better. The query posted by you is very broad, and
> such broad queries seldom help us in achieving the desired outcome.
>
> Regards,
> Vijay
>



-- 
Regards,
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-10-17 Thread Gandalf Corvotempesta
On 14 Oct 2016 at 17:37, "David Gossage"  wrote:
>
> Sorry to resurrect an old email but did any resolution occur for this or
a cause found?  I just see this as a potential task I may need to also run
through some day and if their are pitfalls to watch for would be good to
know.
>

I think that the issue described in these emails must be addressed in some
way. It's really bad that adding bricks to a cluster leads to data
corruption, as adding bricks is a standard administration task.
I hope that the issue will be detected and fixed asap.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] rot-13 Translator query

2016-10-17 Thread Xavier Hernandez

Hi Ankireddypalle,

On 16/10/16 11:10, Ankireddypalle Reddy wrote:

The encryption xlator is the last one before posix, and it's here that
the data is getting encrypted. When the data is read back, the encrypted
data is returned. Decryption is supposed to happen in the read callback,
which does not seem to be happening. The fact that encrypted data is
getting returned indicates that the data is in turn being returned from
the posix/underlying fs layer. Is it possible for data to be returned by
reading from the underlying fs by any translator other than posix?


It could be because of the quick-read translator. It caches some data from
the beginning of files on lookups (even before an actual open and read is
done on the file), so the first small read sent to the file could return
cached data directly from what was obtained in the lookup fop, without
issuing a read fop.


You would need to handle that case in lookup also.

Anyway, to be sure you should do as Ravi has said and disable all 
performance xlators. In this case all reads should arrive as regular 
reads to your xlator.
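
A hedged example of what that usually looks like (option names as in recent
3.x releases; "myvol" is a placeholder):

gluster volume set myvol performance.quick-read off
gluster volume set myvol performance.read-ahead off
gluster volume set myvol performance.io-cache off
gluster volume set myvol performance.write-behind off
gluster volume set myvol performance.stat-prefetch off
gluster volume set myvol performance.open-behind off

After that, reads should reach the brick stack, and your xlator, unmodified.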


Xavi





Thanks and Regards,

Ram

From: Ravishankar N [mailto:ravishan...@redhat.com]
Sent: Sunday, October 16, 2016 12:19 AM
To: Ankireddypalle Reddy; gluster-users@gluster.org
Subject: Re: [Gluster-users] rot-13 Translator query



On 10/15/2016 08:22 PM, Ankireddypalle Reddy wrote:

Hi,

  I am trying to follow the below document for developing a
translator.



https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/translator-development.md



  I’ve created a replica volume and modified the vol file to
include the rot-13 translator. Below is a snippet from the vol file:



volume myvol-posix
    type storage/posix
    option volume-id b492191e-77a5-4fc3-9394-49218e36dae2
    option directory /brick1/repli
end-volume

volume myvol-rot13
    type encryption/rot-13
    subvolumes myvol-posix
end-volume

volume myvol-trash
    type features/trash
    option trash-internal-op off
    option brick-path /brick1/repli
    option trash-dir .trashcan
    subvolumes myvol-rot13
end-volume

…



The writes are getting intercepted by the translator and the file is
getting encrypted, but the reads don't seem to be getting intercepted
by the translator. I tried setting a breakpoint in the posix_readv
function and attaching the brick daemons to gdb, but posix_readv does
not seem to be getting called on the brick daemon, and the read
completes on the application side.

Can someone please explain how the reads are getting serviced here
without hitting the posix layer.

It could be due to client side caching. I usually disable all
performance xlators (write-behind, read-ahead, io-cache, stat-prefetch,
quick-read, open-behind) when I want to remove caching effects while
debugging. drop-caches also helps.
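
(A hedged reading of that last tip, assuming the kernel page/dentry caches
on the client are meant; run as root before re-testing:

echo 3 > /proc/sys/vm/drop_caches
)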

HTH,
Ravi




Thanks and Regards,

Ram



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-10-17 Thread Kevin Lemonnier
> 
>I see that network.ping-timeout on your setup is 15 seconds and that's
>too low. Could you reconfigure that to 30 seconds?
> 

Yes, I can. I set it to 15 to be sure no browser would time out when trying
to load a website on a frozen VM during the timeout; 15 seemed pretty good
since it just feels like the website was a bit slow, which happens. I guess
30 should still work. Do you think 15 could cause problems? We've had that
on our clusters for a few months already without noticing anything. The
heals are totally transparent now, so I figured I don't really mind if it
heals every time there is a little lag.
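
(For reference, what I would run, assuming the standard volume option:

gluster volume set myvol network.ping-timeout 30

with "myvol" standing in for the affected volume.)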

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-10-17 Thread Kevin Lemonnier
On Fri, Oct 14, 2016 at 10:37:03AM -0500, David Gossage wrote:
>Sorry to resurrect an old email but did any resolution occur for this or a
>cause found? I just see this as a potential task I may need to also run
>through some day and if there are pitfalls to watch for it would be good to
>know.

Unfortunately no. I ended up restoring almost all the VMs from backups, then
we created two small clusters instead of a big one, and I guess we'll keep
creating 3-brick clusters when needed for now.

Maybe just make sure you are running > 3.7.12, and if possible test it
on a non-production environment first. Still, it's hard to replicate the
same load for tests.
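
For anyone finding this later: the operation in question was the usual
expansion sequence, roughly like this (volume and brick names are
placeholders; on a replicated volume new bricks are added in multiples of
the replica count):

gluster volume add-brick myvol node4:/data/brick1/myvol
gluster volume rebalance myvol start
gluster volume rebalance myvol status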

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users