Re: [Gluster-users] [Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)

2017-12-11 Thread Niels de Vos
Many thanks for testing, Alastair! The glusterfs-3.12.3-1 packages have now been marked for release into the normal gluster-3.12 repository of the CentOS Storage SIG. I expect that the packages will land on the mirrors during the day tomorrow. Niels On Mon, Dec 11, 2017 at 04:24:21PM -0500,

[Gluster-users] Gluster 3.13.0-1.el7 Packages Tested

2017-12-11 Thread Sam McLeod
Hi Niels, FYI - tested the install of the 3.13.0-1.el7 packages and all seems well with the install under CentOS 7. yum install -y https://cbs.centos.org/kojifiles/packages/centos-release-gluster313/1.0/1.el7.centos/noarch/centos-release-gluster313-1.0-1.el7.centos.noarch.rpm
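For anyone repeating this check, a minimal continuation of the install test might look like the following (a sketch only; package names per the CentOS Storage SIG, and the `glusterd` service name assumes a systemd-based CentOS 7 host):

```shell
# After enabling the centos-release-gluster313 repo as in the mail above,
# pull in the client and server packages
yum install -y glusterfs glusterfs-server

# Start the management daemon and confirm the installed version
systemctl enable --now glusterd
gluster --version
```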

Re: [Gluster-users] reset-brick command questions

2017-12-11 Thread Ashish Pandey
Hi Jorick, 1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. One more case could be where you just want to change the hostname to the IP address of that
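For reference, the two-step form of the command looks roughly like this (`VOLNAME`, `HOSTNAME`, and `BRICKPATH` are placeholders). The source and target brick are identical in the disk-replacement case, which is why `HOSTNAME:BRICKPATH` appears twice; in the hostname-change case the two would differ:

```shell
# Take the brick offline before touching the disk
gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH start

# ... replace or reformat the disk, recreate the brick directory ...

# Bring the brick back; source brick first, then target brick
gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit force
```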

Re: [Gluster-users] active/active failover

2017-12-11 Thread Alex Chekholko
Hi Stefan, I think what you propose will work, though you should test it thoroughly. I think more generally, "the GlusterFS way" would be to use 2-way replication instead of a distributed volume; then you can lose one of your servers without outage. And re-synchronize when it comes back up.
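A sketch of the replicated layout Alex describes (hostnames and paths are illustrative, not from the thread):

```shell
# Two-way replica: every file lives on both servers,
# so either server can fail without an outage
gluster volume create gv0 replica 2 \
    server1:/export/brick1/gv0 \
    server2:/export/brick1/gv0
gluster volume start gv0
```

Note that a plain replica-2 volume is prone to split-brain; adding an arbiter brick (replica 3 arbiter 1) is the commonly recommended variant.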

Re: [Gluster-users] [Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)

2017-12-11 Thread Alastair Neil
Niels, I don't know if this is adequate, but I did run a simple smoke test today on the 3.12.3-1 bits. I installed the 3.12.3-1 bits on 3 fresh-install CentOS 7 VMs, created 2G image files and wrote an XFS filesystem on them on each system, mounted each under /export/brick1, and created
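The loopback-brick setup Alastair describes can be reproduced roughly like this (paths and VM names are illustrative; the mail truncates before the volume type, so the replica-3 create shown in the comments is an assumption):

```shell
# On each of the three VMs: back a brick with a 2G loopback XFS image
truncate -s 2G /srv/brick1.img
mkfs.xfs /srv/brick1.img
mkdir -p /export/brick1
mount -o loop /srv/brick1.img /export/brick1
mkdir -p /export/brick1/gv0

# Then, from one node, peer the others and create the volume:
#   gluster peer probe vm2 && gluster peer probe vm3
#   gluster volume create gv0 replica 3 vm1:/export/brick1/gv0 \
#       vm2:/export/brick1/gv0 vm3:/export/brick1/gv0
#   gluster volume start gv0
```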

[Gluster-users] active/active failover

2017-12-11 Thread Stefan Solbrig
Dear all, I'm rather new to GlusterFS but have some experience running larger Lustre and BeeGFS installations. These filesystems provide active/active failover. Now, I discovered that I can also do this in GlusterFS, although I didn't find detailed documentation about it. (I'm using glusterfs

[Gluster-users] reset-brick command questions

2017-12-11 Thread Jorick Astrego
Hi, I'm trying to use the reset-brick command, but it's not completely clear to me: > Introducing reset-brick command > Notes for users: The reset-brick command provides support to > reformat/replace the disk(s) represented by a brick within a volume. > This is helpful when a disk

Re: [Gluster-users] How large the Arbiter node?

2017-12-11 Thread Martin Toth
Hi, there is a good suggestion here: http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#arbiter-bricks-sizing Since the arbiter brick does not store file

[Gluster-users] How large the Arbiter node?

2017-12-11 Thread Nux!
Hi, I see gluster now recommends the use of an arbiter brick in "replica 2" situations. How large should this brick be? I understand only metadata is to be stored. Let's say total storage usage will be 5TB of mixed size files. How large should such a brick be? -- Sent from the Delta quadrant
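Since the arbiter brick stores only metadata, its size is driven by file count rather than data volume. A back-of-the-envelope estimate (both the ~4 KiB per-file overhead and the 1 MiB average file size are assumptions; adjust for the actual workload):

```shell
# 5 TiB of data, assumed 1 MiB average file size,
# assumed ~4 KiB metadata footprint per file on the arbiter
TOTAL_BYTES=$((5 * 1024**4))
AVG_FILE=$((1024**2))
PER_FILE=$((4 * 1024))

NUM_FILES=$((TOTAL_BYTES / AVG_FILE))
ARBITER_GIB=$((NUM_FILES * PER_FILE / 1024**3))
echo "files=$NUM_FILES arbiter=${ARBITER_GIB}GiB"
# prints: files=5242880 arbiter=20GiB
```

Smaller average file sizes push the estimate up sharply, since the per-file cost is fixed.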

[Gluster-users] Release 3.12.4 : Scheduled for the 12th of December

2017-12-11 Thread Jiffin Tony Thottan
Hi, it's time to prepare the 3.12.4 release, which falls on the 10th of each month and hence would be 12-12-2017 this time around. This mail is to call out the following: 1) Are there any pending *blocker* bugs that need to be tracked for 3.12.4? If so, mark them against the provided tracker