Re: [Gluster-users] [Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)

2017-12-11 Thread Niels de Vos
Many thanks for testing, Alastair!

The glusterfs-3.12.3-1 packages have now been marked for release into
the normal gluster-3.12 repository of the CentOS Storage SIG. I expect
that the packages will land on the mirrors during the day tomorrow.

Niels


On Mon, Dec 11, 2017 at 04:24:21PM -0500, Alastair Neil wrote:
> Niels, I don't know if this is adequate, but I did run a simple smoke test
> today on the 3.12.3-1 bits. I installed the 3.12.3-1 bits on 3 freshly
> installed CentOS 7 VMs,
> 
> created a 2G image file and wrote an XFS filesystem on it on each
> system,
> 
> mounted each under /export/brick1, and created /export/brick1/test on each
> node,
> probed the two other systems from one node (a), and created a replica 3
> volume using the bricks at /export/brick1/test on each node,
> 
> started the volume and mounted it under /mnt/gluster test on node a,
> 
> did some brief tests using dd into the mount point on node a; all seemed
> fine - no errors, nothing unexpected.
> 
> 
> 
> 
> 
> 
> 
> 
> On 23 October 2017 at 17:42, Niels de Vos  wrote:
> 
> > On Mon, Oct 23, 2017 at 02:12:53PM -0400, Alastair Neil wrote:
> > > Any idea when these packages will be in the CentOS mirrors? There is no
> > > sign of them on download.gluster.org.
> >
> > We're waiting for someone other than me to test the new packages at
> > least a little. Installing the packages and running something on top of a
> > Gluster volume is already sufficient; just describe a bit what was
> > tested. Once a confirmation is sent that it works for someone, we can
> > mark the packages for release to the mirrors.
> >
> > Getting the (unsigned) RPMs is easy, run this on your test environment:
> >
> >   # yum --enablerepo=centos-gluster312-test update glusterfs
> >
> > This does not restart the brick processes, so I/O is not affected by
> > the installation. Make sure to restart the processes (or just reboot)
> > and do whatever validation you deem sufficient.
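> >
> > For example (just a sketch; the exact unit and volume names depend on your
> > setup), the restart could be:
> >
> >   # systemctl restart glusterd
> >   # gluster volume start VOLNAME force    # restarts any bricks that are down
> >
> > or simply reboot the node.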
> >
> > Thanks,
> > Niels
> >
> >
> > >
> > > On 13 October 2017 at 08:45, Jiffin Tony Thottan 
> > > wrote:
> > >
> > > > The Gluster community is pleased to announce the release of Gluster
> > 3.12.2
> > > > (packages available at [1,2,3]).
> > > >
> > > > Release notes for the release can be found at [4].
> > > >
> > > > We still carry the following major issues, which are reported in the
> > > > release notes:
> > > >
> > > > 1.) Expanding a gluster volume that is sharded may cause file
> > > > corruption
> > > >
> > > > Sharded volumes are typically used for VM images; if such volumes are
> > > > expanded or possibly contracted (i.e. add/remove bricks and rebalance),
> > > > there are reports of VM images getting corrupted.
> > > >
> > > > The last known cause for corruption (Bug #1465123) has a fix in this
> > > > release. As further testing is still in progress, the issue is retained
> > > > as a major issue.
> > > >
> > > > Status of this bug can be tracked here, #1465123
> > > >
> > > >
> > > > 2.) Gluster volume restarts fail if the sub-directory export feature
> > > > is in use. Status of this issue can be tracked here: #1501315
> > > >
> > > > 3.) Mounting a gluster snapshot will fail when attempting a FUSE-based
> > > > mount of the snapshot. So for current users, it is recommended to only
> > > > access snapshots via the
> > > >
> > > > ".snaps" directory on a mounted gluster volume. Status of this issue
> > > > can be tracked here: #1501378
> > > >
> > > > Thanks,
> > > >  Gluster community
> > > >
> > > >
> > > > [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.2/
> > > > 
> > > > [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
> > > > 
> > > > [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> > > >
> > > > [4] Release notes: https://gluster.readthedocs.io/en/latest/release-notes/3.12.2/
> > > > 
> > > >
> > > > ___
> > > > Gluster-devel mailing list
> > > > gluster-de...@gluster.org
> > > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> > > >
> >
> > > ___
> > > Gluster-devel mailing list
> > > gluster-de...@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> >
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster 3.13.0-1.el7 Packages Tested

2017-12-11 Thread Sam McLeod
Hi Niels,

FYI - tested the install of the 3.13.0-1.el7 packages and all seems well with 
the install under CentOS 7.

yum install -y https://cbs.centos.org/kojifiles/packages/centos-release-gluster313/1.0/1.el7.centos/noarch/centos-release-gluster313-1.0-1.el7.centos.noarch.rpm
yum --enablerepo=centos-gluster* upgrade -y
systemctl daemon-reload
systemctl restart glusterd

Cluster and clients upgraded, working as expected.
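For anyone repeating this, a minimal post-upgrade sanity check could be along
these lines (a sketch only, not a literal transcript of what was run):

gluster --version
gluster volume status
gluster volume info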

--
Sam McLeod
https://smcleod.net 
https://twitter.com/s_mcleod

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] reset-brick command questions

2017-12-11 Thread Ashish Pandey
Hi Jorick, 

1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just 
want to replace the disk and get it back into the volume. 

The reset-brick command can be used in different scenarios. One more case could be 
where you just want to change the hostname of the node hosting the bricks to its 
IP address. 
In this case too, you follow the same steps but provide the IP address: 

gluster volume reset-brick glustervol gluster1:/gluster/brick1/glusterbrick1 
"gluster1 IP address" :/gluster/brick1/glusterbrick1 commit force 

Since this command covers different cases, we chose, to keep the command uniform, 
to require the brick path twice. 

Coming to your case, I think you followed all the steps correctly and it should 
have been successful. 
Please provide the gluster volume status of the volume, and also try both "commit 
force" and plain "commit" and let us know the result. 
You may have to raise a bug if it does not work, so be prepared to provide the 
glusterd logs in /var/log/glusterfs/. 
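
To spell that out, the sequence to try would be roughly the following (using the 
volume and brick names from your example; a sketch, not verified output): 

gluster volume status glustervol 
gluster volume reset-brick glustervol gluster1:/gluster/brick1/glusterbrick1 start 
# reformat the disk, remount it, recreate the brick directory, then: 
gluster volume reset-brick glustervol gluster1:/gluster/brick1/glusterbrick1 gluster1:/gluster/brick1/glusterbrick1 commit 
# and, if plain commit is rejected: 
gluster volume reset-brick glustervol gluster1:/gluster/brick1/glusterbrick1 gluster1:/gluster/brick1/glusterbrick1 commit force 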

-- 
Ashish 


- Original Message -

From: "Jorick Astrego"  
To: gluster-users@gluster.org 
Sent: Monday, December 11, 2017 7:32:53 PM 
Subject: [Gluster-users] reset-brick command questions 



Hi, 

I'm trying to use the reset-brick command, but it's not completely clear to me: 


Introducing reset-brick command 


Notes for users: The reset-brick command provides support to reformat/replace 
the disk(s) represented by a brick within a volume. This is helpful when a disk 
goes bad, etc. 



That's what I need; the use case is that a disk goes bad on a disperse gluster node 
and we want to replace it with a new disk. 




Start reset process - 
gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH start 




This works; I can see in gluster volume status that the brick is not there anymore. 




The above command kills the respective brick process. Now the brick can be 
reformatted. 

To restart the brick after modifying configuration - 
gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit 






If the brick was killed to replace the brick with the same brick path, restart with 
the following command - 
gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit 
force 




This fails. I unmounted the gluster path, formatted a fresh disk, mounted it on 
the old mount point, and created the brick subdir on it. 



gluster volume reset-brick glustervol gluster1:/gluster/brick1/glusterbrick1 
gluster1:/gluster/brick1/glusterbrick1 commit force 


volume reset-brick: failed: Source brick must be stopped. Please use gluster 
volume reset-brick   start. 



Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want 
to replace the disk and get it back into the volume. 











Met vriendelijke groet, With kind regards, 

Jorick Astrego 

Netbulae Virtualization Experts 

Tel: 053 20 30 270 | i...@netbulae.eu | Staalsteden 4-3A | KvK 08198180 
Fax: 053 20 30 271 | www.netbulae.eu | 7547 TA Enschede | BTW NL821234584B01 



___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-users 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] active/active failover

2017-12-11 Thread Alex Chekholko
Hi Stefan,

I think what you propose will work, though you should test it thoroughly.

I think more generally, "the GlusterFS way" would be to use 2-way
replication instead of a distributed volume; then you can lose one of your
servers without outage.  And re-synchronize when it comes back up.
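
For illustration, with the names from your layout (a sketch only, and note that
replica 2 without an arbiter is prone to split-brain), the replicated volume
would be created with something like:

  gluster volume create g2 replica 2 qlogin:/glust/castor/brick gluster2:/glust/pollux/brick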

Chances are that if you weren't using the SAN volumes, you could have purchased
two servers, each with enough disk to make two copies of the data, all for
fewer dollars...

Regards,
Alex


On Mon, Dec 11, 2017 at 12:52 PM, Stefan Solbrig 
wrote:

> Dear all,
>
> I'm rather new to glusterfs but have some experience running larger Lustre
> and BeeGFS installations. These filesystems provide active/active
> failover.  Now, I discovered that I can also do this in glusterfs, although
> I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8)
>
> So my question is: can I really use glusterfs to do failover in the way
> described below, or am I misusing glusterfs? (and potentially corrupting my
> data?)
>
> My setup is: I have two servers (qlogin and gluster2) that access a shared
> SAN storage. Both servers connect to the same SAN (SAS multipath) and I
> implement locking via lvm2 and sanlock, so I can mount the same storage on
> either server.
> The idea is that normally each server serves one brick, but in case one
> server fails, the other server can serve both bricks. (I'm not interested
> in automatic failover; I'll always do this manually.  I could also use this
> to do maintenance on one server, with only minimal downtime.)
>
>
> #normal setup:
> [root@qlogin ~]# gluster volume info g2
> #...
> # Volume Name: g2
> # Type: Distribute
> # Brick1: qlogin:/glust/castor/brick
> # Brick2: gluster2:/glust/pollux/brick
>
> #  failover: let's artificially fail one server by killing one glusterfsd:
> [root@qlogin] systemctl status glusterd
> [root@qlogin] kill -9 
>
> # unmount brick
> [root@qlogin] umount /glust/castor/
>
> # deactivate LV
> [root@qlogin] lvchange  -a n vgosb06vd05/castor
>
>
> ###  now do the failover:
>
> # activate the same storage on the other server:
> [root@gluster2] lvchange  -a y vgosb06vd05/castor
>
> # mount on other server
> [root@gluster2] mount /dev/mapper/vgosb06vd05-castor  /glust/castor
>
> # now move the "failed" brick to the other server
> [root@gluster2] gluster volume replace-brick g2
> qlogin:/glust/castor/brick gluster2:/glust/castor/brick commit force
> ### The last line is the one I have doubts about
>
> #now I'm in failover state:
> #Both bricks on one server:
> [root@qlogin ~]# gluster volume info g2
> #...
> # Volume Name: g2
> # Type: Distribute
> # Brick1: gluster2:/glust/castor/brick
> # Brick2: gluster2:/glust/pollux/brick
>
>
> Is it intended to work this way?
>
> Thanks a lot!
>
> best wishes,
> Stefan
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)

2017-12-11 Thread Alastair Neil
Niels, I don't know if this is adequate, but I did run a simple smoke test
today on the 3.12.3-1 bits. I installed the 3.12.3-1 bits on 3 freshly
installed CentOS 7 VMs,

created a 2G image file and wrote an XFS filesystem on it on each
system,

mounted each under /export/brick1, and created /export/brick1/test on each
node,
probed the two other systems from one node (a), and created a replica 3
volume using the bricks at /export/brick1/test on each node,

started the volume and mounted it under /mnt/gluster test on node a,

did some brief tests using dd into the mount point on node a; all seemed
fine - no errors, nothing unexpected.
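
In command form, the test was roughly the following (paths, hostnames and the
volume name are reconstructed from the description above, so treat this as a
sketch rather than a transcript):

# on each node: create a 2G image, put an XFS filesystem on it, mount it as a brick
truncate -s 2G /root/brick1.img
mkfs.xfs /root/brick1.img
mkdir -p /export/brick1
mount -o loop /root/brick1.img /export/brick1
mkdir -p /export/brick1/test

# from node a: probe the other two nodes and create/start a replica 3 volume
gluster peer probe node-b
gluster peer probe node-c
gluster volume create testvol replica 3 node-a:/export/brick1/test \
    node-b:/export/brick1/test node-c:/export/brick1/test
gluster volume start testvol

# mount the volume on node a and run a quick dd test
mkdir -p /mnt/glustertest
mount -t glusterfs node-a:/testvol /mnt/glustertest
dd if=/dev/zero of=/mnt/glustertest/ddtest bs=1M count=512 conv=fsync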








On 23 October 2017 at 17:42, Niels de Vos  wrote:

> On Mon, Oct 23, 2017 at 02:12:53PM -0400, Alastair Neil wrote:
> > Any idea when these packages will be in the CentOS mirrors? There is no
> > sign of them on download.gluster.org.
>
> We're waiting for someone other than me to test the new packages at
> least a little. Installing the packages and running something on top of a
> Gluster volume is already sufficient; just describe a bit what was
> tested. Once a confirmation is sent that it works for someone, we can
> mark the packages for release to the mirrors.
>
> Getting the (unsigned) RPMs is easy, run this on your test environment:
>
>   # yum --enablerepo=centos-gluster312-test update glusterfs
>
> This does not restart the brick processes, so I/O is not affected by
> the installation. Make sure to restart the processes (or just reboot)
> and do whatever validation you deem sufficient.
>
> Thanks,
> Niels
>
>
> >
> > On 13 October 2017 at 08:45, Jiffin Tony Thottan 
> > wrote:
> >
> > > The Gluster community is pleased to announce the release of Gluster
> 3.12.2
> > > (packages available at [1,2,3]).
> > >
> > > Release notes for the release can be found at [4].
> > >
> > > We still carry the following major issues, which are reported in the
> > > release notes:
> > >
> > > 1.) Expanding a gluster volume that is sharded may cause file
> > > corruption
> > >
> > > Sharded volumes are typically used for VM images; if such volumes are
> > > expanded or possibly contracted (i.e. add/remove bricks and rebalance),
> > > there are reports of VM images getting corrupted.
> > >
> > > The last known cause for corruption (Bug #1465123) has a fix in this
> > > release. As further testing is still in progress, the issue is retained
> > > as a major issue.
> > >
> > > Status of this bug can be tracked here, #1465123
> > >
> > >
> > > 2.) Gluster volume restarts fail if the sub-directory export feature
> > > is in use. Status of this issue can be tracked here: #1501315
> > >
> > > 3.) Mounting a gluster snapshot will fail when attempting a FUSE-based
> > > mount of the snapshot. So for current users, it is recommended to only
> > > access snapshots via the
> > >
> > > ".snaps" directory on a mounted gluster volume. Status of this issue
> > > can be tracked here: #1501378
> > >
> > > Thanks,
> > >  Gluster community
> > >
> > >
> > > [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.2/
> > > 
> > > [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
> > > 
> > > [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> > >
> > > [4] Release notes: https://gluster.readthedocs.io/en/latest/release-notes/3.12.2/
> > > 
> > >
> > > ___
> > > Gluster-devel mailing list
> > > gluster-de...@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >
>
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] active/active failover

2017-12-11 Thread Stefan Solbrig
Dear all, 

I'm rather new to glusterfs but have some experience running larger Lustre and 
BeeGFS installations. These filesystems provide active/active failover.  Now, I 
discovered that I can also do this in glusterfs, although I didn't find 
detailed documentation about it. (I'm using glusterfs 3.10.8)

So my question is: can I really use glusterfs to do failover in the way 
described below, or am I misusing glusterfs? (and potentially corrupting my 
data?)

My setup is: I have two servers (qlogin and gluster2) that access a shared SAN 
storage. Both servers connect to the same SAN (SAS multipath) and I implement 
locking via lvm2 and sanlock, so I can mount the same storage on either server. 
The idea is that normally each server serves one brick, but in case one server 
fails, the other server can serve both bricks. (I'm not interested in automatic 
failover; I'll always do this manually.  I could also use this to do 
maintenance on one server, with only minimal downtime.)


#normal setup:
[root@qlogin ~]# gluster volume info g2 
#...
# Volume Name: g2
# Type: Distribute
# Brick1: qlogin:/glust/castor/brick
# Brick2: gluster2:/glust/pollux/brick

#  failover: let's artificially fail one server by killing one glusterfsd:
[root@qlogin] systemctl status glusterd 
[root@qlogin] kill -9 

# unmount brick
[root@qlogin] umount /glust/castor/ 

# deactivate LV
[root@qlogin] lvchange  -a n vgosb06vd05/castor 


###  now do the failover:

# activate the same storage on the other server:
[root@gluster2] lvchange  -a y vgosb06vd05/castor 

# mount on other server
[root@gluster2] mount /dev/mapper/vgosb06vd05-castor  /glust/castor 

# now move the "failed" brick to the other server
[root@gluster2] gluster volume replace-brick g2 qlogin:/glust/castor/brick 
gluster2:/glust/castor/brick commit force
### The last line is the one I have doubts about

#now I'm in failover state:
#Both bricks on one server:
[root@qlogin ~]# gluster volume info g2 
#...
# Volume Name: g2
# Type: Distribute
# Brick1: gluster2:/glust/castor/brick
# Brick2: gluster2:/glust/pollux/brick
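
# failback would presumably be the mirror image (an untested sketch):
# kill the glusterfsd process serving /glust/castor on gluster2, then:
[root@gluster2] umount /glust/castor
[root@gluster2] lvchange -a n vgosb06vd05/castor
[root@qlogin] lvchange -a y vgosb06vd05/castor
[root@qlogin] mount /dev/mapper/vgosb06vd05-castor /glust/castor
[root@qlogin] gluster volume replace-brick g2 gluster2:/glust/castor/brick qlogin:/glust/castor/brick commit force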


Is it intended to work this way?

Thanks a lot!

best wishes,
Stefan

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] reset-brick command questions

2017-12-11 Thread Jorick Astrego
Hi,

I'm trying to use the reset-brick command, but it's not completely clear 
to me:

>
>   Introducing reset-brick command
>
> Notes for users: The reset-brick command provides support to 
> reformat/replace the disk(s) represented by a brick within a volume. 
> This is helpful when a disk goes bad, etc.
>
That's what I need; the use case is that a disk goes bad on a disperse 
gluster node and we want to replace it with a new disk.
>
> Start reset process -
>
> gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH start
This works; I can see in gluster volume status that the brick is not there 
anymore.
>
> The above command kills the respective brick process. Now the brick 
> can be reformatted.
>
> To restart the brick after modifying configuration -
>
> gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH 
> HOSTNAME:BRICKPATH commit
>
> If the brick was killed to replace the brick with the same brick path, 
> restart with the following command -
>
> gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH 
> HOSTNAME:BRICKPATH commit force
This fails. I unmounted the gluster path, formatted a fresh disk, 
mounted it on the old mount point, and created the brick subdir on it.

gluster volume reset-brick glustervol
gluster1:/gluster/brick1/glusterbrick1
gluster1:/gluster/brick1/glusterbrick1 commit force

volume reset-brick: failed: Source brick must be stopped. Please use
gluster volume reset-brick   start.

Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I 
just want to replace the disk and get it back into the volume.






Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270 | i...@netbulae.eu | Staalsteden 4-3A | KvK 08198180
Fax: 053 20 30 271 | www.netbulae.eu | 7547 TA Enschede | BTW NL821234584B01



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How large the Arbiter node?

2017-12-11 Thread Martin Toth
Hi,

there is a good suggestion here: 
http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#arbiter-bricks-sizing

Since the arbiter brick does not store file data, its disk usage will be 
considerably less than the other bricks of the replica. The sizing of the brick 
will depend on how many files you plan to store in the volume. A good estimate 
will be 4kb times the number of files in the replica.
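
As a rough worked example (assuming an average file size of 1 MB, purely for 
illustration): 5 TB / 1 MB is about 5 million files, and 5,000,000 x 4 KB is 
roughly 20 GB, so an arbiter brick of a few tens of GB should be comfortable 
for that workload.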

BR,

Martin

> On 11 Dec 2017, at 17:43, Nux!  wrote:
> 
> Hi,
> 
> I see gluster now recommends the use of an arbiter brick in "replica 2" 
> situations. 
> How large should this brick be? I understand only metadata is to be stored.
> Let's say total storage usage will be 5TB of mixed size files. How large 
> should such a brick be?
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] How large the Arbiter node?

2017-12-11 Thread Nux!
Hi,

I see gluster now recommends the use of an arbiter brick in "replica 2" 
situations. 
How large should this brick be? I understand only metadata is to be stored.
Let's say total storage usage will be 5TB of mixed size files. How large should 
such a brick be?

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 3.12.4 : Scheduled for the 12th of December

2017-12-11 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.4 release, which falls on the 10th of
each month, and hence would be 12-12-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.4? If so, mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail.

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 after the 3.12 release and whether
those fixes are already included in the 3.12 branch; the status on this is
*green*, as all fixes ported to 3.10 have been ported to 3.12 as well.


Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.4

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users