Re: [Gluster-users] Sent and Received peer request (Connected)

2018-01-15 Thread Dj Merrill
On 1/15/2018 8:00 AM, Atin Mukherjee wrote: What you’d need to do is to set ‘state=3’ for the peer which is not in connected state in /var/lib/glusterd/peers/ and then restart the glusterd service. Thank you Atin, that worked perfectly! On glusterfs2, I edited the uuid file for glusterfs1
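For anyone finding this later, a minimal sketch of that fix (the grep target and commands are the usual defaults, not a verbatim transcript):

   # on glusterfs2, find the peer file that refers to glusterfs1 (files are named by peer UUID)
   grep -l glusterfs1 /var/lib/glusterd/peers/*
   # edit that file so the state line reads: state=3
   systemctl restart glusterd
   gluster peer status   # the peer should now show as connected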

[Gluster-users] Sent and Received peer request (Connected)

2018-01-11 Thread Dj Merrill
This morning I did a rolling update from the latest 3.7.x to 3.12.4, with no client activity. "Rolling" as in, shut down the Gluster services on the first server, update, reboot, wait until up and running, proceed to the next server. I anticipated that a 3.12 server might not properly talk to a
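The per-server pass went roughly like this (a sketch with an illustrative volume name, not the exact commands run):

   systemctl stop glusterd            # stop the management daemon
   pkill glusterfs                    # make sure brick and client processes are down too
   yum update glusterfs-server        # pull 3.12.4 from the repo
   reboot
   # once the server is back, confirm peers and healing before moving to the next server
   gluster peer status
   gluster volume heal <volname> info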

Re: [Gluster-users] Announcing Gluster 3.9

2016-11-28 Thread Dj Merrill
On 11/28/2016 12:26 PM, Ben Werthmann wrote: > This may be helpful as > well: https://www.gluster.org/community/release-schedule/ > Definitely, thank you! :-) Part of my curiosity was "why are there three actively supported versions at the same time?", and that helps. -Dj

Re: [Gluster-users] yum errors

2016-09-06 Thread Dj Merrill
On 09/06/2016 01:54 AM, Kaushal M wrote: >> Following down through the docs on that link, I find the CentOS Storage >> SIG repo has 3.7.13, and the Storage testing repo has 3.7.15. >> >> What is a typical timeframe for releases to transition from the testing >> repo to the normal repo? >

[Gluster-users] yum errors

2016-09-05 Thread Dj Merrill
A few days ago we started getting errors from the Gluster yum repo: http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found. Looking into this, we found a readme file in that directory indicating: RPMs for
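If the readme pointed where I think it did (the CentOS Storage SIG), the fix on CentOS 7 amounts to something like the following; the release package name is an assumption:

   yum install centos-release-gluster37   # enables the Storage SIG repo for the 3.7 series
   yum update glusterfs-server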

Re: [Gluster-users] Gluster update

2016-05-26 Thread Dj Merrill
On 5/26/2016 5:11 PM, Gandalf Corvotempesta wrote: Upgrade part of the Gluster infrastructure, then migrate your critical items to the upgraded servers, then upgrade the rest, etc. This is exactly what I would like to achieve, but it is not possible. I am not sure I understand what isn't

Re: [Gluster-users] Gluster update

2016-05-26 Thread Dj Merrill
On 05/26/2016 04:43 PM, Gandalf Corvotempesta wrote: > If bringing down everything is really needed to upgrade, Gluster cannot be > considered highly available > > Bringing down a single host or server is ok, what is not ok and is nonsense > is bringing down the whole infrastructure as stated in the official

Re: [Gluster-users] Gluster update

2016-05-26 Thread Dj Merrill
On 05/26/2016 02:28 PM, Gandalf Corvotempesta wrote: > As long as clients are able to talk with newer servers > And what about major versions like 3.5 to 3.6 or 3.7? > I believe that is one of the design criteria, for minor revs. For major revs, personally I would want to take things offline and

Re: [Gluster-users] Gluster update

2016-05-26 Thread Dj Merrill
On 05/26/2016 11:57 AM, Gandalf Corvotempesta wrote: > I've seen that the recommended procedure is with downtime, shutting > down all clients and after that upgrading Gluster > Our upgrade procedure is to upgrade the servers first (shut down the Gluster service on a server, upgrade that server,
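During such a pass it is also easy to confirm that clients stay connected to the upgraded servers; a sketch, with an illustrative volume name:

   gluster volume status <volname> clients   # lists the clients each brick is currently serving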

Re: [Gluster-users] What is the corresponding op-version for a glusterfs release?

2016-04-22 Thread Dj Merrill
On 04/20/2016 07:32 PM, Atin Mukherjee wrote: > Unfortunately there is no such document. But I can take you through a > couple of code files [1] [2] where the first one defines all the volume > tunables and their respective supported op-versions, while the latter has > the exact numbers of all those
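For the running cluster itself, the op-version glusterd is currently operating at is recorded on disk (default path shown):

   grep operating-version /var/lib/glusterd/glusterd.info
   # e.g. operating-version=30707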

Re: [Gluster-users] What is the corresponding op-version for a glusterfs release?

2016-04-20 Thread Dj Merrill
On 04/20/2016 12:06 PM, Atin Mukherjee wrote: >> Curious, is there any reason why this isn't automatically updated when >> managing the updates with "yum update"? > This is still manual as we want to let users choose whether they want > to use a new feature or not. If they want, then a manual
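The manual bump itself is a single command; the number below is only an example and must match the op-version of the release actually installed:

   gluster volume set all cluster.op-version 30710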

Re: [Gluster-users] What is the corresponding op-version for a glusterfs release?

2016-04-20 Thread Dj Merrill
On 04/19/2016 05:42 PM, Atin Mukherjee wrote: >> After a brief search, I discovered the following solution for RHGS: >> https://access.redhat.com/solutions/2050753 It suggests updating the >> op-version of the cluster after the upgrade. There isn't any evidence of >> this procedure in the

Re: [Gluster-users] 3.7.8-1 vs 3.7.8-3

2016-03-07 Thread Dj Merrill
On 3/7/2016 1:09 PM, Kaleb Keithley wrote: The %changelog of the glusterfs.spec file used to build the rpms! `rpm -q --changelog glusterfs` (after updating). Thank you! :-) -Dj

[Gluster-users] 3.7.8-1 vs 3.7.8-3

2016-03-07 Thread Dj Merrill
I noticed a release 3.7.8-3 appear for Centos 7 in the glusterfs repo over the weekend. Are there any release notes available noting the changes between 3.7.8-1 and 3.7.8-3? I am probably just looking in the wrong place. Thanks, -Dj

Re: [Gluster-users] glusterfs client crashes

2016-02-23 Thread Dj Merrill
On 2/23/2016 10:27 AM, Raghavendra Gowdappa wrote: Came across a glibc bug which could've caused some corruptions. On googling about possible problems, we found that there is an issue (https://bugzilla.redhat.com/show_bug.cgi?id=1305406) fixed in glibc-2.17-121.el7. We have the latest
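Checking the installed build against the fixed one from that bug report is a one-liner:

   rpm -q glibc   # want glibc-2.17-121.el7 or newer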

Re: [Gluster-users] glusterfs client crashes

2016-02-22 Thread Dj Merrill
On 2/21/2016 2:23 PM, Dj Merrill wrote: > Very interesting. They were reporting both bricks offline, but the > processes on both servers were still running. Restarting glusterfsd on > one of the servers brought them both back online. I realize I wasn't clear in my comments yesterday

Re: [Gluster-users] glusterfs client crashes

2016-02-21 Thread Dj Merrill
On 2/21/2016 1:27 PM, Gaurav Garg wrote: It seems that your brick processes are offline or all brick processes have crashed. Could you paste the output of the #gluster volume status and #gluster volume info commands and attach the core file. Very interesting. They were reporting both bricks offline, but
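For anyone hitting the same symptoms, the checks and the usual way to bring offline bricks back look roughly like this (volume name illustrative):

   gluster volume status <volname>
   gluster volume info <volname>
   gluster volume start <volname> force   # restarts any brick processes that are down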

[Gluster-users] glusterfs client crashes

2016-02-21 Thread Dj Merrill
Several weeks ago we started seeing some weird behaviour on our Gluster client systems. Things would be working fine for several days, then the client could no longer access the Gluster filesystems, giving an error: ls: cannot access /mnt/hpc: Transport endpoint is not connected. We were
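A common short-term recovery for that error is a lazy unmount and remount of the affected mount point (server and volume names here are illustrative):

   umount -l /mnt/hpc
   mount -t glusterfs glusterfs1:/volname /mnt/hpc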