On 1/15/2018 8:00 AM, Atin Mukherjee wrote:
What you’d need to do is to set ‘state=3’ for the peer which is not in the
connected state in /var/lib/glusterd/peers/ and then restart
the glusterd service.
Thank you Atin, that worked perfectly!
On glusterfs2, I edited the uuid file for glusterfs1 and …
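Atin's fix above can be sketched as follows. This operates on a temporary copy so it can be reviewed safely; the uuid and hostname values are hypothetical, and on a real node you would edit the peer file under /var/lib/glusterd/peers/ and then restart glusterd.

```shell
# Sketch of the peer-state fix, run against a *copy* of a peer file.
# In a real repair you would edit the affected node's file under
# /var/lib/glusterd/peers/ and then restart glusterd
# (e.g. systemctl restart glusterd).
PEER_FILE=$(mktemp)
cat > "$PEER_FILE" <<'EOF'
uuid=00000000-0000-0000-0000-000000000001
state=5
hostname1=glusterfs1
EOF
# Flip the peer back to state=3 ("Peer in Cluster")
sed -i 's/^state=.*/state=3/' "$PEER_FILE"
grep '^state=' "$PEER_FILE"
```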
This morning I did a rolling update from the latest 3.7.x to 3.12.4,
with no client activity. "Rolling" as in, shut down the Gluster
services on the first server, update, reboot, wait until up and running,
proceed to the next server. I anticipated that a 3.12 server might not
properly talk to a 3…
On 11/28/2016 12:26 PM, Ben Werthmann wrote:
> This may be helpful as
> well: https://www.gluster.org/community/release-schedule/
>
Definitely, thank you! :-)
Part of my curiosity was why there are three actively supported
versions at the same time, and that helps.
-Dj
On 11/23/2016 8:23 AM, Amye Scavarda wrote:
Gluster
versions 3.9, 3.8 and 3.7 are all actively maintained.
This might be a bit of a silly question, but how would one know which
version of Gluster to use?
If you wanted to use Gluster as a scratch space for an HPC cluster, and
needed a solid
On 09/06/2016 01:54 AM, Kaushal M wrote:
>> Following down through the docs on that link, I find the Centos Storage
>> > SIG repo has 3.7.13, and the Storage testing repo has 3.7.15.
>> >
>> > What is a typical timeframe for releases to transition from the testing
>> > repo to the normal repo?
> R…
A few days ago we started getting errors from the Gluster yum repo:
http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-7/x86_64/repodata/repomd.xml:
[Errno 14] HTTP Error 404 - Not Found
Looking into this we found a readme file in that directory indicating:
RPMs for RHEL…
On 5/26/2016 5:11 PM, Gandalf Corvotempesta wrote:
Upgrade part of the Gluster infrastructure, then migrate your critical
items to the upgraded servers, then upgrade the rest, etc.
This is exactly what I would like to achieve, but it is not possible.
I am not sure I understand what isn't possible…
On 05/26/2016 04:43 PM, Gandalf Corvotempesta wrote:
> If bringing everything down is really needed to upgrade, Gluster cannot be
> considered highly available
>
> Bringing down a single host or server is OK; what is not OK, and is nonsense,
> is bringing down the whole infrastructure as stated in the official docs…
On 05/26/2016 02:28 PM, Gandalf Corvotempesta wrote:
> As long as clients are able to talk with newer servers
> And what about major version like 3.5 to 3.6 or 3.7?
>
I believe that is one of the design criteria, for minor revs.
For major revs, personally I would want to take things offline and n…
On 05/26/2016 11:57 AM, Gandalf Corvotempesta wrote:
> I've seen that the recommended procedure is with downtime: shutting
> down all clients and after that upgrading Gluster
>
Our upgrade procedure is to upgrade the servers first (shut down the
Gluster service on a server, upgrade that server, the…
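One round of the rolling procedure described above might look like the sketch below. It only prints the commands (a dry run) so it can be reviewed before use; the server names, the yum package name, and the use of systemd are assumptions, not details from the thread.

```shell
# Dry-run sketch of one rolling-upgrade round: each server in turn is
# stopped, upgraded, rebooted, and checked before moving to the next.
rolling_step() {
  server="$1"
  echo "ssh $server systemctl stop glusterd"
  echo "ssh $server yum -y update glusterfs-server"
  echo "ssh $server reboot"
  echo "ssh $server gluster peer status   # wait for 'Peer in Cluster (Connected)'"
}
for s in gluster1 gluster2; do
  rolling_step "$s"
done
```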
On 04/20/2016 07:32 PM, Atin Mukherjee wrote:
> Unfortunately there is no such document. But I can take you through a
> couple of code files [1] [2], where the first one defines all the volume
> tunables and their respective supported op-versions, while the latter has
> the exact number of all those versions…
On 04/20/2016 12:06 PM, Atin Mukherjee wrote:
>> Curious, is there any reason why this isn't automatically updated when
>> managing the updates with "yum update"?
> This is still manual as we want to give users the choice of whether they want
> to use a new feature or not. If they want, then a manual bump…
On 04/19/2016 05:42 PM, Atin Mukherjee wrote:
>> After a brief search, I discovered the following solution for RHGS:
>> https://access.redhat.com/solutions/2050753 It suggests updating the
>> op-version of the cluster after the upgrade. There isn't any evidence of
>> this procedure in the community…
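The op-version bump mentioned in that solution can be sketched as below. The commands are printed rather than executed so the target number can be checked first; the value 31202 is only an example for a hypothetical 3.12.x cluster, and the cluster.max-op-version query is an assumption that the cluster runs a release recent enough to support it.

```shell
# Dry-run sketch: inspect and then bump the cluster op-version after all
# servers are upgraded. 31202 is an example value, not a recommendation.
op_version_bump() {
  echo "gluster volume get all cluster.op-version      # current value"
  echo "gluster volume get all cluster.max-op-version  # highest supported"
  echo "gluster volume set all cluster.op-version 31202"
}
op_version_bump
```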
On 3/7/2016 1:09 PM, Kaleb Keithley wrote:
The %changelog of the glusterfs.spec file used to build the rpms!
`rpm -q --changelog glusterfs` (after updating).
Thank you! :-)
-Dj
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.g…
I noticed a release 3.7.8-3 appear for Centos 7 in the glusterfs repo
over the weekend. Are there any release notes available noting the
changes between 3.7.8-1 and 3.7.8-3? I am probably just looking in the
wrong place.
Thanks,
-Dj
On 2/23/2016 10:27 AM, Raghavendra Gowdappa wrote:
Came across a glibc bug which could've caused some corruptions. On googling
about possible problems, we found that there is an issue
(https://bugzilla.redhat.com/show_bug.cgi?id=1305406) fixed in
glibc-2.17-121.el7.
We have the latest version…
On 2/21/2016 2:23 PM, Dj Merrill wrote:
> Very interesting. They were reporting both bricks offline, but the
> processes on both servers were still running. Restarting glusterfsd on
> one of the servers brought them both back online.
I realize I wasn't clear in my comments yesterday…
On 2/21/2016 1:27 PM, Gaurav Garg wrote:
It seems that your brick processes are offline, or all brick processes have
crashed. Could you paste the output of the #gluster volume status and #gluster
volume info commands and attach the core file?
Very interesting. They were reporting both bricks offline, but…
Several weeks ago we started seeing some weird behaviour on our Gluster
client systems. Things would be working fine for several days, then the
client could no longer access the Gluster filesystems, giving an error:
ls: cannot access /mnt/hpc: Transport endpoint is not connected
We were running…
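"Transport endpoint is not connected" usually means the FUSE mount has gone stale. A common recovery, sketched below as a dry run, is a lazy unmount followed by a remount; the mount point /mnt/hpc is from the message above, but the server and volume names are assumptions.

```shell
# Dry-run sketch of recovering a stale GlusterFS FUSE mount: lazily
# unmount the dead mount point, then mount the volume again.
remount_gluster() {
  mountpoint="$1"; volume="$2"
  echo "umount -l $mountpoint"
  echo "mount -t glusterfs $volume $mountpoint"
}
remount_gluster /mnt/hpc glusterfs1:/hpc
```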