I have a 12-node cluster with 2 nodes downgraded and currently down. Does the
procedure of taking down the 3.13 nodes mean cluster downtime?
If it matters, /var/lib/glusterd/glusterd.info is empty on the 3.12.9 nodes.
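For reference, on a healthy node that file normally holds the node's UUID and
its operating version; a rough illustration (the values below are made up, not
taken from your cluster):

    # example /var/lib/glusterd/glusterd.info -- illustrative values only
    UUID=c2a1f0de-1234-4b6c-9d0e-abcdef012345
    operating-version=31202

If the file really is empty on the downgraded nodes, glusterd will usually
generate a fresh UUID when it starts, which would no longer match what the
other peers have on record, so it is worth checking before starting the
service again.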
On Tue, May 15, 2018 at 2:47 PM, Kaleb S. KEITHLEY wrote:
On 05/15/2018 08:08 AM, Davide Obbi wrote:
> Thanks Kaleb,
>
> Any chance I can make the node work after the downgrade?
> Thanks
Without knowing what doesn't work, I'll go out on a limb and guess that
it's an op-version problem.
Shut down your 3.13 nodes, change their op-version to one of
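A rough sketch of what that usually looks like on each downgraded node, with
glusterd stopped; 31200 (the 3.12.0 value) is only an example, use whatever
op-version matches the 3.12.x release the rest of the cluster is running:

    # find out what the still-running 3.12 peers report
    gluster volume get all cluster.op-version

    # on the downgraded node, stop glusterd before touching its state files
    systemctl stop glusterd

    # set the same value in glusterd.info (31200 is only an example);
    # if the file is empty, the UUID= line has to be restored as well
    sed -i 's/^operating-version=.*/operating-version=31200/' \
        /var/lib/glusterd/glusterd.info

    systemctl start glusterd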
Thanks Kaleb,
Any chance I can make the node work after the downgrade?
Thanks
On Tue, May 15, 2018 at 2:02 PM, Kaleb S. KEITHLEY wrote:
>
> You can still get them from
> https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/
>
> (I don't know how much
You can still get them from
https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/
(I don't know how much longer they'll be there. I suggest you copy them
if you think you're going to need them in the future.)
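One rough way to take that copy, assuming wget and createrepo are available
(the destination directory /srv/mirror is just an arbitrary choice):

    # mirror the 3.13 packages before they disappear from buildlogs
    wget --mirror --no-parent -P /srv/mirror \
        https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/

    # optionally regenerate repodata so the local copy can be used as a yum repo
    createrepo /srv/mirror/buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/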
On 05/15/2018 04:58 AM, Davide Obbi wrote:
> Hi,
>
> I noticed that
Thank you Ravi for your fast answer. As requested, you will find below the
"stat" and "getfattr" output of one of the files and its parent directory from
all three nodes of my cluster.
NODE 1:
File:
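(For anyone reproducing this, output like that is normally collected with
something along the following lines, run directly against the brick path on
each node; the path below is only a placeholder:)

    # placeholder path -- substitute the real brick path of the affected file
    stat /data/brick1/myvol/some/dir/file
    getfattr -d -m . -e hex /data/brick1/myvol/some/dir/file

    # and the same for the parent directory
    stat /data/brick1/myvol/some/dir
    getfattr -d -m . -e hex /data/brick1/myvol/some/dir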
Hi,
I noticed that the repo for glusterfs 3.13 does not exist anymore at:
http://mirror.centos.org/centos/7/storage/x86_64/
I knew it was not going to be long-term supported; however, the downgrade to
3.12 breaks the server node. I believe the issue is with:
[2018-05-15 08:54:39.981101] E
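If it helps to pin the failure down, the surrounding error context can usually
be pulled out of the glusterd log on the broken node, along these lines:

    # glusterd's own log on the affected node
    grep ' E ' /var/log/glusterfs/glusterd.log | tail -n 20

    # and confirm which version actually ended up installed after the downgrade
    gluster --version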
On 05/15/2018 12:38 PM, mabi wrote:
Dear all,
I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from
3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I notice that I
still have exactly the same problem as initially posted in this thread.
It looks like this bug is not resolved, as I just got
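(In case it is useful while debugging: a quick, rough way to confirm what every
node is actually running and, since this is a replica 3 volume, whether anything
is still pending heal; "myvol" is only a placeholder volume name:)

    # confirm the installed version on each server and client
    gluster --version

    # list entries still waiting to be healed ("myvol" is a placeholder)
    gluster volume heal myvol info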