On 10/08/2015 09:52 PM, Gene Liverman wrote:
> Thanks for all the replies! Just to make sure I have this right, the
> following should work for *both* machines with and machines without a
> currently populated brick if the name and IP stay the same:
>
> * reinstall os
> * reinstall gluster software
> * start gluster
>
> Do I need to do any peer probing or anything else? Do I need to do any
> brick removal / adding (I'm thinking no but want to make sure)?
No, you don't need to do either, as long as the hostname and IP stay the same.
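For reference, the sequence above can be sketched as a dry-run script. Package and service names assume CentOS 7, and the `run` helper only echoes each command, so this is a review sketch rather than something to paste in blindly:

```shell
# Dry-run sketch of rebuilding one node in place (hostname/IP unchanged).
# "run" only echoes each command so the sequence can be reviewed first.
run() { echo "would run: $*"; }

run yum install -y glusterfs-server   # same Gluster version as the peers
run systemctl enable --now glusterd   # peers re-sync the config on handshake
run gluster peer status               # verify; no probe or brick ops needed
```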
>
> Thanks,
> *Gene Liverman*
> Systems Integration Architect
> Information Technology Services
> University of West Georgia
> [email protected] <mailto:[email protected]>
>
> ITS: Making Technology Work for You!
>
>
>
> On Thu, Oct 8, 2015 at 9:52 AM, Alastair Neil <[email protected]> wrote:
>
> Ahh that is good to know.
>
> On 8 October 2015 at 09:50, Atin Mukherjee <[email protected]> wrote:
>
> -Atin
> Sent from one plus one
> On Oct 8, 2015 7:17 PM, "Alastair Neil" <[email protected]> wrote:
> >
> > I think you should back up /var/lib/glusterd and then restore it
> after the reinstall and installation of the glusterfs packages. Assuming the
> node will have the same hostname and IP addresses and you are installing the
> same version of the Gluster bits, it should be fine. I am also assuming you are
> not using SSL for the connections; if you are, you will need to back up the
> keys for that too.
> If the same machine is reused without a hostname/IP change,
> backing up the glusterd configuration *is not* needed, as syncing the
> configuration will be taken care of by peer handshaking.
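For completeness, the optional backup Alastair describes can be sketched as below. It is not required when hostname/IP stay the same, per the note above, but it is cheap insurance; the paths are common defaults and may differ per install, and the `run` helper only echoes:

```shell
# Optional pre-reinstall backup; not required when hostname/IP are unchanged,
# but cheap insurance. "run" only echoes, so nothing is executed here.
run() { echo "would run: $*"; }

run tar czf /root/glusterd-backup.tar.gz /var/lib/glusterd
# If SSL is enabled, also save the TLS material (paths illustrative):
run cp -a /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.key /root/
```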
>
>
> >
> > -Alastair
> >
> > On 8 October 2015 at 00:12, Atin Mukherjee <[email protected]> wrote:
> >>
> >>
> >>
> >> On 10/07/2015 10:28 PM, Gene Liverman wrote:
> >> > I want to replace my existing CentOS 6 nodes with CentOS 7 ones.
> >> > Is there a recommended way to go about this from the perspective
> >> > of Gluster? I am running a 3-node replicated cluster (3 servers,
> >> > each with 1 brick). In case it makes a difference, my bricks are
> >> > on separate drives formatted as XFS, so it is possible that I can
> >> > do my OS reinstall without wiping out the data on two nodes (the
> >> > third had a hardware failure, so it will be fresh from the ground up).
> >> That's possible. You could do the re-installation one node at a time.
> >> Once the node comes back online, the self-heal daemon will take care
> >> of healing the data. The AFR team can correct me if I am wrong.
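To watch that self-heal progress after each rebuilt node rejoins, something like the following can be used ("myvol" is a placeholder volume name, and the `run` helper only echoes the commands):

```shell
# Monitor self-heal after the rebuilt node rejoins. "myvol" is a placeholder
# volume name; "run" only echoes the commands for review.
run() { echo "would run: $*"; }

run gluster volume heal myvol info                    # files pending heal
run gluster volume heal myvol statistics heal-count   # per-brick counts
```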
> >>
> >> Thanks,
> >> Atin
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >
> >
> >
>
>
>
>
>
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users