Hi John,

Would you please share your scenario?
* How many nodes are running as gluster servers?
* Which application is accessing the gluster volume, and by which means (NFS, CIFS, Gluster client)?
* Are you accessing the volume through a client, or are the clients accessing the volume themselves (like KVM nodes)?

Best Regards
Aytac

On Wed, Feb 25, 2015 at 1:18 AM, John Gardeniers <[email protected]> wrote:
> Problem solved, more or less.
>
> After reading Aytac's comment about 3.6.2 not being considered stable yet,
> I removed it from the new node, removed /var/lib/glusterd/, rebooted (just
> to be sure) and installed 3.5.3. After detaching and re-probing the peer,
> the replace-brick command worked and the volume is currently happily
> undergoing a self-heal. At a later, more convenient time I'll upgrade the
> 3.4.2 node to the same version. As previously stated, I cannot upgrade the
> clients, so they will just have to stay where they are.
>
> regards,
> John
>
>
> On 25/02/15 08:27, aytac zeren wrote:
> > Hi John,
> >
> > 3.6.2 is a major release and introduces some new cluster-wide features.
> > Additionally, it is not yet considered stable. The best way to do this
> > would be to establish a separate 3.6.2 cluster, access the 3.4.0 cluster
> > via NFS or the native client, and copy the content over to the 3.6.2
> > cluster gradually. As the volume usage on the 3.4.0 cluster decreases,
> > you can remove members from the 3.4.0 cluster, upgrade them, and add them
> > to the 3.6.2 trusted pool along with their bricks. Please be careful
> > while doing this, as the number of nodes in your cluster must remain
> > consistent with your volume design (striped, replicated, distributed, or
> > a combination of them).
> >
> > Notice: I don't take any responsibility for actions you undertake based
> > on my recommendations, as they are general and do not take your
> > architectural design into consideration.
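For the archives, the recovery sequence John describes could be sketched roughly as follows. This is only a sketch, not a verified procedure: the hostname rigel comes from the thread, but VOLNAME, the brick paths, and the package names are placeholders (exact package names vary by distribution and repository), and you should consult the gluster documentation for your versions before running any of it.

```shell
# On the new node (rigel): remove gluster 3.6.2 and its state, then
# install 3.5.3 to match the 3.5.x line. Package names are illustrative.
yum remove glusterfs-server glusterfs
rm -rf /var/lib/glusterd
reboot
# ...after the reboot:
yum install glusterfs-server glusterfs   # from a 3.5.3 repo

# On the surviving 3.4.2 node: detach the stale peer, probe it again,
# and confirm it reports "Peer in Cluster (Connected)".
gluster peer detach rigel
gluster peer probe rigel
gluster peer status

# Then retry the brick replacement and watch the self-heal progress.
# VOLNAME, oldnode and the brick paths are placeholders.
gluster volume replace-brick VOLNAME oldnode:/export/brick rigel:/export/brick commit force
gluster volume heal VOLNAME info
```

The `gluster peer status` check is the relevant step for the original error: replace-brick refuses to run unless the target host is already in the 'Peer in Cluster' state.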
> > BR
> > Aytac
> >
> > On Tue, Feb 24, 2015 at 11:19 PM, John Gardeniers <[email protected]> wrote:
> >> Hi All,
> >>
> >> We have a gluster volume consisting of a single brick, using replica 2.
> >> Both nodes are currently running gluster 3.4.2 and I wish to replace one
> >> of the nodes with a new server (rigel), which has gluster 3.6.2.
> >>
> >> Following this link:
> >>
> >> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Replacing_an_Old_Brick_with_a_New_Brick_on_a_Replicate_or_Distribute-replicate_Volume.html
> >>
> >> I tried to do a replace-brick but got "volume replace-brick: failed: Host
> >> rigel is not in 'Peer in Cluster' state". Is this due to a version
> >> incompatibility, or to some other issue? A bit of googling turns up the
> >> error message in bug reports, but I've not yet found anything that
> >> applies to this specific case.
> >>
> >> Incidentally, the clients (RHEV bare-metal hypervisors, so we have no
> >> upgrade option) are running 3.4.0. Will this be a problem if the nodes
> >> are on 3.6.2?
> >>
> >> regards,
> >> John
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
