Move its recovery groups to the other node by making the other node the primary server for them:

  mmchrecoverygroup rgname --servers otherServer,thisServer

Verify that the recovery group is now active on the other node with "mmlsrecoverygroup rgname -L". Then move away any filesystem manager or cluster manager role that is active on the node: check with mmlsmgr, move with mmchmgr (filesystem manager) or mmchmgr -c (cluster manager). After that you can run mmshutdown on the node, assuming you still have enough quorum nodes in the remaining cluster.

  -jf

On Mon, 19 Dec 2016 at 15:53, Damir Krstic <[email protected]> wrote:

> We have a single ESS GL6 system running GPFS 4.2.0-1. Last night one of
> the IO servers phoned home with a memory error. IBM is coming out today to
> replace the faulty DIMM.
>
> What is the correct way of taking this system out for maintenance?
>
> Before ESS we had a large GPFS 3.5 installation with 14 IO servers. When
> we needed to do maintenance on the old system, we would migrate the manager
> role and also move the primary and secondary server roles if one of those
> systems had to be taken down.
>
> With ESS and resource pool manager roles etc. is there a correct way of
> shutting down one of the IO servers for maintenance?
>
> Thanks,
> Damir
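The steps above can be collected into one reviewable sequence. This is only a sketch: `rgname`, `thisServer`, `otherServer`, and `fsname` are placeholders to substitute with your own recovery group, node, and filesystem names, and the `run` wrapper just echoes each command so you can inspect the order before executing anything for real.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
# Drop the echo (or call the commands directly) to perform the real maintenance.
run() { echo "+ $*"; }

RG=rgname          # recovery group served by the node going down (placeholder)
THIS=thisServer    # IO server to be taken out for maintenance (placeholder)
OTHER=otherServer  # partner IO server that takes over (placeholder)

# 1. Make the partner node the primary server for the recovery group.
run mmchrecoverygroup "$RG" --servers "$OTHER,$THIS"

# 2. Verify the recovery group is now active on the partner.
run mmlsrecoverygroup "$RG" -L

# 3. Check whether this node holds filesystem manager or cluster manager roles...
run mmlsmgr

# 4. ...and move any such roles away before shutdown.
run mmchmgr fsname "$OTHER"   # filesystem manager (fsname is a placeholder)
run mmchmgr -c "$OTHER"       # cluster manager, if this node holds it

# 5. Shut down GPFS on the node (only if quorum survives without it).
run mmshutdown -N "$THIS"
```

Running the script prints the command sequence prefixed with `+`, which makes it easy to review (or pipe into a change ticket) before doing the real shutdown.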
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
