Hi Heiner,

Try doing "mmces service stop -N <node-name>" and/or "mmces service disable -N <node-name>". You'll definitely want the node suspended first, since I don't think the service commands do an address migration first.
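Roughly the sequence I have in mind (a sketch from memory, so double-check the mmces man page for your Scale release; the protocol names below assume SMB and NFS are what you run, and if I remember right "mmces service disable" acts on the whole CES cluster rather than a single node, so test it before relying on it):

    # move the CES addresses off the node before touching the services
    mmces node suspend -N <node-name>

    # stop the protocol services on that node (repeat per protocol you run)
    mmces service stop SMB -N <node-name>
    mmces service stop NFS -N <node-name>

    # after maintenance, bring the node back
    mmces service start SMB -N <node-name>
    mmces service start NFS -N <node-name>
    mmces node resume -N <node-name>

My understanding is that "stop" is not persistent across a GPFS restart, which would explain the behaviour you're seeing; "disable" should stick, with the cluster-wide caveat above.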
On Wed, Nov 14, 2018 at 04:20:12PM +0000, Billich Heinrich Rainer (PSI) wrote:
> Hello,
>
> How can I prevent smb, ctdb, nfs (and object) from starting when I reboot the node
> or restart gpfs on a suspended CES node? Being able to do this would make
> updates much easier.
>
> With
>
> # mmces node suspend --stop
>
> I can move all IPs to other CES nodes and stop all CES services, which also
> releases the ces-shared-root-directory and allows the underlying filesystem
> to be unmounted.
> But after a reboot/restart only the IPs stay on the other nodes; the CES
> services start up again. Sometimes I would very much prefer the services to
> stay down as long as the node is suspended, and to keep the node out of the
> CES cluster as much as possible.
>
> I did not try rough things like just renaming smbd, as this seems likely to
> create unwanted issues.
>
> Thank you,
>
> Cheers,
>
> Heiner Billich
> --
> Paul Scherrer Institut
> Heiner Billich
> System Engineer Scientific Computing
> Science IT / High Performance Computing
> WHGA/106
> Forschungsstrasse 111
> 5232 Villigen PSI
> Switzerland
>
> Phone +41 56 310 36 02
> [email protected]
> https://www.psi.ch
>
>
> From: <[email protected]> on behalf of Madhu Konidena <[email protected]>
> Reply-To: gpfsug main discussion list <[email protected]>
> Date: Sunday 11 November 2018 at 22:06
> To: gpfsug main discussion list <[email protected]>
> Subject: Re: [gpfsug-discuss] If you're attending KubeCon'18
>
> I will be there at both. Please stop by our booth at SC18 for a quick chat.
>
> Madhu Konidena
> [email protected]
>
>
> On Nov 10, 2018, at 3:37 PM, Jon Bernard <[email protected]> wrote:
> Hi Vasily,
> I will be at KubeCon with colleagues from Tower Research Capital (and at SC).
> We have a few hundred nodes across several Kubernetes clusters, most of them
> mounting Scale from the host.
> Jon
>
> On Fri, Oct 26, 2018, 5:58 PM Vasily Tarasov <[email protected]> wrote:
> Folks,
> Please let me know if anyone is attending KubeCon'18 in Seattle this
> December (via private e-mail). We will be there and would like to meet in
> person with people that already use or are considering using
> Kubernetes/Swarm/Mesos with Scale. The goal is to share experiences,
> problems, and visions.
> P.S. If you are not attending KubeCon, but are interested in the topic,
> shoot me an e-mail anyway.
> Best,
> --
> Vasily Tarasov, Research Staff Member, Storage Systems Research,
> IBM Research - Almaden
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

--
-- Skylar Thompson ([email protected])
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
