This is exactly what we did in our situation. It doesn't seem to have caused any problems.
Thanks,

June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 29, 2016 at 1:37 AM, Klaus Ma <[email protected]> wrote:
> Maybe we can create a document for it under the FAQ umbrella :).
>
> ----
> Da (Klaus), Ma (马达) | PMP® | Advisory Software Engineer
> Platform OpenSource Technology, STG, IBM GCG
> +86-10-8245 4084 | [email protected] | http://k82.me
>
> On Fri, Apr 29, 2016 at 11:13 AM, Vinod Kone <[email protected]> wrote:
>
>> I think what you did seems correct.
>>
>> On Thu, Apr 28, 2016 at 6:31 PM, Shuai Lin <[email protected]> wrote:
>>
>>> Hi list,
>>>
>>> For some reason I need to change the role of an existing framework
>>> (Marathon) from the default role "*" to a specific role, say "services".
>>> I couldn't find any existing documentation on this, so here are the
>>> steps I took on a staging cluster:
>>>
>>> - Stop all HA Marathon instances, leaving only one running.
>>>
>>> - Set the Marathon role (/etc/marathon/conf/mesos_role) and restart
>>>   Marathon.
>>>   - At this point Marathon is still using the "*" role, because the
>>>     master won't update the role of a framework when it re-registers.
>>>   - For that to happen we need to do a Mesos master failover.
>>>
>>> - Stop the currently active mesos-master, so that Marathon uses the
>>>   new role after the master failover.
>>>
>>> - Now Marathon is using the "services" role, which means it will
>>>   accept resources from both slaves with the default "*" role and
>>>   slaves with the "services" role.
>>>
>>> - For each slave:
>>>   - Stop the slave.
>>>   - Change the role (/etc/mesos-slave/default_role) to "services".
>>>   - Remove /tmp/mesos/meta/slaves.
>>>   - Restart Docker (otherwise the old running executors/tasks won't
>>>     be killed).
>>>   - Restart the slave.
>>>
>>> During the process all running tasks are killed and restarted, but
>>> that's acceptable to me.
>>>
>>> Now all slaves are running with role "services" and Marathon is
>>> running with role "services". So far the cluster seems to be working
>>> fine, but I'm not sure whether the steps I took have any unnoticed
>>> impacts, since this is a somewhat undocumented procedure.
>>>
>>> Any comments?
>>>
>>> Regards,
>>> Shuai
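For anyone following this later: the per-slave steps quoted above can be sketched as a small shell script. The `service` commands, the config paths, and the default work_dir `/tmp/mesos` are assumptions about Shuai's setup — adjust them for your init system and slave flags. It defaults to a dry run that only prints the commands it would execute.

```shell
#!/bin/sh
# Dry-run sketch of the per-slave role migration quoted above.
# Assumptions: sysvinit-style "service" commands, role file at
# /etc/mesos-slave/default_role, slave work_dir at /tmp/mesos.
DRY_RUN=${DRY_RUN:-1}
NEW_ROLE=${NEW_ROLE:-services}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

migrate_slave() {
    # 1. Stop the slave so it can re-register with the new role.
    run service mesos-slave stop
    # 2. Persist the new role for the slave.
    run sh -c "echo $NEW_ROLE > /etc/mesos-slave/default_role"
    # 3. Remove the checkpointed slave metadata; the role is part of the
    #    slave's registered info, so it must be recreated from scratch.
    run rm -rf /tmp/mesos/meta/slaves
    # 4. Restart Docker so the old executors/tasks are actually killed.
    run service docker restart
    # 5. Bring the slave back; it registers as a fresh slave with NEW_ROLE.
    run service mesos-slave start
}

migrate_slave
```

Run once with `DRY_RUN=1` to review the commands, then `DRY_RUN=0` on each slave in turn.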

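To confirm that the master actually picked up the new role after the failover, you can inspect the master's /master/state endpoint, which lists each registered framework together with its role. A small helper, assuming `curl` and `python3` are available (the master address below is hypothetical):

```shell
# Prints "name role" for each registered framework, reading a master
# state JSON document on stdin.
framework_roles() {
    python3 -c '
import json, sys
state = json.load(sys.stdin)
for fw in state.get("frameworks", []):
    print(fw["name"], fw["role"])
'
}

# Against a live cluster (substitute your own master address):
#   curl -s http://leader.mesos:5050/master/state | framework_roles
```

After the procedure above, the output should show `marathon services` rather than `marathon *`.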
