[ceph-users] Moving an MDS

2013-06-11 Thread Bryan Stillwell
I have a cluster I originally built on argonaut and have since
upgraded it to bobtail and then cuttlefish.  I originally configured
it with one node for both the mds node and mon node, and 4 other nodes
for hosting osd's:

a1: mon.a/mds.a
b1: osd.0, osd.1, osd.2, osd.3, osd.4, osd.20
b2: osd.5, osd.6, osd.7, osd.8, osd.9, osd.21
b3: osd.10, osd.11, osd.12, osd.13, osd.14, osd.22
b4: osd.15, osd.16, osd.17, osd.18, osd.19, osd.23

Yesterday I added two more mon nodes and moved mon.a off of a1 so it
now looks like:

a1: mds.a
b1: osd.0, osd.1, osd.2, osd.3, osd.4, osd.20
b2: mon.a, osd.5, osd.6, osd.7, osd.8, osd.9, osd.21
b3: mon.b, osd.10, osd.11, osd.12, osd.13, osd.14, osd.22
b4: mon.c, osd.15, osd.16, osd.17, osd.18, osd.19, osd.23

What I would like to do is move mds.a to server b1 so I can power off
a1 and bring up b5 with another 6 osd's (power in my basement is at a
premium), but I'm not finding much in the way of documentation on how
to do that.  I found some docs on doing it with ceph-deploy, but since
I built this a while ago I haven't been using ceph-deploy (and I
haven't had a great experience using it for building a new cluster
either).

Could someone point me at some docs on how to do this?  Also, should I
be running with multiple mds nodes at this time?

Thanks,
Bryan


Re: [ceph-users] Moving an MDS

2013-06-11 Thread Gregory Farnum
On Tue, Jun 11, 2013 at 2:35 PM, Bryan Stillwell
bstillw...@photobucket.com wrote:
 I have a cluster I originally built on argonaut and have since
 upgraded it to bobtail and then cuttlefish.  I originally configured
 it with one node for both the mds node and mon node, and 4 other nodes
 for hosting osd's:

 a1: mon.a/mds.a
 b1: osd.0, osd.1, osd.2, osd.3, osd.4, osd.20
 b2: osd.5, osd.6, osd.7, osd.8, osd.9, osd.21
 b3: osd.10, osd.11, osd.12, osd.13, osd.14, osd.22
 b4: osd.15, osd.16, osd.17, osd.18, osd.19, osd.23

 Yesterday I added two more mon nodes and moved mon.a off of a1 so it
 now looks like:

 a1: mds.a
 b1: osd.0, osd.1, osd.2, osd.3, osd.4, osd.20
 b2: mon.a, osd.5, osd.6, osd.7, osd.8, osd.9, osd.21
 b3: mon.b, osd.10, osd.11, osd.12, osd.13, osd.14, osd.22
 b4: mon.c, osd.15, osd.16, osd.17, osd.18, osd.19, osd.23

 What I would like to do is move mds.a to server b1 so I can power off
 a1 and bring up b5 with another 6 osd's (power in my basement is at a
 premium), but I'm not finding much in the way of documentation on how
 to do that.  I found some docs on doing it with ceph-deploy, but since
 I built this a while ago I haven't been using ceph-deploy (and I
 haven't had a great experience using it for building a new cluster
 either).

 Could someone point me at some docs on how to do this?  Also, should I
 be running with multiple mds nodes at this time?

 Thanks,
 Bryan

You should not run more than one active MDS (less stable than a
single-MDS configuration, bla bla bla), but you can run multiple
daemons and let the extras serve as a backup in case of failure. The
process for moving an MDS is pretty easy: turn on a daemon somewhere
else, confirm it's connected to the cluster, then turn off the old
one.
Doing it that way will induce ~30 seconds of MDS unavailability while
the old daemon times out, but on cuttlefish you should be able to force
an instant takeover if the new daemon uses the same name as the old one
(I haven't worked with this much myself, so I might be missing a detail;
if this is important you should check).
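
For reference, a rough sketch of those steps on a sysvinit-style
install might look like the following. The daemon name (mds.b1), the
keyring path, and the auth caps are illustrative guesses rather than
an exact recipe; the safest bet for the caps is to mirror whatever
"ceph auth list" shows for the existing mds.a:

    # on b1: create a working directory and a key for the new daemon
    mkdir -p /var/lib/ceph/mds/ceph-b1
    ceph auth get-or-create mds.b1 mds 'allow' osd 'allow *' mon 'allow rwx' \
        -o /var/lib/ceph/mds/ceph-b1/keyring
    # add an [mds.b1] section with "host = b1" to ceph.conf, then start it
    service ceph start mds.b1
    # confirm the new daemon is registered (active or standby)
    ceph mds stat
    # on a1: shut down the old daemon once the new one is in the map
    service ceph stop mds.a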

(These relatively simple takeovers are thanks to the MDS only storing
data in RADOS, and are one of the big design considerations in the
system architecture).
-Greg


Re: [ceph-users] Moving an MDS

2013-06-11 Thread Bryan Stillwell
On Tue, Jun 11, 2013 at 3:50 PM, Gregory Farnum g...@inktank.com wrote:
 You should not run more than one active MDS (less stable than a
 single-MDS configuration, bla bla bla), but you can run multiple
 daemons and let the extras serve as a backup in case of failure. The
 process for moving an MDS is pretty easy: turn on a daemon somewhere
 else, confirm it's connected to the cluster, then turn off the old
 one.
 Doing it that way will induce ~30 seconds of MDS unavailability while
 the old daemon times out, but on cuttlefish you should be able to force
 an instant takeover if the new daemon uses the same name as the old one
 (I haven't worked with this much myself, so I might be missing a detail;
 if this is important you should check).

 (These relatively simple takeovers are thanks to the MDS only storing
 data in RADOS, and are one of the big design considerations in the
 system architecture).

Thanks Greg!

That sounds pretty easy, although it has me wondering: what config
option differentiates an active MDS from a backup MDS daemon?

Bryan


Re: [ceph-users] Moving an MDS

2013-06-11 Thread Gregory Farnum
On Tue, Jun 11, 2013 at 3:04 PM, Bryan Stillwell
bstillw...@photobucket.com wrote:
 On Tue, Jun 11, 2013 at 3:50 PM, Gregory Farnum g...@inktank.com wrote:
 You should not run more than one active MDS (less stable than a
 single-MDS configuration, bla bla bla), but you can run multiple
 daemons and let the extras serve as a backup in case of failure. The
 process for moving an MDS is pretty easy: turn on a daemon somewhere
 else, confirm it's connected to the cluster, then turn off the old
 one.
 Doing it that way will induce ~30 seconds of MDS unavailability while
 the old daemon times out, but on cuttlefish you should be able to force
 an instant takeover if the new daemon uses the same name as the old one
 (I haven't worked with this much myself, so I might be missing a detail;
 if this is important you should check).

 (These relatively simple takeovers are thanks to the MDS only storing
 data in RADOS, and are one of the big design considerations in the
 system architecture).

 Thanks Greg!

 That sounds pretty easy, although it has me wondering: what config
 option differentiates an active MDS from a backup MDS daemon?

 Bryan

You can manipulate this with the "mds standby for name", "mds standby
for rank", and "mds standby replay" config options, but in general
it's just a race to see who contacts the monitor first. :)
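
As a minimal sketch of how that might look in ceph.conf (the section
name mds.b1 and the host are made up for illustration; the options
belong in the section of the daemon that should act as the standby):

    [mds.b1]
        host = b1
        ; trail mds.a and replay its journal for a faster takeover
        mds standby replay = true
        mds standby for name = a
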
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com