On Friday, April 20, 2012 at 12:00 PM, Sławomir Skowron wrote:
> Maybe it's a lame question, but does anybody know the simplest
> procedure for the most non-disruptive upgrade of a ceph cluster with a
> real workload on it?

Unfortunate though it is, non-disruptive upgrades aren't a great idea to 
attempt right now. We've architected the system to make them possible, and we 
*try* to keep things forward-compatible, but we don't currently do any of the 
testing necessary to promise something like that.

It will be possible Very Soon in one form or another, but for now you shouldn't 
count on it. When it's ready you'll hear about it: we'll be proudly announcing 
that we're testing it and that it works, whether that's on our main branch or 
on a new long-term stable branch, and so on. ;)

> It's most important if we want to semi-automate this process with some
> tools. Maybe there is a cookbook for this operation? I know that
> automating this is not simple, and is dangerous, but even in a manual
> upgrade it's important to know what we can expect.

So, for now what we recommend is shutting down the cluster, upgrading 
everything all at once, and then starting up the monitors, OSDs, and MDSes (in 
that order). Handling on-disk format changes is a lot easier to implement and 
test than guaranteeing wire compatibility between versions, and it has been 
working well for a long time. If for some reason it makes you feel better, you 
can also upgrade the monitors as a group, then the OSDs as a group, then the 
MDSes. Things will start back up and the OSDs will go through a brief peering 
period, but since no OSD will have data the others lack, it should be fast.
-Greg

