I had several kernel-mapped RBDs as well as ceph-fuse-mounted CephFS
clients when I upgraded from Jewel to Luminous. I rolled out the client
upgrades over a few weeks after the upgrade. I had tested that my client
use cases would be fine running Jewel against a Luminous cluster, so
there weren't any surprises for me when I did it in production.
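If it helps, a rough sketch of how you might verify client compatibility on a Luminous cluster before and after rolling out client upgrades (commands assume admin access to the cluster; the final step is only appropriate once every client has actually been upgraded):

```shell
# List the feature releases reported by currently connected clients
# (available from Luminous on); old Jewel clients will show up here.
ceph features

# Show the minimum client release the cluster currently requires.
ceph osd get-require-min-compat-client

# Only after confirming no pre-Luminous clients remain connected,
# raise the floor so newer tunables can be used safely.
ceph osd set-require-min-compat-client luminous
```

Raising require-min-compat-client (or switching CRUSH tunables) before the clients are upgraded is exactly what would cut off the old clients, hence the upgrade order.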

On Tue, Apr 3, 2018, 11:21 PM Konstantin Shalygin <k0...@k0ste.ru> wrote:

> > The VMs are XenServer VMs with virtual Disk saved at the NFS Server
> which has the RBD mounted … So there is nor migration from my POV as there
> is no second storage to migrate to ...
>
>
>
> All your pain is self-inflicted.
>
> Just FYI, clients are not interrupted when you upgrade Ceph. Clients are
> interrupted only when the cluster's requirements change, e.g. if you
> (suddenly) change CRUSH tunables or the minimum required client version
> (for this reason clients must be upgraded before the cluster).
>
>
>
>
> k
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
