We did a similar upgrade on a test system yesterday, from mimic to nautilus.
All of the PGs stayed offline until we issued this command:

ceph osd require-osd-release nautilus --yes-i-really-mean-it
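
A quick way to check afterwards that the flag took effect (output may vary slightly between versions) is:

# ceph osd dump | grep require_osd_release
require_osd_release nautilus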

On Wed, 27 Feb 2019 at 12:19, Zhenshi Zhou <deader...@gmail.com> wrote:

> Hi,
>
> The servers have been moved to the new datacenter and I got the cluster
> back online following the instructions.
>
> # ceph -s
>   cluster:
>     id:     7712ab7e-3c38-44b3-96d3-4e1de9da0ff6
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3
>     mgr: ceph-mon3(active), standbys: ceph-mon1, ceph-mon2
>     mds: cephfs-1/1/1 up  {0=ceph-mds=up:active}, 1 up:standby
>     osd: 63 osds: 63 up, 63 in
>
>   data:
>     pools:   4 pools, 640 pgs
>     objects: 108.6 k objects, 379 GiB
>     usage:   1.3 TiB used, 228 TiB / 229 TiB avail
>     pgs:     640 active+clean
>
> Thanks guys:)
>
> On Wed, 27 Feb 2019 at 02:45, Eugen Block <ebl...@nde.ag> wrote:
>
>> Hi,
>>
>> > Well, I've just reacted to all the text at the beginning of
>> >
>> http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
>> > including the title "the messy way". If the cluster is clean I see no
>> > reason for doing brain surgery on monmaps
>> > just to "save" a few minutes of redoing correctly from scratch.
>>
>> with that I would agree. Careful planning and an installation
>> following the docs should be the first priority. But I would also
>> encourage users to experiment with Ceph before going into production.
>> Dealing with failures and outages on a production cluster causes much
>> more headache than on a test cluster. ;-)
>>
>> If the cluster is empty anyway, I would also rather reinstall it; that
>> doesn't take much time. I just wanted to point out that there is a way
>> that worked for me, although that was only on a test cluster.
>>
>> Regards,
>> Eugen
>>
>>
>> Quoting Janne Johansson <icepic...@gmail.com>:
>>
>> > On Mon, 25 Feb 2019 at 13:40, Eugen Block <ebl...@nde.ag> wrote:
>> >> I just moved a (virtual lab) cluster to a different network, and it
>> >> worked like a charm.
>> >> With the offline method you need to:
>> >>
>> >> - set osd noout and ensure there are no OSDs up
>> >> - change the MONs' IPs; see the bottom of [1] "CHANGING A MONITOR'S
>> >> IP ADDRESS" — the MONs are the only daemons that are really sticky
>> >> with their IP
>> >> - ensure ceph.conf has the new MON IPs and network ranges
>> >> - start the MONs with the new monmap, then start the OSDs (rough
>> >> sketch below)
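>> >>
>> >> Roughly, per MON (mon name, paths and addresses here are only
>> >> placeholders):
>> >>
>> >> # extract the current monmap from the stopped MON
>> >> ceph-mon -i mon1 --extract-monmap /tmp/monmap
>> >> # replace the old address with the new one
>> >> monmaptool --rm mon1 /tmp/monmap
>> >> monmaptool --add mon1 10.0.0.11:6789 /tmp/monmap
>> >> # inject the edited map, then start the MON again
>> >> ceph-mon -i mon1 --inject-monmap /tmp/monmap
>> >>
>> >> and in ceph.conf something like:
>> >>
>> >> mon_host = 10.0.0.11,10.0.0.12,10.0.0.13
>> >> public_network = 10.0.0.0/24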
>> >>
>> >> > No, certain IPs will be visible in the databases, and those will
>> >> > not change.
>> >> I'm not sure where old IPs would still be visible, could you clarify
>> >> that, please?
>> >
>> > Well, I've just reacted to all the text at the beginning of
>> >
>> http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
>> > including the title "the messy way". If the cluster is clean I see no
>> > reason for doing brain surgery on monmaps
>> > just to "save" a few minutes of redoing correctly from scratch. What
>> > if you miss some part, some command gives you an error
>> > you really aren't comfortable with, something doesn't really feel
>> > right after doing it, then the whole lifetime of that cluster
>> > will be followed by a small nagging feeling that it might have been
>> > that time you followed a guide that tries to talk you out of
>> > doing it that way, for a cluster with no data.
>> >
>> > I think that is the wrong way to learn how to run clusters.
>> >
>> > --
>> > May the most significant bit of your life be positive.
>>
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
