On Wed, Jul 24, 2019 at 2:56 PM Peter Eisch <peter.ei...@virginpulse.com>
wrote:

> Hi Paul,
>
> To better answer your question, I'm following:
> http://docs.ceph.com/docs/nautilus/releases/nautilus/
>
> At step 6, upgrade OSDs, I jumped on an OSD host and did a full 'yum
> update' to patch the host, then rebooted to pick up the current CentOS
> kernel.
>

If you are at Step 6 then it is *crucial* to understand that ceph-disk, the
tooling used to create those OSDs, is no longer available, and Step 7 *is
absolutely required*.

ceph-volume has to scan the system, report every OSD it finds, and persist
that metadata in /etc/ceph/osd/*.json files so that the OSDs can later be
"activated".


> I didn't run any specific commands just for updating the ceph RPMs in this
> process.
>
>
It is not clear whether you are at Step 6 and wondering why the OSDs are not
up, or past that point and ceph-volume wasn't able to detect anything.
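
Since the lsblk output quoted further down shows dmcrypt ceph-disk OSDs with
osd-lockbox partitions, here is a rough sketch of what activation has to
reassemble for one OSD, in case it helps with the "how do I get the lockboxes
mounted" question below. The device names and the <osd-fsid> placeholder are
illustrative, and the config-key path is an assumption based on ceph-disk's
dmcrypt layout, so verify everything against your own cluster before running
anything:

# mount the small lockbox partition; it holds the keyring used to fetch
# the dmcrypt key from the monitors (device and fsid are placeholders)
mount /dev/sdX5 /var/lib/ceph/osd-lockbox/<osd-fsid>
# fetch the LUKS key and use it to open the 100M OSD metadata partition
ceph --name client.osd-lockbox.<osd-fsid> \
     --keyring /var/lib/ceph/osd-lockbox/<osd-fsid>/keyring \
     config-key get dm-crypt/osd/<osd-fsid>/luks \
  | cryptsetup --key-file - luksOpen /dev/sdX1 <osd-fsid>
mount /dev/mapper/<osd-fsid> /var/lib/ceph/osd/ceph-NN
# the large data partition (sdX2) needs the same luksOpen step so the
# bluestore block device is available before the OSD can start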


> peter
>
>
> From: Paul Emmerich <paul.emmer...@croit.io>
> Date: Wednesday, July 24, 2019 at 1:39 PM
> To: Peter Eisch <peter.ei...@virginpulse.com>
> Cc: Xavier Trilla <xavier.tri...@clouding.io>, "ceph-users@lists.ceph.com"
> <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] Upgrading and lost OSDs
>
> On Wed, Jul 24, 2019 at 8:36 PM Peter Eisch <peter.ei...@virginpulse.com>
> wrote:
> # lsblk
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> sda 8:0 0 1.7T 0 disk
> ├─sda1 8:1 0 100M 0 part
> ├─sda2 8:2 0 1.7T 0 part
> └─sda5 8:5 0 10M 0 part
> sdb 8:16 0 1.7T 0 disk
> ├─sdb1 8:17 0 100M 0 part
> ├─sdb2 8:18 0 1.7T 0 part
> └─sdb5 8:21 0 10M 0 part
> sdc 8:32 0 1.7T 0 disk
> ├─sdc1 8:33 0 100M 0 part
>
> That's ceph-disk which was removed, run "ceph-volume simple scan"
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
>
> http://www.croit.io
> Tel: +49 89 1896585 90
>
>
> ...
> I'm thinking the OSDs would start (I can recreate the .service definitions
> for systemd) if the above were mounted the way they are on another of my
> hosts:
> # lsblk
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> sda 8:0 0 1.7T 0 disk
> ├─sda1 8:1 0 100M 0 part
> │ └─97712be4-1234-4acc-8102-2265769053a5 253:17 0 98M 0 crypt
> /var/lib/ceph/osd/ceph-16
> ├─sda2 8:2 0 1.7T 0 part
> │ └─049b7160-1234-4edd-a5dc-fe00faca8d89 253:16 0 1.7T 0 crypt
> └─sda5 8:5 0 10M 0 part
> /var/lib/ceph/osd-lockbox/97712be4-9674-4acc-1234-2265769053a5
> sdb 8:16 0 1.7T 0 disk
> ├─sdb1 8:17 0 100M 0 part
> │ └─f03f0298-1234-42e9-8b28-f3016e44d1e2 253:26 0 98M 0 crypt
> /var/lib/ceph/osd/ceph-17
> ├─sdb2 8:18 0 1.7T 0 part
> │ └─51177019-1234-4963-82d1-5006233f5ab2 253:30 0 1.7T 0 crypt
> └─sdb5 8:21 0 10M 0 part
> /var/lib/ceph/osd-lockbox/f03f0298-1234-42e9-8b28-f3016e44d1e2
> sdc 8:32 0 1.7T 0 disk
> ├─sdc1 8:33 0 100M 0 part
> │ └─0184df0c-1234-404d-92de-cb71b1047abf 253:8 0 98M 0 crypt
> /var/lib/ceph/osd/ceph-18
> ├─sdc2 8:34 0 1.7T 0 part
> │ └─fdad7618-1234-4021-a63e-40d973712e7b 253:13 0 1.7T 0 crypt
> ...
>
> Thank you for your time on this,
>
> peter
>
> From: Xavier Trilla <xavier.tri...@clouding.io>
> Date: Wednesday, July 24, 2019 at 1:25 PM
> To: Peter Eisch <peter.ei...@virginpulse.com>
> Cc: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] Upgrading and lost OSDs
>
> Hi Peter,
>
> I'm not sure, but maybe after some changes the OSDs are no longer being
> recognized by the ceph scripts.
>
> Ceph used to use udev to detect the OSDs and then moved to LVM. Which kind
> of OSDs are you running, BlueStore or FileStore? Which version did you use
> to create them?
>
> Cheers!
>
> On 24 Jul 2019, at 20:04, Peter Eisch <peter.ei...@virginpulse.com> wrote:
> Hi,
>
> I’m working through updating from 12.2.12/Luminous to 14.2.2/Nautilus on
> CentOS 7.6. The managers updated fine:
>
> # ceph -s
>   cluster:
>     id:     2fdb5976-1234-4b29-ad9c-1ca74a9466ec
>     health: HEALTH_WARN
>             Degraded data redundancy: 24177/9555955 objects degraded
> (0.253%), 7 pgs degraded, 1285 pgs undersized
>             3 monitors have not enabled msgr2
>  ...
>
> I updated ceph on an OSD host with 'yum update' and then rebooted to grab
> the current kernel. Along the way, the contents of all the directories in
> /var/lib/ceph/osd/ceph-*/ were deleted, so I have 16 OSDs down from this.
> I can manage the undersized PGs, but I'd like to get these drives working
> again without deleting and recreating each OSD.
>
> So far I've pulled the respective cephx key into the 'keyring' file and
> populated 'bluestore' into the 'type' files but I'm unsure how to get the
> lockboxes mounted to where I can get the OSDs running. The osd-lockbox
> directory is otherwise untouched from when the OSDs were deployed.
>
> Is there a way to run ceph-deploy or some other tool to rebuild the mounts
> for the drives?
>
> peter
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
