OK, I am in some trouble now and would love some help!  After updating, none
of the OSDs on the node will come back up:
[email protected]
 loaded failed failed    Ceph disk activation: /dev/sdb1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdb2
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdb3
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdb4
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdb5
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdc1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdc2
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdc3
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdc4
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdc5
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdd1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sde1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdf1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdg1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdh1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdi1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdj1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdk1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdl1
● [email protected]
 loaded failed failed    Ceph disk activation: /dev/sdm1
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon
● [email protected]
 loaded failed failed    Ceph object storage daemon

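For reference, this is how I have been poking at the failed units so far
(using /dev/sdd1 as the example; I assume the others are failing the same
way):

  # show why the activation unit failed
  systemctl status ceph-disk@dev-sdd1.service

  # full log for that unit since the last boot
  journalctl -b -u ceph-disk@dev-sdd1.service
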
I did some searching and saw that the issue is that the disks aren't
mounting... My question is: how can I mount them correctly again (note that
sdb and sdc are SSDs used for cache)? I am not sure which disk maps to
ceph-osd@0 and so on.  Also, can I add them to /etc/fstab as a workaround?
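
In case it helps, this is the rough approach I pieced together from
searching for mapping the partitions back to OSD IDs and re-mounting them
(/dev/sdd1 and /mnt are just examples, so please correct me if this is the
wrong way to go about it):

  # show each partition and, for data partitions, which osd it belongs to
  sudo ceph-disk list

  # mount one data partition by hand and read its osd id
  sudo mount /dev/sdd1 /mnt
  cat /mnt/whoami
  sudo umount /mnt

  # or have ceph-disk try to re-activate everything it recognises
  sudo ceph-disk activate-all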

Cheers,
Mike

On Tue, Nov 29, 2016 at 10:41 AM, Mike Jacobacci <[email protected]> wrote:

> Hello,
>
> I would like to install OS updates on the ceph cluster and activate a
> second 10gb port on the OSD nodes, so I wanted to verify the correct steps
> to perform maintenance on the cluster.  We are only using RBD to back our
> XenServer VMs at this point, and our cluster consists of 3 OSD nodes, 3
> Mon nodes and 1 admin node...  So would these be the correct steps:
>
> 1. Shut down the VMs?
> 2. Run "ceph osd set noout" on the admin node.
> 3. Install updates on each monitor node and reboot them one at a time.
> 4. Install updates on the OSD nodes and activate the second 10gb port,
> rebooting one OSD node at a time.
> 5. Once all nodes are back up, run "ceph osd unset noout".
> 6. Bring the VMs back online.
>
> Does this sound correct?
>
>
> Cheers,
> Mike
>
>
