One thing that can cause this is messed-up partition IDs / typecodes.  Check 
out the ceph-disk script to see how they get applied.  I have a few systems 
that somehow got messed up -- at boot the OSDs don't get started, but if I 
mounted them manually on /mnt, checked the whoami file, remounted accordingly, 
and then started them, they ran fine.
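Roughly, the manual recovery looks like this -- a dry-run sketch only, 
assuming the OSD data partition is /dev/sdb1 and a sysvinit-style "service 
ceph" (adjust both for your setup):

```shell
# Dry-run sketch of the manual mount/whoami/remount recovery.
set -e
dev=/dev/sdb1            # assumption: your OSD data partition
run() { echo "+ $*"; }   # dry-run wrapper: prints instead of executing;
                         # change the body to "$@" to actually run commands
run mount "$dev" /mnt
id=0                     # in a real run: id=$(cat /mnt/whoami)
run umount /mnt
run mount "$dev" /var/lib/ceph/osd/ceph-$id
run service ceph start osd.$id
```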

# for i in b c d e f g h i j k ; do sgdisk --typecode=1:4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D /dev/sd$i ; done

# for i in b c d e f g h i j k ; do sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sd$i ; done

One system I botched by setting all the partition GUIDs to the same constant; 
I went back and fixed that:

# for i in b c d e f g h i j k ; do sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 --partition-guid=2:$(uuidgen -r) /dev/sd$i ; done

(Note that sgdisk's --partition-guid takes a partnum:guid pair, same as 
--typecode.)

Note that I have not yet rebooted these systems to validate this approach, so 
YMMV, proceed at your own risk, this advice is not FDIC-insured and may lose 
value.


# sgdisk -i 1 /dev/sdb
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 61397DDD-E203-4D9A-9256-24E0F5F97344
First sector: 20973568 (at 10.0 GiB)
Last sector: 5859373022 (at 2.7 TiB)
Partition size: 5838399455 sectors (2.7 TiB)
Attribute flags: 0000000000000000
Partition name: 'ceph data'

# sgdisk -i 2 /dev/sdb
Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
Partition unique GUID: EF292AB7-985E-40A2-B185-DD5911D17BD7
First sector: 2048 (at 1024.0 KiB)
Last sector: 20971520 (at 10.0 GiB)
Partition size: 20969473 sectors (10.0 GiB)
Attribute flags: 0000000000000000
Partition name: 'ceph journal'

--aad


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
