Hi,
I was trying a scenario where I partitioned my drive (/dev/sdb) into
4 partitions (sdb1, sdb2, sdb3, sdb4) using the sgdisk utility:
# sgdisk -z /dev/sdb
# sgdisk -n 1:0:+1024M -c 1:"ceph journal" /dev/sdb
# sgdisk -n 2:0:+1024M -c 2:"ceph journal" /dev/sdb
# sgdisk -n 3:0:+4096M -c 3:"ceph data" /dev/sdb
# sgdisk -n 4:0:+4096M -c 4:"ceph data" /dev/sdb
I checked with lsblk and the partitions were created as expected.
I'm using the ceph-disk command to create the OSDs:
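(If it helps, I think sgdisk -i can show the partition type GUID; my guess is that sgdisk -n leaves the partitions with the generic "Linux filesystem" type (0fc63daf-...), which looks like exactly what the ceph-disk warning further down complains about:)
# sgdisk -i 3 /dev/sdb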
# ceph-disk prepare --cluster ceph /dev/sdb3 /dev/sdb1
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
prepare_device: Journal /dev/sdb1 was not prepared with ceph-disk. Symlinking directly.
set_data_partition: incorrect partition UUID: 0fc63daf-8483-4772-8e79-3d69d8477de4, expected ['4fbd7e29-9d25-41b8-afd0-5ec00ceff05d', '4fbd7e29-9d25-41b8-afd0-062c0ceff05d', '4fbd7e29-8ae0-4982-bf9d-5a8d867af560', '4fbd7e29-9d25-41b8-afd0-35865ceff05d']
meta-data=/dev/sdb3 isize=2048 agcount=4, agsize=261760 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=1047040, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=65536 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
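If I read that warning correctly, ceph-disk expects the data partition to carry one of the Ceph OSD type GUIDs it lists (and, I assume, the journal partition to carry the Ceph journal type GUID) so that its udev rules can recognise them at boot. I was thinking of re-tagging the partitions with sgdisk -t along these lines, but I'm not sure whether that is the supported way, so please correct me (the journal GUID below is the one I believe ceph-disk uses):
# sgdisk -t 1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
# sgdisk -t 2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
# sgdisk -t 3:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
# sgdisk -t 4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
# partprobe /dev/sdb
For now I just carried on as below: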
# chown ceph:ceph /dev/sdb*
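(I realise this chown probably doesn't survive a reboot, since the /dev nodes are recreated at boot. If that turns out to be the problem, I suppose a udev rule roughly like the following could pin the ownership instead, though the rule file name here is just something I made up:)
# cat /etc/udev/rules.d/99-ceph-sdb.rules
KERNEL=="sdb[1-4]", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"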
# ceph-disk activate /dev/sdb3
got monmap epoch 1
added key for osd.2
Created symlink from /etc/systemd/system/ceph-osd.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
# lsblk
sdb 8:16 0 10G 0 disk
├─sdb1 8:17 0 1G 0 part
├─sdb2 8:18 0 1G 0 part
├─sdb3 8:19 0 4G 0 part /var/lib/ceph/osd/ceph-2
└─sdb4 8:20 0 4G 0 part
# systemctl status [email protected]
● [email protected] - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
Active: active (running) since Fri 2016-12-16 13:44:44 IST; 1min 38s ago
Process: 4599 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 4650 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/[email protected]
└─4650 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph -...
Dec 16 13:46:19 admin ceph-osd[4650]: 2016-12-16 13:46:19.816627 7f1a7cd127...4)
Dec 16 13:46:19 admin ceph-osd[4650]: 2016-12-16 13:46:19.816652 7f1a7cd127...4)
Dec 16 13:46:20 admin ceph-osd[4650]: 2016-12-16 13:46:20.534610 7f1a54b2c7...9)
Dec 16 13:46:20 admin ceph-osd[4650]: 2016-12-16 13:46:20.534638 7f1a54b2c7...9)
Dec 16 13:46:20 admin ceph-osd[4650]: 2016-12-16 13:46:20.816934 7f1a7cd127...1)
Dec 16 13:46:20 admin ceph-osd[4650]: 2016-12-16 13:46:20.816979 7f1a7cd127...1)
Dec 16 13:46:21 admin ceph-osd[4650]: 2016-12-16 13:46:21.817323 7f1a7cd127...8)
Dec 16 13:46:21 admin ceph-osd[4650]: 2016-12-16 13:46:21.817436 7f1a7cd127...8)
Dec 16 13:46:22 admin ceph-osd[4650]: 2016-12-16 13:46:22.826281 7f1a7cd127...7)
Dec 16 13:46:22 admin ceph-osd[4650]: 2016-12-16 13:46:22.826334 7f1a7cd127...7)
Hint: Some lines were ellipsized, use -l to show in full.
But when I reboot the node, the OSD doesn't come up automatically!
# lsblk
sdb 8:16 0 10G 0 disk
├─sdb1 8:17 0 1G 0 part
├─sdb2 8:18 0 1G 0 part
├─sdb3 8:19 0 4G 0 part
└─sdb4 8:20 0 4G 0 part
# systemctl status [email protected]
● [email protected] - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2016-12-16 13:48:52 IST; 2min 6s ago
Process: 2491 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Process: 2446 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 2491 (code=exited, status=1/FAILURE)
Dec 16 13:48:52 admin systemd[1]: [email protected]: main process exited, code=exited, status=1/FAILURE
Dec 16 13:48:52 admin systemd[1]: Unit [email protected] entered failed state.
Dec 16 13:48:52 admin systemd[1]: [email protected] failed.
Dec 16 13:48:52 admin systemd[1]: [email protected] holdoff time over, scheduling restart.
Dec 16 13:48:52 admin systemd[1]: start request repeated too quickly for [email protected]
Dec 16 13:48:52 admin systemd[1]: Failed to start Ceph object storage daemon.
Dec 16 13:48:52 admin systemd[1]: Unit [email protected] entered failed state.
Dec 16 13:48:52 admin systemd[1]: [email protected] failed.
# systemctl start [email protected]
Job for [email protected] failed because start of the service was attempted too often. See "systemctl status [email protected]" and "journalctl -xe" for details.
To force a start use "systemctl reset-failed [email protected]" followed by "systemctl start [email protected]" again.
But if I do it with a single OSD per drive, it works fine.
Has anyone else faced this issue?
Thanks,
Sandeep