I’ll try the prepare/activate commands again. I spent the least amount of time
with them since activate _always_ failed for me. I’ll go back and check my
logs, but it probably failed because I was attempting to activate the same
device I used in the ‘prepare’ instead of partition 1 like you suggest (which
is exactly how it is shown in the documentation example).
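In other words, prepare against the raw device and then activate against the
partition that prepare creates, e.g. (restating your suggestion below):
ceph-deploy osd prepare ceph0:/dev/sdl:/dev/md0p17
ceph-deploy osd activate ceph0:/dev/sdl1:/dev/md0p17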
I seemed to get the closest to a working cluster using the ‘manual’ commands
below. I could try changing the XFS mount point to a partition of the HDD
I’m using for the OSD.
mkdir /var/lib/ceph/osd/ceph-$OSD
mkfs -t xfs -f /dev/sd$i
mount -t xfs /dev/sd$i /var/lib/ceph/osd/ceph-$OSD
ceph-osd -i $OSD --mkfs --mkkey --osd-journal /dev/md0p$PART
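For reference, my reading of the docs’ full ‘long form’ sequence around those
commands is roughly the following (the crush weight, the host bucket, and
formatting a partition instead of the whole disk are my assumptions, not
something I’ve verified here):
OSD=$(ceph osd create)                 # ask the monitors to allocate a new osd id
mkdir /var/lib/ceph/osd/ceph-$OSD
mkfs -t xfs -f /dev/sd${i}1            # format a partition rather than the whole disk
mount -t xfs /dev/sd${i}1 /var/lib/ceph/osd/ceph-$OSD
ceph-osd -i $OSD --mkfs --mkkey --osd-journal /dev/md0p$PART
ceph auth add osd.$OSD osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-$OSD/keyring
ceph osd crush add osd.$OSD 1.0 host=ceph0   # placeholder weight; host bucket must already exist
service ceph start osd.$OSD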
What I find most confusing about using ceph-deploy with multiple OSDs on the
same host is that when ‘ceph-deploy osd create [data] [journal]’ completes
there is no per-OSD directory under:
[root@ceph0 ceph]# ll /var/lib/ceph/osd/
total 0
[root@ceph0 ceph]#
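As far as I can tell those ceph-N directories only appear once an OSD is
actually activated and its data partition is mounted there, so the quick
checks I’ve been using are:
ceph-disk list                    # shows which partitions are 'prepared' vs 'active'
mount | grep /var/lib/ceph/osd    # an activated OSD should have its data partition mounted here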
From: ceph-users [mailto:[email protected]] On Behalf Of Jason
King
Sent: Thursday, August 14, 2014 8:13 PM
To: [email protected]
Subject: Re: [ceph-users] How to create multiple OSD's per host?
2014-08-15 7:56 GMT+08:00 Bruce McFarland
<[email protected]>:
This is an example of the output from ‘ceph-deploy osd create [data] [journal]’.
I’ve noticed that all of the ‘ceph-conf’ commands use the same parameter of
‘--name=osd.’ every time ceph-deploy is called. I end up with 30 OSDs: 29
‘prepared’ and 1 ‘active’ according to the ‘ceph-disk list’ output, and only 1
OSD has an XFS mount point. I’ve tried both with all data/journal devices on
the same ceph-deploy command line and with issuing one ceph-deploy command for
each OSD data/journal pair (easier to script).
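The per-pair version I’ve been scripting looks roughly like this (the drive
letters and journal partition numbers are just placeholders for my layout):
PART=2
for i in b c d e f; do
    ceph-deploy osd create ceph0:/dev/sd${i}:/dev/md0p${PART}
    PART=$((PART + 1))
done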
+ ceph-deploy osd create ceph0:/dev/sdl:/dev/md0p17
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.10): /usr/bin/ceph-deploy osd create
ceph0:/dev/sdl:/dev/md0p17
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
ceph0:/dev/sdl:/dev/md0p17
[ceph0][DEBUG ] connected to host: ceph0
[ceph0][DEBUG ] detect platform information from remote host
[ceph0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph0
[ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph0][INFO ] Running command: udevadm trigger --subsystem-match=block
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph0 disk /dev/sdl journal
/dev/md0p17 activate True
[ceph0][INFO ] Running command: ceph-disk -v prepare --fs-type xfs --cluster
ceph -- /dev/sdl /dev/md0p17
[ceph0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph0][DEBUG ] order to align on 2048-sector boundaries.
[ceph0][DEBUG ] The operation has completed successfully.
[ceph0][DEBUG ] meta-data=/dev/sdl1 isize=2048 agcount=4,
agsize=244188597 blks
[ceph0][DEBUG ] = sectsz=512 attr=2,
projid32bit=0
[ceph0][DEBUG ] data = bsize=4096 blocks=976754385,
imaxpct=5
[ceph0][DEBUG ] = sunit=0 swidth=0 blks
[ceph0][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
[ceph0][DEBUG ] log =internal log bsize=4096 blocks=476930,
version=2
[ceph0][DEBUG ] = sectsz=512 sunit=0 blks,
lazy-count=1
[ceph0][DEBUG ] realtime =none extsz=4096 blocks=0,
rtextents=0
[ceph0][DEBUG ] The operation has completed successfully.
[ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=osd_journal_size
[ceph0][WARNIN] DEBUG:ceph-disk:Journal /dev/md0p17 is a partition
[ceph0][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is
not the same device as the osd data
[ceph0][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdl
[ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk
--largest-new=1 --change-name=1:ceph data
--partition-guid=1:a96b4af4-11f4-4257-9476-64a6e4c93c28
--typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdl
[ceph0][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdl
[ceph0][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[ceph0][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdl1
[ceph0][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i
size=2048 -- /dev/sdl1
[ceph0][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdl1 on
/var/lib/ceph/tmp/mnt.8xAu31 with options noatime
[ceph0][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime --
/dev/sdl1 /var/lib/ceph/tmp/mnt.8xAu31
[ceph0][WARNIN] DEBUG:ceph-disk:Preparing osd data dir
/var/lib/ceph/tmp/mnt.8xAu31
[ceph0][WARNIN] DEBUG:ceph-disk:Creating symlink
/var/lib/ceph/tmp/mnt.8xAu31/journal -> /dev/md0p17
[ceph0][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.8xAu31
[ceph0][WARNIN] INFO:ceph-disk:Running command: /bin/umount --
/var/lib/ceph/tmp/mnt.8xAu31
[ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdl
[ceph0][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdl
[ceph0][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph0][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdl
[ceph0][WARNIN] BLKPG: Device or resource busy
[ceph0][WARNIN] error adding partition 1
[ceph0][INFO ] Running command: udevadm trigger --subsystem-match=block
--action=add
[ceph0][INFO ] checking OSD status...
[ceph0][INFO ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph0 is now ready for osd use.
From: Bruce McFarland
Sent: Thursday, August 14, 2014 11:45 AM
To: '[email protected]'
Subject: How to create multiple OSD's per host?
I’ve tried using ceph-deploy, but it wants to assign the same id for each OSD
and I end up with a bunch of “prepared” ceph-disks and only 1 “active”. If I
use the manual “short form” method the activate step fails and there are no XFS
mount points on the ceph-disks. If I use the manual “long form” it seems like
I’m the closest to getting active ceph-disks/OSDs, but the monitor always shows
the OSDs as “down/in” and the ceph-disks don’t persist over a boot cycle.
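(I suspect the long-form mounts simply need /etc/fstab entries, or the GPT/udev
handling that ceph-disk sets up, to come back after a reboot; if I understand
it right something like this per OSD:
/dev/sdl1   /var/lib/ceph/osd/ceph-0   xfs   noatime   0 0
but that’s a guess on my part.)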
Does anyone know of a document that explains a step-by-step process for
bringing up multiple OSDs per host, with 1 HDD and an SSD journal partition
per OSD?
Thanks,
Bruce
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Bruce,
I deployed 3 OSDs per host a few days ago and it worked pretty well.
I examined your log and my guess is you should try:
ceph-deploy osd prepare ceph0:/dev/sdl:/dev/md0p17
ceph-deploy osd activate ceph0:/dev/sdl1:/dev/md0p17
Or you could create a partition on /dev/sdl yourself and run:
ceph-deploy osd create ceph0:/dev/sdl1:/dev/md0p17
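(Creating that partition by hand would look roughly like what ceph-disk already
does in your log, e.g.:
sgdisk --largest-new=1 --change-name=1:"ceph data" -- /dev/sdl
partprobe /dev/sdl
then point ‘ceph-deploy osd create’ at the resulting /dev/sdl1.)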
Hope this works.
Jason
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com