Hello,
We experienced the same error as reported by Navendra, although we're
running Ubuntu Server 12.04.
We managed to work around it by trial and error. Below are the steps we
performed; perhaps they can help you track down the underlying error.
*Step 1 - This was the error*
openstack@monitor3:~/cluster1$ *ceph-deploy -v osd prepare ceph1:sde:/dev/sdb*
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy -v osd prepare ceph1:sde:/dev/sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph1:/dev/sde:/dev/sdb
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] osd keyring does not exist yet, creating one
[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
*Step 2 - We then ran the following commands:*
Deploy-node : ceph-deploy uninstall ceph1
Ceph1-node : sudo rm -rf /etc/ceph/*
Deploy-node : ceph-deploy gatherkeys ceph1
Deploy-node : ceph-deploy -v install ceph1
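For anyone hitting the same error, the Step 2 reset sequence plus the retry
can be consolidated into one POSIX shell function. This is our own sketch,
not something ceph-deploy ships; it assumes the usual ceph-deploy setup
(passwordless ssh and sudo from the deploy node to the OSD node), and with
DRY_RUN=1 it only prints the commands instead of running them.

```shell
# Sketch of the reset-and-retry sequence from Step 2 (run on the deploy node).
# Arguments: the OSD host as ceph-deploy knows it, and the host:disk:journal
# spec for "osd prepare". DRY_RUN=1 prints each command instead of running it.
reset_osd_node() {
    node=$1       # e.g. ceph1
    osd_spec=$2   # e.g. ceph1:sde:/dev/sdb

    run() {
        echo "+ $*"
        [ -n "$DRY_RUN" ] || "$@" || return 1
    }

    run ceph-deploy uninstall "$node" &&         # remove ceph from the OSD node
    run ssh "$node" "sudo rm -rf /etc/ceph/*" && # wipe stale config/keyrings there
    run ceph-deploy gatherkeys "$node" &&        # re-fetch the cluster keys
    run ceph-deploy -v install "$node" &&        # reinstall ceph on the node
    run ceph-deploy -v osd prepare "$osd_spec"   # retry the failing prepare
}
```

For example, `DRY_RUN=1 reset_osd_node ceph1 ceph1:sde:/dev/sdb` prints the
five commands so you can review them before running the sequence for real.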
*Step 3 - Apparently the problem was solved:*
Deploy-node : *ceph-deploy -v osd prepare ceph1:sde:/dev/sdb*
openstack@monitor3:~/cluster1$ ceph-deploy -v osd prepare ceph1:sde:/dev/sdb
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy -v osd prepare ceph1:sde:/dev/sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph1:/dev/sde:/dev/sdb
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] osd keyring does not exist yet, creating one
[ceph1][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph1 disk /dev/sde journal /dev/sdb activate False
[ceph1][INFO ] Running command: sudo ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sde /dev/sdb
[ceph1][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[ceph1][DEBUG ] Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][DEBUG ] Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
[ceph1][DEBUG ] The operation has completed successfully.
[ceph1][DEBUG ] meta-data=/dev/sde1          isize=2048   agcount=4, agsize=61047597 blks
[ceph1][DEBUG ]          =                   sectsz=512   attr=2, projid32bit=0
[ceph1][DEBUG ] data     =                   bsize=4096   blocks=244190385, imaxpct=25
[ceph1][DEBUG ]          =                   sunit=0      swidth=0 blks
[ceph1][DEBUG ] naming   =version 2          bsize=4096   ascii-ci=0
[ceph1][DEBUG ] log      =internal log       bsize=4096   blocks=119233, version=2
[ceph1][DEBUG ]          =                   sectsz=512   sunit=0 blks, lazy-count=1
[ceph1][DEBUG ] realtime =none               extsz=4096   blocks=0, rtextents=0
[ceph1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph1 is now ready for osd use.
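One note for readers following along: the log above shows "activate False",
i.e. the disk was only prepared, not brought into the cluster. With
ceph-deploy of that era you would normally run "osd activate" afterwards.
A hedged sketch of that step (the /dev/sde1 data partition name is inferred
from the mkfs output above, so verify it on your node; DRY_RUN=1 only prints
the command):

```shell
# Sketch: activate the OSD prepared above, from the deploy node.
# DRY_RUN=1 prints the command instead of running it.
activate_osd() {
    osd_spec=$1   # host:data-partition:journal, e.g. ceph1:/dev/sde1:/dev/sdb
    echo "+ ceph-deploy osd activate $osd_spec"
    [ -n "$DRY_RUN" ] || ceph-deploy osd activate "$osd_spec"
}
```

For example, `DRY_RUN=1 activate_osd ceph1:/dev/sde1:/dev/sdb` prints the
command; drop DRY_RUN to run it. Once activated, the OSD should appear in
`ceph osd tree`.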
Thanks!
Mike
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com