Hi all,
I'm having some issues while trying to deploy an OSD:
ceph@cephdeploy01:~/ceph-deploy$ ceph-deploy osd prepare --fs-type
btrfs cephosd01:sdd:sde
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy osd
prepare --fs-type btrfs cephosd01:sdd:sde
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
cephosd01:/dev/sdd:/dev/sde
[cephosd01][DEBUG ] connected to host: cephosd01
[cephosd01][DEBUG ] detect platform information from remote host
[cephosd01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to cephosd01
[cephosd01][DEBUG ] write cluster configuration to
/etc/ceph/{cluster}.conf
[cephosd01][INFO ] Running command: sudo udevadm trigger
--subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host cephosd01 disk /dev/sdd
journal /dev/sde activate False
[cephosd01][INFO ] Running command: sudo ceph-disk-prepare --fs-type
btrfs --cluster ceph -- /dev/sdd /dev/sde
[cephosd01][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if
journal is not the same device as the osd data
[cephosd01][WARNIN] Turning ON incompat feature 'extref': increased
hardlink limit per file to 65536
[cephosd01][DEBUG ] The operation has completed successfully.
[cephosd01][DEBUG ] Creating new GPT entries.
[cephosd01][DEBUG ] The operation has completed successfully.
[cephosd01][DEBUG ]
[cephosd01][DEBUG ] WARNING! - Btrfs v3.12 IS EXPERIMENTAL
[cephosd01][DEBUG ] WARNING! - see http://btrfs.wiki.kernel.org before
using
[cephosd01][DEBUG ]
[cephosd01][DEBUG ] fs created label (null) on /dev/sdd1
[cephosd01][DEBUG ] nodesize 32768 leafsize 32768 sectorsize 4096
size 2.73TiB
[cephosd01][DEBUG ] Btrfs v3.12
[cephosd01][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host cephosd01 is now ready for osd use.
ceph@cephdeploy01:~/ceph-deploy$
ceph@cephdeploy01:~/ceph-deploy$ ceph-deploy osd activate --fs-type
btrfs cephosd01:sdd1:sde2
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy osd
activate --fs-type btrfs cephosd01:sdd1:sde2
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
cephosd01:/dev/sdd1:/dev/sde2
[cephosd01][DEBUG ] connected to host: cephosd01
[cephosd01][DEBUG ] detect platform information from remote host
[cephosd01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host cephosd01 disk /dev/sdd1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[cephosd01][INFO ] Running command: sudo ceph-disk-activate
--mark-init upstart --mount /dev/sdd1
[cephosd01][WARNIN] INFO:ceph-disk:ceph osd.0 already mounted in
position; unmounting ours.
ceph@cephdeploy01:~/ceph-deploy$
OK, it appears to complete without errors, but when I run ceph osd tree
I find that osd.0 is down and out:
ceph@cephmon01:~$ sudo ceph osd tree
# id weight type name up/down reweight
-1 2.73 root default
-2 2.73 host cephosd01
0 2.73 osd.0 down 0
From the ceph -w output:
...
2014-08-07 15:01:26.844127 mon.0 [INF] pgmap v15: 192 pgs: 192
creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
2014-08-07 15:03:04.762202 mon.0 [INF] osdmap e15: 1 osds: 0 up, 0 in
...
ceph@cephosd01:/var/lib/ceph/bootstrap-osd$ ls -ltr
total 4
-rw------- 1 root root 71 Aug 7 14:58 ceph.keyring
ceph@cephosd01:/etc/ceph$ ls -l
total 12
-rw-r--r-- 1 root root 63 Aug 7 14:58 ceph.client.admin.keyring
-rw-r--r-- 1 root root 302 Aug 7 15:02 ceph.conf
-rw-r--r-- 1 root root 92 May 12 11:14 rbdmap
Any ideas? I'm stuck here and can't go any further.
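In case it helps, these are the extra checks I'm about to run on cephosd01. This is just a rough sketch: the log path and data-dir location are the defaults I'd expect from ceph-deploy 1.4.0 on Ubuntu 14.04 with upstart, so treat them as assumptions rather than confirmed paths:

```shell
# Is a ceph-osd daemon actually running on the node?
if pgrep -f ceph-osd >/dev/null; then
    echo "ceph-osd is running"
else
    echo "ceph-osd is NOT running"
fi

# Is the data partition mounted where the OSD expects it?
# (default data dir assumed; adjust if your layout differs)
grep /var/lib/ceph/osd /proc/mounts || echo "osd data dir not mounted"

# The tail of the OSD log (default path assumed) usually says
# why the daemon is not joining the cluster:
{ [ -f /var/log/ceph/ceph-osd.0.log ] \
    && tail -n 20 /var/log/ceph/ceph-osd.0.log; } || true
```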
Thanks in advance,
Best regards,
German Anders
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com