Hi,

 I am a little frustrated. After six attempts at deploying a test Ceph
cluster, I always get the same error at the OSD activation stage.
 The version is Firefly (el6 repo): 3 mons and 3 OSDs, all of them Xen VMs.

 The mons come up correctly and, I do not know why, two of the OSD servers
eventually do too, after a lot of errors, always the same one:

#ceph-deploy --verbose osd prepare ceph02:xvdb (works fine)

[root@ceph02 ~]# parted /dev/xvdb
GNU Parted 2.1
Using /dev/xvdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdb: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name          Flags
 2      1049kB  10,7GB  10,7GB  xfs          ceph journal
 1      10,7GB  107GB   96,6GB  xfs          ceph data


But the activate step gives us this error:

 ceph-deploy --verbose osd activate ceph02:xvdb1:/dev/xvdb2

[ceph02][WARNIN] 2014-06-27 12:27:34.750160 7f123b33d7a0 -1
filestore(/var/lib/ceph/tmp/mnt.HacFAP)
mkjournal error creating journal on /var/lib/ceph/tmp/mnt.HacFAP/journal:
(2) No such file or directory
[ceph02][WARNIN] 2014-06-27 12:27:34.750281 7f123b33d7a0 -1 OSD::mkfs:
ObjectStore::mkfs failed with error -2
[ceph02][WARNIN] 2014-06-27 12:27:34.750416 7f123b33d7a0 -1 ** ERROR:
error creating empty object store in /var/lib/ceph/tmp/mnt.HacFAP: (2) No
such file or directory
[ceph02][WARNIN] ERROR:ceph-disk:Failed to activate
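
In case it helps to pin down the symptom: that (2) No such file or directory
from mkjournal is exactly what you get when the "journal" symlink inside the
temporary mount is dangling, i.e. points at a device node that does not exist
yet (for instance, /dev/xvdb2 not yet created by udev when activation runs).
A minimal sketch of the effect, with purely illustrative paths, nothing here
is taken from my cluster:

```shell
#!/bin/sh
# Sketch: a symlink whose target device node is absent fails the -e test
# and would give ENOENT to anything trying to open it, just like mkjournal.
mkdir -p /tmp/mnt.demo

# Stand-in for the OSD's journal symlink; the target intentionally
# does not exist (hypothetical path, not a real device on this host).
ln -sf /dev/nonexistent-journal /tmp/mnt.demo/journal

if [ ! -e /tmp/mnt.demo/journal ]; then
    echo "dangling journal symlink -> $(readlink /tmp/mnt.demo/journal)"
fi
```

So it might be worth mounting /dev/xvdb1 by hand after a failed activate and
checking where its journal symlink points.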

Two of them, following the same procedure and after hitting the same error
several times, eventually come up:
 [ceph@ceph03 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      5.0G  2.2G  2.6G  46% /
tmpfs           935M     0  935M   0% /dev/shm
/dev/xvdb1       90G   37M   90G   1% /var/lib/ceph/osd/ceph-6


Any idea please?


Bertrand Russell:
*"The trouble with the world is that the stupid are cocksure and the
intelligent are full of doubt."*
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com