Hi Alfredo,
This is the complete procedure:
On the OSD node:
[ceph@ceph02 ~]$ sudo parted /dev/xvdb
GNU Parted 2.1
Using /dev/xvdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdb: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
[ceph@ceph02 ~]$ sudo ls -la /var/lib/ceph/tmp/
total 8
drwxr-xr-x 2 root root 4096 Jun 27 16:30 .
drwxr-xr-x 7 root root 4096 Jun 26 22:30 ..
[ceph@ceph02 ~]$ sudo ls -la /var/lib/ceph/osd/
total 8
drwxr-xr-x 2 root root 4096 Jun 27 12:14 .
drwxr-xr-x 7 root root 4096 Jun 26 22:30 ..
On the Ceph admin node:
[ceph@cephadm ~]$ sudo ceph osd tree
# id weight type name up/down reweight
-1 0.14 root default
-2 0.009995 host ceph02
1 0.009995 osd.1 DNE
-3 0.03999 host ceph04
4 0.03999 osd.4 up 1
-4 0.09 host ceph03
6 0.09 osd.6 up 1
[ceph@cephadm ceph-cloud]$ ceph-deploy osd prepare ceph02:xvdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.5): /usr/bin/ceph-deploy osd prepare ceph02:xvdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph02:/dev/xvdb:
[ceph02][DEBUG ] connected to host: ceph02
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Scientific Linux 6.2 Carbon
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph02
[ceph02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph02][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph02 disk /dev/xvdb journal None activate False
[ceph02][INFO ] Running command: sudo ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/xvdb
[ceph02][DEBUG ] Setting name!
[ceph02][DEBUG ] partNum is 1
[ceph02][DEBUG ] REALLY setting name!
[ceph02][DEBUG ] The operation has completed successfully.
[ceph02][DEBUG ] Setting name!
[ceph02][DEBUG ] partNum is 0
[ceph02][DEBUG ] REALLY setting name!
[ceph02][DEBUG ] The operation has completed successfully.
[ceph02][DEBUG ] meta-data=/dev/xvdb1 isize=2048 agcount=4, agsize=5897919 blks
[ceph02][DEBUG ] = sectsz=512 attr=2
[ceph02][DEBUG ] data = bsize=4096 blocks=23591675, imaxpct=25
[ceph02][DEBUG ] = sunit=0 swidth=0 blks
[ceph02][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
[ceph02][DEBUG ] log =internal log bsize=4096 blocks=11519, version=2
[ceph02][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[ceph02][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[ceph02][DEBUG ] The operation has completed successfully.
[ceph02][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/xvdb
[ceph02][INFO ] checking OSD status...
[ceph02][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph02 is now ready for osd use.
If I run create instead of prepare, the same thing happens (create does not do the trick either, since it is just prepare + activate in one step).
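To be explicit, these are the two paths I have tried (just a sketch of the invocations; as I understand it, create is supposed to be equivalent to prepare followed by activate):

# two-step path
ceph-deploy osd prepare ceph02:xvdb
ceph-deploy osd activate ceph02:xvdb1

# one-step path, fails for me in the same way
ceph-deploy osd create ceph02:xvdb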
Back on the OSD node:
[ceph@ceph02 ~]$ sudo parted /dev/xvdb
GNU Parted 2.1
Using /dev/xvdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdb: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
2 1049kB 10.7GB 10.7GB xfs ceph journal
1 10.7GB 107GB 96.6GB xfs ceph data
(parted) q
[ceph@ceph02 ~]$ sudo ls -la /var/lib/ceph/osd/
total 8
drwxr-xr-x 2 root root 4096 Jun 27 12:14 .
drwxr-xr-x 7 root root 4096 Jun 26 22:30 ..
[ceph@ceph02 ~]$ sudo ls -la /var/lib/ceph/tmp/
total 8
drwxr-xr-x 2 root root 4096 Jun 27 16:32 .
drwxr-xr-x 7 root root 4096 Jun 26 22:30 ..
-rw-r--r-- 1 root root 0 Jun 27 16:32 ceph-disk.prepare.lock
[ceph@ceph02 ~]$ sudo ceph-disk list
/dev/xvda1 other, ext4, mounted on /
/dev/xvdb :
/dev/xvdb1 ceph data, prepared, cluster ceph, journal /dev/xvdb2
/dev/xvdb2 ceph journal, for /dev/xvdb1
On the cephadm node:
[ceph@cephadm ceph-cloud]$ ceph-deploy osd activate ceph02:xvdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.5): /usr/bin/ceph-deploy osd activate ceph02:xvdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph02:/dev/xvdb1:
[ceph02][DEBUG ] connected to host: ceph02
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Scientific Linux 6.2 Carbon
[ceph_deploy.osd][DEBUG ] activating host ceph02 disk /dev/xvdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph02][INFO ] Running command: sudo ceph-disk-activate --mark-init sysvinit --mount /dev/xvdb1
[ceph02][WARNIN] got monmap epoch 2
[ceph02][WARNIN] 2014-06-27 16:35:49.948865 7f143254e7a0 -1 filestore(/var/lib/ceph/tmp/mnt.9VZHpR) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.9VZHpR/journal: (2) No such file or directory
[ceph02][WARNIN] 2014-06-27 16:35:49.948893 7f143254e7a0 -1 OSD::mkfs: ObjectStore::mkfs failed with error -2
[ceph02][WARNIN] 2014-06-27 16:35:49.948957 7f143254e7a0 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.9VZHpR: (2) No such file or directory
[ceph02][WARNIN] ERROR:ceph-disk:Failed to activate
[ceph02][WARNIN] Traceback (most recent call last):
[ceph02][WARNIN] File "/usr/sbin/ceph-disk", line 2579, in <module>
[ceph02][WARNIN] main()
[ceph02][WARNIN] File "/usr/sbin/ceph-disk", line 2557, in main
[ceph02][WARNIN] args.func(args)
[ceph02][WARNIN] File "/usr/sbin/ceph-disk", line 1910, in main_activate
[ceph02][WARNIN] init=args.mark_init,
[ceph02][WARNIN] File "/usr/sbin/ceph-disk", line 1686, in mount_activate
[ceph02][WARNIN] (osd_id, cluster) = activate(path, activate_key_template, init)
[ceph02][WARNIN] File "/usr/sbin/ceph-disk", line 1849, in activate
[ceph02][WARNIN] keyring=keyring,
[ceph02][WARNIN] File "/usr/sbin/ceph-disk", line 1484, in mkfs
[ceph02][WARNIN] '--keyring', os.path.join(path, 'keyring'),
[ceph02][WARNIN] File "/usr/sbin/ceph-disk", line 303, in command_check_call
[ceph02][WARNIN] return subprocess.check_call(arguments)
[ceph02][WARNIN] File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
[ceph02][WARNIN] raise CalledProcessError(retcode, cmd)
[ceph02][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0', '--monmap', '/var/lib/ceph/tmp/mnt.9VZHpR/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.9VZHpR', '--osd-journal', '/var/lib/ceph/tmp/mnt.9VZHpR/journal', '--osd-uuid', '5e93fa7c-b6f7-4684-981b-bf73254bd87a', '--keyring', '/var/lib/ceph/tmp/mnt.9VZHpR/keyring']' returned non-zero exit status 1
[ceph02][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/xvdb1
In the OSD log:
[ceph@ceph02 ~]$ tail -100 /var/log/ceph/ceph-osd.0.log
2014-06-27 16:35:49.859984 7f143254e7a0 0 ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74), process ceph-osd, pid 6590
2014-06-27 16:35:49.861265 7f143254e7a0 1 filestore(/var/lib/ceph/tmp/mnt.9VZHpR) mkfs in /var/lib/ceph/tmp/mnt.9VZHpR
2014-06-27 16:35:49.861319 7f143254e7a0 1 filestore(/var/lib/ceph/tmp/mnt.9VZHpR) mkfs fsid is already set to 5e93fa7c-b6f7-4684-981b-bf73254bd87a
2014-06-27 16:35:49.948589 7f143254e7a0 1 filestore(/var/lib/ceph/tmp/mnt.9VZHpR) leveldb db exists/created
2014-06-27 16:35:49.948865 7f143254e7a0 -1 filestore(/var/lib/ceph/tmp/mnt.9VZHpR) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.9VZHpR/journal: (2) No such file or directory
2014-06-27 16:35:49.948893 7f143254e7a0 -1 OSD::mkfs: ObjectStore::mkfs failed with error -2
2014-06-27 16:35:49.948957 7f143254e7a0 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.9VZHpR: (2) No such file or directory
[ceph@ceph02 ~]$
[ceph@ceph02 ~]$ sudo ceph-disk list
/dev/xvda1 other, ext4, mounted on /
/dev/xvdb :
/dev/xvdb1 ceph data, prepared, cluster ceph, osd.0, journal /dev/xvdb2
/dev/xvdb2 ceph journal, for /dev/xvdb1
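In case it helps, this is the manual check I can run on ceph02 to see whether the journal link inside the prepared data partition actually resolves (just a sketch; /mnt/osd-check is an arbitrary mount point, and my understanding is that ceph-disk creates a 'journal' symlink inside the data partition pointing at the journal device):

sudo mkdir -p /mnt/osd-check
sudo mount /dev/xvdb1 /mnt/osd-check
ls -l /mnt/osd-check/journal    # where does the symlink point?
ls -l /dev/disk/by-partuuid/    # is the symlink target actually present?
sudo blkid /dev/xvdb2           # is the journal partition visible at all?
sudo umount /mnt/osd-check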
Thanks in advance, Iban
2014-06-27 15:30 GMT+02:00 Alfredo Deza <[email protected]>:
> Can you paste the full ceph-deploy logs? There are a few reasons why this
> might be happening.
>
>
>
> On Fri, Jun 27, 2014 at 6:42 AM, Iban Cabrillo <[email protected]> wrote:
> > Hi,
> >
> > I am a little frustrated. After six attempts at deploying a test Ceph
> > cluster, I always get the same error at the OSD activation stage.
> > The version is Firefly (from the el6 repo), with 3 mons and 3 OSDs, all of
> > them Xen VMs.
> >
> > The mons come up correctly and, I do not know why, two of the OSD servers
> > eventually did too after a lot of errors, always the same one.
> >
> > #ceph-deploy --verbose osd prepare ceph02:xvdb (works fine)
> >
> > [root@ceph02 ~]# parted /dev/xvdb
> > GNU Parted 2.1
> > Using /dev/xvdb
> > Welcome to GNU Parted! Type 'help' to view a list of commands.
> > (parted) p
> > Model: Xen Virtual Block Device (xvd)
> > Disk /dev/xvdb: 107GB
> > Sector size (logical/physical): 512B/512B
> > Partition Table: gpt
> >
> > Number Start End Size File system Name Flags
> > 2 1049kB 10,7GB 10,7GB xfs ceph journal
> > 1 10,7GB 107GB 96,6GB xfs ceph data
> >
> >
> > But the activate step gives this error:
> >
> > ceph-deploy --verbose osd activate ceph02:xvdb1:/dev/xvdb2
> >
> > [ceph02][WARNIN] 2014-06-27 12:27:34.750160 7f123b33d7a0 -1 filestore(/var/lib/ceph/tmp/mnt.HacFAP) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.HacFAP/journal: (2) No such file or directory
> > [ceph02][WARNIN] 2014-06-27 12:27:34.750281 7f123b33d7a0 -1 OSD::mkfs: ObjectStore::mkfs failed with error -2
> > [ceph02][WARNIN] 2014-06-27 12:27:34.750416 7f123b33d7a0 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.HacFAP: (2) No such file or directory
> > [ceph02][WARNIN] ERROR:ceph-disk:Failed to activate
> >
> > Two of them, following the same procedure, eventually came up after
> > hitting the same error several times.
> > [ceph@ceph03 ~]$ df -h
> > Filesystem Size Used Avail Use% Mounted on
> > /dev/xvda1 5.0G 2.2G 2.6G 46% /
> > tmpfs 935M 0 935M 0% /dev/shm
> > /dev/xvdb1 90G 37M 90G 1% /var/lib/ceph/osd/ceph-6
> >
> >
> > Any ideas, please?
> >
> >
> > Bertrand Russell:
> > "The trouble with the world is that the stupid are cocksure and the
> > intelligent are full of doubt."
> >
> >
>
--
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell:
"The trouble with the world is that the stupid are cocksure and the
intelligent are full of doubt."
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com