Running the following after the prepare step and a reboot "solves" this problem:

[root@osd01 ~]# partx -v -a /dev/mapper/mpatha
partition: none, disk: /dev/mapper/mpatha, lower: 0, upper: 0
/dev/mapper/mpatha: partition table type 'gpt' detected
partx: /dev/mapper/mpatha: adding partition #1 failed: Invalid argument
partx: /dev/mapper/mpatha: adding partition #2 failed: Invalid argument
partx: /dev/mapper/mpatha: error adding partitions 1-2
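For completeness, this is roughly how I verify that the kernel/device-mapper
actually picked up the partitions afterwards (just a sketch with my device
names; kpartx from multipath-tools would be the alternative for mapped devices):

[root@osd01 ~]# ls -l /dev/mapper/mpatha*    # mpatha1 and mpatha2 should exist now
[root@osd01 ~]# lsblk /dev/mapper/mpatha     # partitions listed below the multipath device
[root@osd01 ~]# kpartx -l /dev/mapper/mpatha # lists the partition mappings device-mapper would create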


The OSD is then activated and comes up and in. It seems the PARTUUIDs were
not correctly imported into the kernel.
Even though partx reports that partitions 1-2 could not be added, they do
show up afterwards (this disk has only two partitions).
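In case someone wants to reproduce this: to check whether the PARTUUIDs made
it into the kernel/udev, I look at something like the following (a sketch;
whether ID_PART_ENTRY_UUID gets populated for the dm mappings depends on the
udev rules in place):

[root@osd01 ~]# blkid /dev/mapper/mpatha1           # should print a PARTUUID for the partition
[root@osd01 ~]# ls -l /dev/disk/by-partuuid/        # udev symlinks that ceph-disk activation relies on
[root@osd01 ~]# udevadm info /dev/mapper/mpatha1 | grep ID_PART_ENTRY_UUID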

Should I open a bug?

Kind regards,
Kevin

2018-02-04 19:05 GMT+01:00 Kevin Olbrich <k...@sv01.de>:

> I also noticed there are no folders under /var/lib/ceph/osd/ ...
>
>
> Kind regards,
> Kevin Olbrich.
>
> 2018-02-04 19:01 GMT+01:00 Kevin Olbrich <k...@sv01.de>:
>
>> Hi!
>>
>> Currently I am trying to re-deploy a cluster from filestore to bluestore.
>> I zapped all disks (multiple times) but fail when adding a disk array:
>>
>> Prepare:
>>
>>> ceph-deploy --overwrite-conf osd prepare --bluestore --block-wal /dev/sdb --block-db /dev/sdb osd01.cloud.example.local:/dev/mapper/mpatha
>>
>>
>> Activate:
>>
>>> ceph-deploy --overwrite-conf osd activate osd01.cloud.example.local:/dev/mapper/mpatha1
>>
>>
>> Error on activate:
>>
>>> [osd01.cloud.example.local][WARNIN] got monmap epoch 2
>>> [osd01.cloud.example.local][WARNIN] command_check_call: Running
>>> command: /usr/bin/ceph-osd --cluster ceph --mkfs -i 0 --monmap
>>> /var/lib/ceph/tmp/mnt.pAfCl4/activate.monmap --osd-data
>>> /var/lib/ceph/tmp/mnt.pAfCl4 --osd-uuid d5b6ab85-9437-4cb2-a34d-16a29067ba27
>>> --setuser ceph --setgroup ceph
>>>
>>> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900368
>>> 7f00d6359d00 -1 bluestore(/var/lib/ceph/tmp/mnt.pAfCl4/block)
>>> _read_bdev_label failed to open /var/lib/ceph/tmp/mnt.pAfCl4/block: (2) No
>>> such file or directory
>>> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900405
>>> 7f00d6359d00 -1 bluestore(/var/lib/ceph/tmp/mnt.pAfCl4/block)
>>> _read_bdev_label failed to open /var/lib/ceph/tmp/mnt.pAfCl4/block: (2) No
>>> such file or directory
>>> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900462
>>> 7f00d6359d00 -1 bluestore(/var/lib/ceph/tmp/mnt.pAfCl4)
>>> _setup_block_symlink_or_file failed to open block file: (13) Permission
>>> denied
>>> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900480
>>> 7f00d6359d00 -1 bluestore(/var/lib/ceph/tmp/mnt.pAfCl4) mkfs failed,
>>> (13) Permission denied
>>> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900485
>>> 7f00d6359d00 -1 OSD::mkfs: ObjectStore::mkfs failed with error (13)
>>> Permission denied
>>> [osd01.cloud.example.local][WARNIN] 2018-02-04 18:52:43.900662
>>> 7f00d6359d00 -1  ** ERROR: error creating empty object store in
>>> /var/lib/ceph/tmp/mnt.pAfCl4: (13) Permission denied
>>> [osd01.cloud.example.local][WARNIN] mount_activate: Failed to activate
>>> [osd01.cloud.example.local][WARNIN] unmount: Unmounting
>>> /var/lib/ceph/tmp/mnt.pAfCl4
>>>
>>
>>
>> The same problem occurs on 2x 14 disks; I was unable to get this cluster up.
>>
>> Any ideas?
>>
>> Kind regards,
>> Kevin
>>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
