Re: [ceph-users] mkjournal error creating journal ... : (13) Permission denied

2017-03-15 Thread Gunwoo Gim
 Thank you so much, Peter. Running 'udevadm trigger' after 'partprobe'
triggered the udev rules, and I've found out that even before the udev
ruleset triggers, the owner is already ceph:ceph.
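
 For reference, checking the ownership is just something like this; stat
prints owner:group for each device-mapper node:

~ # stat -c '%U:%G %n' /dev/dm-*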

 I've dug into ceph-disk a little more and found that
[/dev/mapper/vg--hdd1-lv--hdd1p1 (the filestore OSD)]/journal is a symbolic
link to /dev/disk/by-partuuid/120c536d-cb30-4cea-b607-dd347022a497, and that
target doesn't exist, though the device does show up in
/dev/disk/by-parttypeuuid, which is populated by
/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules.

 So I added this in /lib/udev/rules.d/60-ceph-by-parttypeuuid.rules:
# When ceph-disk prepares a filestore OSD it makes a symbolic link via
# disk/by-partuuid, but LVM2 doesn't seem to populate /dev/disk/by-partuuid.
ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_PART_ENTRY_TYPE}=="?*", \
  ENV{ID_PART_ENTRY_UUID}=="?*", \
  SYMLINK+="disk/by-partuuid/$env{ID_PART_ENTRY_UUID}"
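
 To pick up the new rule without a reboot, reloading udev and re-triggering
the block devices should be enough to make the by-partuuid link appear:

~ # udevadm control --reload-rules
~ # udevadm trigger --subsystem-match=block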
 And finally got the osds all up and in. :D

 Yeah, it wasn't actually a permission problem; the symlink target just
didn't exist.


~ # ceph-disk -v activate /dev/mapper/vg--hdd1-lv--hdd1p1
...
mount: Mounting /dev/mapper/vg--hdd1-lv--hdd1p1 on
/var/lib/ceph/tmp/mnt.ECAifr with options noatime,largeio,inode64,swalloc
command_check_call: Running command: /bin/mount -t xfs -o
noatime,largeio,inode64,swalloc -- /dev/mapper/vg--hdd1-lv--hdd1p1
/var/lib/ceph/tmp/mnt.ECAifr
mount: DIGGIN ls -al /var/lib/ceph/tmp/mnt.ECAifr
mount: DIGGIN total 36
drwxr-xr-x 3 ceph ceph  174 Mar 14 11:51 .
drwxr-xr-x 6 ceph ceph 4096 Mar 16 11:30 ..
-rw-r--r-- 1 root root  202 Mar 16 11:19 activate.monmap
-rw-r--r-- 1 ceph ceph   37 Mar 14 11:45 ceph_fsid
drwxr-xr-x 3 ceph ceph   39 Mar 14 11:51 current
-rw-r--r-- 1 ceph ceph   37 Mar 14 11:45 fsid
lrwxrwxrwx 1 ceph ceph   58 Mar 14 11:45 journal ->
/dev/disk/by-partuuid/120c536d-cb30-4cea-b607-dd347022a497
-rw-r--r-- 1 ceph ceph   37 Mar 14 11:45 journal_uuid
-rw-r--r-- 1 ceph ceph   21 Mar 14 11:45 magic
-rw-r--r-- 1 ceph ceph    4 Mar 14 11:51 store_version
-rw-r--r-- 1 ceph ceph   53 Mar 14 11:51 superblock
-rw-r--r-- 1 ceph ceph    2 Mar 14 11:51 whoami
...
ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs',
'--mkkey', '-i', u'0', '--monmap',
'/var/lib/ceph/tmp/mnt.ECAifr/activate.monmap', '--osd-data',
'/var/lib/ceph/tmp/mnt.ECAifr', '--osd-journal',
'/var/lib/ceph/tmp/mnt.ECAifr/journal', '--osd-uuid',
u'377c336b-278d-4caf-b2f5-592ac72cd9b6', '--keyring',
'/var/lib/ceph/tmp/mnt.ECAifr/keyring', '--setuser', 'ceph',
'--setgroup', 'ceph'] failed : 2017-03-16 11:30:05.238725 7f918fbc0a40 -1
filestore(/var/lib/ceph/tmp/mnt.ECAifr) mkjournal error creating journal on
/var/lib/ceph/tmp/mnt.ECAifr/journal: (13) Permission denied
2017-03-16 11:30:05.238756 7f918fbc0a40 -1 OSD::mkfs: ObjectStore::mkfs
failed with error -13
2017-03-16 11:30:05.238833 7f918fbc0a40 -1  ** ERROR: error creating empty
object store in /var/lib/ceph/tmp/mnt.ECAifr: (13) Permission denied


~ # blkid /dev/mapper/vg--*lv-*p* | grep
'120c536d-cb30-4cea-b607-dd347022a497'
/dev/mapper/vg--ssd1-lv--ssd1p1: PARTLABEL="ceph journal"
PARTUUID="120c536d-cb30-4cea-b607-dd347022a497"
~ # ls -al /dev/disk/by-id | grep dm-22
lrwxrwxrwx 1 root root   11 Mar 16 11:37 dm-name-vg--ssd1-lv--ssd1p1 ->
../../dm-22
lrwxrwxrwx 1 root root   11 Mar 16 11:37
dm-uuid-part1-LVM-n1SH1FvtfjgxJOMWN9aHurFvn2BpIsLZi89GWxA68hLmUQV6l5oyiEOPsFciRbKg
-> ../../dm-22
~ # ls -al /dev/disk/by-parttypeuuid | grep dm-22
lrwxrwxrwx 1 root root  11 Mar 16 11:37
45b0969e-9b03-4f30-b4c6-b4b80ceff106.120c536d-cb30-4cea-b607-dd347022a497
-> ../../dm-22
~ # ls -al /dev/disk/by-uuid | grep dm-22
~ # ls -al /dev/disk/by-partuuid/ | grep dm-22
~ # ls -al /dev/disk/by-path | grep dm-22
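
 By the way, blkid can print the same ID_PART_ENTRY_* keys the udev rule
matches on, which is handy for checking what udev will see for a
device-mapper node:

~ # blkid -p -o udev /dev/dm-22 | grep ID_PART_ENTRY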


Best Regards,
Nicholas Gim.

On Wed, Mar 15, 2017 at 6:46 PM Peter Maloney <
peter.malo...@brockmann-consult.de> wrote:

On 03/15/17 08:43, Gunwoo Gim wrote:

 After a reboot, none of the LVM partitions show up in /dev/mapper (nor in
/dev/dm-* or /proc/partitions), though the whole disks show up; I have to
make the hosts run 'partprobe' every time they boot so as to have the
partitions all show up.

Maybe you need this after partprobe:

udevadm trigger
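
Or, scoped to a single device instead of re-triggering everything (dm-12 is
just an example name from your listing; --action=add replays the "add"
event that runs the rules):

udevadm trigger --action=add --sysname-match=dm-12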



 I've found out that the udev rules never triggered, even after I removed
the DEVTYPE check; I verified that with a udev
line: RUN+="/bin/echo 'add /dev/$name' >> /root/log.txt"
 I've also tried chowning all the /dev/dm-* devices to ceph:disk, in vain.
Do I have to use the udev rules even if the /dev/dm-* devices are already
owned by ceph:ceph?

No, I think you just need them owned by ceph:ceph. Test that with something
like:

sudo -u ceph hexdump -C /dev/dm-${number} | head

(which reads, not writes...so not a full test, but close enough)

And also make sure the files in /var/lib/ceph/{osd,mon,...} are owned by
ceph:ceph too. Maybe you have a mix of root and ceph, which is easy to
cause by running it as root.
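
Something like this would list anything under /var/lib/ceph with the wrong
owner, and then fix it (assuming everything there should belong to
ceph:ceph):

find /var/lib/ceph \( -not -user ceph -o -not -group ceph \) -ls
chown -R ceph:ceph /var/lib/ceph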

Re: [ceph-users] mkjournal error creating journal ... : (13) Permission denied

2017-03-15 Thread Gunwoo Gim
 After a reboot, none of the LVM partitions show up in /dev/mapper (nor in
/dev/dm-* or /proc/partitions), though the whole disks show up; I have to
make the hosts run 'partprobe' every time they boot so as to have the
partitions all show up.

 I've found out that the udev rules never triggered, even after I removed
the DEVTYPE check; I verified that with a udev
line: RUN+="/bin/echo 'add /dev/$name' >> /root/log.txt"
 I've also tried chowning all the /dev/dm-* devices to ceph:disk, in vain.
Do I have to use the udev rules even if the /dev/dm-* devices are already
owned by ceph:ceph?

 Thank you very much for reading.

Best Regards,
Nicholas.

On Wed, Mar 15, 2017 at 1:06 AM Gunwoo Gim <wind8...@gmail.com> wrote:

>  Thank you very much, Peter.
>
>  I'm sorry for not clarifying the version number; it's kraken and
> 11.2.0-1xenial.
>
>  I guess the udev rules in this file are supposed to change the ownership:
> /lib/udev/rules.d/95-ceph-osd.rules
>  ...but the rules' DEVTYPE filter doesn't seem to match the prepared
> partitions on the LVs I've got on the host.
>
>  Could this be the cause of the trouble? I'd love to be informed of a
> good way to make it work with the logical volumes; should I fix the udev
> rule?
>
> ~ # cat /lib/udev/rules.d/95-ceph-osd.rules | head -n 19
> # OSD_UUID
> ACTION=="add", SUBSYSTEM=="block", \
>   ENV{DEVTYPE}=="partition", \
>   ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
>   OWNER:="ceph", GROUP:="ceph", MODE:="660", \
>   RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
> ACTION=="change", SUBSYSTEM=="block", \
>   ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
>   OWNER="ceph", GROUP="ceph", MODE="660"
>
> # JOURNAL_UUID
> ACTION=="add", SUBSYSTEM=="block", \
>   ENV{DEVTYPE}=="partition", \
>   ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
>   OWNER:="ceph", GROUP:="ceph", MODE:="660", \
>   RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
> ACTION=="change", SUBSYSTEM=="block", \
>   ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
>   OWNER="ceph", GROUP="ceph", MODE="660"
>
>
> ~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep ID_PART_ENTRY_TYPE
> E: ID_PART_ENTRY_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106
>
> ~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep DEVTYPE
> E: DEVTYPE=disk
>
>
> Best Regards,
> Nicholas.
>
> On Tue, Mar 14, 2017 at 6:37 PM Peter Maloney <
> peter.malo...@brockmann-consult.de> wrote:
>
> Is this Jewel? Do you have some udev rules or anything that changes the
> owner on the journal device (e.g. /dev/sdx or /dev/nvme0n1p1) to ceph:ceph?
>
>
> On 03/14/17 08:53, Gunwoo Gim wrote:
>
> I'd love some help with this; it'd be much appreciated.
>
> Best Wishes,
> Nicholas.
>
> On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim <wind8...@gmail.com> wrote:
>
>  Hello, I'm trying to deploy a ceph filestore cluster on LVM using the
> ceph-ansible playbook. I've been fixing a couple of code blocks in
> ceph-ansible and ceph-disk/main.py and have made some progress, but now
> I'm stuck again; 'ceph-disk activate osd' fails.
>
>  Please let me just show you the error message and the output of 'ls':
>
> ~ # ceph-disk -v activate /dev/mapper/vg--hdd1-lv--hdd1p1
>
> [...]
>
> ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs',
> '--mkkey', '-i', u'1', '--monmap',
> '/var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap', '--osd-data',
> '/var/lib/ceph/tmp/mnt.cJDc7I', '--osd-journal',
> '/var/lib/ceph/tmp/mnt.cJDc7I/journal', '--osd-uuid',
> u'5097be3f-349e-480d-8b0d-d68c13ae2f72', '--keyring',
> '/var/lib/ceph/tmp/mnt.cJDc7I/keyring', '--setuser', 'ceph', '--setgroup',
> 'ceph'] failed : 2017-03-14 16:01:10.051537 7fdc9a025a40 -1
> filestore(/var/lib/ceph/tmp/mnt.cJDc7I) mkjournal error creating journal on
> /var/lib/ceph/tmp/mnt.cJDc7I/journal: (13) Permission denied
> 2017-03-14 16:01:10.051565 7fdc9a025a40 -1 OSD::mkfs: ObjectStore::mkfs
> failed with error -13
> 2017-03-14 16:01:10.051624 7fdc9a025a40 -1  ** ERROR: error creating empty
> object store in /var/lib/ceph/tmp/mnt.cJDc7I: (13) Permission denied
>
> ~ # ls -al /var/lib/ceph/tmp
> total 8
> drwxr-xr-x  2 ceph ceph 4096 Mar 14 16:01 .
> drwxr-xr-x 11 ceph ceph 4096 Mar 14 11:12 ..
> -rwxr-xr-x  1 root root    0 Mar 14 11:12 ceph-disk.activate.lock
> -rwxr-xr-x  1 root root    0 Mar 14 11:44 ceph-disk.prepare.lock

Re: [ceph-users] mkjournal error creating journal ... : (13) Permission denied

2017-03-14 Thread Gunwoo Gim
 Thank you very much, Peter.

 I'm sorry for not clarifying the version number; it's kraken and
11.2.0-1xenial.

 I guess the udev rules in this file are supposed to change the ownership:
/lib/udev/rules.d/95-ceph-osd.rules
 ...but the rules' DEVTYPE filter doesn't seem to match the prepared
partitions on the LVs I've got on the host.

 Could this be the cause of the trouble? I'd love to be informed of a good
way to make it work with the logical volumes; should I fix the udev rule?

~ # cat /lib/udev/rules.d/95-ceph-osd.rules | head -n 19
# OSD_UUID
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660", \
  RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
ACTION=="change", SUBSYSTEM=="block", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  OWNER="ceph", GROUP="ceph", MODE="660"

# JOURNAL_UUID
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660", \
  RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
ACTION=="change", SUBSYSTEM=="block", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER="ceph", GROUP="ceph", MODE="660"


~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep ID_PART_ENTRY_TYPE
E: ID_PART_ENTRY_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106

~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep DEVTYPE
E: DEVTYPE=disk
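
 If it helps: udevadm can simulate the event and print which rules would
run; since DEVTYPE is 'disk' here, I suppose the ACTION=="add" rules above,
which require DEVTYPE=="partition", never fire for these LVs:

~ # udevadm test $(udevadm info -q path -n /dev/mapper/vg--ssd1-lv--ssd1p1) 2>&1 | grep -i ceph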


Best Regards,
Nicholas.

On Tue, Mar 14, 2017 at 6:37 PM Peter Maloney <
peter.malo...@brockmann-consult.de> wrote:

> Is this Jewel? Do you have some udev rules or anything that changes the
> owner on the journal device (e.g. /dev/sdx or /dev/nvme0n1p1) to ceph:ceph?
>
>
> On 03/14/17 08:53, Gunwoo Gim wrote:
>
> I'd love some help with this; it'd be much appreciated.
>
> Best Wishes,
> Nicholas.
>
> On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim <wind8...@gmail.com> wrote:
>
>  Hello, I'm trying to deploy a ceph filestore cluster on LVM using the
> ceph-ansible playbook. I've been fixing a couple of code blocks in
> ceph-ansible and ceph-disk/main.py and have made some progress, but now
> I'm stuck again; 'ceph-disk activate osd' fails.
>
>  Please let me just show you the error message and the output of 'ls':
>
> ~ # ceph-disk -v activate /dev/mapper/vg--hdd1-lv--hdd1p1
>
> [...]
>
> ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs',
> '--mkkey', '-i', u'1', '--monmap',
> '/var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap', '--osd-data',
> '/var/lib/ceph/tmp/mnt.cJDc7I', '--osd-journal',
> '/var/lib/ceph/tmp/mnt.cJDc7I/journal', '--osd-uuid',
> u'5097be3f-349e-480d-8b0d-d68c13ae2f72', '--keyring',
> '/var/lib/ceph/tmp/mnt.cJDc7I/keyring', '--setuser', 'ceph', '--setgroup',
> 'ceph'] failed : 2017-03-14 16:01:10.051537 7fdc9a025a40 -1
> filestore(/var/lib/ceph/tmp/mnt.cJDc7I) mkjournal error creating journal on
> /var/lib/ceph/tmp/mnt.cJDc7I/journal: (13) Permission denied
> 2017-03-14 16:01:10.051565 7fdc9a025a40 -1 OSD::mkfs: ObjectStore::mkfs
> failed with error -13
> 2017-03-14 16:01:10.051624 7fdc9a025a40 -1  ** ERROR: error creating empty
> object store in /var/lib/ceph/tmp/mnt.cJDc7I: (13) Permission denied
>
> ~ # ls -al /var/lib/ceph/tmp
> total 8
> drwxr-xr-x  2 ceph ceph 4096 Mar 14 16:01 .
> drwxr-xr-x 11 ceph ceph 4096 Mar 14 11:12 ..
> -rwxr-xr-x  1 root root    0 Mar 14 11:12 ceph-disk.activate.lock
> -rwxr-xr-x  1 root root    0 Mar 14 11:44 ceph-disk.prepare.lock
>
>
> ~ # ls -l /dev/mapper/vg-*-lv-*p*
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd1-lv--hdd1p1 ->
> ../dm-12
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd2-lv--hdd2p1 ->
> ../dm-14
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd3-lv--hdd3p1 ->
> ../dm-16
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd4-lv--hdd4p1 ->
> ../dm-18
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd5-lv--hdd5p1 ->
> ../dm-20
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd6-lv--hdd6p1 ->
> ../dm-22
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd7-lv--hdd7p1 ->
> ../dm-24
> lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd8-lv--hdd8p1 ->
> ../dm-26
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--hdd9-lv--hdd9p1 ->
> ../dm-28
> lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mappe

Re: [ceph-users] mkjournal error creating journal ... : (13) Permission denied

2017-03-14 Thread Gunwoo Gim
I'd love some help with this; it'd be much appreciated.

Best Wishes,
Nicholas.

On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim <wind8...@gmail.com> wrote:

>  Hello, I'm trying to deploy a ceph filestore cluster on LVM using the
> ceph-ansible playbook. I've been fixing a couple of code blocks in
> ceph-ansible and ceph-disk/main.py and have made some progress, but now
> I'm stuck again; 'ceph-disk activate osd' fails.
>
>  Please let me just show you the error message and the output of 'ls':
>
> ~ # ceph-disk -v activate /dev/mapper/vg--hdd1-lv--hdd1p1
> main_activate: path = /dev/mapper/vg--hdd1-lv--hdd1p1
> get_dm_uuid: get_dm_uuid /dev/mapper/vg--hdd1-lv--hdd1p1 uuid path is
> /sys/dev/block/252:12/dm/uuid
> get_dm_uuid: get_dm_uuid /dev/mapper/vg--hdd1-lv--hdd1p1 uuid is
> part1-LVM-ETn7wXOmnesc9MNpleTYzP29jjOkp19J12ELrQez43LFPfdFc1dItn8EFF299401
>
> command: Running command: /sbin/blkid -p -s TYPE -o value --
> /dev/mapper/vg--hdd1-lv--hdd1p1
> command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
> --lookup osd_mount_options_xfs
> mount: Mounting /dev/mapper/vg--hdd1-lv--hdd1p1 on
> /var/lib/ceph/tmp/mnt.cJDc7I with options noatime,largeio,inode64,swalloc
> command_check_call: Running command: /bin/mount -t xfs -o
> noatime,largeio,inode64,swalloc -- /dev/mapper/vg--hdd1-lv--hdd1p1
> /var/lib/ceph/tmp/mnt.cJDc7I
> activate: Cluster uuid is 0bc0ea6d-ed8a-4ef0-9e82-ba6454a7214e
> command: Running command: /usr/bin/ceph-osd --cluster=ceph
> --show-config-value=fsid
> activate: Cluster name is ceph
> activate: OSD uuid is 5097be3f-349e-480d-8b0d-d68c13ae2f72
> activate: OSD id is 1
> activate: Initializing OSD...
> command_check_call: Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon
> getmap -o /var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap
> got monmap epoch 2
> command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph
> --mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap
> --osd-data /var/lib/ceph/tmp/mnt.cJDc7I --osd-journal
> /var/lib/ceph/tmp/mnt.cJDc7I/journal --osd-uuid
> 5097be3f-349e-480d-8b0d-d68c13ae2f72 --keyring
> /var/lib/ceph/tmp/mnt.cJDc7I/keyring --setuser ceph --setgroup ceph
> mount_activate: Failed to activate
> unmount: Unmounting /var/lib/ceph/tmp/mnt.cJDc7I
> command_check_call: Running command: /bin/umount --
> /var/lib/ceph/tmp/mnt.cJDc7I
> Traceback (most recent call last):
>   File "/usr/sbin/ceph-disk", line 9, in <module>
> load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5251, in
> run
> main(sys.argv[1:])
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5202, in
> main
> args.func(args)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3553, in
> main_activate
> reactivate=args.reactivate,
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3310, in
> mount_activate
> (osd_id, cluster) = activate(path, activate_key_template, init)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3486, in
> activate
> keyring=keyring,
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2948, in
> mkfs
> '--setgroup', get_ceph_group(),
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2895, in
> ceph_osd_mkfs
> raise Error('%s failed : %s' % (str(arguments), error))
> ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs',
> '--mkkey', '-i', u'1', '--monmap',
> '/var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap', '--osd-data',
> '/var/lib/ceph/tmp/mnt.cJDc7I', '--osd-journal',
> '/var/lib/ceph/tmp/mnt.cJDc7I/journal', '--osd-uuid',
> u'5097be3f-349e-480d-8b0d-d68c13ae2f72', '--keyring',
> '/var/lib/ceph/tmp/mnt.cJDc7I/keyring', '--setuser', 'ceph', '--setgroup',
> 'ceph'] failed : 2017-03-14 16:01:10.051537 7fdc9a025a40 -1
> filestore(/var/lib/ceph/tmp/mnt.cJDc7I) mkjournal error creating journal on
> /var/lib/ceph/tmp/mnt.cJDc7I/journal: (13) Permission denied
> 2017-03-14 16:01:10.051565 7fdc9a025a40 -1 OSD::mkfs: ObjectStore::mkfs
> failed with error -13
> 2017-03-14 16:01:10.051624 7fdc9a025a40 -1  ** ERROR: error creating empty
> object store in /var/lib/ceph/tmp/mnt.cJDc7I: (13) Permission denied
>
> ~ # ls -al /var/lib/ceph/tmp
> total 8
> drwxr-xr-x  2 ceph ceph 4096 Mar 14 16:01 .
> drwxr-xr-x 11 ceph ceph 4096 Mar 14 11:12 ..
> -rwxr-xr-x  1 root root    0 Mar 14 11:12 ceph-disk.activate.lock
> -rwxr-xr-x  1 root root    0 Mar 14 11:44 ceph-disk.prepare.lock
>
> ~ # ls -l /dev/mapper/vg-*-lv-*p*

[ceph-users] mkjournal error creating journal ... : (13) Permission denied

2017-03-14 Thread Gunwoo Gim
 Hello, I'm trying to deploy a ceph filestore cluster on LVM using the
ceph-ansible playbook. I've been fixing a couple of code blocks in
ceph-ansible and ceph-disk/main.py and have made some progress, but now I'm
stuck again; 'ceph-disk activate osd' fails.

 Please let me just show you the error message and the output of 'ls':

~ # ceph-disk -v activate /dev/mapper/vg--hdd1-lv--hdd1p1
main_activate: path = /dev/mapper/vg--hdd1-lv--hdd1p1
get_dm_uuid: get_dm_uuid /dev/mapper/vg--hdd1-lv--hdd1p1 uuid path is
/sys/dev/block/252:12/dm/uuid
get_dm_uuid: get_dm_uuid /dev/mapper/vg--hdd1-lv--hdd1p1 uuid is
part1-LVM-ETn7wXOmnesc9MNpleTYzP29jjOkp19J12ELrQez43LFPfdFc1dItn8EFF299401

command: Running command: /sbin/blkid -p -s TYPE -o value --
/dev/mapper/vg--hdd1-lv--hdd1p1
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd.
--lookup osd_mount_options_xfs
mount: Mounting /dev/mapper/vg--hdd1-lv--hdd1p1 on
/var/lib/ceph/tmp/mnt.cJDc7I with options noatime,largeio,inode64,swalloc
command_check_call: Running command: /bin/mount -t xfs -o
noatime,largeio,inode64,swalloc -- /dev/mapper/vg--hdd1-lv--hdd1p1
/var/lib/ceph/tmp/mnt.cJDc7I
activate: Cluster uuid is 0bc0ea6d-ed8a-4ef0-9e82-ba6454a7214e
command: Running command: /usr/bin/ceph-osd --cluster=ceph
--show-config-value=fsid
activate: Cluster name is ceph
activate: OSD uuid is 5097be3f-349e-480d-8b0d-d68c13ae2f72
activate: OSD id is 1
activate: Initializing OSD...
command_check_call: Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon
getmap -o /var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap
got monmap epoch 2
command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph
--mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap
--osd-data /var/lib/ceph/tmp/mnt.cJDc7I --osd-journal
/var/lib/ceph/tmp/mnt.cJDc7I/journal --osd-uuid
5097be3f-349e-480d-8b0d-d68c13ae2f72 --keyring
/var/lib/ceph/tmp/mnt.cJDc7I/keyring --setuser ceph --setgroup ceph
mount_activate: Failed to activate
unmount: Unmounting /var/lib/ceph/tmp/mnt.cJDc7I
command_check_call: Running command: /bin/umount --
/var/lib/ceph/tmp/mnt.cJDc7I
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5251, in
run
main(sys.argv[1:])
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5202, in
main
args.func(args)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3553, in
main_activate
reactivate=args.reactivate,
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3310, in
mount_activate
(osd_id, cluster) = activate(path, activate_key_template, init)
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3486, in
activate
keyring=keyring,
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2948, in
mkfs
'--setgroup', get_ceph_group(),
  File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2895, in
ceph_osd_mkfs
raise Error('%s failed : %s' % (str(arguments), error))
ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs',
'--mkkey', '-i', u'1', '--monmap',
'/var/lib/ceph/tmp/mnt.cJDc7I/activate.monmap', '--osd-data',
'/var/lib/ceph/tmp/mnt.cJDc7I', '--osd-journal',
'/var/lib/ceph/tmp/mnt.cJDc7I/journal', '--osd-uuid',
u'5097be3f-349e-480d-8b0d-d68c13ae2f72', '--keyring',
'/var/lib/ceph/tmp/mnt.cJDc7I/keyring', '--setuser', 'ceph', '--setgroup',
'ceph'] failed : 2017-03-14 16:01:10.051537 7fdc9a025a40 -1
filestore(/var/lib/ceph/tmp/mnt.cJDc7I) mkjournal error creating journal on
/var/lib/ceph/tmp/mnt.cJDc7I/journal: (13) Permission denied
2017-03-14 16:01:10.051565 7fdc9a025a40 -1 OSD::mkfs: ObjectStore::mkfs
failed with error -13
2017-03-14 16:01:10.051624 7fdc9a025a40 -1  ** ERROR: error creating empty
object store in /var/lib/ceph/tmp/mnt.cJDc7I: (13) Permission denied

~ # ls -al /var/lib/ceph/tmp
total 8
drwxr-xr-x  2 ceph ceph 4096 Mar 14 16:01 .
drwxr-xr-x 11 ceph ceph 4096 Mar 14 11:12 ..
-rwxr-xr-x  1 root root    0 Mar 14 11:12 ceph-disk.activate.lock
-rwxr-xr-x  1 root root    0 Mar 14 11:44 ceph-disk.prepare.lock

~ # ls -l /dev/mapper/vg-*-lv-*p*
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd1-lv--hdd1p1 ->
../dm-12
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd2-lv--hdd2p1 ->
../dm-14
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd3-lv--hdd3p1 ->
../dm-16
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd4-lv--hdd4p1 ->
../dm-18
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd5-lv--hdd5p1 ->
../dm-20
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd6-lv--hdd6p1 ->
../dm-22
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd7-lv--hdd7p1 ->
../dm-24
lrwxrwxrwx 1 root root 8 Mar 14 13:46 /dev/mapper/vg--hdd8-lv--hdd8p1 ->
../dm-26
lrwxrwxrwx 1 root root 8 Mar 14 13:47 /dev/mapper/vg--hdd9-lv--hdd9p1 ->
../dm-28