Hi Cephers,

I have multiple RBDs to map and mount, and the boot hangs forever
while the rbdmap.service script runs. This was my mount entry in
/etc/fstab:

/dev/rbd/ptxdev/WORK_CEPH_BLA /ptx/work/ceph/bla xfs noauto,x-systemd.automount,defaults,noatime,_netdev,logbsize=256k,nofail  0  0

(the mount is activated at boot time by an NFS server that exports this
filesystem)
And I have a lot of these RBD mounts. Via systemd's debug-shell.service
I found out that the boot hangs in rbdmap.service. I added a 'set -x'
to /usr/bin/rbdmap and it showed me that it hangs at

  mount --fake /dev/rbd/$DEV >>/dev/null 2>&1

Why is this called there? Why is this done one rbd at a time? 
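For context, the loop looks roughly like this (paraphrased from the
set -x output, not the literal /usr/bin/rbdmap): every entry of
/etc/ceph/rbdmap is handled serially, map followed by the fake mount.

  while read -r DEV PARAMS; do
      # skip blank lines and comments
      case "$DEV" in ''|\#*) continue ;; esac
      # the real script also turns $PARAMS (id=..., keyring=...) into rbd options
      rbd map "$DEV"
      # this is the call that never returns for my automount/_netdev entries
      mount --fake /dev/rbd/$DEV >>/dev/null 2>&1
  done < /etc/ceph/rbdmap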

As there was no mention of it in the documentation about mounting
manually, I masked rbdmap.service and created an rbdmap@.service
template instead:

<file /etc/systemd/system/rbdmap@.service>
[Unit]
Description=Map RBD device %I

After=network-online.target local-fs.target
Wants=network-online.target local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/rbd map %I --id dev --keyring /etc/ceph/ceph.client.dev.keyring
ExecStop=/usr/bin/rbd unmap /dev/rbd/%I
</file>

and added the option
  x-systemd.requires=rbdmap@ptxdev-WORK_CEPH_BLA.service
to my fstab entry.
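
So the complete line now reads (the same entry as above, just with the
extra requires option):

  /dev/rbd/ptxdev/WORK_CEPH_BLA /ptx/work/ceph/bla xfs noauto,x-systemd.automount,defaults,noatime,_netdev,logbsize=256k,nofail,x-systemd.requires=rbdmap@ptxdev-WORK_CEPH_BLA.service  0  0

and the switch itself was roughly:

  systemctl mask rbdmap.service                         # keep the stock script out of the boot
  systemctl daemon-reload                               # pick up the new template
  systemctl start rbdmap@ptxdev-WORK_CEPH_BLA.service   # test one mapping by hand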

Now systemd is able to finish the boot process, but this is clearly
only a workaround, as there is now duplicated configuration data in the
service template and in /etc/ceph/rbdmap.

To do this right, there should be a systemd.generator(7) that reads
/etc/ceph/rbdmap at boot time and generates the corresponding
rbdmap@ptxdev-WORK_CEPH_BLA.service unit files.
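
Something like this is what I have in mind (untested sketch; the way
the second column is turned into rbd options is just my guess at
mirroring what rbdmap itself does):

<file /etc/systemd/system-generators/rbdmap-generator>
#!/bin/sh
# Sketch: generate one rbdmap@<pool>-<image>.service per entry in
# /etc/ceph/rbdmap, so the x-systemd.requires= option in fstab can pull
# it in without duplicating the id/keyring settings in a template.
# systemd calls generators with three output directories; generated
# units go into the first ("normal" priority) one.
GENDIR="$1"
RBDMAPFILE=/etc/ceph/rbdmap

[ -n "$GENDIR" ] && [ -r "$RBDMAPFILE" ] || exit 0

while read -r SPEC PARAMS; do
    # skip blank lines and comments, like rbdmap does
    case "$SPEC" in ''|\#*) continue ;; esac

    # "id=dev,keyring=/etc/ceph/..." -> "--id dev --keyring /etc/ceph/..."
    OPTS=""
    if [ -n "$PARAMS" ]; then
        OPTS=$(printf '%s\n' "$PARAMS" | tr ',' '\n' \
               | sed -e 's/^/--/' -e 's/=/ /' | tr '\n' ' ')
    fi

    # "ptxdev/WORK_CEPH_BLA" -> "ptxdev-WORK_CEPH_BLA", matching the
    # instance name used in x-systemd.requires=
    UNIT="rbdmap@$(systemd-escape "$SPEC").service"

    cat > "$GENDIR/$UNIT" <<EOF
[Unit]
Description=Map RBD device $SPEC
After=network-online.target local-fs.target
Wants=network-online.target local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/rbd map $SPEC $OPTS
ExecStop=/usr/bin/rbd unmap /dev/rbd/$SPEC
EOF
done < "$RBDMAPFILE"
</file>

(the script has to be executable, and generators run very early in
boot, so it can only rely on what is on the root filesystem; /etc/ceph
should be fine there)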

Is this the correct way?

have a nice weekend
Björn Lässig