This patch series (V2) adds CephFS to our list of storages. You can mount the
storage through either the kernel or the fuse client. For now the plugin
allows all content types except disk images and container rootfs (see the
changes in V2 below), but this still needs further testing.

Config and keyfile locations are the same as in the RBD plugin.

Example entry:
cephfs: cephfs0
        monhost 192.168.1.2:6789
        path /mnt/pve/cephfs0
        content iso,backup,vztmpl
        subdir /blubb
        fuse 0
        username admin

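For reference, the two mount modes roughly correspond to the commands below
(a sketch only; the exact options and the secret file path are my assumptions,
following the RBD plugin convention, and not taken from the patch):

# kernel client (fuse 0)
mount -t ceph 192.168.1.2:6789:/blubb /mnt/pve/cephfs0 \
    -o name=admin,secretfile=/etc/pve/priv/ceph/cephfs0.secret

# fuse client (fuse 1)
ceph-fuse -n client.admin -m 192.168.1.2:6789 -r /blubb /mnt/pve/cephfs0
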
Comments and tests are very welcome. ;)

Changes in V2:
After some testing, I decided to remove the image/rootfs option from the
plugin in this version.
Also, CephFS does not report sparse files correctly through the stat() system
call, as it does not track which parts of a file are actually written. This
will confuse users looking at their image files and directories with tools
such as du.
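
A quick way to see the effect described above (the file name is just an
example):
# truncate -s 10G /mnt/pve/cephfs0/sparse-test.raw
# du -h /mnt/pve/cephfs0/sparse-test.raw
On a local filesystem du would report close to zero for this sparse file,
while on cephfs it shows the full 10G.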

My test results:
### directly on cephfs
# fio --filename=/mnt/pve/cephfs0/testfile --size=10G --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=cephfs-test
  WRITE: io=273200KB, aggrb=4553KB/s, minb=4553KB/s, maxb=4553KB/s, mint=60001msec, maxt=60001msec

### /dev/loop0 -> raw image on cephfs
# fio --filename=/dev/loop0 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 \
      --iodepth=1 --runtime=60 --time_based --group_reporting --name=cephfs-test
  WRITE: io=258644KB, aggrb=4310KB/s, minb=4310KB/s, maxb=4310KB/s, mint=60001msec, maxt=60001msec
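
(Assumed setup for this test, not shown above; file and device names are
placeholders:)
# truncate -s 10G /mnt/pve/cephfs0/loop-test.raw
# losetup /dev/loop0 /mnt/pve/cephfs0/loop-test.raw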

### /dev/rbd0 -> rbd image mapped 
# fio --filename=/dev/rbd0 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 \
      --iodepth=1 --runtime=60 --time_based --group_reporting --name=cephfs-test
  WRITE: io=282064KB, aggrb=4700KB/s, minb=4700KB/s, maxb=4700KB/s, mint=60001msec, maxt=60001msec
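
(Assumed setup, not shown above; pool and image name are placeholders:)
# rbd create rbd/test --size 10G
# rbd map rbd/test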

### ext4 on mapped rbd image
# fio --ioengine=libaio --filename=/opt/testfile --size=10G --direct=1 --sync=1 \
      --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=fio
  WRITE: io=122608KB, aggrb=2043KB/s, minb=2043KB/s, maxb=2043KB/s, mint=60002msec, maxt=60002msec
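
(Assumed setup, not shown above; the mount point is taken from the fio file
name:)
# mkfs.ext4 /dev/rbd0
# mount /dev/rbd0 /opt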

### timed cp -r of the linux kernel source from tmpfs
# -> cephfs
real    0m23.522s
user    0m0.744s
sys 0m3.292s

# -> /root/ (SSD MX100)
real    0m3.318s
user    0m0.502s
sys 0m2.770s

# -> rbd mapped ext4 (SM863a)
real    0m3.313s
user    0m0.441s
sys 0m2.826s
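
For reference, the exact cp invocation is not shown; it was presumably along
the lines of (source path is a placeholder):
# time cp -r /tmp/linux-src /mnt/pve/cephfs0/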


Alwin Antreich (1):
  Cephfs storage plugin

 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/API2/Storage/Status.pm  |   2 +-
 PVE/Storage.pm              |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 ++++++++++++++++++++++++++++++++++++++++++++
 PVE/Storage/Makefile        |   2 +-
 PVE/Storage/Plugin.pm       |   1 +
 debian/control              |   2 +
 7 files changed, 270 insertions(+), 3 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

Alwin Antreich (1):
  Cephfs storage wizard

 www/manager6/Makefile              |  1 +
 www/manager6/Utils.js              | 10 ++++++
 www/manager6/storage/CephFSEdit.js | 71 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 82 insertions(+)
 create mode 100644 www/manager6/storage/CephFSEdit.js

-- 
2.11.0

