I'm testing CephFS. I have 3 nodes, each with 2 HDDs and one SSD.
CephFS is set up to put metadata on the SSDs and data on the HDDs.
With both pools set to size = 3, untarring a 19 GB tar file containing 90K
files takes 4.5 minutes.
With size = 2, it takes 40 seconds. (The tar file itself sits on an in-memory
file system, so reading the source isn't the bottleneck.)
Is that expected?
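
For reference, the size changes between runs were just the standard pool
setting, something like:

ceph osd pool set cephfs.main.data size 3
ceph osd pool set cephfs.main.meta size 3

(and the same with "size 2" for the faster run).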
This is the current version of Ceph, deployed with cephadm. The only
non-default setup is allocating metadata to the SSDs and data to the HDDs:
data_devices:
  rotational: 1
db_devices:
  rotational: 0
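
(That fragment is from the OSD service spec; the whole spec is roughly the
following, where the service_id and host_pattern are placeholders, not the
actual values used here:)

service_type: osd
service_id: default_osd_spec      # placeholder name
placement:
  host_pattern: '*'               # placeholder: apply on all hosts
spec:
  data_devices:
    rotational: 1                 # spinning disks become OSD data devices
  db_devices:
    rotational: 0                 # SSDs hold the BlueStore DB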
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool set cephfs.main.data crush_rule replicated_hdd
ceph osd pool set cephfs.main.meta crush_rule replicated_ssd
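
To confirm the rules actually took effect on the pools:

ceph osd pool get cephfs.main.data crush_rule    # should show replicated_hdd
ceph osd pool get cephfs.main.meta crush_rule    # should show replicated_ssd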