>> Reverting back to filestore is quite a lot of work and time again. 
 >> Maybe see first if with some tuning of the vms you can get better 
results?
 >
 >None of the VMs are particularly disk-intensive.  There are two users 
accessing the system over a WiFi network for email, and some HTTP/SMTP 
traffic coming in via an ADSL2 Internet connection.
 >
 >If Bluestore can't manage this, then I'd consider it totally worthless 
in any enterprise installation -- so clearly something is wrong.


I have a cluster mainly intended for backups to CephFS: 4 nodes, SATA 
disks, mostly 5400 rpm. Because the cluster was doing nothing, I 
decided to put VMs on it. I am running 15 VMs on the HDD pool without 
problems, and am going to move more onto it. One of them is a macOS 
machine; I once ran a fio test in it and it gave me 917 IOPS at 4k 
random reads. (Technically not possible, I would say; I have mostly 
default configurations in libvirt.)
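
For reference, a 4k random-read test like the one above can be run with 
a fio job file along these lines. The file name, size, run time and 
queue depth here are illustrative assumptions, not what was actually 
used; also note that libaio is Linux-only, so a macOS guest would need 
ioengine=posixaio instead:

```ini
; 4k random-read fio job, similar to the test described above
; (filename, size, runtime and iodepth are assumptions)
[global]
ioengine=libaio
direct=1
time_based
runtime=60

[randread-4k]
rw=randread
bs=4k
iodepth=16
size=1g
filename=/tmp/fio-test
```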


 >
 >> What you also can try is for io intensive vm's add an ssd pool?
 >
 >How well does that work in a cluster with 0 SSD-based OSDs?
 >
 >For 3 of the nodes, the cases I'm using for the servers can fit two 
2.5"
 >drives.  I have one 120GB SSD for the OS, that leaves one space spare 
for the OSD.  


I think this could be your bottleneck. I have 31 drives, so the load is 
(hopefully) spread across all 31. If you have only 3 drives, you have 
3x60 IOPS to share amongst your VMs. 
I get the impression that Ceph development is not really interested in 
setups that differ much from the advised standards. I once made an 
attempt to get things working better on 1Gb adapters [0].
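
The back-of-the-envelope maths above can be sketched as follows. The 
~60 random IOPS per 5400 rpm disk and the 3x replication factor are 
assumptions for illustration, not measured figures:

```python
# Rough ceiling on client write IOPS for a small HDD-only Ceph cluster.
# Assumed figures: ~60 random IOPS per 5400rpm spindle, 3x replication
# (each client write lands on 3 OSDs). Values are illustrative only.

def client_iops_ceiling(num_osds, iops_per_disk=60, replicas=3):
    """Aggregate raw IOPS divided by the write amplification of replication."""
    return num_osds * iops_per_disk // replicas

# Compare a 3-OSD cluster with a 31-OSD one, shared among 15 VMs:
for osds in (3, 31):
    total = client_iops_ceiling(osds)
    print(f"{osds} OSDs: ~{total} client write IOPS, "
          f"~{total // 15} per VM with 15 VMs")
```

With only 3 spindles the whole cluster has roughly as many write IOPS 
as a single idle laptop disk, which makes the poor VM performance 
unsurprising regardless of the object store backend.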

 >
 >I since added two new nodes, which are Intel NUCs with m.2 SATA SSDs 
for the OS and like the other nodes have a single 2.5" drive bay.
 >
 >This is being done as a hobby and a learning exercise I might add -- 
so while I have spent a lot of money on this, the funds I have to throw 
at this are not infinite.


Same here ;) 


 >
 >> I moved
 >> some exchange servers on them. Tuned down the logging, because that 
is 
 >> writing constantly to disk.
 >> With such setup you are at least secured for the future.
 >
 >The VMs I have are mostly Linux (Gentoo, some Debian/Ubuntu), with a 
few OpenBSD VMs for things like routers between virtual networks.
 >

[0] https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
