Hi David
I'll try to perform these tests soon.
Thank you.
On 08/22/2017 04:52 PM, David Turner wrote:
I would run some benchmarking throughout the cluster environment to
see where your bottlenecks are before putting time and money into
something that might not be your limiting resource.
Thanks for your advice, Maged and Chris.
I'll answer below.
On 08/22/2017 04:30 PM, Mazzystr wrote:
Also examine your network layout. Any saturation in the private
cluster network or client-facing network will be felt in clients /
libvirt / virtual machines.
As OSD count increases...
* Ensure client network / private cluster network separation -
different NICs, different wires, different switches
I would run some benchmarking throughout the cluster environment to see
where your bottlenecks are before putting time and money into something
that might not be your limiting resource. Sébastien Han put together a
great guide for benchmarking your cluster here.
https://www.sebastien-han.fr/blog/
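For example, a quick baseline (assuming a pool named "rbd" and the
default admin keyring; adjust names to your setup) could look like:

  # raw RADOS write throughput for 60 s, keeping objects for the read test
  rados bench -p rbd 60 write --no-cleanup
  # sequential read throughput against the objects written above
  rados bench -p rbd 60 seq
  # remove the benchmark objects afterwards
  rados -p rbd cleanup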
Also examine your network layout. Any saturation in the private cluster
network or client-facing network will be felt in clients / libvirt /
virtual machines.
As OSD count increases...
- Ensure client network / private cluster network separation - different
NICs, different wires, different switches
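As a minimal sketch, that separation in ceph.conf would look something
like this (the subnets are placeholders for your own ranges):

  [global]
  # client-facing traffic
  public network = 192.168.1.0/24
  # OSD replication and heartbeat traffic
  cluster network = 10.10.10.0/24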
It is likely your 2 spinning disks cannot keep up with the load. Things
are likely to improve if you double your OSDs, hooking them up to your
existing SSD journal. Technically it would be nice to run a
load/performance tool (atop, collectl, or sysstat) and measure how busy
your resources are.
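For instance, with sysstat installed, something like this would show
whether the spinning disks are saturated:

  # extended per-device stats every 5 s; sustained %util near 100 on the
  # OSD disks points to disk-bound I/O
  iostat -x 5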
Hello everyone,
I've been using Ceph to provide storage over RBD for 60 KVM virtual
machines running on Proxmox.
The Ceph cluster we have is very small (2 OSDs + 1 mon per node, and a
total of 3 nodes) and we are having some performance issues, like big
latency times (apply lat: ~0.5 s; commit lat: ...).
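For reference, those apply/commit figures can be watched per OSD with:

  # per-OSD commit and apply latency, in ms
  ceph osd perf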