Hi,
Here’s what we do to identify our top RBD users.

First, enable log level 10 for the filestore so you can see all the IOs coming 
from the VMs. Then use a script like this (used on a dumpling cluster):

  https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl

to summarize the osd logs and identify the top clients.
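(You can turn up the logging at runtime with something like
`ceph tell osd.* injectargs '--debug-filestore 10'`.) As a rough sketch of
the kind of summarizing that script does — not its actual code — here is a
minimal tally of ops per RBD image, assuming format-2 image object names of
the form `rbd_data.<image-id>.<object-no>` appear in the debug lines; the
regex and log format are assumptions, so adjust them to what your logs show:

```python
import re
from collections import Counter

# Assumed: debug-filestore lines mention the object being touched, e.g.
# "... write rbd_data.<image-id>.<object-no> ...". The exact line format
# varies by Ceph version; tune this regex to your own osd logs.
IMAGE_RE = re.compile(r"(rbd_data\.[0-9a-f]+)")

def top_rbd_images(log_lines, n=10):
    """Count log-line hits per RBD image prefix and return the n busiest."""
    counts = Counter()
    for line in log_lines:
        m = IMAGE_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)
```

You would feed it the lines of /var/log/ceph/ceph-osd.*.log and read off the
top clients from the result.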

Then it's just a matter of scripting to compute the ops/sec per volume, but 
for us at least the main use case has been to identify who is responsible for 
a new peak in overall ops, and daily-granularity statistics from the above 
script tend to suffice.
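The ops/sec part can be as simple as diffing two daily snapshots of the
per-volume counts; this sketch (the snapshot dicts and volume names are
illustrative, not from our tooling) ranks the volumes by sustained rate:

```python
def ops_rate(prev_counts, curr_counts, interval_secs):
    """Given two snapshots of per-volume op counts taken interval_secs
    apart, return (volume, ops/sec) pairs sorted busiest-first."""
    rates = {}
    for vol, curr in curr_counts.items():
        prev = prev_counts.get(vol, 0)  # volume may be new since last snapshot
        rates[vol] = (curr - prev) / interval_secs
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
```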

BTW, do you throttle your clients? We found that it's absolutely necessary, 
since without a throttle just a few active VMs can eat up the entire IOPS 
capacity of the cluster.
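If your guests run under libvirt/KVM (as they typically do with CloudStack), 
one place to throttle is the <iotune> block in the domain XML — the limits 
below are purely illustrative, and the disk definition is an assumed example, 
not your actual config:

```xml
<disk type='network' device='disk'>
  <source protocol='rbd' name='rbd/myvolume'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <!-- illustrative caps; pick numbers that fit your cluster's budget -->
    <total_iops_sec>300</total_iops_sec>
    <total_bytes_sec>62914560</total_bytes_sec>
  </iotune>
</disk>
```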

Cheers, Dan

-- Dan van der Ster || Data & Storage Services || CERN IT Department --


On 08 Aug 2014, at 13:51, Andrija Panic 
<[email protected]<mailto:[email protected]>> wrote:

Hi,

we just had some new clients come on, and for some reason we have suffered a 
very big degradation in Ceph performance (we are using CloudStack).

I'm wondering if there is a way to monitor op/s or similar usage per 
connected client, so we can isolate the heavy client?

Also, what is the general best practice for monitoring these kinds of changes 
in Ceph? I'm talking about R/W or op/s changes or similar...

Thanks,
--

Andrija Panić

_______________________________________________
ceph-users mailing list
[email protected]<mailto:[email protected]>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
