RE: [ceph-users] Opensource plugin for pulling out cluster recovery and client IO metric

2015-08-29 Thread GuangYang

 Date: Fri, 28 Aug 2015 12:07:39 +0100
 From: gfar...@redhat.com
 To: vickey.singh22...@gmail.com
 CC: ceph-us...@lists.ceph.com; ceph-us...@ceph.com; ceph-devel@vger.kernel.org
 Subject: Re: [ceph-users] Opensource plugin for pulling out cluster recovery 
 and client IO metric

 On Mon, Aug 24, 2015 at 4:03 PM, Vickey Singh
 vickey.singh22...@gmail.com wrote:
 Hello Ceph Geeks

 I am planning to develop a Python plugin that pulls out cluster recovery IO
 and client IO operation metrics, which can then be used with collectd.

 For example, I need to extract these values:

 recovery io 814 MB/s, 101 objects/s
 client io 85475 kB/s rd, 1430 kB/s wr, 32 op/s
The calculation *window* for those stats is very small; IIRC it is two PG 
versions, which most likely maps to two seconds (an average over the last two 
seconds). You may increase mon_stat_smooth_intervals to enlarge the window, 
but I haven't tried that myself.
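
If you want to try it, here is an untested sketch of bumping the window at 
runtime. The option name is real; the example value of 5 and the injectargs 
invocation are just what I would try first, and you would persist the setting 
in ceph.conf under [mon]:

# Untested sketch: widen the monitor's stat smoothing window at runtime.
# mon_stat_smooth_intervals is the option mentioned above; 5 is only an
# example value.
import subprocess

subprocess.check_call(
    ["ceph", "tell", "mon.*", "injectargs", "--mon_stat_smooth_intervals=5"]
)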

I found that 'ceph status -f json' has better-formatted output and more 
information.
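
As a starting point, here is a minimal sketch that pulls the recovery/client 
IO rates out of that JSON. The key names under 'pgmap' are what I see on my 
cluster and may differ between releases (they also only appear while there is 
matching activity), so verify them against your own output first:

import json
import subprocess

def cluster_io_rates():
    """Return recovery/client IO rates from 'ceph status -f json'."""
    out = subprocess.check_output(["ceph", "status", "-f", "json"])
    pgmap = json.loads(out.decode("utf-8")).get("pgmap", {})
    # Rate fields are only present while there is matching activity,
    # hence the defaults.
    return {
        "client_read_bytes_sec":  pgmap.get("read_bytes_sec", 0),
        "client_write_bytes_sec": pgmap.get("write_bytes_sec", 0),
        "client_op_per_sec":      pgmap.get("op_per_sec", 0),
        "recovery_bytes_sec":     pgmap.get("recovering_bytes_per_sec", 0),
        "recovery_objects_sec":   pgmap.get("recovering_objects_per_sec", 0),
    }

if __name__ == "__main__":
    print(cluster_io_rates())

Since the monitor already smooths these over the window described above, the 
plugin can just poll and dispatch the values as-is.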


 Could you please help me understand how the ceph -s and ceph -w outputs
 print cluster recovery IO and client IO information?
 Where does this information come from? Is it coming from perf dump? If yes,
 which section of the perf dump output should I focus on? If not, how can I
 get these values?

 I tried ceph --admin-daemon /var/run/ceph/ceph-osd.48.asok perf dump, but
 it generates a huge amount of information and I am confused about which
 section of the output I should use.
Perf counters hold a ton of information, and it takes time to understand the 
details. But if the purpose is just to dump them as they are and do the 
aggregation/reporting elsewhere, you can check 'perf schema' first to get the 
type of each field, then cross-check the perf counter definition for each 
type to determine how to collect and aggregate the data.
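
A rough sketch of that cross-check is below. The type field in 'perf schema' 
is a bitmask; the bit values are from Ceph's common/perf_counters.h as far as 
I remember (0x1 float/time, 0x2 integer, 0x4 long-running average dumped as 
an avgcount/sum pair, 0x8 monotonic counter), so double-check them against 
your source tree:

import json
import subprocess

SOCK = "/var/run/ceph/ceph-osd.48.asok"  # admin socket from the example above

def admin(command):
    """Run an admin-socket command and parse its JSON output."""
    out = subprocess.check_output(["ceph", "--admin-daemon", SOCK, command])
    return json.loads(out.decode("utf-8"))

def classify(type_bits):
    """Map the schema type bitmask to a collection strategy."""
    kind = "counter" if type_bits & 0x8 else "gauge"  # counters need a rate
    if type_bits & 0x4:
        kind += "+longrunavg"  # dumped as {"avgcount": N, "sum": S}
    return kind

schema = admin("perf schema")
dump = admin("perf dump")
for section, fields in schema.items():
    for name, meta in fields.items():
        value = dump.get(section, {}).get(name)
        print(section, name, classify(meta["type"]), value)

For counters you would diff successive dumps to get a per-second rate, and 
for the avgcount/sum pairs delta(sum)/delta(avgcount) gives the average over 
your polling interval.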

 This information is generated only on the monitors, based on pg stats
 from the OSDs; it is slightly laggy, and can be most easily accessed by
 calling ceph -s on a regular basis. You can get it as json output,
 which is easier to parse, and you can optionally set up an API server
 for more programmatic access. I'm not sure of the details of that last
 option, though.
 -Greg
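
For the collectd side, the bundled python plugin (configured via 
<Plugin python> in collectd.conf) is probably the easiest route. A rough 
sketch, assuming the cluster_io_rates() helper above is saved as 
ceph_status.py on collectd's module path; note the 'collectd' module only 
exists inside the daemon, so this is not runnable standalone:

import collectd

# Hypothetical module holding the helper sketched earlier in this thread.
from ceph_status import cluster_io_rates

def read_callback(data=None):
    # Dispatch each rate as a gauge under plugin "ceph".
    for name, value in cluster_io_rates().items():
        vl = collectd.Values(plugin="ceph", type="gauge", type_instance=name)
        vl.dispatch(values=[value])

collectd.register_read(read_callback, 10)  # poll every 10 seconds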