Hm, true...
One final question, I might be a noob...
13923 B/s rd, 4744 kB/s wr, 1172 op/s
what does this op/s represent - is it classic IOPS (4k reads/writes) or
something else, and how much is too much? :)  I'm familiar with SATA/SSD IOPS
specs, tests, etc., but I'm not sure what Ceph means by op/s - I could not
find anything with Google...

Thanks again Wido.
Andrija


On 8 August 2014 14:07, Wido den Hollander <w...@42on.com> wrote:

> On 08/08/2014 02:02 PM, Andrija Panic wrote:
>
>> Thanks Wido, yes, I'm aware of CloudStack in that sense, but I would prefer
>> some precise op/s per Ceph image at least...
>> I will check CloudStack then...
>>
>>
> Ceph doesn't really know that, since RBD is just a layer on top of RADOS.
> In the end the CloudStack hypervisors are doing I/O towards RADOS objects,
> so figuring out exactly how many IOps you are seeing per image is hard.
>
> The hypervisor knows this best since it sees all the I/O going through.
>
> Wido
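
Since the hypervisor sees every request, one way to find a heavy guest is to
poll the per-disk counters on each KVM host. Below is a minimal sketch using
the libvirt Python bindings; the instance name ('i-2-34-VM') and disk target
('vda') are hypothetical and need to be adjusted to your setup:

    import time
    import libvirt  # python-libvirt bindings

    # Read-only connection to the local KVM/QEMU hypervisor.
    conn = libvirt.openReadOnly('qemu:///system')

    # Hypothetical instance name and disk target.
    dom = conn.lookupByName('i-2-34-VM')
    disk = 'vda'

    def iops(dom, disk, interval=5):
        # blockStats() returns cumulative counters:
        # (rd_req, rd_bytes, wr_req, wr_bytes, errs).
        # Sample twice and take the delta to get operations per second.
        rd1, _, wr1, _, _ = dom.blockStats(disk)
        time.sleep(interval)
        rd2, _, wr2, _, _ = dom.blockStats(disk)
        return (rd2 - rd1) / float(interval), (wr2 - wr1) / float(interval)

    rd_ops, wr_ops = iops(dom, disk)
    print('read op/s: %.1f, write op/s: %.1f' % (rd_ops, wr_ops))

Running something like this across all domains on each hypervisor (these are
the same counters that 'virsh domblkstat <domain> <disk>' prints) should
point at the noisy instance.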
>
>  Thx
>>
>>
>> On 8 August 2014 13:53, Wido den Hollander <w...@42on.com> wrote:
>>
>>     On 08/08/2014 01:51 PM, Andrija Panic wrote:
>>
>>         Hi,
>>
>>         we just got some new clients and have suffered a very big
>>         degradation in Ceph performance for some reason (we are using
>>         CloudStack).
>>
>>         I'm wondering if there is a way to monitor op/s or similar usage
>>         per connected client, so we can isolate the heavy client?
>>
>>
>>     This is not very easy to do with Ceph, but CloudStack keeps track of
>>     this in the usage database.
>>
>>     With newer versions of CloudStack you can also limit the IOps of
>>     instances to prevent such situations.
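
(For reference: on KVM this throttling is, as far as I know, applied through
libvirt's block I/O tuning, which CloudStack drives via its disk offerings. A
hedged sketch of the underlying call, with the instance name, disk target and
limits purely illustrative:

    import libvirt

    # A writable connection is needed to change tuning parameters.
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('i-2-34-VM')  # hypothetical instance name

    # Cap the guest's 'vda' disk at 500 read and 500 write IOPS
    # (illustrative numbers) on the live domain.
    params = {'read_iops_sec': 500, 'write_iops_sec': 500}
    dom.setBlockIoTune('vda', params, libvirt.VIR_DOMAIN_AFFECT_LIVE)

This is only to show what happens underneath; in practice you would set the
limits in the CloudStack offering itself.)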
>>
>>         Also, what is the general best practice to monitor these kinds of
>>         changes in Ceph? I'm talking about R/W or op/s changes or similar...
>>
>>         Thanks,
>>         --
>>
>>         Andrija Panić
>>
>>
>>
>>
>>     --
>>     Wido den Hollander
>>     42on B.V.
>>     Ceph trainer and consultant
>>
>>     Phone: +31 (0)20 700 9902
>>     Skype: contact42on
>>
>> --
>>
>> Andrija Panić
>> --------------------------------------
>> http://admintweets.com
>> --------------------------------------
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
> Ceph trainer and consultant
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>



-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
