There isn't anything built-in to Ceph to help with that. You could
either increase the logging level for the OSDs and grep out
"rbd_data." object accesses (assuming v2 image format), group by the
image id, and figure out which image is being hammered. You could also
create custom "iptables" LOG rules on the OSD nodes for each
client node to see which client is sending/receiving the most
data.
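As a rough sketch of the grep/group-by step (the exact OSD log line
format varies by release, and the sample lines and client ids below are
made up for illustration; in practice you would first raise the debug
level, e.g. `ceph tell osd.* injectargs '--debug_ms 1'`, and read
/var/log/ceph/ceph-osd.*.log instead of a sample file):

```shell
# Fabricated sample of OSD log lines standing in for the real logs.
cat > /tmp/osd-sample.log <<'EOF'
... osd_op(client.4123 rbd_data.ab12cd34.0000000000000005 [write] ...)
... osd_op(client.4123 rbd_data.ab12cd34.0000000000000007 [write] ...)
... osd_op(client.5567 rbd_data.ef56ab78.0000000000000001 [read] ...)
EOF

# Tally object accesses per image id: "rbd_data.<id>" is the object
# name prefix for v2-format images, so the most frequent prefix is the
# busiest image. Prints each id with its hit count, busiest first.
grep -oE 'rbd_data\.[0-9a-f]+' /tmp/osd-sample.log \
  | sort | uniq -c | sort -rn
```

To map the winning id back to an image name, compare it against the
`block_name_prefix` that `rbd info <pool>/<image>` reports for each
image.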

On Fri, Dec 28, 2018 at 11:43 AM Sinan Polat <[email protected]> wrote:
>
> Hi Jason,
>
> Thanks for your reply.
>
> Unfortunately we do not have access to the clients.
>
> We are running Red Hat Ceph 2.x which is based on Jewel, that means we cannot 
> pinpoint who or what is causing the load on the cluster, am I right?
>
> Thanks!
> Sinan
>
> > Op 28 dec. 2018 om 15:14 heeft Jason Dillaman <[email protected]> het 
> > volgende geschreven:
> >
> > With the current releases of Ceph, the only way to accomplish this is
> > by gathering the IO stats on each client node. However, with the
> > future Nautilus release, this data will now be available directly from
> > the OSDs.
> >
> >> On Fri, Dec 28, 2018 at 6:18 AM Sinan Polat <[email protected]> wrote:
> >>
> >> Hi all,
> >>
> >> We have a couple hundred RBD volumes/disks in our Ceph cluster, each 
> >> RBD disk mounted by a different client. Currently we see quite high 
> >> IOPS on the cluster, but we don't know which client/RBD is 
> >> causing it.
> >>
> >> Is there an easy way to see the utilization per RBD disk?
> >>
> >> Thanks!
> >> Sinan
> >>
> >> _______________________________________________
> >> ceph-users mailing list
> >> [email protected]
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> > --
> > Jason
>


-- 
Jason
