Hi Aaron,

I don't think that exists currently.

Matt

On Fri, Apr 12, 2019 at 11:12 AM Aaron Bassett
<aaron.bass...@nantomics.com> wrote:
>
> I have a radosgw log centralizer that we use for an audit trail for data 
> access in our ceph clusters. We've enabled the ops log socket and added 
> logging of the http_authorization header to it:
>
> rgw log http headers = "http_authorization"
> rgw ops log socket path = /var/run/ceph/rgw-ops.sock
> rgw enable ops log = true
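>
> As a sanity check, we confirm the settings over the admin socket (the 
> daemon name below is just an example; substitute your own rgw instance):
>
> # dump the running config from the rgw admin socket, filter for ops log options
> ceph daemon client.rgw.gateway1 config show | grep ops_log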
>
> We have a daemon that listens on the ops socket, extracts/manipulates some 
> information from the ops log, and sends it off to our log aggregator.
>
> This setup works pretty well for the most part, except that when the cluster 
> comes under heavy load it can get _very_ laggy - sometimes up to several 
> hours behind. I'm having a hard time nailing down what's causing this lag. 
> The daemon is rather naive, basically just some nc with jq in between, but 
> the log aggregator has plenty of spare capacity, so I don't think it's 
> slowing down how fast the daemon is consuming from the socket.
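>
> For reference, the pipeline is roughly the following (socket path as in the 
> config above; the jq field list and the aggregator endpoint are 
> illustrative):
>
> # read the JSON ops log stream from the rgw socket, keep only the
> # fields we care about, and forward compact records to the aggregator
> nc -U /var/run/ceph/rgw-ops.sock \
>   | jq --unbuffered -c '{time, bucket, remote_addr, uri, http_x_headers}' \
>   | nc logs.example.com 5514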
>
> I was revisiting the documentation about this ops log and noticed the 
> following, which I hadn't seen previously:
>
> When specifying a UNIX domain socket, it is also possible to specify the 
> maximum amount of memory that will be used to keep the data backlog:
>
> rgw ops log data backlog = <size in bytes>
>
> Any backlogged data in excess of the specified size will be lost, so the 
> socket needs to be read constantly.
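>
> For example, to give the socket more headroom than the default (the exact 
> value here is arbitrary, and I believe the default is only a few megabytes):
>
> rgw ops log data backlog = 67108864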
>
> I'm wondering if there's a way I can query radosgw for the current size of 
> that backlog, to help me narrow down where the bottleneck may be occurring.
>
> Thanks,
> Aaron


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
