On Tue, Oct 1, 2019 at 7:54 AM Robert LeBlanc <[email protected]> wrote:
>
> On Mon, Sep 30, 2019 at 5:12 PM Sasha Litvak
> <[email protected]> wrote:
> >
> > At this point, I ran out of ideas.  I changed the nr_requests and readahead
> > parameters from 128 to 1024 and from 128 to 4096 respectively, and tuned the
> > nodes to performance-throughput.  However, I still get high latency during
> > benchmark testing.  I attempted to disable the cache on the SSDs
> >
> > for i in {a..f}; do hdparm -W 0 -A 0 /dev/sd$i; done
> >
> > and I think it did not make things any better.  I have H740 and H730
> > controllers with the drives in HBA mode.
> >
> > Other than converting them one by one to RAID0, I am not sure what else I
> > can try.
> >
> > Any suggestions?
>
> If you haven't already tried this, add this to your ceph.conf and
> restart your OSDs; it should help bring down the variance in latency
> (it will be the default in Octopus):
>
> osd op queue = wpq
> osd op queue cut off = high
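For anyone following along, those two lines go under the [osd] (or
[global]) section of ceph.conf on each OSD host, followed by an OSD
restart. A minimal sketch; I'm assuming a systemd-based deployment, so
adjust the restart command to your setup:

[osd]
osd op queue = wpq
osd op queue cut off = high

# then restart the OSDs on that host, e.g.
systemctl restart ceph-osd.target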

I should clarify. This will reduce the variance in latency for client
OPs. If the counter you are watching also includes
recovery/backfill/deep_scrub OPs, then the latency can still be high,
as these settings make recovery/backfill/deep_scrub less impactful to
client I/O at the cost of those OPs possibly being delayed a bit.
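
If you want to confirm the new values are actually active after the
restart, you can query the admin socket on one of the OSDs (osd.0 here
is just an example; run this on the host where that OSD lives):

ceph daemon osd.0 config get osd_op_queue
ceph daemon osd.0 config get osd_op_queue_cut_off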
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
