Hello Robert,
I did not make any changes, so I'm still using the prio queue.
Regards
On Mon, Jun 10, 2019 at 17:44, Robert LeBlanc wrote:
I'm glad it's working. To be clear, did you use wpq, or is it still the prio queue?
Sent from a mobile device, please excuse any typos.
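For context, switching an OSD from the default prio queue to wpq is a ceph.conf change followed by an OSD restart. The lines below are a minimal sketch with one commonly suggested combination of values; they are not settings taken from this thread:

    [osd]
    osd op queue = wpq
    osd op queue cut off = high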
On Mon, Jun 10, 2019, 4:45 AM BASSAGET Cédric
wrote:
An update from 12.2.9 to 12.2.12 seems to have fixed the problem!
On Mon, Jun 10, 2019 at 12:25, BASSAGET Cédric wrote:
Hi Robert,
Before doing anything on my prod env, I generate r/w on the ceph cluster using fio.
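For illustration, a minimal fio invocation of that kind might look like the sketch below; the target file, size, and job parameters are assumptions for the example, not the exact ones used in this thread:

    fio --name=ceph-rw-test --filename=/mnt/rbd/fio-test --size=10G \
        --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
        --bs=4k --iodepth=32 --numjobs=4 --runtime=300 --time_based \
        --group_reporting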
On my newest cluster, release 12.2.12, I did not manage to get the (REQUEST_SLOW) warning, even when my OSD disk usage goes above 95% (fio ran from 4 different hosts).
On my prod cluster, release 12.2.9, as
On Mon, Jun 10, 2019 at 1:00 AM BASSAGET Cédric <
cedric.bassaget...@gmail.com> wrote:
Hello Robert,
My disks did not reach 100% on the last warning; they climbed to 70-80% usage. But I see the rrqm/wrqm counters increasing...
Device:  rrqm/s  wrqm/s  r/s  w/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sda 0.00 4.00
With the low number of OSDs, you are probably saturating the disks. Check with `iostat -xd 2` and see what the utilization of your disks is. A lot of SSDs don't perform well with Ceph's heavy sync writes, and performance is terrible.
If some of your drives are 100% while others are lower
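One common way to gauge a single drive's synchronous-write behaviour (not a step described in this thread) is a small single-threaded fio test, roughly like the sketch below; /dev/sdX is a placeholder, and a run like this overwrites the device, so only point it at an unused disk:

    fio --name=sync-write-test --filename=/dev/sdX --ioengine=libaio \
        --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based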
Hello,
I see messages related to REQUEST_SLOW a few times per day.
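As a side note, the OSDs behind such warnings can usually be pinned down with something like the commands below (a sketch only; osd.0 is a placeholder, and dump_historic_ops must be run on the node hosting that OSD):

    ceph health detail
    ceph daemon osd.0 dump_historic_ops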
Here's my ceph -s:
root@ceph-pa2-1:/etc/ceph# ceph -s
  cluster:
    id:     72d94815-f057-4127-8914-448dfd25f5bc
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-pa2-1,ceph-pa2-2,ceph-pa2-3
    mgr: