Something seems to have happened to the RBD I was writing to. Earlier
my instances had /dev/vdb attached, and I ran my fio tests against it.
After a while the performance became terrible, and I'm not sure why.
However, after I deleted the block device and attached a freshly
created one to the instance, the IOPS recovered.
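For reference, the 4k sync/direct test I run against the attached volume is roughly the following (the exact job options are paraphrased; the device path and runtime here are assumptions, not the literal job file):

```shell
# 4k random-write test with sync + direct I/O against the attached
# RBD-backed volume (/dev/vdb on the instance); this is the workload
# whose IOPS dropped and then recovered on the fresh block device.
fio --name=rbd-4k-sync \
    --filename=/dev/vdb \
    --rw=randwrite --bs=4k \
    --ioengine=libaio --iodepth=1 \
    --direct=1 --sync=1 \
    --runtime=60 --time_based \
    --group_reporting
```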
I wanted to test with intel_idle.max_cstate=0 but was misled by the
RBD being terribly slow, so I thought the parameter you suggested had
caused the slowness; something else may have been the cause. I will
try intel_idle.max_cstate=0 again with a fresh block device to rule
out the additional kernel parameter as the cause. thx will
On Tue, Oct 18, 2016 at 6:40 PM, William Josefsson
> On Mon, Oct 17, 2016 at 6:16 PM, Nick Fisk <n...@fisk.me.uk> wrote:
>>> -----Original Message-----
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>>> William Josefsson
>>> Sent: 17 October 2016 10:39
>>> To: n...@fisk.me.uk
>>> Cc: firstname.lastname@example.org
>>> Subject: Re: [ceph-users] RBD with SSD journals and SAS OSDs
>>> hi nick, I earlier ran cpupower frequency-set --governor performance on
>>> all my hosts, which bumped all CPUs up to almost max speed or higher.
>> Did you also set/check the C-states? That can have a large impact as well.
> hi nick, yes I tried intel_idle.max_cstate=0 today, and there wasn't
> any difference; my performance has become terrible now too, and I'm
> not sure why. Something else seems to be the problem, as my
> performance with all ceph hosts rebooted is terrible: it has dropped
> to 500-1000 IOPS for the 4k blocks (sync, direct), and the latency
> has also gone up dramatically, to around 100ms. Something is wrong
> that is causing these latencies, and I'm not sure why this happens.
> Is there any BIOS setting on the Dell PE R730xd you can think of that
> would improve latency and performance? There may be some relevant
> performance parameters. thx will