Re: [ceph-users] Odd cyclical cluster performance

2017-05-16 Thread Patrick Dinnen
…performance behaviors? > -Greg > > On Thu, May 11, 2017 at 12:47 PM, Patrick Dinnen wrote: >> Seeing some odd behaviour while testing using rados bench. This is on >> a pre-split pool, two-node cluster with 12 OSDs total. >> >> ceph osd pool create newerpoolofhopes 2048 2048
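
For reference, a minimal sketch of the kind of test being described, using the pool name and PG counts from the quoted command; the benchmark duration, concurrency, and --no-cleanup flag are illustrative assumptions rather than values from the thread:

    # Pool created with 2048 placement groups (pg_num and pgp_num), as quoted above.
    ceph osd pool create newerpoolofhopes 2048 2048

    # Write benchmark against that pool: 600 seconds of 4 MB writes with 16
    # concurrent operations (rados bench defaults), keeping the written objects
    # around for later seq/rand read runs.
    rados bench -p newerpoolofhopes 600 write -t 16 -b 4194304 --no-cleanup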

[ceph-users] Odd cyclical cluster performance

2017-05-11 Thread Patrick Dinnen
…disks. The activity on the OSDs and SSDs seems anti-correlated: SSDs peak in activity as the OSDs reach the bottom of their trough, then the reverse, and the cycle repeats. Does anyone have any suggestions as to what could be causing a regular pattern like this at such a low frequency…
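
One way to line the two device groups up against each other, plus a couple of filestore sync settings worth checking when throughput looks periodic (the device names are placeholders, and pointing at the journal flush cycle is an assumption, not a conclusion from the thread):

    # Per-device utilization once a second; compare journal SSDs against data disks.
    iostat -x 1 sdb sdc nvme0n1

    # Filestore flushes its journal out to the data disks on a timer; these
    # intervals govern that cycle (queried from a running OSD's admin socket).
    ceph daemon osd.0 config get filestore_min_sync_interval
    ceph daemon osd.0 config get filestore_max_sync_interval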

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-02 Thread Patrick Dinnen
…swappiness value or use something like > https://hoytech.com/vmtouch/ to pre-cache inode entries. > > You could tarball the smaller files before loading them into Ceph, maybe. > > How are the ten clients accessing Ceph, by the way? > > On May 1, 2017, at 14:23, Patrick Dinnen wrote: > > One…
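
A rough sketch of those two suggestions; the OSD data path is a placeholder and the swappiness value is only an example, since the message doesn't name one:

    # Adjust vm.swappiness (the value here is illustrative; the message only
    # suggests changing it).
    sysctl -w vm.swappiness=10

    # Walk a directory tree with vmtouch to pull its inodes/dentries and file
    # pages into cache; -t touches everything once, -d -l daemonizes and locks
    # the pages in memory.
    vmtouch -t /var/lib/ceph/osd/ceph-0/current
    vmtouch -d -l /var/lib/ceph/osd/ceph-0/current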

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-02 Thread Patrick Dinnen
…increase the cost >> too much, probably. >> >> You could change the swappiness value or use something >> like https://hoytech.com/vmtouch/ to pre-cache inode entries. >> >> You could tarball the smaller files before loading them into Ceph, maybe. >> >> How are the ten clients accessing Ceph, by the way?…
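
The tarball idea from the quoted message, sketched out; the pool, object, and path names here are hypothetical:

    # Pack a batch of small files into one archive and store it as a single
    # RADOS object, so the cluster sees one large write instead of many tiny ones.
    tar -cf batch-0001.tar -C /data/incoming batch-0001
    rados -p mypool put batch-0001 batch-0001.tar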

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-01 Thread Patrick Dinnen
One additional detail: we also did filestore testing using Jewel and saw substantially similar results to those on Kraken. On Mon, May 1, 2017 at 2:07 PM, Patrick Dinnen wrote: > Hello Ceph-users, > > Florian has been helping with some issues on our proof-of-concept cluster, > where…

[ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-01 Thread Patrick Dinnen
Hello Ceph-users, Florian has been helping with some issues we've been experiencing on our proof-of-concept cluster. Thanks for the replies so far. I wanted to jump in with some extra details. All of our testing has been with scrubbing turned off, to remove that as a factor…
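
For completeness, the usual way to take scrubbing out of the picture for a test run (standard cluster flags, not commands quoted from the thread):

    # Prevent new scrubs and deep-scrubs cluster-wide while benchmarking.
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # ... run the tests ...

    # Re-enable scrubbing afterwards.
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub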