Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-21 Thread Eric K. Miller
[quoting a reply from ...@yourcmc.ru, sent Tuesday, January 21, 2020 3:43 PM] Hi! Thanks. The parameter gets reset when you reconnect the SSD so in fact it requires
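The preview is truncated here, but a setting that reverts whenever the drive is reconnected or power-cycled generally has to be reapplied automatically at boot. A minimal sketch of one way to do that with a udev rule, assuming hdparm is what controls the cache; the rule file name and the hdparm path are illustrative placeholders:

    # /etc/udev/rules.d/99-ssd-wcache.rules  (hypothetical file name)
    # Reapply "volatile write cache off" whenever a non-rotational SATA
    # disk appears; %k expands to the kernel device name (sda, sdb, ...).
    # Note: hdparm may live in /sbin on some distributions.
    ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd?", \
      ATTR{queue/rotational}=="0", RUN+="/usr/sbin/hdparm -W 0 /dev/%k"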

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-21 Thread Eric K. Miller
We were able to isolate an individual Micron 5200 and perform Vitaliy's tests in his spreadsheet. An interesting item - write cache changes do NOT require a power cycle to take effect, at least on a Micron 5200. The complete results from fio are included at the end of this message for the
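For context, the cache toggle and the kind of single-threaded sync-write test referenced above are typically run along these lines; a sketch, with /dev/sdX as a placeholder for the isolated drive (the fio run writes to the raw device and destroys its contents):

    # Disable (0) or enable (1) the drive's volatile write cache; on the
    # Micron 5200 this takes effect immediately, per the result above.
    hdparm -W 0 /dev/sdX

    # Worst-case-for-Ceph test: 4k random writes, sync, queue depth 1.
    fio --name=test --ioengine=libaio --direct=1 --sync=1 --bs=4k \
        --iodepth=1 --rw=randwrite --runtime=60 --time_based \
        --filename=/dev/sdX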

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-20 Thread Eric K. Miller
Hi Vitaliy, > You say you don't have access to raw drives. What does it mean? Do you > run Ceph OSDs inside VMs? In that case you should probably disable > Micron caches on the hosts, not just in VMs. Sorry, I should have been more clear. This cluster is in production, so I needed to schedule
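Since a guest may only be toggling an emulated cache, the quickest way to see what the physical drive is actually doing is to query it on the hypervisor itself; a sketch, with /dev/sdX as a placeholder:

    # ATA view: prints the drive's current write-caching state.
    hdparm -W /dev/sdX

    # SCSI mode-page view of the same bit (also works for SAS drives).
    sdparm --get WCE /dev/sdX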

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-18 Thread Eric K. Miller
Hi Vitaliy, Similar to Stefan, we have a bunch of Micron 5200s (3.84TB ECO SATA version) in a Ceph cluster (Nautilus) and performance seems less than optimal. I have followed all instructions on your site (thank you for your wonderful article btw!!), but I haven't seen much change.
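When raw-drive numbers look fine but the cluster still feels slow, a cluster-level measurement helps locate the gap; a sketch using standard Ceph tools, with the pool name as a placeholder (rados bench writes real objects, so point it at a throwaway pool):

    # Quick per-OSD write benchmark (defaults: 1 GiB in 4 MiB writes).
    ceph tell osd.0 bench

    # Small writes through the whole stack: 4 KiB ops, one at a time,
    # for 30 seconds, against a disposable pool.
    rados bench -p bench-test 30 write -b 4096 -t 1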