From: vita...@yourcmc.ru
Sent: Tuesday, January 21, 2020 3:43 PM
To: Eric K. Miller
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] low io with enterprise SSDs ceph luminous - can we
expect more? [klartext]
Hi! Thanks.
The parameter gets reset when you reconnect the SSD, so in fact it requires
that you do not power-cycle the drive after changing the parameter :-)
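A rough sketch of one way to re-apply the setting automatically after every
reconnect or reboot is a udev rule (the rule file name here is made up, and the
sd* match and hdparm path are assumptions to adjust per host):

  # /etc/udev/rules.d/99-ssd-write-cache.rules  (hypothetical file name)
  # whenever a non-rotational sd* device (re)appears, turn its volatile write cache off
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", RUN+="/usr/sbin/hdparm -W 0 /dev/%k"

Reload with "udevadm control --reload-rules" and re-apply with "udevadm trigger".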
OK, this case seems lucky; a ~2x change isn't a lot. Can you tell me the exact
model and capacity of this Micron, and which controller was used in this test?
We were able to isolate an individual Micron 5200 and run the tests from
Vitaliy's spreadsheet.
An interesting item - write cache changes do NOT require a power cycle
to take effect, at least on a Micron 5200.
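For anyone who wants to reproduce this, the single-drive tests the spreadsheet
is built around look roughly like the following (a sketch, not the exact job
files; /dev/sdX is a placeholder and the runs overwrite data on the drive):

  # latency case: single-threaded 4k sync random writes, i.e. journal/WAL-style I/O
  fio --name=sync-randwrite --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --sync=1 --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based

  # throughput case: 4k random writes at high queue depth
  fio --name=qd128-randwrite --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=128 --numjobs=1 \
      --runtime=60 --time_based

Run each once with the write cache enabled and once with it disabled and
compare the iops.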
The complete results from fio are included at the end of this message
for the
From: Sasha Litvak
Sent: 21 January 2020 10:19
To: Frank Schilder
Cc: ceph-users
Subject: Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we
expect more? [klartext]
Frank,
Sorry for the confusion. I thought that turning off the cache using hdparm -W
0 /dev/sdx takes effect right away, and that in the case of non-RAID controllers
and Seagate or Micron SSDs I would see a difference when starting a fio
benchmark right after executing hdparm. So I wonder whether it makes a
difference
> So hdparm -W 0 /dev/sdx doesn't work, or it makes no difference?
I wrote "We found the raw throughput in fio benchmarks to be very different for
write-cache enabled and disabled, exactly as explained in the performance
article.", so yes, it makes a huge difference.
> Also I am not sure I
Hi Vitaliy,
> You say you don't have access to raw drives. What does it mean? Do you
> run Ceph OSDs inside VMs? In that case you should probably disable
> Micron caches on the hosts, not just in VMs.
Sorry, I should have been more clear. This cluster is in production, so I
needed to schedule
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Stefan Bauer
Sent: Tuesday, January 14, 2020 10:28 AM
To: undisclosed-recipients
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] low io with enterprise SSDs ceph luminous -
can we expect more? [klartext]
Thank you all,
performance is indeed better.
> ...fter, but if you know
> anything about these, I'm all ears. :)
>
> Thank you!
>
> Eric
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Stefan Bauer
> Sent: Tuesday, January 14, 2020 10:28 AM
> To: undisclosed-recipients
> Cc: cep
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Eric K. Miller
Sent: 19 January 2020 04:24:33
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we
expect more? [klartext]
Sent: Tuesday, 14 January 2020 10:28
To: Wido den Hollander; Stefan Bauer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] low io with enterprise SSDs ceph luminous -
can we expect more? [klartext]
...disable signatures and rbd cache. I didn't mention it in the email to not
repeat myself. But I have it in the article :-)
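The settings being referred to go into ceph.conf, roughly like this (a sketch
using Luminous-era option names, not the article's exact list, so double-check
against the article before copying):

  [global]
  # skip cephx message signing to cut per-message CPU overhead
  cephx_require_signatures = false
  cephx_cluster_require_signatures = false
  cephx_sign_messages = false

  [client]
  # turn off the librbd cache for latency testing
  rbd_cache = false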
--
With best regards,
Vitaliy Filippov
...iable signatures"?
KR
Stefan
-----Original Message-----
From: Виталий Филиппов
Sent: Tuesday, 14 January 2020 10:28
To: Wido den Hollander; Stefan Bauer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] low io with enterprise SSDs ceph luminous
- can we expect more? [klartext]
Hi Stefan,
thank you for your time.
"temporary write through" does not seem to be a legit parameter.
However, write through is already set:
root@proxmox61:~# echo "temporary write through" > /sys/block/sdb/device/scsi_disk/*/cache_type
root@proxmox61:~# cat /sys/block/sdb/device/scsi_disk/*/cache_type
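A sketch for checking the SCSI-layer cache mode on every drive and switching
one of them (accepted values are normally "write back" and "write through";
whether the "temporary " prefix is accepted depends on the kernel version, and
/dev/sdb is a placeholder):

  # show the current cache_type of all sd* devices
  for f in /sys/block/sd*/device/scsi_disk/*/cache_type; do
      echo "$f: $(cat "$f")"
  done

  # switch one drive to write through (normally sends a MODE SELECT to the device)
  echo "write through" | tee /sys/block/sdb/device/scsi_disk/*/cache_type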
On 1/13/20 6:37 PM, vita...@yourcmc.ru wrote:
>> Hi,
>>
>> we're playing around with ceph but are not quite happy with the IOs.
>> on average 5000 iops / write
>> on average 13000 iops / read
>>
>> We're expecting more. :( any ideas or is that all we can expect?
>
> With server SSD you can expect up to ~1 write / ~25000 read iops from a
> single client.
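One way to see what a single client actually gets is fio's rbd engine against
a throwaway test image (a sketch; the pool and image names here are made up
and the image has to be created first):

  rbd create -p rbdbench --size 10G fio-test
  fio --name=client-iops --ioengine=rbd --clientname=admin --pool=rbdbench \
      --rbdname=fio-test --rw=randwrite --bs=4k --iodepth=128 \
      --runtime=60 --time_based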
Hi Stefan,
On 13.01.20 17:09, Stefan Bauer wrote:
> Hi,
>
>
> we're playing around with ceph but are not quite happy with the IOs.
>
>
> 3 node ceph / proxmox cluster with each:
>
>
> LSI HBA 3008 controller
>
> 4 x MZILT960HAHQ/007 Samsung SSD
>
> Transport protocol: SAS (SPL-3)
>
Do those SSDs have capacitors (aka power loss protection)? I took a
look at the spec sheet on Samsung's site and I don't see it mentioned.
If that's the case, it could certainly explain the performance you're
seeing. Not all enterprise SSDs have it, and it's a must-have for Ceph
since it syncs data on every write.