Ceph has a massive overhead, so it seems it maxes out at ~10000 (at most
15000) write iops per SSD with a queue depth of 128, and ~1000 iops with a
queue depth of 1 (1 ms latency). Or maybe 2000-2500 write iops (0.4-0.5 ms)
with the best possible hardware. Micron has only squeezed ~8750 iops from each drive.
rados bench is garbage; it creates and benchmarks only a very small number of objects.
If you want RBD, better to test it with fio and ioengine=rbd.
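A minimal sketch of such a test, assuming an existing pool and image (the names rbd/testimg and the 60 s runtime are placeholders, not from this thread):

  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
      --name=rbd-qd1-randwrite --rw=randwrite --bs=4k --direct=1 \
      --iodepth=1 --numjobs=1 --runtime=60 --time_based

Re-running the same job with --iodepth=128 gives the high-queue-depth number discussed above.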
On 7 February 2019 at 15:16:11 GMT+03:00, Ryan wrote:
>I just ran your test on a cluster with 5 hosts, 2x Intel 6130, 12x 860 Evo
>2TB SSD per host (6 per SAS3008), 2x
> That's a useful conclusion to take back.
Last question - we have our SSD pool set to 3x replication, and Micron states
that NVMe is good at 2x - is this "taste and safety", or are there any general
thoughts about SSD robustness in a Ceph setup?
Jesper
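For what it's worth, a pool's replication settings can be inspected and changed online; a sketch, assuming the SSD pool is named ssd-pool (a placeholder name):

  ceph osd pool get ssd-pool size        # current replica count (3 in your case)
  ceph osd pool get ssd-pool min_size    # replicas required to keep serving I/O
  ceph osd pool set ssd-pool size 2      # switch to 2x replication

Whether 2x is acceptable is exactly the durability question above; the commands only change the setting, not the risk.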
> On 07/02/2019 17:07, jes...@krogh.cc wrote:
> Thanks for your explanation. In your case, you have low concurrency
> requirements, so focusing on latency rather than total iops is your
> goal. Your current setup gives 1.9 ms latency for writes and 0.6 ms for
> reads. These are considered good, it i
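As a rough sanity check on those numbers (my arithmetic, not from the thread): at queue depth 1 a single thread can complete at most 1/latency operations per second, so

  1 / 0.0019 s ≈ 525 write iops per thread
  1 / 0.0006 s ≈ 1650 read iops per thread

and total iops only grows beyond that by keeping more operations in flight.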
On 07/02/2019 17:07, jes...@krogh.cc wrote:
Hi Maged
Thanks for your reply.
6k is low as a max write iops value, even for a single client. For a cluster
of 3 nodes, we see from 10k to 60k write iops depending on hardware.
Can you increase your threads to 64 or 128 via the -t parameter?
I can absolu
Hi Maged
Thanks for your reply.
> 6k is low as a max write iops value, even for a single client. For a cluster
> of 3 nodes, we see from 10k to 60k write iops depending on hardware.
>
> Can you increase your threads to 64 or 128 via the -t parameter?
I can absolutely get it higher by increasing the paral
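For reference, a higher-parallelism run of the benchmark used in this thread might look like this (same scbench pool as in the run quoted later; the 60 s duration is my choice):

  rados bench -p scbench -b 4096 60 write -t 64 --no-cleanup
  rados bench -p scbench -b 4096 60 write -t 128 --no-cleanup

-t sets the number of concurrent operations rados bench keeps in flight (the default is 16).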
On 07/02/2019 09:17, jes...@krogh.cc wrote:
Hi List
We are in the process of moving to the next use case for our ceph cluster
(Bulk, cheap, slow, erasure-coded, CephFS) storage was the first - and
that works fine.
We're currently on luminous / bluestore; if upgrading is deemed to
change what we
I just ran your test on a cluster with 5 hosts: 2x Intel 6130 and 12x 860 Evo
2TB SSDs per host (6 per SAS3008), 2x bonded 10Gb NICs, 2x Arista switches.
Pool with 3x replication:
rados bench -p scbench -b 4096 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4096 bytes to objects of
4x nodes, around 100GB, 2x 2660, 10Gbit, 2x LSI Logic SAS2308
Thanks for the confirmation Marc.
Can you put in a bit more hardware/network details?
Jesper
Min latency(s): 0.000155521
-----Original Message-----
From: jes...@krogh.cc [mailto:jes...@krogh.cc]
Sent: 07 February 2019 08:17
To: ceph-users@lists.ceph.com
Subject: [ceph-users] rados block on SSD - performance - how to tune and
get insight?
Hi List
We are in the process of
> On 2/7/19 8:41 AM, Brett Chancellor wrote:
>> This seems right. You are doing a single benchmark from a single client.
>> Your limiting factor will be the network latency. For most networks this
>> is between 0.2 and 0.3 ms. If you're trying to test the potential of
>> your cluster, you'll need
> On Thu, 7 Feb 2019 08:17:20 +0100 jes...@krogh.cc wrote:
>> Hi List
>>
>> We are in the process of moving to the next use case for our ceph cluster
>> (Bulk, cheap, slow, erasure-coded, CephFS) storage was the first - and
>> that works fine.
>>
>> We're currently on luminous / bluestore; if upgradi
On 2/7/19 8:41 AM, Brett Chancellor wrote:
> This seems right. You are doing a single benchmark from a single client.
> Your limiting factor will be the network latency. For most networks this
> is between 0.2 and 0.3 ms. If you're trying to test the potential of
> your cluster, you'll need multi
Hello,
On Thu, 7 Feb 2019 08:17:20 +0100 jes...@krogh.cc wrote:
> Hi List
>
> We are in the process of moving to the next use case for our ceph cluster
> (Bulk, cheap, slow, erasure-coded, CephFS) storage was the first - and
> that works fine.
>
> We're currently on luminous / bluestore; if upgra
This seems right. You are doing a single benchmark from a single client.
Your limiting factor will be the network latency. For most networks this is
between 0.2 and 0.3 ms. If you're trying to test the potential of your
cluster, you'll need multiple workers and clients.
On Thu, Feb 7, 2019, 2:17 A
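A minimal sketch of such a multi-client run, reusing the scbench pool shown elsewhere in the thread (the --run-name value is just a placeholder to keep concurrent instances from colliding):

  # started at the same time on each of several client hosts
  rados bench -p scbench -b 4096 60 write -t 64 \
      --run-name "client-$(hostname)" --no-cleanup

The per-client results can then be summed to estimate what the cluster as a whole can sustain.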
Hi List
We are in the process of moving to the next use case for our ceph cluster
(Bulk, cheap, slow, erasure-coded, CephFS) storage was the first - and
that works fine.
We're currently on luminous / bluestore; if upgrading is deemed to
change what we're seeing then please let us know.
We have 6 O