Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Виталий Филиппов
Is this a question to me or to Victor? :-) I did test my drives; Intel NVMes are capable of something like 95100 single-thread iops. On March 10, 2019, 1:31:15 GMT+03:00, Martin Verges wrote: >Hello, > >did you test the performance of your individual drives? > >Here is a small snippet:

Re: [ceph-users] ceph osd pg-upmap-items not working

2019-03-09 Thread Kári Bertilsson
Thanks, I applied https://github.com/ceph/ceph/pull/26179. Running manual upmap commands works now. I ran "ceph balancer optimize new" and it did add a few upmaps. But now there is another issue: distribution is far from perfect, but the balancer can't find further optimization. Specifically, OSD 23 is
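For reference, a hedged sketch of the upmap workflow being described - the PG id, OSD numbers and plan name below are placeholders rather than values from this thread:

# remap one replica of a PG from OSD 12 to OSD 23 by hand (placeholder ids)
ceph osd pg-upmap-items 1.7f 12 23

# let the balancer build, inspect and apply a plan in upmap mode
ceph balancer mode upmap
ceph balancer optimize new
ceph balancer show new
ceph balancer execute new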

Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-03-09 Thread Pavan Rallabhandi
That can happen if you have a lot of objects with Swift object expiry (TTL) enabled. You can run 'listomapkeys' on these log pool objects and check for the objects that have been registered for TTL as omap entries. I know this is the case with at least the Jewel version. Thanks, -Pavan. On 3/7/19, 10:09
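A hedged way to spot which log-pool objects carry the large omaps - the pool name comes from the subject, the rest is illustrative:

# count omap keys per object in the default.rgw.log pool, largest last
for obj in $(rados -p default.rgw.log ls); do
  printf '%s %s\n' "$(rados -p default.rgw.log listomapkeys "$obj" | wc -l)" "$obj"
done | sort -n | tail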

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Martin Verges
Hello, did you test the performance of your individual drives? Here is a small snippet: - DRIVE=/dev/XXX smartctl -a $DRIVE for i in 1 2 4 8 16; do echo "Test $i"; fio --filename=$DRIVE --direct=1 --sync=1 --rw=write --bs=4k --numjobs=$i --iodepth=1 --runtime=60 --time_based
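The archive preview cuts the snippet off; a plausible, hedged reconstruction of the per-drive test looks like this (note it writes directly to the raw device, so only run it against a disk with no data, and fio needs a --name for the job):

DRIVE=/dev/XXX
smartctl -a $DRIVE
for i in 1 2 4 8 16; do
  echo "Test $i"
  fio --filename=$DRIVE --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=$i --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=drive-test-$i
done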

Re: [ceph-users] CEPH ISCSI Gateway

2019-03-09 Thread Mike Christie
On 03/07/2019 09:22 AM, Ashley Merrick wrote: > Been reading into the gateway, and noticed it's been mentioned a few > times that it can be installed on OSD servers. > > I am guessing therefore there would be no issues like those sometimes mentioned > when using kRBD on an OSD node, apart from the extra
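For context, a hedged sketch of defining the gateways and exporting an RBD image inside a gwcli session (ceph-iscsi); the IQN, gateway names, IPs, pool and image are placeholders:

# inside gwcli on one of the gateway/OSD nodes
/iscsi-targets create iqn.2003-01.com.example.iscsi-gw:iscsi-igw
/iscsi-targets/iqn.2003-01.com.example.iscsi-gw:iscsi-igw/gateways create gw-1 192.168.0.11
/iscsi-targets/iqn.2003-01.com.example.iscsi-gw:iscsi-igw/gateways create gw-2 192.168.0.12
/disks create pool=rbd image=disk_1 size=90G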

[ceph-users] prioritize degraded objects over misplaced

2019-03-09 Thread Fabio Abreu
Hi everybody, I have a question about degraded objects in the Jewel 10.2.7 version: can I prioritize the recovery of degraded objects over misplaced ones? I am asking this because I am trying to simulate a disaster recovery scenario. Thanks and best regards, Fabio Abreu Reis http://fajlinux.com.br *Tel:* +55 21 98244-0161
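A hedged sketch for at least telling the two states apart; note that "ceph pg force-recovery" only exists from Luminous onwards, so it is not an option on Jewel:

# PGs that are degraded (missing replicas) vs. merely remapped/misplaced
ceph pg ls degraded
ceph pg ls remapped

# object-level summary of both states
ceph status | grep -E 'degraded|misplaced'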

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Vitaliy Filippov
There are 2: fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -pool=bench -rbdname=testimg fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=128 -rw=randwrite -pool=bench -rbdname=testimg The first measures your min possible latency - it does not scale with the

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Victor Hooi
Hi, I have retested with 4K blocks - results are below. I am currently using 4 OSDs per Optane 900P drive. This was based on some posts I found on Proxmox Forums, and what seems to be "tribal knowledge" there. I also saw this presentation
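For reference, a hedged sketch of how several OSDs can be carved out of one NVMe device with ceph-volume - the device path is a placeholder, and Proxmox's pveceph tooling may wrap this step differently:

# create 4 OSDs on a single NVMe device
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1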

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Vitaliy Filippov
Welcome to our "slow ceph" party :))) However, I have to note that:
1) 50 iops is for 4 KB blocks. You're testing it with 4 MB ones, which is kind of an unfair comparison.
2) fio -ioengine=rbd is better than rados bench for testing.
3) You can't "compensate" for Ceph's overhead even by
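To make the comparison in point 1) fairer, rados bench can also be forced down to 4 KB writes - a hedged example, the pool name is a placeholder:

# rados bench defaults to 4 MB objects; -b sets the write size, -t the concurrency
rados bench -p bench 60 write -b 4096 -t 1 --no-cleanup
rados -p bench cleanup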

[ceph-users] Rocksdb ceph bluestore

2019-03-09 Thread Vasiliy Tolstov
Hi, I'm interested in the implementation: how does Ceph store the WAL on a raw block device with RocksDB? As far as I know, RocksDB uses a filesystem to store its files. I found BlobDB in the RocksDB utilities code - does Ceph use it? Or how does Ceph use RocksDB to put key-value data on a raw block device?
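As far as I know, Ceph does not use BlobDB here: BlueStore ships a minimal internal filesystem, BlueFS, and hands it to RocksDB through the rocksdb::Env interface (BlueRocksEnv), so RocksDB still writes "files" while the bytes land on the raw device. A hedged way to peek at this on a running cluster - the OSD id and device path are placeholders:

# BlueFS space/IO counters of a running OSD
ceph daemon osd.0 perf dump bluefs

# labels of the raw device(s) backing BlueStore/BlueFS
ceph-bluestore-tool show-label --dev /dev/nvme0n1p2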

[ceph-users] OpenStack with Ceph RDMA

2019-03-09 Thread Lazuardi Nasution
Hi, I'm looking for information about where Ceph's RDMA messaging happens: on the cluster network, the public network, or both (it seems to be both, CMIIW)? I'm talking about the configuration of ms_type, ms_cluster_type and ms_public_type. In the case of OpenStack integration with RBD, which of the above three is
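A hedged sketch of the relevant ceph.conf settings - the device name is illustrative, and RDMA-capable NICs plus suitable memlock limits are assumed on every node:

[global]
# RDMA for all messengers...
ms_type = async+rdma
# ...or split it: RDMA on the cluster network, TCP on the public network
ms_cluster_type = async+rdma
ms_public_type = async+posix
ms_async_rdma_device_name = mlx5_0

As far as I understand, librbd clients such as OpenStack hypervisors talk to the MONs and OSDs over the public network, so their messenger type has to match ms_public_type.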

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Victor Hooi
Hi Ashley, Right - so the 50% bandwidth is OK, I guess, but it was more the drop in IOPS that was concerning (hence the subject line about 200 IOPS) *sad face*. That, and the Optane drives weren't exactly cheap, and I was hoping they would compensate for the overhead of Ceph. At random read,

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Konstantin Shalygin
These results (800 MB/s writes, 1500 MB/s reads, and 200 write IOPS, 400 read IOPS) seem incredibly low - particularly considering what the Optane 900P is meant to be capable of. Is this in line with what you might expect on this hardware with Ceph, though? Or is there some way to find out the

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Ashley Merrick
What kind of results are you expecting? Looking at the specs, they are "up to" 2000 MB/s write and 2500 MB/s read, so you're around 50-60% of the maximum "up to" speed, which I wouldn't say is too bad, given that Ceph / BlueStore has an overhead, especially when using a single disk for DB & WAL & content.

[ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Victor Hooi
Hi, I'm setting up a 3-node Proxmox cluster with Ceph as the shared storage, based around Intel Optane 900P drives (which are meant to be the bee's knees), and I'm seeing pretty low IOPS/bandwidth.
- 3 nodes, each running a Ceph monitor daemon, and OSDs.
- Node 1 has 48 GB of RAM and 10