On 9/03/19 10:07 PM, Victor Hooi wrote:
> Hi,
>
> I'm setting up a 3-node Proxmox cluster with Ceph as the shared storage,
> based around Intel Optane 900P drives (which are meant to be the bee's
> knees), and I'm seeing pretty low IOPS/bandwidth.
We found that CPU performance, specifically
These options aren't needed: numjobs is 1 by default, and RBD has no "sync"
concept at all - operations are always "sync" by default.
In fact even --direct=1 may be redundant, because there's no page cache
involved. However, I keep it just in case - there is the RBD cache, what if
one day fio
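If you want to take the RBD cache out of the equation entirely for these
tests, it can also be disabled for the benchmarking client - a sketch only,
assuming the option goes in the [client] section of the ceph.conf that fio's
rbd engine reads:

[client]
# disable the librbd client-side cache so only the cluster is measured
rbd cache = false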
how about adding: --sync=1 --numjobs=1 to the command as well?
On Sat, Mar 9, 2019 at 12:09 PM Vitaliy Filippov wrote:
> There are 2:
>
> fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=1 -rw=randwrite
> -pool=bench -rbdname=testimg
>
> fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=128 -rw=randwrite
> -pool=bench -rbdname=testimg
Is that a question for me or for Victor? :-)
I did test my drives - Intel NVMes are capable of something like 95100
single-thread iops.
On 10 March 2019 at 1:31:15 GMT+03:00, Martin Verges
wrote:
>Hello,
>
>did you test the performance of your individual drives?
>
>Here is a small snippet:
Hello,
did you test the performance of your individual drives?
Here is a small snippet:
DRIVE=/dev/XXX
smartctl -a $DRIVE
for i in 1 2 4 8 16; do echo "Test $i"; fio --filename=$DRIVE --direct=1 \
  --sync=1 --rw=write --bs=4k --numjobs=$i --iodepth=1 --runtime=60 \
  --time_based --group_reporting --name=drive-test; done
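If you also want a read baseline from the raw device, the same loop works
with randread - a hypothetical variant of the snippet above, not part of the
original mail:

for i in 1 2 4 8 16; do
  echo "Read test $i"
  # 4k random reads straight from the block device, one outstanding IO per job
  fio --filename=$DRIVE --direct=1 --rw=randread --bs=4k --numjobs=$i \
    --iodepth=1 --runtime=60 --time_based --group_reporting --name=read-test
done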
There are 2:
fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=1 -rw=randwrite
-pool=bench -rbdname=testimg
fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=128 -rw=randwrite
-pool=bench -rbdname=testimg
The first measures your minimum possible latency - it does not scale with the
number of OSDs, while the second measures the maximum parallel iops you can
get from a single image.
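As a rule of thumb for the first (iodepth=1) command: a single client can
never exceed 1 / average-commit-latency iops, no matter how fast the
underlying drive is. For example:

echo "1 / 0.0005" | bc   # 0.5 ms per write -> at most ~2000 iops from one client
echo "1 / 0.005" | bc    # 5 ms per write   -> at most ~200 iops from one client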
Hi,
I have retested with 4K blocks - results are below.
I am currently using 4 OSDs per Optane 900P drive. This was based on some
posts I found on Proxmox Forums, and what seems to be "tribal knowledge"
there.
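For reference, one common way to carve a single NVMe into several OSDs is
ceph-volume's batch mode - a sketch only, assuming plain ceph-volume on the
node (Proxmox's pveceph tooling may wrap this differently) and /dev/nvme0n1
as a placeholder device:

# create 4 OSDs on one Optane device
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1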
I also saw this presentation
Welcome to our "slow ceph" party :)))
However, I have to note that:
1) Iops figures are quoted for 4 KB blocks, but you're testing with 4 MB
ones - that's not a fair comparison (see the arithmetic below).
2) fio -ioengine=rbd is better than rados bench for testing.
3) You can't "compensate" for Ceph's overhead even by using the fastest
drives - the per-operation latency comes from software, not from the disk.
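For what it's worth, the 4 MB block size also explains how the reported iops
and bandwidth figures relate - assuming the numbers came from rados bench
with its default 4 MB object size:

echo $((200 * 4))    # 200 write ops/s * 4 MB =  800 MB/s
echo $((400 * 4))    # 400 read ops/s  * 4 MB = 1600 MB/s (close to the 1500 MB/s reported)

So "200 IOPS" is really the 800 MB/s of write bandwidth expressed in 4 MB
operations, not a 4 KB random-write result.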
Hi Ashley,
Right - so the 50% bandwidth is OK, I guess, but it was more the drop in
IOPS that was concerning (hence the subject line about 200 IOPS) *sad face*.
That, and the Optane drives weren't exactly cheap, and I was hoping they
would compensate for the overhead of Ceph.
At random read,
These results (800 MB/s writes, 1500 MB/s reads, and 200 write IOPS, 400
read IOPS) seem incredibly low - particularly considering what the Optane
900P is meant to be capable of.
Is this in line with what you might expect on this hardware with Ceph
though?
Or is there some way to find out where the bottleneck is?
What kind of results are you expecting?
Looking at the specs, they are "up to" 2000 MB/s write and 2500 MB/s read, so
you're at around 50-60% of the "up to" speed, which I wouldn't say is too bad
given that Ceph / BlueStore has overhead, especially when using a single
disk for the DB, WAL and content.
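Checking that estimate against the numbers in the thread (the "up to"
2000 MB/s write / 2500 MB/s read specs versus the measured 800 / 1500 MB/s):

echo $((800 * 100 / 2000))     # writes:  800 / 2000 MB/s = 40 %
echo $((1500 * 100 / 2500))    # reads:  1500 / 2500 MB/s = 60 %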
Hi,
I'm setting up a 3-node Proxmox cluster with Ceph as the shared storage,
based around Intel Optane 900P drives (which are meant to be the bee's
knees), and I'm seeing pretty low IOPS/bandwidth.
- 3 nodes, each running a Ceph monitor daemon, and OSDs.
- Node 1 has 48 GB of RAM and 10