Hi,
I see CephFS read performance somewhat lower than single-threaded RBD
sequential read performance.
Is this expected behaviour?
Is file access with CephFS single-threaded by design?
fio shows 70 MB/s seq read with 4M blocks, libaio, 1 thread, direct.
fio seq write 200 MB/s
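Roughly speaking, the read test is of this form (file name, size and the
CephFS mount point below are just placeholders):
fio --name=seqread --rw=read --bs=4M --ioengine=libaio --iodepth=1 \
    --numjobs=1 --direct=1 --size=10G \
    --filename=/mnt/cephfs/fio.test   # file path and size are placeholders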
# rados bench -t 1 -p test 60 write --no-cleanup
Hi Ilja,
On 5 December 2015 at 07:16, Ilja Slepnev wrote:
> fio shows 70 MB/s seq read with 4M blocks, libaio, 1 thread, direct.
> fio seq write 200 MB/s
The fio numbers are from fio running on a CephFS mount I take it?
> # rados bench -t 1 -p test 60 write --no-cleanup
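For a like-for-like read figure at the RADOS level, the seq bench could be
run against the objects left behind by --no-cleanup (same pool, same single
thread):
# rados bench -t 1 -p test 60 seq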
Hi all,
We switched to a (now free) Sandy Bridge based server.
This has resolved our read issues. So something about the quad AMD box
was very bad for reads...
I've got numbers if people are interested... but I would say that AMD is
not a great idea for OSDs.
Thanks for all the pointers!
On 04/17/2013 11:35 PM, Malcolm Haak wrote:
> Hi all,
Hi Malcolm!
> I jumped into the IRC channel yesterday and they said to email
> ceph-devel. I have been having some read performance issues, with reads
> being slower than writes by a factor of ~5-8.
I recently saw this kind of behaviour
Hi Mark!
Thanks for the quick reply!
I'll reply inline below.
Morning all,
Did the echos on all boxes involved... and the results are in..
[root@dogbreath ~]#
[root@dogbreath ~]# dd if=/todd-rbd-fs/DELETEME of=/dev/null bs=4M
count=10000 iflag=direct
10000+0 records in
10000+0 records out
41943040000 bytes (42 GB) copied, 144.083 s, 291 MB/s
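The echos referred to above would be along these lines on the client and OSD
boxes (the device name is a placeholder and the exact values may have
differed):
# echo 3 > /proc/sys/vm/drop_caches
# echo 4096 > /sys/block/sdX/queue/read_ahead_kb   # sdX is a placeholder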
Ok this is getting interesting.
rados -p pool bench 300 write --no-cleanup
Total time run: 301.103933
Total writes made: 22477
Write size: 4194304
Bandwidth (MB/sec): 298.595
Stddev Bandwidth: 171.941
Max bandwidth (MB/sec): 832
Min bandwidth (MB/sec): 8
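The matching read-side figure can be pulled the same way, reusing the objects
written with --no-cleanup above:
rados -p pool bench 300 seq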
Hi all,
I jumped into the IRC channel yesterday and they said to email
ceph-devel. I have been having some read performance issues, with reads
being slower than writes by a factor of ~5-8.
First info:
Server:
SLES 11 SP2
Ceph 0.56.4
12 OSDs that are hardware RAID 5; each of the twelve is