Hello,

On Tue, 08 Apr 2014 09:31:18 +0200 Josef Johansson wrote:

> Hi all,
> 
> I am currently benchmarking a standard setup with Intel DC S3700 disks
> as journals and Hitachi 4TB-disks as data-drives, all on LACP 10GbE
> network.
>
Unless that is the 400GB version of the DC S3700, you're already limiting
yourself to 365MB/s of sequential write throughput with the 200GB variant.
That only matters if sequential write speed is that important to you and
you think you'll ever get those 5 HDDs behind each journal to write at
full speed with Ceph (unlikely).
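Back-of-the-envelope, the mismatch looks like this (the 150MB/s per-HDD
figure is a rough assumption for a 7200rpm 4TB drive, the SSD numbers are
Intel's rated sequential write speeds):

```python
def journal_cap(ssd_mbps, hdd_mbps=150, hdds=5):
    """Effective sequential write ceiling (MB/s) for one journal SSD
    fronting `hdds` data drives: whichever side saturates first wins."""
    return min(ssd_mbps, hdds * hdd_mbps)

# Rated sequential write, MB/s (Intel DC S3700 spec sheet)
print(journal_cap(365))  # 200GB model: journal-bound at 365 MB/s
print(journal_cap(460))  # 400GB model: journal-bound at 460 MB/s
```

Either way the journal SSD, not the 5 spinners (~750MB/s aggregate on
paper), is the sequential ceiling per journal.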
 
> The size of my journals are 25GB each, and I have two journals per
> machine, with 5 OSDs per journal, with 5 machines in total. We currently
> use the tunables optimal and the version of ceph is the latest dumpling.
> 
> Benchmarking writes with rbd show that there's no problem hitting upper
> levels on the 4TB-disks with sequential data, thus maxing out 10GbE. At
> this moment we see full utilization on the journals.
> 
> But lowering the byte-size to 4k shows that the journals are utilized to
> about 20%, and the 4TB-disks 100%. (rados -p <pool> -b 4096 -t 256 100
> write)
> 
When you say utilization I assume you're talking about iostat or atop
output?

That's not a bug, that's comparing apples to oranges.

The rados bench default write size is 4MB, which not only happens to be
the default RBD object size but also generates a nice amount of bandwidth.

At 4k writes your SSD is obviously bored, but the actual OSD (the HDD
behind it) needs to handle all those writes and becomes limited by the
IOPS it can perform.
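To put numbers on it (all ballpark assumptions, not measurements: ~150
random write IOPS for a 7200rpm spinner, ~35k 4k write IOPS for a DC
S3700 shared by 5 OSDs):

```python
def mbps_at_iops(iops, block=4096):
    """Throughput in MB/s achievable at a given IOPS with a fixed block size."""
    return iops * block / 1e6

hdd = mbps_at_iops(150)            # one spinner: ~0.6 MB/s of 4k writes
ssd_share = mbps_at_iops(35000 / 5)  # each OSD's share of the journal: ~28 MB/s
print(hdd, ssd_share)
```

So at 4k the data disk saturates at well under 1MB/s while its slice of
the journal SSD still has ~28MB/s of headroom, which is exactly the
100% HDD / 20% journal utilization you're seeing.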

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
[email protected]           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
