What queue depth are you testing at?

 

You will struggle to get much more than about 500 IOPS for a single-threaded 
write, no matter what the backing disk is.
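If you want to see where the bottleneck is, it may be worth re-running the test 
at a few different queue depths. A minimal sketch with fio against a mapped RBD 
device (the device path, runtime and depth below are only placeholders for your 
setup):

    fio --name=qd-test --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --numjobs=1 --iodepth=32 --runtime=60 --time_based

If IOPS climb as you raise --iodepth from 1 towards 16-32, the ~500 figure is 
per-request latency (roughly 2 ms per synchronous write) rather than a limit of 
the disks or the cluster.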

 

Nick

 

From: ceph-users [mailto:[email protected]] On Behalf Of 
[email protected]
Sent: 27 May 2015 00:55
To: Vasiliy Angapov; Karsten Heymann
Cc: ceph-users
Subject: Re: [ceph-users] SSD IO performance

 

Hi,
Sorry for the confusion: the network is 1000 Mbit/s. I stated it incorrectly 
before; it is not 100 Mbit/s.

 

  _____  

[email protected]

 

From: Vasiliy Angapov <[email protected]>
Date: 2015-05-26 22:36
To: Karsten Heymann <[email protected]>; [email protected]
CC: ceph-users <[email protected]>
Subject: Re: [ceph-users] SSD IO performance

Hi,

I guess the author here means that for random loads a 100 Mbit network should 
still be able to deliver 2,500-3,000 IOPS with 4 KB blocks.

So the complaint is reasonable, I suppose.
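Back of the envelope, ignoring protocol overhead: 100 Mbit/s / 8 = 12.5 MB/s, 
and 12.5 MB/s / 4 KB per request ≈ 3,000 IOPS, so even a 100 Mbit link would 
not by itself cap the cluster at 500 IOPS.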

 

Regards, Vasily.  

 

On Tue, May 26, 2015 at 5:27 PM, Karsten Heymann <[email protected]> wrote:

Hi,

you should definitely increase the speed of the network. 100 Mbit/s is
way too slow for any use case I can think of, as it results in a
maximum data transfer rate of less than 10 MB per second, which is
slower than a USB 2.0 thumb drive.

Best,
Karsten


2015-05-26 15:53 GMT+02:00 [email protected]:
>
> Hi all:
>     I've built a Ceph 0.8 cluster of 2 nodes, each containing 5 SSD OSDs,
> with a 100MB/s network. Testing an rbd device with the default configuration,
> the result is not ideal. Apart from the random r/w capability of the SSDs,
> what should we change to get better performance?
>
>     2 nodes, 5 SSD OSDs each, 1 mon, 32 GB RAM
>     100MB/s network
> Right now the overall IOPS is just 500. Should we change the filestore or
> journal part? Thanks for any help!
>
> ________________________________
> [email protected]
>


 




_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
