Re: [ceph-users] capacity planning - iops

2016-09-19 Thread Jan Schermer
Are you talking about global IOPS or per-VM/per-RBD device?
And at what queue depth?
It all comes down to latency. I'm not sure what the numbers are on recent 
versions of Ceph and modern OSes, but I doubt it will be <1 ms for the OSD 
daemon alone. That gives you about 1000 real synchronous IOPS. With higher 
queue depths (or with more RBD devices in parallel) you can reach higher 
numbers, but you need to know what your application needs.
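
As a rough back-of-the-envelope sketch (all the latency figures here are 
illustrative assumptions, not measurements):

# Back-of-the-envelope IOPS estimate from per-op latency.
# All numbers below are assumed for illustration, not measured.

osd_latency = 0.001  # assumed ~1 ms software overhead per write in the OSD path

# At queue depth 1, a client sees at most 1/latency synchronous IOPS:
sync_iops = 1.0 / osd_latency
print(f"QD=1 ceiling: ~{sync_iops:.0f} IOPS")  # ~1000

# Deeper queues (or several RBD devices in parallel) overlap requests,
# so the ceiling grows roughly with queue depth until something saturates:
for qd in (1, 4, 16):
    print(f"QD={qd}: up to ~{qd * sync_iops:.0f} IOPS (best case)")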
For SATA drives, you need to add their latency on top of this, and it scales 
only when the writes are distributed across all the drives (so if you hammer a 
4k region it will still hit the same drives even with a higher queue depth, 
which may or may not increase throughput, or may even make it worse...).
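
Extending the same sketch with an assumed spindle latency:

# Same estimate with spindle latency added on top of the OSD overhead.
# Both latency figures are assumptions for illustration.

osd_latency  = 0.001  # assumed OSD daemon overhead per write
sata_latency = 0.008  # assumed ~8 ms average seek+rotate for a 7.2k SATA disk

per_op = osd_latency + sata_latency
print(f"QD=1, single spindle: ~{1.0 / per_op:.0f} IOPS")  # ~111

# Higher queue depths only help if the writes spread across many spindles;
# hammering one 4k region keeps hitting the same drives regardless of depth.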

Jan


> On 19 Sep 2016, at 16:23, Matteo Dacrema wrote:
> 
> Hi All,
> 
> I’m trying to estimate how many IOPS (4k direct random write) my Ceph 
> cluster should deliver.
> I’ve journals on SSDs and SATA 7.2k drives for the OSDs.
> 
> The question is: does a journal on SSD increase the maximum number of write 
> IOPS, or do I need to consider only the IOPS provided by the SATA drives 
> divided by the replica count?
> 
> Regards
> M.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] capacity planning - iops

2016-09-19 Thread Nick Fisk

> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Matteo Dacrema
> Sent: 19 September 2016 15:24
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] capacity planning - iops
> 
> Hi All,
> 
> I’m trying to estimate how many IOPS (4k direct random write) my Ceph 
> cluster should deliver.
> I’ve journals on SSDs and SATA 7.2k drives for the OSDs.
> 
> The question is: does a journal on SSD increase the maximum number of write 
> IOPS, or do I need to consider only the IOPS provided by the SATA drives 
> divided by the replica count?
> 
> Regards
> M.
Yes, pretty much this is correct. An SSD journal may help increase the total 
throughput if the workload isn’t completely random, but I guess you always need 
to design for the worst-case scenario. What an SSD journal will do is lower the 
latency per IO.

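To put that rule of thumb into numbers, a minimal sketch (the drive count, 
per-drive IOPS, and replica count below are made-up example inputs, not a 
recommendation):

# Rule-of-thumb ceiling for sustained 4k random-write IOPS on a
# journal-on-SSD, data-on-SATA cluster. Every input is an example
# assumption, not a measured value.

num_osds       = 24   # assumed count of SATA 7.2k OSDs
iops_per_drive = 120  # assumed random-write IOPS of one 7.2k spindle
replica_count  = 3    # assumed pool size

# Each client write must eventually land on replica_count spindles,
# so the aggregate drive budget is divided by the replica count:
max_write_iops = num_osds * iops_per_drive / replica_count
print(f"~{max_write_iops:.0f} sustained 4k random-write IOPS")  # ~960

The journal lets bursts complete at SSD latency, but sustained random writes 
are still bounded by what the spindles can absorb.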

 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com