We easily see line-rate sequential I/O on most disks. 

I would say that 150GB/s with 40G networking and a minimum of 20 hosts is no 
problem. 





        

Tyler Bishop 
Chief Technical Officer 
513-299-7108 x10 



[email protected] 






From: "Nick Fisk" <[email protected]> 
To: "Gerald Spencer" <[email protected]>, [email protected] 
Sent: Thursday, September 29, 2016 11:04:45 AM 
Subject: Re: [ceph-users] Interested in Ceph, but have performance questions 



Hi Gerald, 



I would say it’s definitely possible. I would make sure you invest in the 
networking so that you have enough bandwidth, and choose disks based on 
performance rather than capacity. Either lots of lower-capacity disks or SSDs 
would be best. The biggest challenge may be around the client interface (i.e. 
block, object, file) and whether you can get it to create the parallelism 
required to drive the underlying RADOS cluster. 
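As a rough illustration of why that parallelism matters (all numbers below are assumptions for the sake of the arithmetic, not measurements; on a real cluster you can probe this with `rados bench` and its `-t` concurrent-operations flag):

```python
# Hypothetical sketch: with synchronous I/O, per-thread throughput is
# bounded by object_size / per-op latency, so reaching a target bandwidth
# requires enough in-flight operations. The 4 MB object size and 40 ms
# per-op latency are assumed values for illustration only.

def threads_needed(target_gbps, object_mb=4, op_latency_ms=40):
    """Ceiling of target bandwidth divided by single-thread throughput."""
    per_thread_MBps = object_mb / (op_latency_ms / 1000)   # MB/s per thread
    target_MBps = target_gbps * 1000 / 8                   # Gbit/s -> MB/s
    return int(-(-target_MBps // per_thread_MBps))         # ceiling division

# 30 Gbit/s, the per-machine target from the original question:
print(threads_needed(30))   # 38 concurrent ops under these assumptions
```

The exact numbers don't matter; the point is that a single synchronous client stream will nowhere near saturate the cluster, so the client interface has to issue many operations in flight.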



With my 60-disk cluster I can max out a 10G NIC with both reads and writes. 
Ceph’s performance will increase with scale, so I don’t see why those figures 
wouldn’t be achievable with 40G networking. 
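Extrapolating from that 60-disk / 10G data point, a back-of-envelope sketch (the per-disk figure is derived from the numbers above; linear scaling with disk count and the host/NIC caps are assumptions):

```python
# Rough scaling estimate, not a benchmark:
# - 60 disks saturating ~10 Gbit/s of client bandwidth implies roughly
#   21 MB/s of client-visible throughput per disk (replication overhead
#   included, since we measure at the client)
# - assume throughput scales linearly with disk count
# - assume 20 hosts with 40G NICs each as the network ceiling

def estimated_client_gbps(disks, per_disk_MBps=21, hosts=20, nic_gbps=40):
    """Client-visible throughput estimate, capped by aggregate host NICs."""
    disk_limit = disks * per_disk_MBps * 8 / 1000    # Gbit/s from disks
    nic_limit = hosts * nic_gbps                     # Gbit/s from NICs
    return min(disk_limit, nic_limit)

print(estimated_client_gbps(60))    # 10.08 -- matches the 60-disk cluster
print(estimated_client_gbps(600))   # 100.8 -- if linear scaling holds
```

Real clusters rarely scale perfectly linearly, but this at least shows the disk count, not the 40G network, would be the first limit at this scale.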



Nick 




From: ceph-users [mailto:[email protected]] On Behalf Of Gerald 
Spencer 
Sent: 29 September 2016 15:38 
To: [email protected] 
Subject: [ceph-users] Interested in Ceph, but have performance questions 




Greetings new world of Ceph, 





Long story short, at work we perform high-throughput volumetric imaging and 
create a decent chunk of data per machine. We are about to bring the next 
generation of our system online, and its IO requirements will outpace our 
current storage solution (JBOD using ZFS on Linux). We are searching 
for a template-able scale-out solution that we can add to as we bring each new 
system online, starting in a few months. There are several quotes floating 
around from all of the big players, but the buy-in on hardware and software is 
unsettling, as they are a hefty chunk of change. 





The performance we are currently estimating per machine is: 


- simultaneous 30Gbps read and 30Gbps write 


- 180 TB capacity (roughly a two day buffer into a public cloud) 
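A quick sanity check of these figures (treating 30 Gbps as a peak rate, since a constant 30 Gbps would fill 180 TB well inside two days):

```python
# Back-of-envelope arithmetic on the stated per-machine requirements.
# Assumption: 30 Gbps is a peak write rate; the 180 TB "two day buffer"
# implies a lower sustained average.

GBIT = 1e9 / 8          # bytes per second per Gbit/s
TB = 1e12               # decimal terabytes

peak_write_Bps = 30 * GBIT              # 3.75 GB/s at peak
buffer_bytes = 180 * TB
two_days_s = 2 * 24 * 3600

# Sustained write rate that would fill the buffer in exactly two days:
sustained_Gbps = buffer_bytes / two_days_s * 8 / 1e9     # ~8.3 Gbit/s

# How long the buffer lasts if writes stay at the 30 Gbps peak:
hours_at_peak = buffer_bytes / peak_write_Bps / 3600     # ~13.3 hours

print(f"sustained average implied by buffer: {sustained_Gbps:.1f} Gbps")
print(f"buffer fills in {hours_at_peak:.1f} h at constant peak rate")
```

So the cluster must handle 30 Gbps bursts per machine, but the capacity figure suggests the duty cycle is well under 100%.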








So our question is: are these types of performances possible using Ceph? I 
haven't found any benchmarks of this nature beyond 


https://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf
 


That paper claims 150 GB/s; I suspect they meant 150 Gb/s (150 clients at 1 Gbps each). 





Cheers, 


Gerald Spencer 


_______________________________________________ 
ceph-users mailing list 
[email protected] 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
