You are talking about a 20 “GIG” pipe (what is that? GB/s? Gb/s? I assume the latter)
and then about 40 Mbit/s.

Am I the only one who cannot parse this? :-)
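
For what it's worth, a minimal sketch of the unit arithmetic, assuming "20 GIG"
means 20 Gbit/s (that reading is an assumption; the 40 Mbit/s average is quoted
below):

# Rough unit conversions (decimal prefixes, 8 bits per byte).
# Assumes "20 GIG" = 20 Gbit/s; the 40 Mbit/s figure is the quoted network average.
link_gbit_s = 20
traffic_mbit_s = 40

link_mb_s = link_gbit_s * 1000 / 8    # 20 Gbit/s -> 2500 MB/s
traffic_mb_s = traffic_mbit_s / 8     # 40 Mbit/s ->    5 MB/s

print(f"Link capacity: ~{link_mb_s:.0f} MB/s")
print(f"Observed load: ~{traffic_mb_s:.0f} MB/s")
print(f"Utilisation:    {traffic_mb_s / link_mb_s:.2%}")

If that assumption holds, the link is essentially idle (about 0.2% utilised),
which at least matches the "plenty of pipe left" remark below.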

> On 24 Nov 2015, at 17:27, Marek Dohojda <mdoho...@altitudedigital.com> wrote:
> 
> 7 total servers, a 20 GIG pipe between servers, both reads and writes.  The 
> network itself has plenty of pipe left; it is averaging 40 Mbit/s.
> 
> Rados bench, SAS pool, 30 s write run:
> Total time run:         30.591927
> Total writes made:      386
> Write size:             4194304
> Bandwidth (MB/sec):     50.471 
> 
> Stddev Bandwidth:       48.1052
> Max bandwidth (MB/sec): 160
> Min bandwidth (MB/sec): 0
> Average Latency:        1.25908
> Stddev Latency:         2.62018
> Max latency:            21.2809
> Min latency:            0.029227
> 
> Rados bench, SSD pool, write run:
> Total time run:         20.425192
> Total writes made:      1405
> Write size:             4194304
> Bandwidth (MB/sec):     275.150 
> 
> Stddev Bandwidth:       122.565
> Max bandwidth (MB/sec): 576
> Min bandwidth (MB/sec): 0
> Average Latency:        0.231803
> Stddev Latency:         0.190978
> Max latency:            0.981022
> Min latency:            0.0265421
> 
> 
> As you can see, SSD is better, but not as much better as I would expect it to be.
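
A minimal sketch comparing the two runs, using only the figures quoted above
(the totals and the ratio are derived here, not taken from the original mail):

# Compare the two rados bench write runs quoted above (4194304-byte objects).
runs = {
    "SAS": {"seconds": 30.591927, "writes": 386,  "bw_mb_s": 50.471,  "stddev": 48.1052},
    "SSD": {"seconds": 20.425192, "writes": 1405, "bw_mb_s": 275.150, "stddev": 122.565},
}

for name, r in runs.items():
    total_mb = r["writes"] * 4  # each write is 4194304 bytes, i.e. 4 MiB
    print(f"{name}: ~{total_mb} MB written in {r['seconds']:.1f} s, "
          f"mean {r['bw_mb_s']:.0f} MB/s, stddev {r['stddev']:.0f} MB/s")

print(f"SSD/SAS bandwidth ratio: {runs['SSD']['bw_mb_s'] / runs['SAS']['bw_mb_s']:.1f}x")

On those numbers the SSD pool actually averages about 5.5x the SAS pool, but
both runs report a minimum bandwidth of 0 and a large stddev, which usually
points at periodic stalls rather than a steady device limit.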
> 
> 
> 
> On Tue, Nov 24, 2015 at 9:10 AM, Alan Johnson <al...@supermicro.com> wrote:
> Hard to know without more config details such as the number of servers and the 
> network (GigE or 10 GigE), and I'm also not sure how you are measuring (reads or 
> writes). You could try RADOS bench as a baseline. I would expect more performance 
> with 7 x 10K spinners journaled to SSDs. The fact that the SSDs did not perform 
> much better may point to a bottleneck elsewhere; the network, perhaps?
> 
> From: Marek Dohojda [mailto:mdoho...@altitudedigital.com]
> Sent: Tuesday, November 24, 2015 10:37 AM
> To: Alan Johnson
> Cc: Haomai Wang; ceph-users@lists.ceph.com
> 
> Subject: Re: [ceph-users] Performance question
> 
>  
> 
> Yeah, they are; that is one thing I was planning on changing. What I am really 
> interested in at the moment is a rough idea of expected performance.  I mean, is 
> 100 MB/s around normal, very low, or "could be better"?
> 
>  
> 
> On Tue, Nov 24, 2015 at 8:02 AM, Alan Johnson <al...@supermicro.com> wrote:
> 
> Are the journals on the same device? It might be better to use the SSDs for 
> journaling, since you are not getting better performance with the SSDs.
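
A minimal sketch of how one could check where each OSD's journal actually
lives, assuming the default FileStore layout under /var/lib/ceph/osd/ (the
path and symlink layout are assumptions, not something confirmed in this
thread):

# Hypothetical check: resolve each FileStore OSD's journal path to see whether
# it sits inside the OSD data directory (co-located) or on a separate device.
# Assumes the default /var/lib/ceph/osd/ceph-*/ layout, which may not apply here.
import glob
import os

for journal in sorted(glob.glob("/var/lib/ceph/osd/ceph-*/journal")):
    osd_dir = os.path.dirname(journal)
    target = os.path.realpath(journal)  # follows the symlink, if there is one
    colocated = target.startswith(osd_dir + os.sep)
    status = "same filesystem as the data" if colocated else "separate device/partition"
    print(f"{osd_dir}: journal -> {target} ({status})")

If the journals do share a device with the data, every client write hits that
device twice (journal first, then the filestore), which roughly halves the
usable write bandwidth on it.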
> 
>  
> 
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda
> Sent: Monday, November 23, 2015 10:24 PM
> To: Haomai Wang
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Performance question
> 
>  
> 
> Sorry, I should have specified: the SAS pool is the one doing 100 MB/s :), but 
> to be honest the SSD pool isn't much faster.
> 
>  
> 
> On Mon, Nov 23, 2015 at 7:38 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
> 
> On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda <mdoho...@altitudedigital.com> wrote:
> > No SSD and SAS are in two separate pools.
> >
> > On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
> >>
> >> On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda <mdoho...@altitudedigital.com> wrote:
> >> > I have a Hammer Ceph cluster on 7 nodes with 14 OSDs in total: 7 are SSD
> >> > and 7 are SAS 10K drives. I typically get about 100 MB/s IO rates on this
> >> > cluster.
> 
> So which pool gives you the 100 MB/s?
> 
> 
> >>
> >> Did you mix SAS and SSD in one pool?
> >>
> >> >
> >> > I have a simple question: is 100 MB/s what I should expect with my
> >> > configuration, or should it be higher? I am not sure whether I should be
> >> > looking for issues or just accept what I have.
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Best Regards,
> >>
> >> Wheat
> >
> >
> 
> 
> --
> Best Regards,
> 
> Wheat
> 
>  
> 
>  
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
