Hi Christian,

Thanks for your reply; it's clear to me now. Thanks for your help!


On Fri, Apr 11, 2014 at 10:25 AM, Christian Balzer <[email protected]> wrote:

>
> Hello,
>
> On Fri, 11 Apr 2014 09:48:56 +0800 Punit Dambiwal wrote:
>
> > Hi,
> >
> > What is the drawback of running the journals on RAID1?
> >
> Did you read what I wrote below?
>
> > My plan is 2 SSDs in RAID1 (then I will create virtual disks for the OS
> > as well as for every OSD). That means one virtual disk for the OS and
> > another 24 virtual disks for journals?
> >
> Assuming your SSD is a decent Intel DC S3700 400GB (even though you have
> two, with RAID1 you only have the write capacity of one), its maximum
> write speed is 460MB/s.
> That means two things:
> a) if you're using 10GbE for networking, you have already limited your
> maximum throughput to that 460MB/s, instead of the roughly 1GB/s your
> network could theoretically achieve.
> b) at that speed your SSD journal can serve about 4 actual OSD disks at
> full speed. You have crippled the performance potential of your storage
> node immensely.
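The arithmetic behind (a) and (b) can be sanity-checked with a quick sketch. The per-HDD write speed of ~120MB/s and the ~80% usable network fraction are my assumptions for illustration, not measured figures:

```python
# Back-of-envelope check for one journal SSD vs. network and OSD count.
ssd_write_mb_s = 460              # DC S3700 400GB seq. write; RAID1 = one SSD's worth
network_mb_s = 10_000 / 8 * 0.8   # 10GbE: ~1250 MB/s raw; ~80% usable is an assumption
hdd_write_mb_s = 120              # assumed sequential write of a 7200rpm HDD

# (a) the journal caps throughput well below what 10GbE could carry
print(ssd_write_mb_s < network_mb_s)     # True: the SSD is the bottleneck

# (b) how many OSDs one SSD can journal at full speed
print(ssd_write_mb_s // hdd_write_mb_s)  # 3 -- nowhere near 24
```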
>
> Even with the limitation of the single mini-SAS link to the backplane in
> this storage server (it is 4 lanes, so 24Gb/s total) you would need 4 or
> 5 dedicated SSDs at the above speed to saturate that link.
>
> Of course the bandwidth of that link is theoretical; with 24 disks to
> serve, it will be lower in practice.
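A similar sketch for the mini-SAS link math, assuming ~600MB/s of payload per 6Gb/s lane after 8b/10b encoding and the same SSD figure as above:

```python
# One mini-SAS (SFF-8087) link to the backplane: 4 lanes of 6 Gb/s.
lanes = 4
lane_mb_s = 600                  # ~600 MB/s payload per lane after 8b/10b encoding
link_mb_s = lanes * lane_mb_s    # ~2400 MB/s shared by all 24 drive bays
ssd_write_mb_s = 460             # assumed DC S3700 400GB sequential write

# How many such SSDs would it take to saturate the link?
print(round(link_mb_s / ssd_write_mb_s, 1))  # 5.2, i.e. the "4 or 5" above
```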
>
> > Please suggest me better way to do this ??
> >
> I already did below.
>
> Regards,
>
> Christian
> >
> > On Wed, Apr 9, 2014 at 4:02 PM, Christian Balzer <[email protected]> wrote:
> >
> > > On Wed, 9 Apr 2014 14:59:30 +0800 Punit Dambiwal wrote:
> > >
> > > > Hi,
> > > >
> > > > I have changed my plan and now I want to use the following Supermicro
> > > > server:
> > > >
> > > > SuperStorage Server 6047R-E1R24L
> > > >
> > > > Can anyone tell me whether this server is good for the OSD
> > > > nodes: two SSDs in RAID1 (OS & journal) and 24 HDDs for OSDs (JBOD
> > > > on the motherboard controller)?
> > > >
> > > Wrong on so many levels.
> > >
> > > Firstly, you're proposing 2 SSDs (really just one, if you're using
> > > RAID1 for the journal partitions as well) for 24 OSDs.
> > > The SSD will be a speed bottleneck and will also have to handle ALL
> > > the writes that ever happen to the whole machine (meaning it will wear
> > > out quickly).
> > >
> > > If you want/need SSD journals, a sensible ratio would be 3-4 OSDs per
> > > journal SSD (partitioned into the respective number of journals).
> > > So something like 6 SSDs and 18 HDDs.
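As a quick sizing sketch for a 24-bay chassis, using the conservative end of the 3-4 ratio above:

```python
# Split 24 bays so each journal SSD serves 3 HDD-backed OSDs.
bays = 24
osds_per_journal_ssd = 3                   # conservative end of the 3-4 range
# Each group of (osds_per_journal_ssd + 1) bays contains one SSD:
ssds = bays // (osds_per_journal_ssd + 1)  # 6 journal SSDs
hdds = bays - ssds                         # 18 OSD HDDs
print(ssds, hdds)                          # 6 18
```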
> > >
> > > Secondly, that backplane is connected to the HBA with one mini-SAS
> > > link. That means at best 4 lanes of 6Gb/s for 24 drives, but it might
> > > be just one lane; the manual is typical Supermicro quality. =.=
> > > Another, potentially massive bottleneck.
> > >
> > > Also, what are your goals in terms of throughput and IOPS here?
> > > If you're planning on getting lots of 24 disk boxes, fine.
> > > Otherwise you might be better off getting smaller nodes.
> > >
> > > Regards,
> > >
> > > Christian.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Fri, Apr 4, 2014 at 11:51 AM, Ирек Фасихов <[email protected]>
> > > > wrote:
> > > >
> > > > > You need to use Dell OpenManage:
> > > > >
> > > > > https://linux.dell.com/repo/hardware/.
> > > > >
> > > > >
> > > > >
> > > > > 2014-04-04 7:26 GMT+04:00 Punit Dambiwal <[email protected]>:
> > > > >
> > > > >> Hi,
> > > > >>
> > > > >> I want to use Dell R515/R510 servers for the OSD nodes:
> > > > >>
> > > > >> 1. 2x SSD for the OS (RAID1)
> > > > >> 2. 10x Seagate 3.5" 3TB HDD for OSDs (no RAID, JBOD)
> > > > >>
> > > > >> To create the JBOD, I created all 10 HDDs as single-disk RAID0
> > > > >> volumes, but the problem is that when I pull an HDD out of the
> > > > >> server and plug it in again, I need to import the RAID
> > > > >> configuration again to make the OSD work.
> > > > >>
> > > > >> Can anyone suggest a good way to do this?
> > > > >>
> > > > >> Thanks,
> > > > >> Punit
> > > > >>
> > > > >> _______________________________________________
> > > > >> ceph-users mailing list
> > > > >> [email protected]
> > > > >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > > >>
> > > > >>
> > > > >
> > > > >
> > > > > --
> > > > > Regards, Irek Fasikhov
> > > > > Mobile: +79229045757
> > > > >
> > >
> > >
> > > --
> > > Christian Balzer        Network/Systems Engineer
> > > [email protected]           Global OnLine Japan/Fusion Communications
> > > http://www.gol.com/
> > >
>
>
>
