Hi Markus,

the X10DRH-CT supports only 16 drives by default. If you want to connect more drives, there is a special SKU from Supermicro with support for more drives, or you need an additional SAS controller. We are using two E5-2630 v3 CPUs (8 cores, 2.4 GHz) for 30 drives on an SM X10DRI-T. It works perfectly on a replication-based cluster. If you are planning to use erasure coding, you have to think about a higher spec.
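To see roughly why erasure coding pushes you to a higher spec, here is a back-of-the-envelope sizing sketch. The multipliers are assumptions based on common community rules of thumb (about 1 core-GHz per OSD for replication, and roughly double that for erasure coding), not figures from this thread:

```python
import math

# Rough CPU sizing sketch for a Ceph OSD node.
# ASSUMPTION: ~1 core-GHz per OSD for replication and ~2x that for
# erasure coding -- community rules of thumb, not measured figures.
def required_core_ghz(num_osds, per_osd_ghz=1.0, ec_multiplier=2.0,
                      erasure_coded=False):
    """Total core-GHz a node needs to run num_osds OSD daemons."""
    factor = ec_multiplier if erasure_coded else 1.0
    return num_osds * per_osd_ghz * factor

def cores_needed(core_ghz, clock_ghz):
    """Cores at a given clock speed needed to supply core_ghz."""
    return math.ceil(core_ghz / clock_ghz)

# 30-drive node with 2.4 GHz cores (e.g. the E5-2630 v3 mentioned above):
print(cores_needed(required_core_ghz(30), 2.4))                      # 13 cores (replication)
print(cores_needed(required_core_ghz(30, erasure_coded=True), 2.4))  # 25 cores (EC)
```

Under these assumptions, dual 8-core 2.4 GHz parts (16 cores) comfortably cover replication on 30 drives but fall well short for erasure coding, which matches our experience.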
Does anyone know the exact processor requirements for a 30-drive node with erasure coding? I can't find a suitable hardware recommendation for erasure coding.

Cheers
K. Mohamed Pakkeer

On Thu, Apr 9, 2015 at 1:30 PM, Markus Goldberg <[email protected]> wrote:
> Hi,
> I have a backup storage with ceph 0.93.
> As with every backup system, it is only written to and hopefully never read.
>
> The hardware is 3 Supermicro SC847 cases with 30 SATA HDDs each (2- and
> 4-TB WD disks) = 250TB.
> I have realized that the motherboards and CPUs are totally undersized, so
> I want to install new boards.
> I'm thinking of the following:
> 3 Supermicro X10DRH-CT or X10DRC-T4+ with 128GB memory each.
> What do you think about these boards? Will they fit into the SC847?
> They have SAS and 10GBase-T onboard, so no extra controller seems to be
> necessary.
> Which Xeon v3 should I take, and how many cores?
> Does anyone know if M.2 SSDs are supported in their PCIe slots?
>
> Thank you very much,
> Markus
>
> --------------------------------------------------------------------------
> Markus Goldberg          Universität Hildesheim
>                          Rechenzentrum
> Tel +49 5121 88392822    Universitätsplatz 1, D-31141 Hildesheim, Germany
> Fax +49 5121 88392823    email [email protected]
> --------------------------------------------------------------------------
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

--
Thanks & Regards
K. Mohamed Pakkeer
Mobile: 0091-8754410114
