Thanks, Tom and John, both of your inputs were really helpful and helped put
things into perspective.
Much appreciated.

@John, I am based out of Dubai.


On Wed, Aug 29, 2018 at 2:06 AM John Hearns <[email protected]> wrote:

> James, you also use the words "enterprise" and "production-ready".
> Is Red Hat support important to you?
>
> On Tue, 28 Aug 2018 at 23:56, John Hearns <[email protected]> wrote:
>
>> James, well for a start, don't use a SAN. I speak as someone who managed a
>> SAN with Brocade switches and multipathing for an F1 team. Ceph is
>> software-defined storage: you want discrete storage servers with a
>> high-bandwidth Ethernet (or maybe InfiniBand) fabric.
>>
>> Fibre Channel still has its place here, though, if you want servers with
>> FC-attached JBODs.
>>
>> Also, you ask about the choice between spinning disks, SSDs, and NVMe
>> drives. Think about the COST for your petabyte archive.
>> True, these days you can argue that an all-SSD build can be
>> cost-comparable to spinning disks. But NVMe? Yes, you get the best
>> performance... but do you really want all that video data on $$$ NVMe?
>> You need tiering.
>>
>> Also, don't forget low-and-slow archive tiers: shingled (SMR) archive
>> disks and perhaps tape.
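>>
>> To make the cost point concrete, here is a minimal Python sketch. The
>> $/TB figures are placeholder assumptions for illustration, not quotes;
>> substitute real pricing before drawing conclusions.
>>
>>   # Rough media cost for 1 PB raw. Prices are assumed placeholders.
>>   price_per_tb = {
>>       "NVMe SSD": 300,       # assumed $/TB
>>       "SATA SSD": 150,       # assumed $/TB
>>       "7.2k HDD": 25,        # assumed $/TB
>>       "SMR disk/tape": 10,   # assumed $/TB
>>   }
>>   raw_pb = 1.0
>>   for media, usd_per_tb in price_per_tb.items():
>>       print(f"{media:>14}: ${usd_per_tb * raw_pb * 1000:>9,.0f} per PB raw")
>>
>> Even with rough numbers, the spread between the top and bottom rows is
>> the whole argument for tiering.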
>>
>> Me, I would start from the building blocks of Supermicro 36-bay storage
>> servers. Fill them with 12 TB helium drives, and use the two slots in the
>> back for SSDs for your journaling.
>> For a higher-performance tier, look at the 'double double' storage
>> servers from Supermicro, or, even nicer, the new 'ruler' form-factor
>> servers. For a higher-density archiving tier, the 90-bay Supermicro
>> servers.
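>>
>> Rough raw-versus-usable numbers for those chassis, as a sketch (the
>> usable figures depend entirely on your pool settings; 3x replication
>> and an EC 8+3 profile are just example choices):
>>
>>   # Raw vs usable capacity per chassis, assuming 12 TB drives.
>>   def usable_replicated(raw_tb, replicas=3):
>>       return raw_tb / replicas          # 3x replication
>>
>>   def usable_ec(raw_tb, k=8, m=3):
>>       return raw_tb * k / (k + m)       # erasure coding, k data + m parity
>>
>>   for bays in (36, 90):
>>       raw = bays * 12
>>       print(f"{bays}-bay: {raw} TB raw, {usable_replicated(raw):.0f} TB "
>>             f"at 3x, {usable_ec(raw):.0f} TB at EC 8+3")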
>>
>> Please get in touch with someone for advice. If you are in the UK, I am
>> happy to help and point you in the right direction.
>>
>> On Tue, 28 Aug 2018 at 21:05, James Watson <[email protected]>
>> wrote:
>>
>>> Dear cephers,
>>>
>>> I am new to the storage domain and trying to get my head around an
>>> enterprise, production-ready setup.
>>>
>>> The following article (on Yahoo's Ceph implementation) helps a lot here:
>>> https://yahooeng.tumblr.com/tagged/object-storage
>>>
>>> But a couple of questions:
>>>
>>> What HDDs would they have used here: NVMe, SATA, SAS, etc.? (With just 52
>>> storage nodes they got 3.2 PB of capacity!)
>>> I tried to calculate a similar setup with the HGST Ultrastar He12 (12 TB,
>>> and more recent) and would need 86 HDDs, which adds up to only 1 PB!
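>>>
>>> Here is my quick arithmetic (the totals come from the article;
>>> everything else is just division):
>>>
>>>   # Sanity check of the capacity figures above.
>>>   yahoo_pb, yahoo_nodes = 3.2, 52
>>>   print(f"Yahoo: ~{yahoo_pb * 1000 / yahoo_nodes:.0f} TB raw per node")
>>>
>>>   he12_tb, drives = 12, 86
>>>   print(f"86 x He12: {drives * he12_tb} TB, i.e. about 1 PB raw")
>>>   print(f"12 TB drives needed for 3.2 PB raw: "
>>>         f"{yahoo_pb * 1000 / he12_tb:.0f}")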
>>>
>>> How are the HDDs attached: is it DAS, or a SAN (using Fibre Channel
>>> switches, host bus adapters, etc.)?
>>>
>>> Do we need a proprietary hashing algorithm to implement a multi-cluster
>>> setup of Ceph, in order to contain CPU/memory usage within a single
>>> cluster when rebuilding happens after a device failure?
>>>
>>> If a proprietary hashing algorithm is required to set up multi-cluster
>>> Ceph behind a load balancer, then what alternative setup could we deploy
>>> to address the same issue?
>>>
>>> The aim is to design a similar architecture, but with upgraded products
>>> and higher performance. Any suggestions or thoughts are welcome.
>>>
>>> Thanks in advance
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
