We are doing it with a system integrator called Racklogic in San Jose.
We tell them what to build and they do it per our instructions.  We are
running 3 datacenters with 500 servers, however, and 20 Gbps of traffic
to the world... so a lot of our stuff is custom made.

-Jack

On Wed, Nov 3, 2010 at 11:45 AM, Jason Lotz <[email protected]> wrote:
> Thanks for the replies.  My takeaway is that most organizations are buying
> from vendors (Dell, HP, SuperMicro, etc.).  While "build it yourself" is
> an approach, I'm not hearing from a lot of companies that are doing it.
>
> Thanks again,
> Jason
>
> On Wed, Nov 3, 2010 at 10:04 AM, Michael Segel
> <[email protected]> wrote:
>
>>
>> Well I usually go to Home Depot, even though there's an ACE a block away...
>> :-)
>> (Just kidding)
>>
>> If you're keen on Dell, I don't know if they are still making R410s.
>>
>> They're 1U, so you can put in 4 hot-swap drives, giving you roughly 7TB per
>> node.
>> They have multiple 1 GbE ports, so you can bond them if you need to. Assuming
>> you're using 'standard' SATA drives, you will max out your drive I/O before
>> you max out your networking bandwidth, so if you do port bonding you'll have
>> enough headroom.
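>>
>> A rough back-of-the-envelope sketch (the ~100 MB/s sequential per SATA
>> drive is my assumption; your drives and workload will vary):
>>
>>     # Rough throughput math: 4-drive node vs bonded 1 GbE ports.
>>     DRIVE_MBPS = 100   # assumed sequential MB/s per SATA drive
>>     drives = 4
>>     disk_gbps = drives * DRIVE_MBPS * 8 / 1000.0   # ~3.2 Gbps aggregate
>>     for ports in (1, 2, 4):
>>         net_gbps = ports * 1.0                     # bonded 1 GbE links
>>         print("%d port(s): disk %.1f Gbps vs net %.1f Gbps" %
>>               (ports, disk_gbps, net_gbps))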
>>
>> For your ToR switch, I'd recommend the new switch by Blade.
>>
>> http://www.bladenetwork.net/?pi_ad_id=6155346275&gclid=CISh1Y7khKUCFYPV5wod_X3kQA
>> Note: IBM bought them out so prices may vary...
>>
>> They announced a new ToR switch that has (I think) 42 1 GbE ports with 4
>> 10 GbE uplink ports.
>> Definitely something to consider, because if you try to 'trunk' your switch
>> over a 1 GbE port, you'll see the bottleneck between racks hit you hard.
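>>
>> The oversubscription math, taking the 42-port count above (which, again,
>> is from memory, not a spec sheet):
>>
>>     # Rack-to-rack bandwidth ratio for a ToR switch.
>>     edge_gbps = 42 * 1.0             # 42 x 1 GbE server-facing ports
>>     for uplink_gbps in (1.0, 4 * 10.0):
>>         print("uplink %.0f Gbps -> oversubscription %.2f:1" %
>>               (uplink_gbps, edge_gbps / uplink_gbps))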
>>
>>
>> If you've got the budget, you could go with 10 GbE on the motherboard... or
>> go with SolarFlare's NIC cards:
>> http://www.solarflare.com/index.php
>>
>> They have a sweet card that has 2 NIC ports (SFP+), each capable of 10 GbE
>> bidirectional (so the card handles 40 GbE).
>> Definitely a good option if you're doing more things in memory, or you have
>> 8 drives or more per node.
>> It also gives your hardware some headroom for the future.
>> Note: 10 GbE isn't 'cheap' and most people don't need it.
>> $10K for the switch and $1K per NIC card is a good budget estimate...
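>>
>> For example, using those ballpark prices on a hypothetical 20-node rack
>> (the node count is just for illustration):
>>
>>     # Ballpark 10 GbE networking cost for a hypothetical 20-node rack.
>>     switch_usd, nic_usd, nodes = 10000, 1000, 20
>>     total = switch_usd + nic_usd * nodes
>>     print("~$%d total, ~$%d per node" % (total, total // nodes))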
>>
>> If you want to get away from Dell, you can look at other hardware
>> providers, or you could build your own white boxes for less money, provided
>> you have people who know how to build, install, and support your hardware/OS.
>> Most corporations don't do this because it's easier to pick up the phone and
>> order a pre-built box that comes with support.
>>
>> You may consider a hybrid approach. Go with Dell/IBM/Oracle/HP (weird saying
>> Oracle and not Sun) for your 'master nodes' [NN, SN, ZKs], where you have
>> RAIDed drives (smaller) and more memory.
>> Go white box for your DNs (RSs), where if you lose a box, you just bring up a
>> new one in its place and re-balance.
>>
>> HTH
>>
>> -Mike
>>
>>
>> > From: [email protected]
>> > Date: Wed, 3 Nov 2010 09:21:03 -0400
>> > Subject: Where do you get your hardware?
>> > To: [email protected]
>> >
>> > We are in the process of analyzing our options for the future purchases of
>> > our Hadoop/HBase DN/RS servers.  Currently, we purchase Dell PowerEdge
>> > R710's, which work well for us.  However, we know that there are other
>> > options that may give us more bang for our buck.
>> >
>> > I'm not as interested in knowing the specs of the machines that people are
>> > using.  Rather, I'm curious to know where you buy them from or if you are
>> > building them yourselves.
>> >
>> > Any feedback on how you acquire server hardware in your environment would be
>> > greatly appreciated.
>> >
>> > Jason
>>
>>
>
