Brian,

You seem to have a pretty large cluster. What do you think of the overall 
performance?
Is your setup using OpenSSH or SSH2?

I'm new to this and trying to set up a 20-node cluster, but our Linux boxes 
already enforce F-Secure SSH2, which I found HDFS 0.18 does not support at 
the moment. 

Does anyone have any ideas for a workaround?
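
From what I can tell, HDFS itself does not speak SSH; the ssh dependency is only
in the bin/*.sh helper scripts (start-all.sh via slaves.sh) that loop over
conf/slaves. So a possible workaround (sketched below, assuming a stock 0.18
tarball layout; paths are my assumption, not something from this thread) is to
skip those scripts and start each daemon directly on its host, using whatever
remote-execution mechanism the F-Secure setup does allow:

    # Paths assume a stock 0.18 tarball install; adjust to your layout.
    # On the NameNode box:
    bin/hadoop-daemon.sh start namenode
    # On the JobTracker box:
    bin/hadoop-daemon.sh start jobtracker
    # On each of the 20 worker boxes:
    bin/hadoop-daemon.sh start datanode
    bin/hadoop-daemon.sh start tasktracker

Alternatively, if the F-Secure client accepts OpenSSH-style flags, slaves.sh
passes HADOOP_SSH_OPTS (settable in conf/hadoop-env.sh) to whatever ssh is on
the PATH, so that might be another angle to try.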


Thanks and best Rgds. 
        Roger Zhang 

-----Original Message-----
From: Brian Bockelman [mailto:[EMAIL PROTECTED] 
Sent: November 4, 2008 21:36
To: [email protected]
Subject: Re: Hadoop hardware specs

Hey Arjit,

We use all internal SATA drives in our cluster, which is about 110TB  
today; if we grow it to our planned 350TB, it will be a healthy mix of  
worker nodes with SATA, large internal chassis (12-48TB), SCSI-attached  
vaults, and Fibre Channel vaults.

Brian

On Nov 4, 2008, at 4:16 AM, Arijit Mukherjee wrote:

> Hi All
>
> We're thinking of setting up a Hadoop cluster which will be used to
> create a prototype system for analyzing telecom data. The wiki page on
> machine scaling (http://wiki.apache.org/hadoop/MachineScaling) gives an
> overview of the node specs, and from the Hadoop primer I found the
> following specs:
>
> * 5 x dual-core CPUs
> * RAM: 4-8GB; ECC preferred, though more expensive
> * 2 x 250GB SATA drives (on each of the 5 nodes)
> * 1-5 TB external storage
>
> I'm curious to find out what sort of specs people normally use. Is the
> external storage essential, or will the individual disks on each node
> be sufficient? Why would you need external storage in a Hadoop
> cluster? How can I find out what other Hadoop projects are using?
> Cheers
> Arijit
>
>
> Dr. Arijit Mukherjee
> Principal Member of Technical Staff, Level-II
> Connectiva Systems (I) Pvt. Ltd.
> J-2, Block GP, Sector V, Salt Lake
> Kolkata 700 091, India
> Phone: +91 (0)33 23577531/32 x 107
> http://www.connectivasystems.com
>
