On Sat, Jun 7, 2008 at 5:25 PM, Chris K Wensel <[EMAIL PROTECTED]> wrote:
The new scripts do not use the start/stop-all.sh scripts, and thus do
not maintain the slaves file. This is so cluster startup is much
faster and a bit more reliable (keys do not need to be pushed to the
slaves). Also we can grow the cluster lazily just by starting slave
nodes. That is, the
I should chalk this up to stupidity on my part (though the hidden shell execution within the client, whose error gets masked, is somewhat fickle). Of course, if I don't start the thing up via the IDE but from the command line, it gets past this problem (security issue, but that one is probably a more
First of all, thanks to whoever maintains the hadoop-ec2 scripts.
They've saved us untold time and frustration getting started with a
small testing cluster (5 instances).
A question: when we log into the newly created cluster and run jobs
from the example jar (pi, etc.), everything works great. We
Sorry in advance if these "challenges" are covered in a document somewhere.
I have set up Hadoop on a CentOS 64-bit Linux box. I have verified that it is
up and running only by seeing the Java processes running and by confirming
that I can access it from the admin UI.
The Hadoop version is 1.7.0, but I also
Hi,
What is the maximum number of files that can be stored on HDFS? Does it
depend on the namenode memory configuration? Also, does this impact the
performance of the namenode in any way?
Thanks in advance,
Karthik
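The file count on HDFS is indeed bounded by namenode heap, since the namenode keeps all file-system metadata in memory. A rough back-of-the-envelope sketch, assuming the commonly cited rule of thumb of roughly 150 bytes of namenode heap per file-system object (file, directory, or block) — the figures below are illustrative, not measured:

```python
# Rough estimate of how many file-system objects a namenode heap can hold.
# Assumption (rule of thumb, not a measured figure): each inode or block
# costs on the order of 150 bytes of namenode heap.

BYTES_PER_OBJECT = 150  # approximate heap cost per file, directory, or block

def max_objects(heap_bytes, bytes_per_object=BYTES_PER_OBJECT):
    """Approximate number of file-system objects a given heap can track."""
    return heap_bytes // bytes_per_object

# Example: a 1 GB namenode heap supports on the order of millions of objects.
heap = 1 * 1024**3
print(max_objects(heap))  # roughly 7 million objects
```

By this estimate, scaling the file count means scaling the namenode heap, and a very large number of small files degrades the namenode well before disk space runs out.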