on Eucalyptus. Please let me know how to do this.
Thanks for the description, Chris. Now that I understand the basic model, I'm starting to see how the configuration is passed to the slaves using the -d option of ec2-run-instances. One config question: on our cluster (Hadoop 0.17 with INSTANCE_TYPE=m1.small), the conf/hadoop-default.xml has
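For context, a minimal sketch of the user-data mechanism described above. Every concrete value here (master hostname, AMI ID, variable names) is made up for illustration and is not taken from the actual hadoop-ec2 scripts:

```shell
#!/bin/sh
# Sketch only: the launch script builds a user-data payload and hands it
# to ec2-run-instances with -d; each booting slave reads it back from the
# EC2 instance metadata and writes its hadoop-site.xml accordingly.
MASTER_HOST="ec2-00-00-00-00.compute-1.amazonaws.com"  # hypothetical master
USER_DATA="MASTER_HOST=${MASTER_HOST},NUM_SLAVES=5"
# Echoed rather than executed, since a real run needs EC2 credentials:
echo ec2-run-instances ami-00000000 -n 5 -d "${USER_DATA}"
```

The point of the -d flag is exactly this: the slaves never need per-node config files pushed to them; they derive their configuration from the user-data string at boot.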
First of all, thanks to whoever maintains the hadoop-ec2 scripts. They've saved us untold time and frustration getting started with a small testing cluster (5 instances). A question: when we log into the newly created cluster and run jobs from the examples jar (pi, etc.), everything works great. We
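For anyone following along, the kind of job run described above looks roughly like this on the master node. The install path and jar name are assumptions for a Hadoop 0.17 EC2 image, not verified values:

```shell
#!/bin/sh
# Sketch: running the pi example from the bundled examples jar.
# HADOOP_HOME is an assumed install location; adjust to your image.
HADOOP_HOME=/usr/local/hadoop-0.17.0
# pi takes (number of maps, samples per map); echoed here rather than
# executed, since it needs a live cluster to run against:
echo "$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-0.17.0-examples.jar pi 10 100"
```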
The new scripts do not use the start/stop-all.sh scripts, and thus do not maintain the slaves file. This makes cluster startup much faster and a bit more reliable (keys do not need to be pushed to the slaves). Also, we can grow the cluster lazily just by starting new slave nodes. That is,
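A sketch of what growing the cluster lazily amounts to on a newly booted slave, assuming its hadoop-site.xml already points fs.default.name and mapred.job.tracker at the master; the install path is illustrative:

```shell
#!/bin/sh
# Sketch (dry run): a new slave joins the running cluster by starting its
# own per-node daemons; no slaves file entry on the master and no SSH key
# push are required.
HADOOP_HOME=/usr/local/hadoop-0.17.0   # assumed install path
STARTED=""
for daemon in datanode tasktracker; do
  # Echoed, not executed, since this needs a real Hadoop install:
  echo "$HADOOP_HOME/bin/hadoop-daemon.sh start $daemon"
  STARTED="$STARTED $daemon"
done
```

The datanode registers with the namenode and the tasktracker with the jobtracker on startup, which is why no bookkeeping on the master is needed.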