Hi Toby,

> > > Are you able to do 'bin/hadoop-ec2 launch-cluster' then (on your 
> > > workstation)
> > >
> > > . bin/hadoop-ec2-env.sh
> > > ssh $SSH_OPTS "[EMAIL PROTECTED]" "sed -i -e
> > > \"s/$MASTER_HOST/\$(hostname)/g\"
> > > /usr/local/hadoop-$HADOOP_VERSION/conf/hadoop-site.xml"
> > >
> > > and then check to see if the master host has been set correctly (to
> > > the internal IP) in the master host's hadoop-site.xml.
> >
> > Well, no, since my $MASTER_HOST is now just the external DNS name of
> > the first instance started in the reservation, but this is performed
> > as part of my launch-hadoop-cluster script. In any case, that value is
> > not set to the internal IP, but rather to the hostname portion of the
> > internal DNS name.
>
> This is a bit of a mystery to me - I'll try to reproduce it on my
> workstation.
>

I tried running the EC2 scripts and they work fine for me on Linux and
OSX. Do you have the same problems you report without the
modifications you made to the scripts (apart from those to
hadoop-ec2-env.sh)? How about if you use HDFS rather than S3?

I've also created a public 0.14.1 AMI so that might be worth trying too.

Tom
