Hi Doug,

Let me see if I understand. You are starting a Hadoop cluster using
Whirr 0.4.0 from your local machine and you want to use it from a
different EC2 instance running in another security group.

The issue you are seeing is probably related to how DNS resolution
works inside and outside the Amazon network (e.g.
ec2-174-129-68-49.compute-1.amazonaws.com resolves to the public IP
outside the Amazon network and to the private IP inside).
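
For example, a lookup from your local machine should return the public
address, while the same lookup from your EC2 instance returns the
private one (the output below is only illustrative, based on the
addresses in your stack trace):

  # from outside the Amazon network
  $ dig +short ec2-174-129-68-49.compute-1.amazonaws.com
  174.129.68.49

  # from an instance inside EC2
  $ dig +short ec2-174-129-68-49.compute-1.amazonaws.com
  10.192.214.192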

I've done some testing by creating a similar environment, and it seems
like the easiest workaround is to replace the hostname with the public
IP in hadoop-site.xml (the one generated on your local machine) and to
update hadoop-proxy.sh in a similar fashion.
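
Something like this should do it, assuming the files are in
~/.whirr/<cluster-name>/ (the default location) and using the hostname
and public IP from your stack trace (adjust both to match your
cluster):

  # GNU sed; rewrite the namenode hostname to its public IP in both files
  $ sed -i 's/ec2-174-129-68-49\.compute-1\.amazonaws\.com/174.129.68.49/g' \
      ~/.whirr/<cluster-name>/hadoop-site.xml \
      ~/.whirr/<cluster-name>/hadoop-proxy.sh

After that, restart hadoop-proxy.sh and try 'hadoop fs -ls /' again.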

I am not sure, but I think we should consider updating the code to
replace the hostnames with IPs by default.

Let me know if you need more help with this one.

Cheers,

-- Andrei Savu / andreisavu.ro

On Thu, Jun 2, 2011 at 8:02 AM, Doug Daniels <ddani...@mortardata.com> wrote:
> Hi,
>
> Using whirr 0.4.0 I'm able to start up and communicate with a hadoop cluster 
> from my local machine, but I'm having trouble doing so from an ec2 instance.
>
> Everything seems to start fine, but after I start the hadoop-proxy.sh I get 
> this error message when I try to do 'hadoop fs -ls /':
>
> ubuntu@domU-12-31-39-04-1E-48:~$ hadoop fs -ls /
> 11/06/02 04:57:16 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found 
> in the classpath. Usage of hadoop-site.xml is deprecated. Instead use 
> core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of 
> core-default.xml, mapred-default.xml and hdfs-default.xml respectively
> Bad connection to FS. command aborted. exception: Call to 
> ec2-174-129-68-49.compute-1.amazonaws.com/10.192.214.192:8020 failed on local 
> exception: java.io.EOFException
> ubuntu@domU-12-31-39-04-1E-48:~$
>
> I can ssh directly to the machines, but can't seem to communicate using 
> hadoop.
>
> Does anyone have any ideas what might be happening there?
>
> Thanks,
> Doug
