I think you should set up passwordless SSH from the master to all VMs.
You can do this by running these commands on the master:
ssh-keygen -t rsa -P ""
ssh-copy-id -i $HOME/.ssh/id_rsa.pub slave
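For a cluster with several slaves, the same key can be pushed to each of them in a loop. A minimal dry-run sketch: it only prints the ssh-copy-id command per host so the list can be reviewed first (pipe the output to sh to actually run it). slave1..slave3 are placeholder hostnames, not taken from your cluster:

```shell
#!/bin/sh
# Print one ssh-copy-id command per slave (dry run).
# slave1..slave3 are placeholders; replace with your own hostnames.
distribute_key() {
    for host in "$@"; do
        printf 'ssh-copy-id -i %s/.ssh/id_rsa.pub %s\n' "$HOME" "$host"
    done
}

distribute_key slave1 slave2 slave3
```

Run the key generation once beforehand, then `distribute_key ... | sh` to execute.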

-Thanks and Regards,
Rahul Patodi
Associate Software Engineer,
Impetus Infotech (India) Private Limited,
www.impetus.com
Mob:09907074413

On Tue, Dec 7, 2010 at 10:38 AM, Adarsh Sharma <adarsh.sha...@orkash.com> wrote:

> li ping wrote:
>
>> Make sure the VMs can reach each other (e.g., check iptables), and that
>> the DNS/IP configuration is correct.
>>
>> On Mon, Dec 6, 2010 at 7:05 PM, Adarsh Sharma <adarsh.sha...@orkash.com>
>> wrote:
>>
>>
>>
>>> Dear all,
>>>
>>> I am facing the problem below while running Hadoop on VMs. I am using
>>> hadoop-0.20.2 with JDK 6.
>>>
>>> My jobtracker log says:
>>>
>>> 2010-12-06 15:16:06,618 INFO org.apache.hadoop.mapred.JobTracker:
>>> JobTracker up at: 54311
>>> 2010-12-06 15:16:06,618 INFO org.apache.hadoop.mapred.JobTracker:
>>> JobTracker webserver: 50030
>>> 2010-12-06 15:16:06,738 INFO org.apache.hadoop.mapred.JobTracker:
>>> Cleaning
>>> up the system directory
>>> 2010-12-06 15:16:06,801 INFO
>>> org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is
>>> inactive
>>> 2010-12-06 15:17:15,830 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in
>>> createBlockOutputStream java.net.SocketTimeoutException: 69000 millis
>>> timeout while waiting for channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.56:50010]
>>> 2010-12-06 15:17:15,830 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_377241628391316172_1001
>>> 2010-12-06 15:17:15,832 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
>>> find target node: 192.168.0.56:50010
>>> 2010-12-06 15:18:30,836 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in
>>> createBlockOutputStream java.net.SocketTimeoutException: 69000 millis
>>> timeout while waiting for channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.56:50010]
>>> 2010-12-06 15:18:30,836 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_2025622418653738085_1001
>>> 2010-12-06 15:18:30,838 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
>>> find target node: 192.168.0.56:50010
>>> 2010-12-06 15:19:45,842 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in
>>> createBlockOutputStream java.net.SocketTimeoutException: 69000 millis
>>> timeout while waiting for channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.61:50010]
>>> 2010-12-06 15:19:45,843 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_696328516245550547_1001
>>> 2010-12-06 15:19:45,845 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
>>> find target node: 192.168.0.61:50010
>>> 2010-12-06 15:21:00,849 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in
>>> createBlockOutputStream java.net.SocketTimeoutException: 69000 millis
>>> timeout while waiting for channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.55:50010]
>>> 2010-12-06 15:21:00,849 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_6110605884701761678_1001
>>> 2010-12-06 15:21:00,853 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
>>> find target node: 192.168.0.55:50010
>>> 2010-12-06 15:21:06,854 WARN org.apache.hadoop.hdfs.DFSClient:
>>> DataStreamer
>>> Exception: java.io.IOException: Unable to create new block.
>>>      at
>>>
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
>>>      at
>>>
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>>>      at
>>>
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>>>
>>> 2010-12-06 15:21:06,855 WARN org.apache.hadoop.hdfs.DFSClient: Error
>>> Recovery for block blk_6110605884701761678_1001 bad datanode[0] nodes ==
>>> null
>>> 2010-12-06 15:21:06,855 WARN org.apache.hadoop.hdfs.DFSClient: Could not
>>> get block locations. Source file "/home/hadoop/mapred/system/
>>> jobtracker.info" - Aborting...
>>> 2010-12-06 15:21:06,855 WARN org.apache.hadoop.mapred.JobTracker: Writing
>>> to file
>>> hdfs://ws-test:54310/home/hadoop/mapred/system/jobtracker.info failed!
>>> 2010-12-06 15:21:06,855 WARN org.apache.hadoop.mapred.JobTracker:
>>> FileSystem is not ready yet!
>>> 2010-12-06 15:21:06,862 WARN org.apache.hadoop.mapred.JobTracker: Failed
>>> to
>>> initialize recovery manager.
>>> java.net.SocketTimeoutException: 69000 millis timeout while waiting for
>>> channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.55:50010]
>>>      at
>>>
>>> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:213)
>>>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>      at
>>>
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2870)
>>>      at
>>>
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
>>>      at
>>>
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>>>      at
>>>
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>>> 2010-12-06 15:21:16,864 WARN org.apache.hadoop.mapred.JobTracker:
>>> Retrying...
>>> 2010-12-06 15:22:25,879 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in
>>> createBlockOutputStream java.net.SocketTimeoutException: 69000 millis
>>> timeout while waiting for channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.61:50010]
>>> 2010-12-06 15:22:25,879 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_172376134548226023_1002
>>> 2010-12-06 15:22:25,882 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
>>> find target node: 192.168.0.61:50010
>>> 2010-12-06 15:23:40,886 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in
>>> createBlockOutputStream java.net.SocketTimeoutException: 69000 millis
>>> timeout while waiting for channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.55:50010]
>>> 2010-12-06 15:23:40,886 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_4842978800729986815_1002
>>> 2010-12-06 15:23:40,888 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
>>> find target node: 192.168.0.55:50010
>>> 2010-12-06 15:24:55,891 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in
>>> createBlockOutputStream java.net.SocketTimeoutException: 69000 millis
>>> timeout while waiting for channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.56:50010]
>>> 2010-12-06 15:24:55,891 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_-680322070618990602_1002
>>> 2010-12-06 15:24:55,894 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
>>> find target node: 192.168.0.56:50010
>>> 2010-12-06 15:26:10,897 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in
>>> createBlockOutputStream java.net.SocketTimeoutException: 69000 millis
>>> timeout while waiting for channel to be ready for connect. ch :
>>> java.nio.channels.SocketChannel[connection-pending remote=/
>>> 192.168.0.60:50010]
>>> 2010-12-06 15:26:10,897 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_-7796705320586033371_1002
>>> 2010-12-06 15:26:10,899 INFO org.apache.hadoop.hdfs.DFSClient: Waiting to
>>> find target node: 192.168.0.60:50010
>>> 2010-12-06 15:26:16,900 WARN org.apache.hadoop.hdfs.DFSClient:
>>> DataStreamer
>>> Exception: java.io.IOException: Unable to create new block.
>>>      at
>>>
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
>>>
>>> Please help me to find the root cause.
>>>
>>> Thanks & Regards
>>> Adarsh Sharma
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
> Thanks a lot!
> The virtual machines were not able to connect to each other.
>
> A foolish mistake.
>
>
> Regards
> Adarsh Sharma
>
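As a postscript for anyone who hits the same SocketTimeoutException: reachability of the datanode port can be checked from the master before starting the daemons. A minimal sketch using the IPs and port 50010 reported in the quoted log; it assumes `timeout` and `bash` are available on the master:

```shell
#!/bin/sh
# Probe HOST PORT over TCP with a 2-second timeout and report the result.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are needed.
probe() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 open"
    else
        echo "$1:$2 closed"
    fi
}

# IPs taken from the log above; adjust for your own cluster.
for ip in 192.168.0.55 192.168.0.56 192.168.0.60 192.168.0.61; do
    probe "$ip" 50010
done
```

Any line reporting "closed" points at a firewall (iptables) or VM networking problem rather than at Hadoop itself.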


