---- On Wed, 21 Apr 2010 15:01:17 +0530 Steve Loughran <[email protected]> wrote ----

>manas.tomar wrote: 
>> I have set up Hadoop on an OpenSuse 11.2 VM using VirtualBox. I ran the 
>> Hadoop examples in standalone mode successfully. 
>> Now I want to run in distributed mode using 2 nodes. 
>> Hadoop starts fine and jps lists all the daemons. But when I try to put a 
>> file or run any example, I get an error. For example: 
>> 
>> had...@master:~/hadoop> ./bin/hadoop dfs -copyFromLocal ./input inputsample 
>> 10/04/17 14:42:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Operation not supported 
>> 10/04/17 14:42:46 INFO hdfs.DFSClient: Abandoning block blk_8951413748418693186_1080 
>> .... 
>> 10/04/17 14:43:04 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available 
>> 10/04/17 14:43:04 INFO hdfs.DFSClient: Abandoning block blk_838428157309440632_1081 
>> 10/04/17 14:43:10 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block. 
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845) 
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102) 
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288) 
>> 
>> 10/04/17 14:43:10 WARN hdfs.DFSClient: Error Recovery for block blk_838428157309440632_1081 bad datanode[0] nodes == null 
>> 10/04/17 14:43:10 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/inputsample/check" - Aborting... 
>> copyFromLocal: Protocol not available 
>> 10/04/17 14:43:10 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/inputsample/check : java.net.SocketException: Protocol not available 
>> java.net.SocketException: Protocol not available 
>>     at sun.nio.ch.Net.getIntOption0(Native Method) 
>>     at sun.nio.ch.Net.getIntOption(Net.java:178) 
>>     at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419) 
>>     at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60) 
>>     at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156) 
>>     at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286) 
>>     at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129) 
>>     at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328) 
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873) 
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826) 
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102) 
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288) 
>> 
>> I can see the files on HDFS through the web interface, but they are empty. 
>> Any suggestions on how I can get past this? 
>> 
> 
>That is a very low-level socket error; I would file a bug report against 
>Hadoop and include all machine details, as there is something very odd 
>about your underlying machine or network stack that is stopping Hadoop 
>from tweaking TCP buffer sizes.
>

Thanks.
Any suggestions on how to zero in on the cause?
I want to know whether it is Hadoop or my network config, 
i.e. any of OpenSuse/VirtualBox or Vista, before I file a bug report.
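One way to separate the two: the stack trace shows the failure inside 
SocketAdaptor.getSendBufferSize(), so a small standalone Java program that 
makes the same socket-option call should reproduce the error outside Hadoop 
if the fault is in the VM or network stack. A minimal sketch follows; the 
host and port are placeholders, so point them at anything accepting TCP 
connections in your setup (the NameNode, say), and run it with the same JVM 
that runs Hadoop:

import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

// Standalone probe for the call that fails in the stack trace above:
// Socket.getSendBufferSize() via sun.nio.ch.SocketAdaptor.
public class SendBufferProbe {
    public static void main(String[] args) throws Exception {
        SocketChannel channel = SocketChannel.open();
        // Placeholder endpoint -- substitute any host/port that accepts
        // a TCP connection in your cluster, e.g. the NameNode's.
        channel.connect(new InetSocketAddress("master", 9000));
        try {
            // getSendBufferSize() is the exact call from the stack
            // trace; a broken stack should throw SocketException
            // ("Protocol not available") right here.
            System.out.println("send buffer    = "
                    + channel.socket().getSendBufferSize());
            // Extra sanity check on the matching receive-side option.
            System.out.println("receive buffer = "
                    + channel.socket().getReceiveBufferSize());
        } finally {
            channel.close();
        }
    }
}

If this probe throws the same java.net.SocketException inside the OpenSuse 
guest, the problem is in the VirtualBox/guest network stack or the JVM 
rather than Hadoop; if it runs cleanly there, that strengthens the case for 
a Hadoop bug report. Running it both inside the VM and on the Vista host 
should narrow things down further.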
