Hi

Can one of the implementors comment on what conditions trigger this error?

All the data nodes show up as commissioned, and there are no errors during startup.

If I google this error, I find several posts reporting the issue, but most of
the answers offer weak fixes like reformatting and restarting, none of which
help.

My guess is that this is a networking/port-access issue. If anyone can shed
light on the conditions that cause this error, it would be much appreciated.
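
For what it's worth, these are the checks I am running to test the networking
theory. I am assuming the Hadoop 2.2.0 defaults here: 50010 for the
dfs.datanode.address data-transfer port, and logs under $HADOOP_HOME/logs.

# confirm both datanodes are registered and report non-zero remaining space
bin/hdfs dfsadmin -report

# confirm the client machine can reach each datanode's data-transfer port
nc -zv n-prd-bst-beacon01 50010
nc -zv n-prd-bst-beacon02.advertising.aol.com 50010

# look for replication/connection errors on the namenode side
grep -iE "replicated|exception" logs/hadoop-*-namenode-*.log | tail -20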

regards




On Mon, Feb 24, 2014 at 1:07 PM, Manoj Khangaonkar <khangaon...@gmail.com> wrote:

> Hi,
>
> I set up a cluster with:
>
> machine1: namenode and datanode
> machine2: datanode
>
> A simple HDFS copy is not working. Can someone help with this issue?
> Several folks have posted this error on the web, but I have not seen a good
> reason or solution.
>
> command:
> bin/hadoop fs -copyFromLocal ~/hello /manoj/
>
> Error:
> copyFromLocal: File /manoj/hello._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
> 14/02/24 12:56:38 ERROR hdfs.DFSClient: Failed to close file /manoj/hello._COPYING_
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /manoj/hello._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>     at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
>
> My setup is very basic:
> core-site.xml
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://n-prd-bst-beacon01:9000</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/manoj/hadoop-2.2.0/tmp</value>
>   </property>
> </configuration>
>
> hdfs-site.xml
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.permissions</name>
>     <value>false</value>
>   </property>
> </configuration>
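>
> (Side note in case someone copies this config: as far as I know,
> fs.default.name is the deprecated Hadoop 1.x key; the 2.x spelling is
> fs.defaultFS, i.e.
>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://n-prd-bst-beacon01:9000</value>
> </property>
>
> Both keys are still honoured in 2.2.0, so I doubt that is the problem.)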
>
> slaves:
> localhost
> n-prd-bst-beacon02.advertising.aol.com
>
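> In case name resolution matters: one failure mode I have seen described for
> this exact error is a datanode whose own hostname resolves to 127.0.0.1 (via
> /etc/hosts), which makes it register with the namenode under the loopback
> address. A quick check on each machine (getent is the standard glibc lookup
> tool; the hostnames are mine):
>
> getent hosts n-prd-bst-beacon01
> getent hosts n-prd-bst-beacon02.advertising.aol.com
> # neither should print 127.0.0.1
>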
> The namenode and the datanodes (on both machines) are up and running without errors.
>
> regards
>
> --
> http://khangaonkar.blogspot.com/
>



-- 
http://khangaonkar.blogspot.com/
