Try the options listed here: http://wiki.apache.org/hadoop/FAQ#What_does_.22file_could_only_be_replicated_to_0_nodes.2C_instead_of_1.22_mean.3F
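In a pseudo-distributed setup this error usually means the NameNode sees no live DataNodes; dfs.replication=1 by itself is fine on a single node. A quick way to check (a minimal sketch, assuming a standard 0.20.x install with the hadoop script on your PATH):

    # Is a DataNode JVM actually running on this box?
    jps

    # Ask the NameNode directly; a report showing
    # "Datanodes available: 0" confirms the problem.
    hadoop dfsadmin -report

If no DataNode is up, its log under $HADOOP_HOME/logs should say why; per the FAQ above, a namespaceID mismatch after reformatting the NameNode is a common culprit on single-node setups.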
On Thu, Jan 26, 2012 at 10:47 PM, neuron005 <[email protected]> wrote:
>
> Hi there,
> I earlier used HBase locally, with my ext3 filesystem backing it. That
> worked OK :). Now I have moved on to the next step of setting it up on
> HDFS. I am using hadoop-0.20.2 and hbase-0.90.4 in pseudo-distributed
> mode, and I am getting this error in my log:
>
> 2012-01-26 22:37:50,629 DEBUG org.apache.hadoop.hbase.util.FSUtils: Created version file at hdfs://89neuron:9000/hbase set its version at:7
> 2012-01-26 22:37:50,637 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy6.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy6.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> 2012-01-26 22:37:50,638 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> 2012-01-26 22:37:50,638 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/hbase.version" - Aborting...
> 2012-01-26 22:37:50,638 WARN org.apache.hadoop.hbase.util.FSUtils: Unable to create version file at hdfs://89neuron:9000/hbase, retrying:
> java.io.IOException: File /hbase/hbase.version could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
> It looks like dfs.replication, which is set to 1, is the problem, but I
> cannot confirm that it actually is. Please help me out.
> Thanks in advance.
> --
> View this message in context:
> http://old.nabble.com/Error-While-using-hbase-with-hadoop-tp33208913p33208913.html
> Sent from the HBase User mailing list archive at Nabble.com.

--
Harsh J
Customer Ops. Engineer, Cloudera
