Álvaro,

Have you checked the health of HDFS? Maybe your cluster ran out of space, or you don't have DataNodes running.
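For instance, something along these lines should tell you quickly (a sketch, assuming the Hadoop 1.x client scripts are on the PATH of a node that can reach the NameNode):

```shell
# Summarize cluster capacity and list live/dead DataNodes
hadoop dfsadmin -report

# Check the NameNode's safe mode status (writes fail while it is ON)
hadoop dfsadmin -safemode get

# Walk the namespace and report missing, corrupt, or under-replicated blocks
hadoop fsck /
```

If `fsck` reports missing or corrupt blocks under /hbase, that would line up with the "could only be replicated to 0 nodes" errors below.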
Esteban

> On Apr 5, 2014, at 10:11, haosdent <[email protected]> wrote:
>
> From the log information, it seems you lost blocks.
>
> On 2014-04-06 at 12:38 AM, "Álvaro Recuero" <[email protected]> wrote:
>
>> Has anyone come across this before? There is still space in the RS, and I can
>> confirm this is not a problem of DataNode availability. Cheers.
>>
>> 2014-04-05 09:55:19,210 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
>> 2014-04-05 09:55:19,211 DEBUG org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Path=hdfs://taurus-5.lyon.grid5000.fr:9000/hbase/usertable/fc55e2d2d4bcec49d6fedf5a469353b9/recovered.edits/0000000000002550928.temp, syncFs=true, hflush=false, compression=false
>> 2014-04-05 09:55:19,211 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Creating writer path=hdfs://taurus-5.lyon.grid5000.fr:9000/hbase/usertable/fc55e2d2d4bcec49d6fedf5a469353b9/recovered.edits/0000000000002550928.temp region=fc55e2d2d4bcec49d6fedf5a469353b9
>> 2014-04-05 09:55:19,233 DEBUG org.apache.hadoop.hbase.regionserver.SplitLogWorker: tasks arrived or departed
>> 2014-04-05 09:55:19,233 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hbase/usertable/237859a0b1e47c86c25a6123506ccb2a/recovered.edits/0000000000002550921.temp could only be replicated to 0 nodes, instead of 1
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>>     at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:616)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:416)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>>
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1070)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>     at sun.proxy.$Proxy9.addBlock(Unknown Source)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:616)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>     at sun.proxy.$Proxy9.addBlock(Unknown Source)
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
>>     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
>>
>> 2014-04-05 09:55:19,233 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
>> 2014-04-05 09:55:19,233 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/hbase/usertable/237859a0b1e47c86c25a6123506ccb2a/recovered.edits/0000000000002550921.temp" - Aborting...
