Right, not a big issue in reality. And so far, deleting those files doesn't
seem to have had any negative impact on that (demo) installation.
Thanks,
Henning Blohm
*ZFabrik Software KG*
T: +49 6227 3984255
F: +49 6227 3984254
M: +49 1781891820
Lammstrasse 2 69190 Walldorf
[email protected] <mailto:[email protected]>
Linkedin <http://www.linkedin.com/pub/henning-blohm/0/7b5/628>
ZFabrik <http://www.zfabrik.de>
Blog <http://www.z2-environment.net/blog>
Z2-Environment <http://www.z2-environment.eu>
Z2 Wiki <http://redmine.z2-environment.net>
On 08/20/2014 05:16 PM, Jean-Marc Spaggiari wrote:
They seem to be the logs, as said before. But as you said too, it's too late
now ;) We cannot take one and look at it. Basically, when you ran out of
space, HBase most probably failed to write the logs correctly, so the files
got corrupted and were moved into this folder when a replay was attempted.
Since it's a standalone instance, I guess it's not production, so this is not
a big issue.
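For next time: if you catch it before the files are gone, it can be worth
dumping one of them with the WAL pretty-printer to see whether it still holds
readable edits. Just a sketch, and the exact subcommand depends on the HBase
version (it is "hlog" on the 0.94/0.98 line, if I remember right):

$ hbase hlog hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1406915392963
# or invoke the pretty-printer class directly (class name as of 0.94/0.98):
$ hbase org.apache.hadoop.hbase.regionserver.wal.HLogPrettyPrinter \
    hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1406915392963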
JM
2014-08-20 11:12 GMT-04:00 Henning Blohm <[email protected]>:
Ah... man... sorry for the confusion: I just noticed that the terminal was
still open. Here's the output from the delete:
$ hadoop fs -rmr /hbase/.corrupt/*
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1406915392963
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1407034197420
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1407246602219
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1407770546240
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1407773074652
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1407773969678
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241347935
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241348470
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241348677
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241349446
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241349732
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241350291
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241350733
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241351260
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408241351469
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408244952906
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408450158299
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408450158313
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408477692983
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408481294207
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408481294227
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408481294237
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408481294245
Deleted hdfs://localhost:9000/hbase/.corrupt/localhost%3A60020.1408481294257
Does that tell you anything?
Thanks,
Henning
On 08/20/2014 04:43 PM, Jean-Marc Spaggiari wrote:
Can you list the files you have under this directory?
Look at 9.6.5.3.1 in http://hbase.apache.org/book/regionserver.arch.html
They might be corrupt log files that we cannot replay. So it might be safe
to remove them, but you might have lost some data there...
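If you want to be extra careful, you could also move the directory aside
instead of deleting it outright, and only drop it once the cluster has looked
healthy for a while. Just a sketch; the backup path below is only an example
(put it somewhere outside the HBase root so HBase doesn't trip over it):

$ hadoop fs -mv /hbase/.corrupt /hbase-corrupt-backup
# once everything has looked fine for a while, drop the backup:
$ hadoop fs -rmr /hbase-corrupt-backup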
JM
2014-08-20 10:29 GMT-04:00 Henning Blohm <[email protected]>:
Nobody?
Well... I will try and see what happens...
Thanks,
Henning
On 08/11/2014 09:28 PM, Henning Blohm wrote:
Lately, on a single-node test installation, I noticed that the Hadoop/HBase
folder /hbase/.corrupt has grown quite big (probably because log splitting
failed for lack of disk space).
Is it safe to simply delete that folder?
And what could one possibly do with those problematic WAL logs?
Thanks,
Henning