stack wrote:
Hey Stephen:
> On 1., OOME was in client? When I see the 'all datanodes are bad' message,
> it usually means hdfs has gone away.
> In 2., you see 'No node available for block'. This and the above would seem
> to indicate you are suffering from a lack of
> https://issues.apache.org/jira/browse/HDFS-127.
Yeah - sounds like something like that alright. I'll need to tune these
nodes once I understand Hadoop a little better - and maybe upgrade too.
> If you can shut down hbase, then 3., is for sure the way to go -- it's
> complete and runs quickest. I'm surprised though that it would complain of
> missing blocks when fsck does not.
Yeah, I was too. I figured a clean bill of health from fsck was good
enough - but it looks like it missed something. Does it seem likely my
hbase is somehow corrupt, or is it robust enough to tolerate those
missing blocks? Running a count on my old and new hbase, it looks like
my new hbase (from the backup) has slightly fewer rows ... but is much,
much faster.
Is there a hbase fsck or verification process?
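As far as I know there was no dedicated "hbase fsck" tool in this era, so verification came down to checking HDFS itself and comparing row counts. A rough sketch of both checks, assuming hadoop and hbase are on the PATH, the default /hbase rootdir, and placeholder table names ('old_table', 'new_table' are hypothetical, not from this thread):

```shell
# Deep-check the HDFS tree under the HBase root directory for
# missing or corrupt blocks (assumes the default /hbase rootdir):
hadoop fsck /hbase -files -blocks

# Compare row counts between the old and restored tables using the
# RowCounter MapReduce job bundled with HBase 0.20:
for t in old_table new_table; do
  echo "counting $t"
  hbase org.apache.hadoop.hbase.mapreduce.RowCounter "$t"
done
```

The HBase shell's `count 'table'` command does the same comparison serially, so the MapReduce RowCounter is usually the faster option on large tables.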
> Can we get you to migrate to 0.20.0?
I plan to. But I wasn't clear whether it was safe yet - is it? :)
-stephen
--
Stephen Mulcahy, DI2, Digital Enterprise Research Institute,
NUI Galway, IDA Business Park, Lower Dangan, Galway, Ireland
http://di2.deri.ie http://webstar.deri.ie http://sindice.com