Yes, this explains the infinite loop, but it does not explain how the block got corrupted, or why the read failure is intermittent,
which are the more interesting questions :-)
We'll need more information to track that.

--Konstantin

Koji Noguchi wrote:

Are you seeing this?
https://issues.apache.org/jira/browse/HADOOP-1911

Koji

Konstantin Shvachko wrote:

Does fsck return HEALTHY status?
What is your block replication factor?
If one of the data-nodes is flaky and a particular block exists only on that node, that could explain it.
You might want to examine the nodes or increase the replication factor.
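Both checks can be done from the command line. A minimal sketch, assuming the 0.13/0.14-era `hadoop` CLI; the path and the target replication factor of 3 are placeholders:

```shell
# Check filesystem health; look for MISSING or UNDER-REPLICATED
# blocks in the report, and note which data-nodes hold each block.
hadoop fsck / -files -blocks -locations

# Raise the replication factor of the affected path
# (-w waits until the new replication level is reached).
hadoop dfs -setrep -w 3 /path/to/data
```

Running these requires a live HDFS cluster, so they cannot be verified standalone; check the fsck output for a HEALTHY status line.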

Open Study wrote:

Hi, when I use the "hadoop dfs -cat" command, I keep getting an error that says "Could not obtain block 0 from any node: java.io.IOException: No live
nodes contain current block".

The block does exist, since the "cat" command doesn't always fail: occasionally
it returns the desired result, but most of the time I just get that error.

I also checked the Hadoop web console and found that all data-nodes are alive.

I'm using Hadoop 0.13.1, deployed on a cluster of 6 servers, all running
AMD64 with OpenSuse 10.2 (64-bit) and 2 GB RAM.

This problem started only recently, after I imported a bulk of data (1 GB) into
HDFS.

Any idea how I can fix it? Or will upgrading to 0.14.2 help?

Thanks




