I really don't know if there is any more I can do over email. You might want
to look at the metrics to see if anything out of the ordinary is happening on
these nodes just before or just after the error happens. Is there anything
else in the logs that looks a little bit odd compared to the others?
I suspect that HDFS and/or its local disk may be full or sick
The problem occurs after a job has been running at least 10 hours -
I am too new at this to know where to look to see how bad a state HDFS is in,
and could use some pointers.
There are points in the job where the reducer writes to HDFS.
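(As a pointer on checking how full or healthy HDFS is: on a 0.20-era cluster the usual
command-line checks are "hadoop fsck /" and "hadoop dfsadmin -report". The sketch below is
a rough programmatic equivalent of the latter; the DistributedFileSystem and DatanodeInfo
calls are recalled from that API generation rather than taken from this thread, so treat it
as an illustration, not a drop-in tool.)

// Rough sketch: print per-datanode capacity/used/remaining, roughly what
// "hadoop dfsadmin -report" shows. Assumes the default filesystem is HDFS.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class HdfsSpaceCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    if (!(fs instanceof DistributedFileSystem)) {
      System.err.println("Default filesystem is not HDFS: " + fs.getUri());
      return;
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    for (DatanodeInfo dn : dfs.getDataNodeStats()) {
      System.out.printf("%s capacity=%dGB used=%dGB remaining=%dGB%n",
          dn.getName(), gb(dn.getCapacity()), gb(dn.getDfsUsed()), gb(dn.getRemaining()));
    }
  }

  private static long gb(long bytes) {
    return bytes / (1024L * 1024L * 1024L);
  }
}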
Did you mean 0.20.2?
If so, then wow, that is a bit of a stumper. Line 200 of BZip2Codec.java is the
following:
196:  public void write(int b) throws IOException {
197:    if (needsReset) {
198:      internalReset();
199:    }
200:    this.output.write(b);
201:  }
So it must be that this.output is null at that point.
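(For reference: a NullPointerException at that line means this.output, the compressor
stream held inside the codec's CompressionOutputStream, was null at the moment of the
write. Below is a minimal, hypothetical sketch of how that stream is normally obtained and
written to; it is not the poster's job code, and the path and data are made up.)

import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.CompressionOutputStream;

public class BZip2WriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path out = new Path("/tmp/bzip2-sketch.bz2");   // made-up path
    OutputStream raw = fs.create(out);              // the underlying HDFS stream

    BZip2Codec codec = new BZip2Codec();
    CompressionOutputStream cos = codec.createOutputStream(raw);
    try {
      // Writes here go through the codec's CompressionOutputStream, the class
      // whose write(int) is quoted above.
      cos.write("hello bzip2".getBytes("UTF-8"));
    } finally {
      cos.close();   // also closes the wrapped HDFS stream
    }
  }
}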
0.202 and using that API -
On Mon, Nov 7, 2011 at 8:27 AM, Robert Evans wrote:
> What version of Hadoop are you using?
>
>
>
> On 11/5/11 11:09 AM, "Steve Lewis" wrote:
>
> My job is dying during a map task write. This happened in enough tasks to
> kill the job, although most tasks succeeded -
What version of Hadoop are you using?
On 11/5/11 11:09 AM, "Steve Lewis" wrote:
My job is dying during a map task write. This happened in enough tasks to kill
the job, although most tasks succeeded -
Any ideas as to where to start diagnosing the problem?
Caused by: java.lang.NullPointerException
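(For context on how a map task write ends up inside BZip2Codec at all: the sketch below
shows a typical 0.20-style, old-API (org.apache.hadoop.mapred) job configuration that
compresses job output with bzip2, so each output record is written through the codec's
CompressionOutputStream. This is only an assumption about how the codec is wired in; the
poster's actual job setup is not shown in the thread.)

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class CompressedOutputJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(CompressedOutputJob.class);
    conf.setJobName("bzip2-output-sketch");

    // Compress the final output with bzip2; every record written to the output
    // then passes through BZip2Codec's CompressionOutputStream.
    FileOutputFormat.setCompressOutput(conf, true);
    FileOutputFormat.setOutputCompressorClass(conf, BZip2Codec.class);

    // Intermediate map output can also be compressed with the same codec:
    // conf.setCompressMapOutput(true);
    // conf.setMapOutputCompressorClass(BZip2Codec.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}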