The *FileNotFoundException* was thrown when I tried to submit a job
calculating PI. Actually, no such exception is thrown when I submit a
wordcount job, but I can still see "Exception from container-launch"...
and any other jobs would throw such exceptions.
Every job runs successfully when I comment out the memory-related
configurations.
The FileNotFoundException doesn't mean anything in the pi program. If
there is an error and the program doesn't run successfully, it will always
throw this exception.
What do you have in the opts?
Regards,
*Stanley Shi,*
On Mon, May 12, 2014 at 2:09 PM, Tao Xiao xiaotao.cs@gmail.com wrote:
This is caused by the properties *mapreduce.map.java.opts* and
*mapreduce.reduce.java.opts*.
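For reference, these JVM opts have to fit inside the corresponding container sizes: a common cause of container-launch failures is an -Xmx in *mapreduce.map.java.opts* / *mapreduce.reduce.java.opts* that exceeds *mapreduce.map.memory.mb* / *mapreduce.reduce.memory.mb*. A minimal consistent mapred-site.xml sketch (the values below are illustrative, not taken from the poster's cluster; ~80% of the container size for the heap is a common rule of thumb):

```xml
<!-- mapred-site.xml: values are illustrative only.
     The JVM heap (-Xmx) must be smaller than the container size
     (memory.mb), leaving headroom for non-heap JVM memory. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
</property>
```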
2014-05-11 16:16 GMT+08:00 Tao Xiao xiaotao.cs@gmail.com:
I installed Hadoop-2.2 in a cluster of 4 nodes, following Hadoop YARN
Installation: The Definitive Guide
(http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide).
The configurations are as follows:
~/.bashrc: http://pastebin.com/zQgwuQv2
I'm sure this problem is caused by an incorrect configuration. I commented
out all the configurations regarding memory, and then jobs could run
successfully.
2014-05-11 0:01 GMT+08:00 Tao Xiao xiaotao.cs@gmail.com:
I installed Hadoop-2.2 in a cluster of 4 nodes, following Hadoop YARN
Installation: The Definitive Guide.
Sounds odd. So (1) you got a FileNotFoundException and (2) you fixed it by
commenting out memory-specific config parameters?
Not sure how that would work... Any other details, or am I missing something
else?
On May 11, 2014, at 4:16 AM, Tao Xiao xiaotao.cs@gmail.com wrote:
I'm sure