Hi Hadoop users,
I am running a MapReduce job with an input file of 23 million records. I can
see that not all of our nodes are being used.
What can I change to utilize all nodes?
Containers   Mem Used   Mem Avail   Vcores used   Vcores avail
8            11.25 GB   0 B         8             0
0            0 B        11.25 GB    0             8
0            0 B        11.25 GB    0             8
8            11.25 GB   0 B
That's unusual. Are you able to submit a simple sleep job? You can do this
using:
yarn jar
$HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar
sleep -m 1 -r 1
This should finish in under a minute. Otherwise I'd suspect that your
cluster is misconfigured.
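If the sleep job also lands on only a couple of machines, it is worth
confirming that every NodeManager has actually registered with the
ResourceManager. One quick check (assuming the yarn command is on the PATH):

yarn node -list -all

Any node that is missing from that list, or shown as LOST or UNHEALTHY,
points at a NodeManager or host-level problem rather than at the job itself.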
HTH
I agree with Rakesh that spaces in JAVA_HOME are likely to be the problem.
This is a known problem tracked in JIRA issue HADOOP-9600.
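If relocating the JDK is inconvenient, another workaround often suggested for
this issue (assuming 8.3 short names are enabled on the drive) is to point
JAVA_HOME at the short form of "Program Files" in etc\hadoop\hadoop-env.cmd,
for example:

set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_101

The actual short name on a given machine can be confirmed with "dir /x C:\".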
--Chris Nauroth
From: Rakesh Radhakrishnan
Date: Monday, August 8, 2016 at 8:03 AM
To: Atri Sharma
Cc: "user.hadoop"
Hi Atri,
I suspect the problem is due to the space in the path -> "Program Files".
Instead of C:\Program Files\Java\jdk1.8.0_101, please copy the JDK directory
to C:\java\jdk1.8.0_101 and try once.
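After copying the JDK, the corresponding line in etc\hadoop\hadoop-env.cmd
would look something like this (adjust to the directory you actually copy to):

set JAVA_HOME=C:\java\jdk1.8.0_101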
Rakesh
Intel
On Mon, Aug 8, 2016 at 4:34 PM, Atri Sharma wrote:
Hi All,
I am trying to run a compiled Hadoop jar on Windows, but I ran into the
following error when running hdfs namenode -format:
JAVA_HOME is incorrectly set.
I echoed the path being set in etc/hadoop/hadoop-env.cmd, and it shows the
correct path:
C:\Program Files\Java\jdk1.8.0_101
Please advise.
Regards,