Hi,
I seem to be having a problem with the job cache on the JobTracker of
my (single-node, DataNode, NameNode, JobTracker, TaskTracker all on
same machine) Hadoop cluster. I've been recompiling my code multiple
times, and the class files in my jarfile are correct (unzipped,
checked for new string

Hi,
I'd like to do this in my hodrc file:
client-params = ...,,mapred.child.java.opts=-Dkey=value,...
but HoD doesn't like it:
error: 1 problem found.
Check your command line options and/or your configuration file dir/hodrc
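For illustration, that "1 problem found" error is what a naive key=value parser would report when a value itself contains '='. This is a hypothetical sketch of that kind of parsing, not HoD's actual code:

```python
def parse_params(s):
    # Naive parser: split the comma-separated list, then expect
    # exactly one '=' per entry -- a nested '=' in the value
    # produces three parts and gets rejected.
    params = {}
    for item in s.split(","):
        parts = item.split("=")
        if len(parts) != 2:
            raise ValueError("1 problem found: %r" % item)
        params[parts[0]] = parts[1]
    return params

def parse_params_fixed(s):
    # Splitting on only the FIRST '=' keeps the rest of the
    # value intact, so -Dkey=value survives.
    params = {}
    for item in s.split(","):
        key, _, value = item.partition("=")
        params[key] = value
    return params
```

With the naive split, "mapred.child.java.opts=-Dkey=value" raises the error; with partition-on-first-'=' it parses to the intended key/value pair.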
Any ideas how to specify nested equals? Has anyone ever tried this,
or
the second
submitted job?
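If the nested '=' can't be escaped on the HoD command line, one alternative worth considering (this uses the standard Hadoop configuration mechanism; whether it interacts cleanly with HoD-provisioned clusters is an assumption) is setting the property in the job's configuration file instead:

```xml
<!-- hadoop-site.xml: set the child JVM options directly, avoiding
     the nested '=' inside HoD's comma-separated client-params -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Dkey=value</value>
</property>
```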
Thanks.
Jiaqi Tan

Hi Hemanth,
More design questions I'm wondering about:
So what determines the spread/location of data blocks that are
uploaded/added to HDFS outside of the Map/Reduce framework? For
instance, if I use a dfs -put to upload files to the HDFS, does the
dfs system try to spread the blocks out across
that break Hadoop's design
principle of having processing take place near the data?
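For reference, this is a simplified model of HDFS's default replica placement heuristic (first replica on the writer's node when it is a DataNode, second on a node in a different rack, third on another node in the second replica's rack). It is an illustration only, not actual NameNode code, and the node/rack names are made up:

```python
def place_replicas(writer, racks, replication=3):
    # racks: {rack_name: [node, ...]} -- a toy cluster topology.
    node_rack = {n: r for r, nodes in racks.items() for n in nodes}
    targets = []
    if writer in node_rack:                    # replica 1: writer's own node
        targets.append(writer)
    remote = next((n for n in node_rack
                   if node_rack[n] != node_rack.get(writer)), None)
    if remote:                                 # replica 2: a different rack
        targets.append(remote)
        same_rack = next((n for n in racks[node_rack[remote]]
                          if n not in targets), None)
        if same_rack:                          # replica 3: same rack as #2
            targets.append(same_rack)
    for n in node_rack:                        # fill any remaining replicas
        if len(targets) >= replication:
            break
        if n not in targets:
            targets.append(n)
    return targets[:replication]
```

On a single-node cluster like the one described above, every replica necessarily lands on that one node, so placement and locality questions only really arise once there are multiple DataNodes.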
And if HoD doesn't currently take block location into account when
allocating nodes, are there future plans for that to be incorporated?
Thanks,
Jiaqi Tan