Hi,
I am very new to YARN.
I have a setup where the ResourceManager is running on one node and 2
NodeManagers are running on other nodes.
Could someone point me to the settings required to point my ResourceManager
to my 2 NodeManagers?
Thanks,
Hitarth
Hi Kamal!
Thanks for your initiative. Please take a look at MiniDFSCluster /
MiniJournalCluster / MiniYARNCluster, etc. In your unit tests you can
essentially start a cluster in a single JVM. You can look at
TestQJMWithFaults.java for an example.
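In case it's useful, a minimal sketch of standing up a MiniDFSCluster inside
a test (Hadoop 2.x test jars; the two-datanode count is arbitrary):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.hdfs.HdfsConfiguration;
  import org.apache.hadoop.hdfs.MiniDFSCluster;

  Configuration conf = new HdfsConfiguration();
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
      .numDataNodes(2)          // arbitrary; size it for your test
      .build();
  try {
    cluster.waitActive();       // block until the datanodes register
    FileSystem fs = cluster.getFileSystem();
    // ... run your assertions against fs ...
  } finally {
    cluster.shutdown();         // tear the in-JVM cluster down
  }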
HTH,
Ravi
On Sunday, January 4, 2015 10:09 PM, kamaldeep
Hitarth,
I don't know how much direction you are looking for with regard to the
formats of the times, but you can certainly read both files into the third
mapreduce job using FileInputFormat by comma-separating the paths to
the files. The blocks for both files will essentially be unioned together.
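A driver-side sketch of that (the two paths are placeholders for wherever
the first two jobs wrote their output):

  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

  Job job = Job.getInstance(conf, "third job");
  // Comma-separated list: splits from both files feed the same mapper.
  FileInputFormat.addInputPaths(job, "/out/job1/file1,/out/job2/file2");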
Hitarth:
You can also consider MultiFileInputFormat (and its concrete
implementations).
Cheers
On Mon, Jan 5, 2015 at 6:14 PM, Corey Nolet cjno...@gmail.com wrote:
Hi Hitarth,
If your file1 and file2 are small, you can go with the DistributedCache,
mentioned here:
http://unmeshasreeveni.blogspot.in/2014/10/how-to-load-file-in-distributedcache-in.html
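A minimal driver/mapper sketch using the Hadoop 2.x Job API (the HDFS path
is a placeholder):

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.net.URI;

  // Driver: ship file1 to every task; the "#file1" fragment creates a
  // local symlink with that name in each task's working directory.
  job.addCacheFile(new URI("/out/job1/file1#file1"));

  // Mapper.setup(): read the local cached copy via the symlink.
  BufferedReader reader = new BufferedReader(new FileReader("file1"));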
Or you can go with MultipleInputFormat (a sketch follows),
mentioned here
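Assuming what's meant is the new-API class
org.apache.hadoop.mapreduce.lib.input.MultipleInputs, a driver-side sketch;
the paths and the two mapper classes are placeholders:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
  import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

  // Each input gets its own mapper; both mappers must emit the same
  // key/value types so the reducer sees a single keyspace.
  MultipleInputs.addInputPath(job, new Path("/out/job1/file1"),
      TextInputFormat.class, File1Mapper.class);   // hypothetical mapper
  MultipleInputs.addInputPath(job, new Path("/out/job2/file2"),
      TextInputFormat.class, File2Mapper.class);   // hypothetical mapper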
Hi,
I am using CDH 5.1.3, i.e. Hadoop version 2.3.0.
I am running the HBase RowCounter map reduce job. All the mappers in
the job complete successfully; there are no reducers in the job.
However, in the cleanup/stop phase I see the following exception in
the application logs stored in HDFS.
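(For reference, the RowCounter job is typically launched with the standard
driver, something like the following; the table name is a placeholder.)

  hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'mytable'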
Configure the following in your NodeManager's yarn-site.xml, pointing it at
the actual RM address:

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>${yarn.resourcemanager.hostname}:8031</value>
</property>
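Since the value above derives from ${yarn.resourcemanager.hostname}, it is
usually enough to set just the hostname on every NodeManager; a minimal
sketch, with rm-host.example.com as a placeholder for the RM's host:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host.example.com</value>
</property>

With that set, the resource-tracker, scheduler, and admin addresses all
resolve against the RM host via their defaults.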
On Mon, Jan 5, 2015 at 8:18 AM, hitarth trivedi t.hita...@gmail.com wrote:
Hi,
I have a 6-node cluster, and the scenario is as follows:
I have one map reduce job which will write file1 in HDFS.
I have another map reduce job which will write file2 in HDFS.
In the third map reduce job I need to use file1 and file2 to do some
computation and output the value.
What is