It seems the parameter mapreduce.map.memory.mb is parsed on the client side.
2015-06-07 15:05 GMT+08:00 J. Rottinghuis jrottingh...@gmail.com:
On each node you can configure how much memory is available for containers
to run.
On the other hand, for each application you can configure how large
containers should be. For MR apps, you can separately set mappers,
reducers, and the app master itself.
YARN will then determine container placement through scheduling.
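As a sketch of the settings described above (the property names are the standard YARN/MapReduce ones, but the values here are purely illustrative and must be tuned to your nodes):

```xml
<!-- yarn-site.xml: per-node memory available for containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>22528</value> <!-- e.g. 22 GB of a 24 GB node, leaving room for the OS -->
</property>

<!-- mapred-site.xml: per-application container sizes -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value> <!-- memory requested for each mapper container -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value> <!-- memory requested for each reducer container -->
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value> <!-- memory requested for the MR app master itself -->
</property>
```

Because mapreduce.map.memory.mb and the other per-application sizes are read on the client side, they can also be overridden per job (e.g. with -D on the command line) without touching the cluster-wide node settings.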
Hello,
Recently I have expanded my physical cluster. I have two kinds of nodes:
Type 1:
RAM: 24 GB
12 cores
Type 2:
RAM: 64 GB
12 cores
These nodes are in the same physical rack. I would like to configure it
to use 12 containers per node; on nodes of type 1 each mapper has