Hi all,
can’t figure out subj.
On my Hadoop cluster the ApplicationMaster (AM) gets killed by the
NodeManager because the AM tries to allocate more than the default 1 GB. The MR
application the AM is in charge of is a map-only job (1(!) mapper, no reducers)
that downloads data from a remote source. At the moment the AM is killed, the MR
job itself is fine (using about 70% of its RAM limit). The MR job doesn't have any
custom counters, distributed caches, etc.; it just downloads data (in portions)
via a custom input format. To work around the issue I raised the memory limit for
the AM, but I want to understand why a trivial job like mine eats 1 GB (!) of AM memory in the first place.
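In case it helps, this is roughly what I changed (a sketch, assuming MR2 on YARN; the values are illustrative, not a recommendation):

```xml
<!-- mapred-site.xml -->
<property>
  <!-- size of the container the RM grants to the MR ApplicationMaster -->
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
<property>
  <!-- JVM opts for the AM; keep the heap below the container size
       so there is headroom for non-heap memory -->
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx1536m</value>
</property>
```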