Hi,

Simple JVM reuse, as was available via "mapreduce.job.jvm.numtasks", no
longer exists in the new framework.

There is, however, the concept of an "über" task, which is similar in
nature, though its configuration is more fine-grained. Specific
properties that may interest you:

Prop | Default | Description

mapreduce.job.ubertask.enable | (false) | 'Whether to enable the
small-jobs "ubertask" optimization, which runs "sufficiently small"
jobs sequentially within a single JVM. "Small" is defined by the
following maxmaps, maxreduces, and maxbytes settings. Users may
override this value.'

mapreduce.job.ubertask.maxmaps | 9 | 'Threshold for number of maps,
beyond which job is considered too big for the ubertasking
optimization. Users may override this value, but only downward.'

mapreduce.job.ubertask.maxreduces | 1 | 'Threshold for number of
reduces, beyond which job is considered too big for the ubertasking
optimization. CURRENTLY THE CODE CANNOT SUPPORT MORE THAN ONE REDUCE
and will ignore larger values. (Zero is a valid max, however.) Users
may override this value, but only downward.'

mapreduce.job.ubertask.maxbytes | | 'Threshold for number of input
bytes, beyond which job is considered too big for the ubertasking
optimization. If no value is specified, dfs.block.size is used as a
default. Be sure to specify a default value in mapred-site.xml if the
underlying filesystem is not HDFS. Users may override this value, but
only downward.'

Ref: 
http://hadoop.apache.org/common/docs/r0.23.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml

You probably therefore want, at the moment:
mapreduce.job.ubertask.enable set to true
mapreduce.job.ubertask.maxmaps set to a large value
and a job with very little input. The default, currently
non-overridable input-bytes limit is your HDFS/FS's configured default
block size (although it apparently ought to be taken from the
InputFormat's FileSystem config instead).
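
To illustrate, a minimal mapred-site.xml sketch for the above. Property
names are from the r0.23.0 defaults linked earlier; the maxmaps value
here is just an arbitrary example. Note that since users may only
override maxmaps/maxreduces/maxbytes downward, raising the maxmaps
threshold has to happen cluster-side in mapred-site.xml rather than in
a per-job config:

```xml
<!-- Sketch only: enable the small-jobs "ubertask" optimization
     and raise the map-count threshold cluster-wide. -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <!-- example value; pick whatever "large" threshold suits you -->
  <value>100</value>
</property>
```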

On Mon, Apr 9, 2012 at 5:03 PM, ramgopal <ramgopaln...@huawei.com> wrote:
> Hi,
>
>    Is there a way to specify JVM reuse  for yarn applications  as in MRV1?
>
> Regards,
>
> Ramgopal



-- 
Harsh J
