I use Hadoop 2.0.3-alpha.
I have also attached my configuration files, mapred-site.xml and yarn-site.xml.
2013/4/24 Hitesh Shah <[email protected]>
> As some folks have mentioned earlier, it is very likely that
> "yarn.scheduler.minimum-allocation-mb" is set to 2048 in yarn-site.xml.
>
> If it is set to something different, it might be helpful to provide the
> version of Hadoop you are running, as well as a copy of your yarn-site.xml
> from the node running the ResourceManager.
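>
> For reference, the entry in yarn-site.xml looks like this (2048 shown here
> as the suspected value):
>
>   <property>
>     <name>yarn.scheduler.minimum-allocation-mb</name>
>     <value>2048</value>
>   </property>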
>
> -- Hitesh
>
> On Apr 23, 2013, at 8:52 PM, 牛兆捷 wrote:
>
> > Why is the memory of the map task 2048 rather than 900 (1024)?
> >
> >
> > 2013/4/24 牛兆捷 <[email protected]>
> >
> >>
> >> Map task container:
> >>
> >> 2013-04-24 01:14:06,398 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000002 Container Transitioned from NEW to ALLOCATED
> >> 2013-04-24 01:14:06,398 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hustnn OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1366737158682_0002 CONTAINERID=container_1366737158682_0002_01_000002
> >> 2013-04-24 01:14:06,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1366737158682_0002_01_000002 of capacity <memory:2048, vCores:1> on host compute-0-0.local:44082, which currently has 2 containers, <memory:4096, vCores:2> used and <memory:20480, vCores:46> available
> >> 2013-04-24 01:14:06,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application=application_1366737158682_0002 container=Container: [ContainerId: container_1366737158682_0002_01_000002, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 20, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 2, }, state: C_NEW, ] containerId=container_1366737158682_0002_01_000002 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>usedCapacity=0.083333336, absoluteUsedCapacity=0.083333336, numApps=1, numContainers=1 usedCapacity=0.083333336 absoluteUsedCapacity=0.083333336 used=<memory:2048, vCores:1> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:14:06,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting queues since queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:4096, vCores:2>usedCapacity=0.16666667, absoluteUsedCapacity=0.16666667, numApps=1, numContainers=2
> >> 2013-04-24 01:14:06,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.16666667 absoluteUsedCapacity=0.16666667 used=<memory:4096, vCores:2> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:14:07,015 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
> >> 2013-04-24 01:14:07,405 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000002 Container Transitioned from ACQUIRED to RUNNING
> >> 2013-04-24 01:14:13,920 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000002 Container Transitioned from RUNNING to COMPLETED
> >>
> >> reduce task container:
> >> 2013-04-24 01:14:14,923 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000003 Container Transitioned from NEW to ALLOCATED
> >> 2013-04-24 01:14:14,923 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hustnn OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1366737158682_0002 CONTAINERID=container_1366737158682_0002_01_000003
> >> 2013-04-24 01:14:14,923 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1366737158682_0002_01_000003 of capacity <memory:3072, vCores:1> on host compute-0-0.local:44082, which currently has 2 containers, <memory:5120, vCores:2> used and <memory:19456, vCores:46> available
> >> 2013-04-24 01:14:14,924 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application=application_1366737158682_0002 container=Container: [ContainerId: container_1366737158682_0002_01_000003, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:3072, vCores:1>, Priority: 10, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 3, }, state: C_NEW, ] containerId=container_1366737158682_0002_01_000003 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>usedCapacity=0.083333336, absoluteUsedCapacity=0.083333336, numApps=1, numContainers=1 usedCapacity=0.083333336 absoluteUsedCapacity=0.083333336 used=<memory:2048, vCores:1> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:14:14,924 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting queues since queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:5120, vCores:2>usedCapacity=0.20833333, absoluteUsedCapacity=0.20833333, numApps=1, numContainers=2
> >> 2013-04-24 01:14:14,924 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.20833333 absoluteUsedCapacity=0.20833333 used=<memory:5120, vCores:2> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:14:15,070 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000003 Container Transitioned from ALLOCATED to ACQUIRED
> >> 2013-04-24 01:14:15,929 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000003 Container Transitioned from ACQUIRED to RUNNING
> >> 2013-04-24 01:14:21,652 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000003 Container Transitioned from RUNNING to COMPLETED
> >>
> >> AM container:
> >>
> >> 2013-04-24 01:13:59,370 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000001 Container Transitioned from NEW to ALLOCATED
> >> 2013-04-24 01:13:59,370 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hustnn OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1366737158682_0002 CONTAINERID=container_1366737158682_0002_01_000001
> >> 2013-04-24 01:13:59,370 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1366737158682_0002_01_000001 of capacity <memory:2048, vCores:1> on host compute-0-0.local:44082, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:22528, vCores:47> available
> >> 2013-04-24 01:13:59,374 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application=application_1366737158682_0002 container=Container: [ContainerId: container_1366737158682_0002_01_000001, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 1, }, state: C_NEW, ] containerId=container_1366737158682_0002_01_000001 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:13:59,374 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting queues since queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2048, vCores:1>usedCapacity=0.083333336, absoluteUsedCapacity=0.083333336, numApps=1, numContainers=1
> >> 2013-04-24 01:13:59,374 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.083333336 absoluteUsedCapacity=0.083333336 used=<memory:2048, vCores:1> cluster=<memory:24576, vCores:48>
> >> 2013-04-24 01:13:59,376 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
> >> 2013-04-24 01:13:59,377 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1366737158682_0002 AttemptId: appattempt_1366737158682_0002_000001 MasterContainer: Container: [ContainerId: container_1366737158682_0002_01_000001, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 1, }, state: C_NEW, ]
> >> 2013-04-24 01:13:59,379 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1366737158682_0002_000001 State change from SCHEDULED to ALLOCATED_SAVING
> >> 2013-04-24 01:13:59,381 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for attempt: appattempt_1366737158682_0002_000001
> >> 2013-04-24 01:13:59,383 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1366737158682_0002_000001 State change from ALLOCATED_SAVING to ALLOCATED
> >> 2013-04-24 01:13:59,389 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1366737158682_0002_000001
> >> 2013-04-24 01:13:59,414 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1366737158682_0002_01_000001, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 1, }, state: C_NEW, ] for AM appattempt_1366737158682_0002_000001
> >> 2013-04-24 01:13:59,414 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1366737158682_0002_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.mapreduce.container.log.dir=<LOG_DIR> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
> >> 2013-04-24 01:13:59,968 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1366737158682_0002_01_000001, NodeId: compute-0-0.local:44082, NodeHttpAddress: compute-0-0.local:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 2, cluster_timestamp: 1366737158682, }, attemptId: 1, }, id: 1, }, state: C_NEW, ] for AM appattempt_1366737158682_0002_000001
> >> 2013-04-24 01:13:59,968 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1366737158682_0002_000001 State change from ALLOCATED to LAUNCHED
> >> 2013-04-24 01:14:00,365 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1366737158682_0002_01_000001 Container Transitioned from ACQUIRED to RUNNING
> >>
> >>
> >> 2013/4/24 Zhijie Shen <[email protected]>
> >>
> >>> Would you please look into the ResourceManager log, and check how many
> >>> containers are allocated and what the allocated memory is? You may want
> >>> to search the log with "assignedContainer".
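> >>>
> >>> For example, something like the following (the log location depends on
> >>> your install; $HADOOP_LOG_DIR here is illustrative):
> >>>
> >>>   grep assignedContainer $HADOOP_LOG_DIR/yarn-*-resourcemanager-*.log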
> >>>
> >>>
> >>> On Tue, Apr 23, 2013 at 10:19 AM, 牛兆捷 <[email protected]> wrote:
> >>>
> >>>> I configured them in mapred-site.xml as below; I set them to less than
> >>>> 1000 for the normalization, as you said:
> >>>>
> >>>> "
> >>>> <property>
> >>>> <name>yarn.app.mapreduce.am.resource.mb</name>
> >>>> <value>900</value>
> >>>> </property>
> >>>> <property>
> >>>> <name>mapreduce.map.memory.mb</name>
> >>>> <value>900</value>
> >>>> </property>
> >>>> <property>
> >>>> <name>mapreduce.reduce.memory.mb</name>
> >>>> <value>900</value>
> >>>> </property>
> >>>> "
> >>>>
> >>>> Then I ran just one map. As you said, 2 containers will be launched:
> >>>> one for the AM (application master) and the other for the map task.
> >>>> However, the 2 containers use 4 GB of memory, as shown in the YARN web
> >>>> UI.
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> 2013/4/24 Zhijie Shen <[email protected]>
> >>>>
> >>>>> Do you mean the memory assigned for the container of M/R's AM? Did you
> >>>>> set ContainerLaunchContext.setResource?
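> >>>>>
> >>>>> For reference, the call looks roughly like this (a sketch against the
> >>>>> 2.0.3-alpha API; the variable names are illustrative):
> >>>>>
> >>>>>   import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
> >>>>>   import org.apache.hadoop.yarn.api.records.Resource;
> >>>>>   import org.apache.hadoop.yarn.util.Records;
> >>>>>
> >>>>>   // Build the resource spec and attach it to the launch context.
> >>>>>   Resource capability = Records.newRecord(Resource.class);
> >>>>>   capability.setMemory(900);  // requested container memory, in MB
> >>>>>   ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
> >>>>>   ctx.setResource(capability);  // without this, the defaults apply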
> >>>>>
> >>>>> AFAIK, by default, yarn.scheduler.minimum-allocation-mb = 1024 and
> >>>>> yarn.app.mapreduce.am.resource.mb = 1536. So, the M/R job will request
> >>>>> 1536 for its AM, but YARN's scheduler will normalize the request to
> >>>>> 2048, which is no less than 1536 and is a multiple of the minimum
> >>>>> allocation.
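> >>>>>
> >>>>> The rounding is roughly this (a sketch of the arithmetic, not the
> >>>>> actual scheduler code):
> >>>>>
> >>>>>   // Round a request up to the nearest multiple of the minimum allocation.
> >>>>>   static int normalize(int requestedMb, int minAllocMb) {
> >>>>>       return ((requestedMb + minAllocMb - 1) / minAllocMb) * minAllocMb;
> >>>>>   }
> >>>>>   // normalize(1536, 1024) == 2048; normalize(900, 1024) == 1024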
> >>>>>
> >>>>>
> >>>>> On Tue, Apr 23, 2013 at 8:43 AM, 牛兆捷 <[email protected]> wrote:
> >>>>>
> >>>>>> I am using 2.0.3-alpha. I don't set the map memory capacity
> >>>>>> explicitly, so "resourceCapacity.setMemory" should set the default
> >>>>>> memory request to 1024 MB. However, 2048 MB is assigned to this
> >>>>>> container.
> >>>>>>
> >>>>>> Why does it do this?
> >>>>>>
> >>>>>> --
> >>>>>> *Sincerely,*
> >>>>>> *Zhaojie*
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>> Zhijie Shen
> >>>>> Hortonworks Inc.
> >>>>> http://hortonworks.com/
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> *Sincerely,*
> >>>> *Zhaojie*
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Zhijie Shen
> >>> Hortonworks Inc.
> >>> http://hortonworks.com/
> >>>
> >>
> >>
> >>
> >> --
> >> *Sincerely,*
> >> *Zhaojie*
> >>
> >
> >
> >
> > --
> > *Sincerely,*
> > *Zhaojie*
>
>
--
*Sincerely,*
*Zhaojie*
mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>512</value>
</property>
<property>
<name>mapreduce.task.io.sort.factor</name>
<value>100</value>
</property>
<property>
<name>mapreduce.reduce.shuffle.parallelcopies</name>
<value>50</value>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx1024m</value>
</property>
<property>
<name>mapreduce.job.maps</name>
<value>1</value>
</property>
<property>
<name>mapreduce.job.reduces</name>
<value>24</value>
</property>
<property>
<name>mapreduce.tasktracker.map.tasks.maximum</name>
<value>24</value>
</property>
<property>
<name>mapreduce.tasktracker.reduce.tasks.maximum</name>
<value>24</value>
</property>
<property>
<name>mapreduce.job.reduce.slowstart.completedmaps</name>
<value>1</value>
</property>
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>900</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>900</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>900</value>
</property>
</configuration>
yarn-site.xml:

<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.address</name>
<value>155.69.148.21:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>155.69.148.21:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>155.69.148.21:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>155.69.148.21:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>155.69.148.21:8088</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8192</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>1</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>8</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>24576</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-cores</name>
<value>24</value>
</property>
<property>
<name>yarn.nodemanager.vcores-pcores-ratio</name>
<value>2</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/users/staff/hustnn/hadoop-0.23.6/yarn-data/tmp/head</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/users/staff/hustnn/hadoop-0.23.6/yarn-data/logs/head</value>
</property>
<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>10800</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/users/staff/hustnn/hadoop-0.23.6/yarn-data/tmp/head/logs</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir-suffix</name>
<value>logs</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
</configuration>