Hi,

please refer to
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
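
That post walks through sizing yarn.nodemanager.resource.memory-mb, the
scheduler allocation bounds, and the per-task memory settings from the
RAM available on each node. As a rough sketch (the values below are
illustrative, not a recommendation for your cluster), the
scheduler-side bounds go in yarn-site.xml:

  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>

  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>12288</value>
  </property>

Container requests are rounded up to a multiple of the minimum
allocation and capped at the maximum, so the per-task sizes should be
chosen with these bounds in mind.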



2013/12/5 panfei <[email protected]>

> we have already tried several values for these two parameters, but it
> seems to have no effect.
>
>
> 2013/12/5 Tsuyoshi OZAWA <[email protected]>
>
>> Hi,
>>
>> Please check properties like mapreduce.reduce.memory.mb and
>> mapreduce.map.memory.mb in mapred-site.xml. These properties set the
>> resource limits for mappers/reducers.
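>>
>> For example (illustrative values only; the key point is that the JVM
>> heap set via the *.java.opts properties has to stay below the
>> container size, otherwise the NodeManager kills the container):
>>
>>   <property>
>>     <name>mapreduce.map.memory.mb</name>
>>     <value>1536</value>
>>   </property>
>>
>>   <property>
>>     <name>mapreduce.map.java.opts</name>
>>     <value>-Xmx1228m</value>
>>   </property>
>>
>> -Xmx here is kept at roughly 80% of the container size to leave room
>> for JVM overhead beyond the heap.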
>>
>> On Wed, Dec 4, 2013 at 10:16 PM, panfei <[email protected]> wrote:
>> >
>> >
>> > ---------- Forwarded message ----------
>> > From: panfei <[email protected]>
>> > Date: 2013/12/4
>> > Subject: Container
>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>> > container.
>> > To: CDH Users <[email protected]>
>> >
>> >
>> > Hi All:
>> >
>> > We are using CDH4.5 Hadoop in production. When we submit some (but
>> > not all) jobs from Hive, we get the following exception; it seems
>> > that neither the physical memory nor the virtual memory is enough
>> > for the job to run:
>> >
>> >
>> > Task with the most failures(4):
>> > -----
>> > Task ID:
>> >   task_1386156666044_0001_m_000000
>> >
>> > URL:
>> >
>> > http://namenode-1:8088/taskdetails.jsp?jobid=job_1386156666044_0001&tipid=task_1386156666044_0001_m_000000
>> > -----
>> > Diagnostic Messages for this Task:
>> > Container
>> > [pid=22885,containerID=container_1386156666044_0001_01_000013] is
>> > running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>> > physical memory used; 332.5 GB of 8 GB virtual memory used. Killing
>> > container.
>> > Dump of the process-tree for container_1386156666044_0001_01_000013 :
>> >         |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
>> > SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>> >         |- 22885 22036 22885 22885 (java) 5414 108 356993519616 271953
>> > /usr/java/default/bin/java -Djava.net.preferIPv4Stack=true
>> > -Dhadoop.metrics.log.level=WARN -Xmx200m
>> > -Djava.io.tmpdir=/data/yarn/local/usercache/hive/appcache/application_1386156666044_0001/container_1386156666044_0001_01_000013/tmp
>> > -Dlog4j.configuration=container-log4j.properties
>> > -Dyarn.app.mapreduce.container.log.dir=/var/log/hadoop-yarn/containers/application_1386156666044_0001/container_1386156666044_0001_01_000013
>> > -Dyarn.app.mapreduce.container.log.filesize=0
>> > -Dhadoop.root.logger=INFO,CLA
>> > org.apache.hadoop.mapred.YarnChild 192.168.101.55 60841
>> > attempt_1386156666044_0001_m_000000_3 13
>> >
>> > The following is part of our configuration:
>> >
>> >   <property>
>> >     <name>yarn.nodemanager.resource.memory-mb</name>
>> >     <value>12288</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>yarn.nodemanager.vmem-pmem-ratio</name>
>> >     <value>8</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>yarn.nodemanager.vmem-check-enabled</name>
>> >     <value>false</value>
>> >   </property>
>> >
>> >   <property>
>> >     <name>yarn.nodemanager.resource.cpu-vcores</name>
>> >     <value>6</value>
>> >   </property>
>> >
>> > Can you give me some advice? Thanks a lot.
>> > --
>> > If you don't learn, you don't know.
>> >
>> >
>> >
>> > --
>> > If you don't learn, you don't know.
>>
>>
>>
>> --
>> - Tsuyoshi
>>
>
>
>
> --
> If you don't learn, you don't know.
>
