[ https://issues.apache.org/jira/browse/MAPREDUCE-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17779794#comment-17779794 ]

Jim Halfpenny commented on MAPREDUCE-7460:
------------------------------------------

This is not a bug; it is a configuration error. If you create a NodeManager 
with yarn.nodemanager.resource.memory-mb set to 1 GB and then submit a MapReduce 
task that requests 2 GB, the task will never be scheduled. If you arbitrarily 
create a resource limit and then try to exceed it, you are confirming that 
resource management works as designed.
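
For reference, the reason the job hangs rather than failing outright: 2048 MB is 
still below the scheduler's per-container cap yarn.scheduler.maximum-allocation-mb 
(8192 MB by default), so the ResourceManager accepts the request, but no 
NodeManager can ever satisfy it. A minimal yarn-site.xml sketch of the fix (the 
2048 value is illustrative; the node capacity just needs to be at least the 
largest container request):
{noformat}
<property>
    <!-- give the NodeManager at least as much memory as the largest
         requested container (mapreduce.map.memory.mb = 2048 here) -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
</property>{noformat}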

> When "yarn.nodemanager.resource.memory-mb" and "mapreduce.map.memory.mb" work 
> together, the mapreduce sample program blocks 
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7460
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7460
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: yarn
>    Affects Versions: 3.3.6
>            Reporter: ECFuzz
>            Priority: Major
>
> My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation.
> If yarn.nodemanager.resource.memory-mb and mapreduce.map.memory.mb violate 
> their expected relationship, the MapReduce sample program blocks.
> The first configuration should be larger than the second, but we deliberately 
> violate that relationship.
> Here we provide the entire process.
> core-site.xml is as below.
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/lfl/Mutil_Component/tmp</value>
>     </property>
> </configuration>{code}
> hdfs-site.xml is as below.
> {noformat}
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
> </configuration>
> {noformat}
> Then we format the NameNode and start HDFS. HDFS runs normally.
> {noformat}
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./bin/hdfs namenode -format
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./sbin/start-dfs.sh{noformat}
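> One quick way to verify the HDFS daemons are up (a sketch, assuming jps from 
> the JDK is on the PATH):
> {noformat}
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ jps
> # expect NameNode, DataNode and SecondaryNameNode among the output
> {noformat}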
>  
> We add yarn.nodemanager.resource.memory-mb to yarn-site.xml as below.
> {noformat}
> <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>1024</value>
> </property>{noformat}
> We also add mapreduce.map.memory.mb to mapred-site.xml as below.
> {code:java}
> <property>
>     <name>mapreduce.map.memory.mb</name>
>     <value>2048</value>
> </property>{code}
>  
> Finally, we start YARN and run the given MapReduce sample job.
>  
> {code:java}
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -mkdir -p /user/lfl
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -mkdir input
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -put etc/hadoop/*.xml input
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ sbin/start-yarn.sh
> Starting resourcemanager
> Starting nodemanagers
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep input output 'dfs[a-z.]+'{code}
> {code:java}
> 2023-10-26 10:44:47,027 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at localhost/127.0.0.1:8032
> 2023-10-26 10:44:47,288 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/lfl/.staging/job_1698288283355_0001
> 2023-10-26 10:44:47,483 INFO input.FileInputFormat: Total input files to process : 10
> 2023-10-26 10:44:47,930 INFO mapreduce.JobSubmitter: number of splits:10
> 2023-10-26 10:44:48,406 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1698288283355_0001
> 2023-10-26 10:44:48,406 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 2023-10-26 10:44:48,513 INFO conf.Configuration: resource-types.xml not found
> 2023-10-26 10:44:48,513 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
> 2023-10-26 10:44:48,669 INFO impl.YarnClientImpl: Submitted application application_1698288283355_0001
> 2023-10-26 10:44:48,703 INFO mapreduce.Job: The url to track the job: http://LAPTOP-QR7GJ7B1.localdomain:8088/proxy/application_1698288283355_0001/
> 2023-10-26 10:44:48,703 INFO mapreduce.Job: Running job: job_1698288283355_0001 {code}
> The job then blocks here indefinitely.
>  
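> To confirm the application is waiting for resources rather than failed, one 
> can query its state (a sketch; the id comes from the submission log above):
> {noformat}
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ bin/yarn application -list -appStates ACCEPTED
> # the application stays in ACCEPTED because no NodeManager can
> # satisfy the 2048 MB container request
> {noformat}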
> Additionally, the following two configuration dependency violations cause 
> the same problem.
>  
> {noformat}
> dependency: yarn.nodemanager.resource.memory-mb >= yarn.app.mapreduce.am.resource.mb
> yarn-site.xml
> <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>1024</value>
> </property>
> mapred-site.xml
> <property>
>     <name>yarn.app.mapreduce.am.resource.mb</name>
>     <value>2048</value>
> </property>
> {noformat}
>  
> {code:java}
> dependency: yarn.nodemanager.resource.memory-mb >= mapreduce.reduce.memory.mb
> yarn-site.xml
> <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>1024</value>
> </property>
> mapred-site.xml
> <property>
>     <name>mapreduce.reduce.memory.mb</name>
>     <value>2048</value>
> </property>{code}
>  
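> In all three cases the common pattern is a container request larger than any 
> node's capacity. A hedged yarn-site.xml sketch that turns the silent hang into 
> a fast failure, assuming the per-container cap is lowered to the node capacity 
> (requests above yarn.scheduler.maximum-allocation-mb are rejected at submission 
> instead of queuing forever):
> {noformat}
> <property>
>     <!-- cap container requests at the node capacity so oversized
>          requests fail fast instead of blocking -->
>     <name>yarn.scheduler.maximum-allocation-mb</name>
>     <value>1024</value>
> </property>
> {noformat}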


