[ https://issues.apache.org/jira/browse/YARN-8315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479673#comment-16479673 ]

Eric Yang commented on YARN-8315:
---------------------------------

[[email protected]] The Apache Hadoop community is not responsible for HDP 
issues.  However, I would suggest looking at the following configuration in 
yarn-site.xml:

{code}
  <property>
    <name>yarn.scheduler.capacity.schedule-asynchronously.enable</name>
    <value>true</value>
  </property>
{code}

In addition, YARN 2.9.0/3.0.0 supports specifying multiple threads (the 
default is 1) for allocating containers:

{code}
  <property>
    <name>yarn.scheduler.capacity.schedule-asynchronously.maximum-threads</name>
    <value>4</value>
  </property>
{code}

If the number of scheduling threads is left at 1, or asynchronous scheduling 
is not enabled, container scheduling may be delayed.
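
For reference, enabling both settings together in yarn-site.xml would look 
roughly like this (a sketch; the thread count of 4 is only an example value):

{code}
  <property>
    <name>yarn.scheduler.capacity.schedule-asynchronously.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.schedule-asynchronously.maximum-threads</name>
    <value>4</value>
  </property>
{code}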

> HDP 3.0.0 performance is slower than HDP 2.6.4
> ----------------------------------------------
>
>                 Key: YARN-8315
>                 URL: https://issues.apache.org/jira/browse/YARN-8315
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>    Affects Versions: 3.0.0
>         Environment: I have an HDP 2.6.4 cluster and an HDP 3.0.0 cluster, set 
> up with the same settings (such as Java heap size and container size).  They 
> are both 4-node clusters with 3 data nodes.  I kept almost all the default 
> settings on HDP 3.0.0, except that I changed the minimum container size from 
> 1024 MB to 64 MB in both clusters.
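> For reference, that minimum container size corresponds roughly to the 
> following yarn-site.xml entry (a sketch of the standard property with the 
> 64 MB value mentioned above):
> {code}
>   <property>
>     <name>yarn.scheduler.minimum-allocation-mb</name>
>     <value>64</value>
>   </property>
> {code}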
>  
>            Reporter: Hsin-Liang Huang
>            Priority: Major
>
> Hi, I am comparing the performance of HDP 3.0.0 and HDP 2.6.4, and I found 
> that HDP 3.0.0 is much slower than HDP 2.6.4 when a job acquires more YARN 
> containers.  We also pinpointed the problem: after the job is done, cleaning 
> up all the containers to exit the application is where HDP 3.0.0 consumes 
> more time than HDP 2.6.4.  I used the simple YARN app that Hortonworks 
> published on GitHub [https://github.com/hortonworks/simple-yarn-app] for the 
> testing.  Below are my test results from acquiring 8 containers in both the 
> HDP 3.0.0 and HDP 2.6.4 cluster environments. 
> =============================================================
> HDP 3.0.0: 
> command:  time hadoop jar 
> /usr/hdp/3.0.0.0-829/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.0.0.3.0.0.0-829.jar
>  Client -classpath simple-yarn-app-1.1.0.jar -cmd "java 
> com.hortonworks.simpleyarnapp.ApplicationMaster /bin/date 8"
> 18/05/17 11:06:42 INFO unmanagedamlauncher.UnmanagedAMLauncher: Initializing 
> Client
> 18/05/17 11:06:42 INFO unmanagedamlauncher.UnmanagedAMLauncher: Starting 
> Client
> 18/05/17 11:06:43 INFO client.RMProxy: Connecting to ResourceManager at 
> whiny1.fyre.ibm.com/172.16.165.211:8050
> 18/05/17 11:06:43 INFO client.AHSProxy: Connecting to Application History 
> server at whiny2.fyre.ibm.com/172.16.200.160:10200
> 18/05/17 11:06:43 INFO unmanagedamlauncher.UnmanagedAMLauncher: Setting up 
> application submission context for ASM
> 18/05/17 11:06:43 INFO unmanagedamlauncher.UnmanagedAMLauncher: Setting 
> unmanaged AM
> 18/05/17 11:06:43 INFO unmanagedamlauncher.UnmanagedAMLauncher: Submitting 
> application to ASM
> 18/05/17 11:06:43 INFO impl.YarnClientImpl: Submitted application 
> application_1526572577866_0011
> 18/05/17 11:06:44 INFO unmanagedamlauncher.UnmanagedAMLauncher: Got 
> application report from ASM for, appId=11, 
> appAttemptId=appattempt_1526572577866_0011_000001, clientToAMToken=null, 
> appDiagnostics=AM container is launched, waiting for AM container to Register 
> with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, 
> appStartTime=1526584003704, yarnAppState=ACCEPTED, 
> distributedFinalState=UNDEFINED, appTrackingUrl=N/A, appUser=hlhuang
> 18/05/17 11:06:44 INFO unmanagedamlauncher.UnmanagedAMLauncher: Launching AM 
> with application attempt id appattempt_1526572577866_0011_000001
> 18/05/17 11:06:46 INFO client.RMProxy: Connecting to ResourceManager at 
> whiny1.fyre.ibm.com/172.16.165.211:8030
> registerApplicationMaster 0
> registerApplicationMaster 1
> 18/05/17 11:06:47 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.0.0.0-829/0/resource-types.xml
> Making res-req 0
> Making res-req 1
> Making res-req 2
> Making res-req 3
> Making res-req 4
> Making res-req 5
> Making res-req 6
> Making res-req 7
> Launching container container_e08_1526572577866_0011_01_000001
> Launching container container_e08_1526572577866_0011_01_000002
> Launching container container_e08_1526572577866_0011_01_000003
> Launching container container_e08_1526572577866_0011_01_000004
> Launching container container_e08_1526572577866_0011_01_000005
> Launching container container_e08_1526572577866_0011_01_000006
> Launching container container_e08_1526572577866_0011_01_000007
> Launching container container_e08_1526572577866_0011_01_000008
> Completed container container_e08_1526572577866_0011_01_000001
> Completed container container_e08_1526572577866_0011_01_000002
> Completed container container_e08_1526572577866_0011_01_000003
> Completed container container_e08_1526572577866_0011_01_000004
> Completed container container_e08_1526572577866_0011_01_000008
> Completed container container_e08_1526572577866_0011_01_000005
> Completed container container_e08_1526572577866_0011_01_000006
> Completed container container_e08_1526572577866_0011_01_000007
> 18/05/17 11:06:54 INFO unmanagedamlauncher.UnmanagedAMLauncher: AM process 
> exited with value: 0
> 18/05/17 11:06:55 INFO unmanagedamlauncher.UnmanagedAMLauncher: Got 
> application report from ASM for, appId=11, 
> appAttemptId=appattempt_1526572577866_0011_000001, clientToAMToken=null, 
> appDiagnostics=, appMasterHost=, appQueue=default, appMasterRpcPort=0, 
> appStartTime=1526584003704, yarnAppState=FINISHED, 
> distributedFinalState=SUCCEEDED, appTrackingUrl=N/A, appUser=hlhuang
> 18/05/17 11:06:55 INFO unmanagedamlauncher.UnmanagedAMLauncher: App ended 
> with state: FINISHED and status: SUCCEEDED
> 18/05/17 11:06:55 INFO unmanagedamlauncher.UnmanagedAMLauncher: Application 
> has completed successfully.
> {color:#FF0000}real 0m14.716s{color}
> {color:#FF0000}user 0m11.642s{color}
> {color:#FF0000}sys 0m0.616s{color}
>  
>  
> HDP 2.6.4 
> command: 
> time hadoop jar 
> /usr/hdp/2.6.4.0-91/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar
>  Client -classpath simple-yarn-app-1.1.0.jar -cmd "java 
> com.hortonworks.simpleyarnapp.ApplicationMaster /bin/date 8"
> 18/05/17 11:15:13 INFO unmanagedamlauncher.UnmanagedAMLauncher: Initializing 
> Client
> 18/05/17 11:15:14 INFO unmanagedamlauncher.UnmanagedAMLauncher: Starting 
> Client
> 18/05/17 11:15:14 INFO client.RMProxy: Connecting to ResourceManager at 
> swop2.fyre.ibm.com/172.16.166.190:8050
> 18/05/17 11:15:14 INFO client.AHSProxy: Connecting to Application History 
> server at swop2.fyre.ibm.com/172.16.166.190:10200
> 18/05/17 11:15:14 INFO unmanagedamlauncher.UnmanagedAMLauncher: Setting up 
> application submission context for ASM
> 18/05/17 11:15:14 INFO unmanagedamlauncher.UnmanagedAMLauncher: Setting 
> unmanaged AM
> 18/05/17 11:15:14 INFO unmanagedamlauncher.UnmanagedAMLauncher: Submitting 
> application to ASM
> 18/05/17 11:15:14 INFO impl.YarnClientImpl: Submitted application 
> application_1526573197180_0007
> 18/05/17 11:15:15 INFO unmanagedamlauncher.UnmanagedAMLauncher: Got 
> application report from ASM for, appId=7, 
> appAttemptId=appattempt_1526573197180_0007_000001, clientToAMToken=null, 
> appDiagnostics=AM container is launched, waiting for AM container to Register 
> with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, 
> appStartTime=1526584513523, yarnAppState=ACCEPTED, 
> distributedFinalState=UNDEFINED, appTrackingUrl=N/A, appUser=hlhuang
> 18/05/17 11:15:15 INFO unmanagedamlauncher.UnmanagedAMLauncher: Launching AM 
> with application attempt id appattempt_1526573197180_0007_000001
> 18/05/17 11:15:17 INFO client.RMProxy: Connecting to ResourceManager at 
> swop2.fyre.ibm.com/172.16.166.190:8030
> 18/05/17 11:15:17 INFO impl.ContainerManagementProtocolProxy: 
> yarn.client.max-cached-nodemanagers-proxies : 0
> registerApplicationMaster 0
> registerApplicationMaster 1
> Making res-req 0
> Making res-req 1
> Making res-req 2
> Making res-req 3
> Making res-req 4
> Making res-req 5
> Making res-req 6
> Making res-req 7
> 18/05/17 11:15:18 INFO impl.AMRMClientImpl: Received new token for : 
> swop4.fyre.ibm.com:45454
> 18/05/17 11:15:18 INFO impl.AMRMClientImpl: Received new token for : 
> swop2.fyre.ibm.com:45454
> Launching container container_e07_1526573197180_0007_01_000001
> 18/05/17 11:15:18 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> swop4.fyre.ibm.com:45454
> Launching container container_e07_1526573197180_0007_01_000002
> 18/05/17 11:15:18 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> swop2.fyre.ibm.com:45454
> 18/05/17 11:15:19 INFO impl.AMRMClientImpl: Received new token for : 
> swop3.fyre.ibm.com:45454
> Launching container container_e07_1526573197180_0007_01_000003
> 18/05/17 11:15:19 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> swop3.fyre.ibm.com:45454
> 18/05/17 11:15:19 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> swop4.fyre.ibm.com:45454
> Launching container container_e07_1526573197180_0007_01_000004
> Launching container container_e07_1526573197180_0007_01_000005
> 18/05/17 11:15:19 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> swop2.fyre.ibm.com:45454
> Completed container container_e07_1526573197180_0007_01_000002
> Completed container container_e07_1526573197180_0007_01_000001
> Launching container container_e07_1526573197180_0007_01_000006
> 18/05/17 11:15:19 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> swop3.fyre.ibm.com:45454
> Launching container container_e07_1526573197180_0007_01_000007
> 18/05/17 11:15:19 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> swop4.fyre.ibm.com:45454
> Completed container container_e07_1526573197180_0007_01_000005
> Completed container container_e07_1526573197180_0007_01_000004
> Completed container container_e07_1526573197180_0007_01_000003
> Launching container container_e07_1526573197180_0007_01_000008
> 18/05/17 11:15:19 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
> swop2.fyre.ibm.com:45454
> Completed container container_e07_1526573197180_0007_01_000006
> Completed container container_e07_1526573197180_0007_01_000008
> Completed container container_e07_1526573197180_0007_01_000007
> 18/05/17 11:15:19 INFO unmanagedamlauncher.UnmanagedAMLauncher: AM process 
> exited with value: 0
> 18/05/17 11:15:20 INFO unmanagedamlauncher.UnmanagedAMLauncher: Got 
> application report from ASM for, appId=7, 
> appAttemptId=appattempt_1526573197180_0007_000001, clientToAMToken=null, 
> appDiagnostics=, appMasterHost=, appQueue=default, appMasterRpcPort=0, 
> appStartTime=1526584513523, yarnAppState=FINISHED, 
> distributedFinalState=SUCCEEDED, appTrackingUrl=N/A, appUser=hlhuang
> 18/05/17 11:15:20 INFO unmanagedamlauncher.UnmanagedAMLauncher: App ended 
> with state: FINISHED and status: SUCCEEDED
> 18/05/17 11:15:20 INFO unmanagedamlauncher.UnmanagedAMLauncher: Application 
> has completed successfully.
> {color:#14892c}real 0m7.897s{color}
> {color:#14892c}user 0m8.335s{color}
> {color:#14892c}sys 0m0.593s{color}
>  
> The difference is {color:#d04437}14 seconds{color} in HDP 3.0.0 vs 
> {color:#14892c}7 seconds{color} in the HDP 2.6.4 environment: twice the time 
> for HDP 3.0.0. 
> If I acquire just one container for this application, the difference is 
> {color:#d04437}0m9.696s{color} (HDP 3.0.0) vs {color:#14892c}0m6.832s{color} 
> (HDP 2.6.4).
> Has anyone noticed this performance problem, and is anyone working on it? 
> Thanks for any help!


