[jira] [Commented] (YARN-7327) CapacityScheduler: Allocate containers asynchronously by default

2017-12-07 Thread Craig Ingram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16281840#comment-16281840
 ] 

Craig Ingram commented on YARN-7327:


I finally got around to trying out asynchronous container allocation in Hadoop 
2.9 and 3.0-SNAPSHOT (built from master a few days ago) with Spark 2.3-SNAPSHOT 
(built the same day as Hadoop). This is all running on the same hardware 
described above (I did not repeat the tests on VMs). The test results are 
attached, as is the Jupyter notebook I used to produce them. I changed the test 
slightly from what was done above by tweaking the core counts requested each 
round: it now requests 16, 32, 64, 128, and 256 cores, whereas it requested 2, 
20, 50, and 100 before. I reran the 2.7.3 tests as well. I also ran the 2.9 
test with 4 threads, and it came out basically the same as the 3.0 test with 4 
threads, so I did not include it in the graphs.

||Legend||Test||
|sync3|synchronous 3.0-SNAPSHOT|
|sync29|synchronous 2.9|
|sync273|synchronous 2.7.3|
|async1-3|async with 1 thread on 3.0-SNAPSHOT|
|async1-29|async with 1 thread on 2.9|
|async1-273|async with 1 thread on 2.7.3|
|async2-3|async with 2 threads on 3.0-SNAPSHOT|
|async4-3|async with 4 threads on 3.0-SNAPSHOT|
|async8-3|async with 8 threads on 3.0-SNAPSHOT|
|async16-3|async with 16 threads on 3.0-SNAPSHOT|

[^async-scheduling-results.md]
[^schedule-async.png]
[^spark-on-yarn-schedule-async.ipynb]

While the numbers aren't as great as I was hoping (especially at higher thread 
pool counts), it's still a big improvement. I was mainly surprised that 
container allocations per second flatten out at higher container counts. I was 
thinking of giving the RM more memory, or at least looking into whether it is 
under GC pressure. Is there anywhere else I should look to tune this? Thanks!
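
For anyone reproducing this, here is a minimal sketch of the 
capacity-scheduler.xml settings I was varying between runs. The enable property 
is the one discussed in this issue; the maximum-threads name is, to the best of 
my knowledge, the property the multi-threaded async scheduling in 2.9/3.0 
reads, so please verify it against your build.

{code:xml}
<!-- Sketch of /etc/hadoop/conf/capacity-scheduler.xml changes. The
     maximum-threads property name is assumed from the 2.9/3.0 multi-threaded
     async scheduling work; verify it against your Hadoop version. -->
<property>
  <!-- Toggle asynchronous container allocation in the CapacityScheduler. -->
  <name>yarn.scheduler.capacity.schedule-asynchronously.enable</name>
  <value>true</value>
</property>
<property>
  <!-- Number of async scheduling threads; I varied this across 1, 2, 4, 8, 16. -->
  <name>yarn.scheduler.capacity.schedule-asynchronously.maximum-threads</name>
  <value>4</value>
</property>
{code}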

> CapacityScheduler: Allocate containers asynchronously by default
> 
>
> Key: YARN-7327
> URL: https://issues.apache.org/jira/browse/YARN-7327
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Craig Ingram
>Priority: Trivial
> Attachments: async-scheduling-results.md, schedule-async.png, 
> spark-on-yarn-schedule-async.ipynb, yarn-async-scheduling.png
>

[jira] [Updated] (YARN-7327) CapacityScheduler: Allocate containers asynchronously by default

2017-12-07 Thread Craig Ingram (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Ingram updated YARN-7327:
---
Attachment: async-scheduling-results.md
schedule-async.png
spark-on-yarn-schedule-async.ipynb

Latest Hadoop 2.7.3, 2.9.0, 3.0-SNAPSHOT test results

> CapacityScheduler: Allocate containers asynchronously by default
> 
>
> Key: YARN-7327
> URL: https://issues.apache.org/jira/browse/YARN-7327
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Craig Ingram
>Priority: Trivial
> Attachments: async-scheduling-results.md, schedule-async.png, 
> spark-on-yarn-schedule-async.ipynb, yarn-async-scheduling.png
>




[jira] [Commented] (YARN-7327) CapacityScheduler: Allocate containers asynchronously by default

2017-10-13 Thread Craig Ingram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16204352#comment-16204352
 ] 

Craig Ingram commented on YARN-7327:


Thanks Arun. I'll look into what it will take to get a test environment set up 
with the latest YARN. I'm not sure whether Spark will require any modifications 
to try it out at this point. I believe I can set up some benchmarks to show 
whether there is any impact when the cluster is under load. They would use 
Spark, though, so I'm not sure how much that would help the general YARN use 
case.

I like the idea of opportunistic containers, but I think the way Spark's 
scheduler farms out tasks already does something similar (I'll take a closer 
look, though).

> CapacityScheduler: Allocate containers asynchronously by default
> 
>
> Key: YARN-7327
> URL: https://issues.apache.org/jira/browse/YARN-7327
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Craig Ingram
>Priority: Trivial
> Attachments: yarn-async-scheduling.png
>

[jira] [Updated] (YARN-7327) Launch containers asynchronously by default

2017-10-13 Thread Craig Ingram (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Ingram updated YARN-7327:
---
Attachment: yarn-async-scheduling.png

Plot comparing asynchronous vs. synchronous scheduling of containers with 
Spark, produced from the results obtained on our baremetal cluster with 1 queue 
managing all of the resources.

> Launch containers asynchronously by default
> ---
>
> Key: YARN-7327
> URL: https://issues.apache.org/jira/browse/YARN-7327
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Craig Ingram
>Priority: Trivial
> Attachments: yarn-async-scheduling.png
>




[jira] [Updated] (YARN-7327) Launch containers asynchronously by default

2017-10-13 Thread Craig Ingram (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Ingram updated YARN-7327:
---
Description: 
I was recently doing some research into Spark on YARN's startup time and 
observed slow, synchronous allocation of containers/executors. I am testing on 
a 4 node bare metal cluster w/48 cores and 128GB memory per node. YARN was only 
allocating about 3 containers per second. Moreover when starting 3 Spark 
applications at the same time with each requesting 44 containers, the first 
application would get all 44 requested containers and then the next application 
would start getting containers and so on.
 
From looking at the code, it appears this is by design. There is an 
undocumented configuration variable that will enable asynchronous allocation 
of containers. I'm sure I'm missing something, but why is this not the 
default? Is there a bug or race condition in this code path? I've done some 
testing with it and it's been working and is significantly faster.
 
Here's the config:
`yarn.scheduler.capacity.schedule-asynchronously.enable`
 
Any help understanding this would be appreciated.
 
Thanks,
Craig
 

If you're curious about the performance difference with this setting, here are 
the results:
 
The following tool was used for the benchmarks:
https://github.com/SparkTC/spark-bench

h2. async scheduler research
The goal of this test is to determine if running Spark on YARN with async 
scheduling of containers reduces the amount of time required for an application 
to receive all of its requested resources. This setting should also reduce the 
overall runtime of short-lived applications/stages or notebook paragraphs. This 
setting could prove crucial to achieving optimal performance when sharing 
resources on a cluster with dynalloc enabled.
h3. Test Setup
Must update /etc/hadoop/conf/capacity-scheduler.xml (or through Ambari) between 
runs.  
`yarn.scheduler.capacity.schedule-asynchronously.enable=true|false`
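
A minimal capacity-scheduler.xml sketch of the toggle (set in 
/etc/hadoop/conf/capacity-scheduler.xml or through Ambari and updated between 
runs as noted above; this is an illustration, not the full file):

{code:xml}
<property>
  <!-- true for the async runs, false for the sync runs -->
  <name>yarn.scheduler.capacity.schedule-asynchronously.enable</name>
  <value>true</value>
</property>
{code}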

conf files request executor counts of:  
* 2
* 20
* 50
* 100

The apps are being submitted to the default queue on each cluster which caps at 
48 cores on dynalloc and 72 cores on baremetal. The default queue was expanded 
for the last two tests on baremetal so it could potentially take advantage of 
all 144 cores.

h3. Test Environments
h4. dynalloc
4 VMs in Fyre (1 master, 3 workers)
8 CPUs/16 GB per node
model name: QEMU Virtual CPU version 2.5+  
h4. baremetal
4 baremetal instances in Fyre (1 master, 3 workers)
48 CPUs/128GB per node
model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz  

h3. Using spark-bench with timedsleep workload sync

h4. dynalloc
|| requested containers | avg | stdev||
|2 | 23.814900 | 1.110725|
|20 | 29.770250 | 0.830528|
|50 | 44.486600 | 0.593516|
|100 | 44.337700 | 0.490139|

h4. baremetal - 2 queues splitting cluster 72 cores each
|| requested containers | avg | stdev||
|2 | 14.827000 | 0.292290|
|20 | 19.613150 | 0.155421|
|50 | 30.768400 | 0.083400|
|100 | 40.931850 | 0.092160|

h4. baremetal - 1 queue to rule them all - 144 cores
|| requested containers | avg | stdev||
|2 | 14.833050 | 0.334061|
|20 | 19.575000 | 0.212836|
|50 | 30.765350 | 0.111035|
|100 | 41.763300 | 0.182700|

h3. Using spark-bench with timedsleep workload async

h4. dynalloc
|| requested containers | avg | stdev||
|2 | 22.575150 | 0.574296|
|20 | 26.904150 | 1.244602|
|50 | 44.721800 | 0.655388|
|100 | 44.57 | 0.514540|

h5. 2nd run  
|| requested containers | avg | stdev||
|2 | 22.441200 | 0.715875|
|20 | 26.683400 | 0.583762|
|50 | 44.227250 | 0.512568|
|100 | 44.238750 | 0.329712|

h4. baremetal - 2 queues splitting cluster 72 cores each
|| requested containers | avg | stdev||
|2 | 12.902350 | 0.125505|
|20 | 13.830600 | 0.169598|
|50 | 16.738050 | 0.265091|
|100 | 40.654500 | 0.111417|

h4. baremetal - 1 queue to rule them all - 144 cores
|| requested containers | avg | stdev||
|2 | 12.987150 | 0.118169|
|20 | 13.837150 | 0.145871|
|50 | 16.816300 | 0.253437|
|100 | 23.113450 | 0.320744|

  was:
I was recently doing some research into Spark on YARN's startup time and 
observed slow, synchronous allocation of containers/executors. I am testing on 
a 4 node bare metal cluster w/48 cores and 128GB memory per node. YARN was only 
allocating about 3 containers per second. Moreover when starting 3 Spark 
applications at the same time with each requesting 44 containers, the first 
application would get all 44 requested containers and then the next application 
would start getting containers and so on.
 
From looking at the code, it appears this is by design. There is an 
undocumented configuration variable that will enable asynchronous allocation 
of containers. I'm sure I'm missing something, but why is this not the 
default? Is there a bug or race condition in this code path? I've done some 
testing with it and it's been working and is significantly faster.
 
Her

[jira] [Created] (YARN-7327) Launch containers asynchronously by default

2017-10-13 Thread Craig Ingram (JIRA)
Craig Ingram created YARN-7327:
--

 Summary: Launch containers asynchronously by default
 Key: YARN-7327
 URL: https://issues.apache.org/jira/browse/YARN-7327
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Craig Ingram
Priority: Trivial


I was recently doing some research into Spark on YARN's startup time and 
observed slow, synchronous allocation of containers/executors. I am testing on 
a 4 node bare metal cluster w/48 cores and 128GB memory per node. YARN was only 
allocating about 3 containers per second. Moreover when starting 3 Spark 
applications at the same time with each requesting 44 containers, the first 
application would get all 44 requested containers and then the next application 
would start getting containers and so on.
 
From looking at the code, it appears this is by design. There is an 
undocumented configuration variable that will enable asynchronous allocation 
of containers. I'm sure I'm missing something, but why is this not the 
default? Is there a bug or race condition in this code path? I've done some 
testing with it and it's been working and is significantly faster.
 
Here's the config:
`yarn.scheduler.capacity.schedule-asynchronously.enable`
 
Any help understanding this would be appreciated.
 
Thanks,
Craig
 

If you're curious about the performance difference with this setting, here are 
the results:
 
The following tool was used for the benchmarks:
https://github.com/SparkTC/spark-bench

h2. async scheduler research
The goal of this test is to determine if running Spark on YARN with async 
scheduling of containers reduces the amount of time required for an application 
to receive all of its requested resources. This setting should also reduce the 
overall runtime of short-lived applications/stages or notebook paragraphs. This 
setting could prove crucial to achieving optimal performance when sharing 
resources on a cluster with dynalloc enabled.
h3. Test Setup
Must update /etc/hadoop/conf/capacity-scheduler.xml (or through Ambari) between 
runs.  
`yarn.scheduler.capacity.schedule-asynchronously.enable=true|false`

conf files request executor counts of:  
* 2
* 20
* 50
* 100

The apps are being submitted to the default queue on each cluster which caps at 
48 cores on dynalloc and 72 cores on baremetal. The default queue was expanded 
for the last two tests on baremetal so it could potentially take advantage of 
all 144 cores.

h3. Test Environments
h4. dynalloc
4 VMs in Fyre (1 master, 3 workers)
8 CPUs/16 GB per node
model name: QEMU Virtual CPU version 2.5+  
h4. baremetal
4 baremetal instances in Fyre (1 master, 3 workers)
48 CPUs/128GB per node
model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz  

h3. Using spark-bench with timedsleep workload sync

h4. dynalloc
|| conf | avg | stdev||
|spark-on-yarn-schedule-async0.time | 23.814900 | 1.110725|
|spark-on-yarn-schedule-async1.time | 29.770250 | 0.830528|
|spark-on-yarn-schedule-async2.time | 44.486600 | 0.593516|
|spark-on-yarn-schedule-async3.time | 44.337700 | 0.490139|

h4. baremetal - 2 queues splitting cluster 72 cores each
|| conf | avg | stdev||
|spark-on-yarn-schedule-async0.time | 14.827000 | 0.292290|
|spark-on-yarn-schedule-async1.time | 19.613150 | 0.155421|
|spark-on-yarn-schedule-async2.time | 30.768400 | 0.083400|
|spark-on-yarn-schedule-async3.time | 40.931850 | 0.092160|

h4. baremetal - 1 queue to rule them all - 144 cores
||conf | avg | stdev||
|spark-on-yarn-schedule-async0.time | 14.833050 | 0.334061|
|spark-on-yarn-schedule-async1.time | 19.575000 | 0.212836|
|spark-on-yarn-schedule-async2.time | 30.765350 | 0.111035|
|spark-on-yarn-schedule-async3.time | 41.763300 | 0.182700|

h3. Using spark-bench with timedsleep workload async

h4. dynalloc
|| conf | avg | stdev||
|spark-on-yarn-schedule-async0.time | 22.575150 | 0.574296|
|spark-on-yarn-schedule-async1.time | 26.904150 | 1.244602|
|spark-on-yarn-schedule-async2.time | 44.721800 | 0.655388|
|spark-on-yarn-schedule-async3.time | 44.57 | 0.514540|

h5. 2nd run  
|| conf | avg | stdev||
|spark-on-yarn-schedule-async0.time | 22.441200 | 0.715875|
|spark-on-yarn-schedule-async1.time | 26.683400 | 0.583762|
|spark-on-yarn-schedule-async2.time | 44.227250 | 0.512568|
|spark-on-yarn-schedule-async3.time | 44.238750 | 0.329712|

h4. baremetal - 2 queues splitting cluster 72 cores each
|| conf | avg | stdev||
|spark-on-yarn-schedule-async0.time | 12.902350 | 0.125505|
|spark-on-yarn-schedule-async1.time | 13.830600 | 0.169598|
|spark-on-yarn-schedule-async2.time | 16.738050 | 0.265091|
|spark-on-yarn-schedule-async3.time | 40.654500 | 0.111417|

h4. baremetal - 1 queue to rule them all - 144 cores
|| conf | avg | stdev||
|spark-on-yarn-schedule-async0.time | 12.987150 | 0.118169|
|spark-on-yarn-schedule-async1.time | 13.837150 | 0.145871|
|spark-on-yarn-schedule-async2.time | 16.816300 | 0.253437|
|spark-on-yarn-schedule-async3.time | 23.113450 | 0.320744|