Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16819
So your current approach is to have a second connection to the RM, and ask
for the RM's available resources every time the scheduler tries to change the
number of resources.
Did you look at
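For reference, the per-node arithmetic behind such a cap could be sketched as follows. This is an illustrative sketch only: `NodeFree`, `maxExecutorsFor`, and the executor sizes are hypothetical names, not code from this PR or from Spark:

```java
import java.util.List;

class ExecutorCap {
    // Free resources reported for one NodeManager (hypothetical shape).
    record NodeFree(long memoryMb, int vcores) {}

    // Sum, over all nodes, how many executors of a given size would fit.
    static int maxExecutorsFor(List<NodeFree> nodes, long execMemMb, int execCores) {
        int total = 0;
        for (NodeFree n : nodes) {
            // A node hosts as many executors as both its memory and cores allow.
            long perNode = Math.min(n.memoryMb() / execMemMb, (long) n.vcores() / execCores);
            total += (int) perNode;
        }
        return total;
    }
}
```

With two nodes of 8 GiB / 8 vcores each and 2 GiB / 2-core executors, this yields 4 executors per node, 8 in total.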
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
@vanzin What do you think about the current approach? I have tested this on a
Spark hive-thriftserver; `spark.dynamicAllocation.maxExecutors` will
decrease if I kill 4 NodeManagers:
```
17/02
```
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16819
I agree with others, this is not the way to do this. There are different
schedulers in YARN, each with different configs that could affect the actual
resources you get.
If you want to do
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73515/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73515 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73515/testReport)**
for PR 16819 at commit
[`e4b3b0c`](https://github.com/apache/spark/commit/e4b3b0c)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73515 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73515/testReport)**
for PR 16819 at commit
[`e4b3b0c`](https://github.com/apache/spark/commit/e4b3b0c)
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16819
Getting the config only at the beginning, to me, is not an acceptable
solution.
Getting it every once in a while is better, but it's not the only possible
approach. I even suggest something
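One way to read "every once in a while" is a timer-driven refresh: recompute the effective cap on a fixed cadence instead of querying the RM on every scheduler decision. The sketch below is illustrative only; the interval and the fetch/apply hooks are made-up names, not how Spark's ExecutorAllocationManager is actually wired:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.IntConsumer;
import java.util.function.IntSupplier;

class PeriodicCapRefresher {
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();

    // Poll the cluster manager at a fixed cadence, not once per request.
    void start(long intervalSec, IntSupplier fetchCap, IntConsumer applyCap) {
        timer.scheduleWithFixedDelay(
            () -> applyCap.accept(fetchCap.getAsInt()), 0, intervalSec, TimeUnit.SECONDS);
    }

    void stop() {
        timer.shutdownNow();
    }
}
```

The point of the design is that the scheduler path stays cheap: it reads a cached cap, and only the background task pays the RM round-trip.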
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
@vanzin We must pull the configuration from the ResourceManager; the
ResourceManager can't push it.
So set the max before each stage? That feels too frequent.
In fact, this is suitable for peri
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16819
I agree there's room for improvement in the current code; I even asked
SPARK-18769 to be filed to track that work.
But I don't think setting the max to a fixed value at startup is the right
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73282/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73282 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73282/testReport)**
for PR 16819 at commit
[`cd306e2`](https://github.com/apache/spark/commit/cd306e2)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73282 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73282/testReport)**
for PR 16819 at commit
[`cd306e2`](https://github.com/apache/spark/commit/cd306e2)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73277/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73277 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73277/testReport)**
for PR 16819 at commit
[`fabe2c5`](https://github.com/apache/spark/commit/fabe2c5)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73277 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73277/testReport)**
for PR 16819 at commit
[`fabe2c5`](https://github.com/apache/spark/commit/fabe2c5)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73151/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73151 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73151/testReport)**
for PR 16819 at commit
[`8e99701`](https://github.com/apache/spark/commit/8e99701)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73151 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73151/testReport)**
for PR 16819 at commit
[`8e99701`](https://github.com/apache/spark/commit/8e99701)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73147/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73147 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73147/testReport)**
for PR 16819 at commit
[`4f81680`](https://github.com/apache/spark/commit/4f81680)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73147 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73147/testReport)**
for PR 16819 at commit
[`4f81680`](https://github.com/apache/spark/commit/4f81680)
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
@srowen Dynamically setting `spark.dynamicAllocation.maxExecutors` can avoid
some strange problems:
1. [Spark application hang when dynamic allocation is
enabled](https://issues.apache.org/jira/
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16819
What problem does this solve, though? Calling that function is not a
problem. It seems like you get the right behavior in both cases. Are you saying
there's some RPC problem? The target goes very high
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
It will reduce the number of calls to
[CoarseGrainedSchedulerBackend.requestTotalExecutors()](https://github.com/apache/spark/blob/v2.1.0/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGr
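The call-reduction argument can be illustrated with a small sketch: clamp the requested target to a cap and skip sends that would repeat the last value. `send` here is a hypothetical stand-in for the requestTotalExecutors RPC, not Spark's actual code:

```java
import java.util.function.IntConsumer;

class TargetForwarder {
    private final int cap;
    private final IntConsumer send;  // stand-in for the cluster-manager RPC
    private int lastSent = -1;

    TargetForwarder(int cap, IntConsumer send) {
        this.cap = cap;
        this.send = send;
    }

    void request(int target) {
        int clamped = Math.min(target, cap);
        if (clamped != lastSent) {   // skip redundant RPCs
            send.accept(clamped);
            lastSent = clamped;
        }
    }
}
```

Once the target saturates at the cap, further increases collapse into no-ops, which is where the reduction in calls would come from.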
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16819
I agree. Resource managers generally expect applications to request more
than what's available already so we don't have to do it again ourselves in
Spark.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72434/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #72434 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72434/testReport)**
for PR 16819 at commit
[`97e5eee`](https://github.com/apache/spark/commit/97e5eee)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #72434 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72434/testReport)**
for PR 16819 at commit
[`97e5eee`](https://github.com/apache/spark/commit/97e5eee)
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16819
I don't think this is a necessary change. Already, you can't ask for more
resources than the cluster has; the cluster won't grant them. Capping it here
means the app can't use more resources if the c