Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@srowen ping
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17031
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17031
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/74020/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17031
**[Test build #74020 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/74020/testReport)** for PR 17031 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17031
**[Test build #74020 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/74020/testReport)** for PR 17031 at commit
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@skonto updated the description.
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@srowen To support increasing the default, I've had to:
- make refuse_seconds configurable
- factor out `declineOffer` so the dispatcher can use it in addition to the
coarse grained
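The two changes listed above can be sketched roughly as follows. This is a minimal illustration, not Spark's actual code: the config key, default value, and helper name are all hypothetical.

```scala
// Sketch of the changes described above: refuse_seconds read from a
// configurable setting, so a single decline helper can be shared by the
// dispatcher and the coarse-grained scheduler. Key and default are
// hypothetical, not Spark's real configuration.
object OfferDeclineHelper {
  private val RefuseSecondsKey = "spark.mesos.refuseSeconds" // hypothetical key
  private val DefaultRefuseSeconds = 120.0                   // hypothetical default

  // Resolve refuse_seconds from user configuration, falling back to the default.
  def refuseSeconds(conf: Map[String, String]): Double =
    conf.get(RefuseSecondsKey).map(_.toDouble).getOrElse(DefaultRefuseSeconds)
}
```

The point of factoring this out is that both schedulers decline offers with the same filter duration instead of each hard-coding its own value.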
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/17031
@mgummelt do we want to keep the suppress/revive technique?
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17031
Compared to the title, this still looks like a significant change. Is the
intent something different from the JIRA? This doesn't just increase a default.
I don't have any opinion on the changes,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17031
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73883/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17031
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17031
**[Test build #73883 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73883/testReport)** for PR 17031 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17031
**[Test build #73883 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73883/testReport)** for PR 17031 at commit
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@srowen Just to move things along, I removed everything not directly
relevant to this PR.
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@skonto I completely agree that this is a cluster-wide issue, but
unfortunately that's the state of things. In the long-term, optimistic offers
in Mesos should fix this.
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@srowen Yes, most of the code is refactoring that I came across when
solving this. If that's going to delay this being merged, please let me know
and I can remove the refactoring.
---
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/17031
@srowen There are parts that are for refactoring purposes only.
---
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/17031
@mgummelt LGTM. Thanks for the clarifications. @srowen can we get a merge?
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@skonto Any other concerns? Can I get a LGTM?
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
Your understanding is correct. You must set refuse_seconds for all your
frameworks to some value N, such that N >= #frameworks. So for this change, if
some operator is running >120 frameworks,
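The sizing rule quoted above can be expressed as a one-line check. This is purely illustrative; the helper name is hypothetical and not part of the PR.

```scala
// Illustration of the constraint described above: if every framework
// declines offers for N seconds (refuse_seconds), N must be at least the
// number of frameworks, so an offer can cycle through all of them before
// the decline filters start expiring. Hypothetical helper, not PR code.
def refuseSecondsCoversFrameworks(refuseSeconds: Int, numFrameworks: Int): Boolean =
  refuseSeconds >= numFrameworks
```

Under this rule, an operator running more frameworks than the chosen refuse_seconds value would need to raise the setting further, which is why making it configurable matters.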
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/17031
But this time is the refuse time, correct? As stated here:
https://issues.apache.org/jira/browse/MESOS-3202 I have 30 seconds for some
other framework to accept resources in the list otherwise the