Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13152
**[Test build #3291 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3291/consoleFull)** for PR 13152 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13152
**[Test build #3291 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3291/consoleFull)** for PR 13152 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13152
LGTM - sorry this has taken a while. I will merge once tests pass.
Also cc @zsxwing for his attention.
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the
Github user shubhamchopra commented on the issue:
https://github.com/apache/spark/pull/13152
Rebased to master to resolve merge conflicts
Github user shubhamchopra commented on the issue:
https://github.com/apache/spark/pull/13152
Thanks for the suggestions. I have corrected the style check errors and verified that locally, so hopefully there are no more style errors. I have also done a couple of modifications per
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13152
Thanks - this looks pretty good!
I've triggered a new Jenkins run and also left some small comments. It
would be great to add some unit tests (not integration tests) for two of the
classes
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13152
**[Test build #3232 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3232/consoleFull)** for PR 13152 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13152
**[Test build #3232 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3232/consoleFull)** for PR 13152 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13152
**[Test build #3225 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3225/consoleFull)** for PR 13152 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13152
**[Test build #3225 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3225/consoleFull)** for PR 13152 at commit
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/13152
@rxin
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/13152
still lgtm
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/13152
LGTM. Just style comments.
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/13152
You wouldn't have to create a new selector after a failure. That case can be detected by checking whether the number of failed replications has increased, e.g. `if (failedReplications.length >
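The quoted suggestion is cut off in this archive, but the idea is straightforward: keep the existing peer ordering across retries and only re-prioritize when a new failure has actually occurred. A minimal sketch of that check, with illustrative names only (`failedPeers`, `lastFailureCount`, `candidatePeers` are not Spark's actual identifiers):

```scala
import scala.collection.mutable

// Re-prioritize peers only when the failure count grows, instead of
// rebuilding the selector on every replication attempt.
var lastFailureCount = 0
var candidatePeers: Seq[String] = Seq("exec-1", "exec-2", "exec-3")
val failedPeers = mutable.Set.empty[String]

def nextReplicationTarget(): Option[String] = {
  if (failedPeers.size > lastFailureCount) {
    // A failure occurred since the last prioritization: drop failed peers
    // and compute a fresh ordering over the survivors.
    lastFailureCount = failedPeers.size
    candidatePeers = candidatePeers.filterNot(failedPeers.contains)
  }
  candidatePeers.headOption
}
```

This keeps the common (no-failure) path cheap, since the ordering is reused until the failure count changes.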
Github user shubhamchopra commented on the issue:
https://github.com/apache/spark/pull/13152
The state being managed inside getRandomPeer() is also modified in a couple of other places, so it wouldn't be a very clean change to move some of it out of getRandomPeer. Even if that is
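For context, the refactoring under discussion is to stop mutating selection state inside getRandomPeer() and instead let the caller own that state while a stateless policy merely orders the candidates. A hedged sketch of that shape (all names here are illustrative, not the PR's final API):

```scala
import scala.util.Random

// A stateless replication policy: given the current state (held by the
// caller), return candidate peers in preference order.
trait ReplicationPolicy {
  def prioritize(peers: Seq[String],
                 replicatedTo: Set[String],
                 failed: Set[String]): Seq[String]
}

// Random selection, expressed as a pure function of the caller's state.
object RandomPolicy extends ReplicationPolicy {
  def prioritize(peers: Seq[String],
                 replicatedTo: Set[String],
                 failed: Set[String]): Seq[String] =
    Random.shuffle(peers.filterNot(p => replicatedTo.contains(p) || failed.contains(p)))
}
```

Because the policy holds no state of its own, the bookkeeping that getRandomPeer() currently mutates can live in one place in the caller, which is what makes the interface pluggable.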
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/13152
> The topology info is only queried when the executor initiates and is
assumed to stay the same throughout the life of the executor. Depending on the
cluster manager being used, I am assuming the
Github user shubhamchopra commented on the issue:
https://github.com/apache/spark/pull/13152
The topology info is only queried when the executor initiates and is
assumed to stay the same throughout the life of the executor. Depending on the
cluster manager being used, I am assuming
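The behavior described here, where topology is resolved once when the executor initializes and assumed stable for its lifetime, is naturally expressed as a small pluggable hook. A simplified sketch, modeled loosely on the TopologyMapper abstraction this PR (SPARK-15353) introduces; details are approximate, not the exact shipped signatures:

```scala
// Resolve a host's topology (e.g. a rack identifier) once at block manager
// start-up; the result is assumed not to change for the executor's lifetime.
abstract class TopologyMapper {
  /** Return topology info such as "/rack-1" for a host, if known. */
  def getTopologyForHost(hostname: String): Option[String]
}

// Default: no topology information available, so all peers look equivalent
// and replication falls back to random peer selection.
class DefaultTopologyMapper extends TopologyMapper {
  def getTopologyForHost(hostname: String): Option[String] = None
}
```

Querying topology only at initialization avoids a per-replication RPC to the master, at the cost of not tracking topology changes during the executor's lifetime, which is the trade-off being raised in the surrounding comments.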
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/13152
A couple high level questions:
- Rather than send an RPC to the master asking for a worker's topology
info, is it possible for this to be provided at initialization time or
determined based on