Github user tnachen commented on the issue:
https://github.com/apache/spark/pull/13051
Other than the style nit, I think it LGTM. Once you've fixed it, we need to ask
a Spark committer to review it.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13051#discussion_r65578085
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---
@@ -119,12 +122,14 @@ private[mesos] object
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-218263708
Seems like @nraychaudhuri is busy, I'll take this PR and update it myself.
We definitely need this to be merged as it's quite useful for testing.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/13026#issuecomment-218261027
And what version of Mesos are you using? Also are you using docker?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/13026#issuecomment-218260952
@sayevsky I'm not sure it's Mesos that's using a colon as a separator; what
is in your stdout when the driver runs? I'm guessing perhaps that the command
that we
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217341006
Yes, I am aware of multiple roles, but I'm not sure in what situation you would
want to register with multiple roles but only use one role's resources?
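The multi-role question above can be sketched without any Mesos dependencies. The `Resource` case class and `RoleFilter` below are hypothetical stand-ins for the Mesos offer protobufs, assuming (as in Mesos) that `*` is the wildcard role every framework receives by default:

```scala
// Hypothetical stand-in for a Mesos offered resource (name, amount, role).
// This is an illustrative sketch, not the actual Spark/Mesos code.
case class Resource(name: String, amount: Double, role: String)

object RoleFilter {
  // Keep only resources whose role is in the accepted set. Registering with
  // several roles but consuming one would mean acceptedRoles is a subset of
  // the registered roles.
  def select(offered: Seq[Resource], acceptedRoles: Set[String]): Seq[Resource] =
    offered.filter(r => acceptedRoles.contains(r.role))
}
```

With `acceptedRoles = Set("prod")`, resources offered under `*` are dropped, which is the "register with multiple roles but only use one role's resources" case being discussed.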
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217337832
I see, I wonder besides the wildcard role if it ever makes sense to have
two lists of roles, one set that you are registered with and another set you
want to accept
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217167410
ok to test
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217167363
jenkins please test
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217167334
So from what I can see, what you really want is to not select
resources from the * role, right?
Otherwise if you're getting offers from other roles you
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-212173748
@andrewor14 @nraychaudhuri @dragos Sorry, I'm not suggesting we close this
PR; we still need the flag since we want to be able to either failover
automatically
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60310542
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1978,57 +1978,134 @@ private[spark] object Utils extends Logging
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60257784
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1978,57 +1978,134 @@ private[spark] object Utils extends Logging
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60257720
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1978,57 +1978,134 @@ private[spark] object Utils extends Logging
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60255016
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60254817
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60254758
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60254665
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60254778
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60254689
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60254568
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60254354
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60254407
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60253926
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60253411
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -353,4 +371,247 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60253302
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -171,7 +186,8 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60253331
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -333,7 +350,8 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60253191
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---
@@ -45,7 +45,8 @@ private[mesos] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r60253095
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala ---
@@ -154,7 +161,8 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12403#issuecomment-211977488
@andrewor14 PTAL
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12403#issuecomment-210225802
@andrewor14 fixed the comment.
I just checked the other places where isYarnCluster is used, and looks like
they're still correct. Spark on Mesos doesn't yet support
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12403#issuecomment-210209600
@dragos @andrewor14 PTAL, it's a very small change.
GitHub user tnachen opened a pull request:
https://github.com/apache/spark/pull/12403
[SPARK-14645][MESOS] Fix python running on cluster mode mesos to have non
local uris
## What changes were proposed in this pull request?
Fix SparkSubmit to allow non-local python uris
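The actual SparkSubmit change isn't shown in this digest, so as a hedged illustration of what "non-local" means here, a URI can be classified by its scheme. The `UriCheck` object and its rule set are an assumption for this sketch, following Spark's documented convention that `local:` marks a path already present on each node and a bare path or `file:` URI resolves on the local filesystem:

```scala
import java.net.URI

object UriCheck {
  // A URI counts as "local" when it has no scheme or an explicit local:/file:
  // scheme; everything else (hdfs:, http:, s3:, ...) would need to be fetched.
  // Illustrative approximation only, not the SparkSubmit implementation.
  def isLocalUri(uri: String): Boolean = {
    val scheme = Option(new URI(uri).getScheme)
    scheme.forall(s => s == "local" || s == "file")
  }
}
```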
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8349#issuecomment-210022398
That's odd, it should have worked when we merged; let me try it again locally
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/12218#issuecomment-206618482
LGTM
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-198558275
@jayv are you planning to update this PR with that commit?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-197585699
I think we should add tests, and I don't think it requires that much
refactoring; if you look at MesosClusterSchedulerSuite you can see the test
"can handle mul
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11272#issuecomment-195576917
@andrewor14 I'll try to create a test to verify this, when is the 2.0
closing date?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11369#issuecomment-193372121
@andrewor14 this LGTM to me and @dragos , can you take a look?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11157#issuecomment-192521461
Can you fix the whitespace in general in your patch? There are quite a few
extra whitespace issues in Utils.scala
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r55104599
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1940,57 +1940,131 @@ private[spark] object Utils extends Logging
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r55104611
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1940,57 +1940,131 @@ private[spark] object Utils extends Logging
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r55104561
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala ---
@@ -376,8 +381,10 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-192042971
I just found out that this is actually a bug in Mesos, where we cannot
store a duration that's larger than int64_t. I filed a Mesos jira for this
(https
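The JIRA link above is cut off and can't be recovered, but the int64_t limit itself is easy to demonstrate: an int64 nanosecond counter tops out at 2^63-1 ns, roughly 106751 days (about 292 years), and Scala's own `scala.concurrent.duration.Duration` enforces the same bound. A small sketch of that limit (the `DurationLimit` helper is mine, not from the PR):

```scala
import scala.concurrent.duration._
import scala.util.Try

object DurationLimit {
  // Nanoseconds per day is 24 * 60 * 60 * 1e9; dividing Long.MaxValue by it
  // gives the largest whole number of days an int64 ns field can hold.
  val maxDays: Long = Long.MaxValue / (24L * 60 * 60 * 1000 * 1000 * 1000)

  // Scala's FiniteDuration throws for durations beyond +-(2^63 - 1) ns,
  // mirroring what an int64_t-backed store can represent.
  def fits(days: Long): Boolean = Try(Duration(days, DAYS)).isSuccess
}
```

So any duration longer than about 292 years simply cannot be round-tripped through an int64 nanosecond field, which is consistent with the bug described above.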
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-191015322
I've tested this myself and it is indeed now doing the correct behavior when
the flag is not added. I'll need to dig more; @nraychaudhuri have you tried
this as well
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11369#issuecomment-190973374
@s-urbaniak I think we should keep the existing -p since users might
actually depend on it. Otherwise this patch LGTM. @dragos you want to take a
look?
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11369#discussion_r54657202
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js ---
@@ -135,7 +135,7 @@ $(document).ready(function
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11369#discussion_r54610723
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js ---
@@ -135,7 +135,7 @@ $(document).ready(function
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11369#discussion_r54610842
--- Diff: core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArguments.scala ---
@@ -44,7 +44,7 @@ private[mesos] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11272#issuecomment-190833991
@dragos Yes it shouldn't block the PR, just mentioning it as I see the need
for it. I have two comments on this, otherwise it LGTM too.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11272#issuecomment-190475702
This looks like a great candidate to add integration tests for in the
mesos-spark-integration-tests suite. Ideally we have something long running
that we can run
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r54507553
--- Diff: network/shuffle/src/main/java/org/apache/spark/network/shuffle/mesos/MesosExternalShuffleClient.java ---
@@ -53,21 +65,58 @@ public
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r54506572
--- Diff: core/src/main/scala/org/apache/spark/deploy/mesos/MesosExternalShuffleService.scala ---
@@ -93,7 +113,8 @@ private[mesos] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11369#issuecomment-189414154
@s-urbaniak You need to fix the scala style error:
/home/jenkins/workspace/SparkPullRequestBuilder/core/src/test/scala/org/apache/spark/scheduler/cluster/mesos
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11277#issuecomment-188388392
@andrewor14 Fixed your comment now, PTAL
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11277#issuecomment-188387223
@andrewor14 This is so that users can set Java properties when
running the Mesos cluster dispatcher; it's really useful for testing the
dispatcher
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-186534096
@andrewor14 Sorry for the mess-up; I kept thinking the code was ready and just
needed a rebase and to address comments. The rebase and comments did cause some
style problems
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8872#discussion_r53546468
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
@@ -442,9 +443,12 @@ private[spark] class
GitHub user tnachen opened a pull request:
https://github.com/apache/spark/pull/11281
[MESOS] Allow multiple dispatchers to be launched.
## What changes were proposed in this pull request?
Users might want to start multiple mesos dispatchers, as each dispatcher
can
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-186454509
retest please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-186454538
retest this please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-186443823
@dragos Ah yes, if you need bash completion then we have to use shell = true.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8872#discussion_r53531469
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
@@ -452,38 +456,42 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11277#issuecomment-186399031
@andrewor14 @dragos It's a pretty small change, PTAL
GitHub user tnachen opened a pull request:
https://github.com/apache/spark/pull/11277
[SPARK-13387] Add support for SPARK_DAEMON_JAVA_OPTS with
MesosClusterDispatcher.
## What changes were proposed in this pull request?
Add support for SPARK_DAEMON_JAVA_OPTS
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10924#discussion_r53116209
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala ---
@@ -254,53 +258,65 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/11207#issuecomment-184623985
I also agree with @dragos, and I think we should keep the same semantics by
having a heartbeat instead.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-184461120
@andrewor14 PTAL
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10770#issuecomment-184183969
What's the problem you're running into when you set SPARK_USER? You can
still run Mesos as root, but Mesos with switch_user enabled should switch you
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-183898284
We already have some shared logic of using multiple resources from
different roles, it just wasn't plugged in when I wrote the cluster scheduler.
I think now we have
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8872#discussion_r52843425
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
@@ -525,14 +531,14 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8872#discussion_r52843376
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
@@ -525,14 +531,14 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-183855606
@dragos you mean the framework no longer shows up in the UI? The console
output doesn't seem to suggest it's gone.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-183122191
Btw can you add a quick unit test for this? We've added tests before
already, so it should be straightforward to do so.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r52188015
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala ---
@@ -260,113 +257,208 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-181444389
Just one comment, overall LGTM
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r52188465
--- Diff: docs/configuration.md ---
@@ -825,13 +825,18 @@ Apart from these, the following properties are also
available, and may be useful
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51281443
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala ---
@@ -364,7 +379,23 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176845715
@dragos we can certainly review it first; @andrewor14 has been educating me
about the review process and how the Spark community typically reviews things
after it clears CI
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176848352
I think besides the only comment I have everything else LGTM.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176892060
retest this please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176972121
retest this please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176972208
@drcrallen sorry I don't think everyone has the permissions to trigger
jenkins, I'll help watch this
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10057#issuecomment-176599484
retest this please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-176599538
Looks like there are more Scala style rules now; it's finally passing!
@andrewor14 PTAL
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-176377522
retest this please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10057#issuecomment-176526642
retest this please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176526812
@drcrallen sorry for the delay, can you please fix the scala style tests
first? will take a look once you update it.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10370#discussion_r50775070
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
@@ -440,6 +446,9 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-172041940
I think in either case just looking at stop isn't enough, since we are
relying on the callback to empty the executors map for us to exit the loop
before the timeout
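The stop-and-wait concern above can be sketched as a bounded wait: instead of trusting `stop` alone, poll until the status-update callback has emptied the executor map or a deadline passes. The `ShutdownWait` object and its names are hypothetical, not the actual CoarseMesosSchedulerBackend code:

```scala
import scala.collection.concurrent.TrieMap

object ShutdownWait {
  // Wait until `executors` is empty (the status-update callback is assumed to
  // remove entries as tasks terminate) or `timeoutMs` elapses. Returns true
  // if all executors terminated in time.
  def awaitTermination(executors: TrieMap[String, String],
                       timeoutMs: Long,
                       pollMs: Long = 10L): Boolean = {
    val deadline = System.nanoTime() + timeoutMs * 1000000L
    while (executors.nonEmpty && System.nanoTime() < deadline) {
      Thread.sleep(pollMs)
    }
    executors.isEmpty
  }
}
```

This is the shape of the "wait until all tasks are terminated before shutdown" approach discussed in the thread: the loop exits early only if the callback drains the map, and otherwise the timeout bounds the shutdown.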
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8872#discussion_r49891184
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
@@ -358,9 +358,10 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10701#discussion_r49760002
--- Diff: core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArguments.scala ---
@@ -97,6 +102,7 @@ private[mesos] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-171723201
Besides @skyluc's comments and mine, I think this patch LGTM. Have you
tested this btw?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-171723091
jenkins please test
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-171567791
@drcrallen about waiting suggestions: the best way from the scheduler side
is to wait until all tasks are terminated when you'd like to shut down. I'm
thinking
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r49696555
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala ---
@@ -364,7 +380,23 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10057#issuecomment-168780176
@andrewor14 Can you take a look at this PR sometime this week?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-168781103
@andrewor14 @dragos I think we figured out the testing problem at this
point. I've tested this locally myself, so @dragos if you'd like to try it out
let me know
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r48770592
--- Diff: core/src/main/scala/org/apache/spark/SparkEnv.scala ---
@@ -245,10 +245,19 @@ object SparkEnv extends Logging {
val securityManager
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r48771144
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -854,7 +854,8 @@ private[spark] object Utils extends Logging {
* Get the local