Github user dragos commented on the issue:
https://github.com/apache/spark/pull/10949
I think both @tnachen and I have moved on to the non-Spark world in the
meantime. Anyway, neither of us had commit rights. I agree it's a pity to drop
it; perhaps @andrewor14 could help
Github user dragos commented on the issue:
https://github.com/apache/spark/pull/17982
If I'm not mistaken, the current patch will fail on files passed via `-i` to
spark-shell, since Spark is initialized after `process` is done (the
SparkContext is not available during initialization)
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10924#issuecomment-216759444
> On 4 May 2016, at 02:22, Sebastien Rainville <notificati...@github.com>
wrote:
>
> @dragos I finally did the change. Sorr
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-215110842
ping @atongen
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10924#issuecomment-215111311
ping @sebastienrainville
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/12403#issuecomment-210375089
LGTM
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-205275163
Hey @atongen, I think this is really close to being merged, can you please
rebase?
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/12101#issuecomment-204390585
Thanks @jayv for the backport!
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-203989761
LGTM! @andrewor14 please have a look
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10924#issuecomment-203396355
Cool, looking forward to pushing this over the finish line!
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-203396085
@atongen can you please rebase? The tests look good, but I'd like to see
the test suite passing.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-203394614
Sounds good. Who can close this PR?
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-203394562
@jayv can you please rebase?
@andrewor14 the last issue (escaping characters for the shell command) has
been fixed. Please take a look.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-200406113
Yeap, I just opened a PR on your repo. If you merge it, it should be
reflected here.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-199886574
It works, but I found a couple of corner cases. I guess you need to escape
`\` as well. For instance, a string ending in `\`, or a string like `\"?` won't
be q
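The corner cases mentioned above (a string ending in `\`, or one containing `\"`) are exactly where naive quoting breaks. A minimal illustration in Python, not the code under review: `naive_quote` is a hypothetical broken quoter, while `shlex.quote` shows a safe POSIX-shell alternative.

```python
import shlex

def naive_quote(s):
    # Broken: escapes double quotes but not backslashes, so a trailing
    # backslash ends up escaping the closing quote in the shell.
    return '"' + s.replace('"', '\\"') + '"'

tricky = ['ends with \\', 'contains \\" already']
for s in tricky:
    # shlex.quote escapes conservatively and handles both corner cases
    print(repr(s), '->', naive_quote(s), 'vs', shlex.quote(s))
```

With `naive_quote`, a string ending in a backslash produces `"ends with \"`, where the backslash swallows the closing quote; `shlex.quote` single-quotes the string instead, so the backslash stays literal.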
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11887#discussion_r56994916
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/ui/MesosClusterPage.scala ---
@@ -115,4 +144,58 @@ private[mesos] class MesosClusterPage(parent
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11887#discussion_r56994609
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/ui/MesosClusterPage.scala ---
@@ -76,6 +104,7 @@ private[mesos] class MesosClusterPage(parent
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11887#issuecomment-199840159
LGTM apart from minor comments.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10924#issuecomment-199766699
@sebastienrainville sorry for my confusion. Fine-grained mode does not
respect `spark.cores.max`, so my comment does not apply. Can you just do the
small refactoring
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-199332959
I can pick this up tomorrow.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11272#issuecomment-195727884
AFAIK this could go in. I did test it manually and things worked well.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11369#issuecomment-191692543
LGTM.
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11369#discussion_r54859540
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArguments.scala
---
@@ -44,7 +44,7 @@ private[mesos] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11272#issuecomment-190709339
Yes, we could have an integration test for this; it shouldn't be too hard to
add. The basic idea is to decrease the network timeout and have a job that
exceeds
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/9925#issuecomment-190243686
I like @jodersky's solution. As I mentioned on the JIRA, Slick uses the
same operator, so at least there's a precedent, and some people might find it
familiar
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11277#issuecomment-188390919
`SPARK_DAEMON_JAVA_OPTS` seems to be understood by the other daemons, so
LGTM!
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r53806998
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosExternalShuffleService.scala
---
@@ -17,69 +17,88 @@
package
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-187770951
Hey, @atongen will you have time to look into the additional test?
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10924#issuecomment-187770729
You are right about having two different settings. Makes sense. Let's go
with that for the moment.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-187770505
@jayv please let me know if you will look into this one. If you're busy,
I'm happy to take over, and will happily start with your escaping algorithm ;-)
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11292#issuecomment-187770227
LGTM
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11311#issuecomment-187770099
The failure is spurious. LGTM.
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r53802880
--- Diff:
network/shuffle/src/main/java/org/apache/spark/network/shuffle/protocol/BlockTransferMessage.java
---
@@ -40,7 +41,8 @@
/** Preceding
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r53802611
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosExternalShuffleService.scala
---
@@ -17,69 +17,88 @@
package
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r53798220
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosExternalShuffleService.scala
---
@@ -17,69 +17,88 @@
package
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r53795420
--- Diff:
network/shuffle/src/main/java/org/apache/spark/network/shuffle/protocol/BlockTransferMessage.java
---
@@ -40,7 +41,8 @@
/** Preceding
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11272#issuecomment-187217805
I'm done reviewing; I only have a couple of small observations.
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r53634517
--- Diff:
network/shuffle/src/main/java/org/apache/spark/network/shuffle/mesos/MesosExternalShuffleClient.java
---
@@ -53,21 +65,55 @@ public
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r53629422
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosExternalShuffleService.scala
---
@@ -17,69 +17,92 @@
package
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11272#discussion_r53629124
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosExternalShuffleService.scala
---
@@ -17,69 +17,92 @@
package
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11292#discussion_r53619838
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/ShuffleInMemorySorter.java ---
@@ -30,7 +30,9 @@
private static final class SortComparator
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11292#discussion_r53619820
--- Diff:
unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -850,7 +850,7 @@ public int compareTo(@Nonnull final UTF8String other
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11292#discussion_r53620013
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/ShuffleInMemorySorter.java ---
@@ -30,7 +30,9 @@
private static final class SortComparator
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11292#discussion_r53618380
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -718,6 +720,7 @@ object SparkSubmit {
throw new
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-187137035
OK, I retract that. We need to use `shell=true`, so @jayv if you have that
character escape version, let's go that way.
The reason why it won't work
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-186542740
Shouldn't be a problem, I just missed it in my first try. I think it's not
too hard to make it work with `shell=false`, I just need to spend a bit more
time
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-186283938
My commit is not working, BTW, I'll come back to this (the executable needs
to be a complete path to `spark-class`, right now it's a bash-ism: `cd
spark-1.*; ./bin
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11272#issuecomment-186283689
I'll have to come back to this on Monday. Judging by the description, it
looks good.
GitHub user dragos opened a pull request:
https://github.com/apache/spark/pull/11271
[minor][docs][mesos] Clarify that Mesos version is a lower bound.
## What changes were proposed in this pull request?
Clarify that 0.21 is only a **minimum** requirement.
## How
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10924#discussion_r53344844
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -254,53 +258,65 @@ private[spark] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-184761659
I pushed [this commit](https://github.com/dragos/spark/commit/a6f9df1) with
the general idea, but I didn't get to test it much. I'll come back to it
tomorrow
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10924#issuecomment-184743925
- could we have only one rejection delay setting?
- why not add the same logic in fine-grained mode as well?
..and sorry for the delay in reviewing
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10924#discussion_r53029744
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -254,53 +258,65 @@ private[spark] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10326#issuecomment-184709574
@SleepyThread, user name checks! :) Any progress?
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-184707515
@jayv did you have the chance to look at this again?
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-184706654
Good idea about the unit test. I don't think it's too hard to add one along
the lines of what's already in `MesosClusterSchedulerSuite`.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11207#issuecomment-184619429
@bbossy thanks for picking this up!
I have a problem with the bandwidth this design implies. For instance, my
state.json is 200KB (a cluster of 1 master and 2
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10701#issuecomment-183929182
> On 14 Feb 2016, at 10:01, Timothy Chen <notificati...@github.com> wrote:
>
> @dragos you mean the framework no longer shows up in the
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r52422595
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -260,113 +257,208 @@ private[spark] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10984#issuecomment-181384158
@jodersky @CodingCat can you have a look? I think you've looked at this part
of the code in the past.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-180752078
LGTM! Great work, @mgummelt!
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51989924
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -373,40 +451,25 @@ private[spark] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-179843161
@Astralidea it will deploy more than one executor on the same slave if
there are enough resources and `spark.cores.max` wasn't reached yet. It's just
that it will first
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/11047#discussion_r51875638
--- Diff: docs/running-on-mesos.md ---
@@ -246,18 +246,13 @@ In either case, HDFS runs separately from Hadoop
MapReduce, without being schedu
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10770#issuecomment-179844830
To me this sounds rather a misconfiguration of your environment.. or at
least, a very peculiar setup. I'm worried about adding more complexity for a
scenario
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-179958972
@drcrallen probably [my
comment](https://github.com/apache/spark/pull/10319#discussion_r51693326) and
@andrewor14's reply were buried by the GitHub interface
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51704761
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -373,40 +451,25 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51693326
--- Diff: docs/running-on-mesos.md ---
@@ -387,6 +387,13 @@ See the [configuration page](configuration.html) for
information on Spark config
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51703515
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/11047#issuecomment-179150956
LGTM.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-179266568
I see the same behavior with master. I think this is a regression
introduced when Akka was removed and communication switched to Netty.
Here's what
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-179250730
@Astralidea this PR implements round-robin on the received offers. That
means it will try to schedule executors on all slaves in the current set of
offers, before
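The round-robin placement described here can be sketched as follows. This is a minimal, hypothetical illustration of the idea (plain dicts standing in for Mesos offers, not Spark's actual scheduler code): each pass launches at most one executor per slave, so executors spread across all offered slaves before any slave gets a second one.

```python
def round_robin_launch(offers, cores_per_executor, max_cores):
    """Cycle through offers, launching one executor per offer per pass,
    so executors spread across all slaves before doubling up."""
    launched = []  # (slave_id, cores) pairs, in launch order
    remaining = {o["slave_id"]: o["cores"] for o in offers}
    total = 0
    progress = True
    while progress and total < max_cores:
        progress = False
        for slave_id, free in list(remaining.items()):
            if free >= cores_per_executor and total < max_cores:
                launched.append((slave_id, cores_per_executor))
                remaining[slave_id] = free - cores_per_executor
                total += cores_per_executor
                progress = True
    return launched
```

For two slaves offering 4 cores each, 2 cores per executor and a 6-core cap, this launches on `a`, then `b`, then `a` again: the second executor on `a` only appears after every slave has received its first.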
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-179294468
It seems this was reported already
[SPARK-12583](https://issues.apache.org/jira/browse/SPARK-12583), I somehow
missed it...
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-179251806
I'm having trouble running this with dynamic allocation. Did you test it
in that scenario?
I'm seeing disconnects from the driver, leading
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10770#issuecomment-179290756
I am not an expert in this area of the code, nor in security... so, you'd like
to run the Mesos executor as one user, but reading from HDFS using another
user? Why can't
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/9301#issuecomment-178502301
Ok, will do.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-178502476
As far as I'm concerned, that's the only thing (I'll have to test again on
a real Mesos cluster).
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-178499263
@andrewor14 this is the one that should go forward. The first sentence of
this PR says:
>We have a similar need to what is proposed in #10768 by @Astrali
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10949#issuecomment-178500939
Regarding sharing code: The logic to check constraints is already shared.
The actual resource processing isn't. Maybe there is room to share more logic.
I opened [SPARK
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10768#issuecomment-178503147
@Astralidea I think we should focus on getting #10949 in, which implements
exactly this behavior.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-178503833
I didn't have time to look at this in detail, I'll do so this afternoon.
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-178501728
For the record,
[SPARK-10444](https://issues.apache.org/jira/browse/SPARK-10444)
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51594185
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51599060
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51599986
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51600410
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51600547
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -373,40 +451,25 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51600867
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -426,23 +489,23 @@ private[spark] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-178696871
@mgummelt this looks really good! I have a few comments. I still have to
run this PR with dynamic allocation and see it in action!
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51600669
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51592789
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51593473
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51593041
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51593225
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,99 +239,182 @@ private[spark] class
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51686128
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -245,113 +240,207 @@ private[spark] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-177912182
The error seems spurious:
```
[info] *** 1 TEST FAILED ***
[error] Failed: Total 385, Failed 1, Errors 0, Passed 384, Ignored 2
[error] Failed tests
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-177912252
retest this please
Github user dragos commented on a diff in the pull request:
https://github.com/apache/spark/pull/10993#discussion_r51404529
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -65,17 +65,10 @@ private[spark] class
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176761491
I don't see the point of holding back a review until an import is moved two
lines above. Better give the feedback now, so there aren't so many
back-and-forths. This has
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176828137
Not sure if this is what you tried, but you can run only the style checks
using `dev/lint-scala`