Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-69221016
This is failing tests because `conf/docker.properties.template` does not
have the apache license header. You'll need to add this file to `.rat-excludes`.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
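The fix described above can be sketched as a one-liner; this assumes, as the comment implies, that `.rat-excludes` at the repository root lists bare file names for the RAT license check to skip:

```shell
# Append the template to RAT's exclusion list so the Apache license
# header check skips it (file name taken from the comment above).
echo "docker.properties.template" >> .rat-excludes
```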
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-69221113
@hellertime could you add `[SPARK-2691]` to the title?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-69223911
[Test build #25246 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25246/consoleFull) for PR 3074 at commit
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-69223979
Ok. I've excluded the example properties file and updated the title.
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934167
Jenkins this is ok to test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934666
[Test build #25113 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25113/consoleFull) for PR 3074 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934672
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934655
[Test build #25113 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25113/consoleFull) for PR 3074 at commit
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68966151
Jenkins please test again
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68791982
@ash211 Can you take a look at this patch again?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68802762
// I now have permissions to do this
Jenkins this is ok to test.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68804188
@ash211 I see, I've asked Matei that question before and he responded on
the mailing list that since Mesos integration is not a big piece of code any
committer can jump
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68805203
@hellertime if you don't have time I can write up a test in a new PR. I'll
make sure you're credited for the fix.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68808789
I've got it nearly complete; I just hadn't quite finished it. I'll work on it
tonight, and that should be all the time I need.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68803247
Embarrassingly, through a combination of the holidays and other obligations
I've yet to submit my test for this. I've got it mostly complete, but right now
there is
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68812181
@hellertime sounds good!
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68802858
Unfortunately I'm more of an interested bystander than a real code
reviewer on this PR. There doesn't seem to be a go-to person for asking about
Mesos either --
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68826850
Alright. I've added a test to the `MesosSchedulerBackendSuite` which checks
that the Spark conf properties correctly populate the DockerInfo fields.
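To make the shape of that test concrete: the real suite exercises the Mesos protobuf builders (DockerInfo), which require the mesos jar, so the following is only a hedged, self-contained sketch. A hypothetical `Volume` case class stands in for the protobuf `Volume.Builder`, and the parsing of a `"[host_path:]container_path[:rw|:ro]"` volume spec (the kind of conf-to-DockerInfo plumbing being tested) is illustrative, not the patch's exact code:

```scala
// Hypothetical stand-in for the Mesos protobuf Volume type.
case class Volume(hostPath: Option[String], containerPath: String, readOnly: Boolean)

// Parse a comma-separated volume spec: "[host_path:]container_path[:rw|:ro]".
def parseVolumesSpec(spec: String): Seq[Volume] =
  spec.split(",").map(_.trim).filter(_.nonEmpty).toSeq.flatMap { entry =>
    entry.split(":") match {
      case Array(cp)           => Some(Volume(None, cp, readOnly = false))
      case Array(cp, "ro")     => Some(Volume(None, cp, readOnly = true))
      case Array(cp, "rw")     => Some(Volume(None, cp, readOnly = false))
      case Array(hp, cp)       => Some(Volume(Some(hp), cp, readOnly = false))
      case Array(hp, cp, "ro") => Some(Volume(Some(hp), cp, readOnly = true))
      case Array(hp, cp, "rw") => Some(Volume(Some(hp), cp, readOnly = false))
      case _                   => None // malformed entry: skip it
    }
  }

val vols = parseVolumesSpec("/var/data:/data:ro, /scratch")
```

A suite in this style would then assert that each parsed entry lands on the expected DockerInfo field.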
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-67060666
@hellertime, do you think you can address the style and also add a test?
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-67060975
Just pushed a style fix. It addresses the two points @ash211 pointed out.
I'll look into designing a test for this. I'm thinking I'll test that the
protobuf has
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-66905021
@tnachen I don't have permissions to have Jenkins test this PR but
@pwendell does.
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-66905391
I took a quick look, and some of the style seemed a little off from the
rest of Spark. I'm guessing the first Jenkins run will flag a few style errors
so it could be
Github user preillyme commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64221042
What's the current status of this pull request? Is anybody currently
testing this approach?
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64221838
Currently this patch is known to work on Mesos 0.20.1 -- I have been using
it there for some time.
I'm presently standing up a 0.21.0 cluster, and once that
Github user preillyme commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64224210
@hellertime you should just rebase against master and the merge conflicts
should go away. Also thanks for the update.
Github user rdhyee commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64227322
I'm a Spark/Mesos newbie who is attracted to using (Py)Spark on Mesos as a
way of scaling up my Python-centered computation (using primarily typical
scientific Python
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64228513
The slaves will need to pull the image you specify from Docker Hub (or
you can pre-pull using the command-line client on each node).
If your image is in the
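The pre-pull suggested above is a single command per node; the image name here is only a placeholder:

```shell
# Pre-pull the executor image on each slave so the first task does not
# block on the download (image name is illustrative).
docker pull example/spark-mesos-executor:latest
```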
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64228713
I have a meta-question about Git. I rebased my branch on master, and now I
can't push to my remote branch since it is no longer a fast-forward merge.
Will
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64229280
Just force push; it will delete any comments on your existing commits, but
that's the only way.
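The rebase-then-force-push flow discussed here can be sketched end to end. This is a self-contained demo against throwaway local repositories (paths, branch names, and the amend standing in for a rebase are all illustrative, not the exact commands used on the PR branch); `--force-with-lease` is the safer variant of `--force` because it refuses to clobber remote commits you have not seen:

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git config user.email you@example.com
git config user.name you
echo base > file.txt && git add file.txt && git commit -qm base
git push -q origin HEAD
git checkout -qb feature
echo change >> file.txt && git commit -qam feature-work
git push -q origin feature
# Rewrite history (an amend here stands in for a rebase)...
git commit -q --amend -m feature-rebased
# ...after which a plain push is rejected as non-fast-forward;
# force-with-lease updates the remote branch safely.
git push -q --force-with-lease origin feature
```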
Github user rdhyee commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64231523
Thanks, @hellertime for [answering my
questions](https://github.com/apache/spark/pull/3074#issuecomment-64228513).
I'll give this PR a try this week.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-62652752
@pwendell can you help take a look?
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r20114359
--- Diff: docs/configuration.md ---
@@ -224,6 +224,43 @@ Apart from these, the following properties are also
available, and may be useful
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r20114363
--- Diff: docs/configuration.md ---
@@ -224,6 +224,43 @@ Apart from these, the following properties are also
available, and may be useful
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-62458689
Besides the comments and the PR title (not just fine-grained anymore),
everything else LGTM!
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r19873195
--- Diff: pom.xml ---
@@ -115,7 +115,7 @@
<scala.version>2.10.4</scala.version>
<scala.binary.version>2.10</scala.binary.version>
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61634293
Ok. I've gone and added support for coarse mode. It looks to be
functioning: I can issue jobs on my Mesos cluster, and get results both with
`spark.mesos.coarse` set
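For reference, exercising the coarse-mode path might look like the following `spark-defaults.conf` fragment. `spark.mesos.coarse` is a pre-existing Spark property; the Docker image property is the one this patch documents, and the image name is a placeholder:

```properties
spark.mesos.coarse                 true
spark.mesos.executor.docker.image  example/spark-mesos:latest
```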
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r19822340
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r19832776
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to
GitHub user hellertime opened a pull request:
https://github.com/apache/spark/pull/3074
Support for mesos DockerInfo in fine-grained mode.
This patch adds partial support for running Spark on Mesos inside of a
Docker container. Only fine-grained mode is presently supported, and
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61501664
Can one of the admins verify this patch?
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61502880
This patch is in reference to
https://issues.apache.org/jira/browse/SPARK-2691
Github user timothysc commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r19762022
--- Diff: pom.xml ---
@@ -115,7 +115,7 @@
<scala.version>2.10.4</scala.version>
<scala.binary.version>2.10</scala.binary.version>
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61575462
Very cool! I don't know what the Spark code style is, so I can't really give
any code-wise comments.
Have you tried this out already?
And also there should be
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61581145
This code has been in use for a while now. I'm currently building a
workflow reliant on Spark's ability to spin up its tasks inside Docker.
Not sure
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61581607
@hellertime yes, I was referring to the coarse mode. I think it should be
straightforward enough to try to get coarse mode included in this patch (as I
learned getting a
Github user timothysc commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61582163
+1 try to be as comprehensive as possible.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61582366
Ok. I'll look into expanding this to coarse mode. It shouldn't be too bad.
Just need to work backwards on the protobuf objects.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61584495
Question on approach here. There is a lot of code duplication between the
fine-grained and coarse-grained schedulers (as noted in comments in the coarse
scheduler code). I
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61588956
Let's move to consolidate what makes sense. I refactored the code a bit
earlier but want to continue doing so moving forward, especially as I'd like
to introduce a lot