Github user hellertime commented on the issue:
https://github.com/apache/spark/pull/12933
You saw the error with `./dev/run-tests`? Ok I'll figure this out.
Github user hellertime commented on the issue:
https://github.com/apache/spark/pull/12933
@tnachen I'm not sure what to do about this unit test failure. Running
`./dev/run-tests` on my system does not produce the error, and trying to just
run the mesos suite with `./build/mvn
Github user hellertime commented on the issue:
https://github.com/apache/spark/pull/12933
Fixed scala style error
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user hellertime commented on the issue:
https://github.com/apache/spark/pull/12933
@tnachen rebased with master
Github user hellertime commented on the issue:
https://github.com/apache/spark/pull/12933
Hi! Was busy with the day job. I didn't mean to let this slip! Absolutely
will rebase and retest. Thanks.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217445754
Rebasing against master.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217423623
Alright, I've modified the code to convert from a property which enumerated
the accepted roles, to one which will simply ignore the default role when it is
set
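A minimal sketch of the boolean approach described here, under the assumption that the flag means "skip resources offered under the default `*` role" (the flag and function names below are hypothetical, not the actual Spark property):

```shell
# Hypothetical: when the flag is on, resources under the default "*" role
# are skipped; resources under any named role are still considered.
IGNORE_DEFAULT_ROLE=true

consider_resource() {
  if [ "$IGNORE_DEFAULT_ROLE" = "true" ] && [ "$1" = "*" ]; then
    echo "skip $1"
  else
    echo "use $1"
  fi
}

consider_resource "*"
consider_resource "spark-prod"
```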
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217343655
I'm updating the title of the PR to reflect the change in approach. I think
the boolean property will be sufficient.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217341428
True. So really the desired behavior is always just to ignore `*`
resources. A boolean property would suffice here. What about
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217339163
I see that as a potential solution; however, at some future point Mesos will
support frameworks which can hold multiple roles (MESOS-1763), so perhaps
leaving
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217240387
*MiMa* test died here:
```
/home/jenkins/workspace/SparkPullRequestBuilder@2/dev/mima: line 37: 18083
Aborted (core dumped) java
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217213224
@tnachen exactly, I want to keep Spark from grabbing * roles, as in my use
case I have a particular spark cluster that I want to isolate from other
clusters
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/12933#issuecomment-217156017
@tnachen I'd like to request you review this change, since you are the
author of the original mesos role code.
GitHub user hellertime opened a pull request:
https://github.com/apache/spark/pull/12933
[Spark-15155][Mesos] Selectively accept Mesos resources by role
Add new property `spark.mesos.acceptedResourceRoles`. When set, Spark will
only accept resources with roles that match. When
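A rough sketch of the enumerated-roles idea from the description above; the role names are made up for illustration, and this shows only the filtering logic, not the actual Spark/Mesos scheduler code:

```shell
# Hypothetical sketch: only resources whose role appears in the accepted
# list are used; everything else (including the default "*" role) is ignored.
ACCEPTED_ROLES="spark-prod spark-batch"

accept_resource() {
  for r in $ACCEPTED_ROLES; do
    if [ "$1" = "$r" ]; then
      echo "accept $1"
      return
    fi
  done
  echo "ignore $1"
}

accept_resource "spark-prod"
accept_resource "*"
```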
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/7536#discussion_r36336003
--- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
@@ -295,7 +295,15 @@ private class PartitionCoalescer(maxPartitions: Int,
prev
Github user hellertime closed the pull request at:
https://github.com/apache/spark/pull/3881
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-113860838
@andrewor14 I was under the impression all the shell scripts were getting
refactored and that this patch had become obsolete. I agree it's best to close
this out
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-97288972
@andrewor14 hmm. I'll have a look. That Jenkins output is none too helpful
:)
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-97299396
Ok got it. I had pulled out the construction of an ExecutorInfo to shorten
line lengths, and that caused the type inference to decide that I wanted the
mesos
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r29096430
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -0,0 +1,117 @@
+/*
+ * Licensed
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-96114471
@doctapp actually in that example Dockerfile, the implication was that the
container had been run with a flag such as `-v
/usr/local/lib:/host/usr/local/lib:ro`, so
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-96135280
@andrewor14 all good suggestions. I've captured them all in this round of
commits. I'm still not sold on the naming of the Util object. So I've left it
for now
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-96135291
Jenkins. Make it so! Oh right...
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r28239714
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -0,0 +1,117 @@
+/*
+ * Licensed
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-87014181
@tnachen indeed, ready to go.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-85229862
@tnachen I'm stumped at the moment. I've gone so far as to exclude the
explicit docker/spark-mesos/Dockerfile path, but it is still not excluded. I
had put this down
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-85233392
@tnachen stop making things sound so damn easy! ;)
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-85235541
Jenkins make it so! Oh wait, I don't have permission to do that.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-81964794
@pwendell @mateiz I'd like to lobby towards getting this merged into the
next release. I've been making use of this branch in my Mesos cluster, and I
know
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-81967379
Is there a link to documentation on the syntax of apache rat exclude
expressions? I didn't see any docs on the website, other than a very light
overview and --help
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-76720424
Odd. I explicitly have a line in .rat-excludes for the path which caused
the error. What gives?
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-75784638
@nchammas wow, happy to make those edits. As for the long options parsing
order, the approach currently used by the script (and the approach that I co-opted
coopted
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-75695177
Ok. The JIRA ticket has been filed, and I've noted it in the title of this PR.
Happy to add the additional comment.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-74944182
@tnachen That Dockerfile you have is actually all that is needed for an
example image; that it's based on the mesosphere image is even better!
I had hoped
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-74937819
So perhaps putting an example Dockerfile in the `docker` subdirectory is
not an appropriate thing to do... any suggestions on a better location for
examples
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-74700554
Missed that EasyMock is no longer the mocking kit. Gotta fix up my tests.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-73810490
I'm on a short vacation this week so I'll not be making changes until I
return. That dropped version is just a merge error I didn't catch. I'm planning
to address
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-73107870
Still working to integrate the new docker examples, but I've fixed up the
code
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r23931789
--- Diff: conf/docker.properties.template ---
@@ -0,0 +1,3 @@
+spark.executor.docker.image: amplab/spark-1.1.0
--- End diff --
I think
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-72477463
BTW. This is a meta question, but how do you run a single spark test suite?
I've tried both the maven-surefire method of
`-Dtest=MesosSchedulerBackendSuite
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-71454692
@andrewor14 your suggestions all look quite reasonable, I'll have a closer
look at them tonight and make appropriate changes.
@mateiz adding a Dockerfile
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-69223979
Ok. I've excluded the example properties file, and updated the title.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-69233810
I've rebased to master, could someone retest this?
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68808789
I've got it nearly complete; I just hadn't quite finished it. I'll work on it
tonight, that should be all the time I need.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68803247
Embarrassingly, through a combination of the holidays and other obligations
I've yet to submit my test for this. I've got it mostly complete, but right now
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68826850
Alright. I've added a test to the `MesosSchedulerBackendSuite` which checks
that the spark conf properties correctly populate the DockerInfo fields.
GitHub user hellertime opened a pull request:
https://github.com/apache/spark/pull/3881
Allow spark-daemon.sh to support foreground operation
Add `--foreground` option to spark-daemon.sh to prevent the process from
daemonizing itself. Useful if running under a watchdog which waits
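A minimal sketch of the `--foreground` idea: with the flag, the launched command would replace the script (via `exec`) so a watchdog such as runit or supervisord can monitor it directly; without it, the command is detached into the background. This is illustrative logic only, not the actual spark-daemon.sh code:

```shell
# Hypothetical launcher: FOREGROUND controls whether the command stays
# attached to the script's process or is daemonized into the background.
start_cmd() {
  if [ "$FOREGROUND" = "true" ]; then
    echo "foreground: exec $*"    # real script would: exec "$@"
  else
    echo "daemonize: nohup $* &"  # real script would: nohup "$@" > "$log" 2>&1 &
  fi
}

FOREGROUND=true
start_cmd org.apache.spark.deploy.worker.Worker
FOREGROUND=false
start_cmd org.apache.spark.deploy.worker.Worker
```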
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-67060975
Just pushed a style fix. It addresses the two points @ash211 pointed out.
I'll look into designing a test for this. I'm thinking I'll test that the
protobuf has
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64221838
Currently this patch is known to work on Mesos 0.20.1 -- I have been using
it there for some time.
I'm presently standing up a 0.21.0 cluster, and once
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64228513
The slaves will need to pull the image you specify from Docker Hub (or
you can pre-pull using the command-line client on each node).
If your image
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-64228713
I have a meta-question about Git. I rebased my branch on master, and now I
can't push to my remote branch since it is no longer a fast-forward merge
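The situation described here can be reproduced in a throwaway local repository: after history is rewritten (as a rebase does), a plain push is rejected and a force push is needed. This is a generic git sketch, not specific to the Spark repo; `--force-with-lease` is shown as the safer variant of `--force`:

```shell
# Demonstrate non-fast-forward rejection and force push, fully locally.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null
cd work || exit 1
git config user.email dev@example.com
git config user.name "Dev"
git checkout -qb feature 2>/dev/null
echo one > file.txt
git add file.txt
git commit -qm "original commit"
git push -q origin feature 2>/dev/null

# A rebase rewrites commits; --amend simulates that here.
git commit -q --amend -m "rewritten commit"

# The remote branch is no longer an ancestor, so a plain push is refused.
if git push -q origin feature 2>/dev/null; then
  echo "plain push: accepted"
else
  echo "plain push: rejected (non-fast-forward)"
fi

# --force-with-lease overwrites the remote ref, but only if it still points
# where our last push left it, so we cannot clobber someone else's work.
git push -q --force-with-lease origin feature 2>/dev/null \
  && echo "force push: accepted"
```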
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r19873195
--- Diff: pom.xml ---
@@ -115,7 +115,7 @@
<scala.version>2.10.4</scala.version>
<scala.binary.version>2.10</scala.binary.version>
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61634293
Ok. I've gone and added support for coarse mode. It looks to be
functioning, I can issue jobs on my mesos cluster, and get results both with
`spark.mesos.coarse` set
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r19832776
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -0,0 +1,60 @@
+/*
+ * Licensed
GitHub user hellertime opened a pull request:
https://github.com/apache/spark/pull/3074
Support for mesos DockerInfo in fine-grained mode.
This patch adds partial support for running spark on mesos inside of a
docker container. Only fine-grained mode is presently supported
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61502880
This patch is in reference to
https://issues.apache.org/jira/browse/SPARK-2691
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61581145
This code has been in use for a while now. I'm currently building a
workflow reliant on the ability for spark to spin up its tasks inside docker.
Not sure
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61582366
Ok. I'll look into expanding this to coarse mode. It shouldn't be too bad.
Just need to work backwards on the protobuf objects.
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-61584495
Question on approach here. There is a lot of code duplication between the
fine and coarse scheduler (as noted in comments in the coarse scheduler code).
I