[GitHub] spark issue #12933: [Spark-15155][Mesos] Optionally ignore default role reso...

2016-10-13 Thread hellertime
Github user hellertime commented on the issue:

https://github.com/apache/spark/pull/12933
  
You saw the error with `./dev/run-tests`? Ok I'll figure this out.

Sent from my iPhone

> On Oct 13, 2016, at 12:24 AM, Timothy Chen <notificati...@github.com> 
wrote:
> 
> I just tried running it locally and I'm getting the same error. It seems 
like with your change that test is simply declining the offer.
> 
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub, or mute the thread.
> 



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark issue #12933: [Spark-15155][Mesos] Optionally ignore default role reso...

2016-10-12 Thread hellertime
Github user hellertime commented on the issue:

https://github.com/apache/spark/pull/12933
  
@tnachen I'm not sure what to do about this unit test failure. Running 
`./dev/run-tests` on my system does not produce the error, and trying to run 
just the Mesos suite with `./build/mvn 
-DwildcardSuites=org.apache.spark.scheduler.cluster.mesos test` results in a 
successful build but doesn't appear to actually run the test. Any advice on how 
to test this locally?





[GitHub] spark issue #12933: [Spark-15155][Mesos] Optionally ignore default role reso...

2016-10-11 Thread hellertime
Github user hellertime commented on the issue:

https://github.com/apache/spark/pull/12933
  
Fixed Scala style error





[GitHub] spark issue #12933: [Spark-15155][Mesos] Optionally ignore default role reso...

2016-10-11 Thread hellertime
Github user hellertime commented on the issue:

https://github.com/apache/spark/pull/12933
  
@tnachen rebased with master





[GitHub] spark issue #12933: [Spark-15155][Mesos] Optionally ignore default role reso...

2016-06-22 Thread hellertime
Github user hellertime commented on the issue:

https://github.com/apache/spark/pull/12933
  
Hi! Was busy with the day job. I didn't mean to let this slip! Absolutely 
will rebase and retest. Thanks.

Sent from my iPhone

> On Jun 21, 2016, at 11:16 PM, Timothy Chen <notificati...@github.com> 
wrote:
> 
> @hellertime ping
> 
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub, or mute the thread.
> 






[GitHub] spark pull request: [Spark-15155][Mesos] Optionally ignore default...

2016-05-06 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/12933#issuecomment-217445754
  
Rebasing against master.





[GitHub] spark pull request: [Spark-15155][Mesos] Optionally ignore default...

2016-05-06 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/12933#issuecomment-217423623
  
Alright, I've modified the code to convert from a property which enumerated 
the accepted roles, to one which will simply ignore the default role when it is 
set.

I also removed one unit test, which was redundant with the test that checks 
we accept all roles when `spark.mesos.role` is set.





[GitHub] spark pull request: [Spark-15155][Mesos] Optionally ignore default...

2016-05-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/12933#issuecomment-217343655
  
I'm updating the title of the PR to reflect the change in approach. I think 
the boolean property will be sufficient.





[GitHub] spark pull request: [Spark-15155][Mesos] Selectively accept Mesos ...

2016-05-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/12933#issuecomment-217341428
  
True. So really the desired behavior is always just to ignore `*` 
resources. A boolean property would suffice here. What about 
`spark.mesos.ignoreDefaultResourceRole`, with the property ignored if 
`spark.mesos.role` is unset?
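A minimal sketch of the semantics proposed above; the helper name and shape are illustrative only, not the code that was eventually merged:

```scala
// Hypothetical acceptance rule for a single resource's role. The function
// name and signature are assumptions for illustration, not Spark's API.
def shouldAcceptResource(
    resourceRole: String,
    frameworkRole: Option[String],
    ignoreDefaultRole: Boolean): Boolean = frameworkRole match {
  // Framework role set and the flag on: refuse "*" (default role) resources.
  case Some(role) if ignoreDefaultRole => resourceRole == role
  // Default behavior: accept the framework role plus the "*" default role.
  case Some(role) => resourceRole == role || resourceRole == "*"
  // The flag is ignored when spark.mesos.role is unset.
  case None => resourceRole == "*"
}
```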





[GitHub] spark pull request: [Spark-15155][Mesos] Selectively accept Mesos ...

2016-05-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/12933#issuecomment-217339163
  
I see that as a potential solution; however, at some future point Mesos will 
support frameworks which can hold multiple roles (MESOS-1763 😉), so perhaps 
leaving it in this form will ease that transition (though we'd then need to 
change the other property to `spark.mesos.roles`, so really it's a wash).







[GitHub] spark pull request: [Spark-15155][Mesos] Selectively accept Mesos ...

2016-05-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/12933#issuecomment-217240387
  
*MiMa* test died here:
```
/home/jenkins/workspace/SparkPullRequestBuilder@2/dev/mima: line 37: 18083 
Aborted (core dumped) java -XX:MaxPermSize=1g -Xmx2g -cp 
"$TOOLS_CLASSPATH:$OLD_DEPS_CLASSPATH" org.apache.spark.tools.GenerateMIMAIgnore
[error] running /home/jenkins/workspace/SparkPullRequestBuilder@2/dev/mima 
-Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive-thriftserver -Phive ; received return 
code 134
Attempting to post to Github...
 > Post successful.
```

I don't think this is due to my change, but I'm really not familiar with 
the *MiMa* test.





[GitHub] spark pull request: [Spark-15155][Mesos] Selectively accept Mesos ...

2016-05-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/12933#issuecomment-217213224
  
@tnachen exactly, I want to keep Spark from grabbing `*` resources, as in my 
use case I have a particular Spark cluster that I want to isolate from other 
clusters.





[GitHub] spark pull request: [Spark-15155][Mesos] Selectively accept Mesos ...

2016-05-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/12933#issuecomment-217156017
  
@tnachen I'd like to request that you review this change, since you are the 
author of the original Mesos role code.





[GitHub] spark pull request: [Spark-15155][Mesos] Selectively accept Mesos ...

2016-05-05 Thread hellertime
GitHub user hellertime opened a pull request:

https://github.com/apache/spark/pull/12933

[Spark-15155][Mesos] Selectively accept Mesos resources by role

Add a new property, `spark.mesos.acceptedResourceRoles`. When set, Spark will 
only accept resources with roles that match. When unset, Spark operates as 
before, accepting resources from `*` and from `spark.mesos.role` if set.

## How was this patch tested?

Additional unit tests added to `MesosSchedulerBackendSuite`, extending the 
original multi-role test suite.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hellertime/spark SPARK-15155

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/12933.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12933


commit ec4a09667354ee5582bde35098499a9e26776a89
Author: Chris Heller <hellert...@gmail.com>
Date:   2016-05-04T11:58:55Z

Limit resources to accepted roles

commit 83bc40d9b21022470e5f4b0473bc033f79aabfb1
Author: Chris Heller <hellert...@gmail.com>
Date:   2016-05-04T12:53:12Z

Move defaultAcceptedRoles logic into Util

commit 66f7822fe7e5a9ad8eeef49d92e436548f46e2a6
Author: Chris Heller <hellert...@gmail.com>
Date:   2016-05-05T11:45:13Z

Pre-filter resources

commit 4d2941ebab68b6380a52cd454ccd49bd4077f81b
Author: Chris Heller <hellert...@gmail.com>
Date:   2016-05-05T11:52:07Z

Convert to scala list to use filter

commit fda6c713cdb66efd1fa1166168dfcc5620230ec4
Author: Chris Heller <hellert...@gmail.com>
Date:   2016-05-05T12:59:36Z

Rework utility function

commit 86516c66777d5dbde377ef37d0d30e207de0fa21
Author: Chris Heller <hellert...@gmail.com>
Date:   2016-05-05T13:00:14Z

Update docs

commit 77c0685b7a30f2a4d5f87794b30a683676571919
Author: Chris Heller <hellert...@gmail.com>
Date:   2016-05-05T13:26:21Z

Remove double restrict

commit 48b472329036e62ef00973f912d54db7fc5d7872
Author: Chris Heller <hellert...@gmail.com>
Date:   2016-05-05T13:26:30Z

Add testsuite







[GitHub] spark pull request: [SPARK-9189][CORE] Takes locality and the sum ...

2015-08-05 Thread hellertime
Github user hellertime commented on a diff in the pull request:

https://github.com/apache/spark/pull/7536#discussion_r36336003
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
@@ -295,7 +295,15 @@ private class PartitionCoalescer(maxPartitions: Int, 
prev: RDD[_], balanceSlack:
 
 val r1 = rnd.nextInt(groupArr.size)
 val r2 = rnd.nextInt(groupArr.size)
-    val minPowerOfTwo = if (groupArr(r1).size < groupArr(r2).size) groupArr(r1) else groupArr(r2)
+    val minPowerOfTwo = if (p.isInstanceOf[HadoopPartition]) {
+      val groupLen1 = groupArr(r1).arr.map(part =>
+        part.asInstanceOf[HadoopPartition].inputSplit.value.getLength).sum
+      val groupLen2 = groupArr(r1).arr.map(part =>
--- End diff --

Shouldn't this be `groupArr(r2)`? Otherwise `groupLen1` will always equal 
`groupLen2`.





[GitHub] spark pull request: [SPARK-5964] Allow spark-daemon.sh to support ...

2015-06-21 Thread hellertime
Github user hellertime closed the pull request at:

https://github.com/apache/spark/pull/3881





[GitHub] spark pull request: [SPARK-5964] Allow spark-daemon.sh to support ...

2015-06-20 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3881#issuecomment-113860838
  
@andrewor14 I was under the impression all the shell scripts were getting 
refactored and that this patch had become obsolete. I agree it's best to close 
this out.

Sent from my iPhone

> On Jun 18, 2015, at 7:59 PM, andrewor14 <notificati...@github.com> wrote:
> 
> @hellertime have you had the chance to address those comments? This patch 
has mostly gone stale at this point and I would recommend that we close it for 
now. If you prefer feel free to open a new updated one.
> 
> —
> Reply to this email directly or view it on GitHub.
> 






[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97288972
  
@andrewor14 hmm. I'll have a look. That Jenkins output is none too helpful 
:)





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97299396
  
OK, got it. I had pulled out the construction of an ExecutorInfo to shorten 
line lengths, and that caused the type inference to decide that I wanted the 
Mesos ExecutorInfo structure and not the Spark ExecutorInfo structure -- even 
though I passed the value into a call site expecting the latter, the Spark 
ExecutorInfo.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-24 Thread hellertime
Github user hellertime commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r29096430
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
--- End diff --

The Volume parsing is not technically a DockerInfo setting, but is part of 
the ContainerInfo instead, so I could argue it is a more general 
SchedulerBackendUtil than the more specific DockerUtil. Perhaps 
MesosContainerUtil?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96114471
  
@doctapp actually in that example Dockerfile, the implication was that the 
container had been run with a flag such as `-v 
/usr/local/lib:/host/usr/local/lib:ro`, so the path as it stands is fine. This 
could be made more clear; in fact I might rewrite this example to use the 
mesosphere docker image as a base.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96135280
  
@andrewor14 all good suggestions. I've captured them all in this round of 
commits. I'm still not sold on the naming of the Util object. So I've left it 
for now.

@tnachen I've bumped the log level to debug for displaying the image name.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96135291
  
Jenkins. Make it so! Oh right...





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-13 Thread hellertime
Github user hellertime commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28239714
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec: Array[String] =>
+      val vol: Volume.Builder = Volume
+        .newBuilder()
+        .setMode(Volume.Mode.RW)
+      spec match {
+        case Array(container_path) =>
+          Some(vol.setContainerPath(container_path))
+        case Array(container_path, "rw") =>
+          Some(vol.setContainerPath(container_path))
+        case Array(container_path, "ro") =>
+          Some(vol.setContainerPath(container_path)
+            .setMode(Volume.Mode.RO))
+        case Array(host_path, container_path) =>
+          Some(vol.setContainerPath(container_path)
+            .setHostPath(host_path))
+        case Array(host_path, container_path, "rw") =>
+          Some(vol.setContainerPath(container_path)
+            .setHostPath(host_path))
+        case Array(host_path, container_path, "ro") =>
+          Some(vol.setContainerPath(container_path)
+            .setHostPath(host_path)
+            .setMode(Volume.Mode.RO))
+        case spec => {
+          logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+          None
+        }
+      }
+    }
+    .flatMap { _.map(_.build) }
+    .toList
+  }
+
+  /**
+   * Parse a portmap spec, similar to the form passed to 'docker run -p';
+   * the form accepted is host_port:container_port[:proto].
+   * Note:
+   * the docker form is [ip:]host_port:container_port, but the DockerInfo
+   * message has no field for 'ip', and instead has a 'protocol' field.
+   * Docker itself only appears to support TCP, so this alternative form
+   * anticipates the expansion of the docker form to allow for a protocol
+   * and leaves open the chance for mesos to begin to accept an 'ip' field
+   */
+  def parsePortMappingsSpec(portmaps: String): List[DockerInfo.PortMapping] = {
+    portmaps.split(",").map(_.split(":")).map { spec: Array[String] =>
+      val portmap: DockerInfo.PortMapping.Builder = DockerInfo.PortMapping
+        .newBuilder()
+        .setProtocol("tcp")
+      spec match {
+        case Array(host_port, container_port) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt))
+        case Array(host_port, container_port, protocol) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt)
+            .setProtocol(protocol))
+        case spec => {
+          logWarning("parsePortMappingSpec: unparseable: " + spec.mkString(":"))
+          None
+        }
+      }
+    }
+    .flatMap { _.map(_.build) }
+    .toList
+  }
+
+  def withDockerInfo(
+  container: ContainerInfo.Builder,
+  image: String,
+  volumes: Option[List[Volume]] = None,
+  network: Option[ContainerInfo.DockerInfo.Network] = None,
+  portmaps: Option[List

[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-27 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-87014181
  
@tnachen indeed, ready to go.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85229862
  
@tnachen I'm stumped at the moment. I've gone so far as to exclude the 
explicit docker/spark-mesos/Dockerfile path, but it is still not excluded. I 
had put this down, so I haven't looked at it in a few days, nor merged in HEAD, 
but no, the .rat-excludes is still stopping me. It's probably a typo that I've 
stared at too long (:





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85233392
  
@tnachen stop making things sound so damn easy! ;)





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85235541
  
Jenkins make it so! Oh wait, I don't have permission to do that.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81964794
  
@pwendell @mateiz I'd like to lobby for getting this merged into the 
next release. I've been making use of this branch in my Mesos cluster, and I 
know that it would benefit from much wider use. Using Docker to deploy Spark on 
Mesos is really convenient! Maintaining a patched version of Spark, not so much 
(:





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81967379
  
Is there a link to documentation on the syntax of Apache RAT exclude 
expressions? I didn't see any docs on the website, other than a very light 
overview and the --help flag output.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-02 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-76720424
  
Odd. I explicitly have a line in .rat-excludes for the path which caused 
the error. What gives?





[GitHub] spark pull request: [SPARK-5964] Allow spark-daemon.sh to support ...

2015-02-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3881#issuecomment-75784638
  
@nchammas wow, happy to make those edits. As for the long options parsing 
order, the approach currently used by the script (and the approach that I 
co-opted) is limited. It would require a rewrite in order to allow for 
parsing the args out of order.

Changes like that might be better in another PR, but it wouldn't be trouble 
to add them here, just a bit overreaching.





[GitHub] spark pull request: [SPARK-5964] Allow spark-daemon.sh to support ...

2015-02-23 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3881#issuecomment-75695177
  
Ok. A JIRA ticket has been filed; I noted the ticket in the title of this PR.

Happy to add the additional comment.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74944182
  
@tnachen That Dockerfile you have is actually all that is needed for an 
example image; that it's based on the mesosphere image is even better!

I had hoped that there could be an actual image on the Docker hub which 
could be referenced from the properties example. Is that image on the Docker 
hub?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74937819
  
So perhaps putting an example Dockerfile in the `docker` subdirectory is 
not an appropriate thing to do... any suggestions on a better location for 
examples such as this? The `examples` directory also would be inappropriate I 
think.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-17 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74700554
  
Missed that EasyMock is no longer the mocking kit. Gotta fix up my tests.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-10 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-73810490
  
I'm on a short vacation this week so I'll not be making changes until I 
return. That dropped version is just a merge error I didn't catch. I'm planning 
to address the Dockerfile too once I return.

Sent from my iPhone

> On Feb 10, 2015, at 11:45 AM, Timothy Chen <notificati...@github.com> wrote:
> 
> @hellertime Are you going to update this patch and also include a Dockerfile?
> 
> —
> Reply to this email directly or view it on GitHub.






[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-73107870
  
Still working to integrate the new Docker examples, but I've fixed up the 
code.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-02 Thread hellertime
Github user hellertime commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r23931789
  
--- Diff: conf/docker.properties.template ---
@@ -0,0 +1,3 @@
+spark.executor.docker.image: amplab/spark-1.1.0
--- End diff --

I think once I fix up the code to show how to build an image from the 
result of `make-distribution.sh`, the example image name can reflect the name of 
the built image. The current name is really just a placeholder; I don't think 
such an image actually exists!





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-02 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-72477463
  
BTW. This is a meta question, but how do you run a single spark test suite? 
I've tried both the maven-surefire method of 
`-Dtest=MesosSchedulerBackendSuite` and the scalatest form of 
`-Dsuites=org.apache.spark.scheduler.mesos.MesosSchedulerBackendSuite`. Neither 
run just that suite.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-01-26 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-71454692
  
@andrewor14 your suggestions all look quite reasonable, I'll have a closer 
look at them tonight and make appropriate changes.

@mateiz adding a Dockerfile will be no trouble. Typically one just 
creates a Docker image with the Spark distribution tarball pre-expanded 
inside. An example Dockerfile could either be set up to refer directly to the 
output of `make-distribution.sh`, or to some other pre-existing Spark Docker 
image. Is there some official Spark Docker image I might reference? Looking at 
the Docker Hub, there doesn't appear to be one.
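For illustration only, a minimal Dockerfile of the kind described above might look like the following. Everything here is hypothetical: the base image tag and tarball name are placeholders, not an official Spark artifact.

```dockerfile
# Hypothetical sketch: start from a base that already carries libmesos.so
# (the mesosphere image mentioned in this thread) and drop in the tarball
# produced by ./make-distribution.sh --tgz. Names below are illustrative.
FROM mesosphere/mesos:0.20.1

# ADD auto-extracts a local tarball, pre-expanding the Spark distribution.
ADD spark-distribution.tgz /opt/
ENV SPARK_HOME /opt/spark
```

The image name baked here is what `spark.executor.docker.image` would then point at.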





[GitHub] spark pull request: Support for Mesos DockerInfo

2015-01-08 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-69223979
  
Ok. I've excluded the example properties file and updated the title.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-01-08 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-69233810
  
I've rebased to master, could someone retest this?





[GitHub] spark pull request: Support for Mesos DockerInfo

2015-01-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-68808789
  
I've got it nearly complete, just hadn't quite finished it. I'll work on it 
tonight; that should be all the time I need.

Sent from my iPhone

> On Jan 5, 2015, at 7:04 PM, Timothy Chen <notificati...@github.com> wrote:
> 
> @hellertime if you don't have time I can write up a test in a new PR. I'll make sure you're credited for the fix.
> 
> —
> Reply to this email directly or view it on GitHub.





[GitHub] spark pull request: Support for Mesos DockerInfo

2015-01-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-68803247
  
Embarrassingly, through a combination of the holidays and other obligations 
I've yet to submit my test for this. I've got it mostly complete, but right now 
there are no additions to the test suite in the patch, just the syntax changes 
requested.





[GitHub] spark pull request: Support for Mesos DockerInfo

2015-01-05 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-68826850
  
Alright. I've added a test to the `MesosSchedulerBackendSuite` which checks 
that the spark conf properties correctly populate the DockerInfo fields. 





[GitHub] spark pull request: Allow spark-daemon.sh to support foreground op...

2015-01-02 Thread hellertime
GitHub user hellertime opened a pull request:

https://github.com/apache/spark/pull/3881

Allow spark-daemon.sh to support foreground operation

Add `--foreground` option to spark-daemon.sh to prevent the process from 
daemonizing itself. Useful if running under a watchdog which waits on its child 
process.
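The daemonize-vs-foreground split can be sketched as a tiny shell pattern. The names `SPARK_FOREGROUND`, `run_command`, and `PID_FILE` below are illustrative, not spark-daemon.sh's actual variables:

```shell
# Illustrative sketch of a --foreground launch path; names are made up,
# not taken from spark-daemon.sh.
run_command() {
  if [ "$SPARK_FOREGROUND" = "true" ]; then
    # Stay attached so a watchdog (runit, monit, a container entrypoint)
    # can wait on the child process directly.
    "$@"
  else
    # Classic daemonize path: detach with nohup and record the pid.
    nohup "$@" >/dev/null 2>&1 &
    echo $! > "${PID_FILE:-/tmp/spark-daemon.pid}"
  fi
}

SPARK_FOREGROUND=true
run_command echo "spark class would run here"
```

Under a supervisor, the foreground branch lets the supervisor's own PID tracking work without consulting a pid file.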

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hellertime/spark 
feature/no-daemon-spark-daemon

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/3881.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3881


commit 358400fb4f87f2e6de791a116bfd64c5a31f9d39
Author: Chris Heller <hellert...@gmail.com>
Date:   2014-12-29T19:28:53Z

Allow spark-daemon.sh to support foreground operation







[GitHub] spark pull request: Support for Mesos DockerInfo

2014-12-15 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-67060975
  
Just pushed a style fix. It addresses the two points @ash211 pointed out.
I'll look into designing a test for this. I'm thinking I'll test that the 
protobuf has the expected values in its fields after a call to `withDockerInfo`.





[GitHub] spark pull request: Support for Mesos DockerInfo

2014-11-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-64221838
  
Currently this patch is known to work on Mesos 0.20.1 -- I have been using 
it there for some time.

I'm presently standing up a 0.21.0 cluster, and once that happens I'll be 
making use of this patch against the new version -- but since the updated 
fields have not been added yet I don't expect it to have any issues.

I noticed that the PR indicates merge conflicts... but I don't see how to 
view the actual conflicts. Is that something I can look at to resolve?





[GitHub] spark pull request: Support for Mesos DockerInfo

2014-11-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-64228513
  
The slaves will need to pull the image you specify from Docker Hub (or 
you can pre-pull using the command-line client on each node).

If your image is in the main Docker Hub, then that is what all slaves will 
use.

If you have a base Dockerfile which includes Spark and the appropriate 
libmesos.so, and you add your Python dependencies to that, you should be good 
to go.

You can even test by running a standalone Spark inside the image to make 
sure the paths are OK.





[GitHub] spark pull request: Support for Mesos DockerInfo

2014-11-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-64228713
  
I have a meta-question about Git. I rebased my branch on master, and now I 
can't push to my remote branch since it is no longer a fast-forward merge.

Will forcing the push cause this pull-request to fail, or is that just how 
one is supposed to update a pull-request which needs rebasing?





[GitHub] spark pull request: Support for mesos DockerInfo in fine-grained m...

2014-11-05 Thread hellertime
Github user hellertime commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r19873195
  
--- Diff: pom.xml ---
@@ -115,7 +115,7 @@
 <scala.version>2.10.4</scala.version>
 <scala.binary.version>2.10</scala.binary.version>
 <scala.macros.version>2.0.1</scala.macros.version>
-<mesos.version>0.18.1</mesos.version>
+<mesos.version>0.20.1</mesos.version>
--- End diff --

mesos-0.21 doesn't appear on Maven Central yet, not even an -rc release, 
so I'll add support for what is available in 0.20.1 for now, but leave it 
flexible enough to expand as new fields arrive.





[GitHub] spark pull request: Support for mesos DockerInfo in fine-grained m...

2014-11-04 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-61634293
  
Ok. I've gone and added support for coarse mode. It looks to be 
functioning: I can issue jobs on my Mesos cluster and get results both with 
`spark.mesos.coarse` set and unset.





[GitHub] spark pull request: Support for mesos DockerInfo in fine-grained m...

2014-11-04 Thread hellertime
Github user hellertime commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r19832776
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.Builder
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil {
+  def withDockerInfo(container: ContainerInfo.Builder, image: String, volumes: Option[String]) = {
+    container.setType(ContainerInfo.Type.DOCKER)
+    container.setDocker(ContainerInfo.DockerInfo.newBuilder().setImage(image).build())
+
+    def mkVol(p: String, m: Volume.Mode) {
+      container.addVolumesBuilder().setContainerPath(p).setMode(m)
+    }
+
+    def mkMnt(s: String, d: String, m: Volume.Mode) {
+      container.addVolumesBuilder().setContainerPath(d).setHostPath(s).setMode(m)
+    }
+
+    volumes.map {
+      _.split(",").map(_.split(":")).map {
+        _ match {
+          case Array(container_path) =>
+            mkVol(container_path, Volume.Mode.RW)
+          case Array(container_path, "rw") =>
+            mkVol(container_path, Volume.Mode.RW)
+          case Array(container_path, "ro") =>
+            mkVol(container_path, Volume.Mode.RO)
+          case Array(host_path, container_path) =>
+            mkMnt(host_path, container_path, Volume.Mode.RW)
+          case Array(host_path, container_path, "rw") =>
+            mkMnt(host_path, container_path, Volume.Mode.RW)
+          case Array(host_path, container_path, "ro") =>
+            mkMnt(host_path, container_path, Volume.Mode.RO)
+          case _ => ()
--- End diff --

Would logging a message be sufficient? I think if I extend the utility 
object with `Logging` I can emit a `logInfo` call here.
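The match arms in the diff above define a small grammar for the volumes string: a comma-separated list of `[host_path:]container_path[:rw|:ro]` entries. A Python sketch of the same parsing logic, just to make the accepted forms explicit (the real implementation is the Scala above):

```python
def parse_volumes(spec):
    """Parse a volume spec string into (host_path, container_path, mode) tuples.

    Mirrors the Scala match arms: host_path is None for container-only
    volumes, the mode defaults to "rw", and malformed entries are silently
    skipped like the `case _ => ()` fall-through.
    """
    out = []
    for entry in spec.split(","):
        parts = entry.split(":")
        if len(parts) == 1:
            out.append((None, parts[0], "rw"))          # container_path
        elif len(parts) == 2 and parts[1] in ("rw", "ro"):
            out.append((None, parts[0], parts[1]))      # container_path:mode
        elif len(parts) == 2:
            out.append((parts[0], parts[1], "rw"))      # host:container
        elif len(parts) == 3 and parts[2] in ("rw", "ro"):
            out.append((parts[0], parts[1], parts[2]))  # host:container:mode
    return out

print(parse_volumes("/host/logs:/logs:ro,/scratch"))
```

Note the ambiguity the grammar tolerates: a two-element entry whose second field is literally `rw` or `ro` is read as a mode, never as a container path.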





[GitHub] spark pull request: Support for mesos DockerInfo in fine-grained m...

2014-11-03 Thread hellertime
GitHub user hellertime opened a pull request:

https://github.com/apache/spark/pull/3074

Support for mesos DockerInfo in fine-grained mode.

This patch adds partial support for running spark on mesos inside of a 
docker container. Only fine-grained mode is presently supported, and there is 
no checking done to ensure that the version of libmesos is recent enough to 
have a DockerInfo structure in the protobuf (other than pinning a mesos version 
in the pom.xml).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hellertime/spark SPARK-2691

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/3074.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3074


commit 71ad9d536df2f49fde1f017177b07cbdde5eb291
Author: Chris Heller <hellert...@gmail.com>
Date:   2014-11-03T15:55:06Z

Support for mesos DockerInfo in fine-grained mode.







[GitHub] spark pull request: Support for mesos DockerInfo in fine-grained m...

2014-11-03 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-61502880
  
This patch is in reference to 
https://issues.apache.org/jira/browse/SPARK-2691






[GitHub] spark pull request: Support for mesos DockerInfo in fine-grained m...

2014-11-03 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-61581145
  
This code has been in use for a while now. I'm currently building a 
workflow reliant on the ability for spark to spin up its tasks inside docker. 

Not sure what exactly you mean by the other executor? If you are 
referring to coarse vs. fine: this patch only touches the fine mode backend 
scheduler, since that is what we were using.

Coarse mode should be easy to adapt; it would just require refactoring the 
`maybeDockerize` code to not expect an ExecutorInfo, since the coarse mode puts 
the ContainerInfo directly into the task (that was a bit hand-wavy, apologies).







[GitHub] spark pull request: Support for mesos DockerInfo in fine-grained m...

2014-11-03 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-61582366
  
Ok. I'll look into expanding this to coarse mode. It shouldn't be too bad. 
Just need to work backwards on the protobuf objects.





[GitHub] spark pull request: Support for mesos DockerInfo in fine-grained m...

2014-11-03 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-61584495
  
Question on approach here. There is a lot of code duplication between the 
fine and coarse schedulers (as noted in comments in the coarse scheduler code).

I could continue this trend and duplicate the docker info code in both, or 
perhaps pull it out into a ... ?

Being still new to the Spark code, what approach would be more Spark-like?

