[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-05-01 Thread andrewor14
Github user andrewor14 commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-98279651
  
Ok, finally merging into master. This feature will be in the 1.4.0 release. 
Thanks everyone.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-05-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/3074





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97290781
  
Merged build started.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97290769
  
 Merged build triggered.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97290814
  
  [Test build #31223 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/31223/consoleFull) for PR 3074 at commit [`d504af6`](https://github.com/apache/spark/commit/d504af6d250f9bc85df54ace370ecd46af557fbd).





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97305716
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/31223/
Test PASSed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97305710
  
  [Test build #31223 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/31223/consoleFull) for PR 3074 at commit [`d504af6`](https://github.com/apache/spark/commit/d504af6d250f9bc85df54ace370ecd46af557fbd).
 * This patch **passes all tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.
 * This patch **adds the following new dependencies:**
   * `RoaringBitmap-0.4.5.jar`
   * `activation-1.1.jar`
   * `akka-actor_2.10-2.3.4-spark.jar`
   * `akka-remote_2.10-2.3.4-spark.jar`
   * `akka-slf4j_2.10-2.3.4-spark.jar`
   * `aopalliance-1.0.jar`
   * `arpack_combined_all-0.1.jar`
   * `avro-1.7.7.jar`
   * `breeze-macros_2.10-0.11.2.jar`
   * `breeze_2.10-0.11.2.jar`
   * `chill-java-0.5.0.jar`
   * `chill_2.10-0.5.0.jar`
   * `commons-beanutils-1.7.0.jar`
   * `commons-beanutils-core-1.8.0.jar`
   * `commons-cli-1.2.jar`
   * `commons-codec-1.10.jar`
   * `commons-collections-3.2.1.jar`
   * `commons-compress-1.4.1.jar`
   * `commons-configuration-1.6.jar`
   * `commons-digester-1.8.jar`
   * `commons-httpclient-3.1.jar`
   * `commons-io-2.1.jar`
   * `commons-lang-2.5.jar`
   * `commons-lang3-3.3.2.jar`
   * `commons-math-2.1.jar`
   * `commons-math3-3.4.1.jar`
   * `commons-net-2.2.jar`
   * `compress-lzf-1.0.0.jar`
   * `config-1.2.1.jar`
   * `core-1.1.2.jar`
   * `curator-client-2.4.0.jar`
   * `curator-framework-2.4.0.jar`
   * `curator-recipes-2.4.0.jar`
   * `gmbal-api-only-3.0.0-b023.jar`
   * `grizzly-framework-2.1.2.jar`
   * `grizzly-http-2.1.2.jar`
   * `grizzly-http-server-2.1.2.jar`
   * `grizzly-http-servlet-2.1.2.jar`
   * `grizzly-rcm-2.1.2.jar`
   * `groovy-all-2.3.7.jar`
   * `guava-14.0.1.jar`
   * `guice-3.0.jar`
   * `hadoop-annotations-2.2.0.jar`
   * `hadoop-auth-2.2.0.jar`
   * `hadoop-client-2.2.0.jar`
   * `hadoop-common-2.2.0.jar`
   * `hadoop-hdfs-2.2.0.jar`
   * `hadoop-mapreduce-client-app-2.2.0.jar`
   * `hadoop-mapreduce-client-common-2.2.0.jar`
   * `hadoop-mapreduce-client-core-2.2.0.jar`
   * `hadoop-mapreduce-client-jobclient-2.2.0.jar`
   * `hadoop-mapreduce-client-shuffle-2.2.0.jar`
   * `hadoop-yarn-api-2.2.0.jar`
   * `hadoop-yarn-client-2.2.0.jar`
   * `hadoop-yarn-common-2.2.0.jar`
   * `hadoop-yarn-server-common-2.2.0.jar`
   * `ivy-2.4.0.jar`
   * `jackson-annotations-2.4.0.jar`
   * `jackson-core-2.4.4.jar`
   * `jackson-core-asl-1.8.8.jar`
   * `jackson-databind-2.4.4.jar`
   * `jackson-jaxrs-1.8.8.jar`
   * `jackson-mapper-asl-1.8.8.jar`
   * `jackson-module-scala_2.10-2.4.4.jar`
   * `jackson-xc-1.8.8.jar`
   * `jansi-1.4.jar`
   * `javax.inject-1.jar`
   * `javax.servlet-3.0.0.v201112011016.jar`
   * `javax.servlet-3.1.jar`
   * `javax.servlet-api-3.0.1.jar`
   * `jaxb-api-2.2.2.jar`
   * `jaxb-impl-2.2.3-1.jar`
   * `jcl-over-slf4j-1.7.10.jar`
   * `jersey-client-1.9.jar`
   * `jersey-core-1.9.jar`
   * `jersey-grizzly2-1.9.jar`
   * `jersey-guice-1.9.jar`
   * `jersey-json-1.9.jar`
   * `jersey-server-1.9.jar`
   * `jersey-test-framework-core-1.9.jar`
   * `jersey-test-framework-grizzly2-1.9.jar`
   * `jets3t-0.7.1.jar`
   * `jettison-1.1.jar`
   * `jetty-util-6.1.26.jar`
   * `jline-0.9.94.jar`
   * `jline-2.10.4.jar`
   * `jodd-core-3.6.3.jar`
   * `json4s-ast_2.10-3.2.10.jar`
   * `json4s-core_2.10-3.2.10.jar`
   * `json4s-jackson_2.10-3.2.10.jar`
   * `jsr305-1.3.9.jar`
   * `jtransforms-2.4.0.jar`
   * `jul-to-slf4j-1.7.10.jar`
   * `kryo-2.21.jar`
   * `log4j-1.2.17.jar`
   * `lz4-1.2.0.jar`
   * `management-api-3.0.0-b012.jar`
   * `mesos-0.21.1-shaded-protobuf.jar`
   * `metrics-core-3.1.0.jar`
   * `metrics-graphite-3.1.0.jar`
   * `metrics-json-3.1.0.jar`
   * `metrics-jvm-3.1.0.jar`
   * `minlog-1.2.jar`
   * `netty-3.8.0.Final.jar`
   * `netty-all-4.0.23.Final.jar`
   * `objenesis-1.2.jar`
   * `opencsv-2.3.jar`
   * `oro-2.0.8.jar`
   * `paranamer-2.6.jar`
   * `parquet-column-1.6.0rc3.jar`
   * `parquet-common-1.6.0rc3.jar`
   * `parquet-encoding-1.6.0rc3.jar`
   * `parquet-format-2.2.0-rc1.jar`
   * `parquet-generator-1.6.0rc3.jar`
   * `parquet-hadoop-1.6.0rc3.jar`
   * `parquet-jackson-1.6.0rc3.jar`
   * `protobuf-java-2.4.1.jar`
   * `protobuf-java-2.5.0-spark.jar`
   * `py4j-0.8.2.1.jar`
   * `pyrolite-2.0.1.jar`
   * `quasiquotes_2.10-2.0.1.jar`
   * `reflectasm-1.07-shaded.jar`
   * `scala-compiler-2.10.4.jar`
   * `scala-library-2.10.4.jar`
   * 

[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97305714
  
Merged build finished. Test PASSed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97288972
  
@andrewor14 hmm. I'll have a look. That Jenkins output is none too helpful 
:)





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97299396
  
Ok, got it. I had pulled the construction of an ExecutorInfo out into a separate value to shorten line lengths, and that caused type inference to decide that I wanted the Mesos ExecutorInfo structure rather than the Spark ExecutorInfo structure, even though I passed the value into a call site expecting the latter, the Spark ExecutorInfo.
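To make that failure mode concrete, here is a self-contained toy sketch. The type names below are stand-ins for `org.apache.mesos.Protos.ExecutorInfo` and Spark's own `ExecutorInfo`; the objects and helper values are hypothetical illustrations, not code from this PR:

```scala
// Two unrelated types sharing the simple name ExecutorInfo, mirroring
// the Mesos protobuf class and Spark's scheduler-side class.
object mesos { case class ExecutorInfo(image: String) }
object spark { case class ExecutorInfo(host: String, cores: Int) }

object Demo {
  import mesos.ExecutorInfo // the bare name now resolves to the Mesos type

  def register(info: spark.ExecutorInfo): Unit = println(info.host)

  // A value extracted "to shorten line lengths" gets the *imported*
  // (Mesos) type inferred, so passing it to register() fails to compile:
  //   val info = ExecutorInfo("docker-image")
  //   register(info)  // type mismatch: mesos.ExecutorInfo vs spark.ExecutorInfo

  // Annotating the extraction site with the intended type resolves it:
  val ok: spark.ExecutorInfo = spark.ExecutorInfo("host-1", 4)
}
```

Writing the fully qualified (or explicitly annotated) type at the extraction site is what disambiguates the two classes.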





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97197224
  
  [Test build #31157 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/31157/consoleFull) for PR 3074 at commit [`6ed728b`](https://github.com/apache/spark/commit/6ed728b3c6a62272414ff6bb11f56358aec9f83a).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.
 * This patch **removes the following dependencies:**
   * `RoaringBitmap-0.4.5.jar`
   * `activation-1.1.jar`
   * `akka-actor_2.10-2.3.4-spark.jar`
   * `akka-remote_2.10-2.3.4-spark.jar`
   * `akka-slf4j_2.10-2.3.4-spark.jar`
   * `aopalliance-1.0.jar`
   * `arpack_combined_all-0.1.jar`
   * `avro-1.7.7.jar`
   * `breeze-macros_2.10-0.11.2.jar`
   * `breeze_2.10-0.11.2.jar`
   * `chill-java-0.5.0.jar`
   * `chill_2.10-0.5.0.jar`
   * `commons-beanutils-1.7.0.jar`
   * `commons-beanutils-core-1.8.0.jar`
   * `commons-cli-1.2.jar`
   * `commons-codec-1.10.jar`
   * `commons-collections-3.2.1.jar`
   * `commons-compress-1.4.1.jar`
   * `commons-configuration-1.6.jar`
   * `commons-digester-1.8.jar`
   * `commons-httpclient-3.1.jar`
   * `commons-io-2.1.jar`
   * `commons-lang-2.5.jar`
   * `commons-lang3-3.3.2.jar`
   * `commons-math-2.1.jar`
   * `commons-math3-3.4.1.jar`
   * `commons-net-2.2.jar`
   * `compress-lzf-1.0.0.jar`
   * `config-1.2.1.jar`
   * `core-1.1.2.jar`
   * `curator-client-2.4.0.jar`
   * `curator-framework-2.4.0.jar`
   * `curator-recipes-2.4.0.jar`
   * `gmbal-api-only-3.0.0-b023.jar`
   * `grizzly-framework-2.1.2.jar`
   * `grizzly-http-2.1.2.jar`
   * `grizzly-http-server-2.1.2.jar`
   * `grizzly-http-servlet-2.1.2.jar`
   * `grizzly-rcm-2.1.2.jar`
   * `groovy-all-2.3.7.jar`
   * `guava-14.0.1.jar`
   * `guice-3.0.jar`
   * `hadoop-annotations-2.2.0.jar`
   * `hadoop-auth-2.2.0.jar`
   * `hadoop-client-2.2.0.jar`
   * `hadoop-common-2.2.0.jar`
   * `hadoop-hdfs-2.2.0.jar`
   * `hadoop-mapreduce-client-app-2.2.0.jar`
   * `hadoop-mapreduce-client-common-2.2.0.jar`
   * `hadoop-mapreduce-client-core-2.2.0.jar`
   * `hadoop-mapreduce-client-jobclient-2.2.0.jar`
   * `hadoop-mapreduce-client-shuffle-2.2.0.jar`
   * `hadoop-yarn-api-2.2.0.jar`
   * `hadoop-yarn-client-2.2.0.jar`
   * `hadoop-yarn-common-2.2.0.jar`
   * `hadoop-yarn-server-common-2.2.0.jar`
   * `ivy-2.4.0.jar`
   * `jackson-annotations-2.4.0.jar`
   * `jackson-core-2.4.4.jar`
   * `jackson-core-asl-1.8.8.jar`
   * `jackson-databind-2.4.4.jar`
   * `jackson-jaxrs-1.8.8.jar`
   * `jackson-mapper-asl-1.8.8.jar`
   * `jackson-module-scala_2.10-2.4.4.jar`
   * `jackson-xc-1.8.8.jar`
   * `jansi-1.4.jar`
   * `javax.inject-1.jar`
   * `javax.servlet-3.0.0.v201112011016.jar`
   * `javax.servlet-3.1.jar`
   * `javax.servlet-api-3.0.1.jar`
   * `jaxb-api-2.2.2.jar`
   * `jaxb-impl-2.2.3-1.jar`
   * `jcl-over-slf4j-1.7.10.jar`
   * `jersey-client-1.9.jar`
   * `jersey-core-1.9.jar`
   * `jersey-grizzly2-1.9.jar`
   * `jersey-guice-1.9.jar`
   * `jersey-json-1.9.jar`
   * `jersey-server-1.9.jar`
   * `jersey-test-framework-core-1.9.jar`
   * `jersey-test-framework-grizzly2-1.9.jar`
   * `jets3t-0.7.1.jar`
   * `jettison-1.1.jar`
   * `jetty-util-6.1.26.jar`
   * `jline-0.9.94.jar`
   * `jline-2.10.4.jar`
   * `jodd-core-3.6.3.jar`
   * `json4s-ast_2.10-3.2.10.jar`
   * `json4s-core_2.10-3.2.10.jar`
   * `json4s-jackson_2.10-3.2.10.jar`
   * `jsr305-1.3.9.jar`
   * `jtransforms-2.4.0.jar`
   * `jul-to-slf4j-1.7.10.jar`
   * `kryo-2.21.jar`
   * `log4j-1.2.17.jar`
   * `lz4-1.2.0.jar`
   * `management-api-3.0.0-b012.jar`
   * `mesos-0.21.0-shaded-protobuf.jar`
   * `metrics-core-3.1.0.jar`
   * `metrics-graphite-3.1.0.jar`
   * `metrics-json-3.1.0.jar`
   * `metrics-jvm-3.1.0.jar`
   * `minlog-1.2.jar`
   * `netty-3.8.0.Final.jar`
   * `netty-all-4.0.23.Final.jar`
   * `objenesis-1.2.jar`
   * `opencsv-2.3.jar`
   * `oro-2.0.8.jar`
   * `paranamer-2.6.jar`
   * `parquet-column-1.6.0rc3.jar`
   * `parquet-common-1.6.0rc3.jar`
   * `parquet-encoding-1.6.0rc3.jar`
   * `parquet-format-2.2.0-rc1.jar`
   * `parquet-generator-1.6.0rc3.jar`
   * `parquet-hadoop-1.6.0rc3.jar`
   * `parquet-jackson-1.6.0rc3.jar`
   * `protobuf-java-2.4.1.jar`
   * `protobuf-java-2.5.0-spark.jar`
   * `py4j-0.8.2.1.jar`
   * `pyrolite-2.0.1.jar`
   * `quasiquotes_2.10-2.0.1.jar`
   * `reflectasm-1.07-shaded.jar`
   * `scala-compiler-2.10.4.jar`
   * `scala-library-2.10.4.jar`
   * 

[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97197249
  
Merged build finished. Test FAILed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97197254
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/31157/
Test FAILed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread andrewor14
Github user andrewor14 commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97200770
  
@hellertime Tests don't seem to compile.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread andrewor14
Github user andrewor14 commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97174820
  
retest this please





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97175225
  
 Merged build triggered.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97175479
  
  [Test build #31157 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/31157/consoleFull) for PR 3074 at commit [`6ed728b`](https://github.com/apache/spark/commit/6ed728b3c6a62272414ff6bb11f56358aec9f83a).





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97175273
  
Merged build started.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-28 Thread andrewor14
Github user andrewor14 commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-97174923
  
Jenkins, ok to test and retest this please.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-27 Thread doctapp
Github user doctapp commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96791733
  
@hellertime thanks for the info; I didn't catch that this wasn't pre-installed with Mesos.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-27 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96795918
  
  [Test build #721 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/721/consoleFull) for PR 3074 at commit [`064101c`](https://github.com/apache/spark/commit/064101c0096eb44b7d91fa62bafa27756279aca2).





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-27 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96770128
  
Can one of the admins verify this patch?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-24 Thread hellertime
Github user hellertime commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r29096430
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
--- End diff --

The Volume parsing is not technically a DockerInfo setting, but is part of 
the ContainerInfo instead, so I could argue it is a more general 
SchedulerBackendUtil than the more specific DockerUtil. Perhaps 
MesosContainerUtil?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96114471
  
@doctapp actually in that example Dockerfile, the implication was that the container had been run with a flag such as `-v /usr/local/lib:/host/usr/local/lib:ro`, so the path as it stands is fine. This could be made clearer; in fact I might rewrite the example to use the mesosphere docker image as a base.
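For reference, the invocation implied above might look like the following sketch (the image name is an illustrative assumption, not taken from the PR's example Dockerfile):

```sh
# Bind-mount the host's /usr/local/lib read-only at /host/usr/local/lib
# inside the container, matching the path discussed above.
docker run \
  -v /usr/local/lib:/host/usr/local/lib:ro \
  mesosphere/mesos-slave
```

With that mount in place, paths under `/host/usr/local/lib` inside the container resolve to the host's native libraries.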





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96135280
  
@andrewor14 all good suggestions. I've captured them all in this round of 
commits. I'm still not sold on the naming of the Util object. So I've left it 
for now.

@tnachen I've bumped the log level to debug for displaying the image name.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-24 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-96135291
  
Jenkins. Make it so! Oh right...





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-23 Thread tnachen
Github user tnachen commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28991901
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec =>
+      val vol: Volume.Builder = Volume
+        .newBuilder()
+        .setMode(Volume.Mode.RW)
+      spec match {
+        case Array(container_path) =>
+          Some(vol.setContainerPath(container_path))
+        case Array(container_path, "rw") =>
+          Some(vol.setContainerPath(container_path))
+        case Array(container_path, "ro") =>
+          Some(vol.setContainerPath(container_path)
+            .setMode(Volume.Mode.RO))
+        case Array(host_path, container_path) =>
+          Some(vol.setContainerPath(container_path)
+            .setHostPath(host_path))
+        case Array(host_path, container_path, "rw") =>
+          Some(vol.setContainerPath(container_path)
+            .setHostPath(host_path))
+        case Array(host_path, container_path, "ro") =>
+          Some(vol.setContainerPath(container_path)
+            .setHostPath(host_path)
+            .setMode(Volume.Mode.RO))
+        case spec => {
+          logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+          None
+        }
+      }
+    }
+    .flatMap { _.map(_.build) }
+    .toList
+  }
+
+  /**
+   * Parse a portmap spec, similar to the form passed to 'docker run -p'
+   * the form accepted is host_port:container_port[:proto]
+   * Note:
+   * the docker form is [ip:]host_port:container_port, but the DockerInfo
+   * message has no field for 'ip', and instead has a 'protocol' field.
+   * Docker itself only appears to support TCP, so this alternative form
+   * anticipates the expansion of the docker form to allow for a protocol
+   * and leaves open the chance for mesos to begin to accept an 'ip' field
+   */
+  def parsePortMappingsSpec(portmaps: String): 
List[DockerInfo.PortMapping] = {
+portmaps.split(,).map(_.split(:)).map { spec: Array[String] =
+  val portmap: DockerInfo.PortMapping.Builder = DockerInfo.PortMapping
+.newBuilder()
+.setProtocol(tcp)
+  spec match {
+case Array(host_port, container_port) =
+  Some(portmap.setHostPort(host_port.toInt)
+  .setContainerPort(container_port.toInt))
+case Array(host_port, container_port, protocol) =
+  Some(portmap.setHostPort(host_port.toInt)
+  .setContainerPort(container_port.toInt)
+  .setProtocol(protocol))
+case spec = {
+  logWarning(parsePortMappingSpec: unparseable:  + 
spec.mkString(:))
+  None
+}
+  }
+}
+.flatMap { _.map(_.build) }
+.toList
+  }
+
+  def withDockerInfo(
+  container: ContainerInfo.Builder,
+  image: String,
+  volumes: Option[List[Volume]] = None,
+  network: Option[ContainerInfo.DockerInfo.Network] = None,
+  portmaps: 
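The `[host-dir:]container-dir[:rw|ro]` grammar handled by `parseVolumesSpec` above can be exercised with a dependency-free sketch. This is illustrative only: `Vol` and `VolumeSpecSketch` are hypothetical stand-ins for the Mesos `Volume` builder, and only the pattern match mirrors the code in the diff.

```scala
// Dependency-free sketch of the '[host-dir:]container-dir[:rw|ro]' grammar.
// `Vol` stands in for the Mesos Volume builder; the case analysis is the point.
object VolumeSpecSketch {
  final case class Vol(containerPath: String, hostPath: Option[String], mode: String)

  def parse(volumes: String): List[Vol] =
    volumes.split(",").map(_.split(":")).flatMap {
      case Array(c)          => Some(Vol(c, None, "RW"))
      case Array(c, "rw")    => Some(Vol(c, None, "RW"))
      case Array(c, "ro")    => Some(Vol(c, None, "RO"))
      case Array(h, c)       => Some(Vol(c, Some(h), "RW"))
      case Array(h, c, "rw") => Some(Vol(c, Some(h), "RW"))
      case Array(h, c, "ro") => Some(Vol(c, Some(h), "RO"))
      case _                 => None  // unparseable specs are dropped
    }.toList

  def main(args: Array[String]): Unit =
    println(parse("/host/logs:/logs:ro,/scratch"))
}
```

Note that, as in the diff, an entry with only two fields where the second is not `rw`/`ro` is read as `host:container`.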

[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477222
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala ---
@@ -148,13 +148,26 @@ private[spark] class MesosSchedulerBackend(
         Value.Scalar.newBuilder()
           .setValue(MemoryUtils.calculateTotalMemory(sc)).build())
       .build()
-    MesosExecutorInfo.newBuilder()
+    val executorInfo = MesosExecutorInfo.newBuilder()
       .setExecutorId(ExecutorID.newBuilder().setValue(execId).build())
       .setCommand(command)
       .setData(ByteString.copyFrom(createExecArg()))
       .addResources(cpus)
       .addResources(memory)
-      .build()
+
+    sc.conf.getOption("spark.mesos.executor.docker.image").map { image: String =>
--- End diff --

also this should be `foreach` because we don't care about the return value
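The point can be shown with a tiny self-contained illustration (not the Spark code itself, and `configure` is a made-up name): `Option.map` builds a result that is immediately discarded, while `Option.foreach` returns `Unit` and states the side-effecting intent directly.

```scala
// Illustration of `map` vs `foreach` on Option when only the side
// effect matters. `configure` is a hypothetical example function.
object MapVsForeach {
  def configure(image: Option[String]): String = {
    val log = new StringBuilder
    // Works, but the Option returned by `map` is silently discarded,
    // which signals the wrong intent:
    image.map { img => log ++= s"docker image (via map): $img\n" }
    // `foreach` is the idiomatic choice when the return value is unused:
    image.foreach { img => log ++= s"docker image (via foreach): $img" }
    log.toString
  }

  def main(args: Array[String]): Unit =
    println(configure(Some("spark-executor:1.4")))
}
```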


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477417
  
--- Diff: docs/running-on-mesos.md ---
@@ -167,6 +167,14 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offered to it), which
 only makes sense if you run just one application at a time. You can cap the maximum number of cores
 using `conf.set("spark.cores.max", "10")` (for example).
 
+# Mesos Docker Support
+
+Spark can make use of a Mesos Docker containerizer by setting the property `spark.mesos.executor.docker.image`
+in your [SparkConf](configuration.html#spark-properties)
--- End diff --

Can you add "Note that this requires Mesos version X or later" here instead? Then you don't need to duplicate that line in all the relevant configs.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477087
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---
@@ -0,0 +1,117 @@
[... quoted diff trimmed; identical to the listing above ...]

[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-93610107
  
@hellertime Functionality-wise this looks fine to me. The comments I left 
are relatively minor, mostly to do with code style and clarifying comments. It 
appears that a number of watchers have already tested this in their own 
deployments :), so I suppose this patch is already in a working state. Once you 
address the comments we can do a final round of review and hopefully merge it 
in by 1.4.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477157
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---
@@ -0,0 +1,117 @@
[... license header and imports trimmed; identical to the listing above ...]
+private[spark] object MesosSchedulerBackendUtil extends Logging {
--- End diff --

also this can be `private[mesos]`





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477204
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala ---
@@ -148,13 +148,26 @@ private[spark] class MesosSchedulerBackend(
         Value.Scalar.newBuilder()
           .setValue(MemoryUtils.calculateTotalMemory(sc)).build())
       .build()
-    MesosExecutorInfo.newBuilder()
+    val executorInfo = MesosExecutorInfo.newBuilder()
       .setExecutorId(ExecutorID.newBuilder().setValue(execId).build())
       .setCommand(command)
       .setData(ByteString.copyFrom(createExecArg()))
       .addResources(cpus)
       .addResources(memory)
-      .build()
+
+    sc.conf.getOption("spark.mesos.executor.docker.image").map { image: String =>
+      val container = executorInfo.getContainerBuilder()
+      val volumes = sc.conf
+        .getOption("spark.mesos.executor.docker.volumes")
+        .map(MesosSchedulerBackendUtil.parseVolumesSpec)
+      val portmaps = sc.conf
+        .getOption("spark.mesos.executor.docker.portmaps")
+        .map(MesosSchedulerBackendUtil.parsePortMappingsSpec)
+      MesosSchedulerBackendUtil.withDockerInfo(
+        container, image, volumes = volumes, portmaps = portmaps)
+    }
--- End diff --

This chunk of code is duplicated. You should factor this into a method in 
your utils class, and maybe call it `setupDockerContainer(conf: SparkConf)`
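The suggested refactor can be sketched in a self-contained way. Everything here is a hypothetical stand-in: `Conf` mimics only the `SparkConf.getOption` lookups the helper needs, and `Container` replaces the Mesos builder types, so the sketch shows the shape of the shared entry point rather than the real API.

```scala
// Hedged sketch of factoring the duplicated docker wiring into one helper.
// `Conf` and `Container` are illustrative stand-ins, not Spark/Mesos types.
object DockerSetupSketch {
  final case class Conf(entries: Map[String, String]) {
    def getOption(key: String): Option[String] = entries.get(key)
  }
  final case class Container(image: String, volumes: List[String], portmaps: List[String])

  // Both fine-grained and coarse-grained backends would call this once
  // instead of repeating the image/volumes/portmaps plumbing inline.
  def setupDockerContainer(conf: Conf): Option[Container] =
    conf.getOption("spark.mesos.executor.docker.image").map { image =>
      val volumes = conf.getOption("spark.mesos.executor.docker.volumes")
        .map(_.split(",").toList).getOrElse(Nil)
      val portmaps = conf.getOption("spark.mesos.executor.docker.portmaps")
        .map(_.split(",").toList).getOrElse(Nil)
      Container(image, volumes, portmaps)
    }

  def main(args: Array[String]): Unit =
    println(setupDockerContainer(Conf(Map(
      "spark.mesos.executor.docker.image" -> "spark-mesos:1.4",
      "spark.mesos.executor.docker.volumes" -> "/tmp:/data:ro"))))
}
```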





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28475865
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala ---
@@ -148,13 +148,26 @@ private[spark] class MesosSchedulerBackend(
         Value.Scalar.newBuilder()
           .setValue(MemoryUtils.calculateTotalMemory(sc)).build())
       .build()
-    MesosExecutorInfo.newBuilder()
+    val executorInfo = MesosExecutorInfo.newBuilder()
       .setExecutorId(ExecutorID.newBuilder().setValue(execId).build())
       .setCommand(command)
       .setData(ByteString.copyFrom(createExecArg()))
       .addResources(cpus)
       .addResources(memory)
-      .build()
+
+    sc.conf.getOption("spark.mesos.executor.docker.image").map { image: String =>
--- End diff --

nit: you can leave out the `String` here





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476625
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---
@@ -0,0 +1,117 @@
[... quoted diff trimmed; identical to the listing above, ending with ...]
+    .flatMap { _.map(_.build) }
--- End diff --

I would just do `flatMap` up there in L35, and then do `.map { _.build() }` 
here
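The suggested reordering can be shown with a self-contained sketch (names are illustrative stand-ins for the Mesos builders): `flatMap` away the unparseable `Option` results first, so the trailing step is a plain `map` that calls `build()` on values known to be present.

```scala
// Sketch of "flatMap first, then map(_.build())". `Builder` is a
// hypothetical stand-in for the Mesos Volume builder.
object FlatMapFirstSketch {
  final class Builder(val path: String) { def build(): String = s"volume:$path" }

  def parse(spec: String): Option[Builder] =
    spec.split(":") match {
      case Array(c)    => Some(new Builder(c))
      case Array(h, c) => Some(new Builder(c))
      case _           => None
    }

  def parseAll(specs: String): List[String] =
    specs.split(",").toList
      .flatMap(parse)   // drop the None results here ...
      .map(_.build())   // ... so build() maps over definite builders
}
```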





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28475951
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---
@@ -0,0 +1,117 @@
[... quoted diff trimmed; identical to the listing above ...]

[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476388
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---
@@ -0,0 +1,117 @@
[... quoted diff trimmed; identical to the listing above, ending with ...]
+    .flatMap { _.map(_.build) }
--- End diff --

need parentheses `()` for build, it's not a getter



[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476422
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---
@@ -0,0 +1,117 @@
[... license header and imports trimmed; identical to the listing above ...]
+private[spark] object MesosSchedulerBackendUtil extends Logging {
--- End diff --

This is more like `MesosDockerUtil` right? The methods here seem pretty 
specific to docker





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476440
  
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---
@@ -0,0 +1,117 @@
[... quoted diff trimmed; identical to the listing above, ending with ...]
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
--- End diff --

style: can you unindent everything in this block by 2 spaces?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476893
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
+        }
+      }
+      .flatMap { _.map(_.build) }
+      .toList
+  }
+
+  /**
+   * Parse a portmap spec, similar to the form passed to 'docker run -p'
+   * the form accepted is host_port:container_port[:proto]
+   * Note:
+   * the docker form is [ip:]host_port:container_port, but the DockerInfo
+   * message has no field for 'ip', and instead has a 'protocol' field.
+   * Docker itself only appears to support TCP, so this alternative form
+   * anticipates the expansion of the docker form to allow for a protocol
+   * and leaves open the chance for mesos to begin to accept an 'ip' field
+   */
+  def parsePortMappingsSpec(portmaps: String): List[DockerInfo.PortMapping] = {
+    portmaps.split(",").map(_.split(":")).map { spec: Array[String] =>
+      val portmap: DockerInfo.PortMapping.Builder = DockerInfo.PortMapping
+        .newBuilder()
+        .setProtocol("tcp")
+      spec match {
+        case Array(host_port, container_port) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt))
+        case Array(host_port, container_port, protocol) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt)
+            .setProtocol(protocol))
+        case spec => {
+          logWarning("parsePortMappingSpec: unparseable: " + spec.mkString(":"))
--- End diff --

same here, can you rephrase this similar to my suggestion above?
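For illustration, the port-mapping grammar under discussion can be sketched as a standalone parser that skips the Mesos protobuf builders. This is only a hedged mirror of the accepted `host_port:container_port[:tcp|udp]` form; the object and method names here are hypothetical and not part of the patch.

```scala
// Illustrative sketch only: reduces each port-mapping spec to a
// (hostPort, containerPort, protocol) tuple, defaulting protocol to "tcp"
// exactly as the quoted code does via setProtocol("tcp").
object PortMappingSpecExample {
  def parse(spec: String): Option[(Int, Int, String)] =
    spec.split(":") match {
      case Array(hostPort, containerPort) =>
        Some((hostPort.toInt, containerPort.toInt, "tcp")) // protocol defaults to tcp
      case Array(hostPort, containerPort, proto) =>
        Some((hostPort.toInt, containerPort.toInt, proto))
      case _ =>
        None // unparseable spec, analogous to the logWarning branch
    }
}
```

For example, `PortMappingSpecExample.parse("8080:80")` yields `Some((8080, 80, "tcp"))`, while a spec with the wrong number of fields yields `None`.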



[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477321
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
+        }
+      }
+      .flatMap { _.map(_.build) }
+      .toList
+  }
+
+  /**
+   * Parse a portmap spec, similar to the form passed to 'docker run -p'
+   * the form accepted is host_port:container_port[:proto]
--- End diff --

Let's be more specific and replace `proto` with `tcp|udp`, as you did in 
the documentation





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477280
  
--- Diff: docker/spark-mesos/Dockerfile ---
@@ -0,0 +1,37 @@
+# This is an example Dockerfile for creating a Spark image which can be
+# referenced by the Spark property 'spark.mesos.executor.docker.image'
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+FROM ubuntu
+
+# Update the base ubuntu image with dependencies needed for Spark and Mesos
+RUN apt-get update && \
+    apt-get install -y python libnss3 openjdk-7-jre-headless
+
+# A Spark distribution tarball is needed, one can be built from the root
+# of the Spark source repository with `make-distribution.sh --tgz`
+ADD spark-1.3.0.tar.gz /opt
--- End diff --

the version this patch will be merged in is likely 1.4.0. Do we have to 
update this file every version? Is this more like a template? If so we should 
give it the right extension.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476817
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
--- End diff --

from the code it seems that `container-dir` is required, so shouldn't this 
be:
`[host-dir:]container-path[:rw|ro]`. It would be ideal if you could provide 
an example of what this string looks like here.
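Following on the reviewer's request for an example of what the volume string looks like, here is a hedged sketch of the `[host-dir:]container-path[:rw|ro]` grammar reduced to plain tuples. The object and method names are hypothetical illustrations, not part of the patch, and the protobuf builders are deliberately omitted.

```scala
// Illustrative sketch only: parses a single volume spec into
// (hostPath, containerPath, mode), defaulting mode to "rw" just as the
// quoted code defaults to Volume.Mode.RW.
object VolumeSpecExample {
  def parse(spec: String): Option[(Option[String], String, String)] =
    spec.split(":") match {
      case Array(c)                       => Some((None, c, "rw"))
      case Array(c, m @ ("rw" | "ro"))    => Some((None, c, m))
      case Array(h, c)                    => Some((Some(h), c, "rw"))
      case Array(h, c, m @ ("rw" | "ro")) => Some((Some(h), c, m))
      case _                              => None // unparseable spec
    }
}
```

So `"/host/logs:/logs:ro"` parses to `Some((Some("/host/logs"), "/logs", "ro"))`, and a bare `"/logs"` parses to a read-write container-only volume.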





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476685
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
--- End diff --

Can you rephrase this:
`s"Unable to parse volume specs: $volumes. Expected form [host-dir:]..."`





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477334
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
+        }
+      }
+      .flatMap { _.map(_.build) }
+      .toList
+  }
+
+  /**
+   * Parse a portmap spec, similar to the form passed to 'docker run -p'
+   * the form accepted is host_port:container_port[:proto]
+   * Note:
+   * the docker form is [ip:]host_port:container_port, but the DockerInfo
--- End diff --

style:
```
blank line
Note: the docker form is...
```





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477358
  
--- Diff: docs/running-on-mesos.md ---
@@ -167,6 +167,14 @@ acquire. By default, it will acquire *all* cores in 
the cluster (that get offere
 only makes sense if you run just one application at a time. You can cap 
the maximum number of cores
using `conf.set("spark.cores.max", "10")` (for example).
 
+# Mesos Docker Support
+
+Spark can make use of a Mesos Docker containerizer by setting the property 
`spark.mesos.executor.docker.image`
+in your [SparkConf](configuration.html#spark-properties)
+
+The Docker Image used must have an appropriate version of Spark already 
part of the image, or you can
--- End diff --

no need to capitalize Image?
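For reference, the Docker-related Mesos properties discussed in this thread can be supplied as plain key/value string pairs (a `SparkConf` accepts the same keys via `set`). All values below are hypothetical illustrations of the expected formats, not values from the patch.

```scala
// Hypothetical example values: an image name, a comma-delimitable volume
// spec ([host-dir:]container-path[:rw|ro]), and a port mapping
// (host_port:container_port[:tcp|udp]).
val dockerProps = Map(
  "spark.mesos.executor.docker.image"    -> "example/spark-mesos:1.4.0",
  "spark.mesos.executor.docker.volumes"  -> "/host/logs:/logs:ro",
  "spark.mesos.executor.docker.portmaps" -> "8080:80:tcp"
)
```

In application code these would typically be passed through `SparkConf.set(key, value)` or a properties file rather than a raw `Map`.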





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477372
  
--- Diff: docs/running-on-mesos.md ---
@@ -211,6 +219,43 @@ See the [configuration page](configuration.html) for 
information on Spark config
  </td>
</tr>
<tr>
+  <td><code>spark.mesos.executor.docker.image</code></td>
+  <td>(none)</td>
+  <td>
+Set the docker image in which the Spark executors will run when using 
Mesos. The selected
--- End diff --

Do you mean "Set the path to the docker image"?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476531
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
--- End diff --

should this be `[host-dir]:[container-dir]:[rw|ro]` instead?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476488
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
--- End diff --

Parse a comma-delimited list of volume spec, each of which takes the form...





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28476966
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
+        }
+      }
+      .flatMap { _.map(_.build) }
+      .toList
+  }
+
+  /**
+   * Parse a portmap spec, similar to the form passed to 'docker run -p'
+   * the form accepted is host_port:container_port[:proto]
+   * Note:
+   * the docker form is [ip:]host_port:container_port, but the DockerInfo
+   * message has no field for 'ip', and instead has a 'protocol' field.
+   * Docker itself only appears to support TCP, so this alternative form
+   * anticipates the expansion of the docker form to allow for a protocol
+   * and leaves open the chance for mesos to begin to accept an 'ip' field
+   */
+  def parsePortMappingsSpec(portmaps: String): List[DockerInfo.PortMapping] = {
+    portmaps.split(",").map(_.split(":")).map { spec: Array[String] =>
+      val portmap: DockerInfo.PortMapping.Builder = DockerInfo.PortMapping
+        .newBuilder()
+        .setProtocol("tcp")
+      spec match {
+        case Array(host_port, container_port) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt))
+        case Array(host_port, container_port, protocol) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt)
+            .setProtocol(protocol))
+        case spec => {
+          logWarning("parsePortMappingSpec: unparseable: " + spec.mkString(":"))
+          None
+        }
+      }
+    }
+    .flatMap { _.map(_.build) }
+    .toList
+  }
+
+  def withDockerInfo(
--- End diff --

This method name is still not super descriptive. Should this be 
`setupDockerContainer`? Also please add a short javadoc on what this does.



[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-15 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28477717
  
--- Diff: docs/running-on-mesos.md ---
@@ -167,6 +167,14 @@ acquire. By default, it will acquire *all* cores in 
the cluster (that get offere
 only makes sense if you run just one application at a time. You can cap 
the maximum number of cores
using `conf.set("spark.cores.max", "10")` (for example).
 
+# Mesos Docker Support
+
+Spark can make use of a Mesos Docker containerizer by setting the property 
`spark.mesos.executor.docker.image`
+in your [SparkConf](configuration.html#spark-properties)
--- End diff --

(also you need a period at the end here)





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-14 Thread doctapp
Github user doctapp commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-92905011
  
There's a typo in `docker/spark-mesos/Dockerfile` where `ENV 
MESOS_NATIVE_JAVA_LIBRARY /host/usr/local/lib/libmesos.so` should be referring 
to /usr/local/lib/libmesos.so





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-13 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-92441223
  
  [Test build #30170 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30170/consoleFull)
 for   PR 3074 at commit 
[`d8ed2b6`](https://github.com/apache/spark/commit/d8ed2b615f48baf384a9d8a8557b1581ff6238eb).
 * This patch **passes all tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.
 * This patch **adds the following new dependencies:**
   * `mesos-0.21.1-shaded-protobuf.jar`

 * This patch **removes the following dependencies:**
   * `mesos-0.21.0-shaded-protobuf.jar`






[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-92441238
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/30170/
Test PASSed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-13 Thread hellertime
Github user hellertime commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r28239714
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec: Array[String] =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
+        }
+      }
+      .flatMap { _.map(_.build) }
+      .toList
+  }
+
+  /**
+   * Parse a portmap spec, simmilar to the form passed to 'docker run -p'
+   * the form accepted is host_port:container_port[:proto]
+   * Note:
+   * the docker form is [ip:]host_port:container_port, but the DockerInfo
+   * message has no field for 'ip', and instead has a 'protocol' field.
+   * Docker itself only appears to support TCP, so this alternative form
+   * anticipates the expansion of the docker form to allow for a protocol
+   * and leaves open the chance for mesos to begin to accept an 'ip' field
+   */
+  def parsePortMappingsSpec(portmaps: String): List[DockerInfo.PortMapping] = {
+    portmaps.split(",").map(_.split(":")).map { spec: Array[String] =>
+      val portmap: DockerInfo.PortMapping.Builder = DockerInfo.PortMapping
+        .newBuilder()
+        .setProtocol("tcp")
+      spec match {
+        case Array(host_port, container_port) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt))
+        case Array(host_port, container_port, protocol) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt)
+            .setProtocol(protocol))
+        case spec => {
+          logWarning("parsePortMappingSpec: unparseable: " + spec.mkString(":"))
+          None
+        }
+      }
+    }
+    .flatMap { _.map(_.build) }
+    .toList
+  }
+
+  def withDockerInfo(
+      container: ContainerInfo.Builder,
+      image: String,
+      volumes: Option[List[Volume]] = None,
+      network: Option[ContainerInfo.DockerInfo.Network] = None,
+      portmaps: 

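The `[host-dir:][container-dir][:rw|ro]` grammar quoted above can be exercised without the Mesos protobuf dependency. A hedged sketch of the same parsing shape, using a plain case class (`VolumeSpec` and `parseVolumes` are illustrative names, not the PR's API):

```scala
// Illustrative only: mirrors parseVolumesSpec's match structure but
// returns a plain case class instead of a Mesos Volume.Builder.
case class VolumeSpec(containerPath: String, hostPath: Option[String], readOnly: Boolean)

def parseVolumes(volumes: String): List[VolumeSpec] =
  volumes.split(",").map(_.split(":")).flatMap {
    case Array(c)          => Some(VolumeSpec(c, None, readOnly = false))
    case Array(c, "rw")    => Some(VolumeSpec(c, None, readOnly = false))
    case Array(c, "ro")    => Some(VolumeSpec(c, None, readOnly = true))
    case Array(h, c)       => Some(VolumeSpec(c, Some(h), readOnly = false))
    case Array(h, c, "rw") => Some(VolumeSpec(c, Some(h), readOnly = false))
    case Array(h, c, "ro") => Some(VolumeSpec(c, Some(h), readOnly = true))
    case other =>
      // Unparseable specs are logged and dropped, as in the quoted code.
      println("unparseable volume spec: " + other.mkString(":"))
      None
  }.toList
```

Note the literal `"rw"`/`"ro"` cases must precede the general two-element `Array(h, c)` case, since patterns are tried in order.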
[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-13 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-92408362
  
  [Test build #30170 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30170/consoleFull)
 for   PR 3074 at commit 
[`d8ed2b6`](https://github.com/apache/spark/commit/d8ed2b615f48baf384a9d8a8557b1581ff6238eb).





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-05 Thread tnachen
Github user tnachen commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r27784679
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec: Array[String] =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
+        }
+      }
+      .flatMap { _.map(_.build) }
+      .toList
+  }
+
+  /**
+   * Parse a portmap spec, simmilar to the form passed to 'docker run -p'
+   * the form accepted is host_port:container_port[:proto]
+   * Note:
+   * the docker form is [ip:]host_port:container_port, but the DockerInfo
+   * message has no field for 'ip', and instead has a 'protocol' field.
+   * Docker itself only appears to support TCP, so this alternative form
+   * anticipates the expansion of the docker form to allow for a protocol
+   * and leaves open the chance for mesos to begin to accept an 'ip' field
+   */
+  def parsePortMappingsSpec(portmaps: String): List[DockerInfo.PortMapping] = {
+    portmaps.split(",").map(_.split(":")).map { spec: Array[String] =>
+      val portmap: DockerInfo.PortMapping.Builder = DockerInfo.PortMapping
+        .newBuilder()
+        .setProtocol("tcp")
+      spec match {
+        case Array(host_port, container_port) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt))
+        case Array(host_port, container_port, protocol) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt)
+            .setProtocol(protocol))
+        case spec => {
+          logWarning("parsePortMappingSpec: unparseable: " + spec.mkString(":"))
+          None
+        }
+      }
+    }
+    .flatMap { _.map(_.build) }
+    .toList
+  }
+
+  def withDockerInfo(
+      container: ContainerInfo.Builder,
+      image: String,
+      volumes: Option[List[Volume]] = None,
+      network: Option[ContainerInfo.DockerInfo.Network] = None,
+      portmaps: 

[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-05 Thread tnachen
Github user tnachen commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r27784657
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec: Array[String] =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
+        }
+      }
+      .flatMap { _.map(_.build) }
+      .toList
+  }
+
+  /**
+   * Parse a portmap spec, simmilar to the form passed to 'docker run -p'
+   * the form accepted is host_port:container_port[:proto]
+   * Note:
+   * the docker form is [ip:]host_port:container_port, but the DockerInfo
+   * message has no field for 'ip', and instead has a 'protocol' field.
+   * Docker itself only appears to support TCP, so this alternative form
+   * anticipates the expansion of the docker form to allow for a protocol
+   * and leaves open the chance for mesos to begin to accept an 'ip' field
+   */
+  def parsePortMappingsSpec(portmaps: String): List[DockerInfo.PortMapping] = {
+    portmaps.split(",").map(_.split(":")).map { spec: Array[String] =>
+      val portmap: DockerInfo.PortMapping.Builder = DockerInfo.PortMapping
+        .newBuilder()
+        .setProtocol("tcp")
+      spec match {
+        case Array(host_port, container_port) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt))
+        case Array(host_port, container_port, protocol) =>
+          Some(portmap.setHostPort(host_port.toInt)
+            .setContainerPort(container_port.toInt)
+            .setProtocol(protocol))
+        case spec => {
+          logWarning("parsePortMappingSpec: unparseable: " + spec.mkString(":"))
+          None
+        }
+      }
+    }
+    .flatMap { _.map(_.build) }
+    .toList
+  }
+
+  def withDockerInfo(
+      container: ContainerInfo.Builder,
+      image: String,
+      volumes: Option[List[Volume]] = None,
+      network: Option[ContainerInfo.DockerInfo.Network] = None,
+      portmaps: 

[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-05 Thread tnachen
Github user tnachen commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r27784642
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec: Array[String] =>
+        val vol: Volume.Builder = Volume
+          .newBuilder()
+          .setMode(Volume.Mode.RW)
+        spec match {
+          case Array(container_path) =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "rw") =>
+            Some(vol.setContainerPath(container_path))
+          case Array(container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setMode(Volume.Mode.RO))
+          case Array(host_path, container_path) =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "rw") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path))
+          case Array(host_path, container_path, "ro") =>
+            Some(vol.setContainerPath(container_path)
+              .setHostPath(host_path)
+              .setMode(Volume.Mode.RO))
+          case spec => {
+            logWarning("parseVolumeSpec: unparseable: " + spec.mkString(":"))
+            None
+          }
+        }
+      }
+      .flatMap { _.map(_.build) }
+      .toList
+  }
+
+  /**
+   * Parse a portmap spec, simmilar to the form passed to 'docker run -p'
--- End diff --

typo - simmilar 
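For the `host_port:container_port[:proto]` form discussed in this hunk, a dependency-free sketch of the same parsing shape (`PortMapping` and `parsePortMappings` are illustrative names, not the PR's API):

```scala
// Illustrative only: mirrors parsePortMappingsSpec's structure with a
// plain case class instead of a Mesos DockerInfo.PortMapping builder.
case class PortMapping(hostPort: Int, containerPort: Int, protocol: String)

def parsePortMappings(portmaps: String): List[PortMapping] =
  portmaps.split(",").map(_.split(":")).flatMap {
    // Two fields: protocol defaults to tcp, matching the quoted code.
    case Array(h, c)    => Some(PortMapping(h.toInt, c.toInt, "tcp"))
    case Array(h, c, p) => Some(PortMapping(h.toInt, c.toInt, p))
    case other =>
      println("unparseable portmap spec: " + other.mkString(":"))
      None
  }.toList
```

As the doc comment notes, this deliberately diverges from Docker's `[ip:]host_port:container_port` form because `DockerInfo.PortMapping` carries a protocol field but no ip field.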





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-05 Thread tnachen
Github user tnachen commented on a diff in the pull request:

https://github.com/apache/spark/pull/3074#discussion_r27784630
  
--- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
 ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.scheduler.cluster.mesos
+
+import org.apache.mesos.Protos.{ContainerInfo, Volume}
+import org.apache.mesos.Protos.ContainerInfo.DockerInfo
+
+import org.apache.spark.Logging
+
+/**
+ * A collection of utility functions which can be used by both the
+ * MesosSchedulerBackend and the CoarseMesosSchedulerBackend.
+ */
+private[spark] object MesosSchedulerBackendUtil extends Logging {
+  /**
+   * Parse a volume spec in the form passed to 'docker run -v'
+   * which is [host-dir:][container-dir][:rw|ro]
+   */
+  def parseVolumesSpec(volumes: String): List[Volume] = {
+    volumes.split(",").map(_.split(":")).map { spec: Array[String] =>
--- End diff --

I don't think it's in the Spark style to include the type here; usually it's 
just `{ spec =>`.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-04-02 Thread tnachen
Github user tnachen commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-89056297
  
@andrewor14 I think this is ready to go, can you review this?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-27 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-87014181
  
@tnachen indeed, ready to go.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-26 Thread tnachen
Github user tnachen commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-86645305
  
@hellertime looks like this is ready to go? @pwendell @mateiz what else is 
needed to merge this?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85229862
  
@tnachen I'm stumped at the moment. I've gone so far as to exclude the 
explicit docker/spark-mesos/Dockerfile path, but it is still not excluded. I 
had put this down so I haven't looked at it in a few days, nor merged in HEAD, 
but no, the .rat-excludes file is still stopping me. It's probably a typo that 
I've stared at too long (:





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread tnachen
Github user tnachen commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85228145
  
@hellertime are you able to figure out the RAT problem?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread tnachen
Github user tnachen commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85232216
  
@hellertime how about just add the Apache license on the top of the 
Dockerfile?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85256109
  
  [Test build #29037 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29037/consoleFull)
 for   PR 3074 at commit 
[`a2856cd`](https://github.com/apache/spark/commit/a2856cdc99229d96f5b76a619bfbd21105513404).
 * This patch **passes all tests**.

 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85256117
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/29037/
Test PASSed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread srowen
Github user srowen commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85235696
  
retest this please





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85235864
  
  [Test build #29036 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29036/consoleFull)
 for   PR 3074 at commit 
[`a2856cd`](https://github.com/apache/spark/commit/a2856cdc99229d96f5b76a619bfbd21105513404).
 * This patch merges cleanly.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85254916
  
  [Test build #29036 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29036/consoleFull)
 for   PR 3074 at commit 
[`a2856cd`](https://github.com/apache/spark/commit/a2856cdc99229d96f5b76a619bfbd21105513404).
 * This patch **passes all tests**.

 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85254932
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/29036/
Test PASSed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85233392
  
@tnachen stop making things sound so damn easy! ;)





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85235541
  
Jenkins make it so! Oh wait, I don't have permission to do that.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-23 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-85236622
  
  [Test build #29037 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29037/consoleFull) for PR 3074 at commit [`a2856cd`](https://github.com/apache/spark/commit/a2856cdc99229d96f5b76a619bfbd21105513404).
 * This patch merges cleanly.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81968390
  
  [Test build #28679 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28679/consoleFull) for PR 3074 at commit [`5231326`](https://github.com/apache/spark/commit/5231326e2176380dd0a0d2232e07c870661b0d19).
 * This patch **fails RAT tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81968401
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28679/
Test FAILed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81968372
  
  [Test build #28679 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28679/consoleFull) for PR 3074 at commit [`5231326`](https://github.com/apache/spark/commit/5231326e2176380dd0a0d2232e07c870661b0d19).
 * This patch merges cleanly.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81964915
  
  [Test build #28676 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28676/consoleFull) for PR 3074 at commit [`1c02b7b`](https://github.com/apache/spark/commit/1c02b7b5737add8309e1040f8dceb78d12be9f48).
 * This patch **fails RAT tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81964794
  
@pwendell @mateiz I'd like to lobby for getting this merged into the next 
release. I've been making use of this branch in my Mesos cluster, and I 
know it would benefit from much wider use. Using Docker to deploy Spark on 
Mesos is really convenient! Maintaining a patched version of Spark, not so much 
(:
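
For reference, the executor Docker image this PR adds is driven by Spark 
configuration properties. A minimal sketch of what that configuration might 
look like, using the property names introduced by this PR (SPARK-2691); the 
image name, volume, and port-mapping values are illustrative, not from the 
patch itself:

```properties
# conf/spark-defaults.conf -- hypothetical values for the Mesos backend.
# Docker image to launch Spark executors in:
spark.mesos.executor.docker.image     my-org/spark-mesos:1.4.0
# Optional host:container volume mounts (comma-separated, ro/rw suffix):
spark.mesos.executor.docker.volumes   /etc/spark/conf:/etc/spark/conf:ro
# Optional host:container:protocol port mappings:
spark.mesos.executor.docker.portmaps  8080:80:tcp
```

With these set, a plain `spark-submit --master mesos://...` should pick up 
the image without further changes to the job itself.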





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81965912
  
  [Test build #28677 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28677/consoleFull) for PR 3074 at commit [`cdc8f81`](https://github.com/apache/spark/commit/cdc8f81ef44d11b444ef7e5247a747f09f45866f).
 * This patch **fails RAT tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81965894
  
  [Test build #28677 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28677/consoleFull) for PR 3074 at commit [`cdc8f81`](https://github.com/apache/spark/commit/cdc8f81ef44d11b444ef7e5247a747f09f45866f).
 * This patch merges cleanly.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81965916
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28677/
Test FAILed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81964887
  
  [Test build #28676 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28676/consoleFull) for PR 3074 at commit [`1c02b7b`](https://github.com/apache/spark/commit/1c02b7b5737add8309e1040f8dceb78d12be9f48).
 * This patch merges cleanly.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81964917
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28676/
Test FAILed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-16 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-81967379
  
Is there a link to documentation on the syntax of Apache RAT exclude 
expressions? I didn't see any docs on the website, other than a very light 
overview and the --help flag output.
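
For what it's worth, a sketch of what an exclusion file can look like. As I 
understand it, RAT's `-E` option reads one exclusion expression per line, and 
patterns are matched against file *names* rather than full paths, which is a 
common source of surprises; verify this against the RAT version in use. The 
entries below are illustrative, not taken from Spark's actual `.rat-excludes`:

```
target
.gitignore
.rat-excludes
.*\.json
derby.log
```

If a path-qualified entry doesn't take effect, trying the bare file name is 
the first thing worth checking.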





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-05 Thread gurvindersingh
Github user gurvindersingh commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-77517021
  
So it appears this one is not getting into the 1.3 release, as voting is now 
underway for RC3. I would have loved to test it with the 1.3 release if 
possible; otherwise I'll have to backport it myself.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-02 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-76720065
  
  [Test build #28173 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28173/consoleFull) for PR 3074 at commit [`6746fcb`](https://github.com/apache/spark/commit/6746fcb1844e8fd15abe5cdc4626f882096247ac).
 * This patch merges cleanly.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-02 Thread hellertime
Github user hellertime commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-76720424
  
Odd. I explicitly have a line in .rat-excludes for the path that caused 
the error. What gives?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-02 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-76720082
  
  [Test build #28173 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28173/consoleFull) for PR 3074 at commit [`6746fcb`](https://github.com/apache/spark/commit/6746fcb1844e8fd15abe5cdc4626f882096247ac).
 * This patch **fails RAT tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-03-02 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-76720085
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28173/
Test FAILed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-27 Thread tnachen
Github user tnachen commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-76515967
  
@mateiz @pwendell I'm also hoping to see this merged soon; what else is 
needed here?





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-24 Thread rbraley
Github user rbraley commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-75894649
  
I would love to see this merged for 1.3 as well :). I am curious why there 
should be a conceptual separation between the test Docker images and production 
Docker images. Shouldn't we strive for them to be identical between dev and 
production? In fact, my first impression long ago on seeing the docker folder 
was that it was a way to deploy Spark in general, not just for testing, FWIW.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread gurvindersingh
Github user gurvindersingh commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74931493
  
It would be nice to have this patch merged in for the 1.3 release, as we plan 
to use this feature with Mesos and Spark.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74936971
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/27687/
Test FAILed.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74936955
  
  [Test build #27687 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27687/consoleFull) for PR 3074 at commit [`0d6d2b3`](https://github.com/apache/spark/commit/0d6d2b304d56b65d7e2fa61d762ae787d35a2e75).
 * This patch merges cleanly.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74936967
  
  [Test build #27687 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27687/consoleFull) for PR 3074 at commit [`0d6d2b3`](https://github.com/apache/spark/commit/0d6d2b304d56b65d7e2fa61d762ae787d35a2e75).
 * This patch **fails RAT tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread tnachen
Github user tnachen commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74940888
  
@mateiz where do you suggest putting this Dockerfile? I have a Dockerfile 
that builds Spark from source, based on the Mesos image, here: 
https://github.com/tnachen/spark/blob/dockerfile/Dockerfile
@hellertime you can use this if you like or make modifications to it.
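
A minimal sketch of such an image, for anyone who would rather install a 
prebuilt Spark release than build from source. The base-image tag, Spark 
version, and mirror URL below are placeholders, not values from this PR:

```dockerfile
# Hypothetical executor image: Mesos libraries come from the base image,
# a prebuilt Spark distribution goes on top. Tags/versions are illustrative.
FROM mesosphere/mesos:0.21.1-1.1.ubuntu1404

# A JRE and curl are needed to run Spark and fetch the release tarball.
RUN apt-get update && \
    apt-get install -y curl openjdk-7-jre-headless && \
    rm -rf /var/lib/apt/lists/*

# Unpack a prebuilt Spark release under /opt and expose it via SPARK_HOME.
RUN curl -fsSL http://archive.apache.org/dist/spark/spark-1.4.0/spark-1.4.0-bin-hadoop2.4.tgz \
      | tar -xz -C /opt && \
    ln -s /opt/spark-1.4.0-bin-hadoop2.4 /opt/spark

ENV SPARK_HOME /opt/spark
```

Basing the image on a tagged Mesos release keeps the executor's Mesos 
libraries in lockstep with the cluster version.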





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread mateiz
Github user mateiz commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74951728
  
The docker folder is for test images, but it could be a good place for this 
one. I'll let @pwendell comment on it.

Does Apache Mesos publish a base Docker image? It would be easier to base 
it on that if that would get updated with each release.





[GitHub] spark pull request: [SPARK-2691][Mesos] Support for Mesos DockerIn...

2015-02-18 Thread tnachen
Github user tnachen commented on the pull request:

https://github.com/apache/spark/pull/3074#issuecomment-74956760
  
Mesosphere does publish a Mesos image on each release (mesosphere/mesos), 
with each version tagged.
We don't tag the latest release with the :latest tag; I could certainly 
change that.




