[jira] [Commented] (SPARK-5078) Allow setting Akka host name from env vars

2015-03-18 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14367704#comment-14367704
 ] 

Timothy St. Clair commented on SPARK-5078:
--

Cross-listing details of the issue here, for posterity: 
https://groups.google.com/forum/#!topic/akka-user/9RQdf2NjciE 
plus mattf's example fix for k8s: https://github.com/mattf/docker-spark 

 Allow setting Akka host name from env vars
 --

 Key: SPARK-5078
 URL: https://issues.apache.org/jira/browse/SPARK-5078
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Reporter: Michael Armbrust
Assignee: Michael Armbrust
Priority: Critical
 Fix For: 1.3.0, 1.2.1


 Currently Spark lets you set the IP address using SPARK_LOCAL_IP, but this is 
 given to akka only after doing a reverse DNS lookup.  This makes it difficult 
 to run Spark in Docker.  You can already change the hostname that is used 
 programmatically, but it would be nice to be able to do this with an 
 environment variable.
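
A minimal sketch of what such an env-var override could look like (assuming a SPARK_LOCAL_HOSTNAME variable; the fallback logic is illustrative, not the actual Utils code):

{noformat}
import java.net.InetAddress

// Sketch: prefer an explicit hostname from the environment, otherwise
// fall back to the reverse-DNS behavior described above.
def localHostName(): String =
  sys.env.get("SPARK_LOCAL_HOSTNAME").getOrElse {
    val ip = sys.env.getOrElse("SPARK_LOCAL_IP",
      InetAddress.getLocalHost.getHostAddress)
    InetAddress.getByName(ip).getCanonicalHostName  // reverse DNS lookup
  }
{noformat}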



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-5368) Spark should support NAT (via akka improvements)

2015-03-09 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14353586#comment-14353586
 ] 

Timothy St. Clair commented on SPARK-5368:
--

[~sowen] IIRC there are other bugs around no longer maintaining an akka fork 
and updating to 2.4.  See https://issues.apache.org/jira/browse/SPARK-5293

 Spark should support NAT (via akka improvements)
 -

 Key: SPARK-5368
 URL: https://issues.apache.org/jira/browse/SPARK-5368
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: jay vyas
 Fix For: 1.2.2


 Spark sets up actors for akka with a set of variables which are defined in 
 the {{AkkaUtils.scala}} class.  
 A snippet:
 {noformat}
  98   |akka.loggers = [akka.event.slf4j.Slf4jLogger]
  99   |akka.stdout-loglevel = ERROR
 100   |akka.jvm-exit-on-fatal-error = off
 101   |akka.remote.require-cookie = $requireCookie
 102   |akka.remote.secure-cookie = $secureCookie
 {noformat}
 We should allow users to pass in custom settings, for example, so that 
 arbitrary akka modifications can be used at runtime for security, 
 performance, logging, and so on.
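
One way such a passthrough might work, sketched below: lay user-supplied settings over Spark's built-in block with Typesafe Config's withFallback, so user keys take precedence. Feeding it from a single spark.akka.extra-config property is an assumption for illustration, not an actual Spark setting.

{noformat}
import com.typesafe.config.{Config, ConfigFactory}

// Sketch: userSettings might come from a hypothetical
// spark.akka.extra-config property; keys defined there override
// the generated defaults shown above.
def mergedAkkaConf(userSettings: String, sparkDefaults: String): Config =
  ConfigFactory.parseString(userSettings)
    .withFallback(ConfigFactory.parseString(sparkDefaults))
    .resolve()
{noformat}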






[jira] [Comment Edited] (SPARK-5368) Spark should support NAT (via akka improvements)

2015-01-23 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290057#comment-14290057
 ] 

Timothy St. Clair edited comment on SPARK-5368 at 1/23/15 9:54 PM:
---

akka 2.4 apparently has NAT support, which helps when running spark in 
container environments: 

https://groups.google.com/forum/#!topic/akka-user/9RQdf2NjciE



was (Author: tstclair):
akka 2.4 apparently has NAT support, which helps when running docker in 
container environments: 

https://groups.google.com/forum/#!topic/akka-user/9RQdf2NjciE


 Spark should support NAT (via akka improvements)
 -

 Key: SPARK-5368
 URL: https://issues.apache.org/jira/browse/SPARK-5368
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: jay vyas
 Fix For: 1.2.2


 Spark sets up actors for akka with a set of variables which are defined in 
 the {{AkkaUtils.scala}} class.  
 A snippet:
 {noformat}
  98   |akka.loggers = [akka.event.slf4j.Slf4jLogger]
  99   |akka.stdout-loglevel = ERROR
 100   |akka.jvm-exit-on-fatal-error = off
 101   |akka.remote.require-cookie = $requireCookie
 102   |akka.remote.secure-cookie = $secureCookie
 {noformat}
 We should allow users to pass in custom settings, for example, so that 
 arbitrary akka modifications can be used at runtime for security, 
 performance, logging, and so on.






[jira] [Commented] (SPARK-5368) Spark should support NAT (via akka improvements)

2015-01-23 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290057#comment-14290057
 ] 

Timothy St. Clair commented on SPARK-5368:
--

akka 2.4 apparently has NAT support, which helps when running docker in 
container environments: 

https://groups.google.com/forum/#!topic/akka-user/9RQdf2NjciE


 Spark should support NAT (via akka improvements)
 -

 Key: SPARK-5368
 URL: https://issues.apache.org/jira/browse/SPARK-5368
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: jay vyas
 Fix For: 1.2.2


 Spark sets up actors for akka with a set of variables which are defined in 
 the {{AkkaUtils.scala}} class.  
 A snippet:
 {noformat}
  98   |akka.loggers = [akka.event.slf4j.Slf4jLogger]
  99   |akka.stdout-loglevel = ERROR
 100   |akka.jvm-exit-on-fatal-error = off
 101   |akka.remote.require-cookie = $requireCookie
 102   |akka.remote.secure-cookie = $secureCookie
 {noformat}
 We should allow users to pass in custom settings, for example, so that 
 arbitrary akka modifications can be used at runtime for security, 
 performance, logging, and so on.






[jira] [Commented] (SPARK-5368) Support user configurable akka parameters.

2015-01-22 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287697#comment-14287697
 ] 

Timothy St. Clair commented on SPARK-5368:
--

To be specific we would like the option to pass `akka.remote.untrusted-mode = on` 
through, but ideally one should be able to pass (N) params through. 
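
With a passthrough along the lines sketched earlier in this thread's quoted description, the user side might look like this (the property name is a made-up placeholder, not a real Spark config key):

{noformat}
// Hypothetical usage, assuming such a passthrough existed:
val conf = new org.apache.spark.SparkConf()
  .set("spark.akka.extra-config", "akka.remote.untrusted-mode = on")
{noformat}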



 Support user configurable akka parameters. 
 ---

 Key: SPARK-5368
 URL: https://issues.apache.org/jira/browse/SPARK-5368
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: jay vyas
 Fix For: 1.2.2


 Spark sets up actors for akka with a set of variables which are defined in 
 the {{AkkaUtils.scala}} class.  
 A snippet:
 {noformat}
  98   |akka.loggers = [akka.event.slf4j.Slf4jLogger]
  99   |akka.stdout-loglevel = ERROR
 100   |akka.jvm-exit-on-fatal-error = off
 101   |akka.remote.require-cookie = $requireCookie
 102   |akka.remote.secure-cookie = $secureCookie
 {noformat}
 We should allow users to pass in custom settings, for example, so that 
 arbitrary akka modifications can be used at runtime for security, 
 performance, logging, and so on.






[jira] [Comment Edited] (SPARK-5368) Support user configurable akka parameters.

2015-01-22 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287697#comment-14287697
 ] 

Timothy St. Clair edited comment on SPARK-5368 at 1/22/15 4:41 PM:
---

To be specific, we would like the option to pass `akka.remote.untrusted-mode = 
on` through, but ideally one should be able to pass (N) params through. 




was (Author: tstclair):
To be specific we would like the option to pass `akka.remote.untrusted-mode = on` 
through, but ideally one should be able to pass (N) params through. 



 Support user configurable akka parameters. 
 ---

 Key: SPARK-5368
 URL: https://issues.apache.org/jira/browse/SPARK-5368
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: jay vyas
 Fix For: 1.2.2


 Spark sets up actors for akka with a set of variables which are defined in 
 the {{AkkaUtils.scala}} class.  
 A snippet:
 {noformat}
  98   |akka.loggers = [akka.event.slf4j.Slf4jLogger]
  99   |akka.stdout-loglevel = ERROR
 100   |akka.jvm-exit-on-fatal-error = off
 101   |akka.remote.require-cookie = $requireCookie
 102   |akka.remote.secure-cookie = $secureCookie
 {noformat}
 We should allow users to pass in custom settings, for example, so that 
 arbitrary akka modifications can be used at runtime for security, 
 performance, logging, and so on.






[jira] [Commented] (SPARK-2691) Allow Spark on Mesos to be launched with Docker

2014-11-10 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204956#comment-14204956
 ] 

Timothy St. Clair commented on SPARK-2691:
--

[~ChrisHeller], [~tarnfeld] - I'm sure there are others on both the mesos and 
spark mailing lists who would be interested in testing this patch and giving 
feedback.  

 Allow Spark on Mesos to be launched with Docker
 ---

 Key: SPARK-2691
 URL: https://issues.apache.org/jira/browse/SPARK-2691
 Project: Spark
  Issue Type: Improvement
  Components: Mesos
Reporter: Timothy Chen
Assignee: Timothy Chen
  Labels: mesos
 Attachments: spark-docker.patch


 Currently, to launch Spark with Mesos, one must upload a tarball and specify 
 an executor URI that is downloaded on each slave (or even on each execution, 
 depending on whether coarse mode is used).
 We want to make Spark able to support launching executors via a Docker image, 
 building on the recent Docker and Mesos integration work. 
 With that integration, Spark can simply specify a Docker image and whatever 
 options are needed, and it should continue to work as-is.
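
For reference, the Docker/Mesos integration exposes this through the ContainerInfo protobuf; a rough sketch of what the scheduler backend might attach to an executor's TaskInfo (the config key that would supply the image name is left out, since none existed at the time):

{noformat}
import org.apache.mesos.Protos.ContainerInfo

// Sketch: describe the executor container as a Docker image.
def containerFor(image: String): ContainerInfo =
  ContainerInfo.newBuilder()
    .setType(ContainerInfo.Type.DOCKER)
    .setDocker(ContainerInfo.DockerInfo.newBuilder().setImage(image))
    .build()
{noformat}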






[jira] [Commented] (SPARK-1882) Support dynamic memory sharing in Mesos

2014-11-10 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205084#comment-14205084
 ] 

Timothy St. Clair commented on SPARK-1882:
--

[~tnachen] ^ FYI. 

 Support dynamic memory sharing in Mesos
 ---

 Key: SPARK-1882
 URL: https://issues.apache.org/jira/browse/SPARK-1882
 Project: Spark
  Issue Type: Improvement
  Components: Mesos
Affects Versions: 1.0.0
Reporter: Andrew Ash

 Fine-grained mode on Mesos currently supports sharing CPUs very well, but 
 requires that memory be pre-partitioned according to the executor memory 
 parameter.  Mesos supports dynamic memory allocation in addition to dynamic 
 CPU allocation, so we should utilize this feature in Spark.
 See below: when the Mesos backend accepts a resource offer, it only checks 
 that there's enough memory to cover sc.executorMemory, and never takes a 
 fraction of the memory available.  The memory offer is accepted 
 all-or-nothing at a pre-defined size.
 Coarse mode:
 https://github.com/apache/spark/blob/3ce526b168050c572a1feee8e0121e1426f7d9ee/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala#L208
 Fine mode:
 https://github.com/apache/spark/blob/a5150d199ca97ab2992bc2bb221a3ebf3d3450ba/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala#L114
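
A sketch of the fractional alternative being proposed (values and signature are illustrative; the real offer handling involves more bookkeeping):

{noformat}
// Sketch: instead of requiring the full executor memory, accept whatever
// fraction of the offer is usable, down to an assumed floor.
def memoryToAccept(offeredMb: Double, desiredMb: Int, minMb: Int = 512): Option[Double] =
  if (offeredMb >= minMb) Some(math.min(offeredMb, desiredMb.toDouble)) else None
{noformat}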






[jira] [Commented] (SPARK-3174) Provide elastic scaling within a Spark application

2014-10-13 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170033#comment-14170033
 ] 

Timothy St. Clair commented on SPARK-3174:
--

[~pwendell] are you talking about resizing? 

[~nnielsen] [~tnachen] ^ FYI. 

 Provide elastic scaling within a Spark application
 --

 Key: SPARK-3174
 URL: https://issues.apache.org/jira/browse/SPARK-3174
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, YARN
Affects Versions: 1.0.2
Reporter: Sandy Ryza
Assignee: Andrew Or
 Attachments: SPARK-3174design.pdf, 
 dynamic-scaling-executors-10-6-14.pdf


 A common complaint with Spark in a multi-tenant environment is that 
 applications have a fixed allocation that doesn't grow and shrink with their 
 resource needs.  We're blocked on YARN-1197 for dynamically changing the 
 resources within executors, but we can still allocate and discard whole 
 executors.
 It would be useful to have some heuristics that
 * Request more executors when many pending tasks are building up
 * Discard executors when they are idle
 See the latest design doc for more information.
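
In code terms, the two heuristics amount to something like this sketch (thresholds are placeholders, not values from the design doc):

{noformat}
// Sketch of the heuristics described above: grow on backlog, shrink on idle.
def adjustExecutors(pendingTasks: Int, idleExecutors: Set[String],
                    requestMore: Int => Unit, release: String => Unit): Unit = {
  if (pendingTasks > 100) requestMore(pendingTasks / 100)  // backlog building up
  idleExecutors.foreach(release)                           // discard idle executors
}
{noformat}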






[jira] [Commented] (SPARK-2022) Spark 1.0.0 is failing if mesos.coarse set to true

2014-09-16 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135592#comment-14135592
 ] 

Timothy St. Clair commented on SPARK-2022:
--

This appears to be done; could we please close the JIRA? 

 Spark 1.0.0 is failing if mesos.coarse set to true
 --

 Key: SPARK-2022
 URL: https://issues.apache.org/jira/browse/SPARK-2022
 Project: Spark
  Issue Type: Bug
  Components: Mesos
Affects Versions: 1.0.0
Reporter: Marek Wiewiorka
Assignee: Tim Chen
Priority: Critical

 more stderr
 ---
 WARNING: Logging before InitGoogleLogging() is written to STDERR
 I0603 16:07:53.721132 61192 exec.cpp:131] Version: 0.18.2
 I0603 16:07:53.725230 61200 exec.cpp:205] Executor registered on slave 
 201405220917-134217738-5050-27119-0
 Exception in thread "main" java.lang.NumberFormatException: For input string: 
 "sparkseq003.cloudapp.net"
 at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Integer.parseInt(Integer.java:492)
 at java.lang.Integer.parseInt(Integer.java:527)
 at 
 scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
 at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
 at 
 org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:135)
 at 
 org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
 more stdout
 ---
 Registered executor on sparkseq003.cloudapp.net
 Starting task 5
 Forked command at 61202
 sh -c '/home/mesos/spark-1.0.0/bin/spark-class 
 org.apache.spark.executor.CoarseGrainedExecutorBackend 
 -Dspark.mesos.coarse=true 
 akka.tcp://sp...@sparkseq001.cloudapp.net:40312/user/CoarseG
 rainedScheduler 201405220917-134217738-5050-27119-0 sparkseq003.cloudapp.net 
 4'
 Command exited with status 1 (pid: 61202)
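
The trace shows CoarseGrainedExecutorBackend.main parsing sparkseq003.cloudapp.net as an integer, i.e. the hostname landed in an argument slot where a numeric value was expected. A defensive-parsing sketch (the argument layout is illustrative):

{noformat}
// Sketch: fail with a clear message on a shifted argument instead of a
// bare NumberFormatException deep inside main().
def parseIntArg(arg: String, what: String): Int =
  try arg.toInt catch {
    case _: NumberFormatException =>
      sys.error(s"Expected an integer for $what, got '$arg' - check argument order")
  }
{noformat}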






[jira] [Commented] (SPARK-3223) runAsSparkUser cannot change HDFS write permission properly in mesos cluster mode

2014-09-16 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135965#comment-14135965
 ] 

Timothy St. Clair commented on SPARK-3223:
--

As [~tnachen] mentioned in the PR, specifying the user for the framework should 
resolve the issue. 

 runAsSparkUser cannot change HDFS write permission properly in mesos cluster 
 mode
 -

 Key: SPARK-3223
 URL: https://issues.apache.org/jira/browse/SPARK-3223
 Project: Spark
  Issue Type: Bug
  Components: Input/Output, Mesos
Affects Versions: 1.0.2
Reporter: Jongyoul Lee
Priority: Critical
 Fix For: 1.0.3


 While running mesos with the --no-switch_user option, the HDFS account name 
 differs between driver and executor, which causes a permission error at the 
 last stage. The executor's id is the mesos user id, while the driver's id is 
 whoever runs spark-submit. So moving output from _temporary/path/to/output/part- 
 to /output/path/part- fails because of a permission error. The solution is 
 simply to set SPARK_USER as HADOOP_USER_NAME when MesosExecutorBackend 
 calls runAsSparkUser; HADOOP_USER_NAME is what FileSystem uses to get the user.
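
The described fix boils down to running the executor's work as the submitting user via Hadoop's UserGroupInformation; a minimal sketch, assuming the real SparkHadoopUtil version also propagates credentials:

{noformat}
import java.security.PrivilegedExceptionAction
import org.apache.hadoop.security.UserGroupInformation

// Sketch: run `body` as SPARK_USER so HDFS sees the submitting user
// rather than the mesos slave's local account.
def runAsSparkUser(body: () => Unit): Unit = {
  val user = sys.env.getOrElse("SPARK_USER", sys.props("user.name"))
  UserGroupInformation.createRemoteUser(user).doAs(
    new PrivilegedExceptionAction[Unit] { override def run(): Unit = body() })
}
{noformat}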






[jira] [Commented] (SPARK-2691) Allow Spark on Mesos to be launched with Docker

2014-09-16 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135990#comment-14135990
 ] 

Timothy St. Clair commented on SPARK-2691:
--

+1 [~tnachen], I'd be happy to help here. 

 Allow Spark on Mesos to be launched with Docker
 ---

 Key: SPARK-2691
 URL: https://issues.apache.org/jira/browse/SPARK-2691
 Project: Spark
  Issue Type: Improvement
  Components: Mesos
Reporter: Timothy Chen
  Labels: mesos

 Currently, to launch Spark with Mesos, one must upload a tarball and specify 
 an executor URI that is downloaded on each slave (or even on each execution, 
 depending on whether coarse mode is used).
 We want to make Spark able to support launching executors via a Docker image, 
 building on the recent Docker and Mesos integration work. 
 With that integration, Spark can simply specify a Docker image and whatever 
 options are needed, and it should continue to work as-is.






[jira] [Commented] (SPARK-1702) Mesos executor won't start because of a ClassNotFoundException

2014-09-16 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14136015#comment-14136015
 ] 

Timothy St. Clair commented on SPARK-1702:
--

I don't believe so; there appear to be a number of stale tickets that are not 
being maintained. 

 Mesos executor won't start because of a ClassNotFoundException
 --

 Key: SPARK-1702
 URL: https://issues.apache.org/jira/browse/SPARK-1702
 Project: Spark
  Issue Type: Bug
  Components: Mesos
Affects Versions: 1.0.0
Reporter: Bouke van der Bijl
  Labels: executors, mesos, spark

 Some discussion here: 
 http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-ClassNotFoundException-spark-on-mesos-td3510.html
 Fix here (which is probably not the right fix): 
 https://github.com/apache/spark/pull/620
 This was broken in v0.9.0, was fixed in v0.9.1 and is now broken again.
 Error in Mesos executor stderr:
 WARNING: Logging before InitGoogleLogging() is written to STDERR
 I0502 17:31:42.672224 14688 exec.cpp:131] Version: 0.18.0
 I0502 17:31:42.674959 14707 exec.cpp:205] Executor registered on slave 
 20140501-182306-16842879-5050-10155-0
 14/05/02 17:31:42 INFO MesosExecutorBackend: Using Spark's default log4j 
 profile: org/apache/spark/log4j-defaults.properties
 14/05/02 17:31:42 INFO MesosExecutorBackend: Registered with Mesos as 
 executor ID 20140501-182306-16842879-5050-10155-0
 14/05/02 17:31:43 INFO SecurityManager: Changing view acls to: vagrant
 14/05/02 17:31:43 INFO SecurityManager: SecurityManager, is authentication 
 enabled: false are ui acls enabled: false users with view permissions: 
 Set(vagrant)
 14/05/02 17:31:43 INFO Slf4jLogger: Slf4jLogger started
 14/05/02 17:31:43 INFO Remoting: Starting remoting
 14/05/02 17:31:43 INFO Remoting: Remoting started; listening on addresses 
 :[akka.tcp://spark@localhost:50843]
 14/05/02 17:31:43 INFO Remoting: Remoting now listens on addresses: 
 [akka.tcp://spark@localhost:50843]
 java.lang.ClassNotFoundException: org/apache/spark/serializer/JavaSerializer
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at org.apache.spark.SparkEnv$.instantiateClass$1(SparkEnv.scala:165)
 at org.apache.spark.SparkEnv$.create(SparkEnv.scala:176)
 at org.apache.spark.executor.Executor.init(Executor.scala:106)
 at 
 org.apache.spark.executor.MesosExecutorBackend.registered(MesosExecutorBackend.scala:56)
 Exception in thread "Thread-0" I0502 17:31:43.710039 14707 exec.cpp:412] 
 Deactivating the executor libprocess
 The problem is that it can't find the class. 
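
Note the reported name uses slashes (org/apache/spark/serializer/JavaSerializer): Class.forName expects the dotted binary name, so a resource-style path is apparently reaching the reflection call. A trivially defensive sketch, which would only paper over the symptom, not the executor classpath problem itself:

{noformat}
// Sketch: normalize a slash-separated name before reflection.
def loadClass(name: String): Class[_] =
  Class.forName(name.replace('/', '.'))
{noformat}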






[jira] [Comment Edited] (SPARK-1702) Mesos executor won't start because of a ClassNotFoundException

2014-09-16 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14136015#comment-14136015
 ] 

Timothy St. Clair edited comment on SPARK-1702 at 9/16/14 7:17 PM:
---

I don't believe it is an issue, but there appear to be a number of stale 
tickets that are not being cleaned up. 


was (Author: tstclair):
I don't believe so; there appear to be a number of stale tickets that are not 
being maintained. 

 Mesos executor won't start because of a ClassNotFoundException
 --

 Key: SPARK-1702
 URL: https://issues.apache.org/jira/browse/SPARK-1702
 Project: Spark
  Issue Type: Bug
  Components: Mesos
Affects Versions: 1.0.0
Reporter: Bouke van der Bijl
  Labels: executors, mesos, spark

 Some discussion here: 
 http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-ClassNotFoundException-spark-on-mesos-td3510.html
 Fix here (which is probably not the right fix): 
 https://github.com/apache/spark/pull/620
 This was broken in v0.9.0, was fixed in v0.9.1 and is now broken again.
 Error in Mesos executor stderr:
 WARNING: Logging before InitGoogleLogging() is written to STDERR
 I0502 17:31:42.672224 14688 exec.cpp:131] Version: 0.18.0
 I0502 17:31:42.674959 14707 exec.cpp:205] Executor registered on slave 
 20140501-182306-16842879-5050-10155-0
 14/05/02 17:31:42 INFO MesosExecutorBackend: Using Spark's default log4j 
 profile: org/apache/spark/log4j-defaults.properties
 14/05/02 17:31:42 INFO MesosExecutorBackend: Registered with Mesos as 
 executor ID 20140501-182306-16842879-5050-10155-0
 14/05/02 17:31:43 INFO SecurityManager: Changing view acls to: vagrant
 14/05/02 17:31:43 INFO SecurityManager: SecurityManager, is authentication 
 enabled: false are ui acls enabled: false users with view permissions: 
 Set(vagrant)
 14/05/02 17:31:43 INFO Slf4jLogger: Slf4jLogger started
 14/05/02 17:31:43 INFO Remoting: Starting remoting
 14/05/02 17:31:43 INFO Remoting: Remoting started; listening on addresses 
 :[akka.tcp://spark@localhost:50843]
 14/05/02 17:31:43 INFO Remoting: Remoting now listens on addresses: 
 [akka.tcp://spark@localhost:50843]
 java.lang.ClassNotFoundException: org/apache/spark/serializer/JavaSerializer
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at org.apache.spark.SparkEnv$.instantiateClass$1(SparkEnv.scala:165)
 at org.apache.spark.SparkEnv$.create(SparkEnv.scala:176)
 at org.apache.spark.executor.Executor.init(Executor.scala:106)
 at 
 org.apache.spark.executor.MesosExecutorBackend.registered(MesosExecutorBackend.scala:56)
 Exception in thread "Thread-0" I0502 17:31:43.710039 14707 exec.cpp:412] 
 Deactivating the executor libprocess
 The problem is that it can't find the class. 






[jira] [Commented] (SPARK-1807) Modify SPARK_EXECUTOR_URI to allow for script execution in Mesos.

2014-09-16 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14136029#comment-14136029
 ] 

Timothy St. Clair commented on SPARK-1807:
--

Please close in favor of SPARK-2691

 Modify SPARK_EXECUTOR_URI to allow for script execution in Mesos.
 -

 Key: SPARK-1807
 URL: https://issues.apache.org/jira/browse/SPARK-1807
 Project: Spark
  Issue Type: Improvement
  Components: Mesos
Affects Versions: 0.9.0
Reporter: Timothy St. Clair

 Modify Mesos Scheduler integration to allow SPARK_EXECUTOR_URI to be an 
 executable script.  This allows admins to launch spark in any fashion they 
 desire, vs. just tarball fetching + implied context.   






[jira] [Commented] (SPARK-3535) Spark on Mesos not correctly setting heap overhead

2014-09-15 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134499#comment-14134499
 ] 

Timothy St. Clair commented on SPARK-3535:
--

Are you seeing this under fine-grained mode, coarse-grained mode, or both?  

 Spark on Mesos not correctly setting heap overhead
 --

 Key: SPARK-3535
 URL: https://issues.apache.org/jira/browse/SPARK-3535
 Project: Spark
  Issue Type: Bug
  Components: Mesos
Affects Versions: 1.1.0
Reporter: Brenden Matthews

 Spark on Mesos does not account for any memory overhead.  The result is that 
 tasks are OOM killed nearly 95% of the time.
 Like with the Hadoop on Mesos project, Spark should set aside 15-25% of the 
 executor memory for JVM overhead.
 For example, see: 
 https://github.com/mesos/hadoop/blob/master/src/main/java/org/apache/hadoop/mapred/ResourcePolicy.java#L55-L63
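
A minimal sketch of the proposed sizing, mirroring the Hadoop-on-Mesos ResourcePolicy linked above (the 15% figure and 384 MB floor are assumptions, not Spark constants):

{noformat}
// Sketch: reserve headroom for JVM overhead when building the Mesos
// resource request instead of requesting bare executor memory.
val overheadFraction = 0.15
val minOverheadMb = 384
def requestedMemoryMb(executorMemoryMb: Int): Int =
  executorMemoryMb + math.max((executorMemoryMb * overheadFraction).toInt, minOverheadMb)
{noformat}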






[jira] [Commented] (SPARK-1807) Modify SPARK_EXECUTOR_URI to allow for script execution in Mesos.

2014-07-10 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057592#comment-14057592
 ] 

Timothy St. Clair commented on SPARK-1807:
--

I'm saying that the URI should *only* have an implied context of a tarball if 
it is a tarball, otherwise if it's a script then it should be able to detect 
and executed. 

I think simple extension detection should work and be backwards compatible. 
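
A sketch of what that extension check might look like when the scheduler builds the executor command (extensions and command strings are illustrative):

{noformat}
// Sketch: treat recognized archives as before; anything else is assumed
// to be a fetched script and executed directly.
def executorCommand(uri: String): String = {
  val archiveExts = Seq(".tgz", ".tar.gz", ".zip")
  if (archiveExts.exists(ext => uri.endsWith(ext)))
    "cd spark-* && ./bin/spark-executor"   // existing tarball behavior
  else
    s"./${uri.split('/').last}"            // run the script as-is
}
{noformat}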

 Modify SPARK_EXECUTOR_URI to allow for script execution in Mesos.
 -

 Key: SPARK-1807
 URL: https://issues.apache.org/jira/browse/SPARK-1807
 Project: Spark
  Issue Type: Improvement
  Components: Mesos
Affects Versions: 0.9.0
Reporter: Timothy St. Clair

 Modify Mesos Scheduler integration to allow SPARK_EXECUTOR_URI to be an 
 executable script.  This allows admins to launch spark in any fashion they 
 desire, vs. just tarball fetching + implied context.   





[jira] [Comment Edited] (SPARK-1807) Modify SPARK_EXECUTOR_URI to allow for script execution in Mesos.

2014-07-10 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057592#comment-14057592
 ] 

Timothy St. Clair edited comment on SPARK-1807 at 7/10/14 3:40 PM:
---

I'm saying that the URI should *only* have an implied context of a tarball if 
it is a tarball, otherwise if it's a script then it should be able to detect 
and execut. 

I think simple extension detection should work and be backwards compatible. 


was (Author: tstclair):
I'm saying that the URI should *only* have an implied context of a tarball if 
it is a tarball, otherwise if it's a script then it should be able to detect 
and executed. 

I think simple extension detection should work and be backwards compatible. 

 Modify SPARK_EXECUTOR_URI to allow for script execution in Mesos.
 -

 Key: SPARK-1807
 URL: https://issues.apache.org/jira/browse/SPARK-1807
 Project: Spark
  Issue Type: Improvement
  Components: Mesos
Affects Versions: 0.9.0
Reporter: Timothy St. Clair

 Modify Mesos Scheduler integration to allow SPARK_EXECUTOR_URI to be an 
 executable script.  This allows admins to launch spark in any fashion they 
 desire, vs. just tarball fetching + implied context.   





[jira] [Comment Edited] (SPARK-1807) Modify SPARK_EXECUTOR_URI to allow for script execution in Mesos.

2014-07-10 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057592#comment-14057592
 ] 

Timothy St. Clair edited comment on SPARK-1807 at 7/10/14 3:40 PM:
---

I'm saying that the URI should *only* have an implied context of a tarball if 
it is a tarball, otherwise if it's a script then it should be able to detect 
and execute. 

I think simple extension detection should work and be backwards compatible. 


was (Author: tstclair):
I'm saying that the URI should *only* have an implied context of a tarball if 
it is a tarball, otherwise if it's a script then it should be able to detect 
and execut. 

I think simple extension detection should work and be backwards compatible. 

 Modify SPARK_EXECUTOR_URI to allow for script execution in Mesos.
 -

 Key: SPARK-1807
 URL: https://issues.apache.org/jira/browse/SPARK-1807
 Project: Spark
  Issue Type: Improvement
  Components: Mesos
Affects Versions: 0.9.0
Reporter: Timothy St. Clair

 Modify Mesos Scheduler integration to allow SPARK_EXECUTOR_URI to be an 
 executable script.  This allows admins to launch spark in any fashion they 
 desire, vs. just tarball fetching + implied context.   





[jira] [Commented] (SPARK-1433) Upgrade Mesos dependency to 0.17.0

2014-05-15 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-1433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13993896#comment-13993896
 ] 

Timothy St. Clair commented on SPARK-1433:
--

Likely want to aim higher at this point, perhaps 0.18.1

 Upgrade Mesos dependency to 0.17.0
 --

 Key: SPARK-1433
 URL: https://issues.apache.org/jira/browse/SPARK-1433
 Project: Spark
  Issue Type: Task
Reporter: Sandeep Singh
Assignee: Sandeep Singh
Priority: Minor
 Fix For: 1.0.0


 Mesos 0.13.0 was released 6 months ago.
 Upgrade Mesos dependency to 0.17.0


