[jira] [Updated] (SPARK-10295) Dynamic allocation in Mesos does not release when RDDs are cached

2015-08-29 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated SPARK-10295:

Fix Version/s: (was: 1.5.1)
   (was: 1.6.0)
   1.5.0

 Dynamic allocation in Mesos does not release when RDDs are cached
 -----------------------------------------------------------------

 Key: SPARK-10295
 URL: https://issues.apache.org/jira/browse/SPARK-10295
 Project: Spark
  Issue Type: Improvement
  Components: Documentation, Spark Core
Affects Versions: 1.5.0
 Environment: Spark 1.5.0 RC1
 Centos 6
 java 7 oracle
Reporter: Hans van den Bogert
Assignee: Sean Owen
Priority: Minor
 Fix For: 1.5.0


 When running Spark in coarse-grained mode with the shuffle service and dynamic 
 allocation, the driver does not release executors if a dataset is cached.
 The console output, on the other hand, shows:
  15/08/26 17:29:58 WARN SparkContext: Dynamic allocation currently does not 
  support cached RDDs. Cached data for RDD 9 will be lost when executors are 
  removed.
 However, after the default idle timeout of 1m, executors are not released. When 
 I perform the same initial setup, loading data, etc., but without caching, the 
 executors are released.
 Is this intended behaviour?
 If this is intended behaviour, the console warning is misleading. 
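
 For context, the setup described above corresponds roughly to the following 
 configuration (the property names are from the Spark 1.5 configuration docs; 
 the master URL, application name, and timeout values are illustrative):

 ```shell
 # Sketch of the reported setup: Mesos coarse-grained mode with the external
 # shuffle service and dynamic allocation enabled. Host/port and the concrete
 # timeout values below are placeholders, not taken from the report.
 spark-submit \
   --master mesos://mesos-master:5050 \
   --conf spark.dynamicAllocation.enabled=true \
   --conf spark.shuffle.service.enabled=true \
   --conf spark.dynamicAllocation.executorIdleTimeout=60s \
   my_app.py
 # Note: spark.dynamicAllocation.executorIdleTimeout defaults to 60s (the "1m"
 # mentioned above), but as of 1.5 a separate setting,
 # spark.dynamicAllocation.cachedExecutorIdleTimeout, governs executors holding
 # cached blocks and defaults to infinity -- which would explain why executors
 # are retained when a dataset is cached.
 ```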



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-10295) Dynamic allocation in Mesos does not release when RDDs are cached

2015-08-27 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-10295:
--

I believe that YARN currently will release executors even if they have cached 
data. I also recall that there's a desire to change this behavior, so that 
executors with cached data may stick around. I am not sure what the current or 
intended Mesos behavior is, but I assume it's the same.

Therefore, this message may need to be softened to something like "Dynamic 
allocation is enabled; executors may be removed even when they contain cached 
data", or something similar. I don't think there are hard guarantees about the 
behavior in any event; the intent is just to make the user aware that it's 
possible for cached data to go away with dynamic allocation on.
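
The wording change being discussed can be sketched side by side (hypothetical 
wording only; the class and method names below are illustrative, not Spark 
source, and the eventual PR may phrase it differently):

```java
// Hypothetical sketch of the log-message change discussed above.
// Neither string is taken from an actual Spark patch.
public class DynamicAllocationWarning {
    // Current wording: asserts flatly that cached data WILL be lost,
    // which is misleading if executors are in fact retained.
    static String current(int rddId) {
        return "Dynamic allocation currently does not support cached RDDs. "
             + "Cached data for RDD " + rddId
             + " will be lost when executors are removed.";
    }

    // Softened wording along the lines suggested above: no hard
    // guarantee either way, just a heads-up that loss is possible.
    static String softened(int rddId) {
        return "Dynamic allocation is enabled; cached data for RDD " + rddId
             + " may be lost if its executors are removed.";
    }

    public static void main(String[] args) {
        System.out.println(current(9));
        System.out.println(softened(9));
    }
}
```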

CC [~vanzin] and [~sandyr]




[jira] [Updated] (SPARK-10295) Dynamic allocation in Mesos does not release when RDDs are cached

2015-08-27 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-10295:
--
Component/s: Mesos




[jira] [Updated] (SPARK-10295) Dynamic allocation in Mesos does not release when RDDs are cached

2015-08-27 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-10295:
--
Component/s: (was: Mesos)
 Spark Core
 Documentation
 Issue Type: Improvement  (was: Question)

OK, let's think of this as a simple log message update. PR coming.




[jira] [Updated] (SPARK-10295) Dynamic allocation in Mesos does not release when RDDs are cached

2015-08-26 Thread Hans van den Bogert (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hans van den Bogert updated SPARK-10295:

Summary: Dynamic allocation in Mesos does not release when RDDs are cached  
(was: Dynamic reservation in Mesos does not release when RDDs are cached)
