spark git commit: [SPARK-24696][SQL] ColumnPruning rule fails to remove extra Project

2018-06-29 Thread lixiao
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 8ff4b9727 -> 3c0af793f


[SPARK-24696][SQL] ColumnPruning rule fails to remove extra Project

The ColumnPruning rule tries adding an extra Project if an input node produces 
more fields than needed, but as a post-processing step it needs to remove the 
lower Project in the "Project - Filter - Project" pattern; otherwise it would 
conflict with PushPredicatesThroughProject and thus cause an infinite 
optimization loop. The current post-processing method is defined as:
```
  private def removeProjectBeforeFilter(plan: LogicalPlan): LogicalPlan = plan transform {
    case p1 @ Project(_, f @ Filter(_, p2 @ Project(_, child)))
      if p2.outputSet.subsetOf(child.outputSet) =>
      p1.copy(child = f.copy(child = child))
  }
```
This method works well when there is only one Filter, but not when there are 
two or more Filters. In the reported case, the plan contains a deterministic 
filter and a non-deterministic filter, so they stay as separate Filter nodes 
and cannot be combined.
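
For illustration, a plan of this shape can be produced with a query like the 
following hedged sketch (run in spark-shell; the relation, column names, and 
predicates are hypothetical and not taken from the patch):
```scala
// A deterministic predicate and a non-deterministic rand() predicate cannot be
// merged into a single Filter, so the plan keeps two separate Filter nodes above
// the Project -- the F1 - F2 - P - S shape illustrated below.
val input = spark.range(100).selectExpr("id AS key", "id * 2 AS value")

val query = input
  .select("key", "value")   // P: Project over the scan S
  .where("key < 10")        // F2: deterministic filter
  .where("rand() > 0.5")    // F1: non-deterministic filter

query.explain(true)         // inspect the analyzed and optimized plans
```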

A simplified illustration of the optimization process that forms the infinite 
loop is shown below (F1 stands for the first filter, F2 for the second filter, 
P for a Project, S for a scan of the relation, and PredicatePushDown 
abbreviates PushPredicatesThroughProject):
```
                            F1 - F2 - P - S
PredicatePushDown       =>  F1 - P - F2 - S
ColumnPruning           =>  F1 - P - F2 - P - S
                        =>  F1 - P - F2 - S          (Project removed)
PredicatePushDown       =>  P - F1 - F2 - S
ColumnPruning           =>  P - F1 - P - F2 - S
                        =>  P - F1 - P - F2 - P - S
                        =>  P - F1 - F2 - P - S      (only one Project removed)
RemoveRedundantProject  =>  F1 - F2 - P - S          (goes back to the loop start)
```
So the problem is that the ColumnPruning rule adds a Project under a Filter 
(and fails to remove it in the end), and that new Project triggers 
PushPredicatesThroughProject. Once the filters have been pushed through the 
Project, the ColumnPruning rule adds a new Project again, and so on.
The fix: when adding Projects the rule still applies top-down, but when later 
removing the extra Projects the traversal should go bottom-up to ensure all 
extra Projects can be matched.
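
To see why the traversal order matters, here is a self-contained toy sketch (a 
simplified stand-in for Catalyst's transform/transformUp, not Spark code): on 
the nested shape P - F - P - F - P - S, a single top-down pass rewrites only 
the outer Project-Filter-Project and never revisits the inner one, whereas a 
bottom-up pass removes both extra Projects.
```scala
// Toy model (not Catalyst): Project(Filter(Project(c))) should be rewritten to
// Project(Filter(c)), i.e. the Project directly below a Filter is removed.
sealed trait Plan
case object Scan                extends Plan
case class Project(child: Plan) extends Plan
case class Filter(child: Plan)  extends Plan

def rule(p: Plan): Plan = p match {
  case Project(Filter(Project(c))) => Project(Filter(c))
  case other                       => other
}

// Pre-order traversal, analogous to LogicalPlan.transform / transformDown:
// apply the rule at a node, then recurse into the rewritten node's children.
def transformDown(p: Plan): Plan = rule(p) match {
  case Project(c) => Project(transformDown(c))
  case Filter(c)  => Filter(transformDown(c))
  case Scan       => Scan
}

// Post-order traversal, analogous to LogicalPlan.transformUp:
// rewrite the children first, then apply the rule at the node.
def transformUp(p: Plan): Plan = rule(p match {
  case Project(c) => Project(transformUp(c))
  case Filter(c)  => Filter(transformUp(c))
  case Scan       => Scan
})

// P - F - P - F - P - S: two extra Projects sit directly below Filters.
val plan = Project(Filter(Project(Filter(Project(Scan)))))

println(transformDown(plan)) // Project(Filter(Filter(Project(Scan)))) -- one Project left behind
println(transformUp(plan))   // Project(Filter(Filter(Scan)))          -- both removed
```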

Added an optimization rule test in ColumnPruningSuite and an end-to-end test 
in SQLQuerySuite.

Author: maryannxue 

Closes #21674 from maryannxue/spark-24696.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3c0af793
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3c0af793
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3c0af793

Branch: refs/heads/branch-2.3
Commit: 3c0af793f9e050f5d8dfb2f5daab6c0043c39748
Parents: 8ff4b97
Author: maryannxue 
Authored: Fri Jun 29 23:46:12 2018 -0700
Committer: Xiao Li 
Committed: Fri Jun 29 23:57:09 2018 -0700

--
 .../sql/catalyst/optimizer/Optimizer.scala  |  5 +++--
 .../catalyst/optimizer/ColumnPruningSuite.scala |  9 -
 .../org/apache/spark/sql/SQLQuerySuite.scala| 21 
 3 files changed, 32 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/3c0af793/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
index c77e0f8..38799c1 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
@@ -524,9 +524,10 @@ object ColumnPruning extends Rule[LogicalPlan] {
 
   /**
    * The Project before Filter is not necessary but conflict with PushPredicatesThroughProject,
-   * so remove it.
+   * so remove it. Since the Projects have been added top-down, we need to remove in bottom-up
+   * order, otherwise lower Projects can be missed.
    */
-  private def removeProjectBeforeFilter(plan: LogicalPlan): LogicalPlan = plan transform {
+  private def removeProjectBeforeFilter(plan: LogicalPlan): LogicalPlan = plan transformUp {
     case p1 @ Project(_, f @ Filter(_, p2 @ Project(_, child)))
       if p2.outputSet.subsetOf(child.outputSet) =>
       p1.copy(child = f.copy(child = child))

http://git-wip-us.apache.org/repos/asf/spark/blob/3c0af793/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ColumnPruningSuite.scala
--
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ColumnPruningSuite

spark git commit: simplify rand in dsl/package.scala

2018-06-29 Thread lixiao
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 0f534d3da -> 8ff4b9727


simplify rand in dsl/package.scala

(cherry picked from commit d54d8b86301581142293341af25fd78b3278a2e8)
Signed-off-by: Xiao Li 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8ff4b972
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8ff4b972
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8ff4b972

Branch: refs/heads/branch-2.3
Commit: 8ff4b97274e58f5944506b25481c6eb44238a4cd
Parents: 0f534d3
Author: Xiao Li 
Authored: Fri Jun 29 23:51:13 2018 -0700
Committer: Xiao Li 
Committed: Fri Jun 29 23:53:00 2018 -0700

--
 .../src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala  | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/8ff4b972/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
index efb2eba..89e8c99 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
@@ -149,6 +149,7 @@ package object dsl {
   }
 }
 
+def rand(e: Long): Expression = Rand(e)
 def sum(e: Expression): Expression = Sum(e).toAggregateExpression()
 def sumDistinct(e: Expression): Expression = 
Sum(e).toAggregateExpression(isDistinct = true)
 def count(e: Expression): Expression = Count(e).toAggregateExpression()





[1/2] spark git commit: [SPARK-24696][SQL] ColumnPruning rule fails to remove extra Project

2018-06-29 Thread lixiao
Repository: spark
Updated Branches:
  refs/heads/master 03545ce6d -> d54d8b863


[SPARK-24696][SQL] ColumnPruning rule fails to remove extra Project

## What changes were proposed in this pull request?

The ColumnPruning rule tries adding an extra Project if an input node produces 
more fields than needed, but as a post-processing step it needs to remove the 
lower Project in the "Project - Filter - Project" pattern; otherwise it would 
conflict with PushPredicatesThroughProject and thus cause an infinite 
optimization loop. The current post-processing method is defined as:
```
  private def removeProjectBeforeFilter(plan: LogicalPlan): LogicalPlan = plan transform {
    case p1 @ Project(_, f @ Filter(_, p2 @ Project(_, child)))
      if p2.outputSet.subsetOf(child.outputSet) =>
      p1.copy(child = f.copy(child = child))
  }
```
This method works well when there is only one Filter, but not when there are 
two or more Filters. In the reported case, the plan contains a deterministic 
filter and a non-deterministic filter, so they stay as separate Filter nodes 
and cannot be combined.

A simplified illustration of the optimization process that forms the infinite 
loop is shown below (F1 stands for the first filter, F2 for the second filter, 
P for a Project, S for a scan of the relation, and PredicatePushDown 
abbreviates PushPredicatesThroughProject):
```
                            F1 - F2 - P - S
PredicatePushDown       =>  F1 - P - F2 - S
ColumnPruning           =>  F1 - P - F2 - P - S
                        =>  F1 - P - F2 - S          (Project removed)
PredicatePushDown       =>  P - F1 - F2 - S
ColumnPruning           =>  P - F1 - P - F2 - S
                        =>  P - F1 - P - F2 - P - S
                        =>  P - F1 - F2 - P - S      (only one Project removed)
RemoveRedundantProject  =>  F1 - F2 - P - S          (goes back to the loop start)
```
So the problem is that the ColumnPruning rule adds a Project under a Filter 
(and fails to remove it in the end), and that new Project triggers 
PushPredicatesThroughProject. Once the filters have been pushed through the 
Project, the ColumnPruning rule adds a new Project again, and so on.
The fix: when adding Projects the rule still applies top-down, but when later 
removing the extra Projects the traversal should go bottom-up to ensure all 
extra Projects can be matched.

## How was this patch tested?

Added an optimization rule test in ColumnPruningSuite and an end-to-end test 
in SQLQuerySuite.

Author: maryannxue 

Closes #21674 from maryannxue/spark-24696.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/797971ed
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/797971ed
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/797971ed

Branch: refs/heads/master
Commit: 797971ed42cab41cbc3d039c0af4b26199bff783
Parents: 03545ce
Author: maryannxue 
Authored: Fri Jun 29 23:46:12 2018 -0700
Committer: Xiao Li 
Committed: Fri Jun 29 23:46:12 2018 -0700

--
 .../apache/spark/sql/catalyst/dsl/package.scala |  1 +
 .../sql/catalyst/optimizer/Optimizer.scala  |  5 +++--
 .../catalyst/optimizer/ColumnPruningSuite.scala |  9 -
 .../org/apache/spark/sql/SQLQuerySuite.scala| 21 
 4 files changed, 33 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/797971ed/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
index efb2eba..8cf69c6 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
@@ -149,6 +149,7 @@ package object dsl {
   }
 }
 
+def rand(e: Long): Expression = Rand(Literal.create(e, LongType))
 def sum(e: Expression): Expression = Sum(e).toAggregateExpression()
 def sumDistinct(e: Expression): Expression = 
Sum(e).toAggregateExpression(isDistinct = true)
 def count(e: Expression): Expression = Count(e).toAggregateExpression()

http://git-wip-us.apache.org/repos/asf/spark/blob/797971ed/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
index aa992de..2cc27d8 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
+++ 
b/sql/catalyst/src/main/scala/or

[2/2] spark git commit: simplify rand in dsl/package.scala

2018-06-29 Thread lixiao
simplify rand in dsl/package.scala


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d54d8b86
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d54d8b86
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d54d8b86

Branch: refs/heads/master
Commit: d54d8b86301581142293341af25fd78b3278a2e8
Parents: 797971e
Author: Xiao Li 
Authored: Fri Jun 29 23:51:13 2018 -0700
Committer: Xiao Li 
Committed: Fri Jun 29 23:51:13 2018 -0700

--
 .../src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/d54d8b86/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
index 8cf69c6..89e8c99 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala
@@ -149,7 +149,7 @@ package object dsl {
   }
 }
 
-def rand(e: Long): Expression = Rand(Literal.create(e, LongType))
+def rand(e: Long): Expression = Rand(e)
 def sum(e: Expression): Expression = Sum(e).toAggregateExpression()
 def sumDistinct(e: Expression): Expression = 
Sum(e).toAggregateExpression(isDistinct = true)
 def count(e: Expression): Expression = Count(e).toAggregateExpression()





spark git commit: [SPARK-24638][SQL] StringStartsWith support push down

2018-06-29 Thread wenchen
Repository: spark
Updated Branches:
  refs/heads/master f71e8da5e -> 03545ce6d


[SPARK-24638][SQL] StringStartsWith support push down

## What changes were proposed in this pull request?

`StringStartsWith` now supports push down to the Parquet data source, giving about 50% savings in compute time in the benchmark below.

## How was this patch tested?
unit tests, manual tests and performance test:
```scala
cat <<EOF > SPARK-24638.scala
def benchmark(func: () => Unit): Long = {
  val start = System.currentTimeMillis()
  for(i <- 0 until 100) { func() }
  val end = System.currentTimeMillis()
  end - start
}
val path = "/tmp/spark/parquet/string/"
spark.range(1000).selectExpr("concat(id, 'str', id) as id").coalesce(1).write.mode("overwrite").option("parquet.block.size", 1048576).parquet(path)
val df = spark.read.parquet(path)

spark.sql("set spark.sql.parquet.filterPushdown.string.startsWith=true")
val pushdownEnable = benchmark(() => df.where("id like '98%'").count())

spark.sql("set spark.sql.parquet.filterPushdown.string.startsWith=false")
val pushdownDisable = benchmark(() => df.where("id like '98%'").count())

val improvements = pushdownDisable - pushdownEnable
println(s"improvements: $improvements")
EOF

bin/spark-shell -i SPARK-24638.scala
```
result:
```scala
Loading SPARK-24638.scala...
benchmark: (func: () => Unit)Long
path: String = /tmp/spark/parquet/string/
df: org.apache.spark.sql.DataFrame = [id: string]
res1: org.apache.spark.sql.DataFrame = [key: string, value: string]
pushdownEnable: Long = 11608
res2: org.apache.spark.sql.DataFrame = [key: string, value: string]
pushdownDisable: Long = 31981
improvements: Long = 20373
```
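
To confirm that the predicate is actually handed to the Parquet reader rather 
than only evaluated by Spark, the physical plan can be inspected. A minimal 
sketch, reusing the benchmark's path and column; the exact explain output 
format may vary by version:
```scala
// With both flags enabled, the FileScan node's PushedFilters should list a
// StringStartsWith entry for the LIKE '98%' predicate (exact rendering may differ).
spark.conf.set("spark.sql.parquet.filterPushdown", "true")
spark.conf.set("spark.sql.parquet.filterPushdown.string.startsWith", "true")

val df = spark.read.parquet("/tmp/spark/parquet/string/")
df.where("id like '98%'").explain()
// ... FileScan parquet [id#0] ... PushedFilters: [IsNotNull(id), StringStartsWith(id,98)] ...
```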

Author: Yuming Wang 

Closes #21623 from wangyum/SPARK-24638.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/03545ce6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/03545ce6
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/03545ce6

Branch: refs/heads/master
Commit: 03545ce6de08bd0ad685c5f59b73bc22dfc40887
Parents: f71e8da
Author: Yuming Wang 
Authored: Sat Jun 30 13:58:50 2018 +0800
Committer: Wenchen Fan 
Committed: Sat Jun 30 13:58:50 2018 +0800

--
 .../org/apache/spark/sql/internal/SQLConf.scala | 11 +++
 .../datasources/parquet/ParquetFileFormat.scala |  4 +-
 .../datasources/parquet/ParquetFilters.scala| 35 +++-
 .../parquet/ParquetFilterSuite.scala| 84 +++-
 4 files changed, 130 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/03545ce6/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index e1752ff..da1c34c 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -378,6 +378,14 @@ object SQLConf {
     .booleanConf
     .createWithDefault(true)
 
+  val PARQUET_FILTER_PUSHDOWN_STRING_STARTSWITH_ENABLED =
+    buildConf("spark.sql.parquet.filterPushdown.string.startsWith")
+    .doc("If true, enables Parquet filter push-down optimization for string startsWith function. " +
+      "This configuration only has an effect when 'spark.sql.parquet.filterPushdown' is enabled.")
+    .internal()
+    .booleanConf
+    .createWithDefault(true)
+
   val PARQUET_WRITE_LEGACY_FORMAT = buildConf("spark.sql.parquet.writeLegacyFormat")
     .doc("Whether to be compatible with the legacy Parquet format adopted by Spark 1.4 and prior " +
       "versions, when converting Parquet schema to Spark SQL schema and vice versa.")
@@ -1459,6 +1467,9 @@ class SQLConf extends Serializable with Logging {
 
   def parquetFilterPushDownDate: Boolean = getConf(PARQUET_FILTER_PUSHDOWN_DATE_ENABLED)
 
+  def parquetFilterPushDownStringStartWith: Boolean =
+    getConf(PARQUET_FILTER_PUSHDOWN_STRING_STARTSWITH_ENABLED)
+
   def orcFilterPushDown: Boolean = getConf(ORC_FILTER_PUSHDOWN_ENABLED)
 
   def verifyPartitionPath: Boolean = getConf(HIVE_VERIFY_PARTITION_PATH)

http://git-wip-us.apache.org/repos/asf/spark/blob/03545ce6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
index 9602a08..93de1fa 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/executi

[spark] Git Push Summary

2018-06-29 Thread vanzin
Repository: spark
Updated Tags:  refs/tags/v2.1.3 [created] b7eac07b9




svn commit: r27835 - in /dev/spark: v2.1.3-rc1-bin/ v2.1.3-rc1-docs/ v2.1.3-rc2-docs/

2018-06-29 Thread vanzin
Author: vanzin
Date: Fri Jun 29 21:37:13 2018
New Revision: 27835

Log:
Remove Spark 2.1.3 RC directories.

Removed:
dev/spark/v2.1.3-rc1-bin/
dev/spark/v2.1.3-rc1-docs/
dev/spark/v2.1.3-rc2-docs/





svn commit: r27834 - /dev/spark/v2.1.3-rc2-bin/ /release/spark/spark-2.1.3/

2018-06-29 Thread vanzin
Author: vanzin
Date: Fri Jun 29 21:31:17 2018
New Revision: 27834

Log:
Move Spark 2.1.3-rc2 to release area.

Added:
release/spark/spark-2.1.3/
  - copied from r27833, dev/spark/v2.1.3-rc2-bin/
Removed:
dev/spark/v2.1.3-rc2-bin/





svn commit: r27833 - in /dev/spark/2.4.0-SNAPSHOT-2018_06_29_12_01-f71e8da-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-06-29 Thread pwendell
Author: pwendell
Date: Fri Jun 29 19:15:55 2018
New Revision: 27833

Log:
Apache Spark 2.4.0-SNAPSHOT-2018_06_29_12_01-f71e8da docs


[This commit notification would consist of 1467 parts, 
which exceeds the limit of 50 ones, so it was shortened to the summary.]




spark git commit: [SPARK-24566][CORE] Fix spark.storage.blockManagerSlaveTimeoutMs default config

2018-06-29 Thread zsxwing
Repository: spark
Updated Branches:
  refs/heads/master f6e6899a8 -> f71e8da5e


[SPARK-24566][CORE] Fix spark.storage.blockManagerSlaveTimeoutMs default config

This PR uses spark.network.timeout in place of 
spark.storage.blockManagerSlaveTimeoutMs when the latter is not configured, as 
described in the configuration documentation.
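
The resulting fallback behaviour can be checked with a plain SparkConf; a 
minimal sketch (not the patched HeartbeatReceiver code itself):
```scala
import org.apache.spark.SparkConf

// Resolution order after this change:
//   1. spark.storage.blockManagerSlaveTimeoutMs, if set;
//   2. otherwise spark.network.timeout;
//   3. otherwise the 120s default.
val conf = new SparkConf().set("spark.network.timeout", "200s")

val executorTimeoutMs =
  conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs",
    s"${conf.getTimeAsSeconds("spark.network.timeout", "120s")}s")

println(executorTimeoutMs) // 200000, because only spark.network.timeout is set
```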

manual test

Author: xueyu <278006...@qq.com>

Closes #21575 from xueyumusic/slaveTimeOutConfig.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f71e8da5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f71e8da5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f71e8da5

Branch: refs/heads/master
Commit: f71e8da5efde96aacc89e59c6e27b71fffcbc25f
Parents: f6e6899
Author: xueyu <278006...@qq.com>
Authored: Fri Jun 29 10:44:17 2018 -0700
Committer: Shixiong Zhu 
Committed: Fri Jun 29 10:44:49 2018 -0700

--
 core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala| 5 ++---
 .../cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala  | 2 +-
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f71e8da5/core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala
--
diff --git a/core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala 
b/core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala
index ff960b3..bcbc8df 100644
--- a/core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala
+++ b/core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala
@@ -74,10 +74,9 @@ private[spark] class HeartbeatReceiver(sc: SparkContext, clock: Clock)
 
   // "spark.network.timeout" uses "seconds", while `spark.storage.blockManagerSlaveTimeoutMs` uses
   // "milliseconds"
-  private val slaveTimeoutMs =
-    sc.conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs", "120s")
   private val executorTimeoutMs =
-    sc.conf.getTimeAsSeconds("spark.network.timeout", s"${slaveTimeoutMs}ms") * 1000
+    sc.conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs",
+      s"${sc.conf.getTimeAsSeconds("spark.network.timeout", "120s")}s")
 
   // "spark.network.timeoutInterval" uses "seconds", while
   // "spark.storage.blockManagerTimeoutIntervalMs" uses "milliseconds"

http://git-wip-us.apache.org/repos/asf/spark/blob/f71e8da5/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
--
diff --git 
a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
 
b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
index d35bea4..1ce2f81 100644
--- 
a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
+++ 
b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
@@ -634,7 +634,7 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
         slave.hostname,
         externalShufflePort,
         sc.conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs",
-          s"${sc.conf.getTimeAsSeconds("spark.network.timeout", "120s") * 1000L}ms"),
+          s"${sc.conf.getTimeAsSeconds("spark.network.timeout", "120s")}s"),
         sc.conf.getTimeAsMs("spark.executor.heartbeatInterval", "10s"))
       slave.shuffleRegistered = true
     }

