[GitHub] spark issue #17350: [SPARK-20017][SQL] change the nullability of function 'S...

2017-04-08 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/17350
  
@gatorsmile Sorry for the late reply. 
I have checked all the functions' nullability settings and didn't find any issues.
Thanks~


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark issue #17350: [SPARK-20017][SQL] change the nullability of function 'S...

2017-03-21 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/17350
  
@gatorsmile OK, I will do it and I will give you feedback as soon as 
possible.





[GitHub] spark issue #17350: [SPARK-20017][SQL] change the nullability of function 'S...

2017-03-21 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/17350
  
@gatorsmile Is it OK? I don't know how to add a test case in 
string-function.sql. Would you give me some guidance? Thanks~





[GitHub] spark issue #17350: [SPARK-20017][SQL] change the nullability of function 'S...

2017-03-20 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/17350
  
@maropu The test case is added. Please check it, thanks~





[GitHub] spark pull request #17350: [SPARK-20017][SQL] change the nullability of func...

2017-03-19 Thread zhaorongsheng
GitHub user zhaorongsheng opened a pull request:

https://github.com/apache/spark/pull/17350

[SPARK-20017][SQL] change the nullability of function 'StringToMap' from 
'false' to 'true'

## What changes were proposed in this pull request?

Change the nullability of function `StringToMap` from `false` to `true`.
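To illustrate why the nullability flag matters, here is a minimal, hedged sketch (not Spark's actual Catalyst classes; `stringToMap`, `nullableFlag`, and `safeSize` are hypothetical stand-ins): when the function can produce a null result for a null input, declaring `nullable = false` lets a consumer skip the null check and hit an NPE, which matches the branch name `bug-fix_strToMap_NPE`.

```scala
// Toy model of a string-to-map conversion that yields null for null input.
// This is an illustration of the nullability issue, not Spark's implementation.
def stringToMap(text: String,
                pairDelim: String = ",",
                kvDelim: String = ":"): Map[String, String] =
  if (text == null) null // a null input yields a null result...
  else text.split(pairDelim).map { pair =>
    val kv = pair.split(kvDelim, 2)
    kv(0) -> (if (kv.length > 1) kv(1) else null)
  }.toMap

// If the expression claimed nullable = false, downstream code could call
// .size directly on a null result. With nullable = true, the null path is
// handled explicitly:
val nullableFlag = true // the fix: declare the output as nullable
val result = stringToMap(null)
val safeSize = if (nullableFlag && result == null) 0 else result.size
```

With the flag set to `true`, the null result is guarded instead of dereferenced.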



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhaorongsheng/spark bug-fix_strToMap_NPE

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/17350.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #17350


commit d4bb1583650e44e556bf06d1503355d7529e1ac6
Author: zhaorongsheng <334362...@qq.com>
Date:   2017-03-18T16:03:34Z

Merge remote-tracking branch 'upstream/master' into master_git

commit ee2d7e6aee4248cec124457b5b03da5aa790c984
Author: zhaorongsheng <334362...@qq.com>
Date:   2017-03-19T15:48:15Z

SPARK-20017 change the nullability of 'StringToMap'







[GitHub] spark pull request #14969: [SPARK-17406][WEB UI] limit timeline executor eve...

2017-01-23 Thread zhaorongsheng
Github user zhaorongsheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/14969#discussion_r97465466
  
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsTab.scala 
---
@@ -38,47 +37,67 @@ private[ui] class ExecutorsTab(parent: SparkUI) extends 
SparkUITab(parent, "exec
   }
 }
 
+private[ui] case class ExecutorTaskSummary(
+var executorId: String,
+var totalCores: Int = 0,
+var tasksMax: Int = 0,
+var tasksActive: Int = 0,
+var tasksFailed: Int = 0,
+var tasksComplete: Int = 0,
+var duration: Long = 0L,
+var jvmGCTime: Long = 0L,
+var inputBytes: Long = 0L,
+var inputRecords: Long = 0L,
+var outputBytes: Long = 0L,
+var outputRecords: Long = 0L,
+var shuffleRead: Long = 0L,
+var shuffleWrite: Long = 0L,
+var executorLogs: Map[String, String] = Map.empty,
+var isAlive: Boolean = true
+)
+
 /**
  * :: DeveloperApi ::
  * A SparkListener that prepares information to be displayed on the 
ExecutorsTab
  */
 @DeveloperApi
 class ExecutorsListener(storageStatusListener: StorageStatusListener, 
conf: SparkConf)
 extends SparkListener {
-  val executorToTotalCores = HashMap[String, Int]()
-  val executorToTasksMax = HashMap[String, Int]()
-  val executorToTasksActive = HashMap[String, Int]()
-  val executorToTasksComplete = HashMap[String, Int]()
-  val executorToTasksFailed = HashMap[String, Int]()
-  val executorToDuration = HashMap[String, Long]()
-  val executorToJvmGCTime = HashMap[String, Long]()
-  val executorToInputBytes = HashMap[String, Long]()
-  val executorToInputRecords = HashMap[String, Long]()
-  val executorToOutputBytes = HashMap[String, Long]()
-  val executorToOutputRecords = HashMap[String, Long]()
-  val executorToShuffleRead = HashMap[String, Long]()
-  val executorToShuffleWrite = HashMap[String, Long]()
-  val executorToLogUrls = HashMap[String, Map[String, String]]()
-  val executorIdToData = HashMap[String, ExecutorUIData]()
+  var executorToTaskSummary = LinkedHashMap[String, ExecutorTaskSummary]()
+  var executorEvents = new ListBuffer[SparkListenerEvent]()
+
+  private val maxTimelineExecutors = 
conf.getInt("spark.ui.timeline.executors.maximum", 1000)
+  private val retainedDeadExecutors = 
conf.getInt("spark.ui.retainedDeadExecutors", 100)
 
   def activeStorageStatusList: Seq[StorageStatus] = 
storageStatusListener.storageStatusList
 
   def deadStorageStatusList: Seq[StorageStatus] = 
storageStatusListener.deadStorageStatusList
 
   override def onExecutorAdded(executorAdded: SparkListenerExecutorAdded): 
Unit = synchronized {
 val eid = executorAdded.executorId
-executorToLogUrls(eid) = executorAdded.executorInfo.logUrlMap
-executorToTotalCores(eid) = executorAdded.executorInfo.totalCores
-executorToTasksMax(eid) = executorToTotalCores(eid) / 
conf.getInt("spark.task.cpus", 1)
-executorIdToData(eid) = new ExecutorUIData(executorAdded.time)
+val taskSummary = executorToTaskSummary.getOrElseUpdate(eid, 
ExecutorTaskSummary(eid))
+taskSummary.executorLogs = executorAdded.executorInfo.logUrlMap
+taskSummary.totalCores = executorAdded.executorInfo.totalCores
+taskSummary.tasksMax = taskSummary.totalCores / 
conf.getInt("spark.task.cpus", 1)
+executorEvents += executorAdded
+if (executorEvents.size > maxTimelineExecutors) {
+  executorEvents.remove(0)
+}
+
+val deadExecutors = executorToTaskSummary.filter(e => !e._2.isAlive)
+if (deadExecutors.size > retainedDeadExecutors) {
+  val head = deadExecutors.head
+  executorToTaskSummary.remove(head._1)
--- End diff --

Here we remove only one element each time, so one dead executor is evicted 
whenever a new executor is added.
Could we remove multiple elements at once?
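The reviewer's suggestion can be sketched as follows (a simplified model, not Spark's actual `ExecutorsListener`; `TaskSummary` and `trimDeadExecutors` are hypothetical names): trim all excess dead executors in one pass instead of evicting a single entry per `onExecutorAdded` call.

```scala
import scala.collection.mutable

// Minimal stand-in for Spark's per-executor summary record.
case class TaskSummary(executorId: String, isAlive: Boolean)

// Remove all dead executors beyond the retention limit at once.
// LinkedHashMap preserves insertion order, so the oldest entries come first.
def trimDeadExecutors(
    summaries: mutable.LinkedHashMap[String, TaskSummary],
    retainedDeadExecutors: Int): Unit = {
  val dead = summaries.filter { case (_, s) => !s.isAlive }
  if (dead.size > retainedDeadExecutors) {
    dead.keys.take(dead.size - retainedDeadExecutors)
      .toList // snapshot the keys before mutating the map
      .foreach(summaries.remove)
  }
}

val m = mutable.LinkedHashMap(
  "e1" -> TaskSummary("e1", isAlive = false),
  "e2" -> TaskSummary("e2", isAlive = false),
  "e3" -> TaskSummary("e3", isAlive = false),
  "e4" -> TaskSummary("e4", isAlive = true))
trimDeadExecutors(m, retainedDeadExecutors = 1)
```

After the call, only the newest dead executor (`e3`) and the live one (`e4`) remain.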





[GitHub] spark issue #16389: [SPARK-18981][Core]The job hang problem when speculation...

2017-01-23 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/16389
  
@zsxwing @mridulm 
Would you check this PR please?

Thanks~





[GitHub] spark issue #16389: [SPARK-18981][Core]The job hang problem when speculation...

2016-12-30 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/16389
  
@zsxwing I think it may cause other problems.
For example, if we get an ExecutorLostFailure while the speculated task is 
running on that executor, `numRunningTasks` will never reach zero.





[GitHub] spark issue #16389: [SPARK-18981][Core]The job hang problem when speculation...

2016-12-28 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/16389
  
Hi, could anyone check this PR?
Thanks~





[GitHub] spark issue #16389: [SPARK-18981][Core]The job hang problem when speculation...

2016-12-25 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/16389
  
@mridulm Please check it. Thanks~





[GitHub] spark issue #16389: [SPARK-18981][Core]The job hang problem when speculation...

2016-12-24 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/16389
  
Yes, I have checked it.





[GitHub] spark issue #16389: [SPARK-18981][Core]The job hang problem when speculation...

2016-12-23 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/16389
  
Jenkins, retest this please





[GitHub] spark issue #16389: [SPARK-18981][Core]The job hang problem when speculation...

2016-12-23 Thread zhaorongsheng
Github user zhaorongsheng commented on the issue:

https://github.com/apache/spark/pull/16389
  
Hi @mridulm . I have modified the tests. Please check it. 
Thanks~





[GitHub] spark pull request #16394: [SPARK-18981][Core]The job hang problem when spec...

2016-12-23 Thread zhaorongsheng
Github user zhaorongsheng closed the pull request at:

https://github.com/apache/spark/pull/16394





[GitHub] spark pull request #16394: [SPARK-18981][Core]The job hang problem when spec...

2016-12-23 Thread zhaorongsheng
GitHub user zhaorongsheng opened a pull request:

https://github.com/apache/spark/pull/16394

[SPARK-18981][Core]The job hang problem when speculation is on

## What changes were proposed in this pull request?

The root cause of this issue is that `ExecutorAllocationListener` receives the 
speculated task's end event after the stage-end event has already been handled, 
which resets `numRunningTasks` to 0. The listener then executes 
`numRunningTasks -= 1`, leaving `numRunningTasks` negative. When 
`maxNumExecutorsNeeded()` computes `maxNeeded`, the value may be 0 or negative, 
so `ExecutorAllocationManager` requests no containers and the job hangs.

This PR changes the method `onTaskEnd()` in `ExecutorAllocationListener`: 
`numRunningTasks` is decremented by 1 only when `stageIdToNumTasks` still 
contains the ended task's stageId.

## How was this patch tested?

This patch is tested in the method `test("SPARK-18981...)` of 
ExecutorAllocationManagerSuite.scala.
The test creates two taskInfos, one of which is a speculated task. After the 
stage-end event, the speculated task's end event is posted to the listener.
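The guarded decrement described above can be sketched as follows (a simplified model, not Spark's actual listener; `AllocationListenerSketch` and its methods are hypothetical stand-ins): a task-end event only decrements the counter while its stage is still tracked, so a late speculated-task-end event arriving after stage end cannot drive the counter negative.

```scala
import scala.collection.mutable

// Simplified model of the fix: decrement numRunningTasks only when the
// ended task's stage is still present in stageIdToNumTasks.
class AllocationListenerSketch {
  val stageIdToNumTasks = mutable.Map[Int, Int]()
  var numRunningTasks = 0

  def onStageSubmitted(stageId: Int, numTasks: Int): Unit =
    stageIdToNumTasks(stageId) = numTasks

  def onStageCompleted(stageId: Int): Unit = {
    stageIdToNumTasks -= stageId
    // Mirrors the "No stages are running, but numRunningTasks != 0" reset.
    if (stageIdToNumTasks.isEmpty) numRunningTasks = 0
  }

  def onTaskStart(stageId: Int): Unit = numRunningTasks += 1

  def onTaskEnd(stageId: Int): Unit =
    if (stageIdToNumTasks.contains(stageId)) numRunningTasks -= 1
}

val listener = new AllocationListenerSketch
listener.onStageSubmitted(0, 1)
listener.onTaskStart(0)      // normal task
listener.onTaskStart(0)      // speculated copy
listener.onTaskEnd(0)        // normal task finishes (stage still tracked)
listener.onStageCompleted(0) // stage ends; counter reset to 0
listener.onTaskEnd(0)        // late speculated task end: ignored, stays >= 0
```

Without the `contains` guard, the final event would leave the counter at -1 and `maxNumExecutorsNeeded()` would compute a non-positive value.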


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhaorongsheng/spark branch-18981-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/16394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16394


commit c0220aefb1689731144dafccb001860276ee8d22
Author: roncen.zhao <roncen.z...@vipshop.com>
Date:   2016-12-24T02:37:53Z

resolve the job hang problem when speculation is on







[GitHub] spark pull request #16389: [SPARK-18981][Core]The job hang problem when spec...

2016-12-23 Thread zhaorongsheng
Github user zhaorongsheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/16389#discussion_r93811187
  
--- Diff: 
core/src/test/scala/org/apache/spark/ExecutorAllocationManagerSuite.scala ---
@@ -938,6 +938,33 @@ class ExecutorAllocationManagerSuite
 assert(removeTimes(manager) === Map.empty)
   }
 
+  test("SPARK-18981: maxNumExecutorsNeeded should properly handle 
speculated tasks") {
+sc = createSparkContext()
+val manager = sc.executorAllocationManager.get
+assert(maxNumExecutorsNeeded(manager) === 0)
+
+val stageInfo = createStageInfo(0, 1)
+sc.listenerBus.postToAll(SparkListenerStageSubmitted(stageInfo))
+assert(maxNumExecutorsNeeded(manager) === 1)
+
+val taskInfo = createTaskInfo(1, 1, "executor-1")
+val speculatedTaskInfo = createTaskInfo(2, 1, "executor-1")
+sc.listenerBus.postToAll(SparkListenerTaskStart(0, 0, taskInfo))
+assert(maxNumExecutorsNeeded(manager) === 1)
--- End diff --

Yes, the warning 'No stages are running, but numRunningTasks != 0' is printed, 
and at that time `numRunningTasks` is reset to 0. But after that the speculated 
task's end event arrives and `numRunningTasks` changes by 1 again.
The tests are wrong; I will fix them.





[GitHub] spark pull request #16389: [SPARK-18981][Core]The job hang problem when spec...

2016-12-23 Thread zhaorongsheng
GitHub user zhaorongsheng opened a pull request:

https://github.com/apache/spark/pull/16389

[SPARK-18981][Core]The job hang problem when speculation is on

## What changes were proposed in this pull request?

The root cause of this issue is that `ExecutorAllocationListener` receives the 
speculated task's end event after the stage-end event has already been handled, 
which resets `numRunningTasks` to 0. The listener then executes 
`numRunningTasks -= 1`, leaving `numRunningTasks` negative. When 
`maxNumExecutorsNeeded()` computes `maxNeeded`, the value may be 0 or negative, 
so `ExecutorAllocationManager` requests no containers and the job hangs.

This PR changes the method `onTaskEnd()` in `ExecutorAllocationListener`: 
`numRunningTasks` is decremented by 1 only when `stageIdToNumTasks` still 
contains the ended task's stageId.

## How was this patch tested?

This patch is tested in the method `test("SPARK-18981...)` of 
ExecutorAllocationManagerSuite.scala.
The test creates two taskInfos, one of which is a speculated task. After the 
stage-end event, the speculated task's end event is posted to the listener.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhaorongsheng/spark master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/16389.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16389


commit 1e191136581a22ed7cafd42e8b85e9a057b71171
Author: roncen.zhao <roncen.z...@vipshop.com>
Date:   2016-12-23T16:38:34Z

resolve the job hang problem when speculation is on







[GitHub] spark pull request: update from orign

2016-05-15 Thread zhaorongsheng
Github user zhaorongsheng commented on the pull request:

https://github.com/apache/spark/pull/13118#issuecomment-219288567
  
Sorry, it was an unintentional mistake.
I will close it right now!





[GitHub] spark pull request: update from orign

2016-05-15 Thread zhaorongsheng
Github user zhaorongsheng closed the pull request at:

https://github.com/apache/spark/pull/13118





[GitHub] spark pull request: update from orign

2016-05-14 Thread zhaorongsheng
GitHub user zhaorongsheng opened a pull request:

https://github.com/apache/spark/pull/13118

update from orign

## What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)


## How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration 
tests, manual tests)


(If this patch involves UI changes, please attach a screenshot; otherwise, 
remove this)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhaorongsheng/spark master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/13118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #13118


commit 00a39d9c05c55b5ffcd4f49aadc91cedf227669a
Author: Patrick Wendell <pwend...@gmail.com>
Date:   2015-12-15T23:09:57Z

Preparing Spark release v1.6.0-rc3

commit 08aa3b47e6a295a8297e741effa14cd0d834aea8
Author: Patrick Wendell <pwend...@gmail.com>
Date:   2015-12-15T23:10:04Z

Preparing development version 1.6.0-SNAPSHOT

commit 9e4ac56452710ddd8efb695e69c8de49317e3f28
Author: tedyu <yuzhih...@gmail.com>
Date:   2015-12-16T02:15:10Z

[SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after calling 
setConf

This is continuation of SPARK-12056 where change is applied to 
SqlNewHadoopRDD.scala

andrewor14
FYI

Author: tedyu <yuzhih...@gmail.com>

Closes #10164 from tedyu/master.

(cherry picked from commit f725b2ec1ab0d89e35b5e2d3ddeddb79fec85f6d)
Signed-off-by: Andrew Or <and...@databricks.com>

commit 2c324d35a698b353c2193e2f9bd8ba08c741c548
Author: Timothy Chen <tnac...@gmail.com>
Date:   2015-12-16T02:20:00Z

[SPARK-12351][MESOS] Add documentation about submitting Spark with mesos 
cluster mode.

Adding more documentation about submitting jobs with mesos cluster mode.

Author: Timothy Chen <tnac...@gmail.com>

Closes #10086 from tnachen/mesos_supervise_docs.

(cherry picked from commit c2de99a7c3a52b0da96517c7056d2733ef45495f)
Signed-off-by: Andrew Or <and...@databricks.com>

commit 8e9a600313f3047139d3cebef85acc782903123b
Author: Naveen <naveenmin...@gmail.com>
Date:   2015-12-16T02:25:22Z

[SPARK-9886][CORE] Fix to use ShutdownHookManager in

ExternalBlockStore.scala

Author: Naveen <naveenmin...@gmail.com>

Closes #10313 from naveenminchu/branch-fix-SPARK-9886.

(cherry picked from commit 8a215d2338c6286253e20122640592f9d69896c8)
Signed-off-by: Andrew Or <and...@databricks.com>

commit 93095eb29a1e59dbdbf6220bfa732b502330e6ae
Author: Bryan Cutler <bjcut...@us.ibm.com>
Date:   2015-12-16T02:28:16Z

[SPARK-12062][CORE] Change Master to asyc rebuild UI when application 
completes

This change builds the event history of completed apps asynchronously so 
the RPC thread will not be blocked and allow new workers to register/remove if 
the event log history is very large and takes a long time to rebuild.

Author: Bryan Cutler <bjcut...@us.ibm.com>

Closes #10284 from BryanCutler/async-MasterUI-SPARK-12062.

(cherry picked from commit c5b6b398d5e368626e589feede80355fb74c2bd8)
Signed-off-by: Andrew Or <and...@databricks.com>

commit fb08f7b784bc8b5e0cd110f315f72c7d9fc65e08
Author: Wenchen Fan <cloud0...@outlook.com>
Date:   2015-12-16T02:29:19Z

[SPARK-10477][SQL] using DSL in ColumnPruningSuite to improve readability

Author: Wenchen Fan <cloud0...@outlook.com>

Closes #8645 from cloud-fan/test.

(cherry picked from commit a89e8b6122ee5a1517fbcf405b1686619db56696)
Signed-off-by: Andrew Or <and...@databricks.com>

commit a2d584ed9ab3c073df057bed5314bdf877a47616
Author: Timothy Hunter <timhun...@databricks.com>
Date:   2015-12-16T18:12:33Z

[SPARK-12324][MLLIB][DOC] Fixes the sidebar in the ML documentation

This fixes the sidebar, using a pure CSS mechanism to hide it when the 
browser's viewport is too narrow.
Credit goes to the original author Titan-C (mentioned in the NOTICE).

Note that I am not a CSS expert, so I can only address comments up to some 
extent.

Default view:
https://cloud.githubusercontent.com/assets/7594753/11793597/6d1d6eda-a261-11e5-836b-6eb2054e9054.png

When collapsed manually by the user:
https://cloud.githubusercontent.com/assets/7594753/11793669/c991989e-a261-11e5-8bf6-aecf3bdb6319.png

Disappears when column is too narrow:
https://cloud.githubusercontent.com/assets/7594753/11793607/7754dbcc-a261-11e5-8b15-e0d074b0e47c.png

Can still be opened by the user if necessary:
https://cloud.git