spark git commit: [SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a Stage

2015-11-25 Thread irashid
Repository: spark Updated Branches: refs/heads/master b9b6fbe89 -> 0a5aef753 [SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a Stage This issue was addressed in https://github.com/apache/spark/pull/5494, but the fix in that PR, while safe in the sense that it

spark git commit: [SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a Stage

2015-11-25 Thread irashid
Repository: spark Updated Branches: refs/heads/branch-1.6 4971eaaa5 -> 2aeee5696 [SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a Stage This issue was addressed in https://github.com/apache/spark/pull/5494, but the fix in that PR, while safe in the sense that

spark git commit: [SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a Stage

2015-11-25 Thread irashid
Repository: spark Updated Branches: refs/heads/branch-1.5 27b5f31a0 -> 4139a4ed1 [SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a Stage This issue was addressed in https://github.com/apache/spark/pull/5494, but the fix in that PR, while safe in the sense that

spark git commit: [SPARK-11956][CORE] Fix a few bugs in network lib-based file transfer.

2015-11-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 0a5aef753 -> c1f85fc71 [SPARK-11956][CORE] Fix a few bugs in network lib-based file transfer. - NettyRpcEnv::openStream() now correctly propagates errors to the read side of the pipe. - NettyStreamManager now throws if the file being

spark git commit: [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.5 4139a4ed1 -> b1fcefca6 [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits Deleting the temp dirs like this: ``` scala> import scala.collection.mutable import scala.collection.mutable scala> val a =
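The truncated snippet above hints at the underlying hazard: removing entries from a `mutable.HashSet` while iterating over it, which can silently skip elements, so some temp dirs were never deleted at JVM exit. A minimal, hypothetical sketch of the safe pattern (the names are illustrative, not Spark's actual shutdown-hook code): snapshot the set before iterating, then mutate the original freely.

```scala
import scala.collection.mutable

// Illustrative stand-ins for the set of registered temp dirs and the
// record of which ones actually got deleted.
val paths = mutable.HashSet("dir1", "dir2", "dir3")
val deleted = mutable.ListBuffer[String]()

// Snapshot first (`toList`), then mutate the original set safely.
// Iterating `paths` directly while calling `paths.remove` inside the
// loop is the bug pattern: entries can be skipped.
for (p <- paths.toList) {
  paths.remove(p)  // simulates the delete routine unregistering the path
  deleted += p     // simulates actually deleting the directory
}

println(deleted.sorted.mkString(","))  // dir1,dir2,dir3
println(paths.isEmpty)                 // true
```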

spark git commit: [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/master faabdfa2b -> 6b781576a [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits Deleting the temp dirs like this: ``` scala> import scala.collection.mutable import scala.collection.mutable scala> val a =

spark git commit: [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.4 94789f374 -> 1df3e8230 [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits Deleting the temp dirs like this: ``` scala> import scala.collection.mutable import scala.collection.mutable scala> val a =

spark git commit: [SPARK-11206] Support SQL UI on the history server

2015-11-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 21e560641 -> cc243a079 [SPARK-11206] Support SQL UI on the history server On the live web UI, there is a SQL tab which provides valuable information for the SQL query. But once the workload is finished, we won't see the SQL tab on the

[2/2] spark git commit: [SPARK-11956][CORE] Fix a few bugs in network lib-based file transfer.

2015-11-25 Thread vanzin
[SPARK-11956][CORE] Fix a few bugs in network lib-based file transfer. - NettyRpcEnv::openStream() now correctly propagates errors to the read side of the pipe. - NettyStreamManager now throws if the file being transferred does not exist. - The network library now correctly handles zero-sized

spark git commit: [SPARK-11969] [SQL] [PYSPARK] visualization of SQL query for pyspark

2015-11-25 Thread davies
Repository: spark Updated Branches: refs/heads/branch-1.6 d5145210b -> c7db01b20 [SPARK-11969] [SQL] [PYSPARK] visualization of SQL query for pyspark Currently, we do not have visualization for SQL queries from Python; this PR fixes that. cc zsxwing Author: Davies Liu

spark git commit: [SPARK-10864][WEB UI] app name is hidden if window is resized

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 448208d0e -> 97317d346 [SPARK-10864][WEB UI] app name is hidden if window is resized Currently the Web UI navbar has a minimum width of 1200px; so if a window is resized smaller than that the app name goes off screen. The 1200px width

spark git commit: [SPARK-10864][WEB UI] app name is hidden if window is resized

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 67b673208 -> 83653ac5e [SPARK-10864][WEB UI] app name is hidden if window is resized Currently the Web UI navbar has a minimum width of 1200px; so if a window is resized smaller than that the app name goes off screen. The 1200px width

spark git commit: [SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and TransformFunctionSerializer to Java

2015-11-25 Thread tdas
Repository: spark Updated Branches: refs/heads/master 88875d941 -> d29e2ef4c [SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and TransformFunctionSerializer to Java The Python exception traceback in TransformFunction and TransformFunctionSerializer is not sent back to

spark git commit: [SPARK-10558][CORE] Fix wrong executor state in Master

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 9f3e59a16 -> 88875d941 [SPARK-10558][CORE] Fix wrong executor state in Master `ExecutorAdded` can only be sent to `AppClient` when the worker reports back the executor state as `LOADING`; otherwise, because of a concurrency issue, `AppClient`

spark git commit: [SPARK-10558][CORE] Fix wrong executor state in Master

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 7b720bf1c -> b4cf318ab [SPARK-10558][CORE] Fix wrong executor state in Master `ExecutorAdded` can only be sent to `AppClient` when the worker reports back the executor state as `LOADING`; otherwise, because of a concurrency issue,

[1/2] spark git commit: [SPARK-11762][NETWORK] Account for active streams when counting outstanding requests.

2015-11-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-1.6 2aeee5696 -> 400f66f7c [SPARK-11762][NETWORK] Account for active streams when counting outstanding requests. This way the timeout handling code can correctly close "hung" channels that are processing streams. Author: Marcelo Vanzin

spark git commit: [DOCUMENTATION] Fix minor doc error

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 0dee44a66 -> 67b673208 [DOCUMENTATION] Fix minor doc error Author: Jeff Zhang Closes #9956 from zjffdu/dev_typo. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit:

spark git commit: [DOCUMENTATION] Fix minor doc error

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 685b9c2f5 -> 448208d0e [DOCUMENTATION] Fix minor doc error Author: Jeff Zhang Closes #9956 from zjffdu/dev_typo. (cherry picked from commit 67b67320884282ccf3102e2af96f877e9b186517) Signed-off-by: Andrew Or

spark git commit: [SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and TransformFunctionSerializer to Java

2015-11-25 Thread tdas
Repository: spark Updated Branches: refs/heads/branch-1.6 b4cf318ab -> 849ddb6ae [SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and TransformFunctionSerializer to Java The Python exception traceback in TransformFunction and TransformFunctionSerializer is not sent back to

spark git commit: [MINOR] Remove unnecessary spaces in `include_example.rb`

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master dc1d324fd -> 0dee44a66 [MINOR] Remove unnecessary spaces in `include_example.rb` Author: Yu ISHIKAWA Closes #9960 from yu-iskw/minor-remove-spaces. Project: http://git-wip-us.apache.org/repos/asf/spark/repo

spark git commit: [MINOR] Remove unnecessary spaces in `include_example.rb`

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 c7db01b20 -> 685b9c2f5 [MINOR] Remove unnecessary spaces in `include_example.rb` Author: Yu ISHIKAWA Closes #9960 from yu-iskw/minor-remove-spaces. (cherry picked from commit

spark git commit: [SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads spark-env.cmd from wrong directory

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 83653ac5e -> 9f3e59a16 [SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads spark-env.cmd from wrong directory * On windows the `bin/load-spark-env.cmd` tries to load `spark-env.cmd` from `%~dp0..\..\conf`, where `~dp0`

spark git commit: [SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads spark-env.cmd from wrong directory

2015-11-25 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 97317d346 -> 7b720bf1c [SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads spark-env.cmd from wrong directory * On windows the `bin/load-spark-env.cmd` tries to load `spark-env.cmd` from `%~dp0..\..\conf`, where `~dp0`

spark git commit: [SPARK-11866][NETWORK][CORE] Make sure timed out RPCs are cleaned up.

2015-11-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-1.6 849ddb6ae -> cd86d8c74 [SPARK-11866][NETWORK][CORE] Make sure timed out RPCs are cleaned up. This change does a couple of different things to make sure that the RpcEnv-level code and the network library agree about the status of

spark git commit: Fix Aggregator documentation (rename present to finish).

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 cd86d8c74 -> 399739702 Fix Aggregator documentation (rename present to finish). (cherry picked from commit ecac2835458bbf73fe59413d5bf921500c5b987d) Signed-off-by: Reynold Xin Project:

spark git commit: [SPARK-11866][NETWORK][CORE] Make sure timed out RPCs are cleaned up.

2015-11-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/master d29e2ef4c -> 4e81783e9 [SPARK-11866][NETWORK][CORE] Make sure timed out RPCs are cleaned up. This change does a couple of different things to make sure that the RpcEnv-level code and the network library agree about the status of

spark git commit: Fix Aggregator documentation (rename present to finish).

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/master 4e81783e9 -> ecac28354 Fix Aggregator documentation (rename present to finish). Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ecac2835 Tree:

spark git commit: [SPARK-11984][SQL][PYTHON] Fix typos in doc for pivot for scala and python

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/master c1f85fc71 -> faabdfa2b [SPARK-11984][SQL][PYTHON] Fix typos in doc for pivot for scala and python Author: felixcheung Closes #9967 from felixcheung/pypivotdoc. Project:

spark git commit: [SPARK-11984][SQL][PYTHON] Fix typos in doc for pivot for scala and python

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 400f66f7c -> 204f3601d [SPARK-11984][SQL][PYTHON] Fix typos in doc for pivot for scala and python Author: felixcheung Closes #9967 from felixcheung/pypivotdoc. (cherry picked from commit

spark git commit: [SPARK-12003] [SQL] remove the prefix for name after expanded star

2015-11-25 Thread davies
Repository: spark Updated Branches: refs/heads/branch-1.6 399739702 -> d40bf9ad8 [SPARK-12003] [SQL] remove the prefix for name after expanded star Right now, the expanded star will include the name of the expression as a prefix for the column; that's no better than without expanding, so we should not

spark git commit: [SPARK-12003] [SQL] remove the prefix for name after expanded star

2015-11-25 Thread davies
Repository: spark Updated Branches: refs/heads/master cc243a079 -> d1930ec01 [SPARK-12003] [SQL] remove the prefix for name after expanded star Right now, the expanded star will include the name of the expression as a prefix for the column; that's no better than without expanding, so we should not have

spark git commit: [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task

2015-11-25 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-1.5 b1fcefca6 -> 7900d192e [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task In the previous code, `newDaemonCachedThreadPool` uses `SynchronousQueue`, which is wrong. `SynchronousQueue`
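The problem with `SynchronousQueue` is that it has no capacity: once the pool's maximum thread count is reached, further submissions are rejected rather than queued. A hypothetical sketch of the fix pattern (simplified from the commit; not Spark's exact `ThreadUtils` code) is to make all threads core threads backed by an unbounded `LinkedBlockingQueue`, and let idle core threads time out so the pool still shrinks when idle:

```scala
import java.util.concurrent.{CountDownLatch, LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}

// Core size == max size: extra tasks queue instead of being rejected.
def fixedCachedPool(maxThreads: Int): ThreadPoolExecutor = {
  val pool = new ThreadPoolExecutor(
    maxThreads, maxThreads,
    60L, TimeUnit.SECONDS,               // idle threads die after 60s...
    new LinkedBlockingQueue[Runnable]())
  pool.allowCoreThreadTimeOut(true)      // ...including core threads
  pool
}

// Four tasks on a two-thread pool: two run immediately, two queue,
// and none are rejected.
val pool = fixedCachedPool(2)
val latch = new CountDownLatch(4)
(1 to 4).foreach { _ =>
  pool.execute(new Runnable { def run(): Unit = latch.countDown() })
}
latch.await(5, TimeUnit.SECONDS)
pool.shutdown()
println(latch.getCount)  // 0: all four tasks ran on at most two threads
```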

spark git commit: [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task

2015-11-25 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-1.4 1df3e8230 -> f5af299ab [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task In the previous code, `newDaemonCachedThreadPool` uses `SynchronousQueue`, which is wrong. `SynchronousQueue`

spark git commit: [SPARK-11980][SPARK-10621][SQL] Fix json_tuple and add test cases for

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 d40bf9ad8 -> 7e7f2627f [SPARK-11980][SPARK-10621][SQL] Fix json_tuple and add test cases for Added Python test cases for the function `isnan`, `isnull`, `nanvl` and `json_tuple`. Fixed a bug in the function `json_tuple` rxin , could

spark git commit: [SPARK-11980][SPARK-10621][SQL] Fix json_tuple and add test cases for

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/master d1930ec01 -> 068b6438d [SPARK-11980][SPARK-10621][SQL] Fix json_tuple and add test cases for Added Python test cases for the function `isnan`, `isnull`, `nanvl` and `json_tuple`. Fixed a bug in the function `json_tuple` rxin , could you

spark git commit: [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task

2015-11-25 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-1.6 7e7f2627f -> 0df6beccc [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task In the previous code, `newDaemonCachedThreadPool` uses `SynchronousQueue`, which is wrong. `SynchronousQueue`

spark git commit: [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task

2015-11-25 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 068b6438d -> d3ef69332 [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task In the previous code, `newDaemonCachedThreadPool` uses `SynchronousQueue`, which is wrong. `SynchronousQueue` is

spark git commit: [SPARK-11981][SQL] Move implementations of methods back to DataFrame from Queryable

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/master 2610e0612 -> a0f1a1183 [SPARK-11981][SQL] Move implementations of methods back to DataFrame from Queryable Also added show methods to Dataset. Author: Reynold Xin Closes #9964 from rxin/SPARK-11981. Project:

spark git commit: [SPARK-11981][SQL] Move implementations of methods back to DataFrame from Queryable

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 007eb4ac0 -> 997896643 [SPARK-11981][SQL] Move implementations of methods back to DataFrame from Queryable Also added show methods to Dataset. Author: Reynold Xin Closes #9964 from rxin/SPARK-11981. (cherry

spark git commit: [SPARK-11970][SQL] Adding JoinType into JoinWith and support Sample in Dataset API

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/branch-1.6 7f030aa42 -> 007eb4ac0 [SPARK-11970][SQL] Adding JoinType into JoinWith and support Sample in Dataset API Besides inner joins, the other join types may also be useful when users are using the joinWith function. Thus, added the

spark git commit: [SPARK-11970][SQL] Adding JoinType into JoinWith and support Sample in Dataset API

2015-11-25 Thread rxin
Repository: spark Updated Branches: refs/heads/master 216988688 -> 2610e0612 [SPARK-11970][SQL] Adding JoinType into JoinWith and support Sample in Dataset API Besides inner joins, the other join types may also be useful when users are using the joinWith function. Thus, added the joinType

spark git commit: [SPARK-11860][PYSPARK][DOCUMENTATION] Invalid argument specification …

2015-11-25 Thread srowen
Repository: spark Updated Branches: refs/heads/master 638500265 -> b9b6fbe89 [SPARK-11860][PYSPARK][DOCUMENTATION] Invalid argument specification … …for registerFunction [Python] Straightforward change on the python doc Author: Jeff Zhang Closes #9901 from

spark git commit: [SPARK-11860][PYSPARK][DOCUMENTATION] Invalid argument specification …

2015-11-25 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-1.6 a986a3bde -> 4971eaaa5 [SPARK-11860][PYSPARK][DOCUMENTATION] Invalid argument specification … …for registerFunction [Python] Straightforward change on the python doc Author: Jeff Zhang Closes #9901 from

spark git commit: [SPARK-11686][CORE] Issue WARN when dynamic allocation is disabled due to spark.dynamicAllocation.enabled and spark.executor.instances both set

2015-11-25 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-1.6 997896643 -> a986a3bde [SPARK-11686][CORE] Issue WARN when dynamic allocation is disabled due to spark.dynamicAllocation.enabled and spark.executor.instances both set Changed the log type to a 'warning' instead of 'info' as required.
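The condition being warned about can be sketched as a small config check. This is a hypothetical simplification for illustration (the function name and conf handling are not Spark's actual internals): dynamic allocation is treated as disabled when `spark.executor.instances` is also set, and that situation now logs at WARN instead of INFO.

```scala
// Illustrative sketch: returns whether dynamic allocation is effectively
// enabled, warning when a fixed executor count overrides it.
def dynamicAllocationEnabled(conf: Map[String, String]): Boolean = {
  val dynEnabled = conf.getOrElse("spark.dynamicAllocation.enabled", "false").toBoolean
  val numExecutors = conf.getOrElse("spark.executor.instances", "0").toInt
  if (dynEnabled && numExecutors > 0) {
    // Previously logged at INFO; the change promotes it to WARN.
    System.err.println("WARN: spark.dynamicAllocation.enabled and " +
      "spark.executor.instances are both set; dynamic allocation is disabled.")
    false
  } else {
    dynEnabled
  }
}

println(dynamicAllocationEnabled(Map(
  "spark.dynamicAllocation.enabled" -> "true",
  "spark.executor.instances" -> "4")))  // false
```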

spark git commit: [SPARK-11686][CORE] Issue WARN when dynamic allocation is disabled due to spark.dynamicAllocation.enabled and spark.executor.instances both set

2015-11-25 Thread srowen
Repository: spark Updated Branches: refs/heads/master a0f1a1183 -> 638500265 [SPARK-11686][CORE] Issue WARN when dynamic allocation is disabled due to spark.dynamicAllocation.enabled and spark.executor.instances both set Changed the log type to a 'warning' instead of 'info' as required.