Repository: spark
Updated Branches:
refs/heads/branch-2.0 19a14e841 -> 80a4bfa4d
[SPARK-9926] Parallelize partition logic in UnionRDD.
This patch has the new logic from #8512 that uses a parallel collection to
compute partitions in UnionRDD. The rest of #8512 added an alternative code
path f
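The excerpt above is cut off, but the core idea of SPARK-9926 — computing the partitions of many child RDDs concurrently instead of in a sequential loop — can be sketched in plain Python. This is an illustration, not Spark's actual code; `get_partitions`, the threshold value, and the dict-based "RDD" are all hypothetical stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-in for RDD.getPartitions: each child RDD reports
# its partition list, which may be slow to compute.
def get_partitions(rdd):
    return [(rdd["id"], i) for i in range(rdd["num_partitions"])]

def union_partitions(rdds, parallelism_threshold=10):
    """Compute all child partitions, in parallel once there are many RDDs."""
    if len(rdds) >= parallelism_threshold:
        with ThreadPoolExecutor(max_workers=8) as pool:
            per_rdd = list(pool.map(get_partitions, rdds))
    else:
        per_rdd = [get_partitions(r) for r in rdds]
    # Flatten into a single partition list, as UnionRDD does.
    return [p for parts in per_rdd for p in parts]
```

`pool.map` preserves input order, so the flattened list matches what the sequential loop would produce.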
Repository: spark
Updated Branches:
refs/heads/master 08db49126 -> 02c07e899
[SPARK-14893][SQL] Re-enable HiveSparkSubmitSuite SPARK-8489 test after
HiveContext is removed
## What changes were proposed in this pull request?
Enable the test that was disabled when HiveContext was removed.
##
Repository: spark
Updated Branches:
refs/heads/branch-2.0 80a4bfa4d -> 1064a3303
[SPARK-14893][SQL] Re-enable HiveSparkSubmitSuite SPARK-8489 test after
HiveContext is removed
## What changes were proposed in this pull request?
Enable the test that was disabled when HiveContext was removed.
Repository: spark
Updated Branches:
refs/heads/master 02c07e899 -> bbb777343
[SPARK-15152][DOC][MINOR] Scaladoc and Code style Improvements
## What changes were proposed in this pull request?
Minor doc and code style fixes
## How was this patch tested?
local build
Author: Jacek Laskowski
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1064a3303 -> a1887f213
[SPARK-15152][DOC][MINOR] Scaladoc and Code style Improvements
## What changes were proposed in this pull request?
Minor doc and code style fixes
## How was this patch tested?
local build
Author: Jacek Laskows
Repository: spark
Updated Branches:
refs/heads/master bbb777343 -> 7f5922aa4
[HOTFIX] Fix MLUtils compile
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7f5922aa
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7f5
Repository: spark
Updated Branches:
refs/heads/branch-2.0 a1887f213 -> 7dc3fb6ae
[HOTFIX] Fix MLUtils compile
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7dc3fb6a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a3aa22a59 -> ab006523b
[SPARK-13566][CORE] Avoid deadlock between BlockManager and Executor Thread
Temp patch for branch 1.6; avoids deadlock between BlockManager and Executor
Thread.
Author: cenyuhai
Closes #11546 from cenyuhai/SP
Repository: spark
Updated Branches:
refs/heads/branch-2.0 fb73663db -> 5cdb7bea5
[SPARK-15093][SQL] create/delete/rename directory for InMemoryCatalog
operations if needed
## What changes were proposed in this pull request?
The following operations now perform file system operations:
1. CREATE DATA
Repository: spark
Updated Branches:
refs/heads/master ee3b17156 -> beb16ec55
[SPARK-15093][SQL] create/delete/rename directory for InMemoryCatalog
operations if needed
## What changes were proposed in this pull request?
The following operations now perform file system operations:
1. CREATE DATABASE
Repository: spark
Updated Branches:
refs/heads/master beb16ec55 -> b1e01fd51
[SPARK-15199][SQL] Disallow Dropping Built-in Functions
What changes were proposed in this pull request?
Following the behavior of Hive and major RDBMSs, built-in functions are not
allowed to be dropped. In the current impleme
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5cdb7bea5 -> 29bc8d2ec
[SPARK-15199][SQL] Disallow Dropping Built-in Functions
What changes were proposed in this pull request?
Following the behavior of Hive and major RDBMSs, built-in functions are not
allowed to be dropped. In the current imp
Repository: spark
Updated Branches:
refs/heads/master 671b382a8 -> 2992a215c
[MINOR][DOCS] Remove remaining sqlContext in documentation at examples
This PR removes `sqlContext` in examples. Actual usage was all replaced in
https://github.com/apache/spark/pull/12809 but there are some in comme
Repository: spark
Updated Branches:
refs/heads/branch-2.0 de6afc887 -> 6371197c6
[MINOR][DOCS] Remove remaining sqlContext in documentation at examples
This PR removes `sqlContext` in examples. Actual usage was all replaced in
https://github.com/apache/spark/pull/12809 but there are some in c
Repository: spark
Updated Branches:
refs/heads/master 2992a215c -> 65b4ab281
[SPARK-15223][DOCS] fix wrongly named config reference
## What changes were proposed in this pull request?
The configuration setting `spark.executor.logs.rolling.size.maxBytes` was
changed to `spark.executor.logs.ro
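The new key name is cut off above; assuming the rename this doc fix refers to was to `spark.executor.logs.rolling.maxSize`, the deprecated-alias lookup that a config layer performs can be sketched like this (a minimal illustration, not SparkConf's actual implementation):

```python
import warnings

# Assumed mapping for illustration: old rolling-log key -> renamed key.
DEPRECATED_KEYS = {
    "spark.executor.logs.rolling.size.maxBytes": "spark.executor.logs.rolling.maxSize",
}

def resolve_key(key):
    """Return the current config key, warning if a deprecated alias was used."""
    if key in DEPRECATED_KEYS:
        new_key = DEPRECATED_KEYS[key]
        warnings.warn(f"{key} is deprecated; use {new_key} instead")
        return new_key
    return key
```

Keeping docs pointed at the canonical key avoids users copying a name the resolver only tolerates with a warning.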
Repository: spark
Updated Branches:
refs/heads/branch-2.0 6371197c6 -> 1b4e99ff1
[SPARK-15223][DOCS] fix wrongly named config reference
## What changes were proposed in this pull request?
The configuration setting `spark.executor.logs.rolling.size.maxBytes` was
changed to `spark.executor.log
Repository: spark
Updated Branches:
refs/heads/branch-1.6 ab006523b -> 518af0796
[SPARK-15223][DOCS] fix wrongly named config reference
## What changes were proposed in this pull request?
The configuration setting `spark.executor.logs.rolling.size.maxBytes` was
changed to `spark.executor.log
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1b4e99ff1 -> 8f0ed2891
[SPARK-15225][SQL] Replace SQLContext with SparkSession in Encoder documentation
`Encoder`'s doc mentions `sqlContext.implicits._`. We should use
`sparkSession.implicits._` instead now.
Only doc update.
Author:
Repository: spark
Updated Branches:
refs/heads/master 65b4ab281 -> e083db2e9
[SPARK-15225][SQL] Replace SQLContext with SparkSession in Encoder documentation
`Encoder`'s doc mentions `sqlContext.implicits._`. We should use
`sparkSession.implicits._` instead now.
Only doc update.
Author: Lia
Repository: spark
Updated Branches:
refs/heads/branch-2.0 8f0ed2891 -> 3c6f686f9
[SPARK-15067][YARN] YARN executors are launched with fixed perm gen size
## What changes were proposed in this pull request?
Look for MaxPermSize arguments anywhere in an arg, to account for quoted args.
See JIR
Repository: spark
Updated Branches:
refs/heads/master e083db2e9 -> 6747171eb
[SPARK-15067][YARN] YARN executors are launched with fixed perm gen size
## What changes were proposed in this pull request?
Look for MaxPermSize arguments anywhere in an arg, to account for quoted args.
See JIRA fo
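The fix described here is to look for `MaxPermSize` anywhere inside each argument string rather than requiring an exact token match, so a quoted compound argument still counts as the user setting it. A minimal sketch of that check (hypothetical helper names, not the actual YARN launcher code):

```python
def user_sets_perm_gen(java_opts):
    """True if any JVM option string mentions MaxPermSize, even inside a
    quoted compound argument like '-Xmx2g -XX:MaxPermSize=512m'."""
    return any("-XX:MaxPermSize" in opt for opt in java_opts)

def executor_jvm_args(user_opts, default="-XX:MaxPermSize=256m"):
    # Only append the default if the user did not set one anywhere.
    args = list(user_opts)
    if not user_sets_perm_gen(user_opts):
        args.append(default)
    return args
```

With an exact-match check, the quoted compound argument below would wrongly get a second, fixed MaxPermSize appended.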
Repository: spark
Updated Branches:
refs/heads/master 6747171eb -> ee6a8d7ea
[MINOR][SQL] Enhance the exception message if checkpointLocation is not set
Enhance the exception message when `checkpointLocation` is not set; previously
the message was:
```
java.util.NoSuchElementException: None.g
```
Repository: spark
Updated Branches:
refs/heads/branch-2.0 3c6f686f9 -> 1d5615857
[MINOR][SQL] Enhance the exception message if checkpointLocation is not set
Enhance the exception message when `checkpointLocation` is not set; previously
the message was:
```
java.util.NoSuchElementException: No
```
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1d5615857 -> c6d23b660
[SPARK-15220][UI] add hyperlink to running application and completed application
## What changes were proposed in this pull request?
Add hyperlink to "running application" and "completed application", so user can
Repository: spark
Updated Branches:
refs/heads/master ee6a8d7ea -> f8aca5b4a
[SPARK-15220][UI] add hyperlink to running application and completed application
## What changes were proposed in this pull request?
Add hyperlink to "running application" and "completed application", so user can
jum
Repository: spark
Updated Branches:
refs/heads/master f8aca5b4a -> dfdcab00c
[SPARK-15210][SQL] Add missing @DeveloperApi annotation in sql.types
add DeveloperApi annotation for `AbstractDataType` `MapType` `UserDefinedType`
local build
Author: Zheng RuiFeng
Closes #12982 from zhengruifeng
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c6d23b660 -> f81d25139
[SPARK-15210][SQL] Add missing @DeveloperApi annotation in sql.types
add DeveloperApi annotation for `AbstractDataType` `MapType` `UserDefinedType`
local build
Author: Zheng RuiFeng
Closes #12982 from zhengrui
sts.
Author: Andrew Or
Closes #12941 from andrewor14/move-code.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7bf9b120
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7bf9b120
Diff: http://git-wip-us.apache.org/repos/
sts.
Author: Andrew Or
Closes #12941 from andrewor14/move-code.
(cherry picked from commit 7bf9b12019bb20470b726a7233d60ce38a9c52cc)
Signed-off-by: Andrew Or
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e3f000a3
Tree: h
Repository: spark
Updated Branches:
refs/heads/master 7bf9b1201 -> c3e23bc0c
[SPARK-10653][CORE] Remove unnecessary things from SparkEnv
## What changes were proposed in this pull request?
Removed blockTransferService and sparkFilesDir from SparkEnv since they're
rarely used and don't need t
Repository: spark
Updated Branches:
refs/heads/branch-2.0 e3f000a36 -> 40d24686a
[SPARK-10653][CORE] Remove unnecessary things from SparkEnv
## What changes were proposed in this pull request?
Removed blockTransferService and sparkFilesDir from SparkEnv since they're
rarely used and don't ne
Repository: spark
Updated Branches:
refs/heads/master 0b9cae424 -> bcfee153b
[SPARK-12837][CORE] reduce network IO for accumulators
Sending un-updated accumulators back to the driver makes no sense, as merging a
zero-value accumulator is a no-op. We should only send back updated
accumulators, to
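The optimization amounts to filtering out accumulators whose value never changed before shipping task results back. A toy sketch of the idea (hypothetical `Accum` class, not Spark's AccumulatorV2 API):

```python
class Accum:
    """Toy accumulator: tracks whether it was ever updated on this task."""
    def __init__(self, zero=0):
        self.value = zero
        self.updated = False

    def add(self, v):
        self.value += v
        self.updated = True

def updates_to_send(accums):
    # Merging a zero-value accumulator on the driver is a no-op,
    # so only ship back the ones that were actually updated.
    return [a for a in accums if a.updated]
```

For jobs that register many accumulators but touch few per task, this shrinks every task-result message.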
Repository: spark
Updated Branches:
refs/heads/branch-2.0 af12b0a50 -> 19a9c23c2
[SPARK-12837][CORE] reduce network IO for accumulators
Sending un-updated accumulators back to the driver makes no sense, as merging a
zero-value accumulator is a no-op. We should only send back updated
accumulators
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRegressionSuite.java
--
diff --git
a/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRe
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadBenchmark.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/executio
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaDatasetAggregatorSuiteBase.java
--
diff --git
a/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaD
Repository: spark
Updated Branches:
refs/heads/master bcfee153b -> ed0b4070f
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala
--
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonParsingOptionsSuite.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/executio
[SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java
TestSuites
## What changes were proposed in this pull request?
Use SparkSession instead of SQLContext in Scala/Java TestSuites
As this PR is already very big, the Python TestSuites will be done in a different PR.
## How was this patch
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
--
diff --git
a/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
Repository: spark
Updated Branches:
refs/heads/branch-2.0 19a9c23c2 -> 5bf74b44d
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala
---
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSuite.java
--
diff --git
a/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSu
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonParsingOptionsSuite.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/executio
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaDatasetAggregatorSuiteBase.java
--
diff --git
a/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaD
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
b/sql/core/src/test/scala
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
b/sql/core/src/test/scala
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadBenchmark.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/executio
[SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java
TestSuites
## What changes were proposed in this pull request?
Use SparkSession instead of SQLContext in Scala/Java TestSuites
As this PR is already very big, the Python TestSuites will be done in a different PR.
## How was this patch
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
--
diff --git
a/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSuite.java
--
diff --git
a/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSu
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRegressionSuite.java
--
diff --git
a/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRe
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite
Repository: spark
Updated Branches:
refs/heads/master ed0b4070f -> 5c6b08557
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust that the underlying external catalog will throw
exceptions on invalid metadata operations, let's do it
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5bf74b44d -> 42db140c5
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust that the underlying external catalog will throw
exceptions on invalid metadata operations, let's do
Repository: spark
Updated Branches:
refs/heads/master 5c6b08557 -> cddb9da07
[HOTFIX] SQL test compilation error from merge conflict
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cddb9da0
Tree: http://git-wip-us.apache.o
Repository: spark
Updated Branches:
refs/heads/master cddb9da07 -> db3b4a201
[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
Pass the Jenkins tests.
Author: Dongjoon Hyun
Closes #13030 from dongjoo
Repository: spark
Updated Branches:
refs/heads/branch-2.0 42db140c5 -> bd7fd14c9
[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
Pass the Jenkins tests.
Author: Dongjoon Hyun
Closes #13030 from don
just to get the `SparkContext` from it. This ends up creating 2
`SparkSession`s from one call, which is definitely not what we want.
## How was this patch tested?
Jenkins.
Author: Andrew Or
Closes #13031 from andrewor14/sql-test.
(cherry picked from commit 69641066ae1d35c33b082451cef636a7
t to get the `SparkContext` from it. This ends up creating 2
`SparkSession`s from one call, which is definitely not what we want.
## How was this patch tested?
Jenkins.
Author: Andrew Or
Closes #13031 from andrewor14/sql-test.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Comm
Repository: spark
Updated Branches:
refs/heads/branch-2.0 95f254994 -> 1db027d11
[SPARK-15249][SQL] Use FunctionResource instead of (String, String) in
CreateFunction and CatalogFunction for resource
Use FunctionResource instead of (String, String) in CreateFunction and
CatalogFunction for r
Repository: spark
Updated Branches:
refs/heads/master 9533f5390 -> da02d006b
[SPARK-15249][SQL] Use FunctionResource instead of (String, String) in
CreateFunction and CatalogFunction for resource
Use FunctionResource instead of (String, String) in CreateFunction and
CatalogFunction for resou
[SPARK-13522][CORE] Executor should kill itself when it's unable to heartbeat
to driver more than N times
## What changes were proposed in this pull request?
Sometimes, a network disconnection event won't be triggered, due to other potential
race conditions that we may not have thought of; then the e
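The mechanism SPARK-13522 describes is a consecutive-failure counter: each failed heartbeat increments it, a success resets it, and the executor exits once it exceeds N. A minimal sketch under those assumptions (class and method names are illustrative, not Spark's):

```python
class HeartbeatReporter:
    def __init__(self, max_failures=60):
        self.max_failures = max_failures
        self.failures = 0

    def report(self, send_heartbeat):
        """Run one heartbeat attempt; return False when the executor
        should kill itself because the driver seems unreachable."""
        try:
            send_heartbeat()
            self.failures = 0          # any success resets the counter
        except Exception:
            self.failures += 1
        return self.failures <= self.max_failures
```

This catches "dead driver" cases where no disconnection event ever fires, because the counter only depends on heartbeat outcomes.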
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d1654864a -> ced71d353
[SPARK-13519][CORE] Driver should tell Executor to stop itself when cleaning
executor's state
## What changes were proposed in this pull request?
When the driver removes an executor's state, the connection betwe
[SPARK-13522][CORE] Fix the exit log place for heartbeat
## What changes were proposed in this pull request?
Just fixed the log place introduced by #11401
## How was this patch tested?
unit tests.
Author: Shixiong Zhu
Closes #11432 from zsxwing/SPARK-13522-follow-up.
Project: http://git-wi
the `EXTERNAL` field optional. This is related to
#13032.
## How was this patch tested?
New test in `DDLCommandSuite`.
Author: Andrew Or
Closes #13060 from andrewor14/location-implies-external.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/
the `EXTERNAL` field optional. This is related to
#13032.
## How was this patch tested?
New test in `DDLCommandSuite`.
Author: Andrew Or
Closes #13060 from andrewor14/location-implies-external.
(cherry picked from commit f14c4ba001fbdbcc9faa46896f1f9d08a7d06609)
Signed-off-by: Andrew Or
Proj
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f763c1485 -> f8804bb10
[SPARK-15264][SPARK-15274][SQL] CSV Reader Error on Blank Column Names
## What changes were proposed in this pull request?
When a CSV begins with:
- `,,`
OR
- `"","",`
meaning that the first column names are eit
Repository: spark
Updated Branches:
refs/heads/master f14c4ba00 -> 603f4453a
[SPARK-15264][SPARK-15274][SQL] CSV Reader Error on Blank Column Names
## What changes were proposed in this pull request?
When a CSV begins with:
- `,,`
OR
- `"","",`
meaning that the first column names are either
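One common resolution, and what this sketch assumes, is to substitute positional placeholder names for blank headers so downstream code never sees an empty column name (Spark's CSV reader uses `_c<index>`-style defaults; treat the exact format here as illustrative):

```python
def fill_blank_headers(headers):
    """Replace empty or whitespace-only column names with positional
    placeholders so every column ends up with a usable name."""
    return [
        name if name and name.strip() else f"_c{i}"
        for i, name in enumerate(headers)
    ]
```

A header row parsed from `,,price` then yields `_c0`, `_c1`, `price` instead of two empty strings that break schema lookup.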
Repository: spark
Updated Branches:
refs/heads/master 603f4453a -> db573fc74
[SPARK-15072][SQL][PYSPARK] FollowUp: Remove SparkSession.withHiveSupport in
PySpark
## What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/12851
Remove `SparkS
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f8804bb10 -> 114be703d
[SPARK-15072][SQL][PYSPARK] FollowUp: Remove SparkSession.withHiveSupport in
PySpark
## What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/12851
Remove `Sp
Repository: spark
Updated Branches:
refs/heads/branch-2.0 7d187539e -> 86acb5efd
[SPARK-15031][SPARK-15134][EXAMPLE][DOC] Use SparkSession and update indent in
examples
## What changes were proposed in this pull request?
1. Use `SparkSession` according to
[SPARK-15031](https://issues.apache.
Repository: spark
Updated Branches:
refs/heads/master ba5487c06 -> 9e266d07a
[SPARK-15031][SPARK-15134][EXAMPLE][DOC] Use SparkSession and update indent in
examples
## What changes were proposed in this pull request?
1. Use `SparkSession` according to
[SPARK-15031](https://issues.apache.org/
Repository: spark
Updated Branches:
refs/heads/branch-2.0 9098b1a17 -> b3f145442
[HOTFIX] SQL test compilation error from merge conflict
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b3f14544
Tree: http://git-wip-us.apac
Repository: spark
Updated Branches:
refs/heads/master 470de743e -> be617f3d0
[SPARK-14684][SPARK-15277][SQL] Partition Spec Validation in SessionCatalog and
Checking Partition Spec Existence Before Dropping
What changes were proposed in this pull request?
~~Currently, multiple partitions
Repository: spark
Updated Branches:
refs/heads/branch-2.0 68617e1ad -> 9c5c9013d
[SPARK-14684][SPARK-15277][SQL] Partition Spec Validation in SessionCatalog and
Checking Partition Spec Existence Before Dropping
What changes were proposed in this pull request?
~~Currently, multiple partit
Repository: spark
Updated Branches:
refs/heads/branch-2.0 2604eadcf -> 496f6d0fc
[SPARK-14603][SQL][FOLLOWUP] Verification of Metadata Operations by Session
Catalog
What changes were proposed in this pull request?
This follow-up PR is to address the remaining comments in
https://github.
Repository: spark
Updated Branches:
refs/heads/master ef7a5e0bc -> ad182086c
[SPARK-15300] Fix writer lock conflict when remove a block
## What changes were proposed in this pull request?
A writer lock can be acquired when 1) creating a new block, 2) removing a block, or
3) evicting a block to disk. 1
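The excerpt lists three code paths that all take a block's writer lock; the conflict arises when two of them race on the same block. The invariant — one exclusive per-block writer lock shared by all three paths — can be sketched like this (hypothetical names, not BlockInfoManager's API):

```python
import threading

class BlockStore:
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def writer_lock(self, block_id):
        # Create, remove and evict must all go through the same per-block
        # lock object, so two writers can never race on one block.
        with self._guard:
            return self._locks.setdefault(block_id, threading.Lock())
```

The bug class being fixed is exactly a path that bypasses (or re-acquires) this shared lock while another writer already holds it.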
Repository: spark
Updated Branches:
refs/heads/master ad182086c -> faafd1e9d
[SPARK-15387][SQL] SessionCatalog in SimpleAnalyzer does not need to make
database directory.
## What changes were proposed in this pull request?
After #12871 is fixed, we are forced to make `/user/hive/warehouse` w
Repository: spark
Updated Branches:
refs/heads/branch-2.0 96a473a11 -> 9c817d027
[SPARK-15387][SQL] SessionCatalog in SimpleAnalyzer does not need to make
database directory.
## What changes were proposed in this pull request?
After #12871 is fixed, we are forced to make `/user/hive/warehous
Repository: spark
Updated Branches:
refs/heads/master faafd1e9d -> f5065abf4
[SPARK-15322][SQL][FOLLOW-UP] Update deprecated accumulator usage into
accumulatorV2
## What changes were proposed in this pull request?
This PR corrects another case that uses deprecated `accumulableCollection` to
Repository: spark
Updated Branches:
refs/heads/branch-2.0 496f6d0fc -> 96a473a11
[SPARK-15300] Fix writer lock conflict when remove a block
## What changes were proposed in this pull request?
A writer lock could be acquired when 1) create a new block 2) remove a block 3)
evict a block to dis
Repository: spark
Updated Branches:
refs/heads/master 9308bf119 -> ef7a5e0bc
[SPARK-14603][SQL][FOLLOWUP] Verification of Metadata Operations by Session
Catalog
What changes were proposed in this pull request?
This follow-up PR is to address the remaining comments in
https://github.com/
Repository: spark
Updated Branches:
refs/heads/branch-2.0 9c817d027 -> 554e0f30a
[SPARK-15322][SQL][FOLLOW-UP] Update deprecated accumulator usage into
accumulatorV2
## What changes were proposed in this pull request?
This PR corrects another case that uses deprecated `accumulableCollection`
Repository: spark
Updated Branches:
refs/heads/branch-2.0 97fd9a09c -> 4f8639f9d
[SPARK-14346][SQL] Lists unsupported Hive features in SHOW CREATE TABLE output
## What changes were proposed in this pull request?
This PR is a follow-up of #13079. It replaces `hasUnsupportedFeatures: Boolean`
Repository: spark
Updated Branches:
refs/heads/master e71cd96bf -> 6ac1c3a04
[SPARK-14346][SQL] Lists unsupported Hive features in SHOW CREATE TABLE output
## What changes were proposed in this pull request?
This PR is a follow-up of #13079. It replaces `hasUnsupportedFeatures: Boolean`
in `
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4f8639f9d -> 62e5158f1
[SPARK-15317][CORE] Don't store accumulators for every task in listeners
## What changes were proposed in this pull request?
In general, the Web UI doesn't need to store the Accumulator/AccumulableInfo
for every
Repository: spark
Updated Branches:
refs/heads/master 6ac1c3a04 -> 4e3cb7a5d
[SPARK-15317][CORE] Don't store accumulators for every task in listeners
## What changes were proposed in this pull request?
In general, the Web UI doesn't need to store the Accumulator/AccumulableInfo
for every tas
Repository: spark
Updated Branches:
refs/heads/branch-2.0 62e5158f1 -> d1b5df83d
[SPARK-15392][SQL] fix default value of size estimation of logical plan
## What changes were proposed in this pull request?
We use autoBroadcastJoinThreshold + 1L as the default value of size estimation,
which is
Repository: spark
Updated Branches:
refs/heads/master 4e3cb7a5d -> 5ccecc078
[SPARK-15392][SQL] fix default value of size estimation of logical plan
## What changes were proposed in this pull request?
We use autoBroadcastJoinThreshold + 1L as the default value of size estimation,
which is not
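The point of that default is that a plan with unknown size must never qualify for an automatic broadcast join, so its estimate is set just above the threshold. A small sketch of the relationship (illustrative function names; the real check lives in Spark's join selection):

```python
def default_size_estimate(auto_broadcast_threshold):
    # One byte above the threshold: an unknown-size relation can never
    # pass the <= threshold check used for auto-broadcast below.
    return auto_broadcast_threshold + 1

def can_auto_broadcast(size_in_bytes, auto_broadcast_threshold):
    # A non-positive threshold disables auto-broadcast entirely.
    return 0 < auto_broadcast_threshold and size_in_bytes <= auto_broadcast_threshold
```

If the default were derived from anything else, changing the threshold could silently make unknown-size plans broadcastable.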
Repository: spark
Updated Branches:
refs/heads/branch-2.0 2126fb0c2 -> 1fc0f95eb
[HOTFIX] Test compilation error from 52b967f
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1fc0f95e
Tree: http://git-wip-us.apache.org/repo
Repository: spark
Updated Branches:
refs/heads/branch-2.0 2ef645724 -> 612866473
[HOTFIX] Add back intended change from SPARK-15392
This was accidentally reverted in f8d0177.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commi
such cases, we should throw exceptions instead.
## How was this patch tested?
`DDLCommandSuite`
Author: Andrew Or
Closes #13205 from andrewor14/ddl-prop-values.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2
s.
In such cases, we should throw exceptions instead.
## How was this patch tested?
`DDLCommandSuite`
Author: Andrew Or
Closes #13205 from andrewor14/ddl-prop-values.
(cherry picked from commit 257375019266ab9e3c320e33026318cc31f58ada)
Signed-off-by: Andrew Or
Project: http://git-wip-us.apa
ses #13203 from andrewor14/fix-pyspark-shell.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c32b1b16
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c32b1b16
Diff: http://git-wip-us.apache.org/repos/asf/spark/d
Or
Closes #13203 from andrewor14/fix-pyspark-shell.
(cherry picked from commit c32b1b162e7e5ecc5c823f79ba9f23cbd1407dbf)
Signed-off-by: Andrew Or
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/53c09f06
Tree: http://git-
LIB` is not recommended to use now, so examples in `MLLIB` are ignored in
this PR.
`StreamingContext` can not be directly obtained from `SparkSession`, so example
in `Streaming` are ignored too.
cc andrewor14
## How was this patch tested?
manual tests with spark-submit
Author: Zheng RuiFeng
Clo
is not recommended to use now, so examples in `MLLIB` are ignored in
this PR.
`StreamingContext` can not be directly obtained from `SparkSession`, so example
in `Streaming` are ignored too.
cc andrewor14
## How was this patch tested?
manual tests with spark-submit
Author: Zheng RuiFeng
Clo
Repository: spark
Updated Branches:
refs/heads/branch-2.0 684167862 -> c7e013f18
[SPARK-15456][PYSPARK] Fixed PySpark shell context initialization when HiveConf
not present
## What changes were proposed in this pull request?
When PySpark shell cannot find HiveConf, it will fallback to create
Repository: spark
Updated Branches:
refs/heads/master 127bf1bb0 -> 021c19702
[SPARK-15456][PYSPARK] Fixed PySpark shell context initialization when HiveConf
not present
## What changes were proposed in this pull request?
When PySpark shell cannot find HiveConf, it will fallback to create a