row exceptions.
- Remove `TRUNCATE TABLE ... COLUMN`, which was never supported by either Spark
or Hive.
## How was this patch tested?
Jenkins.
Author: Andrew Or
Closes #13302 from andrewor14/truncate-table.
(cherry picked from commit ee682fe293b47988056b540ee46ca49861309982)
Signed-off-by: Andrew
Repository: spark
Updated Branches:
refs/heads/master d6d3e5071 -> 02c8072ee
[MINOR][MLLIB][STREAMING][SQL] Fix typos
Fixed typos in source code for components [mllib], [streaming] and [SQL]
None and obvious.
Author: lfzCarlosC
Closes #13298 from lfzCarlosC/master.
Project: http://git-wi
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4009ddafd -> 6fc367e50
[MINOR][MLLIB][STREAMING][SQL] Fix typos
Fixed typos in source code for components [mllib], [streaming] and [SQL]
None and obvious.
Author: lfzCarlosC
Closes #13298 from lfzCarlosC/master.
(cherry picked from
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c75ec5eaa -> 4009ddafd
[MINOR][CORE] Fix a HadoopRDD log message and remove unused imports in rdd
files.
## What changes were proposed in this pull request?
This PR fixes the following typos in log message and comments of
`HadoopRDD.
Repository: spark
Updated Branches:
refs/heads/master 8239fdcb9 -> d6d3e5071
[MINOR][CORE] Fix a HadoopRDD log message and remove unused imports in rdd
files.
## What changes were proposed in this pull request?
This PR fixes the following typos in log message and comments of
`HadoopRDD.scal
low setting confs correctly.
This was a leftover TODO from https://github.com/apache/spark/pull/13200.
## How was this patch tested?
Python doc tests.
cc andrewor14
Author: Eric Liang
Closes #13289 from ericl/spark-15520.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://
low setting confs correctly.
This was a leftover TODO from https://github.com/apache/spark/pull/13200.
## How was this patch tested?
Python doc tests.
cc andrewor14
Author: Eric Liang
Closes #13289 from ericl/spark-15520.
(cherry picked from commit 8239fdcb9b54ab6d13c31ad9916b8334dd146
Repository: spark
Updated Branches:
refs/heads/branch-2.0 69327667d -> 27f26a39d
[SPARK-15345][SQL][PYSPARK] SparkSession's conf doesn't take effect when there is
already an existing SparkContext
## What changes were proposed in this pull request?
Override the existing SparkContext if the provid
Repository: spark
Updated Branches:
refs/heads/master b120fba6a -> 01e7b9c85
[SPARK-15345][SQL][PYSPARK] SparkSession's conf doesn't take effect when there is
already an existing SparkContext
## What changes were proposed in this pull request?
Override the existing SparkContext if the provided S
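The behavior this fix aims for can be sketched in plain Python; `apply_builder_conf` and the dict-based conf are illustrative stand-ins, not the actual PySpark API:

```python
def apply_builder_conf(existing_conf, builder_conf):
    """Hypothetical sketch: when a SparkContext already exists, the
    SparkSession builder's options are applied onto that context's conf
    instead of being silently dropped."""
    merged = dict(existing_conf)
    merged.update(builder_conf)  # builder options now take effect
    return merged
```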
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1bb0aa4b0 -> 2574abea0
[MINOR][CORE][TEST] Update obsolete `takeSample` test case.
## What changes were proposed in this pull request?
This PR fixes some obsolete comments and assertion in `takeSample` testcase of
`RDDSuite.scala`.
#
Repository: spark
Updated Branches:
refs/heads/master 784cc07d1 -> be99a99fe
[MINOR][CORE][TEST] Update obsolete `takeSample` test case.
## What changes were proposed in this pull request?
This PR fixes some obsolete comments and assertion in `takeSample` testcase of
`RDDSuite.scala`.
## Ho
Repository: spark
Updated Branches:
refs/heads/branch-2.0 988d4dbf4 -> 1bb0aa4b0
[SPARK-15388][SQL] Fix spark sql CREATE FUNCTION with hive 1.2.1
## What changes were proposed in this pull request?
spark.sql("CREATE FUNCTION myfunc AS 'com.haizhi.bdp.udf.UDFGetGeoCode'")
throws
"org.apache.
Repository: spark
Updated Branches:
refs/heads/master a313a5ae7 -> 784cc07d1
[SPARK-15388][SQL] Fix spark sql CREATE FUNCTION with hive 1.2.1
## What changes were proposed in this pull request?
spark.sql("CREATE FUNCTION myfunc AS 'com.haizhi.bdp.udf.UDFGetGeoCode'")
throws
"org.apache.hado
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1890f5fdf -> 6adbc0613
[SPARK-15397][SQL] fix string udf locate as hive
## What changes were proposed in this pull request?
In Hive, `locate("aa", "aaa", 0)` would yield 0, `locate("aa", "aaa", 1)` would
yield 1 and `locate("aa", "aaa
Repository: spark
Updated Branches:
refs/heads/master de726b0d5 -> d642b2735
[SPARK-15397][SQL] fix string udf locate as hive
## What changes were proposed in this pull request?
In Hive, `locate("aa", "aaa", 0)` would yield 0, `locate("aa", "aaa", 1)` would
yield 1 and `locate("aa", "aaa", 2
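The Hive semantics quoted above (1-based start position, with a start of 0 yielding 0) can be sketched in plain Python; `hive_locate` is an illustrative stand-in, not Spark's implementation:

```python
def hive_locate(substr, s, pos=1):
    # Hive's LOCATE is 1-based; a start position <= 0 yields 0.
    if pos <= 0:
        return 0
    idx = s.find(substr, pos - 1)  # convert to Python's 0-based index
    return idx + 1 if idx >= 0 else 0
```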
Repository: spark
Updated Branches:
refs/heads/branch-2.0 d0bcec157 -> 1890f5fdf
Revert "[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grows
beyond 64 KB"
This reverts commit d0bcec157d2bd2ed4eff848f831841bef4745904.
Project: http://git-wip-us.apache.org/repos/asf/spark/r
Repository: spark
Updated Branches:
refs/heads/master fa244e5a9 -> de726b0d5
Revert "[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grows
beyond 64 KB"
This reverts commit fa244e5a90690d6a31be50f2aa203ae1a2e9a1cf.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Repository: spark
Updated Branches:
refs/heads/branch-2.0 220b9a08e -> f3162b96d
[SPARK-15464][ML][MLLIB][SQL][TESTS] Replace SQLContext and SparkContext with
SparkSession using builder pattern in python test code
## What changes were proposed in this pull request?
Replace SQLContext and Spa
Repository: spark
Updated Branches:
refs/heads/master 5afd927a4 -> a15ca5533
[SPARK-15464][ML][MLLIB][SQL][TESTS] Replace SQLContext and SparkContext with
SparkSession using builder pattern in python test code
## What changes were proposed in this pull request?
Replace SQLContext and SparkCo
Repository: spark
Updated Branches:
refs/heads/branch-2.0 3def56120 -> 220b9a08e
[SPARK-15311][SQL] Disallow DML on Regular Tables when Using In-Memory Catalog
## What changes were proposed in this pull request?
So far, when using In-Memory Catalog, we allow DDL operations for the tables.
H
Repository: spark
Updated Branches:
refs/heads/master 01659bc50 -> 5afd927a4
[SPARK-15311][SQL] Disallow DML on Regular Tables when Using In-Memory Catalog
## What changes were proposed in this pull request?
So far, when using In-Memory Catalog, we allow DDL operations for the tables.
Howev
ause the SerDe's conflict. As of this patch:
- `ROW FORMAT DELIMITED` is only compatible with `TEXTFILE`
- `ROW FORMAT SERDE` is only compatible with `TEXTFILE`, `RCFILE` and
`SEQUENCEFILE`
## How was this patch tested?
New tests in `DDLCommandSuite`.
Author: Andrew Or
Closes #13068 from
ause the SerDe's conflict. As of this patch:
- `ROW FORMAT DELIMITED` is only compatible with `TEXTFILE`
- `ROW FORMAT SERDE` is only compatible with `TEXTFILE`, `RCFILE` and
`SEQUENCEFILE`
## How was this patch tested?
New tests in `DDLCommandSuite`.
Author: Andrew Or
Closes #13068 from
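The compatibility matrix stated in the commit message above can be expressed as a small lookup table; this is a sketch of the rule, not the parser's actual code:

```python
# Compatibility rules as described in the commit message above.
ROW_FORMAT_COMPATIBILITY = {
    "DELIMITED": {"TEXTFILE"},
    "SERDE": {"TEXTFILE", "RCFILE", "SEQUENCEFILE"},
}

def is_compatible(row_format, file_format):
    # Unknown row formats are treated as incompatible in this sketch.
    return file_format in ROW_FORMAT_COMPATIBILITY.get(row_format, set())
```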
Repository: spark
Updated Branches:
refs/heads/branch-2.0 684167862 -> c7e013f18
[SPARK-15456][PYSPARK] Fixed PySpark shell context initialization when HiveConf
not present
## What changes were proposed in this pull request?
When PySpark shell cannot find HiveConf, it will fallback to create
Repository: spark
Updated Branches:
refs/heads/master 127bf1bb0 -> 021c19702
[SPARK-15456][PYSPARK] Fixed PySpark shell context initialization when HiveConf
not present
## What changes were proposed in this pull request?
When PySpark shell cannot find HiveConf, it will fallback to create a
is not recommended to use now, so examples in `MLLIB` are ignored in
this PR.
`StreamingContext` cannot be obtained directly from `SparkSession`, so examples
in `Streaming` are ignored too.
cc andrewor14
## How was this patch tested?
manual tests with spark-submit
Author: Zheng RuiFeng
Clo
LIB` is not recommended to use now, so examples in `MLLIB` are ignored in
this PR.
`StreamingContext` cannot be obtained directly from `SparkSession`, so examples
in `Streaming` are ignored too.
cc andrewor14
## How was this patch tested?
manual tests with spark-submit
Author: Zheng RuiFeng
Clo
Or
Closes #13203 from andrewor14/fix-pyspark-shell.
(cherry picked from commit c32b1b162e7e5ecc5c823f79ba9f23cbd1407dbf)
Signed-off-by: Andrew Or
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/53c09f06
Tree: http://git-
ses #13203 from andrewor14/fix-pyspark-shell.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c32b1b16
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c32b1b16
Diff: http://git-wip-us.apache.org/repos/asf/spark/d
s.
In such cases, we should throw exceptions instead.
## How was this patch tested?
`DDLCommandSuite`
Author: Andrew Or
Closes #13205 from andrewor14/ddl-prop-values.
(cherry picked from commit 257375019266ab9e3c320e33026318cc31f58ada)
Signed-off-by: Andrew Or
Project: http://git-wip-us.apa
such cases, we should throw exceptions instead.
## How was this patch tested?
`DDLCommandSuite`
Author: Andrew Or
Closes #13205 from andrewor14/ddl-prop-values.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2
Repository: spark
Updated Branches:
refs/heads/branch-2.0 2ef645724 -> 612866473
[HOTFIX] Add back intended change from SPARK-15392
This was accidentally reverted in f8d0177.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commi
Repository: spark
Updated Branches:
refs/heads/branch-2.0 2126fb0c2 -> 1fc0f95eb
[HOTFIX] Test compilation error from 52b967f
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1fc0f95e
Tree: http://git-wip-us.apache.org/repo
Repository: spark
Updated Branches:
refs/heads/master 4e3cb7a5d -> 5ccecc078
[SPARK-15392][SQL] fix default value of size estimation of logical plan
## What changes were proposed in this pull request?
We use autoBroadcastJoinThreshold + 1L as the default value of size estimation,
that is not
Repository: spark
Updated Branches:
refs/heads/branch-2.0 62e5158f1 -> d1b5df83d
[SPARK-15392][SQL] fix default value of size estimation of logical plan
## What changes were proposed in this pull request?
We use autoBroadcastJoinThreshold + 1L as the default value of size estimation,
that is
Repository: spark
Updated Branches:
refs/heads/master 6ac1c3a04 -> 4e3cb7a5d
[SPARK-15317][CORE] Don't store accumulators for every task in listeners
## What changes were proposed in this pull request?
In general, the Web UI doesn't need to store the Accumulator/AccumulableInfo
for every tas
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4f8639f9d -> 62e5158f1
[SPARK-15317][CORE] Don't store accumulators for every task in listeners
## What changes were proposed in this pull request?
In general, the Web UI doesn't need to store the Accumulator/AccumulableInfo
for every
Repository: spark
Updated Branches:
refs/heads/master e71cd96bf -> 6ac1c3a04
[SPARK-14346][SQL] Lists unsupported Hive features in SHOW CREATE TABLE output
## What changes were proposed in this pull request?
This PR is a follow-up of #13079. It replaces `hasUnsupportedFeatures: Boolean`
in `
Repository: spark
Updated Branches:
refs/heads/branch-2.0 97fd9a09c -> 4f8639f9d
[SPARK-14346][SQL] Lists unsupported Hive features in SHOW CREATE TABLE output
## What changes were proposed in this pull request?
This PR is a follow-up of #13079. It replaces `hasUnsupportedFeatures: Boolean`
Repository: spark
Updated Branches:
refs/heads/branch-2.0 9c817d027 -> 554e0f30a
[SPARK-15322][SQL][FOLLOW-UP] Update deprecated accumulator usage into
accumulatorV2
## What changes were proposed in this pull request?
This PR corrects another case that uses deprecated `accumulableCollection`
Repository: spark
Updated Branches:
refs/heads/master 9308bf119 -> ef7a5e0bc
[SPARK-14603][SQL][FOLLOWUP] Verification of Metadata Operations by Session
Catalog
## What changes were proposed in this pull request?
This follow-up PR is to address the remaining comments in
https://github.com/
Repository: spark
Updated Branches:
refs/heads/master faafd1e9d -> f5065abf4
[SPARK-15322][SQL][FOLLOW-UP] Update deprecated accumulator usage into
accumulatorV2
## What changes were proposed in this pull request?
This PR corrects another case that uses deprecated `accumulableCollection` to
Repository: spark
Updated Branches:
refs/heads/branch-2.0 496f6d0fc -> 96a473a11
[SPARK-15300] Fix writer lock conflict when remove a block
## What changes were proposed in this pull request?
A writer lock could be acquired when 1) creating a new block, 2) removing a block, or 3)
evicting a block to dis
Repository: spark
Updated Branches:
refs/heads/branch-2.0 96a473a11 -> 9c817d027
[SPARK-15387][SQL] SessionCatalog in SimpleAnalyzer does not need to make
database directory.
## What changes were proposed in this pull request?
After #12871 is fixed, we are forced to make `/user/hive/warehous
Repository: spark
Updated Branches:
refs/heads/master ad182086c -> faafd1e9d
[SPARK-15387][SQL] SessionCatalog in SimpleAnalyzer does not need to make
database directory.
## What changes were proposed in this pull request?
After #12871 is fixed, we are forced to make `/user/hive/warehouse` w
Repository: spark
Updated Branches:
refs/heads/master ef7a5e0bc -> ad182086c
[SPARK-15300] Fix writer lock conflict when remove a block
## What changes were proposed in this pull request?
A writer lock could be acquired when 1) creating a new block, 2) removing a block, or 3)
evicting a block to disk. 1
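A minimal sketch of the locking rule named above, using one lock per block; the class and method names are hypothetical, not Spark's `BlockManager` API:

```python
import threading

class BlockStore:
    # Each block is guarded by its own lock; creating and removing a block
    # both take that lock, so a remove cannot race a concurrent create.
    def __init__(self):
        self._blocks = {}
        self._locks = {}
        self._meta = threading.Lock()

    def _lock_for(self, block_id):
        with self._meta:
            return self._locks.setdefault(block_id, threading.RLock())

    def put(self, block_id, data):
        with self._lock_for(block_id):  # writer lock, case 1 above
            self._blocks[block_id] = data

    def remove(self, block_id):
        with self._lock_for(block_id):  # writer lock, case 2 above
            return self._blocks.pop(block_id, None)
```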
Repository: spark
Updated Branches:
refs/heads/branch-2.0 2604eadcf -> 496f6d0fc
[SPARK-14603][SQL][FOLLOWUP] Verification of Metadata Operations by Session
Catalog
## What changes were proposed in this pull request?
This follow-up PR is to address the remaining comments in
https://github.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 68617e1ad -> 9c5c9013d
[SPARK-14684][SPARK-15277][SQL] Partition Spec Validation in SessionCatalog and
Checking Partition Spec Existence Before Dropping
## What changes were proposed in this pull request?
~~Currently, multiple partit
Repository: spark
Updated Branches:
refs/heads/master 470de743e -> be617f3d0
[SPARK-14684][SPARK-15277][SQL] Partition Spec Validation in SessionCatalog and
Checking Partition Spec Existence Before Dropping
## What changes were proposed in this pull request?
~~Currently, multiple partitions
Repository: spark
Updated Branches:
refs/heads/branch-2.0 9098b1a17 -> b3f145442
[HOTFIX] SQL test compilation error from merge conflict
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b3f14544
Tree: http://git-wip-us.apac
Repository: spark
Updated Branches:
refs/heads/master ba5487c06 -> 9e266d07a
[SPARK-15031][SPARK-15134][EXAMPLE][DOC] Use SparkSession and update indent in
examples
## What changes were proposed in this pull request?
1, Use `SparkSession` according to
[SPARK-15031](https://issues.apache.org/
Repository: spark
Updated Branches:
refs/heads/branch-2.0 7d187539e -> 86acb5efd
[SPARK-15031][SPARK-15134][EXAMPLE][DOC] Use SparkSession and update indent in
examples
## What changes were proposed in this pull request?
1, Use `SparkSession` according to
[SPARK-15031](https://issues.apache.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f8804bb10 -> 114be703d
[SPARK-15072][SQL][PYSPARK] FollowUp: Remove SparkSession.withHiveSupport in
PySpark
## What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/12851
Remove `Sp
Repository: spark
Updated Branches:
refs/heads/master 603f4453a -> db573fc74
[SPARK-15072][SQL][PYSPARK] FollowUp: Remove SparkSession.withHiveSupport in
PySpark
## What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/12851
Remove `SparkS
Repository: spark
Updated Branches:
refs/heads/master f14c4ba00 -> 603f4453a
[SPARK-15264][SPARK-15274][SQL] CSV Reader Error on Blank Column Names
## What changes were proposed in this pull request?
When a CSV begins with:
- `,,`
OR
- `"","",`
meaning that the first column names are either
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f763c1485 -> f8804bb10
[SPARK-15264][SPARK-15274][SQL] CSV Reader Error on Blank Column Names
## What changes were proposed in this pull request?
When a CSV begins with:
- `,,`
OR
- `"","",`
meaning that the first column names are eit
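Spark names blank CSV columns with positional defaults of the form `_c<index>`; the handling described above can be sketched as:

```python
def fill_blank_headers(headers):
    # Replace empty header cells with positional default names.
    return [name if name else "_c{}".format(i)
            for i, name in enumerate(headers)]
```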
the `EXTERNAL` field optional. This is related to
#13032.
## How was this patch tested?
New test in `DDLCommandSuite`.
Author: Andrew Or
Closes #13060 from andrewor14/location-implies-external.
(cherry picked from commit f14c4ba001fbdbcc9faa46896f1f9d08a7d06609)
Signed-off-by: Andrew Or
Proj
the `EXTERNAL` field optional. This is related to
#13032.
## How was this patch tested?
New test in `DDLCommandSuite`.
Author: Andrew Or
Closes #13060 from andrewor14/location-implies-external.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/
[SPARK-13522][CORE] Fix the exit log place for heartbeat
## What changes were proposed in this pull request?
Just fixed the log place introduced by #11401
## How was this patch tested?
unit tests.
Author: Shixiong Zhu
Closes #11432 from zsxwing/SPARK-13522-follow-up.
Project: http://git-wi
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d1654864a -> ced71d353
[SPARK-13519][CORE] Driver should tell Executor to stop itself when cleaning
executor's state
## What changes were proposed in this pull request?
When the driver removes an executor's state, the connection betwe
[SPARK-13522][CORE] Executor should kill itself when it's unable to heartbeat
to driver more than N times
## What changes were proposed in this pull request?
Sometimes, network disconnection event won't be triggered for other potential
race conditions that we may not have thought of, then the e
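The idea can be sketched as a consecutive-failure counter; the limit of 60 below is an assumed illustrative value, not a setting taken from this commit:

```python
class HeartbeatMonitor:
    # Counts consecutive heartbeat failures; once the limit is reached,
    # the executor should exit rather than linger undetected.
    def __init__(self, max_failures=60):
        self.max_failures = max_failures
        self.failures = 0

    def record(self, success):
        self.failures = 0 if success else self.failures + 1
        return self.failures < self.max_failures  # False => executor exits
```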
Repository: spark
Updated Branches:
refs/heads/master 9533f5390 -> da02d006b
[SPARK-15249][SQL] Use FunctionResource instead of (String, String) in
CreateFunction and CatalogFunction for resource
Use FunctionResource instead of (String, String) in CreateFunction and
CatalogFunction for resou
Repository: spark
Updated Branches:
refs/heads/branch-2.0 95f254994 -> 1db027d11
[SPARK-15249][SQL] Use FunctionResource instead of (String, String) in
CreateFunction and CatalogFunction for resource
Use FunctionResource instead of (String, String) in CreateFunction and
CatalogFunction for r
t to get the `SparkContext` from it. This ends up creating 2
`SparkSession`s from one call, which is definitely not what we want.
## How was this patch tested?
Jenkins.
Author: Andrew Or
Closes #13031 from andrewor14/sql-test.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Comm
just to get the `SparkContext` from it. This ends up creating 2
`SparkSession`s from one call, which is definitely not what we want.
## How was this patch tested?
Jenkins.
Author: Andrew Or
Closes #13031 from andrewor14/sql-test.
(cherry picked from commit 69641066ae1d35c33b082451cef636a7
Repository: spark
Updated Branches:
refs/heads/master cddb9da07 -> db3b4a201
[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
Pass the Jenkins tests.
Author: Dongjoon Hyun
Closes #13030 from dongjoo
Repository: spark
Updated Branches:
refs/heads/branch-2.0 42db140c5 -> bd7fd14c9
[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
Pass the Jenkins tests.
Author: Dongjoon Hyun
Closes #13030 from don
Repository: spark
Updated Branches:
refs/heads/master 5c6b08557 -> cddb9da07
[HOTFIX] SQL test compilation error from merge conflict
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cddb9da0
Tree: http://git-wip-us.apache.o
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5bf74b44d -> 42db140c5
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust whether the underlying external catalog throws
exceptions when there is an invalid metadata operation, let's do
Repository: spark
Updated Branches:
refs/heads/master ed0b4070f -> 5c6b08557
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust whether the underlying external catalog throws
exceptions when there is an invalid metadata operation, let's do it
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
--
diff --git
a/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSuite.java
--
diff --git
a/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSu
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRegressionSuite.java
--
diff --git
a/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRe
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
b/sql/core/src/test/scala
[SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java
TestSuites
## What changes were proposed in this pull request?
Use SparkSession instead of SQLContext in Scala/Java TestSuites
as this PR is already very big; Python TestSuites are handled in a different PR.
## How was this patch
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaDatasetAggregatorSuiteBase.java
--
diff --git
a/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaD
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
b/sql/core/src/test/scala
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadBenchmark.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/executio
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonParsingOptionsSuite.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/executio
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
--
diff --git
a/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
Repository: spark
Updated Branches:
refs/heads/branch-2.0 19a9c23c2 -> 5bf74b44d
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala
---
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSuite.java
--
diff --git
a/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSu
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonParsingOptionsSuite.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/executio
[SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java
TestSuites
## What changes were proposed in this pull request?
Use SparkSession instead of SQLContext in Scala/Java TestSuites
as this PR is already very big; Python TestSuites are handled in a different PR.
## How was this patch
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRegressionSuite.java
--
diff --git
a/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRe
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadBenchmark.scala
--
diff --git
a/sql/core/src/test/scala/org/apache/spark/sql/executio
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaDatasetAggregatorSuiteBase.java
--
diff --git
a/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaD
Repository: spark
Updated Branches:
refs/heads/master bcfee153b -> ed0b4070f
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala
--
Repository: spark
Updated Branches:
refs/heads/branch-2.0 af12b0a50 -> 19a9c23c2
[SPARK-12837][CORE] reduce network IO for accumulators
Sending un-updated accumulators back to the driver makes no sense, as merging a
zero-value accumulator is a no-op. We should only send back updated
accumulators
Repository: spark
Updated Branches:
refs/heads/master 0b9cae424 -> bcfee153b
[SPARK-12837][CORE] reduce network IO for accumulators
Sending un-updated accumulators back to the driver makes no sense, as merging a
zero-value accumulator is a no-op. We should only send back updated
accumulators, to
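The filtering described above can be sketched over a toy accumulator representation; the dicts with `value` and `zero` fields are illustrative, not Spark's accumulator types:

```python
def updated_accumulators(accumulators):
    # Only accumulators that moved off their zero value need to be shipped
    # back to the driver; merging a zero value is a no-op anyway.
    return [a for a in accumulators if a["value"] != a["zero"]]
```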
Repository: spark
Updated Branches:
refs/heads/branch-2.0 e3f000a36 -> 40d24686a
[SPARK-10653][CORE] Remove unnecessary things from SparkEnv
## What changes were proposed in this pull request?
Removed blockTransferService and sparkFilesDir from SparkEnv since they're
rarely used and don't ne
Repository: spark
Updated Branches:
refs/heads/master 7bf9b1201 -> c3e23bc0c
[SPARK-10653][CORE] Remove unnecessary things from SparkEnv
## What changes were proposed in this pull request?
Removed blockTransferService and sparkFilesDir from SparkEnv since they're
rarely used and don't need t
sts.
Author: Andrew Or
Closes #12941 from andrewor14/move-code.
(cherry picked from commit 7bf9b12019bb20470b726a7233d60ce38a9c52cc)
Signed-off-by: Andrew Or
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e3f000a3
Tree: h
sts.
Author: Andrew Or
Closes #12941 from andrewor14/move-code.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7bf9b120
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7bf9b120
Diff: http://git-wip-us.apache.org/repos/
Repository: spark
Updated Branches:
refs/heads/master f8aca5b4a -> dfdcab00c
[SPARK-15210][SQL] Add missing @DeveloperApi annotation in sql.types
add DeveloperApi annotation for `AbstractDataType` `MapType` `UserDefinedType`
local build
Author: Zheng RuiFeng
Closes #12982 from zhengruifeng
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c6d23b660 -> f81d25139
[SPARK-15210][SQL] Add missing @DeveloperApi annotation in sql.types
add DeveloperApi annotation for `AbstractDataType` `MapType` `UserDefinedType`
local build
Author: Zheng RuiFeng
Closes #12982 from zhengrui
Repository: spark
Updated Branches:
refs/heads/master ee6a8d7ea -> f8aca5b4a
[SPARK-15220][UI] add hyperlink to running application and completed application
## What changes were proposed in this pull request?
Add hyperlink to "running application" and "completed application", so user can
jum
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1d5615857 -> c6d23b660
[SPARK-15220][UI] add hyperlink to running application and completed application
## What changes were proposed in this pull request?
Add hyperlink to "running application" and "completed application", so user can