Repository: spark
Updated Branches:
refs/heads/master 03bf704bf -> a60d2b70a
[SPARK-5454] More robust handling of self joins
Also I fix a bunch of bad output in test cases.
Author: Michael Armbrust
Closes #4520 from marmbrus/selfJoin and squashes the following commits:
4f4a85c [Mich
+        |INSERT OVERWRITE TABLE t1 SELECT 1 FROM src LIMIT 1;
+        |INSERT OVERWRITE TABLE t2 SELECT named_struct("x",1),1 FROM src LIMIT 1;
+        |SELECT a.x FROM t1 a JOIN t2 b ON a.x = b.k;
+""".stripMargin)
+
  /**
   * Negative examples. Currently only left here for documentation purposes.
   * TODO(marmbrus): Test that catalyst fails on these queries.
Repository: spark
Updated Branches:
refs/heads/master d931b01dc -> a38e23c30
[SQL] Make dataframe more tolerant of being serialized
Eases use in the spark-shell.
Author: Michael Armbrust
Closes #4545 from marmbrus/serialization and squashes the following commits:
04748e6 [Michael Armbr
Repository: spark
Updated Branches:
refs/heads/branch-1.3 bcb13827c -> 3c1b9bf65
[SQL] Make dataframe more tolerant of being serialized
Eases use in the spark-shell.
Author: Michael Armbrust
Closes #4545 from marmbrus/serialization and squashes the following commits:
04748e6 [Mich
Repository: spark
Updated Branches:
refs/heads/master 6a1be026c -> aa4ca8b87
[SQL] Improve error messages
Author: Michael Armbrust
Author: wangfei
Closes #4558 from marmbrus/errorMessages and squashes the following commits:
5e5ab50 [Michael Armbrust] Merge pull request #15 from s
Repository: spark
Updated Branches:
refs/heads/branch-1.3 cbd659e5f -> e3a975d45
[SQL] Improve error messages
Author: Michael Armbrust
Author: wangfei
Closes #4558 from marmbrus/errorMessages and squashes the following commits:
5e5ab50 [Michael Armbrust] Merge pull request #15 from s
Repository: spark
Updated Branches:
refs/heads/master 0bf031582 -> c352ffbdb
[SPARK-5758][SQL] Use LongType as the default type for integers in JSON schema
inference.
Author: Yin Huai
Closes #4544 from yhuai/jsonUseLongTypeByDefault and squashes the following
commits:
6e2ffc2 [Yin Huai] U
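The rule the commit describes can be sketched in plain Python; this is a toy stand-in for the schema-inference step, not Spark's actual JsonRDD code:

```python
def infer_json_type(value):
    """Toy JSON type inference mirroring the SPARK-5758 rule:
    integral numbers default to LongType, not IntegerType."""
    if value is None:
        return "NullType"
    if isinstance(value, bool):   # bool is a subclass of int in Python,
        return "BooleanType"      # so it must be checked first
    if isinstance(value, int):
        return "LongType"         # 64-bit default per SPARK-5758
    if isinstance(value, float):
        return "DoubleType"
    return "StringType"

# 3_000_000_000 would overflow a 32-bit IntegerType
assert infer_json_type(3_000_000_000) == "LongType"
```

Defaulting to LongType trades a little memory for never silently overflowing on counters and IDs that exceed 2^31 - 1.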
Repository: spark
Updated Branches:
refs/heads/branch-1.3 bf0d15c52 -> b0c79daf4
[SPARK-5758][SQL] Use LongType as the default type for integers in JSON schema
inference.
Author: Yin Huai
Closes #4544 from yhuai/jsonUseLongTypeByDefault and squashes the following
commits:
6e2ffc2 [Yin Hua
Repository: spark
Updated Branches:
refs/heads/branch-1.3 b0c79daf4 -> c7eb9ee2c
[SPARK-5573][SQL] Add explode to dataframes
Author: Michael Armbrust
Closes #4546 from marmbrus/explode and squashes the following commits:
eefd33a [Michael Armbrust] whitespace
a8d496c [Michael Armbr
Repository: spark
Updated Branches:
refs/heads/master c352ffbdb -> ee04a8b19
[SPARK-5573][SQL] Add explode to dataframes
Author: Michael Armbrust
Closes #4546 from marmbrus/explode and squashes the following commits:
eefd33a [Michael Armbrust] whitespace
a8d496c [Michael Armbrust] Me
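What explode does can be illustrated with a toy version over plain dicts; the real API lives on DataFrame, and this sketch only shows the row multiplication:

```python
def explode(rows, column):
    """Toy explode: one output row per element of a sequence-valued column."""
    out = []
    for row in rows:
        for item in row[column]:
            new_row = dict(row)      # copy the row,
            new_row[column] = item   # replace the sequence with one element
            out.append(new_row)
    return out

rows = [{"id": 1, "words": ["a", "b"]}, {"id": 2, "words": ["c"]}]
exploded = explode(rows, "words")
# [{'id': 1, 'words': 'a'}, {'id': 1, 'words': 'b'}, {'id': 2, 'words': 'c'}]
```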
Repository: spark
Updated Branches:
refs/heads/master ee04a8b19 -> d5fc51491
[SPARK-5755] [SQL] remove unnecessary Add
explain extended select +key from src;
before:
== Parsed Logical Plan ==
'Project [(0 + 'key) AS _c0#8]
'UnresolvedRelation [src], None
== Analyzed Logical Plan ==
Proje
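The plan above shows the problem: a unary `+key` was parsed as `0 + 'key`. The fix drops the no-op addition; a toy simplifier over tuple-encoded expressions (not Catalyst itself) captures the idea:

```python
def simplify(expr):
    """Rewrite ('add', ('lit', 0), e) -> e, recursively."""
    if isinstance(expr, tuple) and expr[0] == "add":
        left, right = simplify(expr[1]), simplify(expr[2])
        if left == ("lit", 0):
            return right             # 0 + e  ==>  e
        return ("add", left, right)
    return expr

# The parsed plan for `select +key from src`:
plan = ("add", ("lit", 0), ("attr", "key"))
assert simplify(plan) == ("attr", "key")
```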
Repository: spark
Updated Branches:
refs/heads/branch-1.3 c7eb9ee2c -> f7103b343
[SPARK-5755] [SQL] remove unnecessary Add
explain extended select +key from src;
before:
== Parsed Logical Plan ==
'Project [(0 + 'key) AS _c0#8]
'UnresolvedRelation [src], None
== Analyzed Logical Plan ==
P
Repository: spark
Updated Branches:
refs/heads/branch-1.3 5c9db4e75 -> 925fd84a1
[SQL] Move SaveMode to SQL package.
Author: Yin Huai
Closes #4542 from yhuai/moveSaveMode and squashes the following commits:
65a4425 [Yin Huai] Move SaveMode to sql package.
(cherry picked from commit c025a46
Repository: spark
Updated Branches:
refs/heads/master ada993e95 -> c025a4688
[SQL] Move SaveMode to SQL package.
Author: Yin Huai
Closes #4542 from yhuai/moveSaveMode and squashes the following commits:
65a4425 [Yin Huai] Move SaveMode to sql package.
Project: http://git-wip-us.apache.org
Repository: spark
Updated Branches:
refs/heads/master c025a4688 -> 1d0596a16
[SPARK-3299][SQL]Public API in SQLContext to list tables
https://issues.apache.org/jira/browse/SPARK-3299
Author: Yin Huai
Closes #4547 from yhuai/tables and squashes the following commits:
6c8f92e [Yin Huai] Add
Repository: spark
Updated Branches:
refs/heads/branch-1.3 925fd84a1 -> edbac178d
[SPARK-3299][SQL]Public API in SQLContext to list tables
https://issues.apache.org/jira/browse/SPARK-3299
Author: Yin Huai
Closes #4547 from yhuai/tables and squashes the following commits:
6c8f92e [Yin Huai]
Repository: spark
Updated Branches:
refs/heads/master 1d0596a16 -> 2aea892eb
[SQL] Fix docs of SQLContext.tables
Author: Yin Huai
Closes #4579 from yhuai/tablesDoc and squashes the following commits:
7f8964c [Yin Huai] Fix doc.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Co
Repository: spark
Updated Branches:
refs/heads/master 5d3cc6b3d -> 2cbb3e433
[SPARK-5642] [SQL] Apply column pruning on unused aggregation fields
select k from (select key k, max(value) v from src group by k) t
Author: Daoyuan Wang
Author: Michael Armbrust
Closes #4415 from adrian-wang/gro
Repository: spark
Updated Branches:
refs/heads/branch-1.3 41603717a -> efffc2e42
[SPARK-5642] [SQL] Apply column pruning on unused aggregation fields
select k from (select key k, max(value) v from src group by k) t
Author: Daoyuan Wang
Author: Michael Armbrust
Closes #4415 from adrian-wang
Repository: spark
Updated Branches:
refs/heads/master 2cbb3e433 -> 2e0c08452
[SPARK-5789][SQL]Throw a better error message if JsonRDD.parseJson encounters
unrecoverable parsing errors.
Author: Yin Huai
Closes #4582 from yhuai/jsonErrorMessage and squashes the following commits:
152dbd4 [Yi
Repository: spark
Updated Branches:
refs/heads/branch-1.3 efffc2e42 -> d9d0250fc
[SPARK-5789][SQL]Throw a better error message if JsonRDD.parseJson encounters
unrecoverable parsing errors.
Author: Yin Huai
Closes #4582 from yhuai/jsonErrorMessage and squashes the following commits:
152dbd4
Repository: spark
Updated Branches:
refs/heads/master 8e25373ce -> cc552e042
[SQL] [Minor] Update the SpecificMutableRow.copy
When profiling the Join / Aggregate queries via VisualVM, I noticed lots of
`SpecificMutableRow` objects created, as well as the `MutableValue`, since the
`SpecificMu
Repository: spark
Updated Branches:
refs/heads/branch-1.3 fef2267cd -> 1a8895560
[SQL] [Minor] Update the SpecificMutableRow.copy
When profiling the Join / Aggregate queries via VisualVM, I noticed lots of
`SpecificMutableRow` objects created, as well as the `MutableValue`, since the
`Specif
Repository: spark
Updated Branches:
refs/heads/master cc552e042 -> 275a0c081
[SPARK-5824] [SQL] add null format in ctas and set default col comment to null
Author: Daoyuan Wang
Closes #4609 from adrian-wang/ctas and squashes the following commits:
0a75d5a [Daoyuan Wang] reorder import
93d18
Repository: spark
Updated Branches:
refs/heads/branch-1.3 c2eaaea9f -> 63fa123f1
[SQL] Initial support for reporting location of error in sql string
Author: Michael Armbrust
Closes #4587 from marmbrus/position and squashes the following commits:
0810052 [Michael Armbrust] fix tests
395c
Repository: spark
Updated Branches:
refs/heads/branch-1.3 1a8895560 -> c2eaaea9f
[SPARK-5824] [SQL] add null format in ctas and set default col comment to null
Author: Daoyuan Wang
Closes #4609 from adrian-wang/ctas and squashes the following commits:
0a75d5a [Daoyuan Wang] reorder import
9
Repository: spark
Updated Branches:
refs/heads/master 275a0c081 -> 104b2c458
[SQL] Initial support for reporting location of error in sql string
Author: Michael Armbrust
Closes #4587 from marmbrus/position and squashes the following commits:
0810052 [Michael Armbrust] fix tests
395c
Repository: spark
Updated Branches:
refs/heads/branch-1.3 63fa123f1 -> 0368494c5
[SQL] Add fetched row count in SparkSQLCLIDriver
before this change:
```scala
Time taken: 0.619 seconds
```
after this change:
```scala
Time taken: 0.619 seconds, Fetched: 4 row(s)
```
Author: OopsOutOfMemory
Repository: spark
Updated Branches:
refs/heads/master 104b2c458 -> b4d7c7032
[SQL] Add fetched row count in SparkSQLCLIDriver
before this change:
```scala
Time taken: 0.619 seconds
```
after this change:
```scala
Time taken: 0.619 seconds, Fetched: 4 row(s)
```
Author: OopsOutOfMemory
Clo
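The change itself is just an extra counter in the CLI driver's result loop; a minimal sketch of the output formatting (a hypothetical helper, not the actual SparkSQLCLIDriver code):

```python
def format_cli_footer(seconds, row_count):
    """Format the CLI timing line with a Hive-style fetched-row count."""
    return f"Time taken: {seconds} seconds, Fetched: {row_count} row(s)"

print(format_cli_footer(0.619, 4))
# Time taken: 0.619 seconds, Fetched: 4 row(s)
```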
Repository: spark
Updated Branches:
refs/heads/master b4d7c7032 -> 6f54dee66
[SPARK-5296] [SQL] Add more filter types for data sources API
This PR adds the following filter types for data sources API:
- `IsNull`
- `IsNotNull`
- `Not`
- `And`
- `Or`
The code which converts Catalyst predicate
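The semantics of the new filter types can be sketched with a tiny evaluator over `(name, *args)` tuples and a row dict; illustrative only, not the `org.apache.spark.sql.sources` classes themselves:

```python
def eval_filter(f, row):
    """Toy evaluator for the data source filter types listed above."""
    op = f[0]
    if op == "IsNull":
        return row.get(f[1]) is None
    if op == "IsNotNull":
        return row.get(f[1]) is not None
    if op == "Not":
        return not eval_filter(f[1], row)
    if op == "And":
        return eval_filter(f[1], row) and eval_filter(f[2], row)
    if op == "Or":
        return eval_filter(f[1], row) or eval_filter(f[2], row)
    raise ValueError(f"unknown filter: {op}")

row = {"a": None, "b": 1}
assert eval_filter(("And", ("IsNull", "a"), ("IsNotNull", "b")), row)
```

Pushing these down lets a data source skip rows before they ever reach Catalyst.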
Repository: spark
Updated Branches:
refs/heads/branch-1.3 0368494c5 -> 363a9a7d5
[SPARK-5296] [SQL] Add more filter types for data sources API
This PR adds the following filter types for data sources API:
- `IsNull`
- `IsNotNull`
- `Not`
- `And`
- `Or`
The code which converts Catalyst predicate
Repository: spark
Updated Branches:
refs/heads/master 6f54dee66 -> c51ab37fa
[SPARK-5833] [SQL] Adds REFRESH TABLE command
Lifts `HiveMetastoreCatalog.refreshTable` to `Catalog`. Adds `RefreshTable`
command to refresh (possibly cached) metadata in external data source tables.
[https://revi
Repository: spark
Updated Branches:
refs/heads/branch-1.3 363a9a7d5 -> 864d77e0d
[SPARK-5833] [SQL] Adds REFRESH TABLE command
Lifts `HiveMetastoreCatalog.refreshTable` to `Catalog`. Adds `RefreshTable`
command to refresh (possibly cached) metadata in external data source tables.
[https://
Repository: spark
Updated Branches:
refs/heads/master 04b401da8 -> 5b6cd65cd
[SPARK-5746][SQL] Check invalid cases for the write path of data source API
JIRA: https://issues.apache.org/jira/browse/SPARK-5746
liancheng marmbrus
Author: Yin Huai
Closes #4617 from yhuai/insertOverwrite
Repository: spark
Updated Branches:
refs/heads/branch-1.3 ad8fd4fb3 -> 419865475
[SPARK-5746][SQL] Check invalid cases for the write path of data source API
JIRA: https://issues.apache.org/jira/browse/SPARK-5746
liancheng marmbrus
Author: Yin Huai
Closes #4617 from yhuai/insertOverwr
Repository: spark
Updated Branches:
refs/heads/master 5b6cd65cd -> f3ff1eb29
[SPARK-5839][SQL]HiveMetastoreCatalog does not recognize table names and
aliases of data source tables.
JIRA: https://issues.apache.org/jira/browse/SPARK-5839
Author: Yin Huai
Closes #4626 from yhuai/SPARK-5839 an
Repository: spark
Updated Branches:
refs/heads/branch-1.3 419865475 -> a15a0a02c
[SPARK-5839][SQL]HiveMetastoreCatalog does not recognize table names and
aliases of data source tables.
JIRA: https://issues.apache.org/jira/browse/SPARK-5839
Author: Yin Huai
Closes #4626 from yhuai/SPARK-583
Repository: spark
Updated Branches:
refs/heads/master f3ff1eb29 -> cb6c48c87
[SQL] Optimize arithmetic and predicate operators
Existing implementations of arithmetic operators and BinaryComparison operators
have redundant type-checking code, e.g.:
Expression.n2 is used by Add/Subtract/Multipl
Repository: spark
Updated Branches:
refs/heads/branch-1.3 a15a0a02c -> 639a3c2fd
[SQL] Optimize arithmetic and predicate operators
Existing implementations of arithmetic operators and BinaryComparison operators
have redundant type-checking code, e.g.:
Expression.n2 is used by Add/Subtract/Mul
Repository: spark
Updated Branches:
refs/heads/master cb6c48c87 -> e189cbb05
[SPARK-4865][SQL]Include temporary tables in SHOW TABLES
This PR adds a `ShowTablesCommand` to support `SHOW TABLES [IN databaseName]`
SQL command. The result of `SHOW TABLES` has two columns, `tableName` and
`isTemp
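A toy model of the command's result (assuming the truncated second column name above is `isTemporary`): merge metastore tables with temporary ones and flag each row.

```python
def show_tables(metastore_tables, temp_tables):
    """Toy SHOW TABLES: rows of (tableName, isTemporary)."""
    rows = [(name, False) for name in sorted(metastore_tables)]
    rows += [(name, True) for name in sorted(temp_tables)]
    return rows

assert show_tables({"src"}, {"my_temp"}) == [("src", False), ("my_temp", True)]
```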
Repository: spark
Updated Branches:
refs/heads/branch-1.3 639a3c2fd -> 8a94bf76b
[SPARK-4865][SQL]Include temporary tables in SHOW TABLES
This PR adds a `ShowTablesCommand` to support `SHOW TABLES [IN databaseName]`
SQL command. The result of `SHOW TABLES` has two columns, `tableName` and
`is
[SPARK-5166][SPARK-5247][SPARK-5258][SQL] API Cleanup / Documentation
Author: Michael Armbrust
Closes #4642 from marmbrus/docs and squashes the following commits:
d291c34 [Michael Armbrust] python tests
9be66e3 [Michael Armbrust] comments
d56afc2 [Michael Armbrust] fix style
f004747 [Michael
Repository: spark
Updated Branches:
refs/heads/master c74b07fa9 -> d8adefefc
[SPARK-5859] [PySpark] [SQL] fix DataFrame Python API
1. added explain()
2. add isLocal()
3. do not call show() in __repr__
4. add foreach() and foreachPartition()
5. add distinct()
6. fix functions.col()/column()/lit
Repository: spark
Updated Branches:
refs/heads/master c76da36c2 -> c74b07fa9
http://git-wip-us.apache.org/repos/asf/spark/blob/c74b07fa/sql/core/src/test/scala/org/apache/spark/sql/jdbc/MySQLIntegration.scala
--
diff --git
a/s
[SPARK-5166][SPARK-5247][SPARK-5258][SQL] API Cleanup / Documentation
Author: Michael Armbrust
Closes #4642 from marmbrus/docs and squashes the following commits:
d291c34 [Michael Armbrust] python tests
9be66e3 [Michael Armbrust] comments
d56afc2 [Michael Armbrust] fix style
f004747 [Michael
Repository: spark
Updated Branches:
refs/heads/branch-1.3 97cb568a2 -> cd3d41587
http://git-wip-us.apache.org/repos/asf/spark/blob/cd3d4158/sql/core/src/test/scala/org/apache/spark/sql/jdbc/MySQLIntegration.scala
--
diff --git
Repository: spark
Updated Branches:
refs/heads/branch-1.3 cd3d41587 -> 4a581aa3f
[SPARK-5859] [PySpark] [SQL] fix DataFrame Python API
1. added explain()
2. add isLocal()
3. do not call show() in __repr__
4. add foreach() and foreachPartition()
5. add distinct()
6. fix functions.col()/column()
Repository: spark
Updated Branches:
refs/heads/master fc4eb9505 -> 31efb39c1
[Minor] fix typo in SQL document
Author: CodingCat
Closes #4656 from CodingCat/fix_typo and squashes the following commits:
b41d15c [CodingCat] recover
689fe46 [CodingCat] fix typo
Project: http://git-wip-us.apac
Repository: spark
Updated Branches:
refs/heads/branch-1.3 71cf6e295 -> 5636c4a58
[Minor] fix typo in SQL document
Author: CodingCat
Closes #4656 from CodingCat/fix_typo and squashes the following commits:
b41d15c [CodingCat] recover
689fe46 [CodingCat] fix typo
(cherry picked from commit 3
Repository: spark
Updated Branches:
refs/heads/master 31efb39c1 -> 4611de1ce
[SPARK-5862][SQL] Only transformUp the given plan once in HiveMetastoreCatalog
Current `ParquetConversions` in `HiveMetastoreCatalog` will transformUp the
given plan multiple times if there are many Metastore Parquet
Repository: spark
Updated Branches:
refs/heads/branch-1.3 5636c4a58 -> 62063b7a3
[SPARK-5862][SQL] Only transformUp the given plan once in HiveMetastoreCatalog
Current `ParquetConversions` in `HiveMetastoreCatalog` will transformUp the
given plan multiple times if there are many Metastore Par
Repository: spark
Updated Branches:
refs/heads/branch-1.3 62063b7a3 -> d74d5e86a
[Minor][SQL] Use same function to check path parameter in JSONRelation
Author: Liang-Chi Hsieh
Closes #4649 from viirya/use_checkpath and squashes the following commits:
0f9a1a1 [Liang-Chi Hsieh] Use same funct
Repository: spark
Updated Branches:
refs/heads/master ac506b7c2 -> 9d281fa56
[SQL] [Minor] Update the HiveContext Unittest
In the unit test, the table src(key INT, value STRING) is not the same as Hive's
src(key STRING, value STRING)
https://github.com/apache/hive/blob/branch-0.13/data/scripts/q_te
Repository: spark
Updated Branches:
refs/heads/branch-1.3 d74d5e86a -> 01356514e
[SQL] [Minor] Update the HiveContext Unittest
In the unit test, the table src(key INT, value STRING) is not the same as Hive's
src(key STRING, value STRING)
https://github.com/apache/hive/blob/branch-0.13/data/scripts/
Repository: spark
Updated Branches:
refs/heads/master 4611de1ce -> ac506b7c2
[Minor][SQL] Use same function to check path parameter in JSONRelation
Author: Liang-Chi Hsieh
Closes #4649 from viirya/use_checkpath and squashes the following commits:
0f9a1a1 [Liang-Chi Hsieh] Use same function
Repository: spark
Updated Branches:
refs/heads/branch-1.3 01356514e -> e65dc1fd5
[SPARK-5868][SQL] Fix python UDFs in HiveContext and checks in SQLContext
Author: Michael Armbrust
Closes #4657 from marmbrus/pythonUdfs and squashes the following commits:
a7823a8 [Michael Armbrust] [SP
Repository: spark
Updated Branches:
refs/heads/master 9d281fa56 -> de4836f8f
[SPARK-5868][SQL] Fix python UDFs in HiveContext and checks in SQLContext
Author: Michael Armbrust
Closes #4657 from marmbrus/pythonUdfs and squashes the following commits:
a7823a8 [Michael Armbrust] [SPARK-5
Repository: spark
Updated Branches:
refs/heads/master 445a755b8 -> 3df85dccb
[SPARK-5871] output explain in Python
Author: Davies Liu
Closes #4658 from davies/explain and squashes the following commits:
db87ea2 [Davies Liu] output explain in Python
Project: http://git-wip-us.apache.org/re
Repository: spark
Updated Branches:
refs/heads/branch-1.3 35e23ff14 -> cb061603c
[SPARK-5871] output explain in Python
Author: Davies Liu
Closes #4658 from davies/explain and squashes the following commits:
db87ea2 [Davies Liu] output explain in Python
(cherry picked from commit 3df85dccbc
Repository: spark
Updated Branches:
refs/heads/master 3df85dccb -> 4d4cc760f
[SPARK-5872] [SQL] create a sqlCtx in pyspark shell
The sqlCtx will be a HiveContext if Hive is built into the assembly jar, or a
SQLContext if not.
It also skips the Hive tests in pyspark.sql.tests if no Hive is available.
A
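The fallback logic can be sketched as an ordinary try/except; `make_hive_context` and `make_sql_context` below are hypothetical stand-ins for the real constructors, not PySpark's shell.py code:

```python
def create_sql_ctx(make_hive_context, make_sql_context):
    """Prefer a HiveContext; fall back to SQLContext when Hive is absent."""
    try:
        return make_hive_context()
    except Exception:
        return make_sql_context()

# Simulate an assembly built without Hive: the Hive constructor raises.
def no_hive():
    raise RuntimeError("Hive classes not found")

ctx = create_sql_ctx(no_hive, lambda: "SQLContext")
assert ctx == "SQLContext"
```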
Repository: spark
Updated Branches:
refs/heads/branch-1.3 cb061603c -> 0dba382ee
[SPARK-5872] [SQL] create a sqlCtx in pyspark shell
The sqlCtx will be a HiveContext if Hive is built into the assembly jar, or a
SQLContext if not.
It also skips the Hive tests in pyspark.sql.tests if no Hive is available.
Repository: spark
Updated Branches:
refs/heads/master 4d4cc760f -> 117121a4e
[SPARK-5852][SQL]Fail to convert a newly created empty metastore parquet table
to a data source parquet table.
The problem is that after we create an empty hive metastore parquet table (e.g.
`CREATE TABLE test (a in
Repository: spark
Updated Branches:
refs/heads/branch-1.3 0dba382ee -> 07d8ef9e7
[SPARK-5852][SQL]Fail to convert a newly created empty metastore parquet table
to a data source parquet table.
The problem is that after we create an empty hive metastore parquet table (e.g.
`CREATE TABLE test (
Repository: spark
Updated Branches:
refs/heads/master a51fc7ef9 -> d5f12bfe8
[SPARK-5875][SQL]logical.Project should not be resolved if it contains
aggregates or generators
https://issues.apache.org/jira/browse/SPARK-5875 has a case to reproduce the
bug and explain the root cause.
Author: Y
Repository: spark
Updated Branches:
refs/heads/branch-1.3 7320605ad -> e8284b29d
[SPARK-5875][SQL]logical.Project should not be resolved if it contains
aggregates or generators
https://issues.apache.org/jira/browse/SPARK-5875 has a case to reproduce the
bug and explain the root cause.
Autho
Repository: spark
Updated Branches:
refs/heads/master d5f12bfe8 -> e50934f11
[SPARK-5723][SQL]Change the default file format to Parquet for CTAS statements.
JIRA: https://issues.apache.org/jira/browse/SPARK-5723
Author: Yin Huai
This patch had conflicts when merged, resolved by
Committer: M
Repository: spark
Updated Branches:
refs/heads/branch-1.3 2ab0ba04f -> 6e82c46bf
[SPARK-5723][SQL]Change the default file format to Parquet for CTAS statements.
JIRA: https://issues.apache.org/jira/browse/SPARK-5723
Author: Yin Huai
This patch had conflicts when merged, resolved by
Committe
Repository: spark
Updated Branches:
refs/heads/branch-1.2 068ba45cf -> 36e15b48e
[SPARK-4903][SQL]Backport the bug fix for SPARK-4903
The original fix was a part of https://issues.apache.org/jira/browse/SPARK-4912
(commit
https://github.com/apache/spark/commit/6463e0b9e8067cce70602c5c9006a25
Repository: spark
Updated Branches:
refs/heads/master a8eb92dcb -> f0e3b7107
[SPARK-5840][SQL] HiveContext cannot be serialized due to tuple extraction
Also added test cases for checking the serializability of HiveContext and
SQLContext.
Author: Reynold Xin
Closes #4628 from rxin/SPARK-584
Repository: spark
Updated Branches:
refs/heads/branch-1.3 56f8f295c -> b86e44cd9
[SPARK-5840][SQL] HiveContext cannot be serialized due to tuple extraction
Also added test cases for checking the serializability of HiveContext and
SQLContext.
Author: Reynold Xin
Closes #4628 from rxin/SPARK
Repository: spark
Updated Branches:
refs/heads/master f0e3b7107 -> aa8f10e82
[SPARK-5722] [SQL] [PySpark] infer int as LongType
Python's `int` is 64-bit on a 64-bit machine (very common now), so we should
infer it as LongType in Spark SQL.
Also, a LongType in SQL will come back as an `int`.
Author:
Repository: spark
Updated Branches:
refs/heads/branch-1.3 b86e44cd9 -> 470cba82c
[SPARK-5722] [SQL] [PySpark] infer int as LongType
Python's `int` is 64-bit on a 64-bit machine (very common now), so we should
infer it as LongType in Spark SQL.
Also, a LongType in SQL will come back as an `int`.
Aut
Repository: spark
Updated Branches:
refs/heads/master 94cdb05ff -> 8ca3418e1
[SPARK-5904][SQL] DataFrame API fixes.
1. Column is no longer a DataFrame to simplify class hierarchy.
2. Don't use varargs on abstract methods (see Scala compiler bug SI-9013).
Author: Reynold Xin
Closes #4686 fro
Repository: spark
Updated Branches:
refs/heads/branch-1.3 fe00eb66e -> 55d91d92b
[SPARK-5904][SQL] DataFrame API fixes.
1. Column is no longer a DataFrame to simplify class hierarchy.
2. Don't use varargs on abstract methods (see Scala compiler bug SI-9013).
Author: Reynold Xin
Closes #4686
Repository: spark
Updated Branches:
refs/heads/master 4a17eedb1 -> 5b0a42cb1
[SPARK-5898] [SPARK-5896] [SQL] [PySpark] create DataFrame from pandas and
tuple/list
Fix createDataFrame() from pandas DataFrame (not tested by jenkins, depends on
SPARK-5693).
It also supports creating a DataFrame from a tuple/list.
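For the tuple/list path, the core of createDataFrame is just pairing each record with the column names; a toy sketch (not the real PySpark method, which also infers types and handles pandas DataFrames):

```python
def create_data_frame(data, schema):
    """Toy createDataFrame(data, schema) for rows given as tuples/lists."""
    rows = []
    for record in data:
        if len(record) != len(schema):
            raise ValueError("row length does not match schema")
        rows.append(dict(zip(schema, record)))  # pair values with column names
    return rows

df = create_data_frame([(1, "a"), (2, "b")], ["id", "name"])
assert df == [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
```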
Repository: spark
Updated Branches:
refs/heads/branch-1.3 8c12f3114 -> 913562ae7
[SPARK-5898] [SPARK-5896] [SQL] [PySpark] create DataFrame from pandas and
tuple/list
Fix createDataFrame() from pandas DataFrame (not tested by jenkins, depends on
SPARK-5693).
It also supports creating a DataFrame from a tuple/list.
Repository: spark
Updated Branches:
refs/heads/branch-1.3 913562ae7 -> b9a6c5c84
[SPARK-5909][SQL] Add a clearCache command to Spark SQL's cache manager
JIRA: https://issues.apache.org/jira/browse/SPARK-5909
Author: Yin Huai
Closes #4694 from yhuai/clearCache and squashes the following comm
Repository: spark
Updated Branches:
refs/heads/branch-1.3 ae9704010 -> 33ccad20e
[SPARK-5935][SQL] Accept MapType in the schema provided to a JSON dataset.
JIRA: https://issues.apache.org/jira/browse/SPARK-5935
Author: Yin Huai
Author: Yin Huai
Closes #4710 from yhuai/jsonMapType and squas
Repository: spark
Updated Branches:
refs/heads/master 59536cc87 -> 48376bfe9
[SPARK-5935][SQL] Accept MapType in the schema provided to a JSON dataset.
JIRA: https://issues.apache.org/jira/browse/SPARK-5935
Author: Yin Huai
Author: Yin Huai
Closes #4710 from yhuai/jsonMapType and squashes
Python (which is 64-bit on 64-bit machines).
Closes #4521
cc dondrake marmbrus
Author: Davies Liu
Closes #4681 from davies/long2 and squashes the following commits:
05ef1c8 [Davies Liu] infer LongType for int in Python
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-
Repository: spark
Updated Branches:
refs/heads/branch-1.3 33ccad20e -> 2d7786ed1
[SPARK-5873][SQL] Allow viewing of partially analyzed plans in queryExecution
Author: Michael Armbrust
Closes #4684 from marmbrus/explainAnalysis and squashes the following commits:
afbaa19 [Michael Armbr
Repository: spark
Updated Branches:
refs/heads/master 48376bfe9 -> 1ed57086d
[SPARK-5873][SQL] Allow viewing of partially analyzed plans in queryExecution
Author: Michael Armbrust
Closes #4684 from marmbrus/explainAnalysis and squashes the following commits:
afbaa19 [Michael Armbrust]
Repository: spark
Updated Branches:
refs/heads/master cf2e41653 -> 840333133
[SPARK-5968] [SQL] Suppresses ParquetOutputCommitter WARN logs
Please refer to the [JIRA ticket] [1] for the motivation.
[1]: https://issues.apache.org/jira/browse/SPARK-5968
[https://reviewable.io/review_button.pn
Repository: spark
Updated Branches:
refs/heads/branch-1.3 dd4255850 -> 2b562b043
[SPARK-5968] [SQL] Suppresses ParquetOutputCommitter WARN logs
Please refer to the [JIRA ticket] [1] for the motivation.
[1]: https://issues.apache.org/jira/browse/SPARK-5968
[https://reviewable.io/review_butto
Repository: spark
Updated Branches:
refs/heads/master 840333133 -> 0a59e45e2
[SPARK-5910][SQL] Support for as in selectExpr
Author: Michael Armbrust
Closes #4736 from marmbrus/asExprs and squashes the following commits:
5ba97e4 [Michael Armbrust] [SPARK-5910][SQL] Support for as
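The feature is parsing an optional `AS alias` suffix inside each selectExpr string; a regex-based toy parser (not Spark's SqlParser) shows the shape:

```python
import re

def parse_select_expr(s):
    """Toy parse of "expr AS alias" into (expression, alias-or-None)."""
    m = re.match(r"(?i)^(.*?)\s+as\s+(\w+)\s*$", s.strip())
    if m:
        return m.group(1), m.group(2)
    return s.strip(), None

assert parse_select_expr("abs(colA) AS absA") == ("abs(colA)", "absA")
assert parse_select_expr("colB") == ("colB", None)
```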
Repository: spark
Updated Branches:
refs/heads/branch-1.3 2b562b043 -> ba5d60dda
[SPARK-5910][SQL] Support for as in selectExpr
Author: Michael Armbrust
Closes #4736 from marmbrus/asExprs and squashes the following commits:
5ba97e4 [Michael Armbrust] [SPARK-5910][SQL] Support for as
Repository: spark
Updated Branches:
refs/heads/master 0a59e45e2 -> 201236628
[SPARK-5532][SQL] Repartition should not use external rdd representation
Author: Michael Armbrust
Closes #4738 from marmbrus/udtRepart and squashes the following commits:
c06d7b5 [Michael Armbrust] fix compilat
Repository: spark
Updated Branches:
refs/heads/branch-1.3 ba5d60dda -> e46096b1e
[SPARK-5532][SQL] Repartition should not use external rdd representation
Author: Michael Armbrust
Closes #4738 from marmbrus/udtRepart and squashes the following commits:
c06d7b5 [Michael Armbrust]
Repository: spark
Updated Branches:
refs/heads/master c5ba975ee -> a2b913792
[SPARK-5952][SQL] Lock when using hive metastore client
Author: Michael Armbrust
Closes #4746 from marmbrus/hiveLock and squashes the following commits:
8b871cf [Michael Armbrust] [SPARK-5952][SQL] Lock when us
Repository: spark
Updated Branches:
refs/heads/branch-1.3 a4ff445a9 -> 641423dbf
[SPARK-5952][SQL] Lock when using hive metastore client
Author: Michael Armbrust
Closes #4746 from marmbrus/hiveLock and squashes the following commits:
8b871cf [Michael Armbrust] [SPARK-5952][SQL] Lock w
Repository: spark
Updated Branches:
refs/heads/branch-1.3 17ee2460a -> 78a1781a9
[SPARK-5904][SQL] DataFrame Java API test suites.
Added a new test suite to make sure Java DF programs can use varargs properly.
Also moved all suites into test.org.apache.spark package to make sure the
suites al
Repository: spark
Updated Branches:
refs/heads/master f816e7390 -> 53a1ebf33
[SPARK-5904][SQL] DataFrame Java API test suites.
Added a new test suite to make sure Java DF programs can use varargs properly.
Also moved all suites into test.org.apache.spark package to make sure the
suites also t
Repository: spark
Updated Branches:
refs/heads/master 53a1ebf33 -> fba11c2f5
[SPARK-5985][SQL] DataFrame sortBy -> orderBy in Python.
Also added desc/asc function for constructing sorting expressions more
conveniently. And added a small fix to lift alias out of cast expression.
Author: Reyno
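desc/asc build sorting expressions that orderBy consumes; a toy model with `(column, ascending)` keys (not the real pyspark.sql.functions API) illustrates the shape:

```python
def asc(col):
    """Toy ascending sort expression."""
    return (col, True)

def desc(col):
    """Toy descending sort expression."""
    return (col, False)

def order_by(rows, key):
    """Apply one (column, ascending) sort key with an ordinary sort."""
    col, ascending = key
    return sorted(rows, key=lambda r: r[col], reverse=not ascending)

rows = [{"age": 5}, {"age": 2}, {"age": 9}]
assert [r["age"] for r in order_by(rows, desc("age"))] == [9, 5, 2]
```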
Repository: spark
Updated Branches:
refs/heads/branch-1.3 78a1781a9 -> 5e233b2c7
[SPARK-5985][SQL] DataFrame sortBy -> orderBy in Python.
Also added desc/asc function for constructing sorting expressions more
conveniently. And added a small fix to lift alias out of cast expression.
Author: R
Repository: spark
Updated Branches:
refs/heads/master 922b43b3c -> 769e092bd
[SPARK-5286][SQL] SPARK-5286 followup
https://issues.apache.org/jira/browse/SPARK-5286
Author: Yin Huai
Closes #4755 from yhuai/SPARK-5286-throwable and squashes the following commits:
4c0c450 [Yin Huai] Catch Thr
Repository: spark
Updated Branches:
refs/heads/branch-1.3 1e9489422 -> e7a748ecf
[SPARK-5286][SQL] SPARK-5286 followup
https://issues.apache.org/jira/browse/SPARK-5286
Author: Yin Huai
Closes #4755 from yhuai/SPARK-5286-throwable and squashes the following commits:
4c0c450 [Yin Huai] Catch
Repository: spark
Updated Branches:
refs/heads/master 769e092bd -> d641fbb39
[SPARK-5994] [SQL] Python DataFrame documentation fixes
select empty should NOT be the same as select. make sure selectExpr is behaving
the same.
join param documentation
link to source doesn't work in jekyll generat
Repository: spark
Updated Branches:
refs/heads/branch-1.3 e7a748ecf -> 5c421e030
[SPARK-5994] [SQL] Python DataFrame documentation fixes
select empty should NOT be the same as select. make sure selectExpr is behaving
the same.
join param documentation
link to source doesn't work in jekyll gen
Repository: spark
Updated Branches:
refs/heads/branch-1.3 791df93cd -> 9aca3c688
[SPARK-5944] [PySpark] fix version in Python API docs
use RELEASE_VERSION when building the Python API docs
Author: Davies Liu
Closes #4731 from davies/api_version and squashes the following commits:
c9744c9 [