Repository: spark
Updated Branches:
refs/heads/branch-2.2 02bf5547a -> 5842eeca5
[SPARK-20725][SQL] partial aggregate should behave correctly for sameResult
## What changes were proposed in this pull request?
For aggregate function with `PartialMerge` or `Final` mode, the input is
aggregate
Repository: spark
Updated Branches:
refs/heads/master 3f98375d8 -> 1283c3d11
[SPARK-20725][SQL] partial aggregate should behave correctly for sameResult
## What changes were proposed in this pull request?
For aggregate function with `PartialMerge` or `Final` mode, the input is
aggregate
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17964
LGTM - merging to master/2.2/2.1
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17964
LGTM
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17960
LGTM - merging to master. Thanks!
Repository: spark
Updated Branches:
refs/heads/master e3d2022e4 -> b84ff7eb6
[SPARK-20719][SQL] Support LIMIT ALL
### What changes were proposed in this pull request?
`LIMIT ALL` is the same as omitting the `LIMIT` clause. It is supported by both
PostgreSQL and Presto. This PR is to support
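The semantics can be sketched with a toy Python evaluator (hypothetical helper, not Spark code): `LIMIT ALL` behaves exactly like omitting the limit.

```python
# Toy model of LIMIT semantics, for illustration only (not Spark's evaluator).
# A limit of None or "ALL" means "return everything".
def apply_limit(rows, limit=None):
    """Return rows truncated to `limit`; None or "ALL" means no limit."""
    if limit is None or limit == "ALL":
        return list(rows)
    return list(rows)[:limit]

rows = [1, 2, 3, 4, 5]
assert apply_limit(rows, "ALL") == apply_limit(rows)  # LIMIT ALL == no LIMIT
assert apply_limit(rows, 3) == [1, 2, 3]
```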
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17964#discussion_r116306324
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
---
@@ -429,17 +429,13 @@ object QueryPlan
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17960#discussion_r116300475
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -279,7 +279,12 @@ class AstBuilder extends
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17953#discussion_r116138305
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -1504,6 +1504,7 @@ class AstBuilder extends
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17953
@LantaoJin Can you add a description and a test case for this? You can take
a look at the OrcSourceSuite to get an idea how to work with Hive.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17953
ok to test
Repository: spark
Updated Branches:
refs/heads/branch-2.2 358516dcb -> 86cef4df5
[SPARK-19447] Remove remaining references to generated rows metric
## What changes were proposed in this pull request?
https://github.com/apache/spark/commit/b486ffc86d8ad6c303321dcf8514afee723f61f8
left behind
Repository: spark
Updated Branches:
refs/heads/master fcb88f921 -> 5c2c4dcce
[SPARK-19447] Remove remaining references to generated rows metric
## What changes were proposed in this pull request?
https://github.com/apache/spark/commit/b486ffc86d8ad6c303321dcf8514afee723f61f8
left behind
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17939
LGTM - merging to master/2.2. Thanks!
Repository: spark
Updated Branches:
refs/heads/branch-2.1 12c937ede -> 50f28dfe4
[SPARK-17685][SQL] Make SortMergeJoinExec's currentVars is null when calling
createJoinKey
## What changes were proposed in this pull request?
The following SQL query causes an `IndexOutOfBoundsException`
Repository: spark
Updated Branches:
refs/heads/branch-2.2 7600a7ab6 -> 6a996b362
[SPARK-17685][SQL] Make SortMergeJoinExec's currentVars is null when calling
createJoinKey
## What changes were proposed in this pull request?
The following SQL query causes an `IndexOutOfBoundsException`
Repository: spark
Updated Branches:
refs/heads/master c0189abc7 -> 771abeb46
[SPARK-17685][SQL] Make SortMergeJoinExec's currentVars is null when calling
createJoinKey
## What changes were proposed in this pull request?
The following SQL query causes an `IndexOutOfBoundsException` when
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17920
LGTM - merging to master/2.2. Thanks!
Repository: spark
Updated Branches:
refs/heads/branch-2.2 73aa23b8e -> c7bd909f6
[SPARK-19876][BUILD] Move Trigger.java to java source hierarchy
## What changes were proposed in this pull request?
Simply moves `Trigger.java` to `src/main/java` from `src/main/scala`
See
Repository: spark
Updated Branches:
refs/heads/master d099f414d -> 25ee816e0
[SPARK-19876][BUILD] Move Trigger.java to java source hierarchy
## What changes were proposed in this pull request?
Simply moves `Trigger.java` to `src/main/java` from `src/main/scala`
See
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17921
LGTM - merging to master/2.2
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17899#discussion_r115370303
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -609,6 +610,19 @@ object CollapseWindow extends
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17666#discussion_r115361494
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveTableValuedFunctions.scala
---
@@ -57,19 +57,21 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17666#discussion_r115361167
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveTableValuedFunctions.scala
---
@@ -57,19 +57,21 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17666#discussion_r115358211
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
---
@@ -441,4 +440,15 @@ class AnalysisSuite extends
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17666#discussion_r115357687
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -498,12 +498,16 @@ case class Sort
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17736
For some reference: in 1.6 we used the Catalyst SqlParser to parse the
expression in `Dataframe.filter()`, and we used the Hive (ANTLR based) parser
for parsing SQL commands. In Spark 2.0 we
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17770
I am not a giant fan of the `resolveOperators*` approach, it is yet another
code path that does something similar to the `transform*` code path, it
introduces some mutable state, and I have been
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17836
cc @michal-databricks
Repository: spark
Updated Branches:
refs/heads/branch-2.2 c199764ba -> c80242ab9
[SPARK-20567] Lazily bind in GenerateExec
It is not valid to eagerly bind with the child's output as this causes failures
when we attempt to canonicalize the plan (replacing the attribute references
with
Repository: spark
Updated Branches:
refs/heads/master b946f3160 -> 6235132a8
[SPARK-20567] Lazily bind in GenerateExec
It is not valid to eagerly bind with the child's output as this causes failures
when we attempt to canonicalize the plan (replacing the attribute references
with dummies).
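The eager-vs-lazy distinction can be sketched with a toy model (illustrative Python with hypothetical names, not Spark's actual bound-reference machinery): an eagerly bound reference fixes itself against the child's output at construction time, so it fails once canonicalization has replaced that output with dummies, while a lazy binder resolves against whatever the output is when it is finally invoked.

```python
# Toy illustration only (not Spark internals).
class Plan:
    def __init__(self, output):
        self.output = output  # attribute names, in order

def bind_eager(plan, attr):
    # Resolves immediately; raises ValueError if attr is missing right now.
    return plan.output.index(attr)

def bind_lazy(attr):
    # Defers resolution until the binder is actually invoked.
    return lambda plan: plan.output.index(attr)

plan = Plan(["a", "b", "c"])
lazy_b = bind_lazy("b")
# "Canonicalization" rewrites the surrounding attributes to dummies:
plan.output = ["dummy_0", "b", "dummy_2"]
assert lazy_b(plan) == 1  # lazy binding still resolves

# Eager binding against an already-canonicalized output fails outright:
try:
    bind_eager(Plan(["dummy_0", "dummy_1"]), "b")
except ValueError:
    pass  # this is the kind of failure lazy binding avoids
```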
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17838
LGTM - merging to master/2.2
Repository: spark
Updated Branches:
refs/heads/master 259860d23 -> 943a684b9
[SPARK-20548] Disable ReplSuite.newProductSeqEncoder with REPL defined class
## What changes were proposed in this pull request?
`newProductSeqEncoder with REPL defined class` in `ReplSuite` has been failing
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17823
LGTM
Repository: spark
Updated Branches:
refs/heads/branch-2.2 c5f559315 -> c5beabcbd
[SPARK-20492][SQL] Do not print empty parentheses for invalid primitive types
in parser
## What changes were proposed in this pull request?
Currently, when the type string is invalid, it ends up printing empty
Repository: spark
Updated Branches:
refs/heads/master 4d99b95ad -> 1ee494d08
[SPARK-20492][SQL] Do not print empty parentheses for invalid primitive types
in parser
## What changes were proposed in this pull request?
Currently, when the type string is invalid, it ends up printing empty
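The fix amounts to a formatting rule that can be sketched as follows (a hypothetical formatter for illustration, not Spark's actual parser code): only emit parentheses when the type actually has parameters.

```python
# Hypothetical type-name formatter: omit the parentheses entirely when a
# type has no parameters, instead of printing e.g. "int()".
def format_type(name, params=()):
    return f"{name}({', '.join(params)})" if params else name

assert format_type("decimal", ("10", "0")) == "decimal(10, 0)"
assert format_type("int") == "int"  # no empty "()"
```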
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17784
Yes, it can. Merging to master/2.2. Thanks!
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17810#discussion_r114077798
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/GeneratorFunctionSuite.scala ---
@@ -91,7 +91,7 @@ class GeneratorFunctionSuite extends QueryTest
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17810#discussion_r114066762
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/GeneratorFunctionSuite.scala ---
@@ -91,7 +91,7 @@ class GeneratorFunctionSuite extends QueryTest
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/17810
[SPARK-20534][SQL] Make outer generate exec return empty rows
## What changes were proposed in this pull request?
Generate exec does not produce `null` values if the generator for the input
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17804
@anabranch can you open this against branch-2.1 instead of master?
Repository: spark
Updated Branches:
refs/heads/branch-2.2 3d53d825e -> e02b6ebfd
[SPARK-12837][CORE] Do not send the name of internal accumulator to executor
side
## What changes were proposed in this pull request?
When sending accumulator updates back to driver, the network overhead is
Repository: spark
Updated Branches:
refs/heads/master 823baca2c -> b90bf520f
[SPARK-12837][CORE] Do not send the name of internal accumulator to executor
side
## What changes were proposed in this pull request?
When sending accumulator updates back to driver, the network overhead is pretty
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17596
LGTM - merging to master/2.2
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17791
This is very similar to https://github.com/apache/spark/pull/16804/files
however that approach, like this one, is slightly broken (because it does not
support nested char/varchar columns), can
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17784
LGTM - pending jenkins
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/1
add to whitelist
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/1
ok to test
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17770#discussion_r113474943
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -72,6 +72,34 @@ object CurrentOrigin
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17771
This is kinda trivial, but sure.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17771
ok to test
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17772
ok to test
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17773
ok to test
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17749
can you close?
Repository: spark
Updated Branches:
refs/heads/branch-2.1 ba505805d -> d99b49b11
[SPARK-20450][SQL] Unexpected first-query schema inference cost with 2.1.1
## What changes were proposed in this pull request?
https://issues.apache.org/jira/browse/SPARK-19611 fixes a regression from 2.0
where
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17749
LGTM - merging to 2.1. Thanks!
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17721
@windpiger could you make sure that the freshly added events still work
after this change?
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17716#discussion_r112829206
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -160,7 +159,7 @@ abstract class SQLViewSuite extends
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17724
LGTM
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17720
LGTM
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17716#discussion_r112698843
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -636,17 +636,16 @@ class Analyzer
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17716
cc @cloud-fan @ueshin @rxin
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17716#discussion_r112662001
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -636,17 +636,16 @@ class Analyzer
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/17716
[SPARK-19952][SQL] Remove various analysis exceptions
## What changes were proposed in this pull request?
We currently have quite a few analysis exception subclasses, the problem
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r112628499
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -547,6 +547,10 @@ valueExpression
| left
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r112627257
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1483,4 +1483,16 @@ class SparkSqlAstBuilder(conf: SQLConf
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/17710
[SPARK-20420][SQL] Add events to the external catalog
## What changes were proposed in this pull request?
It is often useful to be able to track changes to the `ExternalCatalog
ses, because you cannot call `super` on it. This PR makes it
a function.
## How was this patch tested?
Existing tests.
Author: Herman van Hovell <hvanhov...@databricks.com>
Closes #17705 from hvanhovell/SPARK-20410.
(cherry picked from commit 033206355339677812a250b2b64818a261871fd2)
S
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17705
I am merging this.
use you cannot call `super` on it. This PR makes it
a function.
## How was this patch tested?
Existing tests.
Author: Herman van Hovell <hvanhov...@databricks.com>
Closes #17705 from hvanhovell/SPARK-20410.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http:
Repository: spark
Updated Branches:
refs/heads/master b2ebadfd5 -> d95e4d9d6
[SPARK-20334][SQL] Return a better error message when correlated predicates
contain aggregate expression that has mixture of outer and local references.
## What changes were proposed in this pull request?
Address a
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17636
Merging to master/branch-2.2. Thanks!
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17708#discussion_r112504587
--- Diff: python/pyspark/sql/functions.py ---
@@ -466,6 +466,14 @@ def nanvl(col1, col2):
return Column(sc._jvm.functions.nanvl(_to_java_column
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17708#discussion_r112505928
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -387,6 +387,13 @@ case class
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17708#discussion_r112505585
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1007,22 +1006,38 @@ object functions {
def map(cols: Column
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17708#discussion_r112505796
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1007,22 +1006,38 @@ object functions {
def map(cols: Column
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17708#discussion_r112511878
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -387,6 +387,13 @@ case class
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17708#discussion_r112509679
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/PlanParserSuite.scala
---
@@ -537,5 +537,10 @@ class PlanParserSuite extends
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17707
Could you add a unit test to the DDLCommandSuite?
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17708
ok to test
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17701
Merging to master/branch-2.2. Thanks!
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17707
ok to test
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17704
Also cherry picked this into 2.1
Repository: spark
Updated Branches:
refs/heads/branch-2.1 9e5dc82a1 -> 66e7a8f1d
[SPARK-20409][SQL] fail early if aggregate function in GROUP BY
## What changes were proposed in this pull request?
It's illegal to have an aggregate function in GROUP BY, and we should fail at
the analysis phase, if
Repository: spark
Updated Branches:
refs/heads/branch-2.2 9fd25fbc4 -> 990452625
[SPARK-20409][SQL] fail early if aggregate function in GROUP BY
## What changes were proposed in this pull request?
It's illegal to have an aggregate function in GROUP BY, and we should fail at
the analysis phase, if
Repository: spark
Updated Branches:
refs/heads/master c6f62c5b8 -> b91873db0
[SPARK-20409][SQL] fail early if aggregate function in GROUP BY
## What changes were proposed in this pull request?
It's illegal to have an aggregate function in GROUP BY, and we should fail at
the analysis phase, if this
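The analysis-time check can be sketched as a toy rule (hypothetical helpers and expression encoding, not Spark's actual Analyzer): walk each grouping expression and reject the query as soon as an aggregate function is found inside one.

```python
# Toy analysis-phase check, for illustration only. Expressions are encoded
# as nested tuples (fn_name, *children) or plain column-name strings.
AGGREGATES = {"sum", "count", "avg", "min", "max"}

def contains_aggregate(expr):
    if isinstance(expr, str):
        return False
    fn, *children = expr
    return fn in AGGREGATES or any(contains_aggregate(c) for c in children)

def check_group_by(grouping_exprs):
    for e in grouping_exprs:
        if contains_aggregate(e):
            raise ValueError(f"aggregate function not allowed in GROUP BY: {e}")

check_group_by(["dept", ("upper", "name")])  # fine
try:
    check_group_by([("sum", "salary")])      # rejected at "analysis" time
except ValueError:
    pass
```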
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17704
LGTM - merging to master/branch-2.2
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17596#discussion_r112474482
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -308,16 +305,17 @@ private[spark] object TaskMetrics extends Logging
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17596#discussion_r112474143
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -308,16 +305,17 @@ private[spark] object TaskMetrics extends Logging
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17701
A few minor comments, otherwise LGTM.
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17701#discussion_r112460255
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SharedSQLContext.scala ---
@@ -84,6 +85,10 @@ trait SharedSQLContext extends SQLTestUtils
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17701#discussion_r112460112
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
---
@@ -316,6 +316,39 @@ class
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/17705
[SPARK-20410][SQL] Make spark conf a val in SharedSQLContext
## What changes were proposed in this pull request?
It is kind of annoying that `SharedSQLContext.sparkConf` is a val when
Repository: spark
Updated Branches:
refs/heads/master 55bea5691 -> c6f62c5b8
[SPARK-20405][SQL] Dataset.withNewExecutionId should be private
## What changes were proposed in this pull request?
Dataset.withNewExecutionId is only used in Dataset itself and should be private.
## How was this
Repository: spark
Updated Branches:
refs/heads/branch-2.2 d01122dbc -> 9fd25fbc4
[SPARK-20405][SQL] Dataset.withNewExecutionId should be private
## What changes were proposed in this pull request?
Dataset.withNewExecutionId is only used in Dataset itself and should be private.
## How was
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17699
LGTM - merging to master.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17703
ok to test
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17641#discussion_r112408980
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/CastSuite.scala
---
@@ -41,13 +41,17 @@ class CastSuite extends
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17641#discussion_r112408144
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala
---
@@ -99,12 +99,9 @@ case class
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/17641#discussion_r112408167
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala
---
@@ -99,12 +99,9 @@ case class