Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10488#discussion_r48479722
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -57,9 +57,10 @@ case class Md5(child: Expression
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10402#discussion_r48495956
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -327,9 +327,9 @@ abstract class
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10488#discussion_r48500579
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -57,9 +57,10 @@ case class Md5(child: Expression
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10402#discussion_r48496502
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -327,9 +327,9 @@ abstract class
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10380#discussion_r48135557
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -60,20 +60,6 @@ object JdbcUtils extends
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10374#issuecomment-166556640
@Apo1 Could you add a unit test for this?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/9819#issuecomment-166118773
@yhuai & @davies thanks for the reviews!
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10402#issuecomment-166127579
jenkins test this please
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/10402
[SPARK-8641][SQL] Native Spark Window functions - Follow-up (docs & tests)
This PR is a follow-up for PR https://github.com/apache/spark/pull/9819. It
adds documentation for the wi
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10403#issuecomment-166148352
@naveenminchu I tried triggering a build, but it seems that I do not have
sufficient rights for this (I do have them for my own PRs).
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10521#issuecomment-167947021
retest this please
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/10525
[SPARK-12362][SQL][WIP] Inline Hive Parser
This PR inlines the Hive SQL parser in Spark SQL.
The previous (merged) incarnation of this PR passed all tests, but had and
still has
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10525#issuecomment-167984272
retest this please
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10521#issuecomment-168003040
@maropu you might need to merge the latest master. My PR
(https://github.com/apache/spark/pull/10509) non-deterministically broke the
build, and has since been
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10525#issuecomment-168004557
retest this please
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10525#issuecomment-168010422
retest this please
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10525#issuecomment-168012049
retest this please
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10525#issuecomment-168013255
last one promise :)
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10525#issuecomment-168013202
retest this please
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10335#discussion_r47952489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/basicOperators.scala ---
@@ -126,6 +127,69 @@ case class Sample
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/10509
[SPARK-12362][SQL][WIP] Inline Hive Parser
This is a WIP. The PR has been taken over from @nongli (see
https://github.com/apache/spark/pull/10420). I have removed some additional
dead code
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/10015
[SPARK-12024][SQL] More efficient multi-column counting.
In https://github.com/apache/spark/pull/9409 we enabled multi-column
counting. The approach taken in that PR introduces a bit
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/9925#issuecomment-159366507
Why not use the more common ```<>``` symbol instead of ```=!=```?
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/9993#issuecomment-159868802
I wonder if this is an actual improvement. Doing back-to-back projections
also incurs a codegen and a runtime cost, which can easily be higher than the
cost
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/9819#discussion_r45270291
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/interfaces.scala
---
@@ -187,7 +184,7 @@ sealed abstract class
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/9819#discussion_r45270377
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -328,3 +280,208 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/9819#discussion_r45271240
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -25,7 +25,7 @@ import scala.collection.mutable.ArrayBuffer
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/9819
[SPARK-8641][SQL] Native Spark Window functions
This PR removes Hive windows functions from Spark and replaces them with
(native) Spark ones. The PR is on par with Hive in terms of features
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/9819#discussion_r45659284
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -328,3 +281,223 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/9819#discussion_r45658962
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -866,26 +874,37 @@ class Analyzer
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/9819#discussion_r45659112
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -328,3 +281,223 @@ object
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/10689#issuecomment-170473951
@gatorsmile the fix looks good.
@rxin / @marmbrus / @gatorsmile I am not sure if we should support this at
all. Using a limit in SELECT's connected
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13472#discussion_r65619857
--- Diff: core/src/main/scala/org/apache/spark/util/Benchmark.scala ---
@@ -97,6 +111,39 @@ private[spark] class Benchmark(
println
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13472#discussion_r65620010
--- Diff: core/src/main/scala/org/apache/spark/util/Benchmark.scala ---
@@ -97,6 +111,39 @@ private[spark] class Benchmark(
println
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13447
@gatorsmile the overall approach seems good to me. We are currently fixing
this for the SQL code path. I was wondering if there are other code paths that
can cause this unwanted behavior
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13418
whoops triggered a build on this one... sorry about that
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13498
Seems to be related to the sequence in which tests are executed... Perhaps
`src` is changed during tests.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13566
Looks pretty good. Left one comment.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13570
@ioana-delaney great catch! The overall PR seems pretty solid. I left one
smallish code organization related comment.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13472
LGTM
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13472
Merging to master/2.0
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13570#discussion_r66364271
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1715,31 +1715,68 @@ object
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13550
@marymwu this has been fixed in
https://github.com/apache/spark/commit/09b3c56c91831b3e8d909521b8f3ffbce4eb0395.
Could you close this PR?
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13534#discussion_r66161676
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -594,16 +594,13 @@ qualifiedName
: identifier
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13414
Thanks! Merging to master/2.0
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13534#discussion_r66067691
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -594,16 +594,13 @@ qualifiedName
: identifier
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13414#discussion_r65966463
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -90,6 +90,8 @@ statement
identifierCommentList
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13414
@clockfly this looks pretty good. I have left some (minor) comments.
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13534#discussion_r65975203
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -106,9 +106,9 @@ statement
| SHOW FUNCTIONS (LIKE
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13534
cc @yhuai @marmbrus
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13534
[SPARK-15789][SQL] Allow reserved keywords in most places
## What changes were proposed in this pull request?
The parser currently does not allow the use of some SQL keywords as table
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13414#discussion_r65967765
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -375,9 +375,12 @@ private[sql] abstract class
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13414#discussion_r65967943
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -115,6 +115,10 @@ class DDLSuite extends QueryTest
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13534#discussion_r65975425
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -453,8 +453,8 @@ expression
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13414#discussion_r65966717
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -344,6 +344,19 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13414#discussion_r65966588
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -344,6 +344,19 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/10706
@kamalcoursera you could use a predicate scalar subquery here, i.e.:
```sql
select runon as runon
case
when (select max(true) from sqltesttable b where b.key
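```

The snippet above is cut off by the archive. A complete, purely hypothetical
version of the pattern being suggested — a scalar subquery used as a CASE
predicate — might look like the following (the join condition on `b.key` and
the THEN/ELSE branches are assumptions, not part of the original comment):

```sql
SELECT a.runon AS runon,
       CASE
         -- scalar subquery as a predicate: max(true) is non-null (true)
         -- iff at least one matching row exists in b
         WHEN (SELECT max(true) FROM sqltesttable b WHERE b.key = a.key)
         THEN 'matched'
         ELSE 'unmatched'
       END AS status
FROM sqltesttable a
```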
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13570
LGTM - merging to master/2.0 thanks!
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13155
LGTM - merging to master/2.0 (will do a small follow-up)
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13626
[SPARK-15370][SQL] Revert PR "Update RewriteCorrelatedScalarSubquery rule"
This reverts commit 9770f6ee60f6834e4e1200234109120427a5cc0d.
You can merge this pull request into a Git
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13629
[SPARK-15370][SQL] Fix count bug
# What changes were proposed in this pull request?
This pull request fixes the COUNT bug in the
`RewriteCorrelatedScalarSubquery` rule.
After
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66680913
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66680885
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66680599
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66680557
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66682052
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66682369
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13155
@frreiss PR looks good. It does need a little bit of work in the style
department. I'll leave comments. Lemme know if you have time to address these;
else I'll take over and do that chore
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66680828
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66681029
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66681873
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1696,205 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13570#discussion_r66697790
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1715,31 +1715,52 @@ object
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13570#discussion_r66697830
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1715,31 +1715,52 @@ object
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13570
@ioana-delaney no worries. I think the approach you have taken is the
correct one. I have left one smallish comment.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13581
cc @cloud-fan
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13581
[SPARK-14321][SQL] Reduce date format cost and string-to-date cost in date
functions
## What changes were proposed in this pull request?
The current implementations of `UnixTime
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13155#discussion_r66540105
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1695,16 +1695,176 @@ object
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13589
[SPARK-15822][SPARK-15825][SQL] Fix SMJ Segfault/Invalid results
## What changes were proposed in this pull request?
I'll add desc later
## How was this patch tested?
TBD
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13582
LGTM - merging to master/2.0
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13572
LGTM
@liancheng anything to add?
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13498
Ok I suspect that not all tables are created in the same way. Added some
code to dump the table metadata.
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13498
[SPARK-15011][SQL] Re-enable 'analyze MetastoreRelations' in hive
StatisticsSuite [WIP]
## What changes were proposed in this pull request?
This test re-enables the `analyze
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13498
I'll trigger a bunch of tests and see what happens.
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/13292#issuecomment-221644166
merging to master & 2.0 thanks!
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13299
[SPARK-15525][SQL][BUILD] Upgrade ANTLR4 SBT plugin
## What changes were proposed in this pull request?
The ANTLR4 SBT plugin has been moved from its own repo to one on bintray.
The version
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13302#discussion_r64652459
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -288,9 +288,10 @@ case class TruncateTableCommand
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/13302#issuecomment-221706431
LGTM
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/13299#issuecomment-221651230
cc @rxin @MLnick @vanzin (could you take a look at the build)
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221724543
@sureshthalamati Yeah you are right. Sorry for jumping the gun.
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221719109
@sureshthalamati I think https://github.com/apache/spark/pull/13302 already
takes care of this.
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13589#discussion_r66615434
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -490,6 +490,7 @@ class CodegenContext
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13553
LGTM - merging to master/2.0
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13524#discussion_r66834271
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -52,10 +52,10 @@ object TypeCoercion
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13498
cc @JoshRosen
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13666
@epahomov in this case you can just open against master (the branches
haven't diverged much yet).
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13678
[SPARK-15824][SQL] Execute WITH INSERT ... statements immediately
## What changes were proposed in this pull request?
We currently immediately execute `INSERT` commands when
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13678
cc @Sephiroth-Lin
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13678#discussion_r67104025
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -179,15 +179,16 @@ class Dataset[T] private[sql](
case _ => fa
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/13681
[SPARK-15960][SQL] Rename `spark.sql.enableFallBackToHdfsForStats` config
## What changes were proposed in this pull request?
Since we are probably going to add more statistics related
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13651#discussion_r66880904
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -213,7 +213,7 @@ case class Multiply(left