GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/8654
Merge pull request #1 from apache/master
merge origin pr
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/jiangxb1987/spark master
GitHub user jiangxb1987 reopened a pull request:
https://github.com/apache/spark/pull/8654
Merge pull request #1 from apache/master
merge origin pr
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/jiangxb1987/spark master
Github user jiangxb1987 closed the pull request at:
https://github.com/apache/spark/pull/8654
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/13893
[SPARK-14172][SQL] Hive table partition predicate not passed down correctly
## What changes were proposed in this pull request?
Currently partition predicate is not passed down
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13893#discussion_r68483394
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -64,10 +64,12 @@ object PhysicalOperation extends
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13893
@cloud-fan could you please have a look at this PR?
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13893
If `PushDownPredicate` should be improved, I would like to send a PR in one
or two days. Does a new task need to be created? @cloud-fan
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14136
@hvanhovell I've fixed most of the problems mentioned above, and I also
added basic tests and comments as you required. Please find some time to do a
pass, thanks!
---
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/14401
[SPARK-16793][SQL] Set the temporary warehouse path to sc's conf in TestHive.
## What changes were proposed in this pull request?
With SPARK-15034, we could use the value
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14401
@yhuai Could you spare some time to review this PR please? Thank you!
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14401
cc @rxin
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13893#discussion_r73126756
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -64,10 +64,17 @@ object PhysicalOperation extends
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13893
@cloud-fan Do you mean adding something like the following in
`basicLogicalOperators`:
`case class Scanner(
projectionList: Seq[NamedExpression],
filters: Seq[Expression
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13893#discussion_r73318885
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -64,10 +64,17 @@ object PhysicalOperation extends
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/14619
[SPARK-17031][SQL] Add `Scanner` operator to wrap the optimized plan
directly in planner
## What changes were proposed in this pull request?
Added `Scanner` operator to wrap
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14620
cc @sarutak
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13893
@cloud-fan I've sent a PR to add the `Scanner` operator in #14619, please
have a look at it when you have time, thanks!
---
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/14620
[SPARK-14172][SQL] Add test cases for methods in ParserUtils.
## What changes were proposed in this pull request?
Currently methods in `ParserUtils` are tested indirectly, we should
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14620#discussion_r74725943
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ParserUtilsSuite.scala
---
@@ -61,5 +88,39 @@ class ParserUtilsSuite extends
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14620
@HyukjinKwon Thank you for checking!
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14620
@hvanhovell Thank you for your comments! Will do it as soon as I can.
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14619
@hvanhovell This idea is inspired by @cloud-fan, as he stated in
[comment](https://github.com/apache/spark/pull/13893#discussion_r73305098),
we'd better have a wrapper node for scan, so
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14136
cc @rxin @HyukjinKwon Please review this PR and tell me what I should
update, thanks!
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14136
Jenkins, test this please
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14136
@hvanhovell Would you please review this PR? Thanks!
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13893
With [PR#14012](https://github.com/apache/spark/pull/14012) the order
between deterministic and non-deterministic predicates would not be changed
arbitrarily, so I think we could apply
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14012#discussion_r70370161
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1086,6 +1086,28 @@ object PruneFilters extends
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13893#discussion_r68557086
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruningSuite.scala
---
@@ -141,6 +141,14 @@ class PruningSuite extends
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13893
Predicates should not be reordered if a condition contains
non-deterministic parts, for example, 'rand() < 0.1 AND a=1' should not be
reordered to 'a=1 AND rand() < 0.1' as the number of
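The concern in the comment above can be illustrated outside Spark. A minimal Python sketch (illustrative only, not Spark code; the row data and helper names are invented) showing that swapping the two conjuncts changes how often the non-deterministic predicate is evaluated under short-circuit AND:

```python
import random

rows = [{"a": 1}, {"a": 2}, {"a": 1}, {"a": 3}]

def count_rand_calls(reordered):
    # Count how often the non-deterministic predicate rand() < 0.1
    # is evaluated under short-circuit AND, for both predicate orders.
    random.seed(0)
    calls = 0

    def nondet():
        nonlocal calls
        calls += 1
        return random.random() < 0.1

    for row in rows:
        if reordered:
            # 'a = 1 AND rand() < 0.1': rand() runs only where a == 1
            _ = row["a"] == 1 and nondet()
        else:
            # 'rand() < 0.1 AND a = 1': rand() runs for every row
            _ = nondet() and row["a"] == 1
    return calls

print(count_rand_calls(reordered=False))  # 4: once per row
print(count_rand_calls(reordered=True))   # 2: only rows with a == 1
```

Because the number of `rand()` invocations differs between the two orders, the reordered query can produce a different result distribution, which is why the optimizer must not reorder across non-deterministic predicates.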
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13893
@cloud-fan I pushed a commit to apply predicate pushdown on the deterministic
parts placed before any non-deterministic predicates, is it safe to do
this optimization?
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14012#discussion_r69535006
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1106,12 +1106,15 @@ object PushDownPredicate
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14012
cc @liancheng please review this PR, thanks!
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14012#discussion_r70097418
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1135,11 +1146,16 @@ object PushDownPredicate
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/14136
[SPARK-16282][SQL] Implement percentile SQL function.
## What changes were proposed in this pull request?
Implement percentile SQL function. It computes the exact percentile(s
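As a rough illustration of what an exact percentile computation involves (sort the values, then interpolate between the two nearest ranks), here is a Python sketch. The interpolation rule shown is one common definition and is an assumption for illustration, not necessarily the exact semantics implemented in the PR:

```python
def exact_percentile(values, p):
    # Exact percentile over fully materialized data: sort, compute the
    # fractional rank p * (n - 1), and linearly interpolate between
    # the two nearest values. (Interpolation rule is an assumption.)
    assert values and 0.0 <= p <= 1.0
    s = sorted(values)
    rank = p * (len(s) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(s) - 1)
    frac = rank - lo
    return s[lo] + (s[hi] - s[lo]) * frac

print(exact_percentile([1, 2, 3, 4], 0.5))   # 2.5
print(exact_percentile([10, 20, 30], 1.0))   # 30
```

Unlike approximate percentile algorithms, this requires buffering all values per group, which is the usual trade-off of an exact percentile aggregate.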
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14012
@liancheng please find some time to review the latest updates, thanks!
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14136#discussion_r70243309
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -0,0 +1,148
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14136#discussion_r70243163
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -0,0 +1,148
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14136#discussion_r70243630
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -0,0 +1,148
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14136#discussion_r70243723
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -0,0 +1,148
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14136#discussion_r70244013
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -0,0 +1,148
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14136#discussion_r70245302
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -536,6 +536,25 @@ object functions {
def min(columnName: String
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14136#discussion_r70243463
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -0,0 +1,148
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14136#discussion_r70245059
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -0,0 +1,148
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14012#discussion_r70049725
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1106,21 +1106,32 @@ object PushDownPredicate
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14012#discussion_r70053206
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1135,11 +1146,16 @@ object PushDownPredicate
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14136
@hvanhovell Thank you for your kind review, the suggestions are quite
useful for me. I'll try to get some time later today to push some fixes.
Thanks!
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13893#discussion_r73821493
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -64,10 +64,17 @@ object PhysicalOperation extends
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14401
@rxin As @yhuai previously addressed, this change benefits the following
cases:
1. Right now, we set the warehouse path to the default one firstly, and
then we override the setting
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/13893
ping @cloud-fan
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13893#discussion_r73176693
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -64,10 +64,17 @@ object PhysicalOperation extends
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/14012
[SPARK-16343][SQL] Improve the PushDownPredicate rule to pushdown pre…
## What changes were proposed in this pull request?
Currently our Optimizer may reorder the predicates to run
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14012
cc @liancheng @cloud-fan
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14619#discussion_r74896284
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1636,6 +1638,30 @@ object
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14620
I've added test cases for all the methods in `ParserUtils`, and made two
small changes in the `ParserUtils` code:
1. Deleted function `command(stream: CharStream)`, merge both `command
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14620
@hvanhovell I'm working on it, will try to add test cases for the remaining
methods:
1. `operationNotAllowed(message: String, ctx: ParserRuleContext)`;
2. `checkDuplicateKeys[T](keyPairs
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14620
@hvanhovell Thank you for your help!
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16674
@gatorsmile @cloud-fan Could you please look at this when you have time?
Thanks!
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16674
retest this please.
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r12487
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -452,311 +506,96 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r12125
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -398,26 +472,6 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r12440
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -452,311 +506,96 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r12542
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -452,311 +506,96 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r12779
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -452,311 +506,96 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r12271
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -452,311 +506,96 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r12255
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -427,15 +481,15 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r12647
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -452,311 +506,96 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16373#discussion_r100031009
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -619,18 +621,34 @@ case class ShowTablesCommand
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16674
@gatorsmile The only test case that I removed is `test("correctly resolve a
view with CTE")`, which duplicates the existing `test("CTE within
view")`.
For t
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/16679
[SPARK-19272][SQL] Remove the param `viewOriginalText` from `CatalogTable`
## What changes were proposed in this pull request?
Hive will expand the view text, so it needs 2 fields
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16679
cc @cloud-fan
---
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/16674
[SPARK-19331][SQL][TESTS] Improve the test coverage of SQLViewSuite
## What changes were proposed in this pull request?
Improve the test coverage of SQLViewSuite, cover the following
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r100689894
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -617,13 +617,17 @@ class Analyzer(
private
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r100689818
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveSQLViewSuite.scala
---
@@ -0,0 +1,154 @@
+/*
+ * Licensed
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16921
Thank you for doing this, this looks good to me.
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r100238713
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveSQLViewSuite.scala
---
@@ -0,0 +1,190 @@
+/*
+ * Licensed
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/16869
[SPARK-19025][SQL] Remove SQL builder for operators
## What changes were proposed in this pull request?
With the new approach of view resolution, we can get rid of SQL generation
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r100455122
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveSQLViewSuite.scala
---
@@ -0,0 +1,190 @@
+/*
+ * Licensed
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r100670864
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -452,311 +542,96 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r100670964
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala ---
@@ -452,311 +542,96 @@ class SQLViewSuite extends QueryTest
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16674#discussion_r100671808
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveSQLViewSuite.scala
---
@@ -0,0 +1,154 @@
+/*
+ * Licensed
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16873#discussion_r100348823
--- Diff: sql/core/src/test/resources/sql-tests/inputs/grouping_set.sql ---
@@ -13,5 +18,8 @@ SELECT a, b, c, count(d) FROM grouping GROUP BY a, b, c
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16873
Thank you for ccing me @hvanhovell ! This PR looks good to me.
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/15861
This PR should be separated into smaller ones; I'll do this around
March.
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16613#discussion_r96580761
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -207,29 +210,35 @@ case class CreateViewCommand
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16613#discussion_r96580856
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -275,21 +286,80 @@ case class AlterViewAsCommand
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16613
Thank you @cloud-fan !
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16615
Thank you for your comment @srowen !
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16613
cc @cloud-fan @yhuai @hvanhovell @gatorsmile
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16613#discussion_r96561447
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -275,21 +276,93 @@ case class AlterViewAsCommand
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16613#discussion_r96562319
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -207,29 +209,26 @@ case class CreateViewCommand
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/16613
[SPARK-19024][SQL] Implement new approach to write a permanent view
## What changes were proposed in this pull request?
On CREATE/ALTER a view, it's no longer needed to generate a SQL
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/16615
[Minor][SQL] Remove duplicate call of reset() function in
CurrentOrigin.withOrigin()
## What changes were proposed in this pull request?
Remove duplicate call of reset() function
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16561#discussion_r96331626
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/view.scala
---
@@ -28,22 +28,60 @@ import
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/17125
cc @gatorsmile @cloud-fan Please have a look at this when you have time,
thanks!
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17125#discussion_r103863856
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -128,6 +129,15 @@ case class CreateViewCommand
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17125#discussion_r103860516
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -128,6 +129,15 @@ case class CreateViewCommand
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/17125
[SPARK-19211][SQL] Explicitly prevent Insert into View or Create View As
Insert
## What changes were proposed in this pull request?
Currently we don't explicitly forbid the following
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14917
@hvanhovell I've updated the description following your advice, thank you
for your time!
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14867#discussion_r77320471
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -374,15 +374,17 @@ setQuantifier