Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/23214
Maybe add some detailed test results to the description, and explain the reason for this in a code comment?
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/23152
retest this please
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/23152
@juliuszsompolski I have updated it accordingly, thanks!
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/23152#discussion_r237808846
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -2276,4 +2276,16 @@ class SQLQuerySuite extends
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/23152#discussion_r237808676
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -2276,4 +2276,16 @@ class SQLQuerySuite extends
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/23152#discussion_r237735460
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala
---
@@ -879,13 +879,13
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/23152#discussion_r237380128
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/statsEstimation/FilterEstimationSuite.scala
---
@@ -821,6 +822,32 @@ class
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/23152#discussion_r237359357
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/statsEstimation/FilterEstimationSuite.scala
---
@@ -821,6 +822,32 @@ class
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/23152
@srowen
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/23152
retest this please.
---
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/23152
[SPARK-26181][SQL] the `hasMinMaxStats` method of `ColumnStatsMap` is not
correct
## What changes were proposed in this pull request?
For now the `hasMinMaxStats` will return the same
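The description above is cut off; as a hedged sketch of the issue (class and method names modeled on `ColumnStatsMap` in `FilterEstimation.scala`, heavily simplified and hypothetical), a correct `hasMinMaxStats` must check that min and max are each actually present, rather than returning the same result as the count-stats check:

```scala
// Simplified, hypothetical model -- not Spark's actual code.
case class ColumnStat(min: Option[Any], max: Option[Any], nullCount: Option[BigInt])

case class ColumnStatsMap(stats: Map[String, ColumnStat]) {
  // Count stats only require nullCount to be present.
  def hasCountStats(attr: String): Boolean =
    stats.get(attr).exists(_.nullCount.isDefined)

  // Min/max stats must verify min and max themselves, not reuse the check above.
  def hasMinMaxStats(attr: String): Boolean =
    stats.get(attr).exists(cs => cs.min.isDefined && cs.max.isDefined)
}
```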
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/21282#discussion_r187291692
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -118,6 +120,229 @@ case class
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/21227#discussion_r185801594
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java
---
@@ -81,7 +81,9 @@ public void close
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/21227#discussion_r185768453
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java
---
@@ -81,7 +81,9 @@ public void close
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r94798265
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +341,91 @@ object
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r94797482
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +341,91 @@ object
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r94796481
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +341,91 @@ object
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/12391#discussion_r92030896
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogSuite.scala
---
@@ -778,6 +778,24 @@ abstract class
Github user adrian-wang closed the pull request at:
https://github.com/apache/spark/pull/12391
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/12391
I'll update this ASAP
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/15011
Thanks for your kind review @cloud-fan @gatorsmile; this helps us enable BigBench on Spark 2.x.
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/15011
@hvanhovell I have checked with Hive and MySQL; both support dropping the current database. Merely asking the user to switch to another database before dropping the current one is not enough, though.
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/15011
@jameszhouyi @chenghao-intel
---
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/15011
[SPARK-17122][SQL]support drop current database
## What changes were proposed in this pull request?
In Spark 1.6 and earlier, we can drop the database we are using. In Spark
2.0
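The description is truncated; as a hedged illustration of the reported regression (table name hypothetical, and a `SparkSession` named `spark` assumed), the failing sequence would look roughly like:

```scala
// Hypothetical repro sketch: in Spark 1.6 this worked, but in Spark 2.0
// (before SPARK-17122) dropping the database currently in use failed.
spark.sql("CREATE DATABASE tempdb")
spark.sql("USE tempdb")
spark.sql("DROP DATABASE tempdb CASCADE")
```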
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/14991
[SPARK-17427][SQL] function SIZE should return -1 when parameter is null
## What changes were proposed in this pull request?
`select size(null)` returns -1 in Hive. In order
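A minimal sketch of the Hive-compatible semantics the title describes, in plain Scala (an illustration of the intended behavior, not Spark's actual implementation):

```scala
// size(null) yields -1, matching Hive, instead of null or an error.
def sizeOf(collection: Any): Int = collection match {
  case null         => -1
  case s: Seq[_]    => s.length
  case m: Map[_, _] => m.size
  case _            => -1  // simplification for this sketch
}
```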
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/14366
good catch!
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14280#discussion_r71477889
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -64,14 +67,19 @@ class SQLQuerySuite extends QueryTest
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/14169
@rxin Only those script transformation cases which use LazySimpleSerde
would be affected.
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/14169
@rxin In Spark 2.0, conf values that start with "hive." and have default values in HiveConf cannot get their default values now.
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/14169
I have updated my code and switched to using bash in the test case. Hope it will work for Jenkins.
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/14169
This is strange, because I can pass the specific test on my local machine.
---
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/14169
[SPARK-16515][SQL]set default record reader and writer for script
transformation
## What changes were proposed in this pull request?
In `ScriptInputOutputSchema`, we read default
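For context, a script transformation that relies on the default record reader/writer looks roughly like this (table name hypothetical, and a `SparkSession` named `spark` assumed):

```scala
// Hypothetical usage; `src` is an assumed table. Per the PR title, before
// SPARK-16515 no default record reader/writer was set, so a TRANSFORM query
// without explicit ROW FORMAT clauses could misbehave.
spark.sql("SELECT TRANSFORM(key, value) USING 'cat' AS (k, v) FROM src")
```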
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/14089
[SPARK-16415][SQL] fix catalog string error
## What changes were proposed in this pull request?
In #13537 we truncate `simpleString` if it is a long `StructType`. But
sometimes we
Github user adrian-wang closed the pull request at:
https://github.com/apache/spark/pull/13783
---
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/13783
Closing, thx!
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13784#discussion_r67795207
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -886,23 +886,45 @@ object DateTimeUtils
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/13783
[HOTFIX][SPARK-15613][SQL]Set test runtime timezone for DateTimeUtilsSuite
## What changes were proposed in this pull request?
With a non-default timezone setting in DateTimeUtilsSuite
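The general technique the hotfix points at can be sketched as pinning the JVM default time zone around a test body (the helper name here is illustrative, not Spark's actual test API):

```scala
import java.util.TimeZone

// Run `body` with the JVM default time zone set to `id`, restoring it after.
def withDefaultTimeZone[T](id: String)(body: => T): T = {
  val saved = TimeZone.getDefault
  TimeZone.setDefault(TimeZone.getTimeZone(id))
  try body finally TimeZone.setDefault(saved)
}
```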
Github user adrian-wang commented on the issue:
https://github.com/apache/spark/pull/13652
@lw-lin @robbinspg I submitted a PR to fix the failure: #13783 . Thanks for the information!
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13652#discussion_r67289086
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -851,6 +851,29 @@ object DateTimeUtils
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/13186#issuecomment-220278544
retest this please.
---
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/13186#issuecomment-220250927
retest this please.
---
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/13186
[SPARK-15397] [SQL] fix string udf locate as hive
## What changes were proposed in this pull request?
in Hive, `locate("aa", "aaa", 0)` would yield 0, `locate("
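A hedged sketch of the Hive-compatible semantics the description starts to give, in plain Scala (an illustration, not Spark's actual `StringLocate` code): positions are 1-based, and a start position of 0 yields 0 rather than searching from the front.

```scala
// Hive-style LOCATE: 1-based positions; pos <= 0 short-circuits to 0.
def locate(substr: String, str: String, pos: Int = 1): Int =
  if (pos <= 0) 0
  else str.indexOf(substr, pos - 1) + 1  // indexOf is 0-based; shift back
```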
Github user adrian-wang closed the pull request at:
https://github.com/apache/spark/pull/11843
---
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/12551#issuecomment-212696187
LGTM
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/12391#discussion_r59838034
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogSuite.scala
---
@@ -17,20 +17,22 @@
package
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/12391
[SPARK-14631][SQL][WIP]drop database cascade should drop functions for
HiveExternalCatalog
## What changes were proposed in this pull request?
as HIVE-12304, drop database cascade
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/12391#issuecomment-209895482
@chenghao-intel
---
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/11843
[SPARK-14021][SQL][WIP] custom context support for SparkSQLEnv
## What changes were proposed in this pull request?
This is to create a custom context for command `bin/spark-sql
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11815#discussion_r56745951
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -435,6 +435,11 @@ object SQLConf {
defaultValue = Some
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11815#discussion_r56745738
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -435,6 +435,11 @@ object SQLConf {
defaultValue = Some
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/11758
[SPARK-12855][MINOR] remove spark.sql.dialect from doc and test
## What changes were proposed in this pull request?
Since developer API of plug-able parser has been removed in #10801
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/11718#issuecomment-196712224
LGTM
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11297#discussion_r56101451
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1169,6 +1170,7 @@ object PushPredicateThroughJoin
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/11495#issuecomment-193670192
Why do we need `hive-cli` here? This is not version-specific.
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9589#discussion_r53545337
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -104,29 +104,39 @@ private[hive] class HiveClientImpl
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/9589#issuecomment-185523881
@marmbrus
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11212#discussion_r53115148
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/codegen/UnsafeRowWriter.java
---
@@ -170,6 +170,7 @@ public void write
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11212#discussion_r52968764
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -523,11 +523,45 @@ case class Atan2(left
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11212#discussion_r52968376
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/codegen/UnsafeRowWriter.java
---
@@ -170,6 +170,7 @@ public void write
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11212#discussion_r52968287
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/MathFunctionsSuite.scala
---
@@ -351,6 +350,20 @@ class
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11212#discussion_r52968214
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/MathFunctionsSuite.scala
---
@@ -103,8 +103,7 @@ class
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/9589#issuecomment-183811059
@marmbrus We have instantiated and started an instance of `CliSessionState`, and when we initialize `SparkSQLEnv`, we will create a `SessionState`.
---
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/11071#issuecomment-183791219
@srowen you are right.
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11071#discussion_r51851087
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -55,10 +56,19 @@ object DateTimeUtils
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11071#discussion_r51851099
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -55,10 +56,19 @@ object DateTimeUtils
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11071#discussion_r51852765
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -55,10 +56,19 @@ object DateTimeUtils
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11071#discussion_r51853631
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -55,10 +56,19 @@ object DateTimeUtils
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/11071#issuecomment-180201265
The map keys could be like "UTC+01:00", "America/Los_Angeles", "PST", etc.; they are already cached in `getTimeZone`, but the method
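The caching being discussed can be sketched as memoizing `TimeZone` lookups by their id string (an illustration only, not `DateTimeUtils`' actual code):

```scala
import java.util.TimeZone
import scala.collection.concurrent.TrieMap

// Memoize TimeZone lookups keyed by id strings such as "UTC+01:00" or "PST".
object TimeZoneCache {
  private val cache = TrieMap.empty[String, TimeZone]
  def get(id: String): TimeZone =
    cache.getOrElseUpdate(id, TimeZone.getTimeZone(id))
}
```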
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/11070#issuecomment-179664171
LGTM, thx!
---
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/10762#issuecomment-179642503
@cloud-fan any more comments?
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10964#discussion_r51392316
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2046,6 +2046,15 @@ class SQLQuerySuite extends QueryTest
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/10964#issuecomment-177877914
retest this please.
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10964#discussion_r51381272
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2056,4 +2056,11 @@ class SQLQuerySuite extends QueryTest
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10964#discussion_r51240308
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -1463,4 +1463,11 @@ class SQLQuerySuite extends
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r51295569
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -171,13 +171,21 @@ case class Join
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r51090849
--- Diff:
sql/catalyst/src/main/antlr3/org/apache/spark/sql/catalyst/parser/SparkSqlLexer.g
---
@@ -328,6 +328,8 @@ KW_WEEK: 'WEEK'|'WEEKS
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r51094651
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
---
@@ -104,6 +105,9 @@ trait CheckAnalysis
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r51106564
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -919,6 +919,7 @@ object PushPredicateThroughJoin
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10964#discussion_r51226292
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeExtractors.scala
---
@@ -218,7 +218,7 @@ case class
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/10762#issuecomment-176532100
Actually, MySQL and Oracle do not support normal full outer join either. PostgreSQL does support natural full outer join:
http://www.postgresql.org/docs/9.1
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r51226827
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
---
@@ -248,4 +249,29 @@ class AnalysisSuite extends
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/10762#issuecomment-176006912
Sure, I'll update this PR today.
---
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/10964
[SPARK-13056][SQL] map column would throw NPE if value is null
Jira:
https://issues.apache.org/jira/browse/SPARK-13056
Create a map like
{ "a": "somestr
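The description is truncated; as a hedged repro sketch (assuming a `SparkSession` named `spark`), the issue as titled would be triggered by a map lookup whose value is null:

```scala
// Hypothetical repro for SPARK-13056: accessing a map entry whose value is
// null should return NULL, not throw a NullPointerException.
spark.sql("""SELECT map("a", null)["a"]""").show()
```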
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/10762#issuecomment-173804613
When the parser calls the constructor, how can we get the schema of the tables? We need the schema to build the project list and conditions.
---
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/10762#issuecomment-173465114
retest this please.
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10731#discussion_r50063958
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -445,6 +445,26 @@ class Analyzer(
val
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/10731#issuecomment-172424340
`order by 2` should refer to the second column, I think.
---
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/10762
[SPARK-12828][SQL]add natural join support
Jira:
https://issues.apache.org/jira/browse/SPARK-12828
You can merge this pull request into a Git repository by running:
$ git pull https
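A hedged illustration of the syntax this PR adds (table names hypothetical, `SparkSession` named `spark` assumed): columns shared by both tables are equi-joined automatically and appear once in the output.

```scala
// Hypothetical natural-join example: if t1 and t2 share a column `id`,
// it is used as the join key implicitly and deduplicated in the result.
spark.sql("SELECT * FROM t1 NATURAL JOIN t2")
```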
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r49813610
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -179,10 +180,15 @@ object SqlParser extends
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r49825914
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2067,4 +2067,24 @@ class SQLQuerySuite extends QueryTest
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r49825919
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -179,10 +180,15 @@ object SqlParser extends
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r49828096
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -144,6 +147,35 @@ case class Join
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10762#discussion_r49827571
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -144,6 +147,35 @@ case class Join
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/10762#issuecomment-171895087
@rxin, thanks for your time. I'll draft another version accordingly.
---
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10731#discussion_r49539497
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -22,8 +22,9 @@ import java.sql.Timestamp
import
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10731#discussion_r49539533
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -441,7 +442,7 @@ class Analyzer
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10731#discussion_r49539463
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -22,8 +22,9 @@ import java.sql.Timestamp
import
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10731#discussion_r49540540
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -22,8 +22,9 @@ import java.sql.Timestamp
import
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9589#discussion_r48829056
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/ClientWrapper.scala ---
@@ -151,29 +152,34 @@ private[hive] class ClientWrapper
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10252#discussion_r48232238
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/DStream.scala ---
@@ -330,6 +330,20 @@ abstract class DStream[T: ClassTag