Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r139879632
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -140,6 +141,62 @@ class DetermineTableStats(session
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r139740180
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -139,6 +138,54 @@ class DetermineTableStats(session
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/18193
@cloud-fan I have addressed your comments. Thanks.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/18193
retest it please.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/18193
@cloud-fan PruneFileSourcePartitions is a rule intended for data sources, but
we cannot currently treat Hive as a data source.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r134523133
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -139,6 +138,54 @@ class DetermineTableStats(session
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14285
@cloud-fan OK, I will close it. Thanks.
Github user lianhuiwang closed the pull request at:
https://github.com/apache/spark/pull/14285
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14655
@wzhfy Yes, I think this is the same as SPARK-15616.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r121985050
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -139,6 +138,54 @@ class DetermineTableStats(session
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r121982742
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -139,6 +138,54 @@ class DetermineTableStats(session
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r121982605
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -139,6 +138,54 @@ class DetermineTableStats(session
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/18205
@cloud-fan Thanks.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/18205
@cloud-fan I have addressed your comments. Thanks.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121648290
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +68,45 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121648258
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala
---
@@ -59,8 +60,11 @@ private[sql
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121563368
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +68,41 @@ class
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/18205
@wzhfy I have addressed your comments. Thanks.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121378281
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +68,42 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121377999
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +68,42 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121378021
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +68,42 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121378012
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +68,42 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121275701
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala
---
@@ -59,8 +60,10 @@ private[sql
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121275665
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +67,35 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121275610
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +67,35 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r121275118
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +67,35 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18205#discussion_r120244237
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/PruneFileSourcePartitionsSuite.scala
---
@@ -66,4 +67,33 @@ class
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/18205
[SPARK-20986] [SQL] Reset table's statistics after
PruneFileSourcePartitions rule.
## What changes were proposed in this pull request?
After the PruneFileSourcePartitions rule, it needs to reset
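The SPARK-20986 description above is truncated, but the gist is that after partition pruning a table's size statistic should be recomputed from only the surviving partitions. A minimal sketch of that idea, using hypothetical names rather than Spark's real API:

```python
def reset_stats_after_pruning(partitions, keep):
    """Recompute a sizeInBytes statistic from only the surviving partitions.

    `partitions` (dicts with a size_in_bytes field) and `keep` are
    hypothetical stand-ins for Spark's partition metadata and pruning
    predicate, not its real API.
    """
    return sum(p["size_in_bytes"] for p in partitions if keep(p))

parts = [
    {"date": "2017-01-01", "size_in_bytes": 400},
    {"date": "2017-01-02", "size_in_bytes": 600},
    {"date": "2017-01-03", "size_in_bytes": 250},
]
# Without the reset, join planning would still see the full 1250 bytes.
print(reset_stats_after_pruning(parts, lambda p: p["date"] == "2017-01-02"))  # 600
```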
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r120095474
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala
---
@@ -59,7 +60,10 @@ private[sql
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18193#discussion_r120092962
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionStateBuilder.scala
---
@@ -88,6 +89,20 @@ class HiveSessionStateBuilder(session
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13373
@HyukjinKwon @cloud-fan I will close this PR and create a new PR, #18193, for
it. Thanks.
Github user lianhuiwang closed the pull request at:
https://github.com/apache/spark/pull/13373
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/18193
[SPARK-15616] [SQL] Metastore relation should fall back to the HDFS size of
partitions that are involved in the query for JoinSelection.
## What changes were proposed in this pull request
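The SPARK-15616 idea is to size a partitioned table by only the partitions the query actually touches when choosing a join strategy. A hedged sketch (the threshold value and all names here are illustrative, not Spark's actual API):

```python
# Illustrative threshold, in the spirit of spark.sql.autoBroadcastJoinThreshold.
BROADCAST_THRESHOLD = 10 * 1024 * 1024

def effective_size(partition_sizes, involved):
    # Use the HDFS size of only the partitions the query touches,
    # not the whole table's statistic.
    return sum(partition_sizes[p] for p in involved)

def can_broadcast(partition_sizes, involved):
    return effective_size(partition_sizes, involved) <= BROADCAST_THRESHOLD

sizes = {"p=1": 8 * 1024 * 1024, "p=2": 64 * 1024 * 1024}
print(can_broadcast(sizes, ["p=1"]))         # True: only 8 MB is scanned
print(can_broadcast(sizes, ["p=1", "p=2"]))  # False: 72 MB exceeds the threshold
```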
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13373
@cloud-fan I do not think the PruneFileSourcePartitions rule is for Hive's
CatalogRelation. The example in this PR cannot get the expected result with
the master branch, so I will update
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13706
@gatorsmile OK. Thanks.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13706#discussion_r119124256
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/macros.scala ---
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13706#discussion_r119124106
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/NoSuchItemException.scala
---
@@ -52,3 +52,6 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13706#discussion_r119124043
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -1090,6 +1090,24 @@ class SessionCatalog
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13706#discussion_r119123747
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -107,6 +110,14 @@ class
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13706#discussion_r119122834
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/macros.scala ---
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13706#discussion_r119122863
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/macros.scala ---
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13706#discussion_r119122622
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/macros.scala ---
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13706
@gatorsmile Sorry for the late reply. I have now merged with master. Thanks.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13706
@hvanhovell I have updated this PR. Can you take a look? Thanks.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13706
@hvanhovell Yes, I will update later. Thanks.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13979
@JoshRosen Thanks!
Github user lianhuiwang closed the pull request at:
https://github.com/apache/spark/pull/14111
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14111
OK. Thanks.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14363
@cloud-fan Here is a case I ran into: the varchar(length)/char(length) types
are not StringType, but Spark SQL currently treats them as string types, so
there are different results
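The difference the comment above alludes to can be illustrated with CHAR(n) blank-padding comparison semantics, where fixed-width values compare equal even when their trailing spaces differ. This is a simplified sketch of the SQL semantics, not Spark's implementation:

```python
def char_equals(a, b, n):
    # CHAR(n) semantics: blank-pad (or truncate) both sides to width n first.
    return a.ljust(n)[:n] == b.ljust(n)[:n]

print(char_equals("ab", "ab ", 5))  # True under CHAR(5) semantics
print("ab" == "ab ")                # False when both are treated as plain strings
```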
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14111
@cloud-fan I don't think it is a bug in constraint propagation, because the
filter with the uncorrelated scalar subquery needs to be pushed down, since it
can filter out many records.
In addition
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13373
cc @cloud-fan @rxin @hvanhovell
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14285
cc @cloud-fan @rxin @hvanhovell
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/14285
[SPARK-16649][SQL] Push partition predicates down into metastore for
OptimizeMetadataOnlyQuery
## What changes were proposed in this pull request?
SPARK-6910 added support for pushing
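The SPARK-16649 proposal pushes partition predicates into the metastore lookup itself, rather than listing every partition and filtering client-side. A toy sketch of that contrast, using a hypothetical metastore mock:

```python
class ToyMetastore:
    """Hypothetical stand-in for a Hive metastore client (not Spark's API)."""

    def __init__(self, partitions):
        self._partitions = partitions

    def list_partitions_by_filter(self, predicate):
        # Evaluating the predicate "inside" the metastore means callers
        # never fetch metadata for partitions they would prune anyway.
        return [p for p in self._partitions if predicate(p)]

ms = ToyMetastore([{"dt": "2016-07-01"}, {"dt": "2016-07-02"}])
print(ms.list_partitions_by_filter(lambda p: p["dt"] == "2016-07-02"))
```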
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14154
OK, I close it. Thanks.
Github user lianhuiwang closed the pull request at:
https://github.com/apache/spark/pull/14154
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70746279
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -234,6 +234,7 @@ object FunctionRegistry
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70660136
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -207,20 +207,12 @@ case class Multiply(left
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70660445
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -207,20 +207,12 @@ case class Multiply(left
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70659855
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -957,7 +957,7 @@ class AstBuilder extends
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70656424
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -957,7 +957,7 @@ class AstBuilder extends
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70628892
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -234,6 +234,7 @@ object FunctionRegistry
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14111
@cloud-fan At first I implemented it as you said, but the following situation
with a broadcast join hits the error 'ScalarSubquery has not finished';
example (from SPARK-14791
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14111#discussion_r70563183
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
---
@@ -263,7 +263,9 @@ abstract class QueryPlan[PlanType
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14111
cc @rxin @hvanhovell @cloud-fan
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
Thank you for review and merging. @rxin @hvanhovell @cloud-fan .
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11293#discussion_r70475334
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -127,33 +166,30 @@ abstract class Catalog
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14154
@hvanhovell I cannot find why Hive supports it from
https://issues.apache.org/jira/browse/HIVE-1856.
But many Spark users have used Hive before, so some of their previous
queries
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/14154
[SPARK-16497][SQL] Don't throw an exception when dropping non-existent
TABLE/VIEW/Function/Partitions
## What changes were proposed in this pull request?
from
https://cwiki.apache.org
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@cloud-fan I have addressed your latest comments. Thanks.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70404513
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuerySuite.scala
---
@@ -0,0 +1,122 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70404475
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,162 @@
+/*
+ * Licensed
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@cloud-fan @hvanhovell About getPartitionAttrs(): one possible improvement is
to define it in the relation node, but the relation node does not have this
function yet. How about adding it in follow-up PRs?
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70370384
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,153 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70369134
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,153 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70368905
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,153 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70368861
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,153 @@
+/*
+ * Licensed
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@hvanhovell I have addressed some of your comments. Thanks. Could you take
another look?
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70267348
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuerySuite.scala
---
@@ -0,0 +1,153 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14132#discussion_r70254654
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -339,8 +339,24 @@ class AstBuilder extends
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@hvanhovell I have addressed your comments. Thanks. If I missed something,
please tell me.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70249015
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuerySuite.scala
---
@@ -0,0 +1,153 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70248199
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,143 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70248137
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,143 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70247911
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,143 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r70247856
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -0,0 +1,143 @@
+/*
+ * Licensed
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/14111
[SPARK-16456][SQL] Reuse the uncorrelated scalar subqueries with the same
logical plan in a query
## What changes were proposed in this pull request?
In
[TPCDS-Q14](https://github.com
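The SPARK-16456 proposal above reuses uncorrelated scalar subqueries that share the same logical plan, so each distinct plan is executed only once per query. A minimal sketch of the dedup step, keyed on a hypothetical canonical form:

```python
def reuse_scalar_subqueries(subqueries, canonicalize):
    """Map subqueries with equal canonical plans to one shared instance.

    `canonicalize` is a hypothetical stand-in for Spark's plan
    canonicalization; equal plans must produce equal keys.
    """
    seen = {}
    return [seen.setdefault(canonicalize(sq), sq) for sq in subqueries]

# Toy "plans": lists, so equal plans are still distinct objects.
q1, q2, q3 = ["max", "t"], ["max", "t"], ["min", "t"]
shared = reuse_scalar_subqueries([q1, q2, q3], tuple)
print(shared[0] is shared[1])  # True: the duplicate plan is reused
print(shared[0] is shared[2])  # False: a different plan keeps its own instance
```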
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@cloud-fan Thanks. I have rebased it onto master.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r69860765
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -1689,4 +1689,76 @@ class SQLQuerySuite extends
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r69843015
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -1689,4 +1689,86 @@ class SQLQuerySuite extends
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@cloud-fan Yes, thanks, I have merged it.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@rxin @yhuai @cloud-fan I have updated with your comments. Thanks.
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r69670547
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkOptimizer.scala ---
@@ -30,6 +30,7 @@ class SparkOptimizer(
extends
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@cloud-fan Thanks. I have updated based on some of your comments.
Yes, it is not a small patch and it needs more reviewers.
@yhuai @liancheng Could you take a look at this PR? Thanks
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r69440435
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/MetadataOnlyOptimizerSuite.scala
---
@@ -0,0 +1,87 @@
+/*
+ * Licensed
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r69440261
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkOptimizer.scala ---
@@ -30,6 +30,7 @@ class SparkOptimizer(
extends
Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r69439940
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/MetadataOnlyOptimizer.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@cloud-fan I have updated using your branch's code. Thanks a lot.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
@cloud-fan Thanks. I will look at it.
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/13494
retest it please.
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/13979
[SPARK-16302] [SQL] Set the right number of partitions for reading
data from a local collection.
## What changes were proposed in this pull request?
Following #13137, this PR sets