Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16664
Basically I see no reason to add a specific parameter to a listener API
that is meant to be generic and already contains a reference to QueryExecution.
What are you going to do if next time you
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16664
I think that's a separate "bug" we should fix, i.e. DataFrameWriter should
use InsertIntoDataSourceCommand so we can consolidate the two paths.
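The consolidation idea can be sketched as follows; the class name below echoes Spark's InsertIntoDataSourceCommand, but the code is a purely illustrative toy, not Spark's implementation:

```python
# Hedged sketch: route both the DataFrameWriter save path and the SQL INSERT
# path through one command object, so there is a single execution path to
# instrument and for listeners to observe.
class InsertIntoDataSourceCommand:
    def __init__(self, source: str):
        self.source = source

    def run(self) -> str:
        return f"insert into {self.source}"

def writer_save(source: str) -> str:
    # DataFrameWriter.save() delegating to the same command as SQL INSERT
    return InsertIntoDataSourceCommand(source).run()

def sql_insert(source: str) -> str:
    return InsertIntoDataSourceCommand(source).run()

print(writer_save("t") == sql_insert("t"))  # True: one consolidated path
```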
---
If your project is set up for it, you
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16664
Well it does. It contains the entire plan.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16664
That's probably because you are not familiar with the SQL component. The
existing API already has references to the QueryExecution object, which
actually includes all of the information your
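The design point in these comments, that the listener callback's single rich argument already exposes the whole plan, can be sketched with stand-in names; QueryContext and PlanListener below are hypothetical analogues of Spark's QueryExecution and QueryExecutionListener, not the real classes:

```python
# Illustrative sketch only: a single context object on the callback already
# carries everything a listener needs, so per-feature parameters never have
# to be bolted onto the callback signature.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueryContext:
    logical_plan: str                  # the full plan, reachable by listeners
    output_path: Optional[str] = None

class PlanListener:
    def on_success(self, func_name: str, ctx: QueryContext, duration_ns: int) -> None:
        raise NotImplementedError

class PlanCapturingListener(PlanListener):
    def __init__(self):
        self.seen = None

    def on_success(self, func_name, ctx, duration_ns):
        self.seen = ctx.logical_plan   # the plan was available all along

listener = PlanCapturingListener()
listener.on_success("save", QueryContext("InsertIntoDataSourceCommand"), 42)
print(listener.seen)  # InsertIntoDataSourceCommand
```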
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16885
Thanks - merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16887
LGTM pending Jenkins.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16664#discussion_r100565585
--- Diff: docs/sql-programming-guide.md ---
@@ -1300,10 +1300,28 @@ Configuration of in-memory caching can be done
using the `setConf` method on `Sp
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16664#discussion_r100565522
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/util/QueryExecutionListener.scala
---
@@ -44,27 +44,50 @@ trait QueryExecutionListener
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16664#discussion_r100564925
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -218,7 +247,14 @@ final class DataFrameWriter[T] private[sql](ds:
Dataset
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16664
Sorry, I'm really confused, probably because I haven't kept up with this
PR. But the diff doesn't match the PR description. Are we fixing a bug here or
introducing a bunch of new APIs
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16887#discussion_r100552660
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -696,9 +696,9 @@ class DAGScheduler(
/**
* Cancel a job
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16887#discussion_r100552370
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -2207,20 +2207,22 @@ class SparkContext(config: SparkConf) extends
Logging
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16864#discussion_r100503141
--- Diff:
common/sketch/src/main/java/org/apache/spark/util/sketch/BloomFilter.java ---
@@ -81,6 +81,11 @@ int getVersionNumber() {
public abstract
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16875
Merging in branch-2.1.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16875
@bogdanrdc can you close this? It won't auto-close because it is not merged
into master.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16872#discussion_r100396033
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DataFrameRangeSuite.scala ---
@@ -127,4 +133,28 @@ class DataFrameRangeSuite extends QueryTest
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16864
I meant just union, but createUnion ...
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16872
I'm going to merge this in master. If we find a way to optimize the test we
can do a follow-up pr.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16872
LGTM
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16871#discussion_r100287048
--- Diff: build/mvn ---
@@ -22,7 +22,7 @@ _DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# Preserve t
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16871#discussion_r100287082
--- Diff: core/src/test/java/org/apache/spark/Java8RDDAPISuite.java ---
@@ -15,7 +15,7 @@
* limitations under the License.
*/
-package
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16871#discussion_r100284451
--- Diff: core/src/test/java/org/apache/spark/Java8RDDAPISuite.java ---
@@ -15,7 +15,7 @@
* limitations under the License.
*/
-package
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16871#discussion_r100284373
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1910,31 +1908,7 @@ private[spark] object Utils extends Logging {
* @return
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16871
With this, what's the behavior if users use a Java 7 runtime to run Spark?
What kind of errors do we generate?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16871#discussion_r100284098
--- Diff: build/mvn ---
@@ -22,7 +22,7 @@ _DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# Preserve t
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16864
cc @mengxr / @tjhunter / @jkbradley is this good to have?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16864#discussion_r100261227
--- Diff:
common/sketch/src/main/java/org/apache/spark/util/sketch/BloomFilter.java ---
@@ -81,6 +81,11 @@ int getVersionNumber() {
public abstract
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16864#discussion_r100261151
--- Diff:
common/sketch/src/main/java/org/apache/spark/util/sketch/BloomFilter.java ---
@@ -148,6 +153,20 @@ int getVersionNumber() {
public abstract
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16864#discussion_r100261088
--- Diff:
common/sketch/src/main/java/org/apache/spark/util/sketch/IncompatibleUnionException.java
---
@@ -0,0 +1,24 @@
+/*
+ * Licensed
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16826
@kunalkhamar you should create a JIRA ticket for this.
In addition, I'm not a big fan of the design that passes a base session in.
It'd be simpler if there were just a clone method on SessionState.
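A minimal sketch of the clone-based design suggested here, using an illustrative SessionState stand-in rather than Spark's actual class; the config key is just an example:

```python
# Hedged sketch: fork a session by cloning its state rather than threading a
# "base session" parameter through the constructor. The child starts with a
# copy of the parent's settings but diverges independently afterwards.
import copy

class SessionState:
    def __init__(self):
        self.conf = {}

    def clone(self):
        child = SessionState()
        child.conf = copy.deepcopy(self.conf)  # snapshot, not shared state
        return child

parent = SessionState()
parent.conf["spark.sql.shuffle.partitions"] = "200"
child = parent.clone()
child.conf["spark.sql.shuffle.partitions"] = "8"
print(parent.conf["spark.sql.shuffle.partitions"])  # 200: parent unaffected
```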
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16826#discussion_r100255729
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -213,6 +218,24 @@ class SparkSession private(
new SparkSession
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16856
I think the issue is that the programming guide should probably switch over
to the DataFrame one as the primary one, with the RDD one becoming an RDD
programming guide.
cc @matei for his input
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16810
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-2.6/3810/
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16810
Did we break the build?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16837
Does this change not require changing the other external catalog?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16835
Merging in master.
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/16835
[SPARK-19495][SQL] Make SQLConf slightly more extensible
## What changes were proposed in this pull request?
This pull request makes SQLConf slightly more extensible by removing the
visibility
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16594
OK, here is an idea. How about
```
explain stats xxx
```
as the way to add stats?
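The proposal, gating statistics output behind an explicit explain mode so the default stays readable, can be sketched with a toy function; the mode name, plan string, and stats figures below are illustrative placeholders, not Spark's syntax or output:

```python
# Toy sketch: show statistics only when the user asks for them via an
# explicit explain mode, keeping the default explain output uncluttered.
def explain(plan: str, mode: str = "simple") -> str:
    stats = " (rows=1000, size=8MB)" if mode == "stats" else ""
    return plan + stats

print(explain("Filter -> Scan t"))           # Filter -> Scan t
print(explain("Filter -> Scan t", "stats"))  # Filter -> Scan t (rows=1000, size=8MB)
```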
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16829
Merging in master. Thanks.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16829
LGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16832
hm is it safe to just do this change?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16826
What are the semantics? Do functions/settings on the base SparkSession
affect the newly forked one?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16791
Merging in master.
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/16796
[SPARK-10063] Follow-up: remove dead code related to an old output
committer.
## What changes were proposed in this pull request?
DirectParquetOutputCommitter was removed from Spark
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16792
LGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16756
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16751
can you put the rest of the cleanups in one place?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16751
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16742
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16742
LGTM, but can you update your description:
```
This removes from the __all__ list class names that are not defined
(visible) in the pyspark.sql.column.
```
Your current
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16731
to be honest, I really hate it in Scala/Java when we need to add so many
overloads just for a single function. Can we just tell users to use
`expr("approx_percentile(...)")`?
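The `expr` escape hatch keeps the typed API surface small; the trade-off can be illustrated with a toy string-dispatched entry point. The registry and parsing below are deliberately minimal and are not Spark's `expr` implementation:

```python
# Toy illustration: one generic, string-based entry point versus a dedicated
# typed wrapper (plus its overloads) per SQL function.
_FUNCTIONS = {
    "upper": str.upper,
    "length": len,
}

def expr(call: str, arg):
    """Dispatch 'name(col)'-style calls by name instead of exposing a typed
    wrapper per function. Real Spark expr() parses full SQL expressions."""
    name = call.split("(", 1)[0].strip()
    return _FUNCTIONS[name](arg)

print(expr("upper(name)", "spark"))   # SPARK
print(expr("length(name)", "spark"))  # 5
```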
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16533
Can return type also take a string?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16534
Is the goal to change the doc or the repl string? It might be useful to
change the repl string but I'm not sure if it is worth changing the doc.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16708
Yes.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16708
Basically I want to push back against exposing this as a public API ...
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16707
LGTM pending jenkins.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16708
Actually - why do we need this? I worry it can be a confusing API due to
optimizer behavior.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16708#discussion_r97935710
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2421,6 +2421,13 @@ class Dataset[T] private[sql
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16707
Maybe add a prefix so it is clear it's a UDF? e.g. `UDF:func_name(...)`
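The naming suggestion can be sketched in a few lines; `display_name` is a hypothetical helper for illustration, not a Spark API:

```python
# Minimal sketch: prefix user-defined functions in display/explain output so
# they are distinguishable from builtins at a glance.
def display_name(func_name: str, is_udf: bool) -> str:
    return f"UDF:{func_name}" if is_udf else func_name

print(display_name("func_name", True))  # UDF:func_name
print(display_name("upper", False))     # upper
```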
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16702
Thanks - merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16594
sorry this explain plan makes no sense -- it is impossible to read.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16637
Also I think we need to update the code gen path as well.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16637
Can you add a test?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16633
This breaks the RDD job chain, doesn't it?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/12004
I've pointed this out before, and again: FWIW I really don't see what this
pull request is trying to accomplish
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16622
Merging in master.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16611#discussion_r96532765
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/SimpleTextHadoopFsRelationSuite.scala
---
@@ -69,18 +69,19 @@ class
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/16622
[SPARK-18917][SQL] Remove schema check in appending data
## What changes were proposed in this pull request?
In append mode, we check whether the schema of the write is compatible with
the schema
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16339
I submitted a pr here https://github.com/apache/spark/pull/16622
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96482971
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -30,21 +30,42 @@ import
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16483
cc @ankurdave
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16611
Rather than just submitting code, can you put down the interfaces concisely
either in a doc or the pr description? As @falaki said, we need this to work in
DDL too. It is possible to just extend
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16585
BTW please add a test case for this. Thanks.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16598#discussion_r96320999
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2603,6 +2603,21 @@ class Dataset[T] private[sql](
def createGlobalTempView
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16598#discussion_r96320992
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2603,6 +2603,21 @@ class Dataset[T] private[sql](
def createGlobalTempView
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16591#discussion_r96320937
--- Diff: core/src/main/scala/org/apache/spark/executor/OutputMetrics.scala
---
@@ -20,7 +20,6 @@ package org.apache.spark.executor
import
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16585
should the proper fix be that the Python thread transfers the proper
information over?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16595
LGTM
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16339#discussion_r96304206
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -445,21 +445,28 @@ case class DataSource
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16608#discussion_r96294261
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1621,9 +1621,11 @@ class Analyzer
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16608#discussion_r96294175
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/GeneratorFunctionSuite.scala ---
@@ -86,13 +86,25 @@ class GeneratorFunctionSuite extends QueryTest
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16608#discussion_r96294115
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/GeneratorFunctionSuite.scala ---
@@ -86,13 +86,25 @@ class GeneratorFunctionSuite extends QueryTest
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96171901
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/finishAnalysis.scala
---
@@ -41,13 +46,18 @@ object ReplaceExpressions extends
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16499
also cc @sameeragarwal
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16581#discussion_r9630
--- Diff: python/pyspark/sql/tests.py ---
@@ -342,6 +342,14 @@ def test_udf_in_filter_on_top_of_outer_join(self):
df = df.withColumn('b', udf
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16581#discussion_r9624
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -86,6 +86,19 @@ trait PredicateHelper
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16558
Alright, I'm going to merge this given JIRA is down ... merging in
master/branch-2.1/branch-2.0.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16568
LGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16404
Make sure you update the pull request and jira ticket description before
you merge.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16404
LGTM on the behavior
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16559
Do we not have something similar already? cc @cloud-fan
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16538
Thanks - merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16558
Oops - LGTM pending tests.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16395#discussion_r95726607
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala
---
@@ -0,0 +1,555
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16554#discussion_r95702468
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/FileCommitProtocol.scala ---
@@ -112,6 +113,15 @@ abstract class FileCommitProtocol {
* just
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16551
Thanks - merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16544
Merging in master.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r95686129
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -111,7 +112,8 @@ case class CatalogTablePartition
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16541
Is this a perf optimization? If yes, can you show some benchmarks? Also for
codegen it's good to show the generated code before/after this change. You can
get
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16395#discussion_r95527981
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -116,6 +116,12 @@ case class Filter