Github user rxin commented on the issue:
https://github.com/apache/spark/pull/19008
Can you also update the various json functions in which we document the
options? The way it is right now, there is no way for end-users to discover
this option.
---
If your project is set up for it
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18999#discussion_r134123916
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -659,19 +659,77 @@ def distinct(self):
return DataFrame(self._jdf.distinct(), self.sql_ctx
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18999#discussion_r134123358
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -659,19 +659,77 @@ def distinct(self):
return DataFrame(self._jdf.distinct(), self.sql_ctx
Repository: spark
Updated Branches:
refs/heads/master 7880909c4 -> a2db5c576
[MINOR][TYPO] Fix typos: runnning and Excecutors
## What changes were proposed in this pull request?
Fix typos
## How was this patch tested?
Existing tests
Author: Andrew Ash
Closes #18996 from ash211/patch-2.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18996
Merging in master. Thanks.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18988
Thanks. Do you want to add the Python and R ones?
It is a little bit tricky because in Python we would need to detect whether
withReplacement is a boolean or a floating point value. If it is a
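The bool-vs-float detection mentioned above can be sketched in Python. One subtlety: `bool` is a subclass of `int`, so the boolean check must come first. The helper name below is illustrative, not PySpark's actual code:

```python
def classify_sample_arg(first_arg):
    """Decide whether the first positional argument to a sample() call is
    the withReplacement flag or the fraction (hypothetical helper).

    bool must be tested before int/float, because isinstance(True, int)
    is True in Python.
    """
    if isinstance(first_arg, bool):
        return "withReplacement"
    if isinstance(first_arg, (int, float)):
        return "fraction"
    raise TypeError(
        "first argument must be a bool or a number, got %s"
        % type(first_arg).__name__)
```

With a check like this, calls such as `df.sample(0.5)` and `df.sample(False, 0.5)` could share a single entry point.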
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18979#discussion_r133887920
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/BasicWriteStatsTracker.scala
---
@@ -57,7 +60,14 @@ class
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18988
[SPARK-21778][SQL] Simpler Dataset.sample API in Scala / Java
## What changes were proposed in this pull request?
Dataset.sample requires a boolean flag withReplacement as the first
argument
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18640
lgtm
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18956#discussion_r133360047
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -37,6 +37,12 @@ import org.apache.spark.sql.types
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18640#discussion_r133131618
--- Diff: sql/core/pom.xml ---
@@ -87,6 +87,16 @@
+ org.apache.orc
+ orc-core
+ ${orc.classifier
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18923
Jenkins, test this please.
---
log. getTableOption.
## How was this patch tested?
Removed the test case.
Author: Reynold Xin
Closes #18912 from rxin/remove-getTableOption.
(cherry picked from commit 584c7f14370cdfafdc6cd554b2760b7ce7709368)
Signed-off-by: Reynold Xin
Project: http://git-wip-us.apache.org/repos/asf/spark/r
log. getTableOption.
## How was this patch tested?
Removed the test case.
Author: Reynold Xin
Closes #18912 from rxin/remove-getTableOption.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/584c7f14
Tree: http://git-wip-us.apache.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18912
Merging in master.
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18912
[SQL] Remove unused getTableOption in ExternalCatalog
## What changes were proposed in this pull request?
This patch removes the unused SessionCatalog.getTableMetadataOption and
ExternalCatalog
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18900
We should put this in the catalog, shouldn't we?
---
Repository: spark
Updated Branches:
refs/heads/master 84454d7d3 -> 95ad960ca
[SPARK-21669] Internal API for collecting metrics/stats during FileFormatWriter
jobs
## What changes were proposed in this pull request?
This patch introduces an internal interface for tracking metrics and/or
statistics
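The kind of write-side tracking hook described here can be sketched as follows. This is a minimal illustration of the idea (per-file and per-row callbacks feeding aggregate stats), not the actual `WriteStatsTracker` API from the patch:

```python
class WriteTaskStatsTracker:
    """Minimal sketch of a per-task stats tracker for a file writer:
    it is notified of each new output file and each written row,
    and reports totals when the task finishes."""

    def __init__(self):
        self.num_files = 0
        self.num_rows = 0

    def new_file(self, path):
        # Called when the writer opens a new output file.
        self.num_files += 1

    def new_row(self, row):
        # Called for every row written to the current file.
        self.num_rows += 1

    def final_stats(self):
        # Collected by the driver once the write task completes.
        return {"files": self.num_files, "rows": self.num_rows}
```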
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18884
Merging in master.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18884
this looks good to me, but I didn't review super carefully.
cc @cloud-fan
---
lly configurable). This fixes timeout issues in pyspark when using
`collect` and similar functions, in cases where Python may take more than a
couple seconds to connect.
See https://issues.apache.org/jira/browse/SPARK-21551
## How was this patch tested?
Ran the tests.
cc rxin
Author: peay
Closes #18
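The fix described above (a configurable connect timeout instead of a hard-coded couple of seconds) can be sketched with Python's `socket` module. Names are illustrative, not PySpark's actual internals:

```python
import socket

def accept_with_timeout(server_sock, timeout_s):
    """Accept one client connection, waiting up to timeout_s seconds.

    Sketch of the idea behind SPARK-21551: let callers configure how long
    the JVM side waits for the Python process to connect, instead of
    hard-coding a short timeout."""
    server_sock.settimeout(timeout_s)
    try:
        conn, _addr = server_sock.accept()
        return conn
    except socket.timeout:
        raise TimeoutError(
            "client did not connect within %.1fs" % timeout_s)
```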
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18752
Merging in master.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18886
thx lgtm
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18786
I suspect it is ok for R ...
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18884#discussion_r132022172
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSink.scala
---
@@ -128,6 +128,7 @@ class FileStreamSink
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18884#discussion_r132006851
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/WriteStatsTracker.scala
---
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18884#discussion_r132006674
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/BasicWriteStatsTracker.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18607
We can just put this code in a 3rd party library, can't we? If there is an
issue with service/code discovery, we can come up with some sort of
registration process similar to the data sourc
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18884
Jenkins, add to white list.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18607
Unfortunately I think drill is not popular enough to warrant inclusion in
here yet. If this is not extensible, we should make it possible to include such
mappings outside Spark and then perhaps Drill
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18851
Looks like the strip global limit is used by at least some test cases.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18844
Ok makes sense. LGTM.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18851
cc @JoshRosen
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18851
[SPARK-21644][SQL] LocalLimit.maxRows is defined incorrectly
## What changes were proposed in this pull request?
The definition of `maxRows` in `LocalLimit` operator was simply wrong. This
patch
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18844
Actually why do we need this? Can't you just add Error?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18640
I just checked the dependency size. They look pretty reasonable, roughly 2
MBs in total (although I do worry in the future whether ORC would bring in a
lot more jars).
cc @omalley any
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18640
Why don't we then create a separate orc module? Just copy a few of the
files over?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18640
To the best of my knowledge almost everybody runs with Hive anyway and the
vast majority of users that run ORC are Hive users. In hindsight we probably
should have put most of the data source
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
@srowen that's not what I said. Almost always an explicit LGTM is
preferred. There are tiny changes that might not require them, and it is up to
the judgement of the committer. But those are
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18640
Why are we adding this to core? Why not just the hive module?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18844
Jenkins, test this please.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18839
cc @gatorsmile
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18839
Some test on string form of the plan might fail. Let's see ...
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18839
[SPARK-21634][SQL] Change OneRowRelation from a case object to case class
## What changes were proposed in this pull request?
OneRowRelation is the only plan that is a case object, which causes
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
@HyukjinKwon you weren't a committer before :)
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
@srowen search for "RTC vs CTR (was: Concerning Sentry...)"
From Todd Lipcon:
```
I don't have incubator stats... nor do I have a good way to measure "most
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
Actually Sean I disagree. Spark has always been review then commit from the
days before it entered ASF. In a huge debate last year within the ASF on RTC vs
CTR, Spark was cited as a prominent example
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
Ah OK. That's what we are discussing here. In the past it has always been
an explicit "LGTM". That was defined before github had even the approval
feature. Now most committers are a
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18828
Still looking into it, but the failure is related to reuse exchange and
caching.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
What's your point? You should be able to merge a PR without anybody reviewing?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
Yes.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
It is documented: http://spark.apache.org/contributing.html
It's been the convention forever and it's also good to use one way rather
than multiple, so I'd prefer us just using
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
I think @srowen did it here using the new github approval: srowen approved
these changes 20 hours ago
@srowen might be better if we stick with the LGTM one.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
seems fine to me.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18828
cc @adrian-ionescu @gatorsmile
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18828
[SPARK-21619][SQL] Fail the execution of canonicalized plans explicitly
## What changes were proposed in this pull request?
Canonicalized plans are not supposed to be executed. I ran into a case
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18805
@sitalkedia anyway you can talk to the FB team that does that one and
relicense, similar to RocksDB?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18805
Our compression codec is actually completely decoupled from Hadoops, but
dependency management (and licensing) can be annoying to deal with.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18805
How big is the dependency that's getting pulled in? If we are adding more
compression codecs maybe we should retire some old ones, or move them into a
separate package so downstream app
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18805
Any benchmark data?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
OK great. I think we should avoid breaking developer APIs, unless it has a
huge upside. It wouldn't be fun to break it just for some cosmetic things ...
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
What is the compatibility concern?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18780
If you are asking for their opinions, it'd be easier if you asked more
explicitly (A vs B) in one comment, rather than asking them to go through and
read the entire thread ...
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18752
cc @JoshRosen
---
Repository: spark
Updated Branches:
refs/heads/master cf29828d7 -> 60472dbfd
[SPARK-21485][SQL][DOCS] Spark SQL documentation generation for built-in
functions
## What changes were proposed in this pull request?
This generates a documentation for Spark SQL built-in functions.
One drawback i
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18702
LGTM too.
Merging in master.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18697
cc @cloud-fan @hvanhovell
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18645
When users upgrade from 2.11 to 2.12, their app would be broken, wouldn't
it?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18645
@srowen I don't agree that we should just break source compatibility here.
We have already spent a lot of time doing this in the past and figuring out how
to preserve it.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18715
Wait let's ask why @tdas did it this way...
On Sun, Jul 23, 2017 at 10:45 AM asfgit wrote:
> Closed #18715 <https://github.com/apache/spark/pull/18715> via a4eac8b
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18645
It is still source breaking change, and this is why I was saying it would
be a lot of work to upgrade to Scala 2.12 without breaking existing source
code. For 2.12 we should get rid of the functions
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18715
cc @tdas Was there a reason to use ``?
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18715
[minor] Remove in test case names in FlatMapGroupsWithStateSuite
## What changes were proposed in this pull request?
This patch removes the `` string from test names in
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18709
"Create Version" isn't a good user facing description. It'd make more sense
to just say "Created by Spark xxx"
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18714#discussion_r128908118
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -881,6 +881,16 @@ object SQLConf {
.intConf
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18645
@srowen You just showed that the Scala 2.12 changes are source breaking,
isn't it?
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18645#discussion_r128890891
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
---
@@ -353,7 +353,7 @@ class DatasetSuite extends QueryTest with
SharedSQLContext
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18645#discussion_r128890868
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskContextSuite.scala ---
@@ -54,7 +54,10 @@ class TaskContextSuite extends SparkFunSuite with
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18468
Uncompress a small block at a time.
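The suggestion above — uncompressing a small block at a time rather than materializing the whole payload — can be sketched with Python's `zlib` streaming decompressor. This is only an illustration of the technique, not the code under review:

```python
import zlib

def iter_decompressed(chunks, block_size=64 * 1024):
    """Yield decompressed data at most block_size bytes at a time, so the
    full uncompressed payload is never held in memory at once."""
    d = zlib.decompressobj()
    for chunk in chunks:
        out = d.decompress(chunk, block_size)
        if out:
            yield out
        # If block_size capped the output, keep draining the leftover
        # compressed input that zlib parked in unconsumed_tail.
        while d.unconsumed_tail:
            out = d.decompress(d.unconsumed_tail, block_size)
            if out:
                yield out
    tail = d.flush()
    if tail:
        yield tail
```

A caller can then process each small block (e.g. copy it into a column batch) without ever allocating the full decompressed buffer.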
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18468
Hey sorry for commenting late, but I don't think this change really makes
sense. If anything, I'd decompress data in batch into uncompressed column
batch, rather than building an adapter.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18680
Have you guys checked the performance of this change? It changes the number
of concrete implementations for column vector from 2 to 3 (and potentially 1 to
2 at runtime). This might (or might not
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18487
hm is this a bug fix? if not we shouldn't cherry pick it.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18306
cc @zsxwing
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17848#discussion_r128162324
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -103,4 +110,19 @@ case class UserDefinedFunction
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17848#discussion_r128159939
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -103,4 +110,19 @@ case class UserDefinedFunction
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17848#discussion_r128159874
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -103,4 +110,19 @@ case class UserDefinedFunction
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17848#discussion_r128159780
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -58,6 +55,13 @@ case class UserDefinedFunction protected
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17150
Are you working on 2.12?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17150
Do the removal (i.e. this PR).
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17150
Maybe do it a bit later, when the backport rate drops? E.g. it's unlikely
we still do a lot of backports when 2.3 is cut.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18606
It's already merged.
https://github.com/apache/spark/commit/24367f23f77349a864da340573e39ab2168c5403
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18606
That's true. Merging in master.
---
Repository: spark
Updated Branches:
refs/heads/master 2cbfc975b -> 24367f23f
[SPARK-21382] The note about Scala 2.10 in building-spark.md is wrong.
[https://issues.apache.org/jira/browse/SPARK-21382](https://issues.apache.org/jira/browse/SPARK-21382)
There should be "Note that support for Scal
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17633
@mallman we don't backport such risky changes to maintenance branches.
Those branches typically go through much less testing.
---
Repository: spark
Updated Branches:
refs/heads/master d03aebbe6 -> c3713fde8
[SPARK-21358][EXAMPLES] Argument of repartitionandsortwithinpartitions at
pyspark
## What changes were proposed in this pull request?
At example of repartitionAndSortWithinPartitions at rdd.py, third argument
should
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18586
Merging in master. Thanks.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18559
It'd be important to document what syntaxes are no longer allowed in the
JIRA ticket (and PR description), and we also highlight that in release notes.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126072754
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2638,4 +2638,17 @@ class SQLQuerySuite extends QueryTest with
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18540#discussion_r126016128
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala ---
@@ -174,28 +191,22 @@ class WindowSpec private[sql
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18540#discussion_r126016260
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -805,4 +806,24 @@ object TypeCoercion