Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220586456
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/python/BatchEvalPythonExecSuite.scala
---
@@ -100,6 +104,28 @@ class
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220569284
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,53 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220570791
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,53 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220568484
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,53 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220570975
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,53 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220570021
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,53 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22364
@cloud-fan @dongjoon-hyun @gatorsmile any luck with this? Thanks.
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220518059
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,51 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220482709
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,51 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220482278
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,51 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220481529
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,51 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220484279
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,51 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220453684
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,56 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220273079
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -152,3 +153,56 @@ object EliminateOuterJoin extends Rule
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220272571
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1304,10 +1307,27 @@ object CheckCartesianProducts
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22326#discussion_r220272683
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -165,6 +165,8 @@ abstract class Optimizer
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22518
@gengliangwang no, let me cite and explain the PR description. I am not
sure how to improve it, but if you have suggestions I am happy to. The main
point of the PR is to address an issue which
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22364
I also ran the TPCDS and TPCH benchmarks with 10 runs:
Rule | Effective After | Effective Before | Total After | Total Before | % Eff | % Total
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22518
> hmm, how can this happen?
you can check the UT which reproduces the issue. The scalar subquery is
pushed down as part of the filter `GreaterT
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
yes, sounds ok to me. Thanks. LGTM.
Github user mgaido91 closed the pull request at:
https://github.com/apache/spark/pull/21184
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22519#discussion_r219667782
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -554,8 +554,11 @@ class Analyzer(
Cast
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22519#discussion_r219667780
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -554,8 +554,11 @@ class Analyzer(
Cast
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22524#discussion_r219667461
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -84,9 +84,10 @@ trait BaseLimitExec extends UnaryExecNode
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22524#discussion_r219667410
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala
---
@@ -465,13 +465,18 @@ case class RangeExec(range
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22518
cc @cloud-fan @dongjoon-hyun @gatorsmile
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22518#discussion_r219667342
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/subquery.scala ---
@@ -166,7 +168,7 @@ case class ReuseSubquery(conf: SQLConf) extends
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22494#discussion_r219505527
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1345,6 +1345,15 @@ object SQLConf {
.booleanConf
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/22518
[SPARK-25482][SQL] ReuseSubquery can be useless when the subqueries have
the same id
## What changes were proposed in this pull request?
When a `ExecSubqueryExpression` is copied, it may
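As background, subquery reuse can be pictured as a map from a canonical form of each subquery to the first instance seen, so later duplicates are replaced with it. The sketch below is a hypothetical simplified model (`Subquery`, `deduplicate`, and the canonical-string key are all illustrative names, not Spark's actual `ReuseSubquery` code):

```scala
import scala.collection.mutable

// Hypothetical simplified model of subquery reuse (not Spark's code):
// the first subquery seen for a given canonical form stands in for
// every later duplicate.
final case class Subquery(exprId: Long, canonical: String)

def deduplicate(subqueries: Seq[Subquery]): Seq[Subquery] = {
  val seen = mutable.Map.empty[String, Subquery]
  subqueries.map(sq => seen.getOrElseUpdate(sq.canonical, sq))
}

val a = Subquery(exprId = 1, canonical = "select max(x) from t")
val b = Subquery(exprId = 2, canonical = "select max(x) from t")
// The duplicate plan collapses onto the first instance:
println(deduplicate(Seq(a, b)).map(_.exprId)) // List(1, 1)
```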
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22494#discussion_r219493420
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2858,6 +2858,13 @@ class SQLQuerySuite extends QueryTest
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22494#discussion_r219492191
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1345,6 +1345,16 @@ object SQLConf {
.booleanConf
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22494#discussion_r219493091
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1345,6 +1345,16 @@ object SQLConf {
.booleanConf
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22494#discussion_r219492885
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1345,6 +1345,16 @@ object SQLConf {
.booleanConf
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22375#discussion_r219454650
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelperSuite.scala
---
@@ -35,6 +36,13 @@ class
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
> I'm talking about the specific query reported at
https://issues.apache.org/jira/browse/SPARK-22036?focusedCommentId=16618104&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpa
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20999
any more comments on this @maropu apart from
https://github.com/apache/spark/pull/20999#discussion_r216525719 where we are
waiting for others' feedback?
@cloud-fan @dongjoon-hyun
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
> The problem is, when users hit SPARK-25454, they must turn off both the
DECIMAL_OPERATIONS_ALLOW_PREC_LOSS and the new config.
If a user hits SPARK-25454, the va
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22375#discussion_r219435279
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelperSuite.scala
---
@@ -35,6 +36,13 @@ class
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22375#discussion_r219406170
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelperSuite.scala
---
@@ -35,6 +36,13 @@ class
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
Thank you for your help and guidance.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
> If your argument is, picking a precise precision for literal is an
individual feature and not related to #20023, I'm OK to create a new config for
it.
Yes this is - I think - a bet
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
I think there are 2 separate topics here:
- **Handling negative scale in decimal operations**
I am writing the design doc and I'll update this PR if needed (anyway
I'll add more test
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
oh I see now what you mean, thanks. Yes, Hive does the same. We may have to
completely revisit our parsing of literals, but since it is a breaking change I
am not sure it will be possible before
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
> So I don't see any harm of this PR.
If the user doesn't turn off the flag, of course nothing changes. If the
user does, then let's imagine this case. A user has this: `select 1234567
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
> seems there are more problems like the data type of literals
sorry, I didn't get what you mean here, could you please explain?
> your long explanation makes me think we
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
yes @cloud-fan, I see your point. I am neutral to this change honestly.
It is true that it avoids regressions from previous cases, but it is also
true that it doesn't make the behavior
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
oh, I see now the problem. And it is much bigger than I thought, sorry.
Here we were returning a negative scale for `1e6` (it happens in the
`AstBuilder`, as we build a BigDecimal from it) also
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
Sure @cloud-fan, I'll do it ASAP.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22494
I don't really agree with this. As I mentioned
[here](https://github.com/apache/spark/pull/22450#issuecomment-423077220), I
think we can avoid using negative scales regardless of the value
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22480
nit: in the title please `Serria` -> `Sierra`
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
yes @dilipbiswal , Hive does the same.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22465
thank you
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
Let me answer all three of your points:
> how to prove Divide is the only one having problems of negative scale?
You can check the other operations. Let's go through all of t
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22470
@dilipbiswal yes, I definitely think that in general we should eventually
forbid negative scales. I only thought that this should be done in 3.0 rather
than now. And for now the safest option to me
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
retest this please
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22469
thank you for pointing this out @cloud-fan
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22465
retest this please
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22470
@cloud-fan I think the main problem with this (and the reason why I haven't
proposed it) is that the range of supported operations would be smaller, so we
may forbid operations which now
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
@cloud-fan I checked the other operations and they have no issue with
negative scale. This is the reason why this fix is only for Divide: it is the
only operation which wasn't dealing with it properly
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22469
cc @cloud-fan
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/22469
[SPARK-24341][FOLLOWUP][DOCS] Add migration note for IN subqueries behavior
## What changes were proposed in this pull request?
The PR updates the migration guide in order to explain
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22450#discussion_r218814303
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecision.scala
---
@@ -40,10 +40,13 @@ import
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22450#discussion_r218762046
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecision.scala
---
@@ -129,16 +129,17 @@ object DecimalPrecision
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22465#discussion_r218760897
--- Diff: sql/core/src/test/resources/sql-tests/inputs/operators.sql ---
@@ -26,6 +26,15 @@ select 5 div 0;
select 5 div null;
select null div 5
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22450#discussion_r218758428
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecision.scala
---
@@ -129,16 +129,17 @@ object DecimalPrecision
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/21403
@cloud-fan sure, I'll create a followup PR, thanks.
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22465#discussion_r218756921
--- Diff: sql/core/src/test/resources/sql-tests/inputs/operators.sql ---
@@ -26,6 +26,15 @@ select 5 div 0;
select 5 div null;
select null div 5
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/22465
[SPARK-25457][SQL] IntegralDivide returns data type of the operands
## What changes were proposed in this pull request?
The PR proposes to return the data type of the operands as a result
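A rough illustration of the proposed semantics in plain Scala (not the actual `IntegralDivide` patch): the result of an integral `div` keeps the operands' type, so dividing two Ints yields an Int rather than always widening to Long.

```scala
// Hedged sketch: integral division whose result type follows the operands,
// mirroring the proposal that Int div Int -> Int and Long div Long -> Long
// (before this change the result was always Long).
def divInt(a: Int, b: Int): Int = a / b
def divLong(a: Long, b: Long): Long = a / b

println(divInt(5, 2))    // 2, as an Int
println(divLong(9L, 4L)) // 2, as a Long
```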
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22450#discussion_r218690147
--- Diff:
sql/core/src/test/resources/sql-tests/inputs/typeCoercion/native/decimalArithmeticOperations.sql
---
@@ -83,4 +83,7 @@ select
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/21403
@cloud-fan, no, it introduces a behavior change when structs are involved.
The two queries
[here](https://github.com/apache/spark/pull/21403/files/eb1dfb7e0873b8479ea54d223b7fde3dcefa4834#diff
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
retest this please
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22375#discussion_r218686614
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelperSuite.scala
---
@@ -35,6 +36,13 @@ class
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
cc @cloud-fan @dongjoon-hyun
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22450
retest this please
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/22450
[SPARK-25454][SQL] Avoid precision loss in division with decimal with
negative scale
## What changes were proposed in this pull request?
Our rules for determining decimal precision
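For context, the division typing rule in `DecimalPrecision` (before the precision-loss adjustment) can be sketched as below; the helper name is illustrative, but the formula shows how a negative scale on an operand feeds negatively into the result precision:

```scala
// Sketch of the decimal division typing rule (as in DecimalPrecision,
// ignoring the allowPrecisionLoss adjustment): for (p1, s1) / (p2, s2)
//   scale     = max(6, s1 + p2 + 1)
//   precision = p1 - s1 + s2 + scale
def divideResultType(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
  val scale = math.max(6, s1 + p2 + 1)
  (p1 - s1 + s2 + scale, scale)
}

// Ordinary scales: decimal(10,2) / decimal(10,2) -> decimal(23,13)
println(divideResultType(10, 2, 10, 2))
// A negative-scale divisor, e.g. the literal 1000e6 typed as decimal(4,-6),
// subtracts from the precision: decimal(11,0) / decimal(4,-6) -> decimal(11,6)
println(divideResultType(11, 0, 4, -6))
```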
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22364
thanks @maropu for your review! @gatorsmile do you have any comments?
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21403#discussion_r218398155
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-basic.sql.out
---
@@ -0,0 +1,70 @@
+-- Automatically generated
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21403#discussion_r218398060
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-basic.sql.out
---
@@ -0,0 +1,70 @@
+-- Automatically generated
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22395
Sure @cloud-fan, I'll create a JIRA and submit a PR for it.
> Looks like a use case for a legacy config.
Yes, thanks for the suggestion @rxin, I ag
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22375#discussion_r218362156
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelperSuite.scala
---
@@ -35,6 +36,13 @@ class
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22440
I see @dongjoon-hyun. Sure, sorry, I wasn't aware of how this works.
Shall I create a new PR based on @jamesthomp's commits, or is it enough to
update this one by adding his commits?
Sorry
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22395
thank you all for the reviews
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22375#discussion_r218094296
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelperSuite.scala
---
@@ -35,6 +36,13 @@ class
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22440#discussion_r218022452
--- Diff: docs/sql-programming-guide.md ---
@@ -1879,6 +1879,7 @@ working with timestamps in `pandas_udf`s to get the
best performance, see
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22440
cc @dongjoon-hyun @HyukjinKwon @viirya
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/22440
[SPARK-24151][SQL] Case insensitive resolution of CURRENT_DATE and
CURRENT_TIMESTAMP
## What changes were proposed in this pull request?
SPARK-22333 introduced a regression
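The idea of the fix can be sketched as comparing the function name case-insensitively before taking the special literal-function path (the set and helper below are hypothetical, not the Analyzer's actual code):

```scala
import java.util.Locale

// Hypothetical sketch: CURRENT_DATE / CURRENT_TIMESTAMP are "literal
// functions" and should resolve regardless of the case they are written in.
val literalFunctions = Set("current_date", "current_timestamp")

def isLiteralFunction(name: String): Boolean =
  literalFunctions.contains(name.toLowerCase(Locale.ROOT))

println(isLiteralFunction("CURRENT_DATE"))      // true
println(isLiteralFunction("current_timestamp")) // true
println(isLiteralFunction("Now"))               // false
```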
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/21217
if it's OK with you @viirya, I am submitting a PR for this then. I'll
specify in the description that the credit should be given to @jamesthomp, but
this can also be done by the committer when
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22395#discussion_r217982791
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ArithmeticExpressionSuite.scala
---
@@ -143,16 +143,14 @@ class
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22395#discussion_r217977048
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ArithmeticExpressionSuite.scala
---
@@ -143,16 +143,14 @@ class
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22395
@gatorsmile I checked on MySQL 5.6 and there are 2 differences between
MySQL's `div` and the current implementation:
- MySQL returns an `int` if the div operands are integers, `bigint
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22395#discussion_r217747916
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -314,6 +314,32 @@ case class Divide(left
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22420
The change seems reasonable to me but could you also improve the
description? I mean, can you provide some evidence about how to reproduce the
issue and see that it is solved? In the JIRA you
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22364
@maropu I have run the following benchmark:
```scala
test("AttributeSet -- benchmark") {
  val attrSetA = AttributeSet((1 to 100).map { i =>
    AttributeReference(s"
```
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22364
@maropu anyway I checked and that is the only other place where this
pattern happens, so I am OK including it here. The point is that the
situation there is a bit different, ie
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/21217
Thanks for pinging me @HyukjinKwon . I can take it over too, let me know.
Thanks.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22364
@maropu yes, that can be done as well, but the main focus of this PR is the
`ColumnPruning` rule, so I think it would be great to do that in a separate
PR. What do you think? Thanks
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22375#discussion_r217495874
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelperSuite.scala
---
@@ -35,6 +36,13 @@ class
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/22038
@wangyum yes, it seems that the new behavior is the correct one. Maybe we
can change Spark's behavior in 3.0. WDYT @cloud-fan @gatorsmile
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22395#discussion_r217390828
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -314,6 +314,27 @@ case class Divide(left
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22375#discussion_r217381027
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelperSuite.scala
---
@@ -35,6 +36,13 @@ class