Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/19440
@gatorsmile I have a question: should this also be handled in the other execs? For example, like
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22401
So, to summarize the discussion:
1. The current code, which does (p + 10, s), is still a mystery, but it breaks the use case
mentioned in SPARK-25413.
2. The changes in this PR work because of the missing add
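The "(p + 10, s)" mentioned in point 1 refers to how Sum widens a decimal input type to make room for accumulated digits. A minimal sketch of that widening, as a simplified model rather than Spark's actual code, assuming a maximum precision of 38:

```scala
// Sketch of the "(p + 10, s)" widening discussed above: Sum over an
// input of DecimalType(p, s) widens the result to DecimalType(p + 10, s),
// capped at the maximum supported precision (a simplified model of
// DecimalType.bounded, not Spark's implementation).
val MaxPrecision = 38

def sumResultType(p: Int, s: Int): (Int, Int) =
  (math.min(p + 10, MaxPrecision), s)

println(sumResultType(10, 2)) // => (20,2)
println(sumResultType(32, 6)) // capped at 38 => (38,6)
```

The cap is what makes the behavior hard to reason about near the precision limit, which is the scenario SPARK-25413 exercises.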
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22401
It is difficult for the end user to set this depending on their query (queries
which are similar to SPARK-25413 and SPARK-24957), but yes, I agree with the
point that rounding off the scale is better than
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22401
I agree with the resolution on + vs sum, but I also see that the avg
precision and scale cannot be calculated well ahead in this case in a way that
satisfies all scenarios. I am just suggesting this as
Github user ajithme commented on a diff in the pull request:
https://github.com/apache/spark/pull/22401#discussion_r217038517
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Average.scala
---
@@ -36,7 +36,13 @@ abstract class AverageLike
Github user ajithme commented on a diff in the pull request:
https://github.com/apache/spark/pull/22401#discussion_r216984413
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Average.scala
---
@@ -36,7 +36,13 @@ abstract class AverageLike
Github user ajithme commented on a diff in the pull request:
https://github.com/apache/spark/pull/22401#discussion_r216979582
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Average.scala
---
@@ -36,7 +36,13 @@ abstract class AverageLike
Github user ajithme commented on a diff in the pull request:
https://github.com/apache/spark/pull/22401#discussion_r216954947
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Average.scala
---
@@ -36,7 +36,13 @@ abstract class AverageLike
GitHub user ajithme opened a pull request:
https://github.com/apache/spark/pull/22401
[SPARK-25413] Precision value is going for toss when Avg is done
As per the definition, see
org.apache.spark.sql.catalyst.analysis.DecimalPrecision
Operation : e1 + e2
Result
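The e1 + e2 rule in DecimalPrecision can be sketched as follows. This is a simplified standalone model of the documented result-type formula (scale = max(s1, s2), precision = max(p1 - s1, p2 - s2) + max(s1, s2) + 1), not Spark's actual implementation:

```scala
// Simplified model of the e1 + e2 result type from
// org.apache.spark.sql.catalyst.analysis.DecimalPrecision:
//   scale     = max(s1, s2)
//   precision = max(p1 - s1, p2 - s2) + max(s1, s2) + 1
case class Dec(precision: Int, scale: Int)

def addResultType(e1: Dec, e2: Dec): Dec = {
  val scale     = math.max(e1.scale, e2.scale)
  val intDigits = math.max(e1.precision - e1.scale, e2.precision - e2.scale)
  Dec(intDigits + scale + 1, scale)
}

// Example: DECIMAL(10, 2) + DECIMAL(10, 2) => DECIMAL(11, 2)
println(addResultType(Dec(10, 2), Dec(10, 2)))
```

The extra +1 digit covers a possible carry; note this sketch omits the capping to the maximum precision that Spark applies on top of the formula.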
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22277
Attaching an SQL file to reproduce the issue and see the effect of the PR:
[test.txt](https://github.com/apache/spark/files/2356468/test.txt)
### Without patch
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22277
I see. But the code modified in this PR applies when the alias is part of the
projection. The query you mentioned does not seem to hit the current alias logic
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22277
@jiangxb1987 Thank you for the feedback. A couple of points:
1. If we introduce a predicate which refers to the alias (as you mentioned, a > z),
it will throw an error
```
spark-sql> create
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22277
@gatorsmile and @jiangxb1987 any inputs.?
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
GitHub user ajithme opened a pull request:
https://github.com/apache/spark/pull/22277
[SPARK-25276] Redundant constrains when using alias
Attaching a test to reproduce the issue. The test fails with the following
message:
test("redundant constrains") {
Github user ajithme closed the pull request at:
https://github.com/apache/spark/pull/22120
Github user ajithme commented on the issue:
https://github.com/apache/spark/pull/22120
@vanzin I agree it is a trivial change. I just wanted the output to be consistent
with yarn cluster mode. This is not only for event logs; also for a
custom SparkListener, it may be confusing that appId
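The confusion described above can be illustrated with a small stand-in for the event. The case class below is an illustrative model (the field names follow the real SparkListenerApplicationStart, but this is not Spark code, and the application id is a hypothetical value):

```scala
// Illustrative stand-in for SparkListenerApplicationStart: in yarn-client
// mode the event is fired with appAttemptId = None, so a custom listener
// cannot correlate the start event with a YARN attempt.
case class SparkListenerApplicationStart(
    appName: String,
    appId: Option[String],
    appAttemptId: Option[String])

val clientModeEvent =
  SparkListenerApplicationStart("myApp", Some("application_123_0001"), None)

// A listener inspecting the event sees no attempt id:
println(clientModeEvent.appAttemptId.getOrElse("<missing>"))
```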
GitHub user ajithme opened a pull request:
https://github.com/apache/spark/pull/22120
[SPARK-25131] Event logs missing applicationAttemptId for
SparkListenerApplicationStart
When master=yarn and deploy-mode=client, event logs do not contain
applicationAttemptId for