Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18156
I don't understand why the value of 3.1415f is actually something like
3.1414...
From the user's point of view, I think 3.142 would be more reasonable.
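The behaviour in question is plain IEEE-754: 3.1415 has no exact binary32 representation, so widening the float to double exposes the nearest stored value. A minimal JVM sketch (plain Scala, independent of Spark):

```scala
// 3.1415f is stored as the nearest binary32 value, which lies slightly
// below 3.1415; widening to double makes that stored value visible.
val f: Float = 3.1415f
println(f.toDouble)                           // 3.1414999961853027
// rounding the widened double to 3 places therefore gives 3.141, not 3.142
println(math.rint(f.toDouble * 1000) / 1000)  // 3.141
```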
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18156
[SPARK-20933][SQL] When the input parameter is float type, the 'round'
or 'bround' function can't work well
## What changes were proposed in this pull request?
spark-sql>sel
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18155
[SPARK-20876][SQL][Backport-2.2] If the input parameter is float type for
ceil or floor, the result is not what we expected
## What changes were proposed in this pull request?
This PR
GitHub user 10110346 reopened a pull request:
https://github.com/apache/spark/pull/18082
[SPARK-20665][SQL][FOLLOW-UP]Move test case to MathExpressionsSuite
## What changes were proposed in this pull request?
Add a test case to MathExpressionsSuite, as in #17906
## How
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/18082
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18103#discussion_r118812033
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/MathExpressionsSuite.scala
---
@@ -258,6 +258,16 @@ class
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18103#discussion_r118810938
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/MathExpressionsSuite.scala
---
@@ -258,6 +258,16 @@ class
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18103#discussion_r118810895
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -232,12 +232,13 @@ case class Ceil(child
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18059
OK, I will close it, thanks @cloud-fan
---
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/18059
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18059
@cloud-fan Hive uses Float:
select if(true, 123456789, cast(1.23 as float));
++--+
| (IF(true, CAST(123456789 AS FLOAT
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18103
[SPARK-20876][SQL] If the input parameter is float type for ceil, the result
is not what we expected
## What changes were proposed in this pull request?
spark-sql>SELECT ceil(cast(12345.1
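The query above is truncated; using a hypothetical literal 12345.1233 for illustration, the surprise comes from the cast to float discarding digits before `ceil` ever runs (plain Scala, not Spark code):

```scala
// `cast(... as float)` keeps only the nearest binary32 value, so any
// function applied afterwards sees 12345.123046875, not 12345.1233
val f: Float = 12345.1233f
println(f.toDouble)     // 12345.123046875
println(math.ceil(f))   // 12346.0
println(math.floor(f))  // 12345.0
```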
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18082
@gatorsmile OK, I will do it, thanks
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18059
@cloud-fan The current behavior always uses `float`, but using `double`
can reduce the loss of precision, so I think using `double` would be better
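The precision argument can be checked directly on the JVM (plain Scala, using the 123456789 value from the `if(true, 123456789, cast(1.23 as float))` example):

```scala
// Float has a 24-bit significand, so 123456789 does not survive a
// round-trip through Float; Double (53-bit significand) holds every Int.
println(123456789.toFloat.toLong)   // 123456792 -- precision lost
println(123456789.toDouble.toLong)  // 123456789 -- exact
```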
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18059
I have tested in MySQL, and the result datatype seems to be `DecimalType`:
mysql> desc test;
++-+--+-+-+---+
| Field | Type| Null |
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18082
Done. Should I delete the unit test case from `MathFunctionsSuite.scala`?
@gatorsmile
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18059
Could you help review it? Thanks @hvanhovell @cloud-fan
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18082
[SPARK-20665][SQL][FOLLOW-UP] Move test case to SQLQueryTestSuite
## What changes were proposed in this pull request?
Add a test case to SQLQueryTestSuite, as in #17906
## How
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
Test passed, thanks. @ueshin @gatorsmile
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18059
I think we should try our best to ensure accuracy, no matter the scenario.
@wzhfy
---
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/18054
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18059
[SPARK-20834][SQL] TypeCoercion: in case 1, loss of precision is not acceptable
## What changes were proposed in this pull request?
spark-sql>select if(true, 123456789, cast(1.23 as fl
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@kiszk @gatorsmile Test is not started, could you help trigger it again?
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18054
[SPARK-20763][SQL][Backport-2.1] The `month` and `day` functions return
values that are not what we expected.
## What changes were proposed in this pull request?
This PR is to backport
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/18053
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18053
OK, thanks @wzhfy @ueshin
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18053
Yes, merge this into branch-2.1.
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18053
This change has been merged to master/2.2 in PR #17997.
The current PR just submits a backport to 2.1. @gatorsmile @wzhfy
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18053
[SPARK-20763][SQL][Backport-2.1] The `month` and `day` functions return
values that are not what we expected.
## What changes were proposed in this pull request?
This PR is to backport
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
OK, I will do it, thanks. @gatorsmile
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@gatorsmile I have added test cases to the file `cast.sql` , thanks.
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17880
I have fixed the Scala style issues.
The test has not started, could you help trigger it? Thanks @HyukjinKwon
@gatorsmile
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
@gatorsmile I have tested like this:
`val dav = Date.valueOf("1582-10-04");
val date = new Date(dav.getTime);
println(date.toString)`
The output is: 1582-10-04
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117181219
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117180644
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117158817
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117158477
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -76,6 +76,9 @@ class
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117155497
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -76,6 +76,9 @@ class
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117153595
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -76,6 +76,9 @@ class
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r116954928
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -76,6 +76,9 @@ class
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r116952974
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
@ueshin Yes, I will do it, thanks
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
@srowen @rxin So, by comparison, maybe a hack in one place is the better
solution
---
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r116888479
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -601,22 +601,32 @@ object DateTimeUtils
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
I have tried changing `getYearAndDayInYear` like this:
`private[this] def getYearAndDayInYear(daysSince1970: SQLDate): (Int, Int,
Int) = {
val date = new Date(daysToMillis
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
@srowen I have tested in mysql, it can support dates before 1970.
mysql> select month("1582-09-28");
+-+
| mont
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17997
[SPARK-20763][SQL] The `month` and `day` functions return a wrong value
## What changes were proposed in this pull request?
spark-sql>select month("1582-09-28");
spark-sql
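For context, the JVM's legacy date classes use a hybrid Julian/Gregorian calendar (cutover at 1582-10-15), which is what makes dates such as 1582-09-28 delicate. A sketch of the expected field values (plain `java.sql.Date` / `java.util.Calendar`, not Spark internals):

```scala
import java.util.Calendar

// java.sql.Date and Calendar keep pre-cutover dates in the Julian
// calendar, so the month/day fields of 1582-09-28 round-trip intact.
val d = java.sql.Date.valueOf("1582-09-28")
val cal = Calendar.getInstance()
cal.setTime(d)
println(cal.get(Calendar.MONTH) + 1)     // 9
println(cal.get(Calendar.DAY_OF_MONTH))  // 28
```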
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
@cloud-fan Spark 2.0 and Spark 2.1 have the same issue. I have updated
the affected versions in the JIRA. Thanks!
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
@cloud-fan OK, I will do it
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
@cloud-fan
I have tested in mysql:
mysql> select round(12.3, 2);
++
| round(12.3, 2)|
++
| 12
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
I have updated it, and the test passed.
Please review it again, thanks @srowen @rxin @HyukjinKwon
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@HyukjinKwon I agree with you, I will try, thanks.
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
Please review it, thanks @dongjoon-hyun @cloud-fan
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
OK, I will do it, thanks @dongjoon-hyun
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
"round" has the same problem.@ dongjoon-hyun . Actually, this PR can solve
the problem for both of them
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17906
[SPARK-20665][SQL]"Bround" function return NULL
## What changes were proposed in this pull request?
>select bround(12.3, 2);
>NULL
For this case, the expecte
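`bround` is round-half-even (banker's rounding); the expected behaviour can be sketched with `java.math.BigDecimal` (a plain-JVM sketch, not the Spark implementation):

```scala
import java.math.{BigDecimal => JBigDecimal, RoundingMode}

// HALF_EVEN rounds ties to the nearest even digit; for a non-tie
// input like 12.3 it simply pads the scale.
println(new JBigDecimal("12.3").setScale(2, RoundingMode.HALF_EVEN)) // 12.30
println(new JBigDecimal("2.5").setScale(0, RoundingMode.HALF_EVEN))  // 2
println(new JBigDecimal("3.5").setScale(0, RoundingMode.HALF_EVEN))  // 4
```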
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17880
Done, thanks @HyukjinKwon
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17880
@gatorsmile Thanks, I will do it.
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17796
@srowen done, thank you very much.
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17880
[SPARK-20620][TEST] Add some unit tests to NullExpressionsSuite
## What changes were proposed in this pull request?
1. Add more data types to existing unit tests
2. Add new unit tests for other
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
I have tested all of
them (boolean, tinyint, smallint, int, bigint, float, double, decimal, date, timestamp, binary, string);
they all work properly. @srowen @rxin
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17796
The test failed, and I don't know why.
An environment problem? Could you help me? @gatorsmile
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17796
I quite agree with you.
I will do it, thanks @srowen
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@rxin, would you help review it again? Thanks
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@srowen, can you review it again?
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17796
OK, thanks for reviewing it, @srowen
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17796
[SPARK-20519][SQL][CORE] Prevent some possible runtime exceptions
Signed-off-by: liuxian <liu.xi...@zte.com.cn>
## What changes were proposed in this pull r
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@srowen The test has not started, could you help trigger it?
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
Can Jenkins test this?
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
Jenkins, test this please.
---
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17698#discussion_r112392683
--- Diff:
examples/src/main/scala/org/apache/spark/examples/LocalKMeans.scala ---
@@ -76,8 +76,8 @@ object LocalKMeans {
showWarning
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17698#discussion_r112392592
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/UIData.scala ---
@@ -98,9 +98,9 @@ private[spark] object UIData {
var schedulingPool
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17698#discussion_r112392046
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -1036,3 +1036,8 @@ case class UpCast(child: Expression
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17698
[SPARK-20403][SQL][Documentation] Modify the descriptions of some functions,
and add a description for the 'cast' function
## What changes were proposed in this pull request?
1. 'hashSet'
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17132
[SPARK-19792][WebUI] In the Master Page, the column named 'Memory per
Node' is, I think, not quite right
Signed-off-by: liuxian <liu.xi...@zte.com.cn>
#
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/17007
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17007
Change 'var' to 'val' for better code style
Signed-off-by: liuxian <liu.xi...@zte.com.cn>
## What changes were proposed in this pull request?
(Please fill in changes pr
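The rule behind the PR title, as a minimal hypothetical sketch (not the code touched by the PR):

```scala
// Prefer `val` (immutable binding) over `var`: it documents intent and
// lets the compiler reject accidental reassignment.
val names = Seq("a", "b", "c")    // cannot be rebound
val joined = names.mkString(",")  // derive new values instead of mutating
println(joined)                   // a,b,c
```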