GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17007
Change 'var' to 'val' for better specification
Signed-off-by: liuxian <liu.xi...@zte.com.cn>
## What changes were proposed in this pull request?
(Please fill in changes pr
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/17007
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17132
[SPARK-19792][webui] In the Master Page, the column named "Memory per
Node", I think it is not all right
Signed-off-by: liuxian <liu.xi...@zte.com.cn>
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
Jenkins, test this please.
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17698#discussion_r112392592
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/UIData.scala ---
@@ -98,9 +98,9 @@ private[spark] object UIData {
var schedulingPool
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17698#discussion_r112392683
--- Diff:
examples/src/main/scala/org/apache/spark/examples/LocalKMeans.scala ---
@@ -76,8 +76,8 @@ object LocalKMeans {
showWarning
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17698#discussion_r112392046
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -1036,3 +1036,8 @@ case class UpCast(child: Expression
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17698
[SPARK-20403][SQL][Documentation]Modify the instructions of some functions,
and add instructions of 'cast' function
## What changes were proposed in this pull request?
1. `hashSet`
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
Can Jenkins test this?
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130330782
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
I have updated this PR, please help review it again. @viirya also cc
@cloud-fan @gatorsmile
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
@viirya Maybe adding `ResolvedOrdinal` is not ideal.
I have another problem:
`select a, **4 AS k**, count(b) from data group by k, 1;`
This test case raises the same exception
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
@viirya Only for `group-by ordinal`, I think this is a good idea,
but this will also result in inconsistent processing between `order-by
ordinal` and `group-by ordinal`,
and I feel
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
`k` is resolved to `4` in `ResolveAggAliasInGroupBy`, and then `4` is
resolved to `ResolvedOrdinal(4)`
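The mis-resolution described in this comment can be sketched as two passes. This is a plain-Java illustration with made-up names (`resolveAggAlias`, `substituteOrdinals` stand in for the analyzer rules mentioned above), not Spark's actual analyzer code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class OrdinalBug {
    // Pass 1 (like ResolveAggAliasInGroupBy): replace a GROUP BY alias with
    // the select-list expression it names. For `select a, 4 AS k, count(b)
    // ... group by k, 1`, "k" becomes the literal "4".
    public static List<String> resolveAggAlias(List<String> groupBy, Map<String, String> aliases) {
        List<String> out = new ArrayList<>();
        for (String g : groupBy) out.add(aliases.getOrDefault(g, g));
        return out;
    }

    // Pass 2 (like ordinal substitution): treat integer literals as select-list
    // ordinals. The "4" produced by pass 1 is wrongly picked up here too,
    // giving ordinal 4 against a 3-column select list.
    public static List<String> substituteOrdinals(List<String> groupBy, List<String> selectList) {
        List<String> out = new ArrayList<>();
        for (String g : groupBy) {
            try {
                int ord = Integer.parseInt(g);
                if (ord < 1 || ord > selectList.size()) {
                    throw new IllegalArgumentException("GROUP BY position " + ord
                        + " is not in select list (valid range is [1, " + selectList.size() + "])");
                }
                out.add(selectList.get(ord - 1));
            } catch (NumberFormatException e) {
                out.add(g); // not a literal integer: keep the expression as-is
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> select = Arrays.asList("a", "4", "count(b)");
        List<String> groupBy = resolveAggAlias(Arrays.asList("k", "1"), Map.of("k", "4"));
        try {
            substituteOrdinals(groupBy, select);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // the spurious "position 4" error
        }
    }
}
```

The point of the sketch: the literal produced by alias substitution is indistinguishable from a user-written ordinal by the time the second pass runs, which is why the exception above is raised.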
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
@maropu Could you not put the test cases at the end of
`group-by-ordinal.sql`?
Because it sets `spark.sql.groupByOrdinal=false;` at the end
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
I have run the `SubstituteUnresolvedOrdinals` rule with `Once`; it still
looks like there are some problems, I will confirm it
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
@maropu `select 3 as c, 4 as d, sum(b) from data group by c, d`
This test case still throws an exception with your modification: GROUP BY
position 4 is not in select list (valid range is [1, 3
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
I think it is a perfect solution, thank you very much. @viirya @maropu
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
OK, thanks @viirya
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309163
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309076
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131298714
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131300093
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131292299
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131329942
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala ---
@@ -557,4 +557,22 @@ class DataFrameAggregateSuite extends
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131329917
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,18 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131334204
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,18 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131323389
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131351217
--- Diff: sql/core/src/test/resources/sql-tests/inputs/order-by-ordinal.sql
---
@@ -34,3 +34,8 @@ set spark.sql.orderByOrdinal=false;
-- 0 is now
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18228
retest this please
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126359992
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1199,6 +1199,82 @@ case class Substring
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126325282
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1199,6 +1199,85 @@ case class Substring
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18228
@gatorsmile Thanks
I have tested in mysql:
mysql> select right("sparksql",null);
+------------------------+
| right("sparksql",null) |
+------------------------+
| NULL                   |
+------------------------+
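The null-propagating semantics being confirmed against MySQL above can be sketched in plain Java. This is an illustrative sketch of the intended behaviour (a null length argument yields NULL), not Spark's implementation of `right`:

```java
public class RightDemo {
    // right(str, len): null if either argument is null, the empty string if
    // len <= 0, otherwise the last `len` characters of str.
    public static String right(String str, Integer len) {
        if (str == null || len == null) return null; // null in, null out
        if (len <= 0) return "";
        return str.length() <= len ? str : str.substring(str.length() - len);
    }

    public static void main(String[] args) {
        System.out.println(right("sparksql", null)); // null, like MySQL
        System.out.println(right("sparksql", 3));    // sql
    }
}
```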
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126341299
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1199,6 +1199,82 @@ case class Substring
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126599834
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1199,6 +1199,82 @@ case class Substring
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126608382
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1199,6 +1199,82 @@ case class Substring
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126617750
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1199,6 +1199,45 @@ case class Substring
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126624733
--- Diff:
sql/core/src/test/resources/sql-tests/results/string-functions.sql.out ---
@@ -86,3 +86,35 @@ select position('bar' in 'foobarbar'), position
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18228
@cloud-fan Could you help to review it? thanks
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18251
Making `DEFAULT_NUM_ELEMENTS_FOR_SPILL_THRESHOLD` two times smaller may
not be a good idea; this will increase spilling.
Adding a safe check in `UnsafeExternalSorter.growPointerArrayIfNecessary
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18711#discussion_r128893064
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -580,7 +580,9 @@ private[deploy] class Master(
* The number
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18711
[SPARK-21506][DOC]The description of "spark.executor.cores" may not be
correct
## What changes were proposed in this pull request?
The number of cores assigned to eac
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18711#discussion_r129251166
--- Diff: docs/configuration.md ---
@@ -1103,10 +1103,10 @@ Apart from these, the following properties are also
available, and may be useful
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18711#discussion_r128952895
--- Diff: docs/configuration.md ---
@@ -1106,7 +1106,7 @@ Apart from these, the following properties are also
available, and may be useful
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18711
@jiangxb1987 If `app.coresLeft` is not zero and there are no free
cores left, it does not end. It waits until some workers have free cores, and
then this app will continue to be assigned cores
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18711
If the other application is finished, it will release cores
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18711
retest this please
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126842852
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1199,6 +1199,49 @@ case class Substring
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18228#discussion_r126842908
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1199,6 +1199,49 @@ case class Substring
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18435
retest this please.
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18228
`Substring` is most commonly used as `left` and `right`, and I think
these forms are more friendly for users.
Also MySQL and SQL Server support these two functions and `Substring
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18522#discussion_r125379907
--- Diff:
core/src/main/scala/org/apache/spark/util/logging/FileAppender.scala ---
@@ -76,7 +76,11 @@ private[spark] class FileAppender(inputStream
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18522
[MINOR] Close streams and release any system resources associated with
them
## What changes were proposed in this pull request?
Closes the input stream or output stream and releases any
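The cleanup this PR is about can be sketched with try-with-resources, which guarantees both streams are closed (releasing their system resources) even when the work fails midway. This is a generic illustration, not the PR's actual diff; the names are made up:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class CopyStream {
    // Copy src to dst; try-with-resources closes both streams at the end of
    // the block, whether the copy succeeds or throws.
    public static void copy(Path src, Path dst) throws IOException {
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } // both streams closed here, exception or not
    }
}
```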
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18522#discussion_r125380867
--- Diff:
core/src/main/scala/org/apache/spark/util/logging/FileAppender.scala ---
@@ -76,7 +76,11 @@ private[spark] class FileAppender(inputStream
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18522#discussion_r125394310
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -488,7 +488,7 @@ class UtilsSuite extends SparkFunSuite
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18522#discussion_r125455468
--- Diff:
core/src/main/scala/org/apache/spark/util/logging/FileAppender.scala ---
@@ -76,7 +76,11 @@ private[spark] class FileAppender(inputStream
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18507
[SPARK-21283][core]FileOutputStream should be created in append mode
## What changes were proposed in this pull request?
`FileAppender` is used to write `stderr` and `stdout` files
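The append-mode point can be shown with `FileOutputStream`'s two-argument constructor: passing `true` opens the file for appending, so re-creating an appender does not truncate an existing stderr/stdout log. A minimal sketch (illustrative names, not the `FileAppender` code itself):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintWriter;

public class AppendDemo {
    // Append one line to the log file without truncating what is there.
    public static void append(File log, String line) throws IOException {
        // second argument `true` => open in append mode
        try (PrintWriter w = new PrintWriter(new FileOutputStream(log, true))) {
            w.println(line);
        }
    }
}
```

Without the `true` flag, each call would reopen the file at length zero and earlier log lines would be lost, which is the symptom the PR describes.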
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18507
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18507
retest this please
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18507#discussion_r125250432
--- Diff: core/src/test/scala/org/apache/spark/util/FileAppenderSuite.scala
---
@@ -52,10 +52,12 @@ class FileAppenderSuite extends SparkFunSuite
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18174
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@srowen The test has not started, could you help trigger it?
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17796
[SPARK-20519][SQL][CORE]Modify to prevent some possible runtime exceptions
Signed-off-by: liuxian <liu.xi...@zte.com.cn>
## What changes were proposed in this pull r
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17796
OK, thanks for reviewing it @srowen
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
I learned a lot from you, thanks all.
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18779
[SPARK-21580][SQL]There's a bug with `Group by ordinal`
## What changes were proposed in this pull request?
create temporary view data as select * from values
(1, 1),
(1, 2
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
Thanks, I will update it @viirya
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18522
`process.getInputStream` is closed in `driverRunner`, but it is not closed
in `ExecutorRunner`.
Which approach is correct? @jiangxb1987 @srowen
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130503562
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130507794
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r130504796
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1010,7 +1014,16 @@ class Analyzer
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18711#discussion_r130211732
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -580,7 +580,13 @@ private[deploy] class Master(
* The number
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18522
Thanks @srowen
I think it's better to keep `driverRunner` and `ExecutorRunner` the
same. @cloud-fan
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18711
@JoshRosen @cloud-fan Could you help to review it ? thanks
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
@cloud-fan ok, I will do it
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
@cloud-fan Spark 2.0 and Spark 2.1 have the same issue. I have updated
the affected versions in the JIRA. Thanks!
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
Please review it, thanks @dongjoon-hyun @cloud-fan
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
@cloud-fan
I have tested in mysql:
mysql> select round(12.3, 2);
+----------------+
| round(12.3, 2) |
+----------------+
|          12.30 |
+----------------+
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
I have updated it, and the test passed.
Please review it again, thanks @srowen @rxin @HyukjinKwon
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@HyukjinKwon I agree with you, I will try, thanks.
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17906
[SPARK-20665][SQL]"Bround" function returns NULL
## What changes were proposed in this pull request?
>select bround(12.3, 2);
>NULL
For this case, the expecte
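The expected semantics of `bround` here are round-half-even ("banker's rounding"), which on the JVM corresponds to `RoundingMode.HALF_EVEN`. A minimal sketch of what `bround(12.3, 2)` should produce instead of NULL (an illustration of the semantics, not Spark's code):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BroundDemo {
    // bround = round-half-even at the given scale
    public static BigDecimal bround(BigDecimal d, int scale) {
        return d.setScale(scale, RoundingMode.HALF_EVEN);
    }

    public static void main(String[] args) {
        System.out.println(bround(new BigDecimal("12.3"), 2)); // 12.30, not NULL
        System.out.println(bround(new BigDecimal("2.5"), 0));  // 2 (half rounds to even)
        System.out.println(bround(new BigDecimal("3.5"), 0));  // 4 (half rounds to even)
    }
}
```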
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117158817
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117153595
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -76,6 +76,9 @@ class
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117158477
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -76,6 +76,9 @@ class
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117180644
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117181219
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r117155497
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -76,6 +76,9 @@ class
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
@ueshin Yes, I will do it, thanks
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r116952974
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -603,7 +603,13 @@ object DateTimeUtils
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17997#discussion_r116954928
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -76,6 +76,9 @@ class
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
I have tried changing `getYearAndDayInYear` like this:
`private[this] def getYearAndDayInYear(daysSince1970: SQLDate): (Int, Int,
Int) = {
val date = new Date(daysToMillis
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17997
@srowen I have tested in mysql, it can support dates before 1970.
mysql> select month("1582-09-28");
+---------------------+
| month("1582-09-28") |
+---------------------+
|                   9 |
+---------------------+
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17997
[SPARK-20763][SQL]The `month` and `day` functions return a wrong value
## What changes were proposed in this pull request?
spark-sql>select month("1582-09-28");
spark-sql
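The expected result for this date can be checked with `java.time`, whose `LocalDate` uses the proleptic Gregorian calendar: `month("1582-09-28")` should be 9 and `day` should be 28. A minimal sketch of the expected behaviour, not Spark's `DateTimeUtils` code:

```java
import java.time.LocalDate;

public class MonthDayDemo {
    public static void main(String[] args) {
        // LocalDate is proleptic Gregorian, so pre-1582 and pre-1970 dates
        // parse and decompose consistently.
        LocalDate d = LocalDate.parse("1582-09-28");
        System.out.println(d.getMonthValue()); // 9
        System.out.println(d.getDayOfMonth()); // 28
    }
}
```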
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
OK, I will do it, thanks @dongjoon-hyun
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
"round" has the same problem.@ dongjoon-hyun . Actually, this PR can solve
the problem for both of them
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/18238
[SPARK-21016][core]Improve code fault tolerance for converting string to
number
## What changes were proposed in this pull request?
When converting `string` to `number`(int, long or double
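Fault-tolerant string-to-number conversion in the spirit of this PR can be sketched by catching `NumberFormatException` instead of letting a malformed string crash the caller. The name `toInt` and the `Optional` return shape are illustrative choices, not the PR's actual API:

```java
import java.util.OptionalInt;

public class SafeParse {
    // Return the parsed int, or empty when the input is null or malformed,
    // instead of throwing.
    public static OptionalInt toInt(String s) {
        try {
            return OptionalInt.of(Integer.parseInt(s.trim()));
        } catch (NumberFormatException | NullPointerException e) {
            return OptionalInt.empty(); // malformed or null input => empty
        }
    }

    public static void main(String[] args) {
        System.out.println(toInt("42"));   // OptionalInt[42]
        System.out.println(toInt("4x2"));  // OptionalInt.empty
    }
}
```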
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17880
I have fixed the `Scala style` issues.
The test has not started, could you help trigger it, thanks @HyukjinKwon
@gatorsmile
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@gatorsmile I have added test cases to the file `cast.sql` , thanks.
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18059
I think we should try our best to ensure accuracy, no matter what scenario
@wzhfy