GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18853
[SPARK-21646][SQL] BinaryComparison shouldn't auto cast string to int/long
## What changes were proposed in this pull request?
How to reproduce:
hive:
```sql
$ hi
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18841
retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18841
[SPARK-21635][SQL] ACOS(2) and ASIN(2) should be null
## What changes were proposed in this pull request?
This PR makes ACOS(2) and ASIN(2) return null, the same as MySQL.
I have submitted a [patch
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18833
[SPARK-21625][SQL] sqrt(negative number) should be null.
## What changes were proposed in this pull request?
This PR makes `sqrt(negative number)` return null, the same as Hive and MySQL
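For context, a minimal spark-shell sketch of the behavior these two PRs target (assumes an existing SparkSession named `spark`):
```scala
// Illustrative only: before these changes Spark follows java.lang.Math and
// returns NaN for out-of-domain inputs; the PRs propose NULL instead,
// matching MySQL (and Hive, for sqrt).
spark.sql("SELECT acos(2), asin(2), sqrt(-1)").show()
```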
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
@HyukjinKwon Could you help review this?
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18808
Jenkins, test this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
I'll fix it
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
Throwing an exception in `SetCommand.scala` seems too rough, and throwing it in
`InsertIntoHiveTable` seems too late, so I log a warning in `SetCommand.scala`.
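A rough in-tree sketch of that approach (hypothetical key check, not the actual patch; `logWarning` comes from Spark's internal `Logging` trait):
```scala
import org.apache.spark.internal.Logging

// Hypothetical sketch: warn from the SET command path instead of throwing
// an exception late in InsertIntoHiveTable.
object DynamicPartitionSetCheck extends Logging {
  def warnIfIneffective(key: String, value: String): Unit = {
    if (key.equalsIgnoreCase("hive.exec.max.dynamic.partitions")) {
      logWarning(s"Setting $key=$value at runtime may not take effect for Hive writes.")
    }
  }
}
```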
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r130526899
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -1186,3 +1186,124 @@ case class BRound(child
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r130526538
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -1186,3 +1186,124 @@ case class BRound(child
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
@viirya Spark does not support that.
See: https://github.com/apache/spark/pull/17223#issuecomment-286608743
@dongjoon-hyun How about throwing an exception when users try to change them, as
@cloud-fan
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18769
[SPARK-21574][SQL] Fix set hive.exec.max.dynamic.partitions lose effect.
## What changes were proposed in this pull request?
How to reproduce:
```scala
val data = (0 until 1001
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18323
retest this please
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r130021930
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -1219,44 +1219,91 @@ case class WidthBucket
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r128482157
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/MathUtils.scala
---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r128482031
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/MathUtils.scala
---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r127601368
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -1186,3 +1186,51 @@ case class BRound(child
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18361#discussion_r127190632
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -197,7 +197,7 @@ class HadoopMapRedWriteConfigUtil[K, V: ClassTag
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/18361
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18527
Retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18527
[SPARK-21101][SQL] Catch IllegalStateException when CREATE TEMPORARY
FUNCTION
## What changes were proposed in this pull request?
It must `override` [`public StructObjectInspector
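A hedged sketch of the failure mode this PR targets (the class name is a placeholder; the exact exception message depends on Hive):
```scala
// Hypothetical: a Hive UDTF that does not properly override
// initialize(StructObjectInspector) can surface an IllegalStateException when
// the temporary function is created; the PR proposes catching it and
// reporting a clearer error.
try {
  spark.sql("CREATE TEMPORARY FUNCTION my_udtf AS 'com.example.MyBrokenUdtf'")
} catch {
  case e: IllegalStateException =>
    println(s"CREATE TEMPORARY FUNCTION failed: ${e.getMessage}")
}
```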
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18466
@dongjoon-hyun Try the following to reproduce. I missed
`spark.serializer=org.apache.spark.serializer.KryoSerializer`; it is part of my
default config:
```
spark-shell --conf
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18490
[SPARK-21269][Core][WIP] Fix FetchFailedException when enable
maxReqSizeShuffleToMem and KryoSerializer
## What changes were proposed in this pull request?
Spark **cluster** can
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18466
Yes, I reproduced it on a YARN cluster; local mode can't reproduce it. It seems
`DownloadCallback` doesn't really work.
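For reference, an illustrative SparkSession configuration for this class of reproduction (the key spelling and value format are assumptions based on the titles above; values are examples only, not recommendations):
```scala
// Illustrative only: Kryo serialization combined with a small threshold for
// fetching shuffle blocks to disk, matching the scenario described above.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("shuffle-fetch-repro")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("spark.reducer.maxReqSizeShuffleToMem", "200m")  // assumed key; example value
  .getOrCreate()
```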
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18445
Please set your username and check your email:
https://help.github.com/articles/setting-your-username-in-git/#platform-linux
https://help.github.com/articles/setting-your-email-in-git
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18445
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18466
@zsxwing @cloud-fan
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18466
[SPARK-21253][CORE] Disable use DownloadCallback fetch big blocks
## What changes were proposed in this pull request?
Disable using `DownloadCallback` to fetch big blocks.
## How was
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18323
retest this please.
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r123937970
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/util/MathUtilsSuite.scala
---
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r123930006
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/MathUtils.scala
---
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18106#discussion_r123920115
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -132,3 +133,154 @@ case class Uuid() extends
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18106#discussion_r123919660
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -132,3 +133,154 @@ case class Uuid() extends
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18413
[SPARK-21205][SQL] pmod(number, 0) should be null.
## What changes were proposed in this pull request?
Hive `pmod(3.13, 0)`:
```sql
hive> select pmod(3.13, 0);
OK
NULL
T
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/18195
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r123736535
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -82,7 +82,7 @@ object JDBCRDD extends Logging
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r123734456
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala
---
@@ -198,4 +205,37 @@ class
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r123733947
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -907,7 +907,7 @@ class JDBCSuite extends SparkFunSuite
assert
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18195
@vivekdixit05 I'm working on another PR,
https://github.com/apache/spark/pull/18266, which may be more general. I'll
finish it soon.
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r123677893
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/MathUtils.scala
---
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18323#discussion_r123677056
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/MathUtils.scala
---
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18361
Retest this please.
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
Yes, I also tend toward a single `trunc` function. I found that Presto provides
two functions, `date_trunc` and `truncate`.
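A brief illustration of the naming question (the date form already exists in Spark; the numeric form is what SPARK-20754 proposes, so treat it as hypothetical here):
```scala
// Existing date truncation:
spark.sql("SELECT trunc('2017-08-05', 'MM')").show()
// Proposed numeric truncation, by analogy with Oracle's TRUNC(number[, d]);
// hypothetical at the time of this thread:
// spark.sql("SELECT trunc(1234.567, 2)").show()
```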
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/18175
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18372
Now it's like this:
![configuration](https://user-images.githubusercontent.com/5399861/27361913-10c126a2-565d-11e7-9cd3-b3750bf8.jpg)
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18372
[MINOR][DOCS] Add lost tag for configuration.md
## What changes were proposed in this pull request?
Add a missing tag to `configuration.md`.
## How was this patch tested?
N/A
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
Retest this please.
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18361
Retest this please.
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18361
[SPARK-19660][SQL][FOLLOWUP] Replace mapred.input.dir.recursive to
mapreduce.input.fileinputformat.input.dir.recursive
## What changes were proposed in this pull request?
Replace
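For reference, a minimal sketch of setting the newer Hadoop key at runtime (illustrative only; the PR itself replaces Spark's internal use of the older `mapred.*` name):
```scala
// The mapreduce.* key is the newer replacement for mapred.input.dir.recursive.
spark.sparkContext.hadoopConfiguration
  .setBoolean("mapreduce.input.fileinputformat.input.dir.recursive", true)
```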
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18343#discussion_r122637320
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -141,8 +143,8 @@ private[spark] class HighlyCompressedMapStatus private
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18343#discussion_r122633016
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -175,6 +175,7 @@ class KryoSerializer(conf: SparkConf
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18343
@viirya Yes, I'm using `org.apache.spark.serializer.KryoSerializer`; the [master
branch](https://github.com/apache/spark/tree/ce49428ef7d640c1734e91ffcddc49dbc8547ba7)
still has this issue, error
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18343
@jinxing64 `big_table` may need to be big enough; my `big_table` is 270.7 GB:
```sql
spark-sql -e "
set spark.sql.shuffle.partitions=2001;
drop table if exists spark_hcm
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18343
cc @jinxing64
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18343
[SPARK-21133][CORE] Fix HighlyCompressedMapStatus#writeExternal throws NPE
## What changes were proposed in this pull request?
Fix HighlyCompressedMapStatus#writeExternal NPE
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/17886
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18173
@gatorsmile It's the same as [ABS function support string
type](https://github.com/apache/spark/pull/18153).
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18330
[SPARK-20749][SQL][FOLLOWUP] Support character_length
## What changes were proposed in this pull request?
The function `char_length` is shorthand for the `character_length` function.
Both
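A one-line illustration of the intent (assumes the alias is in place):
```scala
// Both spellings should resolve to the same expression and return 5 here.
spark.sql("SELECT char_length('Spark'), character_length('Spark')").show()
```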
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18323
Jenkins, retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18323
[SPARK-21117][SQL] Built-in SQL Function Support - WIDTH_BUCKET
## What changes were proposed in this pull request?
Add built-in SQL function - `WIDTH_BUCKET`
Ref:
https
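An illustrative call using the classic Oracle documentation example (hypothetical here, since the function is what this PR adds):
```scala
// With Oracle semantics, 5.35 falls into bucket 3 of 5 equal-width buckets
// over [0.024, 10.06).
spark.sql("SELECT width_bucket(5.35, 0.024, 10.06, 5)").show()
```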
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18271
Removed the duplication. It works for both Windows and Linux:
```
R -e "install.packages(c('knitr', 'rmarkdown', 'testthat', 'e1071',
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
cc @gatorsmile
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
Jenkins, retest this please
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18271#discussion_r121896463
--- Diff: R/WINDOWS.md ---
@@ -34,10 +34,10 @@ To run the SparkR unit tests on Windows, the following
steps are required - ass
4. Set the
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18266
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18266
retest please.
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18271
[MINOR][DOCS] Improve docs to Running R Tests
## What changes were proposed in this pull request?
`install.packages(testthat)` should be `install.packages("testthat")`,
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18266
[SPARK-20427][SQL] Read JDBC table use custom schema
## What changes were proposed in this pull request?
The auto-generated Oracle schema is sometimes not what we expect:
- `number(1)` auto
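A hedged sketch of the intended use (the `customSchema` option name and format are assumptions about where this work was heading, not necessarily this PR's API; the URL and table are placeholders):
```scala
// Illustrative: override the Oracle dialect's default NUMBER mappings with a
// user-specified schema when reading over JDBC.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//db.example.com:1521/XE")  // placeholder
  .option("dbtable", "t1")                                      // placeholder
  .option("customSchema", "id DECIMAL(38, 0), flag BOOLEAN")    // assumed option
  .load()
df.printSchema()
```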
GitHub user wangyum reopened a pull request:
https://github.com/apache/spark/pull/18247
[SPARK-13933][BUILD] Update hadoop-2.7 profile's curator version to 2.7.1
## What changes were proposed in this pull request?
Update hadoop-2.7 profile's curator version to 2
Github user wangyum closed the pull request at:
https://github.com/apache/spark/pull/18247
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18247
Jenkins, retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18247
[SPARK-13933][BUILD] Update hadoop-2.7 profile's curator version to 2.7.1
## What changes were proposed in this pull request?
Update hadoop-2.7 profile's curator versio
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18206
[SPARK-20754][SQL][FOLLOWUP] Add Function Alias For MOD/POSITION.
## What changes were proposed in this pull request?
https://github.com/apache/spark/pull/18106 supports TRUNC(number). We
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18195
[SPARK-20921][SQL][WIP] Support can config OracleDialect whether convert
number(1) to BooleanType
## What changes were proposed in this pull request?
Support can config OracleDialect
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18175
[SPARK-20951][SQL] Built-in SQL Function FormatNumber validate input value
## What changes were proposed in this pull request?
SQL function `FormatNumber` should validate the input value; if
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18173
[SPARK-20948][SQL] Built-in SQL Function UnaryMinus/UnaryPositive support
string type
## What changes were proposed in this pull request?
Built-in SQL Function UnaryMinus/UnaryPositive
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18153#discussion_r119340217
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -97,20 +97,30 @@ case class UnaryPositive(child
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18157
Jenkins, retest this please
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18153
Jenkins, retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18157
[MINOR][SQL] Fix a few function description error.
## What changes were proposed in this pull request?
Fix a few function description errors.
## How was this patch tested
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18153#discussion_r119271567
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -97,20 +97,30 @@ case class UnaryPositive(child
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18153
[SPARK-20931][SQL] ABS function support string type.
## What changes were proposed in this pull request?
Make the ABS function support string type; Hive and MySQL support this feature.
Ref
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18134#discussion_r119000601
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -402,6 +402,44 @@ case class DayOfMonth
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18138
[SPARK-20915][SQL] Make lpad/rpad with empty pad string same as MySQL.
## What changes were proposed in this pull request?
Spark SQL `rpad/lpad` with empty pad string:
```sql
spark
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18136
[SPARK-20910][SQL] Add built-in SQL function - UUID
## What changes were proposed in this pull request?
Add built-in SQL function - UUID.
## How was this patch tested
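Illustrative usage of the proposed built-in (hypothetical until the PR lands):
```scala
// Would return a random RFC 4122 UUID string, e.g.
// "46707d92-02f4-4817-8116-a4c3b23e6266".
spark.sql("SELECT uuid()").show(false)
```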
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18133
The indentation issues make the rendered descriptions look inconsistent:
![display](https://cloud.githubusercontent.com/assets/5399861/26527364/243235c8-43c5-11e7-85f4-faa132e19b2b.gif)
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18134
[SPARK-20909][SQL] Add built-in SQL function - DAYOFWEEK
## What changes were proposed in this pull request?
Add built-in SQL function - DAYOFWEEK
## How was this patch tested
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18133
[Minor] Fix some indent issues.
## What changes were proposed in this pull request?
Fix some indent issues.
## How was this patch tested?
existing tests.
You can merge
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18132
[SPARK-8184][SQL] Add additional function description for weekofyear
## What changes were proposed in this pull request?
Add additional function description for weekofyear.
## How
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18106
[SPARK-20754][SQL] Support TRUNC (number)
## What changes were proposed in this pull request?
Add support for `TRUNC(number)`; it's similar to Oracle's
[TRUNC(number)](
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18019
Jenkins, retest this please
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/18019#discussion_r117623979
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -1268,6 +1268,51 @@ case class Ascii(child
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18039
Jenkins, retest this please
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18039
[SPARK-20751][SQL] Add cot test in MathExpressionsSuite
## What changes were proposed in this pull request?
Add cot test in MathExpressionsSuite as
https://github.com/apache/spark/pull