Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/22141
I reproduced the issue with the following code (I was a bit surprised by the behavior).
The tables:
```scala
scala> spark.sql("SELECT * FROM users").show
+---+
```
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r206047653
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FailureSafeParser.scala
---
@@ -56,9 +57,14 @@ class FailureSafeParser[IN
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r206045407
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -450,7 +450,8 @@ class DataFrameReader private[sql](sparkSession
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r205974316
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2233,7 +2233,7 @@ class JsonSuite extends
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r205974224
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -203,19 +203,11 @@ class UnivocityParser
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r205974275
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FailureSafeParser.scala
---
@@ -56,9 +57,14 @@ class FailureSafeParser[IN
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r205969639
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -450,7 +450,8 @@ class DataFrameReader private[sql](sparkSession
Github user dmateusp closed the pull request at:
https://github.com/apache/spark/pull/21706
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21706
hey @HyukjinKwon, thanks for getting back to me on this :)
I'll close the PR now and start a thread later today on the dev mailing list.
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21740#discussion_r202614320
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala
---
@@ -165,7 +183,7 @@ class
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21740#discussion_r202542318
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala
---
@@ -165,7 +183,7 @@ class
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21764#discussion_r202539843
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/OptimizerRuleExclusionSuite.scala
---
@@ -0,0 +1,84
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21764#discussion_r202539342
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -175,6 +179,35 @@ abstract class Optimizer
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21764#discussion_r202539784
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -127,6 +127,14 @@ object SQLConf
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21764#discussion_r202538924
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -46,7 +47,23 @@ abstract class Optimizer
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21766
Just checked out the PR:
```scala
scala> spark.sql("SELECT CAST(1 as NUMERIC)")
res0: org.apache.spark.sql.DataFrame = [CAST(1 AS DECIMAL(10,0)): decimal(10,0)]
```
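For readers skimming the thread: `DECIMAL(10,0)` means precision 10 (at most ten digits in total) and scale 0 (no fractional digits), which is the default Spark assigns to `NUMERIC` in the output above. A minimal sketch of those decimal semantics using plain `java.math.BigDecimal` (the class Spark's internal `Decimal` wraps); the values below are illustrative, not from the PR:

```scala
import java.math.{BigDecimal, RoundingMode}

object DecimalSketch extends App {
  // Scale 0: any fractional part is rounded away when a value is
  // brought to DECIMAL(10,0).
  val one     = new BigDecimal("1").setScale(0, RoundingMode.HALF_UP)
  val rounded = new BigDecimal("2.7").setScale(0, RoundingMode.HALF_UP)

  println(one)     // 1
  println(rounded) // 3 (2.7 rounds up under HALF_UP)

  // Precision 10: more than ten integer digits would not fit the type.
  val tooWide = new BigDecimal("12345678901") // 11 digits > precision 10
  println(tooWide.precision) // 11
}
```

Spark enforces the same precision/scale bounds at cast time, which is why the result column is typed `decimal(10,0)` rather than an unbounded numeric.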
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21706
hey @gatorsmile, sorry to bother you, but could you clarify the above?
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21706
In the current Spark version I can run
```scala
scala> spark.sql("SELECT 'interval 1 hour' as a").select(col("a"
```
Github user dmateusp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21706#discussion_r200959306
--- Diff: sql/core/src/test/resources/sql-tests/inputs/cast.sql ---
@@ -42,4 +42,23 @@ SELECT CAST('9223372036854775808' AS long);
DES
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/10130
+1, any chance we can revive this PR?
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21706
Could someone review?
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21706
@maropu thanks! I got help on the dev mailing list as well, and I've added the sql-tests now.
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21706
Just added it to the FunctionRegistry:
```scala
scala> spark.sql("DESC function calendarinterval").show(t
```
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21706
Realized I need to add `calendarinterval` as a function as well to reproduce the behavior of `int`, `date`, `string`, etc.
example:
```scala
scala> spark.sql("select s
```
Github user dmateusp commented on the issue:
https://github.com/apache/spark/pull/21706
sure!
GitHub user dmateusp opened a pull request:
https://github.com/apache/spark/pull/21706
[SPARK-24702] Fix unable to cast to calendar interval in Spark SQL
## What changes were proposed in this pull request?
Making `calendarinterval` a parseable DataType keyword to allow casting to calendar interval in Spark SQL.