Github user mmolimar commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r126710311
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameFunctionsSuite.scala ---
@@ -209,6 +209,18 @@ class DataFrameFunctionsSuite extends
Github user mmolimar commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r130025210
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameFunctionsSuite.scala ---
@@ -209,6 +209,18 @@ class DataFrameFunctionsSuite extends
Github user mmolimar commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r124545289
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameFunctionsSuite.scala ---
@@ -209,6 +209,18 @@ class DataFrameFunctionsSuite extends
GitHub user mmolimar opened a pull request:
https://github.com/apache/spark/pull/18447
[SPARK-21232][SQL][SparkR][PYSPARK] New built-in SQL function - Data_Type
## What changes were proposed in this pull request?
New built-in function to get the data type of columns in SQL
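A minimal sketch of the idea, assuming the function would be exposed as ``data_type`` (hypothetical; this PR was ultimately closed). The schema-based lookup shown for contrast is existing Spark API:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(("Alice", 30)).toDF("name", "age")

// Existing way: inspect the schema programmatically.
println(df.schema("age").dataType.sql) // "INT"

// Proposed way (hypothetical, per this PR): a built-in SQL function.
// df.createOrReplaceTempView("people")
// spark.sql("SELECT data_type(age) FROM people").show()
```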
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/18447
In some SQL databases you have to query the table schema explicitly, i.e. ``select
data_type from all_tab_columns where table_name = 'my_table'`` or something like
that.
In case of the ARQ engine from
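For comparison, Spark already exposes column metadata through standard commands and the catalog API; a sketch, assuming a registered table named ``my_table``:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Standard metadata command (not part of this PR):
spark.sql("DESCRIBE TABLE my_table").show()

// Or via the catalog API:
spark.catalog.listColumns("my_table")
  .select("name", "dataType")
  .show()
```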
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/18447
@felixcheung Everything done!
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/18447
@felixcheung I think it should be fine now.
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/18447
so @felixcheung?
GitHub user mmolimar opened a pull request:
https://github.com/apache/spark/pull/22234
[SPARK-25241][SQL] Configurable empty values when reading/writing CSV files
## What changes were proposed in this pull request?
There is an option in the CSV parser to set values when we have
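Based on the PR title, a hedged sketch of the option being added, assuming it is named ``emptyValue`` (the exact defaults are not shown in this excerpt):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Reading: the string that should be treated as the empty value.
val df = spark.read
  .option("header", "true")
  .option("emptyValue", "")
  .csv("/tmp/input.csv")

// Writing: the string emitted for empty fields.
df.write
  .option("emptyValue", "\"\"")
  .csv("/tmp/output")
```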
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/22234
@MaxGekk I added what you suggested as well.
Github user mmolimar commented on a diff in the pull request:
https://github.com/apache/spark/pull/22234#discussion_r212842706
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -345,11 +345,11 @@ def text(self, paths, wholetext=False, lineSep=None):
@since(2.0
Github user mmolimar commented on a diff in the pull request:
https://github.com/apache/spark/pull/22234#discussion_r212850822
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala ---
@@ -117,6 +117,9 @@ class CSVOptions
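The hunk above is cut off; the following is a hedged sketch of the option-parsing pattern such a change would follow inside ``CSVOptions``, with the field and option names as assumptions:

```scala
// Hypothetical names, mirroring how CSVOptions reads other options;
// a stand-in map replaces the class's real parameters field.
val parameters: Map[String, String] = Map("emptyValue" -> "N/A")

val emptyValueInRead: String = parameters.getOrElse("emptyValue", "")
val emptyValueInWrite: String = parameters.getOrElse("emptyValue", "\"\"")
```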
Github user mmolimar commented on a diff in the pull request:
https://github.com/apache/spark/pull/22234#discussion_r212851409
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -457,9 +459,9 @@ def csv(self, path, schema=None, sep=None, encoding=None, quote=None, escape=Non
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/18447
Hi @HyukjinKwon
For me it's fine:
"In some SQL db you have to query explicitly the table schema, ie: select
data_type from all_tab_columns where table_name = 'my_table'or something
Github user mmolimar commented on a diff in the pull request:
https://github.com/apache/spark/pull/22234#discussion_r216337792
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVDataSource.scala ---
@@ -91,9 +91,10 @@ abstract class CSVDataSource
Github user mmolimar closed the pull request at:
https://github.com/apache/spark/pull/18447
GitHub user mmolimar opened a pull request:
https://github.com/apache/spark/pull/22383
[SPARK-25395][JavaAPI] Removing Optional Spark Java API
## What changes were proposed in this pull request?
Previous Spark versions didn't require Java 8 and an ``Optional`` Spark Java
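A sketch of the intended replacement, relying only on the standard library's ``java.util.Optional``:

```scala
import java.util.Optional

// java.util.Optional in place of Spark's own wrapper (sketch):
val maybeHome: Optional[String] = Optional.ofNullable(System.getenv("SPARK_HOME"))
println(maybeHome.orElse("<SPARK_HOME not set>"))
```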
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/22383
Done @srowen
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/22383
No problem. Done ;-)
Github user mmolimar commented on a diff in the pull request:
https://github.com/apache/spark/pull/22383#discussion_r224948273
--- Diff: project/MimaExcludes.scala ---
@@ -36,6 +36,8 @@ object MimaExcludes {
// Exclude rules for 3.0.x
lazy val v30excludes
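The exclude entries themselves are truncated above; as an illustration, a removed public class typically needs a MiMa rule of this shape (the class name here is only an example drawn from this PR's subject):

```scala
import com.typesafe.tools.mima.core._

// Illustrative only; the real entries live in the truncated hunk above.
ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.api.java.Optional")
```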
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/22383
I agree @srowen.
What do you think about reusing an implementation we already have, for example
in the Guava lib, instead of having that class in Spark?
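For reference, a sketch of the Guava class being suggested (``com.google.common.base.Optional`` is real Guava API; the usage here is illustrative):

```scala
import com.google.common.base.Optional

// Guava's own Optional, for comparison (sketch):
val home: Optional[String] = Optional.fromNullable(System.getenv("SPARK_HOME"))
println(home.or("<unset>"))
```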
Github user mmolimar commented on the issue:
https://github.com/apache/spark/pull/22383
Updated @srowen
The PR title already contains SPARK-25395; is that what you're expecting, or
another PR