[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/22453


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-25 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r220409331
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

Let's make it short and get rid of everything orthogonal to the issue itself (I think the issue is specific to decimals). For instance, we could say:

If `true`, it writes Parquet files the way Spark 1.4 and earlier did; for instance, decimal values will be written in Apache Parquet's fixed-length byte array format, which other systems such as Apache Hive and Apache Impala use. If `false`, the newer Parquet format will be used; for instance, decimals will be written using an int-based representation. If Parquet output is intended for use with systems that do not support this newer format, set this to `true`.

Please feel free to reword it however you think is best.
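
For anyone who wants to see the two behaviors concretely, here is a minimal, illustrative Scala sketch (not part of the proposed doc text; the paths, column name, and the use of the `spark` session provided by spark-shell are assumptions). It writes the same decimal column with the flag off and on, so the resulting Parquet schemas can be compared with a schema inspection tool:

```scala
// Minimal sketch; assumes the `spark` SparkSession provided by spark-shell.
// Paths and column name are illustrative only.
val df = spark.range(5).selectExpr("CAST(id AS DECIMAL(10, 2)) AS amount")

// Standard representation: small-precision decimals are stored as INT32/INT64.
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "false")
df.write.mode("overwrite").parquet("/tmp/decimals_standard")

// Legacy representation: decimals are stored as FIXED_LEN_BYTE_ARRAY,
// which older Hive/Impala readers can consume.
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")
df.write.mode("overwrite").parquet("/tmp/decimals_legacy")
```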


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-25 Thread seancxmao
Github user seancxmao commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r220407692
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

If we must call it "legacy", I'd think of it as a legacy implementation on the Spark side rather than a legacy format on the Parquet side.
As commented in [SPARK-20297](https://issues.apache.org/jira/browse/SPARK-20297?focusedCommentId=15975559&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15975559):
> The standard doesn't say that smaller decimals have to be stored in int32/int64, it just is an option for subset of decimal types. int32 and int64 are valid representations for a subset of decimal types. fixed_len_byte_array and binary are a valid representation of any decimal type.
>
> The int32/int64 options were present in the original version of the decimal spec, they just weren't widely implemented. So its not a new/old version thing, it was just an alternative representation that many systems didn't implement.

Anyway, it really leads to confusion.

Really appreciate your suggestion @srowen to make the doc shorter; the text you suggested is more concise and to the point.

One more thing I want to discuss. After investigating the usage of this option, I found that it is related not only to decimals but also to complex types (Array, Map); see the source code linked below. Should we mention this in the doc?


https://github.com/apache/spark/blob/473d0d862de54ec1c7a8f0354fa5e06f3d66e455/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaConverter.scala#L450-L458
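
To make the complex-type effect easy to see, a short sketch along these lines (assuming the `spark` session from spark-shell; paths are hypothetical) writes array and map columns under both settings, so the different group layouts can be inspected in the resulting Parquet schemas:

```scala
// Illustrative sketch only; assumes the `spark` session from spark-shell.
val df = spark.range(3).selectExpr(
  "array(id, id + 1) AS ids",
  "map('k', id) AS kv")

// Standard (parquet-format) layout for lists and maps.
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "false")
df.write.mode("overwrite").parquet("/tmp/complex_standard")

// Legacy layout, matching what Spark 1.4 and earlier wrote.
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")
df.write.mode("overwrite").parquet("/tmp/complex_legacy")
```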



---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-25 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r220200276
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

It sounds like it isn't quite a legacy format, but one still used by Hive and even considered valid, if not current, by Parquet? I am not sure of this part, but I am basing it on Hyukjin's comment above.

I suggest somewhat shorter text like this, what do you think? Its length would be more suitable for a config doc below.

If `true`, then decimal values will be written in Apache Parquet's 
fixed-length byte array format. This is used by Spark 1.4 and earlier, and 
systems like Apache Hive and Apache Impala. If `false`, decimals will be 
written using the newer int format in Parquet. If Parquet output is intended 
for use with systems that do not support this newer format, set to `true`.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-24 Thread seancxmao
Github user seancxmao commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r220042478
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

Thanks for your suggestion. I have updated the doc in SQLConf.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-24 Thread seancxmao
Github user seancxmao commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r220038438
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

Hive and Impala do NOT support the new Parquet format yet.

* [HIVE-19069](https://jira.apache.org/jira/browse/HIVE-19069): Hive can't read int32 and int64 Parquet decimal. This issue is not resolved yet, which is consistent with the source code check by @HyukjinKwon.
* [IMPALA-5542](https://issues.apache.org/jira/browse/IMPALA-5542): Impala cannot scan Parquet decimal stored as int64_t/int32_t. This is resolved, but targeted for Impala 3.1.0, which has not been released yet; the latest release is 3.0.0 (https://impala.apache.org/downloads.html).

Presto has supported the new Parquet format since 0.182.

* [issues/7533](https://github.com/prestodb/presto/issues/7533): Improve decimal type support in the new Parquet reader. This patch is included in [0.182](https://prestodb.io/docs/current/release/release-0.182.html). Below is the excerpt:

> Fix reading decimal values in the optimized Parquet reader when they are 
backed by the int32 or int64 types.



---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-24 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219895824
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

This is another issue, since we call the option "legacy" when it isn't actually legacy on Parquet's decimal side.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-24 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219895047
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

I haven't checked closely, but I think Hive still uses binary for decimals (https://github.com/apache/hive/blob/ae008b79b5d52ed6a38875b73025a505725828eb/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java#L503-L541). From my past investigation, the thing is that Parquet supports both ways of writing decimals (https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#decimal), IIRC. They deprecated timestamps based on int96 (https://github.com/apache/parquet-format/blob/master/src/main/thrift/parquet.thrift#L782) but not decimals.




---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-24 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219892751
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

BTW, let's match the doc in `SQLConf` as well.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-24 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219827092
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
+  false
+  
+This configuration indicates whether we should use legacy Parquet 
format adopted by Spark 1.4
+and prior versions or the standard format defined in parquet-format 
specification to write
+Parquet files. This is not only related to compatibility with old 
Spark ones, but also other
+systems like Hive, Impala, Presto, etc. This is especially important 
for decimals. If this
+configuration is not enabled, decimals will be written in int-based 
format in Spark 1.5 and
+above, other systems that only support legacy decimal format (fixed 
length byte array) will not
+be able to read what Spark has written. Note other systems may have 
added support for the
+standard format in more recent versions, which will make this 
configuration unnecessary. Please
--- End diff --

Yeah, I think Hive and Impala also use newer Parquet versions/formats. Isn't it sufficient to say that older versions of Spark (<= 1.4) and older versions of Hive and Impala (do we know which?) use older Parquet formats, and that this option enables writing in that way?


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-23 Thread seancxmao
Github user seancxmao commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219729166
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,15 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
--- End diff --

OK, I will update the doc and describe the scenarios and reasons why we need this flag.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-23 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219722950
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,15 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
--- End diff --

++1 for more information actually.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-23 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219722694
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,15 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
--- End diff --

OK, that sounds important to document. But the reasoning in this thread is also useful information, I think. Instead of describing it as a legacy format (implying it's not valid Parquet or something) that is required for Hive and Impala, can we mention or point to the specific reason that would cause you to need this? The value of the documentation here is in whether it helps the user know when to set it one way or the other.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-23 Thread seancxmao
Github user seancxmao commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219721110
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,15 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
--- End diff --

I'd like to add my 2 cents. We use both Spark and Hive in our Hadoop/Spark clusters, and we have two types of tables: working tables and target tables. Working tables are only used by Spark jobs, while target tables are populated by Spark and exposed to downstream jobs, including Hive jobs. Our data engineers frequently run into this issue when they use Hive to read target tables. Finally, we decided to set spark.sql.parquet.writeLegacyFormat=true as the default for target tables and to describe this explicitly in our internal developer guide.
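
A rough sketch of that convention (assuming the `spark` session from spark-shell; the paths and the decimal column are illustrative, not our actual tables) might look like this:

```scala
// Illustrative only; assumes the `spark` session from spark-shell.
val df = spark.range(10).selectExpr("CAST(id AS DECIMAL(18, 4)) AS amount")

// Working table: read back only by Spark, so the standard format is fine.
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "false")
df.write.mode("overwrite").parquet("/tmp/working/amounts")

// Target table: read downstream by Hive, so write the legacy decimal layout.
spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")
df.write.mode("overwrite").parquet("/tmp/target/amounts")
```

The same flag can also be set cluster-wide, e.g. via `--conf spark.sql.parquet.writeLegacyFormat=true` on spark-submit or in spark-defaults.conf.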


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-23 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219719299
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,15 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
--- End diff --

This is, of course, something we should remove in the long term, but my impression is that it's better to expose it now, explicitly mention when we deprecate it later, and then remove it.

I have already had to argue a bit (for instance in SPARK-20297) to explain how to work around this and why it behaves this way. I was thinking it's better to document this and at least reduce such overhead.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-23 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219719166
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,15 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
--- End diff --

@srowen, actually, this configuration is specifically related to compatibility with other systems like Impala (not only old Spark versions), where decimals are written in a fixed-length binary format (nowadays Spark writes them int-based). If this configuration is not enabled, those systems are unable to read what Spark wrote.

Given https://stackoverflow.com/questions/44279870/why-cant-impala-read-parquet-files-after-spark-sqls-write and JIRAs like [SPARK-20297](https://issues.apache.org/jira/browse/SPARK-20297), I think this configuration is rather important. I even expected more documentation about this configuration in the first place.

Personally, I have been thinking it would be better to keep this configuration after 3.0 as well, for better compatibility.



---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-23 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/22453#discussion_r219717918
  
--- Diff: docs/sql-programming-guide.md ---
@@ -1002,6 +1002,15 @@ Configuration of Parquet can be done using the 
`setConf` method on `SparkSession
 
   
 
+
+  spark.sql.parquet.writeLegacyFormat
--- End diff --

This should go with the other Parquet properties if anything, but this one is so old that I don't think it's worth documenting. It shouldn't be used today.


---




[GitHub] spark pull request #22453: [SPARK-20937][DOCS] Describe spark.sql.parquet.wr...

2018-09-18 Thread seancxmao
GitHub user seancxmao opened a pull request:

https://github.com/apache/spark/pull/22453

[SPARK-20937][DOCS] Describe spark.sql.parquet.writeLegacyFormat property 
in Spark SQL, DataFrames and Datasets Guide

## What changes were proposed in this pull request?
Describe spark.sql.parquet.writeLegacyFormat property in Spark SQL, 
DataFrames and Datasets Guide.

## How was this patch tested?
N/A

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/seancxmao/spark SPARK-20937

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/22453.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #22453


commit 3af33a31f528059b5f4a66e8ba10bf945eb6fa53
Author: seancxmao 
Date:   2018-09-18T14:32:18Z

[SPARK-20937][DOCS] Describe spark.sql.parquet.writeLegacyFormat property 
in Spark SQL, DataFrames and Datasets Guide




---
