Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22453#discussion_r219827092
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1002,6 +1002,21 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
         </p>
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.sql.parquet.writeLegacyFormat</code></td>
    +  <td>false</td>
    +  <td>
    +    This configuration indicates whether we should use legacy Parquet format adopted by Spark 1.4
    +    and prior versions or the standard format defined in parquet-format specification to write
    +    Parquet files. This is not only related to compatibility with old Spark ones, but also other
    +    systems like Hive, Impala, Presto, etc. This is especially important for decimals. If this
    +    configuration is not enabled, decimals will be written in int-based format in Spark 1.5 and
    +    above, other systems that only support legacy decimal format (fixed length byte array) will not
    +    be able to read what Spark has written. Note other systems may have added support for the
    +    standard format in more recent versions, which will make this configuration unnecessary. Please
    --- End diff --
    
    Yeah, I think Hive and Impala also use the newer Parquet versions/format now. Isn't it sufficient to say that older versions of Spark (<= 1.4) and older versions of Hive and Impala (do we know which?) use the older Parquet format, and that this setting enables writing it that way?


---
