MaxGekk commented on a change in pull request #31564:
URL: https://github.com/apache/spark/pull/31564#discussion_r576617058



##########
File path: docs/sql-data-sources-parquet.md
##########
@@ -329,4 +365,54 @@ Configuration of Parquet can be done using the `setConf` 
method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td>spark.sql.legacy.parquet.datetimeRebaseModeInRead</td>
+  <td><code>EXCEPTION</code></td>
+  <td>The rebasing mode for the values of the <code>DATE</code>, <code>TIMESTAMP_MILLIS</code>, and <code>TIMESTAMP_MICROS</code> logical types from the Julian to the Proleptic Gregorian calendar:<br>
+    <ul>
+      <li><code>EXCEPTION</code>: Spark will fail the read if it sees ancient dates/timestamps that are ambiguous between the two calendars.</li>
+      <li><code>CORRECTED</code>: Spark will not rebase and will read the dates/timestamps as they are.</li>
+      <li><code>LEGACY</code>: Spark will rebase dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar to the Proleptic Gregorian calendar when reading Parquet files.</li>
+    </ul>
+    This config is only effective if the writer info (such as Spark or Hive) of the Parquet files is unknown.
+  </td>
+  </td>
+  <td>3.0.0</td>
+</tr>
+<tr>
+  <td>spark.sql.legacy.parquet.datetimeRebaseModeInWrite</td>

Review comment:
       In this PR https://github.com/apache/spark/pull/31571, I made `spark.sql.legacy.replaceDatabricksSparkAvro.enabled` non-internal since it has already been documented publicly.
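
For context, the read-side mode described in the table can be set per session. A minimal sketch (config fragment only; the mode value `CORRECTED` is chosen for illustration from the options listed above):

```sql
-- Sketch: skip rebasing for this session when reading Parquet files
-- whose writer info is unknown (see the option list in the table above).
SET spark.sql.legacy.parquet.datetimeRebaseModeInRead=CORRECTED;
```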




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


