HyukjinKwon commented on a change in pull request #32546:
URL: https://github.com/apache/spark/pull/32546#discussion_r635972738



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
##########
@@ -881,14 +881,10 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
    *   format("orc").save(path)
    * }}}
    *
-   * You can set the following ORC-specific option(s) for writing ORC files:
-   * <ul>
-   * <li>`compression` (default is the value specified in `spark.sql.orc.compression.codec`):
-   * compression codec to use when saving to file. This can be one of the known case-insensitive
-   * shorten names(`none`, `snappy`, `zlib`, `lzo`, and `zstd`). This will override
-   * `orc.compress` and `spark.sql.orc.compression.codec`. If `orc.compress` is given,
-   * it overrides `spark.sql.orc.compression.codec`.</li>
-   * </ul>
+   * ORC-specific option(s) for writing ORC files can be found in
+   * <a href=
+   *   "https://spark.apache.org/docs/latest/sql-data-sources-orc.html#data-source-option">
+   *   Data Source Option</a> in the version you use.

Review comment:
       Either way is fine. The point is that we should keep it consistent.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
