dongjoon-hyun commented on a change in pull request #23742: [SPARK-26835][DOCS] Document configuration properties of Spark SQL Generic Load/Save Functions
URL: https://github.com/apache/spark/pull/23742#discussion_r254985504
 
 

 ##########
 File path: docs/sql-data-sources-load-save-functions.md
 ##########
 @@ -41,6 +41,11 @@ name (i.e., `org.apache.spark.sql.parquet`), but for built-in sources you can also use their short
 names (`json`, `parquet`, `jdbc`, `orc`, `libsvm`, `csv`, `text`). DataFrames loaded from any data
 source type can be converted into other types using this syntax.
 
+For built-in sources, the available extra options are documented in the API documentation,
+on the method corresponding to the format (For example `org.apache.spark.sql.DataFrameWriter.csv`).
+Check the documentation of methods in `org.apache.spark.sql.DataFrameReader` for extra options of
+`read` operations and `org.apache.spark.sql.DataFrameWriter` for extra options of `write` operations.
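
 To make the pointer above concrete, here is a minimal, illustrative Scala sketch (not part of the PR's diff); the input/output paths and the app name are hypothetical, while `header`, `inferSchema`, and `compression` are real option keys documented on the corresponding `DataFrameReader.csv` / `DataFrameWriter.orc` methods:

 ```scala
 import org.apache.spark.sql.SparkSession

 // Hypothetical session; in spark-shell a `spark` session already exists.
 val spark = SparkSession.builder().appName("extra-options-example").getOrCreate()

 // Extra read options are passed via DataFrameReader.option; the supported keys
 // for CSV are documented on org.apache.spark.sql.DataFrameReader.csv.
 val df = spark.read
   .format("csv")
   .option("header", "true")        // treat the first line as column names
   .option("inferSchema", "true")   // infer column types instead of using strings
   .load("examples/src/main/resources/people.csv")  // illustrative path

 // Extra write options are passed via DataFrameWriter.option; the supported keys
 // for ORC are documented on org.apache.spark.sql.DataFrameWriter.orc.
 df.write
   .format("orc")
   .option("compression", "zlib")
   .save("output/people_orc")       // illustrative path
 ```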
 
 Review comment:
   Thank you for your first contribution, @peter-gergely-horvath.
   
   BTW, the title seems misleading because it is broader (`Document configuration properties...`?) than the PR content. This PR just adds a pointer. And, as you noted in the JIRA, the added sentence doesn't cover external Parquet/ORC options either.
   
   Could you update the PR title and description to be specific about what this PR contributes?
