HyukjinKwon commented on a change in pull request #32660:
URL: https://github.com/apache/spark/pull/32660#discussion_r638680715
##########
File path: docs/sql-data-sources-text.md
##########
@@ -38,3 +36,35 @@ Spark SQL provides `spark.read().text("file_name")` to read a file or directory
</div>
</div>
+
+## Data Source Option
+
+Data source options of text can be set via:
+* the `.option`/`.options` methods of
+ * `DataFrameReader`
+ * `DataFrameWriter`
+ * `DataStreamReader`
+ * `DataStreamWriter`
+
+<table class="table">
+  <tr><th><b>Property Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr>
+ <tr>
+ <td><code>wholetext</code></td>
+    <td>false</td>
+ <td>If true, read each file from input path(s) as a single row.</td>
+ <td>read</td>
+ </tr>
+ <tr>
+ <td><code>lineSep</code></td>
+ <td>None</td>
+    <td>Defines the line separator that should be used for parsing. If None is set, it covers all <code>\r</code>, <code>\r\n</code> and <code>\n</code>. Maximum length is 1 character.</td>
Review comment:
Looks like we lost the description for writing: "defines the line separator that should be used for writing".
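For context, a minimal sketch of how these options are picked up on the read and write sides (placeholder app name and paths; the write-side `lineSep` is the description the comment above refers to):

```scala
import org.apache.spark.sql.SparkSession

// Placeholder app name and paths; any existing SparkSession works the same way.
val spark = SparkSession.builder().appName("text-lineSep-sketch").getOrCreate()

// Read side: lineSep defines the separator used when parsing lines.
val lines = spark.read
  .option("lineSep", "\n")
  .text("/path/to/input")

// Read side: wholetext instead loads each input file as a single row.
val files = spark.read
  .option("wholetext", true)
  .text("/path/to/input")

// Write side: lineSep also applies when writing, which is the description
// that appears to have been dropped from the table.
lines.write
  .option("lineSep", "\n")
  .text("/path/to/output")
```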