HyukjinKwon commented on code in PR #42462:
URL: https://github.com/apache/spark/pull/42462#discussion_r1300848314
##########
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala:
##########
@@ -259,6 +259,27 @@ final class DataStreamReader private[sql](sparkSession: SparkSession) extends Lo
*/
def csv(path: String): DataFrame = format("csv").load(path)
+ /**
+ * Loads an XML file stream and returns the result as a `DataFrame`.
+ *
+ * This function will go through the input once to determine the input schema if `inferSchema`
+ * is enabled. To avoid going through the entire data once, disable the `inferSchema` option or
+ * specify the schema explicitly using `schema`.
+ *
+ * You can set the following option(s):
+ * <ul>
+ * <li>`maxFilesPerTrigger` (default: no max limit): sets the maximum number of new files to be
+ * considered in every trigger.</li>
+ * </ul>
+ *
+ * You can find the XML-specific options for reading XML file streams in
+ * <a href="https://spark.apache.org/docs/latest/sql-data-sources-xml.html#data-source-option">
+ * Data Source Option</a> in the version you use.
+ *
+ * @since 4.0.0
+ */
+ def xml(path: String): DataFrame = format("xml").load(path)
Review Comment:
ditto for https://github.com/apache/spark/pull/42462/files#r1300848164
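For context, a minimal sketch of how the streaming `xml` method added in this diff could be used once it lands in Spark 4.0.0. The schema fields, the input path, and the `rowTag` value below are illustrative assumptions, not part of the PR:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val spark = SparkSession.builder().appName("xml-stream-example").getOrCreate()

// Supplying the schema explicitly avoids the extra pass over the input
// that `inferSchema` would otherwise make, as noted in the scaladoc above.
val schema = StructType(Seq(
  StructField("id", StringType),
  StructField("name", StringType)))

val df = spark.readStream
  .schema(schema)
  .option("rowTag", "record")        // XML element treated as one row (assumed tag name)
  .option("maxFilesPerTrigger", 10)  // cap new files considered per trigger
  .xml("/path/to/xml/dir")           // the method added in this PR

val query = df.writeStream.format("console").start()
```

This mirrors the existing `csv(path: String)` entry point shown just above the new method: both are thin wrappers that call `format(...)` followed by `load(path)`.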
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]