Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/22593#discussion_r221497310
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala ---
@@ -46,14 +46,16 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
   /**
    * Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink.
-   *   - `OutputMode.Append()`: only the new rows in the streaming DataFrame/Dataset will be
-   *                            written to the sink
-   *   - `OutputMode.Complete()`: all the rows in the streaming DataFrame/Dataset will be written
-   *                              to the sink every time these is some updates
-   *   - `OutputMode.Update()`: only the rows that were updated in the streaming DataFrame/Dataset
+   * <ul>
+   * <li> `OutputMode.Append()`: only the new rows in the streaming DataFrame/Dataset will be
+   * written to the sink.</li>
--- End diff ---
I would just format this similarly to
https://github.com/apache/spark/blob/e06da95cd9423f55cdb154a2778b0bddf7be984c/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala#L338-L366
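For context, the scaladoc list being reformatted here describes the three output modes a streaming query can use. A minimal sketch of selecting one of them on `DataStreamWriter` (assuming a local `SparkSession` and Spark's built-in `rate` source and `console` sink; the object and app names are made up for illustration):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.OutputMode

object OutputModeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("output-mode-sketch") // hypothetical app name
      .getOrCreate()
    import spark.implicits._

    // The `rate` source generates (timestamp, value) rows, handy for demos.
    val counts = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "1")
      .load()
      .groupBy($"value" % 10)
      .count()

    // Complete mode: the full aggregation result is re-emitted on every
    // trigger. Append() or Update() would be passed here instead for the
    // other two behaviors documented in the scaladoc above.
    val query = counts.writeStream
      .outputMode(OutputMode.Complete())
      .format("console")
      .start()

    query.awaitTermination(5000) // run briefly, then shut down
    spark.stop()
  }
}
```

Note that `Append` mode would be rejected for this query, since an unwindowed aggregation without a watermark can never finalize rows; that interaction is exactly why the mode descriptions in this scaladoc matter.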
---