GitHub user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/11804#discussion_r56925645
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/Sink.scala ---
@@ -17,31 +17,19 @@
package org.apache.spark.sql.execution.streaming
+import org.apache.spark.sql.DataFrame
+
/**
- * An interface for systems that can collect the results of a streaming query.
- *
- * When new data is produced by a query, a [[Sink]] must be able to transactionally collect the
- * data and update the [[Offset]]. In the case of a failure, the sink will be recreated
- * and must be able to return the [[Offset]] for all of the data that is made durable.
- * This contract allows Spark to process data with exactly-once semantics, even in the case
- * of failures that require the computation to be restarted.
+ * An interface for systems that can collect the results of a streaming query. In order to preserve
+ * exactly once semantics a sink must be idempotent in the face of multiple attempts to add the same
--- End diff --
nit: exactly once semantics <comma>
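For context, a minimal sketch of what "idempotent in the face of multiple attempts to add the same" batch means for an implementation. This assumes the trait exposes an `addBatch(batchId: Long, data: DataFrame)` method; the `IdempotentMemorySink` name and its batch-id bookkeeping are purely illustrative and not part of this PR:

```scala
import scala.collection.mutable.ArrayBuffer

import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.execution.streaming.Sink

// Illustrative sink that stays idempotent under replays: it remembers the
// highest batch id it has already committed and silently skips any batch
// that is offered again after a failure and restart.
class IdempotentMemorySink extends Sink {
  // Highest batch id whose data has already been committed to this sink.
  private var latestBatchId: Option[Long] = None
  // Committed batches, kept in memory for simplicity.
  private val batches = ArrayBuffer.empty[Array[Row]]

  override def addBatch(batchId: Long, data: DataFrame): Unit = synchronized {
    if (latestBatchId.exists(_ >= batchId)) {
      // A batch we already committed is being replayed after a failure;
      // doing nothing here is what keeps the results exactly-once.
    } else {
      batches += data.collect()
      latestBatchId = Some(batchId)
    }
  }
}
```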