HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1160967676
##########
python/pyspark/sql/dataframe.py:
##########
@@ -3928,6 +3928,71 @@ def dropDuplicates(self, subset: Optional[List[str]] = None) -> "DataFrame":
         jdf = self._jdf.dropDuplicates(self._jseq(subset))
         return DataFrame(jdf, self.sparkSession)

+    def dropDuplicatesWithinWatermark(self, subset: Optional[List[str]] = None) -> "DataFrame":
+        """Return a new :class:`DataFrame` with duplicate rows removed,
+        optionally only considering certain columns, within watermark.
+
+        For a static batch :class:`DataFrame`, it just drops duplicate rows. For a streaming
+        :class:`DataFrame`, this will keep all data across triggers as intermediate state to drop
+        duplicated rows. The state will be kept to guarantee the semantic, "Events are deduplicated
+        as long as the time distance of the earliest and latest events is smaller than the delay
+        threshold of watermark." The watermark for the input :class:`DataFrame` must be set via
+        :func:`withWatermark`. Users are encouraged to set the delay threshold of watermark longer
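
For context, a minimal usage sketch, assuming the method lands with the signature shown in this diff (the rate source, the "value" key column, and the 10-minute threshold are illustrative, not from the PR):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Rate source emits (timestamp, value); treat "timestamp" as event time.
    events = spark.readStream.format("rate").load()

    # The watermark must be set before calling dropDuplicatesWithinWatermark;
    # rows sharing "value" are then deduplicated as long as their event times
    # lie within 10 minutes of each other.
    deduped = (
        events.withWatermark("timestamp", "10 minutes")
        .dropDuplicatesWithinWatermark(["value"])
    )

    query = deduped.writeStream.format("console").outputMode("append").start()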
Review Comment:
If we want to support this API in batch queries, I think we have to implement
the same behavior, not just forward to dropDuplicates(). But that's also very
odd, because we tell users that watermark is a no-op in batch queries, and now
we would have to read the delay threshold from withWatermark. I'd say it
conflicts with the base concept.
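
A small batch sketch of that conflict (illustrative data, assuming the forwarding behavior described above): two duplicates 30 minutes apart would survive a 10-minute threshold under the streaming semantics, but forwarding to dropDuplicates() collapses them:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()

    # Two rows with the same id, 30 minutes apart.
    batch = spark.createDataFrame(
        [("a", "2023-04-01 10:00:00"), ("a", "2023-04-01 10:30:00")],
        ["id", "ts"],
    ).withColumn("ts", col("ts").cast("timestamp"))

    # withWatermark is documented as a no-op for batch queries, so the
    # 10-minute delay threshold is invisible to the batch plan.
    wm = batch.withWatermark("ts", "10 minutes")

    # Forwarding to dropDuplicates() keeps one row; the streaming
    # semantics (dedup only within the delay threshold) would keep both.
    print(wm.dropDuplicates(["id"]).count())  # 1, not 2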