eason-yuchen-liu commented on code in PR #46740:
URL: https://github.com/apache/spark/pull/46740#discussion_r1621544776


##########
python/pyspark/sql/tests/streaming/test_streaming.py:
##########
@@ -392,6 +392,31 @@ def test_streaming_with_temporary_view(self):
                 set([Row(value="view_a"), Row(value="view_b"), 
Row(value="view_c")]), set(result)
             )
 
+    def test_streaming_drop_duplicate_within_watermark(self):
+        """
+        This verifies dropDuplicatesWithinWatermark works with a streaming DataFrame.
+        """
+        user_schema = StructType().add("time", TimestampType()).add("id", "integer")
+        df = (
+            self.spark.readStream.option("sep", ";")
+            .schema(user_schema)
+            .csv("python/test_support/sql/streaming/time")
+        )
+        q1 = (
+            df.withWatermark("time", "2 seconds")
+            .dropDuplicatesWithinWatermark(["id"])
+            .writeStream.outputMode("update")
+            .format("memory")
+            .queryName("q1")
+            .trigger(availableNow=True)
+            .start()
+        )
+        self.assertTrue(q1.isActive)
+        time.sleep(20)
+        q1.stop()
+        result = self.spark.sql("SELECT * FROM q1").collect()
+        self.assertTrue(len(result) >= 6 and len(result) <= 9)

Review Comment:
   After talking offline, we agreed that this test case only needs to verify that the code works in PySpark, and the expected result range is theoretically correct, so we can keep it as is.
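
   For readers unfamiliar with the API under test, here is a minimal standalone sketch of the same pattern outside the test suite. The SparkSession setup, input path, and console sink below are assumptions for illustration only, not part of this PR; dropDuplicatesWithinWatermark drops rows that repeat a key within the event-time watermark delay, keeping state only as long as the watermark allows.

   # Hedged sketch, not part of the PR: same dropDuplicatesWithinWatermark
   # pattern as the test above, with a hypothetical input directory and sink.
   from pyspark.sql import SparkSession
   from pyspark.sql.types import StructType, TimestampType

   spark = SparkSession.builder.appName("dedup_within_watermark_sketch").getOrCreate()

   schema = StructType().add("time", TimestampType()).add("id", "integer")

   # Stream semicolon-separated CSV files; the path is a placeholder.
   events = (
       spark.readStream.option("sep", ";")
       .schema(schema)
       .csv("/tmp/streaming/time")
   )

   query = (
       events.withWatermark("time", "2 seconds")       # event-time watermark
       .dropDuplicatesWithinWatermark(["id"])          # drop repeated ids seen within the watermark delay
       .writeStream.outputMode("update")
       .format("console")                              # print deduplicated rows to stdout
       .trigger(availableNow=True)
       .start()
   )
   query.awaitTermination()
   spark.stop()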



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
