cloud-fan commented on a change in pull request #29724:
URL: https://github.com/apache/spark/pull/29724#discussion_r486837991



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinExec.scala
##########
@@ -56,8 +56,8 @@ import org.apache.spark.util.{CompletionIterator, SerializableConfiguration}
  * - Apply the optional condition to filter the joined rows as the final output.
  *
  * If a timestamp column with event time watermark is present in the join keys or in the input
- * data, then the it uses the watermark figure out which rows in the buffer will not join with
- * and the new data, and therefore can be discarded. Depending on the provided query conditions, we
+ * data, then it uses the watermark figure out which rows in the buffer will not join with

Review comment:
       `uses the watermark to figure out`?

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinExec.scala
##########
@@ -56,8 +56,8 @@ import org.apache.spark.util.{CompletionIterator, SerializableConfiguration}
  * - Apply the optional condition to filter the joined rows as the final output.
  *
  * If a timestamp column with event time watermark is present in the join keys or in the input
- * data, then the it uses the watermark figure out which rows in the buffer will not join with
- * and the new data, and therefore can be discarded. Depending on the provided query conditions, we
+ * data, then it uses the watermark figure out which rows in the buffer will not join with

Review comment:
       `uses the watermark to figure out`
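
       For context on the quoted passage, here is a minimal sketch (not part of this PR; the rate sources and the impression/click column names are purely illustrative) of a stream-stream join whose event-time watermarks and time-range condition give `StreamingSymmetricHashJoinExec` the information it needs to discard buffered rows that can no longer match:

       ```scala
       import org.apache.spark.sql.SparkSession
       import org.apache.spark.sql.functions.expr

       val spark = SparkSession.builder().appName("watermark-join-sketch").getOrCreate()

       // Illustrative streaming sources; any event-time sources would do.
       val impressions = spark.readStream.format("rate").load()
         .selectExpr("value AS impressionAdId", "timestamp AS impressionTime")
       val clicks = spark.readStream.format("rate").load()
         .selectExpr("value AS clickAdId", "timestamp AS clickTime")

       // The two watermarks bound how late each side may arrive; combined with the
       // time-range join condition they determine which buffered impressions/clicks
       // can never join with future input and can therefore be removed from state.
       val joined = impressions
         .withWatermark("impressionTime", "2 hours")
         .join(
           clicks.withWatermark("clickTime", "3 hours"),
           expr("""
             clickAdId = impressionAdId AND
             clickTime >= impressionTime AND
             clickTime <= impressionTime + interval 1 hour
           """))
       ```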

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinExec.scala
##########
@@ -246,13 +244,14 @@ case class StreamingSymmetricHashJoinExec(
 
     //  Join one side input using the other side's buffered/state rows. Here is how it is done.
    //
-    //  - `leftJoiner.joinWith(rightJoiner)` generates all rows from matching new left input with

Review comment:
       I'm not familiar with this part, cc @zsxwing  @HeartSaVioR @xuanyuanking 
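
       Since the quoted comment describes the core join loop, a deliberately simplified, self-contained Scala sketch of that symmetric pattern may help: each side's new rows are probed against the other side's buffered state and are then buffered themselves. This is not the actual `OneSideHashJoiner`/`joinWith` code; the `Int` keys and in-memory maps are stand-ins for the keyed state store.

       ```scala
       object SymmetricHashJoinSketch {
         type Key = Int
         type Row = (Key, String)
         type State = scala.collection.mutable.Map[Key, Vector[Row]]

         // Probe the other side's buffered rows with this side's new rows, emitting all
         // matches, then buffer the new rows so future input from the other side can match them.
         def storeAndJoin(newRows: Seq[Row], ownState: State, otherState: State): Seq[(Row, Row)] =
           newRows.flatMap { row =>
             val matches = otherState.getOrElse(row._1, Vector.empty).map(other => (row, other))
             ownState.update(row._1, ownState.getOrElse(row._1, Vector.empty) :+ row)
             matches
           }

         def main(args: Array[String]): Unit = {
           val leftState: State = scala.collection.mutable.Map.empty
           val rightState: State = scala.collection.mutable.Map.empty

           // One micro-batch: new left rows join against buffered right rows, then new right
           // rows join against buffered left rows (which now include this batch's left rows),
           // so every match is produced exactly once.
           val newLeft = Seq((1, "L1"), (2, "L2"))
           val newRight = Seq((1, "R1"), (3, "R3"))
           val output = storeAndJoin(newLeft, leftState, rightState) ++
             storeAndJoin(newRight, rightState, leftState)
           output.foreach(println) // prints ((1,R1),(1,L1))
         }
       }
       ```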




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
