vanzin commented on a change in pull request #26108: [SPARK-26154][SS]
Streaming left/right outer join should not return outer nulls for already
matched rows
URL: https://github.com/apache/spark/pull/26108#discussion_r339223676
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/SymmetricHashJoinStateManager.scala
##########
@@ -435,9 +541,10 @@ class SymmetricHashJoinStateManager(
}
/** Put new value for key at the given index */
- def put(key: UnsafeRow, valueIndex: Long, value: UnsafeRow): Unit = {
+ def put(key: UnsafeRow, valueIndex: Long, value: UnsafeRow, matched: Boolean): Unit = {
val keyWithIndex = keyWithIndexRow(key, valueIndex)
- stateStore.put(keyWithIndex, value)
+ val valueWithMatched = valueRowConverter.convertToValueRow(value, matched)
Review comment:
So, while replying to your comment, this came to mind.
The way I read this code, when you start from a v1 state store, you'll also
be writing back v1 data.
Shouldn't you write the data out in the new format, so that the problem is
fixed going forward once you restart the app with the fixed version of Spark?
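For illustration, here is a minimal, self-contained Scala sketch of the
"upgrade on write" idea the question points at. All names here (ValueCodec,
ValueRow, the byte layouts) are hypothetical, not Spark's actual
SymmetricHashJoinStateManager API: rows read from a v1 store get a default
matched = false, but every write is encoded in the v2 layout.

    // Hypothetical codec, not Spark's real API: decode either format,
    // but always encode in the new (v2) layout.
    object ValueCodec {
      final case class ValueRow(value: String, matched: Boolean)

      // v1 payloads carry no flag, so assume matched = false on read.
      def decodeV1(bytes: Array[Byte]): ValueRow =
        ValueRow(new String(bytes, "UTF-8"), matched = false)

      // v2 payloads append a one-byte matched flag after the value.
      def decodeV2(bytes: Array[Byte]): ValueRow =
        ValueRow(new String(bytes.init, "UTF-8"), matched = bytes.last == 1)

      // Writes never emit v1: even a row that originated in a v1 store
      // goes back out with the matched flag attached.
      def encode(row: ValueRow): Array[Byte] =
        row.value.getBytes("UTF-8") :+ (if (row.matched) 1.toByte else 0.toByte)
    }

With a codec along these lines, the put shown in the diff would route every
write through the new-format encoder regardless of which format the
checkpoint was created with, which is what fixing the problem going forward
would require.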