HeartSaVioR edited a comment on pull request #28707:
URL: https://github.com/apache/spark/pull/28707#issuecomment-639873447
Sorry, my comment was edited so you may have missed the content, but it is
also a sort of pointing out of "pinpointing" - do you think your approach
works with other st
HeartSaVioR edited a comment on pull request #28707:
URL: https://github.com/apache/spark/pull/28707#issuecomment-639239790
My alternative with wrapping state store is something like below:
```
class RowValidatingStateStore(
    underlying: StateStore,
    keyType: Seq[
```
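The snippet above is cut off, so here is a minimal, self-contained sketch of the wrapping idea. The `StateStore` trait, `InMemoryStateStore`, and the `validate` parameter below are simplified illustrative stand-ins, not Spark's real developer API:

```scala
// Simplified stand-in for a state store interface (not Spark's real trait).
trait StateStore {
  def get(key: String): Option[String]
  def put(key: String, value: String): Unit
}

// Trivial backing store used only to make the sketch runnable.
class InMemoryStateStore extends StateStore {
  private val data = scala.collection.mutable.Map.empty[String, String]
  def get(key: String): Option[String] = data.get(key)
  def put(key: String, value: String): Unit = data.update(key, value)
}

// Wrapper that validates the format of the first row it reads, then
// delegates every operation to the underlying store unchanged.
class RowValidatingStateStore(
    underlying: StateStore,
    validate: String => Unit) extends StateStore {
  private var validated = false

  private def validateOnce(value: String): Unit =
    if (!validated) { validate(value); validated = true }

  def get(key: String): Option[String] = {
    val result = underlying.get(key)
    result.foreach(validateOnce)  // fail fast on the first row read
    result
  }

  def put(key: String, value: String): Unit = underlying.put(key, value)
}
```

The point of the wrapper is that validation cost is paid once per store instance, on the first read, rather than on every row.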
HeartSaVioR edited a comment on pull request #28707:
URL: https://github.com/apache/spark/pull/28707#issuecomment-639200645
> @HeartSaVioR After taking a further look: instead of dealing with the
iterator, how about adding the invalidation for all state store operations
in StateStoreProvid
HeartSaVioR edited a comment on pull request #28707:
URL: https://github.com/apache/spark/pull/28707#issuecomment-638520229
Will this be included in Spark 3.0.0? If this is to unblock SPARK-28067 so
it can be included in Spark 3.0.0, then it's OK to consider this first, but
if this plans to go to
HeartSaVioR edited a comment on pull request #28707:
URL: https://github.com/apache/spark/pull/28707#issuecomment-637926400
And personally I'd rather do the check in StateStore, with the additional
overhead of reading "a" row up front, to achieve the same in all stateful
operations.
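A minimal sketch of that alternative, reading a single row when the store is loaded so a format mismatch surfaces immediately. The `Schema` and `EagerlyValidatedStore` names below are hypothetical, not Spark's API:

```scala
// Illustrative schema: just a field count, standing in for a real row schema.
final case class Schema(numFields: Int)

// On construction, read "a" row and check it against the expected schema,
// so a corrupted or incompatible state format fails at load time rather
// than deep inside a stateful operation.
class EagerlyValidatedStore(rows: Seq[Array[String]], expected: Schema) {
  rows.headOption.foreach { row =>
    require(row.length == expected.numFields,
      s"state row has ${row.length} fields, expected ${expected.numFields}")
  }

  def iterator: Iterator[Array[String]] = rows.iterator
}
```

The trade-off is exactly the one described above: one extra row read per store load, in exchange for covering all stateful operations with a single check.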