curcur edited a comment on pull request #13648:
URL: https://github.com/apache/flink/pull/13648#issuecomment-714211577


   > * Visibility in the normal case: none of the fields written in `releaseView` 
   > are `volatile`. So in the normal case (`t1:release` then `t2:createReadView`), 
   > `t2` can see an inconsistent state, for example `readView == null` but 
   > `isPartialBufferCleanupRequired == false`. Right?
   >   Maybe call `releaseView()` from `createReadView()` unconditionally?
   
   That's true. In that case, how about not calling `releaseView()` during 
   downstream task cancellation at all, and calling `releaseView()` only right 
   before creating a new view?
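
   To make that concrete, here is a minimal, self-contained sketch of the 
   "release only before creation" idea (illustrative names borrowed from this 
   discussion, not the actual PR code):

```java
// Self-contained model of "release only before creation" -- illustrative names,
// NOT the actual Flink classes from this PR. All release-related writes happen
// under the same lock, in the same netty thread that installs the new view, so
// these fields need no extra visibility guarantees for the release/re-create
// hand-off.
class SubpartitionModel {
    private final Object buffers = new Object(); // stands in for the buffer-queue lock

    private ReadViewModel readView;
    private boolean isPartialBufferCleanupRequired;
    private int sequenceNumber;

    ReadViewModel createReadView() {
        synchronized (buffers) {
            releaseView();                  // unconditional release-before-create
            readView = new ReadViewModel();
            return readView;
        }
    }

    private void releaseView() {
        if (readView != null) {
            readView.releaseAllResources();
            readView = null;
            isPartialBufferCleanupRequired = true; // new reader must skip the partial record
            sequenceNumber = 0;                    // restart numbering for the new reader
        }
    }

    static class ReadViewModel {
        void releaseAllResources() {
            // the release-once guard on the view would live here
        }
    }
}
```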
   
   > * Overwrites when release is slow: won't `t1` overwrite changes to 
   > `PipelinedSubpartition` already made by `t2`? For example, reset 
   > `sequenceNumber` after `t2` has sent some data?
   >   Maybe `PipelinedSubpartition.readerView` should be an `AtomicReference`, 
   > and then we can guard `PipelinedApproximateSubpartition.releaseView()` by a 
   > CAS on it?
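
   (My reading of that suggestion, as a self-contained sketch with made-up names 
   rather than a proposal for the actual code, would be roughly:)

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the suggested CAS guard -- illustrative names, not the PR code.
// A release only takes effect if it swaps out the exact view instance it
// started from, so a slow releasing thread cannot clobber state that the
// re-creating thread has already rewritten.
class CasGuardModel {
    private final AtomicReference<Object> readerView = new AtomicReference<>();
    private boolean isPartialBufferCleanupRequired;
    private int sequenceNumber;

    void releaseView(Object expectedView) {
        if (readerView.compareAndSet(expectedView, null)) {
            // Only the thread that won the CAS resets the subpartition state.
            isPartialBufferCleanupRequired = true;
            sequenceNumber = 0;
        }
    }

    Object createReadView() {
        Object newView = new Object();
        readerView.set(newView);
        return newView;
    }
}
```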
   
   I think this is the same question I answered in the write-up. 
   In short, that overwrite is not possible, because a view can only be released 
   once, and this is guarded by the view's release flag; details quoted below. 
   
   - What if netty thread1 releases the view after netty thread2 recreates the 
   view?
   Thread2 releases the view that thread1 holds a reference to before creating a 
   new view. Thread1 cannot release the old view (through its view reference) 
   again afterwards, since a view can only be released once.
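
   The guard I mean is the usual release-once flag on the view itself, roughly 
   like this (a sketch, not the exact Flink implementation):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the release-once guard on the view -- illustrative, not the exact
// Flink code. The first releaser flips the flag and does the cleanup exactly
// once; any later release through a stale reference falls through as a no-op,
// so thread1 cannot undo what thread2 has already rebuilt.
class ViewReleaseOnceModel {
    private final AtomicBoolean isReleased = new AtomicBoolean(false);

    void releaseAllResources() {
        if (isReleased.compareAndSet(false, true)) {
            // notify the parent subpartition, free buffers, etc. -- runs at most once
        }
        // second and later calls do nothing
    }
}
```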
   
   **And if we only release before creation, this whole threading interaction 
   model is greatly simplified: only one netty thread can ever release the 
   view.**
   
   I don't see any risk that prevents us from doing this.

