zhijiangW opened a new pull request #12912:
URL: https://github.com/apache/flink/pull/12912
## What is the purpose of the change
Assuming two remote channels are registered as listeners in the LocalBufferPool, the deadlock
happens as follows (see the sketch after the steps):
1. While the Canceler thread is calling `ch1#releaseAllResources`, it
occupies ch1's bufferQueue lock and tries to call `ch2#notifyBufferAvailable`.
2. While the task thread is exiting and calling `CachedBufferStorage#close`, it might
release exclusive buffers for ch2. Then ch2 occupies its own bufferQueue lock
and tries to call `ch1#notifyBufferAvailable`.
3. ch1 and ch2 each hold their own bufferQueue lock and wait for the other
side's bufferQueue lock, causing a deadlock.
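
The following is a minimal, self-contained sketch of the lock cycle described above. The `Channel` class and its fields are simplified, hypothetical stand-ins for the real `RemoteInputChannel`/`LocalBufferPool` interaction, not the actual Flink code, and whether a given run actually hits the bad interleaving depends on thread timing.

```java
// Simplified illustration of the circular wait between two channels.
public class DeadlockSketch {

    static class Channel {
        private final Object bufferQueueLock = new Object();
        private final String name;
        Channel peer; // the other channel listening on the same buffer pool

        Channel(String name) {
            this.name = name;
        }

        // Steps 1/2: releasing resources holds this channel's bufferQueue lock
        // and recycles buffers, which notifies the peer listener.
        void releaseAllResources() {
            synchronized (bufferQueueLock) {
                System.out.println(name + " holds its lock, notifying peer");
                peer.notifyBufferAvailable();
            }
        }

        // Step 3: the notification also needs this channel's lock, so two
        // concurrent releases can end up waiting on each other forever.
        void notifyBufferAvailable() {
            synchronized (bufferQueueLock) {
                System.out.println(name + " received a buffer");
            }
        }
    }

    public static void main(String[] args) {
        Channel ch1 = new Channel("ch1");
        Channel ch2 = new Channel("ch2");
        ch1.peer = ch2;
        ch2.peer = ch1;

        // Canceler thread and task thread release the two channels concurrently.
        new Thread(ch1::releaseAllResources, "canceler").start();
        new Thread(ch2::releaseAllResources, "task").start();
    }
}
```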
Regarding the solution, we can check the released state outside of the
bufferQueue lock in `RemoteInputChannel#notifyBufferAvailable` and return
immediately if the channel is already released.
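
Below is a hedged sketch of that idea, again on the simplified stand-in above rather than the real `RemoteInputChannel`: a volatile released flag is checked before entering the synchronized block, so a notification on a channel that is being released returns early instead of joining the lock cycle. The field and method names are illustrative only.

```java
// Sketch of the early-return fix on the simplified channel model.
public class FixedChannelSketch {

    private final Object bufferQueueLock = new Object();
    private volatile boolean isReleased;

    public void releaseAllResources() {
        // Mark the channel released before taking the lock, so concurrent
        // notifications can bail out without waiting on bufferQueueLock.
        isReleased = true;
        synchronized (bufferQueueLock) {
            // recycle buffers and notify peer listeners here
        }
    }

    public void notifyBufferAvailable() {
        // Early return outside the lock breaks the circular wait.
        if (isReleased) {
            return;
        }
        synchronized (bufferQueueLock) {
            if (isReleased) {
                return; // re-check under the lock to avoid a late hand-off
            }
            // add the buffer to this channel's queue
        }
    }
}
```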
## Brief change log
- *Check the `isReleased` state before entering the synchronized block in
`RemoteInputChannel#notifyBufferAvailable`*
## Verifying this change
Covered by the existing `StreamFaultToleranceTestBase`, which previously failed with this deadlock.
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (yes / **no**)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (yes / **no**)
- The serializers: (yes / **no** / don't know)
- The runtime per-record code paths (performance sensitive): (yes / **no**
/ don't know)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** /
don't know)
- The S3 file system connector: (yes / **no** / don't know)
## Documentation
- Does this pull request introduce a new feature? (yes / **no**)
- If yes, how is the feature documented? (**not applicable** / docs /
JavaDocs / not documented)