[
https://issues.apache.org/jira/browse/FLINK-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093378#comment-16093378
]
ASF GitHub Bot commented on FLINK-7231:
---------------------------------------
GitHub user StephanEwen opened a pull request:
https://github.com/apache/flink/pull/4370
[FLINK-7231] [distr. coordination] Fix slot release affecting
SlotSharingGroup cleanup
**This is based on #4364, so only the last commit is relevant**
## What is the purpose of the change
This fixes [FLINK-7231](https://issues.apache.org/jira/browse/FLINK-7231) -
a bug that made restarts unstable in the presence of certain combinations of
slot sharing, TaskManager losses, and restart strategies.
## Brief change log
- Minimal adjustment in `ExecutionGraph`: On failed resource acquisition,
release the slots (and with them the sharing group assignments) before
triggering recovery. Before this change, both happened
concurrently/asynchronously, and recovery could overtake the slot release.
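The ordering fix can be illustrated with a small sketch. This is not the actual `ExecutionGraph` code; `releaseSlots()` and `tryRestart()` are hypothetical stand-ins for the slot-release and recovery steps, used only to show why chaining recovery strictly after the release (rather than firing both asynchronously) avoids the race:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class SlotReleaseOrdering {

    // Tracks whether the sharing-group assignments have been cleared.
    static final AtomicBoolean slotsReleased = new AtomicBoolean(false);

    // Hypothetical stand-in for releasing slots and clearing the
    // SlotSharingGroup assignments.
    static void releaseSlots() {
        slotsReleased.set(true);
    }

    // Hypothetical stand-in for the restart attempt: it can only
    // succeed once the slots have actually been released.
    static boolean tryRestart() {
        return slotsReleased.get();
    }

    public static void main(String[] args) {
        // The fix in spirit: sequence recovery after the slot release
        // via a callback, instead of triggering both concurrently and
        // letting recovery race ahead of the release.
        boolean restarted = CompletableFuture
                .runAsync(SlotReleaseOrdering::releaseSlots)
                .thenApply(ignored -> tryRestart())
                .join();
        System.out.println("restarted=" + restarted);
    }
}
```

If the two steps were instead submitted as independent asynchronous actions, `tryRestart()` could observe `slotsReleased == false`, which corresponds to the `SlotSharingGroup cannot clear task assignment` failures described below.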
## Verifying this change
This change adds additional unit tests:
- `ExecutionGraphRestartTest#testRestartWithEagerSchedulingAndSlotSharing()`
- `ExecutionGraphRestartTest#testRestartWithSlotSharingAndNotEnoughResources()`
The effect (and fix) can also be observed by repeatedly trying the
following:
- Create a streaming job with multiple JobVertices
- Set the restart strategy to fixed-delay with zero delay
- Run the job
- Repeat: Kill a TaskManager and bring up a recovery TaskManager. There is a
good chance that various restart attempts fail with
`java.lang.IllegalStateException: SlotSharingGroup cannot clear task
assignment, group still has allocated resources.`, meaning the job takes a
long time before actually recovering.
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): **no**
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: **no**
- The serializers: **no**
- The runtime per-record code paths (performance sensitive): **no**
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Yarn/Mesos, ZooKeeper: **yes**
## Documentation
- Does this pull request introduce a new feature? **no**
- If yes, how is the feature documented? **not applicable**
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/StephanEwen/incubator-flink sharing_group_bug
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/flink/pull/4370.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #4370
----
commit f055645b3d905ea212b11eb570926d46447f3f52
Author: zjureel <[email protected]>
Date: 2017-07-18T17:27:56Z
[FLINK-6665] [FLINK-6667] [distributed coordination] Use a callback and a
ScheduledExecutor for ExecutionGraph restarts
Initial work by [email protected], improved by [email protected].
commit 11e2144892a57c58ffe919ac228c702595f34025
Author: Stephan Ewen <[email protected]>
Date: 2017-07-18T17:49:56Z
[FLINK-7216] [distr. coordination] Guard against concurrent global failover
commit 16e9e133e0ed9dfba2d177c8f789f1b215a7759e
Author: Stephan Ewen <[email protected]>
Date: 2017-07-19T08:24:52Z
[FLINK-7231] [distr. coordination] Fix slot release affecting
SlotSharingGroup cleanup
----
> SlotSharingGroups are not always released in time for new restarts
> ------------------------------------------------------------------
>
> Key: FLINK-7231
> URL: https://issues.apache.org/jira/browse/FLINK-7231
> Project: Flink
> Issue Type: Bug
> Components: Distributed Coordination
> Affects Versions: 1.3.1
> Reporter: Stephan Ewen
> Assignee: Stephan Ewen
> Fix For: 1.4.0, 1.3.2
>
>
> In the case where there are not enough resources to schedule the streaming
> program, a race condition can lead to a sequence of the following errors:
> {code}
> java.lang.IllegalStateException: SlotSharingGroup cannot clear task
> assignment, group still has allocated resources.
> {code}
> This eventually recovers, but may involve many fast restart attempts before
> doing so.
> The root cause is that slots are not cleared before the next restart attempt.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)