[ https://issues.apache.org/jira/browse/FLINK-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104726#comment-16104726 ]
ASF GitHub Bot commented on FLINK-6521:
---------------------------------------
Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/4376#discussion_r130052071
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/zookeeper/ZooKeeperHaServices.java ---
@@ -113,6 +116,11 @@ public ZooKeeperHaServices(
 		this.runningJobsRegistry = new ZooKeeperRunningJobsRegistry(client, configuration);
 		this.blobStoreService = checkNotNull(blobStoreService);
+		try {
+			this.submittedJobGraphStore = ZooKeeperUtils.createSubmittedJobGraphs(client, configuration, executor);
+		} catch (Exception e) {
+			throw new RuntimeException(e);
--- End diff ---
We should not throw a `RuntimeException` here but instead a meaningful checked exception.
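
For context, here is a minimal, self-contained sketch of the pattern the comment suggests: declaring and throwing a checked exception instead of wrapping the cause in an unchecked RuntimeException. It assumes a checked wrapper type such as Flink's FlinkException; the surrounding class and the placeholder helper are illustrative only and are not the actual ZooKeeperHaServices code.

import org.apache.flink.util.FlinkException;

/**
 * Sketch of wrapping a construction failure in a meaningful checked exception.
 * Class and helper names are placeholders, not the real implementation.
 */
public class CheckedExceptionSketch {

	private final Object submittedJobGraphStore;

	// Declaring the checked exception forces callers to handle creation failures
	// explicitly instead of letting an unchecked RuntimeException propagate.
	public CheckedExceptionSketch(Object client, Object configuration, Object executor) throws FlinkException {
		try {
			// stands in for ZooKeeperUtils.createSubmittedJobGraphs(client, configuration, executor)
			this.submittedJobGraphStore = createSubmittedJobGraphs(client, configuration, executor);
		} catch (Exception e) {
			throw new FlinkException("Could not create the submitted job graph store.", e);
		}
	}

	private static Object createSubmittedJobGraphs(Object client, Object configuration, Object executor) throws Exception {
		return new Object(); // placeholder for the ZooKeeper-backed store creation
	}
}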
> Add per job cleanup methods to HighAvailabilityServices
> -------------------------------------------------------
>
> Key: FLINK-6521
> URL: https://issues.apache.org/jira/browse/FLINK-6521
> Project: Flink
> Issue Type: Improvement
> Components: Distributed Coordination
> Affects Versions: 1.3.0, 1.4.0
> Reporter: Till Rohrmann
> Assignee: Fang Yong
>
> The {{HighAvailabilityServices}} are used to manage services and persistent
> state at a single point. This also entails cleaning up the data used for HA.
> So far, the {{HighAvailabilityServices}} can only clean up the data of all
> stored jobs at once. In order to support cluster sessions, we have to extend
> this functionality so that data can be deleted selectively for individual jobs.
> This is necessary in order to keep the data of failed jobs while deleting the
> data of successfully executed jobs.
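
The description asks for per-job cleanup alongside the existing all-jobs cleanup. Below is a hypothetical sketch of such an extension; the interface name, the method name cleanUpJobData, and its exact signature are assumptions for illustration, not the API that was actually added to HighAvailabilityServices.

import org.apache.flink.api.common.JobID;

/**
 * Hypothetical sketch of the per-job cleanup extension described in the issue.
 * Only closeAndCleanupAllData() mirrors an existing HighAvailabilityServices
 * method; the per-job method is an illustrative assumption.
 */
public interface PerJobCleanupSketch {

	// Existing behaviour: remove the HA data of all jobs, e.g. when the cluster shuts down.
	void closeAndCleanupAllData() throws Exception;

	// Proposed addition: remove only the HA data belonging to the given job, so that
	// data of failed jobs can be kept while data of successfully executed jobs is deleted.
	void cleanUpJobData(JobID jobId) throws Exception;
}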
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)