Hi Enrique,

I think this is related to FLINK-20219 [1]. Currently, the HA-related
ConfigMaps/ZNodes are not cleaned up properly.

The clean-up mechanism for HA-related ConfigMaps in session mode could be
improved in the following two ways:
* Delete the JobManager leader ConfigMap once the job reaches a terminal
state (canceled, succeeded, failed)
* Try to clean up all the HA ConfigMaps for terminal jobs when shutting down
the cluster
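
Until that lands, leftover HA ConfigMaps can be removed by hand. A rough
sketch with kubectl, assuming the label conventions that Flink's Kubernetes
HA services apply to the ConfigMaps they create (`app=<cluster-id>` and
`configmap-type=high-availability`); `my-flink-cluster` below is a
placeholder cluster-id, so please verify the actual labels on your own
deployment first:

```shell
# Inspect which HA-related ConfigMaps exist for the cluster; check the
# labels with --show-labels before deleting anything.
kubectl get configmaps \
  --selector='app=my-flink-cluster,configmap-type=high-availability'

# Delete them only after the session cluster is fully shut down; removing
# them while the cluster is running would discard leader/checkpoint metadata.
kubectl delete configmaps \
  --selector='app=my-flink-cluster,configmap-type=high-availability'
```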

[1] https://issues.apache.org/jira/browse/FLINK-20219


Best,
Yang

Till Rohrmann <trohrm...@apache.org> 于2021年5月6日周四 下午10:23写道:

> Hi Enrique,
>
> I think you are running into this problem FLINK-20695 [1]. In a nutshell,
> Flink only deletes the config maps when it shuts down at the moment. We
> want to change this with the next release.
>
> [1]  https://issues.apache.org/jira/browse/FLINK-20695
>
> Cheers,
> Till
>
> On Wed, May 5, 2021 at 8:36 PM Enrique <enriquela...@gmail.com> wrote:
>
> > Hi all,
> >
> > I am deploying a Flink Cluster in session mode using Kubernetes HA and
> have
> > seen it working with the different config maps for the dispatcher,
> > restserver and resourcemanager. I also have configured storage for
> > checkpointing and HA metadata.
> >
> > When I submit a job, I can see that a ConfigMap is created for it
> > containing checkpoint information, which is updated correctly. Yet, when
> > I cancel a job I assume the ConfigMap would be deleted, but it seems that
> > it isn't. Is this the intended behaviour? I am worried that as many jobs
> > are submitted and cancelled from the Flink cluster, a large number of
> > ConfigMaps would remain in the cluster.
> >
> > Thanks in advance,
> >
> > Enrique
> >
> >
> >
> >
>
