Hi Nastaran,

Could you specify what additional information you need?

From the discussion that you posted:
1) If you have batch jobs, then Flink does its own memory management
(outside the heap, so it is not subject to the JVM's GC).
    Although you do not see the memory being de-allocated when you cancel
the job, this memory is available to other jobs, so you do not have to
de-allocate it manually.
2) If you use streaming, then you should use one of the provided state
backends and they will do the memory management
    for you (see [1] and [2], and the sketch right after this list).
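As a minimal sketch of what point 2) looks like in a Flink 1.6 Java job:
the class name, the dummy pipeline and the checkpoint URI below are only
placeholders for illustration, you would plug in your own job and a path
that exists in your environment.

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Select an explicit state backend: working state stays on the
        // TaskManager, checkpoints go to a file system.
        // The URI is only a placeholder.
        env.setStateBackend(new FsStateBackend("file:///tmp/flink-checkpoints"));

        // Dummy pipeline, just so the sketch is self-contained and runnable.
        env.fromElements(1, 2, 3).print();

        env.execute("job-with-explicit-state-backend");
    }
}

If your state grows large, the RocksDB state backend (discussed in [2]) is
the usual alternative, since it keeps state on disk/off-heap rather than on
the JVM heap.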

Cheers,
Kostas

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/state/state_backends.html
[2]
https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/state/large_state_tuning.html

On Wed, Nov 28, 2018 at 7:11 AM Nastaran Motavali <n.motav...@son.ir> wrote:

> Hi,
> I have a simple Java application that uses Flink 1.6.2.
> When I run the jar file, I can see that the job consumes a part of the
> host's main memory. If I cancel the job, the consumed memory is not
> released until I stop the whole cluster. How can I release the memory after
> cancellation?
> I have followed the conversation around this issue at the mailing list
> archive[1] but still need more explanations.
> [1]
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Need-help-to-understand-memory-consumption-td23821.html#a23926
>
>
>
> Kind regards,
>
> Nastaran Motavalli
