Hi,

by default, all metadata is lost when the JobManager is shut down in a non-high-availability setup. Flink uses ZooKeeper together with a distributed file system to store the required metadata [1] in a persistent and distributed manner.
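As a rough sketch, the ZooKeeper HA setup described in [1] boils down to a few entries in conf/flink-conf.yaml (the quorum address and storage path below are placeholders, not a recommendation):

```yaml
# Enable ZooKeeper-based high availability.
high-availability: zookeeper

# ZooKeeper quorum used for leader election and to point to the metadata.
high-availability.zookeeper.quorum: localhost:2181

# Distributed file system where JobManager metadata is actually persisted;
# ZooKeeper only stores pointers to it.
high-availability.storageDir: hdfs:///flink/ha/
```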

A single-node setup is rather uncommon, but you can also start ZooKeeper locally, as is done in our end-to-end tests [2].
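Without HA, you can still carry jobs across a cluster restart with savepoints. A minimal command-line sketch, assuming a standalone cluster; the job id, savepoint directory, and jar name are placeholders:

```shell
# List running jobs to find the job id:
bin/flink list

# Take a savepoint and cancel the job in one step
# (the target directory is a placeholder):
bin/flink cancel -s /tmp/flink-savepoints <jobId>

# Stop the cluster, do the maintenance, start it again
# (script names as shipped with the Flink distribution):
bin/stop-cluster.sh
bin/start-cluster.sh

# Resume the job from the savepoint path printed by the cancel command:
bin/flink run -s /tmp/flink-savepoints/savepoint-<id> my-job.jar
```

Note that the savepoint path is printed when the savepoint completes; the job is restored from exactly that path.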

I hope this helps.

Regards,
Timo

[1] https://ci.apache.org/projects/flink/flink-docs-master/ops/jobmanager_high_availability.html
[2] https://github.com/apache/flink/blob/master/flink-end-to-end-tests/test-scripts/test_ha_datastream.sh


On 08.11.18 at 14:15, Chang Liu wrote:
In other words, how can I keep the jobs running across system patching, server restarts, etc.? Is this related to standalone vs. YARN? Or to whether ZooKeeper is used?

Many thanks!

Best regards/祝好,

Chang Liu 刘畅


On 8 Nov 2018, at 13:38, Chang Liu <fluency...@gmail.com> wrote:

Thanks!

If I have a cluster with more than one node (standalone or YARN), can I stop and start any single node among them and keep the jobs running?

Best regards/祝好,

Chang Liu 刘畅


On 7 Nov 2018, at 16:17, 秦超峰 <18637156...@163.com> wrote:

The second option.


        

On 11/07/2018 17:14, Chang Liu <fluency...@gmail.com> wrote:

    Hi,

    I have a question regarding whether the currently running jobs will
    restart if I stop and start the Flink cluster.

    1. Let’s say I have a standalone, single-node cluster.
    2. I have several Flink jobs already running on the cluster.
    3. If I do a bin/cluster-stop.sh and then a
    bin/cluster-start.sh, will the previously running jobs restart again?

    OR

    Before I do bin/cluster-stop.sh, I have to trigger a savepoint for
    each of the jobs, and after bin/cluster-start.sh has finished, I
    have to start each job I want to restore from the savepoint
    triggered before.

    Many thanks in advance :)

    Best regards/祝好,

    Chang Liu 刘畅




