…consumer group for the new job version (and start it
from a savepoint), will the savepoint ensure that the second job instance
starts from the correct offset? Do I need to do anything extra to make this
work? (For example, set the uid on the source of the job.)

Thanks!
Moiz
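For context, assigning an explicit uid to the source is the usual way to make
savepoint state (including Kafka offsets) re-mappable after the job changes.
A minimal sketch, assuming a Flink 1.x Java job (the topic name, properties
variable, and exact Kafka connector class are placeholders that vary by
Flink/Kafka version):

```java
// Give the Kafka source a stable uid so the offsets stored in a
// savepoint can be matched back to this operator in the new job version.
DataStream<String> stream = env
        .addSource(new FlinkKafkaConsumer09<>(
                "my-topic",                  // placeholder topic name
                new SimpleStringSchema(),
                kafkaProperties))            // placeholder consumer config
        .uid("kafka-source");                // keep constant across versions
```

Without an explicit uid, Flink generates one from the job graph structure, so
changing the topology can prevent the savepoint state from being restored.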
Hi!

I think in many cases it is more convenient to have a savepoint-and-stop
operation to use for upgrading the cluster/job, but it should not be
required. If the output of your job needs to be exactly once and you don't
have an external deduplication mechanism, then even the current …
Hi Greg,
yes certainly, there are more requirements to this than the quick sketch I
gave above and that seems to be one of them.
Cheers,
Aljoscha
On Thu, 22 Dec 2016 at 17:54 Greg Hogan wrote:
> Aljoscha,
>
> For the second possible solution, is there also a requirement …
Hi Stephan -
I agree that the savepoint-shutdown-restart model is nominally the same as the
rolling restart with one notable exception - a lack of atomicity. There is a
gap between invoking the savepoint command and the shutdown command. My problem
isn’t fortunate enough to have idempotent …
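Depending on the Flink version, the CLI can also combine the two steps, which
removes exactly that gap; a sketch assuming the cancel-with-savepoint flag
that later 1.x releases expose (the savepoint directory and job id are
placeholders):

```
# Trigger a savepoint and cancel the job in one operation once the
# savepoint completes, instead of issuing two separate commands.
bin/flink cancel -s hdfs:///flink/savepoints <jobId>
```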
Hi Andrew!
Would be great to know if what Aljoscha described works for you. Ideally,
this costs no more than a failure/recovery cycle, which one typically also
gets with rolling upgrades.
Best,
Stephan
On Tue, Dec 20, 2016 at 6:27 PM, Aljoscha Krettek
wrote:
Hi,
zero-downtime updates are currently not supported. What is supported in
Flink right now is a savepoint-shutdown-restore cycle. With this, you first
draw a savepoint (which is essentially a checkpoint with some meta data),
then you cancel your job, then you do whatever you need to do (update
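The savepoint-shutdown-restore cycle described above maps onto three steps
with Flink's command-line client; a sketch under the assumption of a
standard CLI setup (job ids, jar names, and the savepoint directory are
placeholders):

```
# 1. Draw a savepoint; the command prints the savepoint path on success.
bin/flink savepoint <jobId> hdfs:///flink/savepoints

# 2. Cancel the running job.
bin/flink cancel <jobId>

# (update the job jar, the Flink version, the cluster, ...)

# 3. Resume the new job version from the savepoint.
bin/flink run -s hdfs:///flink/savepoints/savepoint-<id> new-job.jar
```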
Hi. Does Apache Flink currently have support for zero downtime or the
ability to do rolling upgrades?

If so, what are concerns to watch for and what best practices might
exist? Are there version management and data inconsistency issues to
watch for?