Hi Mazen,
Flink does not yet support automatic scaling. The recommended approach is
therefore to monitor the Flink job and trigger rescaling manually: stop the
job with a savepoint, then resume it from that savepoint with the adjusted
parallelism. The community is working on auto-scaling, but there is no
concrete date yet.
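The stop/resume cycle above can be driven from the Flink CLI. A minimal sketch that just assembles the two invocations; the flag names follow the Flink 1.x CLI (`flink cancel -s` to stop with a savepoint, `flink run -s ... -p ...` to resume), so check the CLI documentation for your version, and note that the job id, paths, and jar name here are placeholders:

```python
def rescale_commands(job_id, new_parallelism, savepoint_dir, job_jar):
    """Return the two CLI invocations needed to rescale a job."""
    # 1. Stop the job, taking a savepoint into savepoint_dir.
    stop = ["flink", "cancel", "-s", savepoint_dir, job_id]
    # 2. Resume from the savepoint with the new parallelism.
    #    (In practice, "flink cancel -s" prints the concrete savepoint
    #    path; here we assume it is known.)
    resume = ["flink", "run", "-s", savepoint_dir,
              "-p", str(new_parallelism), job_jar]
    return stop, resume

stop, resume = rescale_commands("a1b2c3", 8, "hdfs:///savepoints", "job.jar")
print(" ".join(stop))
print(" ".join(resume))
```

Wrapping the commands in a small script like this is what "monitor and trigger rescaling" amounts to in practice: something external watches the job and runs these two steps when needed.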
So what is the current state of dynamic scaling (non-static, i.e. not set by
the user in the code) for SLA guarantees? Is it already included in Flink,
and is it still necessary to stop/restart the job for dynamic scaling?
Thanks.
Hi Govind,
In Flink 1.2 (feature complete, undergoing testing) you will be able to scale
your jobs/operators up and down at will; however, you'll have to build a
little tooling around it yourself and scale based on your own metrics. You
should be able to integrate this with Docker Swarm or Amazon
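The "scale based on your own metrics" part can be as simple as a threshold rule that maps an observed input rate to a target parallelism. A hedged sketch of such tooling; the function name, the capacity parameter, and the hysteresis threshold are all hypothetical choices, not anything Flink provides:

```python
import math

def target_parallelism(records_in_per_sec, capacity_per_subtask,
                       current, max_parallelism):
    """Pick a parallelism so each subtask stays under its capacity,
    with simple hysteresis so small fluctuations don't trigger a rescale."""
    needed = max(1, math.ceil(records_in_per_sec / capacity_per_subtask))
    needed = min(needed, max_parallelism)
    # Only rescale when the change is significant (>= 2 subtasks off);
    # every rescale costs a savepoint/restart cycle, so it should be rare.
    return needed if abs(needed - current) >= 2 else current
```

In a real setup you would feed this with throughput metrics polled from the job (e.g. via Flink's metrics/REST interface) and, when it returns a value different from the current parallelism, run the savepoint-and-restart cycle described earlier in the thread.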
Hi All,
It would be great if someone could help me with my questions. I appreciate
all the help.
Thanks.
> On Dec 23, 2016, at 12:11 PM, Govindarajan Srinivasaraghavan
> wrote:
>
> Hi,
>
> We have a computation heavy streaming flink job which will be processing
> around
Sorry, my bad. Comments should work now.
On Mon, Apr 4, 2016 at 3:51 PM, Aljoscha Krettek
wrote:
> Comments are not enabled.
>
Comments are not enabled.
On Mon, 4 Apr 2016 at 13:58 Till Rohrmann wrote:
> Hi Flink community,
>
> I recently started working on dynamic scaling. As a first step we want to
> introduce state sharding which is a requirement for partitioned state to be
> re-distributable.
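The sharding idea Till describes can be illustrated with a toy model: each key is hashed into a fixed number of shards, and each parallel operator instance owns a contiguous range of shards, so state can be redistributed shard-by-shard when the parallelism changes. This is only a simplified sketch of the concept, not Flink's actual implementation (names, and Python's built-in `hash`, are illustrative):

```python
def key_group(key, max_parallelism):
    # Each key maps to one of max_parallelism shards ("key groups").
    # The mapping is independent of the current parallelism, so it
    # never changes when the job is rescaled. (Python's hash() stands
    # in for whatever hash function the real system uses.)
    return hash(key) % max_parallelism

def operator_index(group, max_parallelism, parallelism):
    # Each operator instance owns a contiguous range of shards; on
    # rescale, only whole shards move between instances.
    return group * parallelism // max_parallelism
```

The point of the indirection: with, say, 128 shards, going from parallelism 4 to 8 reassigns shard ranges to instances, but no individual key ever changes its shard, which is what makes the partitioned state redistributable.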