On 1 Jun 2022 at 23:23, Alexander Fedulov wrote:
> Hi Bariša,
>
> The way I see it, you either
> - need data from all sources because you are doing some
> conjoint processing. In that case stopping the pipeline is usually the
> right thing to do, or
> - the streams consumed …
Hi,
we are running a Flink job with multiple Kafka sources connected to
different Kafka servers.
The problem we are facing is that when one of the Kafka servers is down,
the Flink job starts restarting.
Is there any way for Flink to pause processing of the Kafka server which is
down, and yet continue processing the others?
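One knob that is sometimes used for transient broker outages is the restart strategy, so that the job keeps retrying while the unavailable cluster recovers instead of exhausting its restart attempts. A minimal flink-conf.yaml sketch (the values are illustrative placeholders, not tuned recommendations):

```yaml
# Sketch: keep retrying through a broker outage (values are examples only)
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 2147483647   # effectively "retry forever"
restart-strategy.fixed-delay.delay: 30 s            # back off between attempts
```

Note this does not pause only the failing source; the whole job still restarts on each attempt. If the streams are truly independent, running one job per Kafka cluster is a way to avoid coupling their failures.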
Small update:
we believe that the off-heap memory is used by the parquet writer (used
in the sink to write to S3).
On Wed, 24 Feb 2021 at 23:25, Bariša wrote:
I'm running Flink 1.8.2 in a container, and under heavy load the container
gets OOM-killed by the kernel.
I'm guessing that the reason for the kernel OOM is the large size of the
off-heap memory. Is there a way I can limit it in Flink 1.8.2?
I can see that newer versions of Flink have a config param for this;
checking …
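For reference, Flink 1.8 has no single cap on off-heap memory used by user code, but two related knobs exist: the container heap cutoff (the fraction of the container's memory reserved for non-heap usage) and the JVM's direct-memory limit, which can be set via the JVM options. A hedged flink-conf.yaml sketch; the sizes are placeholders, not tuned values:

```yaml
# Sketch for Flink 1.8 in containers (values are assumptions, tune per workload):
# Reserve a larger fraction of the container's memory for non-heap usage:
containerized.heap-cutoff-ratio: 0.4
# Cap JVM direct (off-heap) allocations, e.g. buffers used on the S3/parquet path:
env.java.opts: "-XX:MaxDirectMemorySize=1g"
```

In Flink 1.10+ the memory model was reworked into explicit options such as `taskmanager.memory.task.off-heap.size`, which is likely the "config param" mentioned above.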
> … example verify that there are the expected number of registered TaskManagers.
> It might cover your case.
>
> Piotrek
>
> On 9 Oct 2018, at 12:21, Bariša wrote:
As part of deploying task managers and job managers, I'd like to expose a
healthcheck on both task managers and job managers.
For the task managers, one of the requirements for them to be healthy is
that they have successfully registered themselves with the job manager.
Is there a way to achieve this?
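One way to build such a check, along the lines of the reply above, is to query the JobManager's monitoring REST API: `GET /taskmanagers` returns the currently registered TaskManagers. A minimal Python sketch; the host, port, and expected count are assumptions for illustration:

```python
import json
from urllib.request import urlopen

def is_healthy(payload: dict, expected: int) -> bool:
    """Return True if at least `expected` TaskManagers are registered.

    `payload` is the parsed JSON from GET /taskmanagers, which has the
    shape {"taskmanagers": [ ... one entry per registered TM ... ]}.
    """
    return len(payload.get("taskmanagers", [])) >= expected

def check_cluster(base_url: str = "http://jobmanager:8081", expected: int = 3) -> bool:
    # base_url and expected are hypothetical values for this sketch.
    with urlopen(f"{base_url}/taskmanagers") as resp:
        return is_healthy(json.load(resp), expected)

# Exercising the logic offline with a canned response:
sample = {"taskmanagers": [{"id": "tm-1"}, {"id": "tm-2"}, {"id": "tm-3"}]}
print(is_healthy(sample, expected=3))  # True
print(is_healthy(sample, expected=4))  # False
```

This only covers the JobManager side; a per-TaskManager check would need a local signal (e.g. probing the TM's own metrics port), since a TM does not itself expose its registration status over REST.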