Re: Check points are discarded with reason NULL

2023-07-23 Thread Hangxiang Yu
Hi,
This exception is thrown because the number of checkpoint failures
exceeds execution.checkpointing.tolerable-failed-checkpoints; see [1] for
more details.
The actual root cause of the failed checkpoints should be in your
JM/TM logs. You could check or share those.

[1]
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/deployment/config/#execution-checkpointing-tolerable-failed-checkpoints
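
As a sketch (the value below is illustrative, not from this thread), the
threshold can be raised in flink-conf.yaml:

```yaml
# flink-conf.yaml: tolerate up to 3 checkpoint failures before failing
# the job (the default is 0; 3 is only an example value).
execution.checkpointing.tolerable-failed-checkpoints: 3
```

The same setting is also available programmatically via
CheckpointConfig#setTolerableCheckpointFailureNumber. Note that raising it
only masks the symptom; the underlying checkpoint errors in the JM/TM logs
still need to be fixed.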

On Mon, Jul 24, 2023 at 2:06 PM Y SREEKARA BHARGAVA REDDY <
ynagiredd...@gmail.com> wrote:

> Hi Team,
>
> While running a Flink streaming job, I got the following exception.
>
> Has anyone faced this issue?
> Checkpoints are discarded with *reason* NULL.
>
> org.apache.flink.util.FlinkRuntimeException: Exceeded checkpoint tolerable
> failure threshold.
>
> at org.apache.flink.runtime.checkpoint.CheckpointFailureManager
> .handleTaskLevelCheckpointException(CheckpointFailureManager.java:87)
>
> at org.apache.flink.runtime.checkpoint.CheckpointCoordinator
> .failPendingCheckpointDueToTaskFailure(CheckpointCoordinator.java:1467)
>
> at org.apache.flink.runtime.checkpoint.CheckpointCoordinator
> .discardCheckpoint(CheckpointCoordinator.java:1377)
>
> at org.apache.flink.runtime.checkpoint.CheckpointCoordinator
> .receiveDeclineMessage(CheckpointCoordinator.java:719)
>
> at org.apache.flink.runtime.scheduler.SchedulerBase
> .lambda$declineCheckpoint$5(SchedulerBase.java:807)
>
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:
> 511)
>
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
> .access$201(ScheduledThreadPoolExecutor.java:180)
>
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
> .run(ScheduledThreadPoolExecutor.java:293)
>
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor
> .java:1149)
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor
> .java:624)
>
> at java.lang.Thread.run(Thread.java:748)
>
>
> Please let me know how to fix the issue.
>


-- 
Best,
Hangxiang.


Re: kafka sink

2023-07-23 Thread Shammon FY
Hi nick,

Is there any error log? That may help to analyze the root cause.

On Sun, Jul 23, 2023 at 9:53 PM nick toker  wrote:

> hello
>
>
> We replaced the deprecated Kafka producer with the Kafka sink,
> and from time to time when we submit a job it gets stuck for 5 min in
> initializing (on the sink operators).
> We verified that the transaction prefix is unique.
>
> This did not happen when we used the Kafka producer.
>
> What can be the reason?
>
>
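
For context, the transactional-id prefix is set on the KafkaSink builder. A
minimal sketch, assuming EXACTLY_ONCE delivery (only then do Kafka
transactions and the prefix come into play); the broker, topic, and prefix
values are placeholders:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

KafkaSink<String> sink = KafkaSink.<String>builder()
        .setBootstrapServers("broker:9092")                  // placeholder
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("my-topic")                        // placeholder
                .setValueSerializationSchema(new SimpleStringSchema())
                .build())
        .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
        .setTransactionalIdPrefix("job-unique-prefix")       // must be unique per job
        .build();
```

One commonly reported cause of slow initialization with EXACTLY_ONCE is the
sink aborting lingering transactions from earlier runs on startup; the
TaskManager and Kafka broker logs, as suggested above, should show whether
that is what is happening here.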


Re: Set processing time in the past

2023-07-23 Thread liu ron
Hi, Eugenio

Can you describe the requirements in more detail?

Best,
Ron

Shammon FY  wrote on Mon, Jul 17, 2023 at 09:10:

> Hi Eugenio,
>
> I can't quite follow it; could you describe it in more detail?
>
> Best,
> Shammon FY
>
> On Sat, Jul 15, 2023 at 5:14 PM Eugenio Marotti <
> ing.eugenio.maro...@gmail.com> wrote:
>
>> Hi everyone,
>>
>> is there a way to set Flink processing time in the past?
>>
>> Thanks
>> Eugenio
>>
>


RE: TCP server socket with Kubernetes Cluster

2023-07-23 Thread Kamal Mittal via user
Hello Community,

Please share your views on the mail below.

Rgds,
Kamal

From: Kamal Mittal via user 
Sent: 21 July 2023 02:02 PM
To: user@flink.apache.org
Subject: TCP server socket with Kubernetes Cluster

Hello,

I created a TCP server socket as a single source function, and Flink opens it
on one pod (taskmanager) out of the set of pods (taskmanagers) in the
Kubernetes cluster. Is there any way to know on which pod (taskmanager) it is
opened? Does Flink give any such information?

This is needed for the client to access the Kubernetes service of that same pod.

Rgds,
Kamal
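
One possible approach (a sketch, not confirmed in this thread): on Kubernetes
the container hostname is the pod name, so the source function could resolve
and log it when the socket is opened, and the client can then be pointed at
that pod. The lookup itself is plain JDK code; PodLocator is a hypothetical
helper name:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class PodLocator {

    /** Returns the local hostname; inside a Kubernetes pod this is the pod name. */
    public static String podName() {
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            // Kubernetes also sets the HOSTNAME env var for every container.
            String env = System.getenv("HOSTNAME");
            return env != null ? env : "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println("TCP source opened on pod: " + podName());
    }
}
```

In a RichSourceFunction you would call this from open() and log it or publish
it somewhere the client can read it. Note that after a failover the source may
be rescheduled onto a different pod, so the lookup has to be repeated.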


kafka sink

2023-07-23 Thread nick toker
hello


We replaced the deprecated Kafka producer with the Kafka sink,
and from time to time when we submit a job it gets stuck for 5 min in
initializing (on the sink operators).
We verified that the transaction prefix is unique.

This did not happen when we used the Kafka producer.

What can be the reason?