> ...two sinks, you may need to create a new combined sink and do it yourself.
>
> On Thu, Jan 5, 2023 at 11:48 PM Great Info wrote:
>
I have a stream from Kafka; after reading it and doing some
transformations/enrichment, I need to store the final data stream in the
database and publish it to Kafka, so I am planning to add two sinks like
below:
finalInputStream.addSink(dataBaseSink); // Sink1
finalInputStream.addSink(
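Since the second addSink call is cut off above, here is a minimal, self-contained sketch of the intended two-sink topology; the sink variables and the stand-in source/types are assumptions, not from the original post.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class TwoSinkJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the real "read from Kafka + transform/enrich" pipeline.
        DataStream<String> finalInputStream = env.fromElements("event-1", "event-2");

        // Placeholder sinks; in the real job these would be e.g. a JDBC sink and a Kafka producer.
        SinkFunction<String> dataBaseSink = new SinkFunction<String>() {};
        SinkFunction<String> kafkaSink = new SinkFunction<String>() {};

        // The same stream can feed both sinks; each addSink() becomes its own sink
        // operator, but the two writes are not atomic with respect to each other.
        finalInputStream.addSink(dataBaseSink); // Sink1
        finalInputStream.addSink(kafkaSink);    // Sink2

        env.execute("two-sink sketch");
    }
}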
I have some Flink applications that read streams from Kafka. Now the
producer-side code has introduced some additional information in Kafka
headers while producing records.
I need to change my consumer-side logic to process a record only if the
header contains a specific value; if the header
I have a Flink job and the current flow looks like below:
Source_Kafka -> operator-1 (key by partition) -> operator-2 (enrich the
record) -> Sink1-Operator & Sink2-Operator
With this flow the current problem is at operator-2: the core logic that runs
here is to look up some reference status data from
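The snippet is cut off, but if that reference-status lookup is a call to an external store, one common way to keep operator-2 from blocking is Flink's Async I/O; a rough sketch with placeholder names:

import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

import java.util.Collections;
import java.util.concurrent.CompletableFuture;

/** operator-2 sketch: enrich each record with reference status data looked up asynchronously. */
public class AsyncStatusEnricher extends RichAsyncFunction<String, String> {

    @Override
    public void asyncInvoke(String record, ResultFuture<String> resultFuture) {
        // Hypothetical non-blocking lookup; replace with a real async client call.
        CompletableFuture
                .supplyAsync(() -> lookupStatus(record))
                .thenAccept(status -> resultFuture.complete(
                        Collections.singleton(record + "|" + status)));
    }

    private String lookupStatus(String record) {
        return "ACTIVE"; // placeholder for the real reference-status lookup
    }
}

// Wiring into the existing flow, after the key-by from operator-1:
//   DataStream<String> enriched = AsyncDataStream.unorderedWait(
//           keyedStream, new AsyncStatusEnricher(), 1, TimeUnit.SECONDS, 100);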
...way to set different parallelism for each slotSharingGroup
On Thu, Jul 14, 2022 at 10:12 PM Great Info wrote:
> -> If so, I think you can set Task1 and Task2 to the same parallelism and
> set them in the same slot sharing group. In this way, Task1 and Task2 will
> be deployed into the same slot and will be restarted at
> the same time (to ensure the data consistency).
> You can get more details about failover strategy in [1]
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/ops/state/task_failure_recovery/#failover-strategies
>
> Best,
> Lijie
>
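For reference, this is roughly how two operators can be given the same parallelism and the same slot sharing group in the DataStream API (operator names, the group name, and the stand-in source are placeholders):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // "Task1": a stand-in parallel source pinned to a named slot sharing group.
        DataStream<Long> task1 = env
                .fromSequence(1, 100)
                .name("Task1")
                .slotSharingGroup("shared-group")
                .setParallelism(2);

        // "Task2": same parallelism and same slot sharing group, so its subtasks
        // can be scheduled into the same slots as Task1's subtasks.
        task1.map(new MapFunction<Long, String>() {
                    @Override
                    public String map(Long value) {
                        return "enriched-" + value;
                    }
                })
                .name("Task2")
                .slotSharingGroup("shared-group")
                .setParallelism(2);

        env.execute("slot-sharing sketch");
    }
}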
> ...start a thread in the open() method to refresh the data regularly, but be
> careful to clean up your data and threads in the close() method, otherwise it
> will lead to leaks.
>
> Best,
> Weihua
>
>
> On Tue, Jun 14, 2022 at 12:04 AM Great Info wrote:
Hi,
I have one Flink job which has two tasks:
Task1 - sources some static data over HTTPS and keeps it in memory; this data
keeps refreshing every 1 hour.
Task2 - processes some real-time events from Kafka and uses the static data to
validate something and transform, then forwards to another Kafka topic.
so far,
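A minimal sketch of the pattern suggested in the reply above (the fetch logic, refresh interval, and types are assumptions): load the static data in open(), refresh it from a scheduled thread, and shut that thread down in close() to avoid the leaks mentioned.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Enriches/validates Kafka events against static data that is re-fetched every hour. */
public class StaticDataEnricher extends RichMapFunction<String, String> {

    private transient Map<String, String> staticData;
    private transient ScheduledExecutorService refresher;

    @Override
    public void open(Configuration parameters) {
        staticData = new ConcurrentHashMap<>();
        refreshStaticData(); // initial load
        refresher = Executors.newSingleThreadScheduledExecutor();
        refresher.scheduleAtFixedRate(this::refreshStaticData, 1, 1, TimeUnit.HOURS);
    }

    @Override
    public String map(String event) {
        // Hypothetical validation/transformation against the in-memory static data.
        String ref = staticData.getOrDefault(event, "UNKNOWN");
        return event + "|" + ref;
    }

    @Override
    public void close() {
        // Shut the refresh thread down, otherwise threads (and the data they hold) leak.
        if (refresher != null) {
            refresher.shutdownNow();
        }
    }

    private void refreshStaticData() {
        // Placeholder for the real HTTPS fetch; only the map update is shown here.
        staticData.put("some-key", "some-value");
    }
}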
I have one Flink job which reads files from S3 and processes them.
Currently it is running on Flink 1.9.0; I need to upgrade my cluster to
1.13.5, so I have made the changes in my job pom and brought up the Flink
cluster using the 1.13.5 dist.
When I submit my application I am getting the below
I need to download a keystore and use it while creating the source connector;
currently, I am overriding the open method
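If the connector in question is the legacy FlinkKafkaConsumer, one hedged option is to subclass it and fetch the keystore to a local path in open() before the Kafka client spins up; the download helper and the path below are assumptions, not Flink APIs.

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

/** Sketch: downloads the keystore on each task manager before the consumer opens. */
public class KeystoreAwareKafkaConsumer<T> extends FlinkKafkaConsumer<T> {

    private static final Path LOCAL_KEYSTORE = Paths.get("/tmp/client.keystore.jks");

    public KeystoreAwareKafkaConsumer(String topic, DeserializationSchema<T> schema, Properties props) {
        super(topic, schema, props);
        // props should point ssl.keystore.location at LOCAL_KEYSTORE
    }

    @Override
    public void open(Configuration configuration) throws Exception {
        downloadKeystoreTo(LOCAL_KEYSTORE); // hypothetical: fetch from S3/HTTP/secret store
        super.open(configuration);          // Kafka-side initialization starts here
    }

    private void downloadKeystoreTo(Path target) throws Exception {
        // Placeholder: write the keystore bytes obtained from your secure store.
        Files.write(target, new byte[0]);
    }
}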
I have deployed my own Flink setup in AWS ECS: one service for the JobManager
and one service for the TaskManagers. I am running one ECS task for the
JobManager and 3 ECS tasks for the TaskManagers.
I have a kind of batch job which I upload using the Flink REST API every day
with changing new arguments; when I
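For the daily REST submission, a hedged sketch of running an already-uploaded jar with new program arguments via the standard /jars/:jarid/run endpoint (the JobManager address, jar id, and arguments are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Sketch: runs an already-uploaded jar with today's arguments via the Flink REST API. */
public class SubmitJobViaRest {
    public static void main(String[] args) throws Exception {
        String jobManager = "http://jobmanager:8081";          // placeholder address
        String jarId = "abcd1234_my-job.jar";                  // placeholder id from POST /jars/upload
        String programArgs = "--date 2023-01-05 --mode daily"; // the per-day arguments

        String body = "{\"programArgs\": \"" + programArgs + "\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jobManager + "/jars/" + jarId + "/run"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode() + " " + response.body());
    }
}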