Hi all,
I am on Flink 1.12.3.
So here’s the scenario: I have a Kafka topic as a source, where each record represents a change to an append-only audit log. The Kafka record has the following fields:
id (unique identifier for that audit log entry)
operation id (is shared across
Hi all,
I am on Flink 1.13.2. I created a table like so:
CREATE TABLE lead_buffer (
`id` INT NOT NULL,
`state` STRING NOT NULL,
PRIMARY KEY (`id`) NOT ENFORCED
) WITH (
'connector' = 'kafka',
'topic' = 'buffer',
'format' = 'debezium-avro-confluent',
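The snippet cuts off inside the WITH clause. A minimal sketch of a completed DDL might look like the following — the bootstrap servers and schema-registry URL are illustrative placeholders, not values from the original message:

```sql
CREATE TABLE lead_buffer (
  `id` INT NOT NULL,
  `state` STRING NOT NULL,
  PRIMARY KEY (`id`) NOT ENFORCED
) WITH (
  'connector' = 'kafka',
  'topic' = 'buffer',
  'format' = 'debezium-avro-confluent',
  -- the options below are assumed placeholders, not from the original post
  'properties.bootstrap.servers' = 'localhost:9092',
  'debezium-avro-confluent.url' = 'http://localhost:8081'
);
```

In Flink 1.13 the `debezium-avro-confluent` format requires the schema-registry URL option shown above; earlier versions used a longer option name.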
Hi all,
I am on Flink 1.12.3. I am trying to get a tumbling window to work with the Table API, as documented here:
https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/window-tvf/#tumble
I have a Kafka topic as a Flink source. I convert the stream
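For reference, the tumbling-window TVF syntax from the linked (master) docs looks like the sketch below; the table and column names are invented for illustration. Note that the window TVF syntax shipped in Flink 1.13, so on 1.12.3 only the older grouped-window form is available:

```sql
-- Window TVF form (Flink 1.13+); names are illustrative
SELECT window_start, window_end, COUNT(*) AS cnt
FROM TABLE(
  TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '10' MINUTES))
GROUP BY window_start, window_end;

-- Equivalent grouped-window form available on Flink 1.12
SELECT
  TUMBLE_START(order_time, INTERVAL '10' MINUTE) AS window_start,
  COUNT(*) AS cnt
FROM orders
GROUP BY TUMBLE(order_time, INTERVAL '10' MINUTE);
```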
Hi all,
The Avro specification supports microseconds, and reviewing the source code in org.apache.avro.LogicalTypes seems to indicate microsecond support. However, the conversion code in Flink (see org.apache.flink.formats.avro.typeutils.AvroSchemaConverter#convertToSchema)
has this
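For context, an Avro field carrying microsecond precision uses the `timestamp-micros` logical type on a `long`; a minimal schema fragment (record and field names invented for illustration):

```json
{
  "type": "record",
  "name": "Event",
  "fields": [
    {
      "name": "event_time",
      "type": { "type": "long", "logicalType": "timestamp-micros" }
    }
  ]
}
```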
From: Arvid Heise
Date: Wednesday, June 9, 2021 at 8:34 AM
To: Joseph Lorenzini
Cc: "user@flink.apache.org"
Subject: Re: Records Are Never Emitted in a Tumbling Event Window When Each Key Only Has One Record
Hi Joe,
could you please check (in web UI) if the watermark is advancing pas
records between two sources have the exact same time, cause the watermarks to not advance?
Joe
From: Arvid Heise
Date: Wednesday, June 9, 2021 at 8:34 AM
To: Joseph Lorenzini
Cc: "user@flink.apache.org"
Subject: Re: Records Are Never Emitted in a Tumbling Event Window Whe
Hi all,
I have observed behavior when joining two keyed streams together where events are never emitted. The source of each stream is a different Kafka topic. I am curious to know if this is expected and if there’s a way to work around it.
I am using a tumbling event window. All
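A frequent cause of event-time windows never firing is a watermark that stops advancing, for example because a Kafka partition is idle or receives only a single record. If the pipeline were expressed in SQL, a hedged sketch of the relevant pieces (table, column names, and intervals are invented) would be:

```sql
-- Illustrative only: declare an event-time attribute with a watermark
CREATE TABLE events (
  `key` STRING,
  `ts` TIMESTAMP(3),
  WATERMARK FOR `ts` AS `ts` - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka'
  -- remaining connector options omitted
);

-- Let watermarks advance past sources that stop emitting records
SET 'table.exec.source.idle-timeout' = '30 s';
```

The `table.exec.source.idle-timeout` option marks a source as idle after the given interval so it no longer holds back the overall watermark; the equivalent in the DataStream API is `WatermarkStrategy#withIdleness`.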
Hi all,
I am implementing a metric reporter for New Relic. I’d like it to support a job’s operator metrics that come with the Flink framework out of the box. To ensure each metric is unique, you can’t use the metric name; you need to use the metric identifier. However, I am
to partition in topic topic but this partition does not exist”
Confluent says they’ll be sending an upstream patch to Apache Kafka to improve the error message.
Thanks,
Joe
From: Becket Qin
Date: Thursday, December 10, 2020 at 9:27 AM
To: Joseph Lorenzini
Cc: "
Hi all,
I have a Flink job that uses FlinkKafkaConsumer to consume messages off a Kafka
topic and a FlinkKafkaProducer to produce records on a Kafka topic. The
consumer works fine. However, the Flink job eventually fails with the following
exception.
Caused by:
Hi all,
I plan to run Flink jobs as Docker containers in AWS Elastic Container Service. I will have checkpointing enabled, with state stored in an S3 bucket. Each deployment will run in per-job mode. Are there
any non-obvious downsides to running these jobs with a local