Done. You should be all set :)
-Matthias
On 5/20/24 10:10 AM, bou...@ulukai.net wrote:
Dear Apache Kafka Team,
I hope to post in the right place: my name is Franck LEDAY, under
Apache-Jira ID "handfreezer".
I opened an issue as Improvement KAFKA-16707 but I failed to
assign it to me.
May I ask to be added to the contributors list for Apache Kafka?
Hello!
Currently I am running a cluster of 3 Kafka machines. Two of them are
hosted in the same data center and the last one is in a different one.
My Kafka heap options are the following:
KAFKA_HEAP_OPTS=-Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
-XX:MaxGCPauseMillis=20
Sent: May 17, 2024, 2:24
To: users@kafka.apache.org
Subject: Re: Request to be added to kafka contributors list
Thanks for reaching out Yang. You should be all set.
-Matthias
On 5/16/24 7:40 AM, Yang Fan wrote:
Dear Apache Kafka Team,
I hope this email finds you well. My name is Fan Yang, JIRA ID is fanyan, I
kind
Thanks Matthias,
I still can't find "Assign to me" button beside Assignee and Reporter. Could
you help me set it again?
Best regards,
Fan
From: Matthias J. Sax
Sent: May 17, 2024, 2:24
To: users@kafka.apache.org
Subject: Re: Request to be added to kafka contributors list
Hi Team,
We have some Splunk dashboards along with custom UI elements to report Kafka
health status. We forward all Kafka health check statuses to be loaded into
Splunk. However, we are encountering capacity issues on Splunk as we service
multiple Kafka clusters across our data center as well
Hello.
I’m having a problem with the Kafka protocol API.
Requests:
DescribeLogDirs Request (Version: 0) => [topics]
topics => topic [partitions]
topic => STRING
partitions => INT32
My request contains `[{topic: “blah”, partitions: [0,1,2,3,4,5,6,7,8,9]}]`, but
the resul
tion count is the same for all input topic).
To work around this, you would need to rewrite the program to use either
`groupBy((k,v) -> k)` instead of `groupByKey()`, or do a
`.repartition().groupByKey()`.
Does this make sense?
-Matthias
On 5/16/24 2:11 AM, Kay Hannay wrote:
Hi,
we have a
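Matthias's two options can be illustrated with a toy simulation (plain Python, not Kafka Streams; the hash function and layout are made up): after re-keying, records with the same new key can sit in different partitions, so a per-partition aggregation is partial until the data is redistributed by `groupBy(...)` or `repartition()`.

```python
from collections import defaultdict

def partition(key: str, num_partitions: int) -> int:
    # stand-in for Kafka's default partitioner (the real one uses murmur2)
    return sum(key.encode()) % num_partitions

# (old_key, new_key) pairs: the topic is partitioned by old_key,
# then the topology re-keys each record to new_key
records = [("a", "x"), ("b", "x"), ("c", "y"), ("d", "y")]

by_old = defaultdict(set)
for old, new in records:
    by_old[partition(old, 2)].add(new)

# without a repartition, the same new key lives in several partitions,
# so each task would hold only a partial aggregate for it
assert "x" in by_old[0] and "x" in by_old[1]

# groupBy((k, v) -> ...) / repartition() redistribute by the new key first
by_new = defaultdict(list)
for _, new in records:
    by_new[partition(new, 2)].append(new)

# now every record for a given new key sits in exactly one partition
assert all(partition(k, 2) == p for p, keys in by_new.items() for k in keys)
```

The redistribution step is exactly the extra internal repartition topic that `groupBy` creates and that a plain `groupByKey()` skips.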
Dear Apache Kafka Team,
I hope this email finds you well. My name is Fan Yang, JIRA ID is fanyan, I
kindly request to be added to the contributors list for Apache Kafka. Being
part of this list would allow me to be assigned to JIRA tickets and work on
them.
Thank you for considering my request
Hi,
we have a Kafka streams application which merges (merge, groupByKey,
aggregate) a few topics into one topic. The application is stateful, of
course. There are currently six instances of the application running in
parallel.
We had an issue where one new Topic for aggregation did have
Thank you for the quick response! :)
I've filed KAFKA-16779 <https://issues.apache.org/jira/browse/KAFKA-16779>
to track the issue, with the information you requested. Please let me know
if I can provide anything further.
On Tue, May 14, 2024 at 8:28 PM Luke Chen wrote:
> Hi Nichol
Hi Nicholas,
I didn't know anything in v3.7.0 would cause this issue.
It would be good if you could open a JIRA for it.
Some info to be provided:
1. You said "in the past", what version of Kafka was it using?
2. What is your broker configuration?
3. KRaft mode? Combined mode? (controlle
Hi all,
We are facing bottlenecks related to 2 topics.
I want to know where we can find assignment.partition.strategy in Strimzi
Kafka.
How can we find out how the incoming log load is organized and queued in Kafka? It
looks like all 3 partitions get hogged by one source with a tremendous amount of logs.
The max
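For reference, the consumer config is spelled `partition.assignment.strategy` (there is no `assignment.partition.strategy`), and in Strimzi it belongs to the consuming application's configuration rather than the Kafka custom resource. A hedged example using the stock round-robin assignor:

```properties
# consumer configuration (not a broker/Strimzi CR setting)
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
```

Note that if one producer's keys all hash to the same partition, the skew is created on the producing side (keying/partitioner), and no consumer-side assignor can fix it.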
Ah. Well this isn't anything new then since it's been the case since 2.6,
but the default task assignor in Kafka Streams will sometimes assign
partitions unevenly for a time if it's trying to move around stateful tasks
and there's no copy of that task's state on the local disk attached
Thank you, Sophie, for your reply and for these recommendations - they are
informative.
We are trying them out.
Thanks,
Nagendra U M
From: Sophie Blee-Goldman
Sent: Tuesday, May 7, 2024 1:54 AM
To: users@kafka.apache.org
Subject: Re: Kafka Stream App Rolling
Kafka upgraded from 3.5.1 to 3.7.0 version
On Fri, May 10, 2024 at 2:13 AM Sophie Blee-Goldman
wrote:
> What version did you upgrade from?
>
> On Wed, May 8, 2024 at 10:32 PM Penumarthi Durga Prasad Chowdary <
> prasad.penumar...@gmail.com> wrote:
>
> > Hi Team
What version did you upgrade from?
On Wed, May 8, 2024 at 10:32 PM Penumarthi Durga Prasad Chowdary <
prasad.penumar...@gmail.com> wrote:
> Hi Team,
> I'm utilizing Kafka Streams to handle data from Kafka topics, running
> multiple instances with the same application I
Hi Team,
I'm utilizing Kafka Streams to handle data from Kafka topics, running
multiple instances with the same application ID. This enables distributed
processing of Kafka data across these instances. Furthermore, I've
implemented state stores with time windows and session windows. To retrieve
. Especially in combination
with...
2. The internal.leave.group.on.close should always be set to "false" by
Kafka Streams. Are you overriding this? If so, that definitely explains a
lot of the rebalances. This config is basically like an internal backdoor
used by Kafka Streams to do exactly what
Hi,
We have multiple replicas of an application running on a kubernetes cluster.
Each application instance runs a stateful kafka stream application with an
in-memory state-store (backed by a changelog topic). All instances of the
stream apps are members of the same consumer group
Dear Kafka Community,
We are upgrading our kafka instance from 2.8.1 version to 3.5.2.
The setup we have was of 3 kafka broker nodes and 3 zookeeper nodes.
We performed a rolling upgrade of the broker with the 3.5.2 version and
observed that some topics got deleted post upgrade. We don't see
Hi Kamal,
I understand this, however the connections are maintained by a vertx Kafka
client and I am not able to find a way to catch the closed connection and
reopen it.
Would setting connections.max.idle.ms to -1 or to max int/long help here?
Thanks
Sachin
On Sat, 4 May 2024 at 11:06 PM
Hi Sachin,
Why do you want to change the default settings? If the connection is open
and unused,
then it is fair to close the connection after the timeout and reopen it
when required.
On Fri, May 3, 2024 at 1:06 PM Sachin Mittal wrote:
> Hi,
> I am using a Kafka producer java client by
Hi,
I am using a Kafka producer java client by vert.x framework.
https://vertx.io/docs/apidocs/io/vertx/kafka/client/producer/KafkaProducer.html
There is a producer setting in kafka:
connections.max.idle.ms = 540000
So if there are no records to produce then after 9 minutes I get this in my
logs
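If the goal is to stop the client from reaping idle connections, the producer config can be raised or effectively disabled (illustrative value; whether -1 is honored varies by client version, so a very large value is the safer bet):

```properties
# producer config; the default is 540000 (9 minutes)
connections.max.idle.ms=2147483647
```

Keeping connections open indefinitely trades the reconnect cost for more idle sockets on both sides; the broker's own connections.max.idle.ms (default 10 minutes) must also be raised, or the broker will still close the connection first.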
Just following up, I realized I forgot to add some information.
This is using kafka 3.5.1,
I am in the process of setting up a kafka cluster which is configured to
> use KRaft. There is a set of three controller nodes and a set of six
> brokers. Both the controllers and the b
Dear Team
Greetings for the day
I have KAFKA running in a namespace on a k8s cluster exposed with NodePort.
I took the backup of the whole namespace and restored it into a
different namespace using velero. Now I have another kafka deployed on the
2nd namespace which is also exposed at some
Hello,
I am in the process of setting up a kafka cluster which is configured to
use KRaft. There is a set of three controller nodes and a set of six
brokers. Both the controllers and the brokers are configured to use mTLS
(Mutual TLS). So the part of the controller config looks like
Congrats Greg!
On 4/15/24 10:44 AM, Hector Geraldino (BLOOMBERG/ 919 3RD A) wrote:
Congrats! Well deserved
From: d...@kafka.apache.org At: 04/13/24 14:42:22 UTC-4:00To:
d...@kafka.apache.org
Subject: [ANNOUNCE] New Kafka PMC Member: Greg Harris
Hi all,
Greg Harris has been a Kafka
On 2024/04/05 06:06:29 Manikumar wrote:
> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 3.6.2
>
> This is a bug fix release and it includes fixes and improvements from 28
JIRAs.
>
> All of the changes in this release can be found in the rele
Thanks very much for your guidance, Matthias. Sorry for the delay in responding.
I could see MSK (Kafka 3.6.0) broker log messages with “triggered followup
rebalance scheduled for 0” but no occurrences of “no follow” in the broker logs.
Hoping that rebalancing will not prevent the stalling
Learning.
Apache Kafka is the backbone for high-scale data ingestion, for maintaining
the source of data truth, and in the future for CDC, time travel into a
system's past, and training AI models for predictive analytics.
More details: https://bugsbunnyshah.github.io/braineous/container-first
Hi Pushkar,
unfortunately, cross-cluster processing is currently not possible with
Kafka Streams.
Best,
Bruno
On 4/11/24 4:13 PM, Pushkar Deole wrote:
Hi All,
We are using a streams application and currently the application uses a
common kafka cluster that is shared along with many other
Hi All,
We are using a streams application and currently the application uses a
common kafka cluster that is shared along with many other applications.
Our application consumes from topics that are populated by other
applications and it consumes the events from those topics, processes those
Hi Mangat,
back to work now. I've configured our Streams applications to use
exactly-once semantics, but to no avail. Actually, after some more
investigation I've come to suspect that the issue is somehow related
to rebalancing.
The initially shown topology lives inside a Quarkus Kafka Streams
Thanks for your help,
It seems we have identified the issue.
Kafka SSL is based on the javax.net.ssl implementation, which only supports
single-threaded operations.
During encryption, only one logical thread is working on the CPU.
I am attempting to integrate other SSL implementations
Hello everyone,
After integrating SSL with Kafka, I noticed a significant decrease in producer
speed.
The speed of transmitting messages in plaintext is 101MB/s, which is exactly my
network's maximum bandwidth speed.
However, the speed of transmitting messages encrypted with SSL is only 19.17
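A couple of broker settings sometimes help when TLS work saturates a single thread (illustrative values; whether they help depends on where the single-threaded stage actually is):

```properties
# server.properties: more network threads share TLS record processing
num.network.threads=8
num.io.threads=8
```

If the bottleneck is on the producing side instead, running several producer instances (or splitting traffic across them) parallelizes the encryption work.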
the topology.
I've put a minimal reproduction below (against kafka-streams 3.7.0, please
excuse the Scala). This fails with:
org.apache.kafka.streams.errors.StreamsException: failed to initialize
processor KSTREAM-LEFTJOIN-03
at
org.apache.kafka.streams.processor.internals.ProcessorNode.init
Perf tuning is always tricky... 350 rec/sec sounds pretty low though.
You would first need to figure out where the bottleneck is. Kafka
Streams exposes all kind of metrics:
https://kafka.apache.org/documentation/#kafka_streams_monitoring
Might be good to inspect them as a first step -- maybe
The Apache Kafka community is pleased to announce the release for
Apache Kafka 3.6.2
This is a bug fix release and it includes fixes and improvements from 28 JIRAs.
All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/3.6.2/RELEASE_NOTES.html
Hi All,
My streams application is not processing more than 350 records/sec under a
high load of 3 million records produced every 2-3 minutes.
My scenarios are as below -
I am on Kafka and Streams version 3.5.1.
My key-value pairs are in protobuf format.
I do a groupByKey followed by a TimeWindow
atthias and yourself for the
guidance on the stalling issue in the Kafka Streams client. After restoring the
default value for the METADATA_MAX_AGE_CONFIG, I haven’t seen the issue
happening. Heavy rebalancing (as mentioned before) continues to happen. I will
refer to the link which mentions about certa
Dear Kafka experts , Could anyone having this data share the details please
On Wed, Apr 3, 2024 at 3:42 PM Kafka Life wrote:
> Hi Kafka users
>
> Does any one have a document or ppt that showcases the capabilities of
> Kafka along with any cost management capability?
> i have
Apologies for the delay, Bruno. Thank you so much for the excellent link and
for your inputs! Also, I would like to thank Matthias and yourself for the
guidance on the stalling issue in the Kafka Streams client. After restoring the
default value for the METADATA_MAX_AGE_CONFIG, I haven’t seen
Hi Kafka users
Does any one have a document or ppt that showcases the capabilities of
Kafka along with any cost management capability?
I have a customer who is still using IBM MQ and RabbitMQ. I want the
client to consider Kafka for messaging and data streaming. I wanted to seek
your expert
Hi,
I'm using kafka connect, passing data with avro schema.
By default I get a schema of millisecond time precision for datetime2
columns.
Do you support microsecond time precision as well?
Thanks
Hi,
The follower is not able to sync up with the leader because the epochs
diverged between leader and follower.
To confirm this, you can enable request logger and check the
diverging-epoch field in the fetch-response:
https://sourcegraph.com/github.com/apache/kafka
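Enabling the request logger is typically a broker-side log4j change (the appender name follows the stock log4j.properties shipped with Kafka):

```properties
# log4j.properties: TRACE logs full request/response bodies,
# including the diverging-epoch field in fetch responses
log4j.logger.kafka.request.logger=TRACE, requestAppender
log4j.additivity.kafka.request.logger=false
```

TRACE here is very chatty; switch it back to WARN once the diverging fetch has been captured.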
t; > > > already, was unsure whether it would help, and left it aside for once
> > > then.
> > > > I'll try that immediately when I get back to work.
> > > >
> > > > About snapshots and deserialization - I doubt that the issue is
> caused
>
by
deserialization failures because: when taking another (i.e. at a later
point of time) snapshot of the exact same data, all messages fed into the
input topic pass the pipeline as expected.
Logs of both Kafka and Kafka Streams show no signs of notable issues as far
as I can tell, apart from these (when
Hey Karsten,
There could be several reasons this could happen.
1. Did you check the error logs? There are several reasons why the Kafka
stream app may drop incoming messages. Use exactly-once semantics to limit
such cases.
2. Are you sure there was no error when deserializing the records from
Hi,
thanks for getting back. I'll try and illustrate the issue.
I've got an input topic 'folderTopicName' fed by a database CDC system.
Messages then pass a series of FK left joins and are eventually sent to an
output topic like this ('agencies' and 'documents' being KTables):
Hi,
That sounds worrisome!
Could you please provide us with a minimal example that shows the issue
you describe?
That would help a lot!
Best,
Bruno
On 3/25/24 4:07 PM, Karsten Stöckmann wrote:
Hi,
are there circumstances that might lead to messages silently (i.e. without
any logged
Hi,
are there circumstances that might lead to messages silently (i.e. without
any logged warnings or errors) disappearing from a topology?
Specifically, I've got a rather simple topology doing a series of FK left
joins and notice severe message loss in case the application is fired up
for the
/19/24 4:14 AM, Venkatesh Nagarajan wrote:
Thanks very much for sharing the links and for your important inputs, Bruno!
We recommend using as many stream threads as there are cores on the compute
node where the Kafka Streams client runs. How many Kafka Streams tasks do you
have to distribute over
Hi Sanaa,
I actually ran a migration twice.
First locally just following the procedure described by the official Kafka
documentation https://kafka.apache.org/documentation/#kraft_zk_migration
and then on Kubernetes, because I notice you are talking about StatefulSet.
But in this case I used
aft_controller_id":-1,"kraft_metadata_epoch":-1,"kraft_controller_epoch":-1}
At this point, on a dashboard I have I see that a kafka broker is a
controller and a kraft controller broker is also a controller (although
it's not what I see in zookeeper as shown above). One thing to note
Hi,
I have an unusual situation where I have a cluster running Kafka 3.5.1 in
strimzi where 4 of the __consumer_offset partitions have dropped under min
isr.
Everything else appears to be working fine.
Upon investigating, I've found that the partition followers appear to be
out of sync
the brokers.
Thanks,
Paolo.
On Mon, 18 Mar 2024 at 21:22, Sanaa Syed
wrote:
> Hello,
>
> I've begun migrating some of my Zookeeper Kafka clusters to KRaft. A
> behaviour I've noticed twice across two different kafka cluster
> environments is after provisioning a kraft contro
Thanks very much for sharing the links and for your important inputs, Bruno!
> We recommend using as many stream threads as there are cores on the compute
> node where the Kafka Streams client runs. How many Kafka Streams tasks do you
> have to distribute over the clients?
We use 1vCPU (p
Hello,
I've begun migrating some of my Zookeeper Kafka clusters to KRaft. A
behaviour I've noticed twice across two different kafka cluster
environments is after provisioning a kraft controller quorum in migration
mode, it is possible for a kafka broker to become an active controller
alongside
Hi Vansh,
Great that you want to join our community!
Subscription to the mailing list is self-serve. See details to subscribe
under the following link: https://kafka.apache.org/contact
Thank you for your interest in Apache Kafka!
Best,
Bruno
On 3/15/24 1:59 PM, Vansh Kabra wrote:
Dear
Dear Kafka Users Community,
My name is Vansh Kabra, and I'm reaching out to express my interest in
joining the Kafka Users mailing list (users@kafka.apache.org).
I have been actively working with Kafka in my projects and have found it to
be an invaluable tool for building scalable and reliable
Hi Venkatesh,
As you discovered, in Kafka Streams 3.5.1 there is no stop-the-world
rebalancing.
Static group member is helpful when Kafka Streams clients are restarted
as you pointed out.
> ERROR org.apache.kafka.streams.processor.internals.StandbyTask -
stream-thread [-StreamThrea
Just want to make a correction, Bruno - My understanding is that Kafka Streams
3.5.1 uses Incremental Cooperative Rebalancing which seems to help reduce the
impact of rebalancing caused by autoscaling etc.:
https://www.confluent.io/blog/incremental-cooperative-rebalancing-in-kafka/
Static
before the consumer group becomes stable.
By stop-the-world rebalancing, I meant a rebalancing that would cause the
processing to completely stop when it happens. To avoid this, we use static
group membership as explained by Matthias in this presentation:
https://www.confluent.io/kafka-summit-lon19
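Concretely, static membership comes down to one per-instance setting passed through to the embedded consumer (values are illustrative; the id must be unique per instance and stable across restarts):

```properties
# per Kafka Streams instance
group.instance.id=my-app-instance-0
# give a restarting instance time to come back before its tasks move
session.timeout.ms=45000
```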
m being sent. But after some
> investigation into the source code of KafkaProducer and Sender, I think
> closing kafka producer in callback is not 100% reliable in such cases. For
> example, If you set max.in.flight.requests.per.connection to 5, and you
> sent 5 batches 1, 2, 3, 4, 5,
Hi.
By default, Kafka returns the ack without waiting for fsync to disk. But you
can change this behavior via the log.flush.interval.messages config.
For data durability, Kafka mainly relies on replication instead.
> then there is potential for message loss if the node crashes before
On the crashed n
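The flush knobs referred to above are broker (or per-topic) settings; by default both are effectively unset, leaving flushing to the OS. Forcing fsync is possible but costs throughput (illustrative values):

```properties
# server.properties: fsync after every message, or at most every second
log.flush.interval.messages=1
log.flush.interval.ms=1000
```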
I am trying to understand when does Kafka signal to the producer that the
message was successfully accepted into Kafka.
Does Kafka:
1) Write to the pagecache of the node's OS and then return back an ACK ?
If so, then there is potential for message loss if the node crashes before
fsync to disk
the KafkaProducer inside the send callback
can prevent more extra records from being sent. But after some
investigation into the source code of KafkaProducer and Sender, I think
closing kafka producer in callback is not 100% reliable in such cases. For
example, If you set max.in.flight.requests.p
What do you mean by stop-the-world rebalances?
Best,
Bruno
[1]
https://github.com/apache/kafka/blob/f0087ac6a8a7b1005e9588e42b3679146bd3eb13/clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java#L882C39-L882C66
On 3/13/24 2:34 AM, Venkatesh Nagarajan wr
Hi WIlliam,
I see from your example that you close the kafka producer in the send
loop, based on the content of sendException that is used in the callback of
the KafkaProducer send.
Since your send loop is a different thread than the KafkaProducer uses to
send, you will encounter race conditions
: Kafka Streams 3.5.1 based app seems to get stalled
Thanks very much for your important inputs, Matthias.
I will use the default METADATA_MAX_AGE_CONFIG. I set it to 5 hours when I saw
a lot of such rebalancing related messages in the MSK broker logs:
INFO [GroupCoordinator 2]: Preparing
, that will be very helpful.
Thank you very much.
Kind regards,
Venkatesh
From: Matthias J. Sax
Date: Tuesday, 12 March 2024 at 1:31 pm
To: users@kafka.apache.org
Subject: [EXTERNAL] Re: Kafka Streams 3.5.1 based app seems to get stalled
Without detailed logs (maybe even DEBUG) hard to say
elivery. I previously thought idempotent producer could be a
solution, but I later found that idempotent producer could only guarantee
ordering when kafka is retrying producer batch internally.
Thanks and regards,
William
Greg Harris wrote on Tue, Mar 12, 2024 at 00:50:
> Hi William,
>
> From your descrip
, and a metadata
refresh does not trigger a rebalance.
-Matthias
On 3/10/24 5:56 PM, Venkatesh Nagarajan wrote:
Hi all,
A Kafka Streams application sometimes stops consuming events during load
testing. Please find below the details:
Details of the app:
* Kafka Streams Version: 3.5.1
ble, or would you want that record
to also be rejected?
This sounds like a use-case for transactional producers [1] utilizing
Exactly Once delivery. You can start a transaction, emit records, have
them persisted in Kafka, perform some computation afterwards, and then
decide whether to commit or ab
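A sketch of the producer side of that transactional pattern (property values are illustrative):

```properties
# transactional producer configuration
transactional.id=my-app-tx-0
enable.idempotence=true
acks=all
```

The application then brackets its sends with initTransactions()/beginTransaction() and finishes each unit of work with commitTransaction() or abortTransaction().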
Hi Haruki,
Thanks for your answer.
> I still don't get why you need this behavior though
The reason is I have to ensure message ordering per partition strictly.
Once there is an exception in the producer callback, it indicates that the
exception is not a retryable exception (from kafka produce
Hi.
> I immediately stop sending more new records and stop the kafka
producer, but some extra records were still sent
I still don't get why you need this behavior though, as long as you set
max.in.flight.requests.per.connection to greater than 1, it's impossible to
avoid this beca
Hi all,
I am facing a problem when I detect an exception in kafka producer
callback, I immediately stop sending more new records and stop the kafka
producer, but some extra records were still sent.
I found a way to resolve this issue: setting
max.in.flight.requests.per.connection to 1 and closing
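The behaviour described here can be sketched with a toy model (plain Python, not the real producer internals): with several batches in flight, later batches are already on the wire by the time a callback observes the first failure, so "stop on first error" cannot hold them back; a window of 1 can.

```python
def send_all(batches, max_in_flight):
    """Toy model: a whole in-flight window is sent before any callback
    can observe a failure of an earlier batch in that window."""
    delivered, failed = [], None
    i = 0
    while i < len(batches) and failed is None:
        window = batches[i:i + max_in_flight]
        for b in window:          # the whole window is on the wire together
            if b == "bad":
                failed = b        # callback fires; caller stops sending
            else:
                delivered.append(b)
        i += len(window)
    return delivered, failed

batches = ["bad", "b2", "b3", "b4", "b5", "b6"]

# with a window of 5, batches 2-5 go out despite batch 1 failing
sent, err = send_all(batches, max_in_flight=5)
assert err == "bad" and sent == ["b2", "b3", "b4", "b5"]

# with max.in.flight = 1, nothing after the failed batch is sent
sent1, err1 = send_all(batches, max_in_flight=1)
assert err1 == "bad" and sent1 == []
```

This is only the ordering argument in miniature; the real client adds retries, batching, and connection-level detail on top.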
Hi All,
I'm working on setting up RBAC for Apache Kafka using Ranger. Right now,
I'm facing an authorization issue while testing the console producer script
in Kafka. I need help in properly configuring Kafka with Ranger. Below are
the steps I performed.
- I successfully installed the ranger
Hi all,
A Kafka Streams application sometimes stops consuming events during load
testing. Please find below the details:
Details of the app:
* Kafka Streams Version: 3.5.1
* Kafka: AWS MSK v3.6.0
* Consumes events from 6 topics
* Calls APIs to enrich events
* Sometimes
>
> The reason as I understand it is: Source connectors are responsible
> for importing data to Kafka. If an error occurs during this process,
> then writing useful information to a dead letter queue about the
> failure is at least as difficult as importing the record correctly.
>
>
Hi Yeikel,
Thanks for your question. It certainly isn't clear from the original
KIP-298, the attached discussion, or the follow-up KIP-610 as to why
the situation is asymmetric.
The reason as I understand it is: Source connectors are responsible
for importing data to Kafka. If an error occurs
to
either fail the connector or employ logging to track source failures.
It seems that for now, I'll need to apply the transformations as a sink and
possibly reinsert them back to Kafka for downstream consumption, but that
sounds unnecessary
[1]https://cwiki.apache.org/confluence/plugins/servlet
Hey there folks!
My team is working on migrating application code that handles wire
formatting of messages: serialization, deserialization, encryption, and
signature. We are moving to custom Ser/Des classes implementing this
interface
https://kafka.apache.org/24/javadoc/org/apache/kafka/common
e docs in different branches. Really appreciate
> your hard work on this one.
>
> Thank you all contributors! Your contributions are what make the Apache Kafka
> community awesome <3
>
> There are many impactful changes in this release but the one closest to my
> heart is https:/
Thank you Stanislav for running the release, especially fixing the whole
mess with out of sync site docs in different branches. Really appreciate
your hard work on this one.
Thank you all contributors! Your contributions are what make the Apache Kafka
community awesome <3
There are many impact
Thanks Stan and all contributors for the release!
Best,
Bruno
On 2/27/24 7:01 PM, Stanislav Kozlovski wrote:
The Apache Kafka community is pleased to announce the release of
Apache Kafka 3.7.0
This is a minor release that includes new features, fixes, and
improvements from 296 JIRAs
Thanks for running this release, Stanislav! And thanks to all the
contributors who helped implement all the bug fixes and new features we got
to put out this time around.
On Tue, Feb 27, 2024, 13:03 Stanislav Kozlovski <
stanislavkozlov...@apache.org> wrote:
> The Apache Kafka
How do I replace producer() when moving to the latest Kafka client?
The Apache Kafka community is pleased to announce the release of
Apache Kafka 3.7.0
This is a minor release that includes new features, fixes, and
improvements from 296 JIRAs
An overview of the release and its notable changes can be found in the
release blog post:
https://kafka.apache.org/blog
Case closed, behaviour is actually as expected: the source topic contains
multiplied data that gets propagated into the join just as it should. I'm
leveraging a stream processor for deduplication now.
Best wishes
Karsten
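The deduplication idea Karsten mentions can be sketched as a keyed processor with a seen-store (plain Python stand-in for a Kafka Streams state store; all names are made up):

```python
class DedupProcessor:
    """Drops records whose (key, value) was already forwarded.
    A real implementation would back `seen` with a persistent,
    changelogged state store and expire old entries over time."""

    def __init__(self):
        self.seen = {}   # key -> last forwarded value
        self.out = []    # stand-in for context.forward(...)

    def process(self, key, value):
        if self.seen.get(key) != value:
            self.seen[key] = value
            self.out.append((key, value))

p = DedupProcessor()
for k, v in [("f1", "a"), ("f1", "a"), ("f1", "b"), ("f2", "a")]:
    p.process(k, v)

# the repeated ("f1", "a") is dropped; everything else passes through
assert p.out == [("f1", "a"), ("f1", "b"), ("f2", "a")]
```

Note this only suppresses consecutive duplicates per key; duplicates separated by a different value for the same key are forwarded again, which is usually the desired CDC behaviour.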
Vikram Singh wrote on Fri, Feb 23, 2024, 12:13:
> +Ajit Kharpude
>