That's correct. And each would need to use a different `transactional.id`.
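As a sketch of what that looks like in configuration (the id values below are illustrative, not from the thread), each KafkaProducer instance gets its own Properties with a distinct transactional.id:

```
# Properties for thread 1's KafkaProducer instance
transactional.id=my-app-txn-1

# Properties for thread 2's KafkaProducer instance (a separate object)
transactional.id=my-app-txn-2
```

Two threads sharing one producer instance, or two producers sharing one transactional.id, would fence each other off.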
-Matthias
On 11/14/19 11:17 AM, Anindya Haldar wrote:
> Thanks for the information. Does that mean that each producer thread, in case
> it wants to have its own transactions, should use its own instance of
KafkaProducer?
Just restarting the broker didn't help. I deleted a couple of random partitions from the data directory which were under-replicated. I also noticed that their timestamps were 4 days old. After deleting them and restarting the broker, all of the other topics got synced up.
Maybe it was the case of offlin
Hi,
I want to know what Kafka server and client versions you're using. Also,
you said you have 2000 consumers; does that mean 2000 consumer groups, or
2000 consumers in one consumer group?
aravind s wrote on Fri, Nov 15, 2019 at 3:12 AM:
> Hi,
>
> We have a use-case where there are close to 40 prod
What change did you observe in the broker latency metric 'TotalTimeMs'?
-Original Message-
From: aravind s
Sent: Wednesday, November 13, 2019 11:03 PM
To: users@kafka.apache.org
Subject: Kafka Broker scaling tips for high number of consumers for a single
topic
Hi,
We have a use-case wh
You can use query_watermark_offsets() to get high watermark of the topic
partition to use as max offset.
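A minimal sketch of that idea (the helper name is mine; in librdkafka the low/high watermarks would come from rd_kafka_query_watermark_offsets(), or Consumer.get_watermark_offsets() in the Python client):

```python
def clamp_to_watermarks(requested: int, low: int, high: int) -> int:
    """Clamp a requested seek offset into the valid [low, high] watermark range.

    `low` and `high` are the partition's low and high watermark offsets as
    returned by query_watermark_offsets(); this pure helper only shows the
    clamping logic, so seeking never goes out of range.
    """
    return max(low, min(requested, high))
```

Seeking to the clamped value avoids the out-of-range error while getting as close as possible to the requested position.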
Regards,
Koushik
-Original Message-
From: Aurelien DROISSART
Sent: Thursday, August 29, 2019 5:32 AM
To: users@kafka.apache.org
Subject: librdkafka : seek() to offset out of range
H
Thanks for the information. Does that mean that each producer thread, in case
it wants to have its own transactions, should use its own instance of
KafkaProducer?
Sincerely,
Anindya Haldar
Oracle Responsys
> On Nov 13, 2019, at 11:31 PM, Matthias J. Sax wrote:
>
> That is not possible. A pro
Hi,
We have a use-case where there are close to 40 producers to a topic with 2
replicas, and around 2000 consumers for this topic. We have seen that
producer latency goes up by 3 times when the consumer count grows from 500
to 2000. We changed the following properties. The machine has sufficient memory. The
Hi all,
I've prepared a preliminary blog post about the upcoming Apache Kafka 2.4.0
release.
Please take a look and let me know if you want to add/modify details.
Thanks to all who contributed to this blog post.
https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache1
Thanks,
M
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 2.4.0.
There is work in progress on a couple of blocker PRs. I am publishing RC0 to
avoid further delays in testing the release.
This release includes many new features, including:
- Allow co
I've set up a POC using KafkaStreams with microservices consuming and
producing from/to topics. In the beginning I hadn't thought about
partition strategy, and so I was using the DefaultPartitioner for producer
partition assignments. My messages have keys (I use these for
forking/joining), and the
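On the DefaultPartitioner point above: for messages with a non-null key, Kafka's default partitioner hashes the serialized key (murmur2) modulo the partition count, so equal keys always land on the same partition. A stand-in sketch of that property (using crc32 rather than Kafka's actual murmur2, purely for illustration):

```python
import zlib

def partition_for_key(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition deterministically.

    Stand-in for Kafka's DefaultPartitioner: Kafka hashes the serialized
    key with murmur2; crc32 here only demonstrates the same guarantee,
    namely that equal keys always map to the same partition.
    """
    return zlib.crc32(key) % num_partitions
```

This is why keyed messages preserve per-key ordering without any custom partitioner, as long as the partition count doesn't change.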
Hi,
Thank you! That worked. I used Gradle 5.6.4 and it worked, just out of the
box.
Let's see if gradlew installAll will work :)
--
Miguel Silvestre
On Thu, Nov 14, 2019 at 10:41 AM Bruno Cadonna wrote:
> Hi Miguel,
>
> I build Kafka with Gradle 5.2.1 and at the end of the build I get the
> fo
Hi Miguel,
I build Kafka with Gradle 5.2.1 and at the end of the build I get the
following message:
"Deprecated Gradle features were used in this build, making it
incompatible with Gradle 6.0."
So, maybe you ran into one of those incompatibilities.
Try compiling with a 5.x version of Gradle.
Be
Hi,
I'm on macOS Mojave 10.14.6 but when I run gradle (I'm using version 6.0) I
get the following error:
What can I do?
FAILURE: Build failed with an exception.
* Where:
Build file '/Users/miguel.silvestre/Projects/others/kafka/build.gradle'
line: 480
* What went wrong:
A problem occurred eval
Why that? Just because there is explicit documentation?
@Debraj: Kafka Streams can be deployed as a regular Java application.
Hence, any tutorial on how to run a Java application on YARN should help.
-Matthias
On 11/11/19 10:33 AM, Ryanne Dolan wrote:
> Consider using Flink, Spark, or Samza in