Hi Martin,
That is a good point. In fact, in the coming release we have made such
repartition topics really "transient" by periodically purging them with the
embedded admin client, so we can actually set their retention to -1:
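For reference, the purge described above can be sketched against the public AdminClient#deleteRecords API (this is an illustration, not the actual Streams internals; the topic name, partition, and offset below are placeholders, and running it requires a live broker):

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class RepartitionPurge {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Placeholder repartition topic-partition; in Streams the purge
            // offset comes from the offsets already committed by the consumer,
            // so only fully-processed records are deleted.
            TopicPartition tp = new TopicPartition("my-app-repartition", 0);
            long committedOffset = 42L; // hypothetical committed offset
            admin.deleteRecords(Map.of(tp, RecordsToDelete.beforeOffset(committedOffset)))
                 .all()
                 .get();
        }
    }
}
```

Because the purge trails the committed offsets, the repartition topic stays small without any retention-based deletion racing against slow consumers.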
On Tue, Jan 30, 2018 at 1:38 PM, David Espinosa wrote:
> Hi Andrey,
> My topics are replicated with a replication factor equal to the number of
> nodes, 3 in this test.
> Didn't know about KIP-227.
> The problems I see at 70k topics coming from ZK are related to any
>
You need to write some custom code using Interactive Queries and
implement a scatter-gather pattern.
Basically, you need to run the range query on each instance and then merge
all the partial results.
https://kafka.apache.org/10/documentation/streams/developer-guide/interactive-queries.html
You can also
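The merge step of that scatter-gather can be sketched with plain JDK collections standing in for each instance's local range-query result (how you reach the other instances — e.g. over HTTP using the host info from KafkaStreams#allMetadataForStore — is application code and is not shown here):

```java
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class ScatterGather {
    // Merge the partial range-query results returned by each Streams instance.
    // Each key lives on exactly one partition, so keys are unique across
    // instances and a plain putAll suffices; TreeMap restores global key order.
    public static SortedMap<String, Long> merge(List<SortedMap<String, Long>> partials) {
        SortedMap<String, Long> merged = new TreeMap<>();
        for (SortedMap<String, Long> part : partials) {
            merged.putAll(part);
        }
        return merged;
    }

    public static void main(String[] args) {
        // Pretend these came back from two instances' local store.range(...) calls.
        SortedMap<String, Long> a = new TreeMap<>();
        a.put("apple", 3L);
        a.put("cherry", 7L);
        SortedMap<String, Long> b = new TreeMap<>();
        b.put("banana", 5L);
        System.out.println(merge(List.of(a, b))); // {apple=3, banana=5, cherry=7}
    }
}
```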
Hi Andrey,
My topics are replicated with a replication factor equal to the number of
nodes, 3 in this test.
Didn't know about KIP-227.
The problems I see at 70k topics coming from ZK are related to any
operation where ZK has to retrieve topic metadata. Just listing topics at
50k or 60k you
Hi,
I'm trying to write an external tool to monitor consumer lag on Apache
Kafka.
For this purpose, I'm using the kafka-consumer-groups tool to fetch the
consumer offsets.
When using this tool, partition assignments seem to be temporarily
unavailable during the creation of a new topic, even if
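For context, the invocation in question looks like the following (the group name and broker address are placeholders); it prints, per partition, the current offset, log-end offset, and the lag between them:

```shell
kafka-consumer-groups --bootstrap-server localhost:9092 \
  --describe --group my-consumer-group
```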
Hi Guozhang,
Thanks very much for your reply. I am inclined to consider this a bug, since
Kafka Streams in the default configuration is likely to run into this problem
while reprocessing old messages, and in most cases the problem wouldn't be
noticed (since there is no error -- the job just
I have Kafka 10.
I have a basic question: what determines when a Kafka topic marked for
deletion actually gets deleted?
Today I marked a topic for deletion, and it got deleted immediately
(possibly because the topic had not been used for the last few months?).
In earlier instances, I had to wait for some
Your code for setting the handler looks right to me.
One more thing to double-check: have you turned on DEBUG-level metrics
recording for this metric? Note that skippedDueToDeserializationError is
recorded at DEBUG level, so you need to set metrics.recording.level
accordingly (the default is INFO). Lower
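In code, assuming a standard Streams Properties setup, that one configuration change would look like this fragment:

```java
Properties props = new Properties();
// skippedDueToDeserializationError is only recorded at DEBUG level;
// the default metrics recording level is INFO.
props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "DEBUG");
```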
Alternatively, you can dump out the consumer offsets using a command like
this:

kafka-console-consumer --topic __consumer_offsets --bootstrap-server localhost:9092 \
  --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"
On Tue, Jan 30, 2018 at 8:38 AM, Subhash Sriram
Sorry, I attached the wrong server.properties file. The right one is in
the attachment now.
Regards.
On 01/30/2018 02:59 PM, Zoran wrote:
Hi,
I have three servers:
blade1 (192.168.112.31),
blade2 (192.168.112.32) and
blade3 (192.168.112.33).
On each of the servers kafka_2.11-1.0.0 is
Hi,
I have three servers:
blade1 (192.168.112.31),
blade2 (192.168.112.32) and
blade3 (192.168.112.33).
On each of the servers, kafka_2.11-1.0.0 is installed.
On blade3 (192.168.112.33:2181), ZooKeeper is installed as well.
I have created a topic repl3part5 with the following line:
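The actual command is cut off in the archive; judging by the topic name and the ZooKeeper address given above, a typical Kafka 1.0-era invocation for 5 partitions with replication factor 3 would look something like this (an assumption, not the original command):

```shell
kafka-topics.sh --create --zookeeper 192.168.112.33:2181 \
  --replication-factor 3 --partitions 5 --topic repl3part5
```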
Guozhang,
Here is the snippet.
private Properties getProperties() {
    Properties p = new Properties();
    ...
    p.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, kafkaConfig.getString("streamThreads"));
    p.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
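The value on that last line is truncated in the archive; presumably it names one of the two built-in handlers (this completion is an assumption, not the original code):

```java
// Built-in options shipped with Kafka Streams:
//   LogAndFailExceptionHandler (default) - stop processing on a bad record
//   LogAndContinueExceptionHandler       - log, skip the record, keep going
p.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
      LogAndContinueExceptionHandler.class.getName());
```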