Ok, I have gone through the docs and looked at the log.cleaner properties.
The use case we have is that, say, for a changelog topic we have at a point
in time
(k, v1).
When for the same key we now have
(k, v2), we really don't want (k, v1) to be retained; we want it cleaned up as
soon as possible.
So I see
Hi,
Ok, so basically what I understand is that there is no global offset
maintained for the changelog topic at the broker level.
Every local state store maintains its offset in a local checkpoint file.
And in order to make sure the state store rebuilds or builds its state by
reading from the changelog topic
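As an aside, one way to see where a state store restores from in the changelog is a restore listener. A minimal sketch, assuming a Streams version that exposes KafkaStreams#setGlobalStateRestoreListener (newer than the 0.10.x line discussed here); the class name is illustrative:

    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.streams.processor.StateRestoreListener;

    public class LoggingRestoreListener implements StateRestoreListener {

        @Override
        public void onRestoreStart(TopicPartition topicPartition, String storeName,
                                   long startingOffset, long endingOffset) {
            // startingOffset comes from the local checkpoint file (or the beginning
            // of the changelog if no checkpoint exists yet).
            System.out.printf("Restoring %s from %s, offsets %d..%d%n",
                    storeName, topicPartition, startingOffset, endingOffset);
        }

        @Override
        public void onBatchRestored(TopicPartition topicPartition, String storeName,
                                    long batchEndOffset, long numRestored) {
            // Called after each restored batch; useful for progress logging.
        }

        @Override
        public void onRestoreEnd(TopicPartition topicPartition, String storeName,
                                 long totalRestored) {
            System.out.printf("Restored %d records into %s%n", totalRestored, storeName);
        }
    }

    // Registered on the KafkaStreams instance before start():
    //   streams.setGlobalStateRestoreListener(new LoggingRestoreListener());
    //   streams.start();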
In the JAAS config, the Client section is used to authenticate a SASL
connection with ZooKeeper.
It is necessary to have the same principal name across all brokers.
http://kafka.apache.org/documentation.html#security_jaas_broker
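For reference, the Client section of a broker JAAS file typically looks like the sketch below; the keytab path and principal are placeholders, not your actual values:

    Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="/etc/security/keytabs/kafka_server.keytab"
        principal="kafka/kafka1.example.com@EXAMPLE.COM";
    };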
On Sat, Apr 1, 2017 at 5:50 AM, Shrikant Patel wrote:
Should this timeout be less than the max poll interval value? If yes, then,
generally speaking, what should the ratio between the two be, or the range
for this timeout value?
Thanks
Sachin
On 1 Apr 2017 04:57, "Matthias J. Sax" wrote:
Yes, you can increase
Hi All,
We are using SASL for authentication between Kafka and ZK. We followed -
https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
We have 3 Kafka nodes; on each node we have
principal="kafka/server_no.xxx@xxx.com". So
On the first node, in
Yes, you can increase ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG
-Matthias
On 3/31/17 11:32 AM, Sachin Mittal wrote:
> Hi,
> So I have added the config ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE
> and the NotLeaderForPartitionException is gone.
>
> However we see a new exception
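For reference, these internal producer settings can be raised from the Streams configuration. A minimal sketch, assuming StreamsConfig.producerPrefix() is available in the version in use; the application id, broker address, and timeout value are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsProducerTuning {
        static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");  // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
            // Retry internal producer sends on transient errors (e.g. leader changes).
            props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG),
                      Integer.MAX_VALUE);
            // Give brokers more time to acknowledge a request under heavy load.
            props.put(StreamsConfig.producerPrefix(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG),
                      60000);
            return props;
        }
    }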
That's a topic config you need to set at the broker side:
See config parameter `log.cleaner.*` in
http://kafka.apache.org/documentation/#brokerconfigs
-Matthias
On 3/31/17 11:49 AM, Sachin Mittal wrote:
> Hi,
> I have noticed that many times change log topics don't get compacted. The
> segment
1. The whole log will be read.
2. It will read all the key-value pairs. However, the store will contain
only the latest record for each key after state recovery has finished.
For both (1) and (2): note that changelog topics are compacted; thus,
it will not read everything since you started your
Hi,
I have noticed that many times change log topics don't get compacted. The
segment log file is always 1 GB.
So I would like to know how and when compaction comes into play.
Is there a way we can get the topic compacted, say, trigger compaction after
x seconds of a given message or a given
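One thing worth noting: the log cleaner never compacts the active segment, and with the default segment.bytes of 1 GB a low-volume changelog may never roll it. Topic-level overrides can make segments roll sooner and let the cleaner kick in more eagerly. A sketch with a placeholder topic name and example values:

    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type topics --entity-name my-app-mystore-changelog \
      --add-config segment.ms=600000,min.cleanable.dirty.ratio=0.01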
Hi,
There are two ways to restart a streams application:
1. executing streams.cleanUp() before streams.start()
This cleans up the local state store.
2. Just by calling streams.start()
What are the differences between the two?
As I understand, in the first case it will try to create the local state store by
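A minimal sketch of the two variants, written against the 0.10.x-era KStreamBuilder API; the application id and broker address are placeholders and the topology is omitted:

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    public class RestartExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "restart-example");   // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            KStreamBuilder builder = new KStreamBuilder();  // topology definition omitted

            KafkaStreams streams = new KafkaStreams(builder, props);

            // Variant 1: wipe the application's local state directory first, so all
            // stores are rebuilt from their changelog topics on startup.
            streams.cleanUp();
            streams.start();

            // Variant 2: call streams.start() alone -- existing local stores are reused
            // and only records newer than the locally checkpointed offsets are replayed.
        }
    }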
Hi,
So I have added the config ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE
and the NotLeaderForPartitionException is gone.
However we see a new exception especially under heavy load:
org.apache.kafka.streams.errors.StreamsException: task [0_1] exception
caught when producing
at
username comes from authenticated clients;
client.id can be assigned by any client (no authentication required).
It's hard to enforce a quota on a client.id when the clients can just change
the code to use a different client.id, hence the recent enhancement to add
user quotas based on the
Cross-posted twice (including an answer):
https://github.com/facebook/rocksdb/issues/2071
http://stackoverflow.com/questions/43140522/exception-in-thread-streamthread-1-java-lang-unsatisfiedlinkerror-cannot-load
> I don't understand why I need to rebuild this. I downloaded the binaries
> which
That sounds like a lot -- even if brokers can handle quite some load. How
many brokers do you have?
Main question: why do you need a consumer per partition? Is your
processing so expensive that one consumer cannot handle multiple
partitions?
-Matthias
On 3/31/17 5:02 AM, Laxmi Narayan wrote:
Quotas can be configured for (user, client-id), user, and client-id groups.
user is the principal name of the client's authenticated connection. With a
user quota,
all clients with the same principal/user name will share the same quota.
client-id is the logical name given to client(s). With a client quota, all
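For reference, quotas at each of those levels can be set with kafka-configs.sh; the entity names and byte rates below are placeholders:

    # client-id quota (no authentication needed)
    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type clients --entity-name clientA \
      --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152'

    # user quota (requires an authenticated principal)
    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type users --entity-name user1 \
      --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152'

    # (user, client-id) quota
    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type users --entity-name user1 --entity-type clients --entity-name clientA \
      --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152'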
Thanks for the reply. I had a few more questions regarding quotas.
What is the difference between the user quota and the client quota? How can
we assign a user-id to producers?
Also, is it possible to assign a client-id to a topic (or) partitions
belonging to a topic? If yes, how can we do that?
Thanks,
You can pass the client-id using the --producer-props option,
e.g.: --producer-props client.id=id1
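A fuller invocation might look like this; the topic, record counts, and broker address are placeholders:

    bin/kafka-producer-perf-test.sh --topic test-topic \
      --num-records 1000000 --record-size 100 --throughput -1 \
      --producer-props bootstrap.servers=localhost:9092 client.id=id1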
On Fri, Mar 31, 2017 at 9:32 PM, Archie wrote:
> I know that quotas are based on client-id
>
> Basically I want to run the kafka-producer-perf-test with a particular
> client id
I know that quotas are based on client-id
Basically I want to run the kafka-producer-perf-test with a particular
client id to test whether the quotas work properly.
My question is: how can I assign a client-id for a particular producer (or)
partition?
Thanks,
Archie
I am using kafka_2.11-0.10.2.0. I have downloaded the binaries directly
from Apache, which contain rocksdbjni-5.0.1.jar. I am developing a Kafka
Streams application. It uses RocksDB internally. However,
during my application run, I get the error below.
Exception in thread
Hi,
Is there any performance downside to creating so many consumers?
I mean, literally, I am going to create at least 7k connections in that case; I
have nearly 7k partitions for a given topic.
Keep learning keep moving .
On Fri, Mar 31, 2017 at 12:48 PM, Matthias J. Sax
I don't know what the problem is, but have you looked through the logs? If they
don't suggest a resolution, try posting them here.
From: Rafael Telles
Sent: 29 March 2017 14:53:27
To: users@kafka.apache.org
Subject: Kafka running
Great : ) . Thank you very much for the answer.
Walid.
2017-03-31 0:11 GMT+02:00 Matthias J. Sax :
> I don't see any problem with this.
>
> You might want to increase the window retention time though. It's configured
> for each window individually (the default is 1 day IIRC).
>
>
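A minimal sketch of setting the retention per window, assuming the windowing API of that era where retention is set via until(); the window size and retention values are placeholders:

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.streams.kstream.TimeWindows;

    // 5-minute windows, retained for 7 days before the window store drops them.
    TimeWindows windows = TimeWindows.of(TimeUnit.MINUTES.toMillis(5))
            .until(TimeUnit.DAYS.toMillis(7));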
Hi,
I'm trying to implement tracing (http://opentracing.io/) for the Kafka Java
client.
For that I need to send some metadata from producer to consumer.
I use the Key for that (my implementation:
https://github.com/malafeev/opentracing-java-kafka).
My custom partitioner skips the metadata when it calculates
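For illustration only (this is not the linked implementation): a hypothetical partitioner that hashes just the business part of a composite String key, assuming a key of the form "businessKey|traceMetadata", so the tracing metadata does not affect partition assignment:

    import java.util.Map;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.utils.Utils;

    public class MetadataSkippingPartitioner implements Partitioner {

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            // Assumption: key is "businessKey|traceMetadata"; hash only the business
            // part so records land on the same partition as before tracing was added.
            String businessKey = ((String) key).split("\\|", 2)[0];
            int numPartitions = cluster.partitionsForTopic(topic).size();
            return Utils.toPositive(Utils.murmur2(businessKey.getBytes())) % numPartitions;
        }

        @Override
        public void configure(Map<String, ?> configs) { }

        @Override
        public void close() { }
    }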
You need to create a KafkaConsumer per thread.
-Matthias
On 3/30/17 10:51 PM, Laxmi Narayan wrote:
> Hi ,
>
> I was thinking of listening to each partition with a separate thread in Kafka.
> But I get an error saying:
>
>
>
>
> *org.apache.kafka.clients.consumer.KafkaConsumer@383ad023KafkaConsumer is
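As noted above, KafkaConsumer is not safe for multi-threaded access, so each thread needs its own instance. A minimal sketch with one consumer per thread, each assigned a single partition; the topic name, thread count, and broker address are placeholders (with ~7k partitions, letting one consumer handle many partitions is usually preferable):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PerPartitionConsumers {
        public static void main(String[] args) {
            for (int partition = 0; partition < 4; partition++) {  // 4 is illustrative
                final int p = partition;
                new Thread(() -> {
                    Properties props = new Properties();
                    props.put("bootstrap.servers", "localhost:9092");
                    props.put("group.id", "per-partition-group");
                    props.put("key.deserializer",
                            "org.apache.kafka.common.serialization.StringDeserializer");
                    props.put("value.deserializer",
                            "org.apache.kafka.common.serialization.StringDeserializer");
                    // One KafkaConsumer per thread: the client must not be shared
                    // across threads.
                    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                        consumer.assign(Collections.singletonList(new TopicPartition("my-topic", p)));
                        while (true) {
                            ConsumerRecords<String, String> records = consumer.poll(1000);
                            for (ConsumerRecord<String, String> record : records) {
                                System.out.printf("partition=%d offset=%d%n", p, record.offset());
                            }
                        }
                    }
                }).start();
            }
        }
    }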