I just want to say that I have solved the situation by deleting
zookeeper's and kafka's data directories and setting
offsets.topic.replication.factor=3 in kafka server.properties file.
After that, __consumer_offsets topic is replicated and everything works
as expected.
I hope this will help
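For reference, the change described above amounts to a one-line server.properties fragment (a sketch; note this setting only takes effect when the internal __consumer_offsets topic is first created, which is why wiping the data directories was needed):

```properties
# server.properties (broker config)
# Create the internal __consumer_offsets topic with 3 replicas.
offsets.topic.replication.factor=3
```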
Appreciate your efforts.
On 2 Feb 2018 2:37 p.m., "Zoran" wrote:
> I just want to say that I have solved the situation by deleting
> zookeeper's and kafka's data directories and setting
> offsets.topic.replication.factor=3 in kafka server.properties file.
>
> After that, __consumer_offsets topic i
Hello Experts, we have a Kafka setup running for our analytics pipeline.
Below is the broker config:
max.message.bytes = 67108864
replica.fetch.max.bytes = 67108864
zookeeper.session.timeout.ms = 7000
replica.socket.timeout.ms = 3
offsets.commit.timeout.ms = 5000
request.timeout.ms = 4000
Hi Guozhang,
You are right. I overlooked the fact that skippedDueToDeserializationError
is recorded as DEBUG.
That was it. Now that I get it, it feels like overkill to set the metrics
level to DEBUG just for this!
Thanks for your time!
Srikanth
On Tue, Jan 30, 2018 at 10:56 PM, Guozhang Wang wr
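As a reference for the point above, here is a minimal sketch of enabling DEBUG-level metrics in a Streams app. The property key `metrics.recording.level` is the real StreamsConfig key; the application id and bootstrap servers are placeholder values:

```java
import java.util.Properties;

// Sketch: enabling DEBUG metrics recording for a Kafka Streams app.
// skippedDueToDeserializationError is recorded only at DEBUG level;
// the default metrics recording level is INFO.
public class StreamsMetricsLevel {
    static Properties debugMetricsProps() {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");    // placeholder
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("metrics.recording.level", "DEBUG");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            debugMetricsProps().getProperty("metrics.recording.level"));
    }
}
```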
Hi,
My name is Matan Givoni and I am a team leader in a small startup company.
We are starting development on a cloud-based solution for multimedia and
telemetry streaming applications, and we aren’t sure if Kafka is the right tool
for our use case.
We need to create a client side application
We have a DataDog integration showing some metrics, and for one of our
clusters the above two
values are > 0 and highlighted in red.
What's the usual remedy (Confluent Platform, OSS version)?
Thanks
This means either the brokers are not healthy (bad hardware) or that the
replication fetchers can't keep up with the rate of incoming messages.
If the latter, you need to figure out where the latency bottleneck is and
what your latency SLAs are.
Common sources of latency bottlenecks:
- network h
Thanks Jeff!
On Fri, Feb 2, 2018 at 11:58 AM, Jeff Widman wrote:
> This means either the brokers are not healthy (bad hardware) or that the
> replication fetchers can't keep up with the rate of incoming messages.
>
> If the latter, you need to figure out where the latency bottleneck is and
> wha
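For what it's worth, "replication fetchers can't keep up" shows up as under-replicated partitions: the in-sync replica set (ISR) is smaller than the assigned replica set. A toy illustration of that condition (hypothetical helper, not a Kafka API):

```java
import java.util.List;

// Sketch: a partition is under-replicated when its in-sync replica set (ISR)
// is smaller than its assigned replica set.
public class UnderReplicationCheck {
    static boolean isUnderReplicated(List<Integer> replicas, List<Integer> isr) {
        return isr.size() < replicas.size();
    }

    public static void main(String[] args) {
        // Replicas on brokers 1,2,3 but only 1,2 in sync -> under-replicated.
        System.out.println(isUnderReplicated(List.of(1, 2, 3), List.of(1, 2)));
        // Full ISR -> healthy.
        System.out.println(isUnderReplicated(List.of(1, 2, 3), List.of(1, 2, 3)));
    }
}
```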
I need to get messages from a topic in one Kafka cluster, transform the message
payload, and put the messages into topics in another Kafka cluster. Is it
possible to do this with Kafka Streams? I don’t see how I can configure the
stream to use one cluster for the input and another cluster for ou
Hello.
We are a .NET shop and were wondering if you are planning Streams API support
for .NET.
If not, where can I find documentation on the REST API around KSQL mentioned in
this ticket:
https://github.com/confluentinc/confluent-kafka-dotnet/issues/344
Finally, if none of the above, then how c
Kafka Streams only works with a single cluster.
Thus, you would need to either transform the data first and replicate
the output topic to the target cluster, or replicate first and transform
within the target cluster.
Note, for the "intermediate" topic you need, you can set a low retention
time to
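A sketch of the first option (transform in the source cluster, write to an intermediate topic, then mirror that topic to the target cluster). Topic names, serdes, and the transformation are placeholders, and the cross-cluster copy is assumed to be done by a separate tool such as MirrorMaker:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class TransformThenMirror {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "transform-app"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "source-cluster:9092");                                // placeholder

        StreamsBuilder builder = new StreamsBuilder();
        // Read from the source cluster, transform the payload, and write the
        // result back to the *same* cluster; MirrorMaker (or similar) then
        // copies "transformed-topic" to the target cluster.
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(value -> value.toUpperCase()) // placeholder transformation
               .to("transformed-topic", Produced.with(Serdes.String(), Serdes.String()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```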
`KStream#through()` currently lets you specify a `Produced`. Shouldn't it
also let you specify a `Consumed`? This would let you specify a timestamp
extractor.
Hi all.
I've got a cluster of 3 brokers with around 50 topics. Several topics are
under-replicated. Everything I've seen says I need to restart the followers to
fix that. All my under-replicated topics have the same broker as the leader.
That makes me think it's a leader problem and not a
It's not required to specify Consumed:
- the specified Serdes from Produced are used for the consumer, too
- it would be invalid to specify a custom extractor (for correctness
reasons, we enforce the default timestamp extractor)
Note, that Streams API sets the metadata record timestamp on write,
> it would be invalid to specify a custom extractor (for correctness
> reasons, we enforce the default timestamp extractor)
"default timestamp extractor" meaning the one that was used when reading
the input record?
So, KStream.through() and KStream.to()-Topology.stream() are not
semantically equiva