Hi
We recently upgraded from Kafka 0.10 to 1.1, and we have encountered
several occasions where some partitions in the cluster would go offline
and be unable to recover, with the following error:
20:33:04.702 [controller-event-thread] ERROR state.change.logger -
[Controller id=1 epoch=14]
Thank you Matthias - we're using version 1.0. I can tell my team to relax
and look at upgrading :)
On Mon, May 14, 2018 at 3:48 PM, Matthias J. Sax
wrote:
> It depends on your version. The behavior is known and we put one
> improvement into 1.1 release:
It depends on your version. The behavior is known and we put one
improvement into 1.1 release: https://github.com/apache/kafka/pull/4410
Thus, it's "by design" (for 1.0 and older), but we want to improve it.
Cf: https://issues.apache.org/jira/browse/KAFKA-4969
-Matthias
On 5/13/18 7:52 PM,
Hi all,
We are running a Kafka Streams app with a basic topology of:
consume from topic A -> transform and write through topic B (making the app
a consumer of topic B as well) -> finally write to topic C
We are running it with two instances of the application. Topic A has 100
partitions, topics B and
It's quite possible that the bootstrap server being used in your test
case is different (since you pull it out of some "details") from the one
being used in the standalone Java program. I don't mean the IP address
(since the logs do indicate it is localhost), but I suspect it might be
the port.
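If that is the case, pinning the address explicitly in the client configuration makes a port mismatch easy to spot. A minimal sketch, assuming the broker listens on the Kafka default port 9092 (your actual port may differ):

```properties
# Hypothetical client config; 9092 is only the Kafka default.
# If the test case and the standalone program resolve different ports,
# one of them will fail to reach the broker.
bootstrap.servers=localhost:9092
```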
Is the log coming from your application on Tomcat, or from Kafka? Make sure
you set the right log4j properties file. In general, you can set this in
log4j.properties like this:
log4j.rootLogger=INFO, stdout
The line in your log4j.properties may look a little different. The
key thing is to set the
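For reference, a minimal log4j.properties along those lines might look like this (the appender name "stdout" and the layout are illustrative placeholders, not taken from the original message):

```properties
# Root logger at INFO, writing to a console appender named "stdout"
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c - %m%n
```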
Hi all,
sorry for the delayed answer, I was a bit busy at work over the last few days.
We have found the root cause of our problem: we had a network problem with one
of our Kafka brokers. In fact, that broker no longer had any active partitions;
all the partitions were on the other
Hi Karthick,
You probably want to add this line to your log4j.properties:
log4j.logger.org.apache.kafka=INFO
This will remove all DEBUG lines where the logger name starts with
org.apache.kafka.
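A short sketch of how that override sits in log4j.properties (the extra consumer logger at the end is only a hypothetical example of raising one sub-package back up, not part of the original suggestion):

```properties
# Cap everything under org.apache.kafka at INFO;
# DEBUG lines from those loggers are then suppressed
log4j.logger.org.apache.kafka=INFO
# A more specific logger can still be raised if needed, e.g.:
log4j.logger.org.apache.kafka.clients.consumer=DEBUG
```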
HTH,
Andras
On Fri, May 11, 2018 at 9:28 AM, Karthick Kumar
wrote:
> Hi,
>
>
Hi Ted,
I highly appreciate the response over the weekend, and thanks for pointing
out the JIRAs.
I don't believe the processes are responsible, but rather individual threads
which are still holding the log/index files via IO streams. I am trying to
walk a single-node setup through a debugger to find out
Hi,
Does anyone publish to or subscribe to a Kafka topic in TestNG?
I try to publish to and subscribe to a Kafka topic in my TestNG test case,
and I always get the following exception:
2018-05-13 15:33:58.540 WARN
o.a.kafka.common.network.Selector.pollSelectionKeys[531] - [Producer
clientId=producer-1]