Dustin Cote created KAFKA-4861:
----------------------------------
Summary: log.message.timestamp.type=LogAppendTime breaks Kafka-based consumers
Key: KAFKA-4861
URL: https://issues.apache.org/jira/browse/KAFKA-4861
Project: Kafka
Issue Type: Bug
Components: consumer
Affects Versions: 0.10.2.0
Reporter: Dustin Cote
Assignee: Jason Gustafson
Priority: Blocker

Running 0.10.2 brokers with the property
`log.message.timestamp.type=LogAppendTime` breaks all Kafka-based consumers in
the cluster (a minimal reproduction sketch is at the end of this description).
The consumer fails with:
{code}
[2017-03-07 15:25:10,215] ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: The timestamp of the message is out of acceptable range.
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:535)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:508)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:764)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:745)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:186)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:149)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:116)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:493)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:322)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:253)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:334)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
    at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:55)
    at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:69)
    at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
    at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
{code}
On the broker side you see:
{code}
[2017-03-07 15:25:20,216] INFO [GroupCoordinator 0]: Group console-consumer-73205 with generation 2 is now empty (kafka.coordinator.GroupCoordinator)
[2017-03-07 15:25:20,217] ERROR [Group Metadata Manager on Broker 0]: Appending metadata message for group console-consumer-73205 generation 2 failed due to unexpected error: org.apache.kafka.common.errors.InvalidTimestampException (kafka.coordinator.GroupMetadataManager)
[2017-03-07 15:25:20,218] WARN [GroupCoordinator 0]: Failed to write empty metadata for group console-consumer-73205: The timestamp of the message is out of acceptable range. (kafka.coordinator.GroupCoordinator)
{code}
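For completeness, reproducing this needs only the single non-default broker
setting already quoted above; a minimal server.properties sketch (all other
settings left at their defaults):
{code}
# server.properties on the 0.10.2.0 brokers; everything else at defaults
log.message.timestamp.type=LogAppendTime
{code}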
Marking as a blocker since this appears to be a regression: it does not happen
on 0.10.1.1.
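The console consumer is not special here; any client that uses the new
consumer and joins a group fails the same way. A minimal sketch along these
lines reproduces it against an affected cluster (the bootstrap address, group
id, and topic name below are placeholders, not values from this report):
{code}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LogAppendTimeRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address; point this at a 0.10.2.0 broker that has
        // log.message.timestamp.type=LogAppendTime set.
        props.put("bootstrap.servers", "localhost:9092");
        // Any group id works; joining the group is what triggers the failure.
        props.put("group.id", "timestamp-repro");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "test-topic" is a placeholder; the topic's own config is irrelevant here.
            consumer.subscribe(Collections.singletonList("test-topic"));
            // The first poll performs the JoinGroup/SyncGroup round trip and throws
            // KafkaException: "Unexpected error from SyncGroup: The timestamp of
            // the message is out of acceptable range."
            ConsumerRecords<String, String> records = consumer.poll(10000L);
            System.out.println("Polled " + records.count() + " records");
        }
    }
}
{code}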