[ https://issues.apache.org/jira/browse/KAFKA-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15902079#comment-15902079 ]

ASF GitHub Bot commented on KAFKA-4861:
---------------------------------------

GitHub user ijuma opened a pull request:

    https://github.com/apache/kafka/pull/2657

    KAFKA-4861; GroupMetadataManager record is rejected if broker configured with LogAppendTime

    The record should be created with CreateTime (like in the producer). The conversion to
    LogAppendTime happens automatically (if necessary).
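    As a rough illustration of why the fix works, here is a simplified sketch of the
    broker-side timestamp assignment (not the actual Kafka code; all class and method
    names below are illustrative): a record created with CreateTime is re-stamped by
    the broker at append time when the log is configured with LogAppendTime, whereas
    a record that already claims LogAppendTime is rejected during validation.

```java
// Illustrative sketch only -- TimestampType and TimestampSketch are hypothetical
// names, not Kafka's real classes.
enum TimestampType { CREATE_TIME, LOG_APPEND_TIME }

public class TimestampSketch {

    /**
     * Returns the timestamp the broker would store for a record, given the
     * timestamp type the record was created with and the log's configured type.
     */
    static long assignTimestamp(TimestampType recordType,
                                long recordTimestamp,
                                TimestampType brokerConfigType,
                                long appendTimeMs) {
        if (brokerConfigType == TimestampType.LOG_APPEND_TIME) {
            // Mirrors the rejection seen in the bug report: the record must
            // arrive as CreateTime so the broker can convert it itself.
            if (recordType != TimestampType.CREATE_TIME)
                throw new IllegalArgumentException(
                    "InvalidTimestampException: record must be created with CreateTime");
            // The automatic conversion the PR relies on: broker stamps append time.
            return appendTimeMs;
        }
        // CreateTime-configured log: the original timestamp is kept.
        return recordTimestamp;
    }

    public static void main(String[] args) {
        long stored = assignTimestamp(TimestampType.CREATE_TIME, 1000L,
                                      TimestampType.LOG_APPEND_TIME, 2000L);
        System.out.println(stored); // prints 2000: the broker's append time wins
    }
}
```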

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ijuma/kafka kafka-4861-log-append-time-breaks-group-data-manager

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/2657.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2657
    
----
commit 245319d6ba977bb346cc51abe8d948cf42fc2e13
Author: Ismael Juma <[email protected]>
Date:   2017-03-08T22:18:02Z

    KAFKA-4861; GroupMetadataManager record is rejected if broker configured with LogAppendTime
    
    The record should be created with CreateTime (like in the producer). The conversion to
    LogAppendTime happens automatically (if necessary).

----


> log.message.timestamp.type=LogAppendTime breaks Kafka based consumers
> ---------------------------------------------------------------------
>
>                 Key: KAFKA-4861
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4861
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.10.2.0
>            Reporter: Dustin Cote
>            Assignee: Jason Gustafson
>            Priority: Blocker
>             Fix For: 0.10.2.1
>
>
> Using 0.10.2 brokers with the property `log.message.timestamp.type=LogAppendTime` breaks all Kafka-based consumers for the cluster. The consumer will return:
> {code}
> [2017-03-07 15:25:10,215] ERROR Unknown error when running consumer:  
> (kafka.tools.ConsoleConsumer$)
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: The 
> timestamp of the message is out of acceptable range.
>       at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:535)
>       at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:508)
>       at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:764)
>       at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:745)
>       at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:186)
>       at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:149)
>       at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:116)
>       at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:493)
>       at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:322)
>       at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:253)
>       at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172)
>       at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:334)
>       at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
>       at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
>       at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
>       at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>       at kafka.consumer.NewShinyConsumer.<init>(BaseConsumer.scala:55)
>       at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:69)
>       at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
>       at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}
> On the broker side you see:
> {code}
> [2017-03-07 15:25:20,216] INFO [GroupCoordinator 0]: Group 
> console-consumer-73205 with generation 2 is now empty 
> (kafka.coordinator.GroupCoordinator)
> [2017-03-07 15:25:20,217] ERROR [Group Metadata Manager on Broker 0]: 
> Appending metadata message for group console-consumer-73205 generation 2 
> failed due to unexpected error: 
> org.apache.kafka.common.errors.InvalidTimestampException 
> (kafka.coordinator.GroupMetadataManager)
> [2017-03-07 15:25:20,218] WARN [GroupCoordinator 0]: Failed to write empty 
> metadata for group console-consumer-73205: The timestamp of the message is 
> out of acceptable range. (kafka.coordinator.GroupCoordinator)
> {code}
> Marking as a blocker since this appears to be a regression: the problem does not occur on 0.10.1.1.
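For reference, the broker setting that triggers the issue is a one-line change in the broker configuration (a minimal `server.properties` fragment; the valid values for this property are `CreateTime` and `LogAppendTime`):

```properties
# server.properties -- cluster-wide default timestamp type;
# can also be overridden per topic via message.timestamp.type
log.message.timestamp.type=LogAppendTime
```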



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
