-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25995/#review65477
-----------------------------------------------------------



core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala
<https://reviews.apache.org/r/25995/#comment108686>

    This should be like the previous code - i.e., 1 + ...
    
    E.g., if config.offsetsCommitMaxRetries is one, then we can have two
    attempts; in this version at most one attempt will be made.
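
    A minimal sketch of the intended semantics (not the actual connector code;
    tryCommitOffsets() is a hypothetical helper that returns true on success):
    
        // One initial attempt plus offsetsCommitMaxRetries retries.
        var retriesRemaining = 1 + config.offsetsCommitMaxRetries
        var committed = false
        while (!committed && retriesRemaining > 0) {
          committed = tryCommitOffsets()
          retriesRemaining -= 1
        }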



core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala
<https://reviews.apache.org/r/25995/#comment108687>

    Actually we should fix this. If this is the final commit during shutdown
    (and NOT autocommit) then we need to retry on error, up to the retries
    remaining.
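
    A rough sketch of the suggested behavior (hypothetical names, not the patch
    itself): only the final shutdown commit gets the retry budget, since no
    later auto-commit will pick it up.
    
        // Auto-commits can rely on the next scheduled commit instead.
        val retries =
          if (isShuttingDown && !config.autoCommitEnable) config.offsetsCommitMaxRetries
          else 0
        commitOffsets(retries)  // hypothetical signature taking the retry budget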



core/src/main/scala/kafka/tools/MirrorMaker.scala
<https://reviews.apache.org/r/25995/#comment108691>

    Can you use a named parameter here? i.e., `valueFactory = Some(...`
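
    For illustration (this assumes kafka.utils.Pool's constructor takes an
    Option[K => V] valueFactory):
    
        // The named argument makes the call site self-documenting.
        val cache = new Pool[TopicAndPartition, Long](
          valueFactory = Some((tp: TopicAndPartition) => 0L))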



core/src/main/scala/kafka/tools/MirrorMaker.scala
<https://reviews.apache.org/r/25995/#comment108690>

    Should this be fatal? i.e., fatal is normally used before exiting
    (abnormally). WARN would be more suitable.
    
    I don't think it makes sense to "not advance" the offset here, especially
    if you will still keep sending messages. I think you need to remove it from
    the unacked offset list. E.g., you may configure your mirror maker producer
    with only a few retries (in which case you are okay with data loss). In
    that scenario you should just let the error go and allow the mirror maker
    to proceed normally.
    
    If someone wants zero data loss, the MM should be configured with required
    acks -1 and infinite retries.
    
    Maybe I'm misunderstanding what zero data loss really means - can you
    clarify? (Especially if someone configures the producer with acks of, say,
    one and limited retries.)
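
    For example (illustrative only; the property names assume the new Java
    producer is in use):
    
        val producerProps = new java.util.Properties()
        producerProps.put("acks", "-1")                                  // wait for all ISR replicas
        producerProps.put("retries", Int.MaxValue.toString)              // effectively infinite retries
        producerProps.put("max.in.flight.requests.per.connection", "1")  // keep ordering across retries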



core/src/main/scala/kafka/tools/MirrorMaker.scala
<https://reviews.apache.org/r/25995/#comment108689>

    Should we make this a generic DoublyLinkedList data structure in utils or 
some other suitable place and unit test it as well?
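
    Something along these lines, perhaps (just a sketch of the shape such a
    utils class could take, not part of the patch):
    
        class DoublyLinkedListNode[T](val item: T) {
          var prev: DoublyLinkedListNode[T] = null
          var next: DoublyLinkedListNode[T] = null
        }
    
        class DoublyLinkedList[T] {
          private var head: DoublyLinkedListNode[T] = null
          private var tail: DoublyLinkedListNode[T] = null
    
          // Append a node at the tail.
          def add(node: DoublyLinkedListNode[T]) {
            if (tail == null) { head = node; tail = node }
            else { tail.next = node; node.prev = tail; tail = node }
          }
    
          // Unlink a node from anywhere in the list in O(1).
          def remove(node: DoublyLinkedListNode[T]) {
            if (node.prev != null) node.prev.next = node.next else head = node.next
            if (node.next != null) node.next.prev = node.prev else tail = node.prev
            node.prev = null
            node.next = null
          }
    
          // First item, if any.
          def peek: Option[T] = Option(head).map(_.item)
        }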



core/src/main/scala/kafka/tools/MirrorMaker.scala
<https://reviews.apache.org/r/25995/#comment108688>

    Does not seem to be used (but it probably should be).



core/src/main/scala/kafka/utils/Utils.scala
<https://reviews.apache.org/r/25995/#comment108692>

    I don't think this is necessary, right? i.e., args.map won't throw an NPE
    if you don't provide any additional arguments.
    
        scala> def f(args: Int*) {println(args.size)}
        f: (args: Int*)Unit
    
        scala> f(1,2)
        2
    
        scala> f()
        0


- Joel Koshy


On Dec. 17, 2014, 8:29 p.m., Jiangjie Qin wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/25995/
> -----------------------------------------------------------
> 
> (Updated Dec. 17, 2014, 8:29 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1650
>     https://issues.apache.org/jira/browse/KAFKA-1650
> 
> 
> Repository: kafka
> 
> 
> Description
> -------
> 
> Addressed Guozhang's comments.
> 
> 
> Addressed Guozhang's comments
> 
> 
> commit before switch to trunk
> 
> 
> commit before rebase
> 
> 
> Rebased on trunk, Addressed Guozhang's comments.
> 
> 
> Addressed Guozhang's comments on MaxInFlightRequests
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> mirrormaker-redesign
> 
> 
> Incorporated Guozhang's comments
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> mirrormaker-redesign
> 
> 
> Merged KAFKA-345 into this patch. Incorporated Joel and Jun's comments.
> 
> 
> Added consumer rebalance listener to mirror maker, will test it later.
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> mirrormaker-redesign
> 
> Conflicts:
>       core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala
>       
> core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala
> 
> added custom config for consumer rebalance listener
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> mirrormaker-redesign
> 
> 
> Add configurable consumer rebalance listener
> 
> 
> Incorporated Guozhang's comments
> 
> 
> Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
> mirrormaker-redesign
> 
> 
> Incorporated Guozhang's comments.
> 
> 
> Addressed Guozhang's comment.
> 
> 
> numMessageUnacked should be decremented no matter the send was successful or 
> not.
> 
> 
> Addressed Jun's comments.
> 
> 
> Incorporated Jun's comments
> 
> 
> Incorporated Jun's comments and rebased on trunk
> 
> 
> Rebased on current trunk
> 
> 
> Addressed Joel's comments.
> 
> 
> Addressed Joel's comments.
> 
> 
> Diffs
> -----
> 
>   core/src/main/scala/kafka/consumer/ConsumerConnector.scala 
> 62c0686e816d2888772d5a911becf625eedee397 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> e991d2187d03241f639eeaf6769fb59c8c99664c 
>   core/src/main/scala/kafka/javaapi/consumer/ZookeeperConsumerConnector.scala 
> 9baad34a9793e5067d11289ece2154ba87b388af 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> 77d951d13b8d8ad00af40257fe51623cc2caa61a 
>   core/src/main/scala/kafka/utils/Utils.scala 
> 738c1af9ef5de16fdf5130daab69757a14c48b5c 
> 
> Diff: https://reviews.apache.org/r/25995/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>
