[jira] [Resolved] (KAFKA-3276) Rename 0.10.0.0 to 0.10.1.0 and 0.9.1.0 to 0.10.0.0 in JIRA

2016-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3276.
--
   Resolution: Fixed
 Assignee: Ewen Cheslack-Postava
 Reviewer: Ismael Juma
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

Renamed the versions with review by [~ijuma].

> Rename 0.10.0.0 to 0.10.1.0 and 0.9.1.0 to 0.10.0.0 in JIRA
> ---
>
> Key: KAFKA-3276
> URL: https://issues.apache.org/jira/browse/KAFKA-3276
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.10.0.0
>
>
> Instead of changing the "Fix version" in hundreds of issues, it's easier to 
> rename the versions in JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3276) Rename 0.10.0.0 to 0.10.1.0 and 0.9.1.0 to 0.10.0.0 in JIRA

2016-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3276:
-
Fix Version/s: 0.10.1.0

> Rename 0.10.0.0 to 0.10.1.0 and 0.9.1.0 to 0.10.0.0 in JIRA
> ---
>
> Key: KAFKA-3276
> URL: https://issues.apache.org/jira/browse/KAFKA-3276
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
> Fix For: 0.10.1.0
>
>
> Instead of changing the "Fix version" in hundreds of issues, it's easier to 
> rename the versions in JIRA.





[jira] [Updated] (KAFKA-3036) Add up-conversion and down-conversion of ProducerRequest and FetchRequest to broker.

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3036:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Add up-conversion and down-conversion of ProducerRequest and FetchRequest to 
> broker.
> 
>
> Key: KAFKA-3036
> URL: https://issues.apache.org/jira/browse/KAFKA-3036
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> This ticket will implement the necessary up-conversion and down-conversion 
> for protocol migration in KIP-31 and KIP-32.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+CreateTime+and+LogAppendTime+to+Kafka+message
> As a short summary:
> 1. When message.format.version=0, down-convert MessageAndOffset v1 to 
> MessageAndOffset v0 when it receives ProduceRequest v2
> 2. When message.format.version=1
> a. up-convert MessageAndOffset v0 to MessageAndOffset v1 when it receives 
> ProduceRequest v1
> b. down-convert MessageAndOffset v1 to MessageAndOffset v0 when it receives 
> FetchRequest v1





[jira] [Commented] (KAFKA-3026) KIP-32 (part 2): Changes in broker to over-write timestamp or reject message

2016-02-23 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160283#comment-15160283
 ] 

Ismael Juma commented on KAFKA-3026:


Thanks, that's what I thought.

> KIP-32 (part 2): Changes in broker to over-write timestamp or reject message
> 
>
> Key: KAFKA-3026
> URL: https://issues.apache.org/jira/browse/KAFKA-3026
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Anna Povzner
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> The decision about this JIRA assignment is still under discussion with 
> [~becket_qin]. Not going to implement without his agreement.
> This JIRA includes:
> When the broker receives a message, it checks the configs:
> 1. If message.timestamp.type=LogAppendTime, the server over-writes the 
> timestamp with its current local time
> Message could be compressed or uncompressed. In either case, the timestamp 
> is always over-written with the broker's current time.
> 2. If message.timestamp.type=CreateTime, the server calculates the difference 
> between the current time on the broker and the Timestamp in the message:
> If the difference is within max.message.time.difference.ms, the server will 
> accept it and append it to the log. For compressed messages, the server will 
> update the timestamp in the compressed message to -1: this means that CreateTime 
> is used and the timestamp is in each individual inner message.
> If the difference exceeds max.message.time.difference.ms, the server will 
> reject the entire batch with TimestampExceededThresholdException.
> (Actually adding the timestamp to the message and adding configs are covered 
> by KAFKA-3025).
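The timestamp-handling rules described in this ticket can be sketched as follows. This is an illustrative model, not Kafka's actual broker code; the exception name is taken from the description above, and the function signature is an assumption.

```python
import time

LOG_APPEND_TIME = "LogAppendTime"
CREATE_TIME = "CreateTime"

class TimestampExceededThresholdError(Exception):
    """Stand-in for the TimestampExceededThresholdException named above."""

def validate_timestamp(message_ts, timestamp_type, max_difference_ms, now_ms=None):
    """Return the timestamp the broker should append, or raise on rejection."""
    now = now_ms if now_ms is not None else int(time.time() * 1000)
    if timestamp_type == LOG_APPEND_TIME:
        return now  # always over-write with the broker's current time
    # CreateTime: accept only if within max.message.time.difference.ms
    if abs(now - message_ts) <= max_difference_ms:
        return message_ts
    raise TimestampExceededThresholdError(
        "timestamp %d differs from broker time %d by more than %d ms"
        % (message_ts, now, max_difference_ms))
```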





[jira] [Updated] (KAFKA-3026) KIP-32 (part 2): Changes in broker to over-write timestamp or reject message

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3026:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> KIP-32 (part 2): Changes in broker to over-write timestamp or reject message
> 
>
> Key: KAFKA-3026
> URL: https://issues.apache.org/jira/browse/KAFKA-3026
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Anna Povzner
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> The decision about this JIRA assignment is still under discussion with 
> [~becket_qin]. Not going to implement without his agreement.
> This JIRA includes:
> When the broker receives a message, it checks the configs:
> 1. If message.timestamp.type=LogAppendTime, the server over-writes the 
> timestamp with its current local time
> Message could be compressed or uncompressed. In either case, the timestamp 
> is always over-written with the broker's current time.
> 2. If message.timestamp.type=CreateTime, the server calculates the difference 
> between the current time on the broker and the Timestamp in the message:
> If the difference is within max.message.time.difference.ms, the server will 
> accept it and append it to the log. For compressed messages, the server will 
> update the timestamp in the compressed message to -1: this means that CreateTime 
> is used and the timestamp is in each individual inner message.
> If the difference exceeds max.message.time.difference.ms, the server will 
> reject the entire batch with TimestampExceededThresholdException.
> (Actually adding the timestamp to the message and adding configs are covered 
> by KAFKA-3025).





[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160279#comment-15160279
 ] 

ASF GitHub Bot commented on KAFKA-3093:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/920


> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> Related to KAFKA-3054.
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by connectors and tasks.
> Users should be able to fetch this information via the REST API and send any 
> necessary commands (reconfiguring, restarting, pausing, unpausing) to 
> connectors and tasks via the REST API.
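For illustration, the kind of REST interaction described above might be addressed with URL helpers like these. This is a hypothetical client sketch; the /connectors/&lt;name&gt;/status and /connectors/&lt;name&gt;/restart paths are assumptions based on the description, not confirmed by this thread.

```python
from urllib.parse import quote

# Hypothetical helpers for the Connect REST endpoints described above.
def status_url(base, connector):
    """URL for fetching a connector's status (assumed path)."""
    return "%s/connectors/%s/status" % (base.rstrip("/"), quote(connector))

def restart_url(base, connector):
    """URL for sending a restart command to a connector (assumed path)."""
    return "%s/connectors/%s/restart" % (base.rstrip("/"), quote(connector))
```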





[jira] [Commented] (KAFKA-3036) Add up-conversion and down-conversion of ProducerRequest and FetchRequest to broker.

2016-02-23 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160276#comment-15160276
 ] 

Jiangjie Qin commented on KAFKA-3036:
-

Yes. I'll close it.

> Add up-conversion and down-conversion of ProducerRequest and FetchRequest to 
> broker.
> 
>
> Key: KAFKA-3036
> URL: https://issues.apache.org/jira/browse/KAFKA-3036
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> This ticket will implement the necessary up-conversion and down-conversion 
> for protocol migration in KIP-31 and KIP-32.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+CreateTime+and+LogAppendTime+to+Kafka+message
> As a short summary:
> 1. When message.format.version=0, down-convert MessageAndOffset v1 to 
> MessageAndOffset v0 when it receives ProduceRequest v2
> 2. When message.format.version=1
> a. up-convert MessageAndOffset v0 to MessageAndOffset v1 when it receives 
> ProduceRequest v1
> b. down-convert MessageAndOffset v1 to MessageAndOffset v0 when it receives 
> FetchRequest v1





[jira] [Resolved] (KAFKA-3036) Add up-conversion and down-conversion of ProducerRequest and FetchRequest to broker.

2016-02-23 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved KAFKA-3036.
-
   Resolution: Fixed
Fix Version/s: (was: 0.9.1.0)
   0.10.0.0

This ticket has been implemented as a part of KAFKA-3025

> Add up-conversion and down-conversion of ProducerRequest and FetchRequest to 
> broker.
> 
>
> Key: KAFKA-3036
> URL: https://issues.apache.org/jira/browse/KAFKA-3036
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> This ticket will implement the necessary up-conversion and down-conversion 
> for protocol migration in KIP-31 and KIP-32.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+CreateTime+and+LogAppendTime+to+Kafka+message
> As a short summary:
> 1. When message.format.version=0, down-convert MessageAndOffset v1 to 
> MessageAndOffset v0 when it receives ProduceRequest v2
> 2. When message.format.version=1
> a. up-convert MessageAndOffset v0 to MessageAndOffset v1 when it receives 
> ProduceRequest v1
> b. down-convert MessageAndOffset v1 to MessageAndOffset v0 when it receives 
> FetchRequest v1





[GitHub] kafka pull request: KAFKA-3093: Add Connect status tracking API

2016-02-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/920


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3093.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 920
[https://github.com/apache/kafka/pull/920]

> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> Related to KAFKA-3054.
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by connectors and tasks.
> Users should be able to fetch this information via the REST API and send any 
> necessary commands (reconfiguring, restarting, pausing, unpausing) to 
> connectors and tasks via the REST API.





[jira] [Resolved] (KAFKA-3026) KIP-32 (part 2): Changes in broker to over-write timestamp or reject message

2016-02-23 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved KAFKA-3026.
-
   Resolution: Fixed
Fix Version/s: 0.10.0.0

This ticket has been implemented in KAFKA-3025

> KIP-32 (part 2): Changes in broker to over-write timestamp or reject message
> 
>
> Key: KAFKA-3026
> URL: https://issues.apache.org/jira/browse/KAFKA-3026
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Anna Povzner
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> The decision about this JIRA assignment is still under discussion with 
> [~becket_qin]. Not going to implement without his agreement.
> This JIRA includes:
> When the broker receives a message, it checks the configs:
> 1. If message.timestamp.type=LogAppendTime, the server over-writes the 
> timestamp with its current local time
> Message could be compressed or uncompressed. In either case, the timestamp 
> is always over-written with the broker's current time.
> 2. If message.timestamp.type=CreateTime, the server calculates the difference 
> between the current time on the broker and the Timestamp in the message:
> If the difference is within max.message.time.difference.ms, the server will 
> accept it and append it to the log. For compressed messages, the server will 
> update the timestamp in the compressed message to -1: this means that CreateTime 
> is used and the timestamp is in each individual inner message.
> If the difference exceeds max.message.time.difference.ms, the server will 
> reject the entire batch with TimestampExceededThresholdException.
> (Actually adding the timestamp to the message and adding configs are covered 
> by KAFKA-3025).





[jira] [Commented] (KAFKA-3026) KIP-32 (part 2): Changes in broker to over-write timestamp or reject message

2016-02-23 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160274#comment-15160274
 ] 

Jiangjie Qin commented on KAFKA-3026:
-

This has actually been done. We used one ticket to finish the work tracked in 
three tickets. I'll clean it up.

> KIP-32 (part 2): Changes in broker to over-write timestamp or reject message
> 
>
> Key: KAFKA-3026
> URL: https://issues.apache.org/jira/browse/KAFKA-3026
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Anna Povzner
>Assignee: Jiangjie Qin
>
> The decision about this JIRA assignment is still under discussion with 
> [~becket_qin]. Not going to implement without his agreement.
> This JIRA includes:
> When the broker receives a message, it checks the configs:
> 1. If message.timestamp.type=LogAppendTime, the server over-writes the 
> timestamp with its current local time
> Message could be compressed or uncompressed. In either case, the timestamp 
> is always over-written with the broker's current time.
> 2. If message.timestamp.type=CreateTime, the server calculates the difference 
> between the current time on the broker and the Timestamp in the message:
> If the difference is within max.message.time.difference.ms, the server will 
> accept it and append it to the log. For compressed messages, the server will 
> update the timestamp in the compressed message to -1: this means that CreateTime 
> is used and the timestamp is in each individual inner message.
> If the difference exceeds max.message.time.difference.ms, the server will 
> reject the entire batch with TimestampExceededThresholdException.
> (Actually adding the timestamp to the message and adding configs are covered 
> by KAFKA-3025).





Re: [VOTE] Make next Kafka release 0.10.0.0 instead of 0.9.1.0

2016-02-23 Thread Ismael Juma
I also changed the "Fix version" for a number of recent JIRAs from 0.10.0.0
to 0.9.1.0. The reason is that once we rename the JIRA versions (as per
KAFKA-3276), they will all automatically become 0.10.0.0 (without causing a
storm of notifications).

Ismael

On Tue, Feb 23, 2016 at 10:26 PM, Ismael Juma  wrote:

> Thank you Becket. I have:
>
> * Updated KIP wiki page so that all KIPs with a target of 0.9.1.0 now
> target 0.10.0.0
> * Filed https://issues.apache.org/jira/browse/KAFKA-3275 to track tasks
> that still need to be done. The next step would be to update JIRA versions
> as per https://issues.apache.org/jira/browse/KAFKA-3276 (we need a
> committer to do this, I believe).
>
> Ismael
>
> On Tue, Feb 23, 2016 at 5:11 PM, Becket Qin  wrote:
>
>> Thanks everyone for voting.
>>
>> The vote has passed with +6 (binding) and +5(non-binding)
>>
>> Jiangjie (Becket) Qin
>>
>> On Tue, Feb 23, 2016 at 2:38 PM, Harsha  wrote:
>>
>> > +1
>> >
>> > On Tue, Feb 23, 2016, at 02:25 PM, Christian Posta wrote:
>> > > +1 non binding
>> > >
>> > > On Tue, Feb 23, 2016 at 3:18 PM, Gwen Shapira 
>> wrote:
>> > >
>> > > > +1
>> > > >
>> > > > On Tue, Feb 23, 2016 at 1:58 PM, Jun Rao  wrote:
>> > > >
>> > > > > +1.
>> > > > >
>> > > > > Thanks,
>> > > > >
>> > > > > Jun
>> > > > >
>> > > > > On Fri, Feb 19, 2016 at 11:00 AM, Becket Qin <
>> becket@gmail.com>
>> > > > wrote:
>> > > > >
>> > > > > > Hi All,
>> > > > > >
>> > > > > > We would like to start this voting thread on making next Kafka
>> > release
>> > > > > > 0.10.0.0 instead of 0.9.1.0.
>> > > > > >
>> > > > > > The next Kafka release will have several significant important
>> new
>> > > > > > features/changes such as Kafka Stream, Message Format Change,
>> > Client
>> > > > > > Interceptors and several new consumer API changes, etc. We feel
>> it
>> > is
>> > > > > > better to make next Kafka release 0.10.0.0 instead of 0.9.1.0.
>> > > > > >
>> > > > > > Some previous discussions are in the following thread.
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> >
>> http://mail-archives.apache.org/mod_mbox/kafka-dev/201602.mbox/%3ccabtagwfzigx1frzd020vk9fanj0s9nkszfuwk677bqxfuuc...@mail.gmail.com%3E
>> > > > > >
>> > > > > > Thanks,
>> > > > > >
>> > > > > > Jiangjie (Becket) Qin
>> > > > > >
>> > > > >
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > *Christian Posta*
>> > > twitter: @christianposta
>> > > http://www.christianposta.com/blog
>> > > http://fabric8.io
>> >
>>
>
>


[jira] [Updated] (KAFKA-1377) transient unit test failure in LogOffsetTest

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1377:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> transient unit test failure in LogOffsetTest
> 
>
> Key: KAFKA-1377
> URL: https://issues.apache.org/jira/browse/KAFKA-1377
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>  Labels: newbie
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-1377.patch, KAFKA-1377_2014-04-11_17:42:13.patch, 
> KAFKA-1377_2014-04-11_18:14:45.patch
>
>
> Saw the following transient unit test failure.
> kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.server.LogOffsetTest.testGetOffsetsBeforeEarliestTime(LogOffsetTest.scala:198)





[jira] [Updated] (KAFKA-2511) KIP-31 & KIP-32: message format change + adding timestamp to messages

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2511:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> KIP-31 & KIP-32: message format change + adding timestamp to messages
> -
>
> Key: KAFKA-2511
> URL: https://issues.apache.org/jira/browse/KAFKA-2511
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> This ticket is created for KIP 31 and KIP-32. Please refer to the KIPs for 
> details.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-31+-+Message+format+change+proposal
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+CreateTime+and+LogAppendTime+to+Kafka+message





[jira] [Commented] (KAFKA-3026) KIP-32 (part 2): Changes in broker to over-write timestamp or reject message

2016-02-23 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160262#comment-15160262
 ] 

Ismael Juma commented on KAFKA-3026:


[~becket_qin], what's the status of this?

> KIP-32 (part 2): Changes in broker to over-write timestamp or reject message
> 
>
> Key: KAFKA-3026
> URL: https://issues.apache.org/jira/browse/KAFKA-3026
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Anna Povzner
>Assignee: Jiangjie Qin
>
> The decision about this JIRA assignment is still under discussion with 
> [~becket_qin]. Not going to implement without his agreement.
> This JIRA includes:
> When the broker receives a message, it checks the configs:
> 1. If message.timestamp.type=LogAppendTime, the server over-writes the 
> timestamp with its current local time
> Message could be compressed or uncompressed. In either case, the timestamp 
> is always over-written with the broker's current time.
> 2. If message.timestamp.type=CreateTime, the server calculates the difference 
> between the current time on the broker and the Timestamp in the message:
> If the difference is within max.message.time.difference.ms, the server will 
> accept it and append it to the log. For compressed messages, the server will 
> update the timestamp in the compressed message to -1: this means that CreateTime 
> is used and the timestamp is in each individual inner message.
> If the difference exceeds max.message.time.difference.ms, the server will 
> reject the entire batch with TimestampExceededThresholdException.
> (Actually adding the timestamp to the message and adding configs are covered 
> by KAFKA-3025).





[jira] [Updated] (KAFKA-1332) Add functionality to the offsetsBeforeTime() API

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1332:
---
Fix Version/s: 0.9.1.0

> Add functionality to the offsetsBeforeTime() API
> 
>
> Key: KAFKA-1332
> URL: https://issues.apache.org/jira/browse/KAFKA-1332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Neha Narkhede
> Fix For: 0.9.1.0
>
>
> Add functionality to the offsetsBeforeTime() API to load offsets 
> corresponding to a particular timestamp, including earliest and latest offsets





[jira] [Updated] (KAFKA-3036) Add up-conversion and down-conversion of ProducerRequest and FetchRequest to broker.

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3036:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Add up-conversion and down-conversion of ProducerRequest and FetchRequest to 
> broker.
> 
>
> Key: KAFKA-3036
> URL: https://issues.apache.org/jira/browse/KAFKA-3036
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> This ticket will implement the necessary up-conversion and down-conversion 
> for protocol migration in KIP-31 and KIP-32.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+CreateTime+and+LogAppendTime+to+Kafka+message
> As a short summary:
> 1. When message.format.version=0, down-convert MessageAndOffset v1 to 
> MessageAndOffset v0 when it receives ProduceRequest v2
> 2. When message.format.version=1
> a. up-convert MessageAndOffset v0 to MessageAndOffset v1 when it receives 
> ProduceRequest v1
> b. down-convert MessageAndOffset v1 to MessageAndOffset v0 when it receives 
> FetchRequest v1





[jira] [Updated] (KAFKA-1332) Add functionality to the offsetsBeforeTime() API

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1332:
---
Affects Version/s: (was: 0.10.0.0)
   0.9.0.0

> Add functionality to the offsetsBeforeTime() API
> 
>
> Key: KAFKA-1332
> URL: https://issues.apache.org/jira/browse/KAFKA-1332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Neha Narkhede
> Fix For: 0.9.1.0
>
>
> Add functionality to the offsetsBeforeTime() API to load offsets 
> corresponding to a particular timestamp, including earliest and latest offsets





[jira] [Updated] (KAFKA-1332) Add functionality to the offsetsBeforeTime() API

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1332:
---
Fix Version/s: (was: 0.10.0.0)

> Add functionality to the offsetsBeforeTime() API
> 
>
> Key: KAFKA-1332
> URL: https://issues.apache.org/jira/browse/KAFKA-1332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Neha Narkhede
> Fix For: 0.9.1.0
>
>
> Add functionality to the offsetsBeforeTime() API to load offsets 
> corresponding to a particular timestamp, including earliest and latest offsets





[jira] [Updated] (KAFKA-3101) Optimize Aggregation Outputs

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3101:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Optimize Aggregation Outputs
> 
>
> Key: KAFKA-3101
> URL: https://issues.apache.org/jira/browse/KAFKA-3101
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Today we emit one output record for each incoming message for Table / 
> Windowed Stream Aggregations. For example, say we have a sequence of 
> aggregate outputs computed from the input stream (assuming there is no agg 
> value for this key before):
> V1, V2, V3, V4, V5
> Then the aggregator will output the following sequence of Change<newValue, 
> oldValue>:
> <V1, null>, <V2, V1>, <V3, V2>, <V4, V3>, <V5, V4>
> which could cost a lot of CPU overhead computing the intermediate results. 
> Instead if we can let the underlying state store to "remember" the last 
> emitted old value, we can reduce the number of emits based on some configs. 
> More specifically, we can add one more field in the KV store engine storing 
> the last emitted old value, which only get updated when we emit to the 
> downstream processor. For example:
> At Beginning: 
> Store: key => empty (no agg values yet)
> V1 computed: 
> Update Both in Store: key => (V1, V1), Emit <V1, null>
> V2 computed: 
> Update NewValue in Store: key => (V2, V1), No Emit
> V3 computed: 
> Update NewValue in Store: key => (V3, V1), No Emit
> V4 computed: 
> Update Both in Store: key => (V4, V4), Emit <V4, V1>
> V5 computed: 
> Update NewValue in Store: key => (V5, V4), No Emit
> One more thing to consider is that we need a "closing" time control on the 
> not-yet-emitted keys; when some time has elapsed (or the window is to be 
> closed), we need to check, for each key, whether its current materialized pair 
> has not yet been emitted (for example <V5, V4> in the above example). 
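The store-and-suppress scheme in the example above can be sketched as follows. This is an illustrative model; when exactly to re-emit would be config-driven, so this sketch only emits on the first update for a key and on an explicit window close.

```python
class DedupStore:
    """Per-key store of (newValue, lastEmittedValue), as in the example above."""

    def __init__(self):
        self.store = {}    # key -> (new_value, last_emitted_value)
        self.emitted = []  # downstream emits as (new_value, old_value) pairs

    def update(self, key, new_value):
        if key not in self.store:
            # First value for this key: emit <V1, null> and record both fields.
            self.store[key] = (new_value, new_value)
            self.emitted.append((new_value, None))
        else:
            # Only the new value is updated in the store; nothing is emitted.
            _, last_emitted = self.store[key]
            self.store[key] = (new_value, last_emitted)

    def close_window(self, key):
        # "Closing" time control: emit any not-yet-emitted materialized pair.
        new_value, last_emitted = self.store[key]
        if new_value != last_emitted:
            self.emitted.append((new_value, last_emitted))
            self.store[key] = (new_value, new_value)
```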





[jira] [Updated] (KAFKA-1545) java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on some irregular hostnames

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1545:
---
Fix Version/s: (was: 0.10.0.0)

> java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on 
> some irregular hostnames
> ---
>
> Key: KAFKA-1545
> URL: https://issues.apache.org/jira/browse/KAFKA-1545
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Guozhang Wang
>Assignee: Rekha Joshi
>  Labels: newbie
>
> For example:
> kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic FAILED
> java.net.UnknownHostException: guwang-mn2: guwang-mn2: nodename nor 
> servname provided, or not known
> at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
> at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:59)
> at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:121)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:130)
> at kafka.server.LogOffsetTest.setUp(LogOffsetTest.scala:53)
> Caused by:
> java.net.UnknownHostException: guwang-mn2: nodename nor servname 
> provided, or not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
> at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
> ... 5 more
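A defensive workaround for this failure mode, falling back when the local hostname does not resolve, might look like the following sketch. This is illustrative only and not Kafka's actual fix; real deployments would instead configure an explicit advertised host name for the broker.

```python
import socket

def resolvable_local_host(fallback="127.0.0.1"):
    """Return the local hostname if it resolves, else a fallback address.

    Hypothetical workaround for hosts whose name is not resolvable
    (the failure mode in the stack trace above).
    """
    name = socket.gethostname()
    try:
        socket.gethostbyname(name)  # raises socket.gaierror on irregular names
        return name
    except socket.gaierror:
        return fallback
```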





[jira] [Commented] (KAFKA-3036) Add up-conversion and down-conversion of ProducerRequest and FetchRequest to broker.

2016-02-23 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160257#comment-15160257
 ] 

Ismael Juma commented on KAFKA-3036:


[~becket_qin], this has been done right?

> Add up-conversion and down-conversion of ProducerRequest and FetchRequest to 
> broker.
> 
>
> Key: KAFKA-3036
> URL: https://issues.apache.org/jira/browse/KAFKA-3036
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> This ticket will implement the necessary up-conversion and down-conversion 
> for protocol migration in KIP-31 and KIP-32.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+CreateTime+and+LogAppendTime+to+Kafka+message
> As a short summary:
> 1. When message.format.version=0, down-convert MessageAndOffset v1 to 
> MessageAndOffset v0 when it receives ProduceRequest v2
> 2. When message.format.version=1
> a. up-convert MessageAndOffset v0 to MessageAndOffset v1 when it receives 
> ProduceRequest v1
> b. down-convert MessageAndOffset v1 to MessageAndOffset v0 when it receives 
> FetchRequest v1





[jira] [Updated] (KAFKA-3202) Add system test for KIP-31 and KIP-32 - Change message format version on the fly

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3202:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Add system test for KIP-31 and KIP-32 - Change message format version on the 
> fly
> 
>
> Key: KAFKA-3202
> URL: https://issues.apache.org/jira/browse/KAFKA-3202
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> The system test should cover the case that message format changes are made 
> when clients are producing/consuming. The message format change should not 
> cause client-side issues.





[jira] [Updated] (KAFKA-3163) KIP-33 - Add a time based log index to Kafka

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3163:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> KIP-33 - Add a time based log index to Kafka
> 
>
> Key: KAFKA-3163
> URL: https://issues.apache.org/jira/browse/KAFKA-3163
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.1.0
>
>
> This ticket is associated with KIP-33.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index





[jira] [Updated] (KAFKA-2522) ConsumerGroupCommand sends all output to STDOUT

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2522:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> ConsumerGroupCommand sends all output to STDOUT
> ---
>
> Key: KAFKA-2522
> URL: https://issues.apache.org/jira/browse/KAFKA-2522
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.1.1, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.8.2.2
>Reporter: Dmitry Melanchenko
>Priority: Trivial
> Fix For: 0.9.1.0
>
> Attachments: kafka_2522_print_err_to_stderr.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> kafka.admin.ConsumerGroupCommand sends all messages to STDOUT. To be 
> consistent it should send normal output to STDOUT and error messages to 
> STDERR.
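The requested split is simply a matter of routing by message kind; a minimal illustrative sketch (class and method names are mine, not the tool's actual API):

```java
// Illustrative routing of normal vs. error output, as the ticket requests:
// normal results go to STDOUT, diagnostics go to STDERR, so that piping
// the command's output does not mix the two streams.
class ConsoleOutput {
    static void info(String msg) {
        System.out.println(msg);   // normal output -> STDOUT
    }

    static void error(String msg) {
        System.err.println(msg);   // error messages -> STDERR
    }
}
```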





[jira] [Updated] (KAFKA-3188) Add system test for KIP-31 and KIP-32 - Compatibility Test

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3188:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Add system test for KIP-31 and KIP-32 - Compatibility Test
> --
>
> Key: KAFKA-3188
> URL: https://issues.apache.org/jira/browse/KAFKA-3188
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
> Fix For: 0.9.1.0
>
>
> The integration test should test the compatibility of the 0.10.0 broker with 
> clients on older versions. The client versions should include 0.9.0 and 0.8.x.





[jira] [Updated] (KAFKA-1215) Rack-Aware replica assignment option

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1215:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Rack-Aware replica assignment option
> 
>
> Key: KAFKA-1215
> URL: https://issues.apache.org/jira/browse/KAFKA-1215
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Joris Van Remoortere
>Assignee: Allen Wang
> Fix For: 0.9.1.0
>
> Attachments: rack_aware_replica_assignment_v1.patch, 
> rack_aware_replica_assignment_v2.patch
>
>
> Adding a rack-id to kafka config. This rack-id can be used during replica 
> assignment by using the max-rack-replication argument in the admin scripts 
> (create topic, etc.). By default the original replication assignment 
> algorithm is used because max-rack-replication defaults to -1. 
> max-rack-replication > -1 is not honored if you are doing manual replica 
> assignment (preferred).
> If this looks good I can add some test cases specific to the rack-aware 
> assignment.
> I can also port this to trunk. We are currently running 0.8.0 in production 
> and need this, so I wrote the patch against that.





[jira] [Updated] (KAFKA-3257) bootstrap-test-env.sh version check fails when grep has --colour option enabled.

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3257:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> bootstrap-test-env.sh version check fails when grep has --colour option 
> enabled.
> 
>
> Key: KAFKA-3257
> URL: https://issues.apache.org/jira/browse/KAFKA-3257
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.9.0.1
>Reporter: Jiangjie Qin
>  Labels: newbie++
> Fix For: 0.9.1.0
>
>
> When checking the versions, we use the following command:
> {code}
> vagrant --version | egrep -o "[0-9]+\.[0-9]+\.[0-9]+"
> {code}
> This does not work if the user's box has the --colour option enabled. In my 
> case it complains:
> Found Vagrant version 1.8.1. Please upgrade to 1.6.4 or higher (see 
> http://www.vagrantup.com for details)
> We should change this line to:
> {code}
> vagrant --version | egrep --colour=never -o "[0-9]+\.[0-9]+\.[0-9]+"
> {code}
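The underlying problem is that colourised grep output embeds ANSI escape sequences around the match, so a literal comparison against the extracted version fails. Stripping the escapes first avoids this; the sketch below (in Java, purely for illustration, mimicking what `egrep --colour=never -o` achieves) shows the idea:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class VersionExtract {
    // Remove ANSI colour escape sequences, then pull out the first
    // dotted version number from the remaining plain text.
    static String extractVersion(String raw) {
        String plain = raw.replaceAll("\u001B\\[[0-9;]*[mK]", "");
        Matcher m = Pattern.compile("[0-9]+\\.[0-9]+\\.[0-9]+").matcher(plain);
        return m.find() ? m.group() : null;
    }
}
```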





[jira] [Updated] (KAFKA-3046) add ByteBuffer Serializer

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3046:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> add ByteBuffer Serializer
> --
>
> Key: KAFKA-3046
> URL: https://issues.apache.org/jira/browse/KAFKA-3046
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Xin Wang
> Fix For: 0.9.1.0
>
>
> ByteBuffer is widely used in many scenarios. (e.g., storm-sql can specify 
> Kafka as the external data source, and ByteBuffer can be used for the value 
> serializer.) Officially adding a ByteBuffer serializer would be convenient 
> for users.
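As a sketch of the core conversion such a serializer would perform — a real implementation would implement `org.apache.kafka.common.serialization.Serializer`; the standalone helper and its name here are mine, for illustration only:

```java
import java.nio.ByteBuffer;

public class ByteBufferBytes {
    // Copy the buffer's remaining bytes into a fresh array, which is the
    // byte[] a Kafka serializer must return. Working on a duplicate leaves
    // the caller's position/limit untouched.
    public static byte[] toBytes(ByteBuffer data) {
        if (data == null) {
            return null; // Kafka serializers map null to null
        }
        ByteBuffer copy = data.duplicate();
        byte[] bytes = new byte[copy.remaining()];
        copy.get(bytes);
        return bytes;
    }
}
```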





[jira] [Updated] (KAFKA-3245) need a way to specify the number of replicas for change log topics

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3245:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> need a way to specify the number of replicas for change log topics
> --
>
> Key: KAFKA-3245
> URL: https://issues.apache.org/jira/browse/KAFKA-3245
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Currently the number of replicas of auto-created change log topics is one. 
> This makes stream processing not fault-tolerant. A way to specify the number 
> of replicas in config is desired.





[jira] [Updated] (KAFKA-3201) Add system test for KIP-31 and KIP-32 - Upgrade Test

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3201:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Add system test for KIP-31 and KIP-32 - Upgrade Test
> 
>
> Key: KAFKA-3201
> URL: https://issues.apache.org/jira/browse/KAFKA-3201
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
> Fix For: 0.9.1.0
>
>
> This system test should test the procedure to upgrade a Kafka broker from 
> 0.8.x and 0.9.0 to 0.10.0
> The procedure is documented in KIP-32:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message





[jira] [Updated] (KAFKA-3253) Skip duplicate message size check if there is no re-compression during log appending.

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3253:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Skip duplicate message size check if there is no re-compression during log 
> appending.
> -
>
> Key: KAFKA-3253
> URL: https://issues.apache.org/jira/browse/KAFKA-3253
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> In Log.append(), if the messages were not re-compressed, we don't need to 
> check the message size again because it has already been checked in 
> analyzeAndValidateMessageSet(). Also this second check is only needed when 
> assignOffsets is true.





[jira] [Updated] (KAFKA-2982) Mark the old Scala producer and related classes as deprecated

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2982:
---
Fix Version/s: (was: 0.10.0.0)
   0.9.1.0

> Mark the old Scala producer and related classes as deprecated
> -
>
> Key: KAFKA-2982
> URL: https://issues.apache.org/jira/browse/KAFKA-2982
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
> Fix For: 0.9.1.0
>
>
> Now that the new producer and consumer are released the old Scala producer 
> and consumer clients should be deprecated to encourage use of the new clients 
> and facilitate the removal of the old clients.





Re: [VOTE] Make next Kafka release 0.10.0.0 instead of 0.9.1.0

2016-02-23 Thread Ismael Juma
Thank you Becket. I have:

* Updated KIP wiki page so that all KIPs with a target of 0.9.1.0 now
target 0.10.0.0
* Filed https://issues.apache.org/jira/browse/KAFKA-3275 to track tasks
that still need to be done. The next step would be to update JIRA versions
as per https://issues.apache.org/jira/browse/KAFKA-3276 (we need a
committer to do this, I believe).

Ismael

On Tue, Feb 23, 2016 at 5:11 PM, Becket Qin  wrote:

> Thanks everyone for voting.
>
> The vote has passed with +6 (binding) and +5(non-binding)
>
> Jiangjie (Becket) Qin
>
> On Tue, Feb 23, 2016 at 2:38 PM, Harsha  wrote:
>
> > +1
> >
> > On Tue, Feb 23, 2016, at 02:25 PM, Christian Posta wrote:
> > > +1 non binding
> > >
> > > On Tue, Feb 23, 2016 at 3:18 PM, Gwen Shapira 
> wrote:
> > >
> > > > +1
> > > >
> > > > On Tue, Feb 23, 2016 at 1:58 PM, Jun Rao  wrote:
> > > >
> > > > > +1.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > > On Fri, Feb 19, 2016 at 11:00 AM, Becket Qin  >
> > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > We would like to start this voting thread on making next Kafka
> > release
> > > > > > 0.10.0.0 instead of 0.9.1.0.
> > > > > >
> > > > > > The next Kafka release will have several significant important
> new
> > > > > > features/changes such as Kafka Stream, Message Format Change,
> > Client
> > > > > > Interceptors and several new consumer API changes, etc. We feel
> it
> > is
> > > > > > better to make next Kafka release 0.10.0.0 instead of 0.9.1.0.
> > > > > >
> > > > > > Some previous discussions are in the following thread.
> > > > > >
> > > > > >
> > > > >
> > > >
> >
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201602.mbox/%3ccabtagwfzigx1frzd020vk9fanj0s9nkszfuwk677bqxfuuc...@mail.gmail.com%3E
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jiangjie (Becket) Qin
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > *Christian Posta*
> > > twitter: @christianposta
> > > http://www.christianposta.com/blog
> > > http://fabric8.io
> >
>


[jira] [Updated] (KAFKA-3277) Update trunk version to be 0.10.0.0-SNAPSHOT

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3277:
---
Status: Patch Available  (was: Open)

> Update trunk version to be 0.10.0.0-SNAPSHOT
> 
>
> Key: KAFKA-3277
> URL: https://issues.apache.org/jira/browse/KAFKA-3277
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>






[jira] [Commented] (KAFKA-3277) Update trunk version to be 0.10.0.0-SNAPSHOT

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160241#comment-15160241
 ] 

ASF GitHub Bot commented on KAFKA-3277:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/963

KAFKA-3277; Update trunk version to be 0.10.0.0-SNAPSHOT

Also update `kafka-merge-pr.py` and `tests/kafkatest/__init__.py`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka update-trunk-0.10.0.0-SNAPSHOT

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/963.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #963


commit 679fc72f34d38e032cab56cce6cf70e0fced0e4f
Author: Ismael Juma 
Date:   2016-02-24T06:19:14Z

Update trunk version to be 0.10.0.0-SNAPSHOT

Also update `kafka-merge-pr.py` and
`tests/kafkatest/__init__.py`.




> Update trunk version to be 0.10.0.0-SNAPSHOT
> 
>
> Key: KAFKA-3277
> URL: https://issues.apache.org/jira/browse/KAFKA-3277
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>






[GitHub] kafka pull request: KAFKA-3277; Update trunk version to be 0.10.0....

2016-02-23 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/963

KAFKA-3277; Update trunk version to be 0.10.0.0-SNAPSHOT

Also update `kafka-merge-pr.py` and `tests/kafkatest/__init__.py`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka update-trunk-0.10.0.0-SNAPSHOT

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/963.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #963


commit 679fc72f34d38e032cab56cce6cf70e0fced0e4f
Author: Ismael Juma 
Date:   2016-02-24T06:19:14Z

Update trunk version to be 0.10.0.0-SNAPSHOT

Also update `kafka-merge-pr.py` and
`tests/kafkatest/__init__.py`.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3277) Update trunk version to be 0.10.0.0-SNAPSHOT

2016-02-23 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3277:
--

 Summary: Update trunk version to be 0.10.0.0-SNAPSHOT
 Key: KAFKA-3277
 URL: https://issues.apache.org/jira/browse/KAFKA-3277
 Project: Kafka
  Issue Type: Sub-task
  Components: build
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.10.0.0








[jira] [Created] (KAFKA-3276) Rename 0.10.0.0 to 0.10.1.0 and 0.9.1.0 to 0.10.0.0 in JIRA

2016-02-23 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3276:
--

 Summary: Rename 0.10.0.0 to 0.10.1.0 and 0.9.1.0 to 0.10.0.0 in 
JIRA
 Key: KAFKA-3276
 URL: https://issues.apache.org/jira/browse/KAFKA-3276
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma


Instead of changing the "Fix version" in hundreds of issues, it's easier to 
rename the versions in JIRA.





[jira] [Created] (KAFKA-3275) Replace 0.9.1.0 references with 0.10.0.0

2016-02-23 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3275:
--

 Summary: Replace 0.9.1.0 references with 0.10.0.0
 Key: KAFKA-3275
 URL: https://issues.apache.org/jira/browse/KAFKA-3275
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma


The next release will be 0.10.0.0 instead of 0.9.1.0 based on the mailing list 
vote:

http://search-hadoop.com/m/uyzND1kfh8g1RBuVm=Re+VOTE+Make+next+Kafka+release+0+10+0+0+instead+of+0+9+1+0

We need to update a number of places to take this into account.





[jira] [Updated] (KAFKA-1196) java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1196:
---
Priority: Critical  (was: Blocker)

> java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33
> ---
>
> Key: KAFKA-1196
> URL: https://issues.apache.org/jira/browse/KAFKA-1196
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: running java 1.7, linux and kafka compiled against scala 
> 2.9.2
>Reporter: Gerrit Jansen van Vuuren
>Assignee: Ewen Cheslack-Postava
>Priority: Critical
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1196.patch
>
>
> I have 6 topics each with 8 partitions spread over 4 kafka servers.
> the servers are 24 core 72 gig ram.
> While consuming from the topics I get an IllegalArgumentException and all 
> consumption stops; the error keeps being thrown.
> I've tracked it down to FetchResponse.scala line 33.
> The error happens when the FetchResponsePartitionData object's readFrom 
> method calls:
> messageSetBuffer.limit(messageSetSize)
> I put in some debug code: the messageSetSize is 671758648, while 
> buffer.capacity() gives 155733313; for some reason the buffer is smaller than 
> the required message size.
> I don't know the consumer code enough to debug this. It doesn't matter if 
> compression is used or not.





[jira] [Commented] (KAFKA-1196) java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

2016-02-23 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160213#comment-15160213
 ] 

Ewen Cheslack-Postava commented on KAFKA-1196:
--

[~ijuma] Pretty sure I didn't set the fix version on this (it's a throwback, 
and I doubt I would have known what fix version to set back then anyway). 
It was probably set incorrectly with the initial report and can be changed.

> java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33
> ---
>
> Key: KAFKA-1196
> URL: https://issues.apache.org/jira/browse/KAFKA-1196
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: running java 1.7, linux and kafka compiled against scala 
> 2.9.2
>Reporter: Gerrit Jansen van Vuuren
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1196.patch
>
>
> I have 6 topics each with 8 partitions spread over 4 kafka servers.
> the servers are 24 core 72 gig ram.
> While consuming from the topics I get an IllegalArgumentException and all 
> consumption stops; the error keeps being thrown.
> I've tracked it down to FetchResponse.scala line 33.
> The error happens when the FetchResponsePartitionData object's readFrom 
> method calls:
> messageSetBuffer.limit(messageSetSize)
> I put in some debug code: the messageSetSize is 671758648, while 
> buffer.capacity() gives 155733313; for some reason the buffer is smaller than 
> the required message size.
> I don't know the consumer code enough to debug this. It doesn't matter if 
> compression is used or not.





[jira] [Commented] (KAFKA-1196) java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33

2016-02-23 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160205#comment-15160205
 ] 

Ismael Juma commented on KAFKA-1196:


[~ewencp], this is marked as "Blocker" with a "Fix Version" of 0.10.0. Is that 
really the case?

> java.lang.IllegalArgumentException Buffer.limit on FetchResponse.scala + 33
> ---
>
> Key: KAFKA-1196
> URL: https://issues.apache.org/jira/browse/KAFKA-1196
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.0
> Environment: running java 1.7, linux and kafka compiled against scala 
> 2.9.2
>Reporter: Gerrit Jansen van Vuuren
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1196.patch
>
>
> I have 6 topics each with 8 partitions spread over 4 kafka servers.
> the servers are 24 core 72 gig ram.
> While consuming from the topics I get an IllegalArgumentException and all 
> consumption stops; the error keeps being thrown.
> I've tracked it down to FetchResponse.scala line 33.
> The error happens when the FetchResponsePartitionData object's readFrom 
> method calls:
> messageSetBuffer.limit(messageSetSize)
> I put in some debug code: the messageSetSize is 671758648, while 
> buffer.capacity() gives 155733313; for some reason the buffer is smaller than 
> the required message size.
> I don't know the consumer code enough to debug this. It doesn't matter if 
> compression is used or not.





Re: new producer failed with org.apache.kafka.common.errors.TimeoutException

2016-02-23 Thread Ewen Cheslack-Postava
Kris,

This is a bit surprising, but handling the bootstrap servers, broker
failures/retirement, and cluster metadata properly is genuinely hard to
get right!

https://issues.apache.org/jira/browse/KAFKA-1843 explains some of the
challenges. https://issues.apache.org/jira/browse/KAFKA-3068 shows the
types of issues that can result from trying to better recover from failures
or your situation of graceful shutdown.

I think https://issues.apache.org/jira/browse/KAFKA-2459 might have
addressed the incorrect behavior you are seeing in 0.8.2.1 -- the same
bootstrap broker could be selected due to incorrect handling of
backoff/timeouts. I can't be sure without more info, but it sounds like it
could be the same issue. Despite part of the fix being rolled back due to
KAFKA-3068, I think the relevant part which fixes the timeouts should still
be present in 0.9.0.1. If you can easily reproduce, could you test if the
newer release fixes the issue for you?

-Ewen

On Mon, Feb 22, 2016 at 9:37 PM, Kris K  wrote:

> Hi All,
>
> I saw an issue today wherein the producers (new producers) started to fail
> with org.apache.kafka.common.errors.TimeoutException: Failed to update
> metadata after 6 ms.
>
> This issue happened when we took down one of the 6 brokers (running version
> 0.8.2.1) for planned maintenance (graceful shutdown).
>
> This broker happens to be the last one in the list of 3 brokers that are
> part of bootstrap.servers.
>
> As per my understanding, the producers should have used the other two
> brokers in the bootstrap.servers list for metadata calls. But this did not
> happen.
>
> Is there any producer property that could have caused this? Any way to
> figure out which broker is being used by producers for metadata calls?
>
> Thanks,
> Kris
>



-- 
Thanks,
Ewen


[jira] [Commented] (KAFKA-3274) Document command line tools

2016-02-23 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160150#comment-15160150
 ] 

Gwen Shapira commented on KAFKA-3274:
-

Docs should be collocated with the source, since they are properly versioned.

> Document command line tools
> ---
>
> Key: KAFKA-3274
> URL: https://issues.apache.org/jira/browse/KAFKA-3274
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> While the command line tools have a "help" option, it is fairly brief (by 
> design), and they also lack documentation beyond the command line options 
> (covering things from environment variables to design patterns).
> Will be nice to add this to the docs.





Jenkins build is back to normal : kafka-trunk-jdk8 #387

2016-02-23 Thread Apache Jenkins Server
See 



add karma to jira

2016-02-23 Thread Christian Posta
Can someone add karma to my username 'ceposta' so that I can assign JIRAs
to myself?

thanks!

-- 
*Christian Posta*
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io


[jira] [Commented] (KAFKA-2698) add paused API

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160048#comment-15160048
 ] 

ASF GitHub Bot commented on KAFKA-2698:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/962

KAFKA-2698: Add paused() method to o.a.k.c.c.Consumer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2698

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/962.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #962


commit 21c3cafc714c4b25673b87a9b62c81a87d720f65
Author: Tom Lee 
Date:   2015-11-02T00:58:38Z

KAFKA-2698: Add paused() method to o.a.k.c.c.Consumer




> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Priority: Critical
> Fix For: 0.9.1.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set?
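A toy model of the action/query pairing the ticket proposes — this is not the real KafkaConsumer, just an illustration of `pause()` as the action and `paused()` as the matching query, mirroring `subscribe()`/`subscription()` and `assign()`/`assignment()`:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative only: the String "partition" stands in for TopicPartition.
class PausableConsumer {
    private final Set<String> paused = new HashSet<>();

    void pause(String partition) { paused.add(partition); }

    void resume(String partition) { paused.remove(partition); }

    // The proposed query API: a read-only view of the paused set.
    Set<String> paused() { return Collections.unmodifiableSet(paused); }
}
```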





[GitHub] kafka pull request: KAFKA-2698: Add paused() method to o.a.k.c.c.C...

2016-02-23 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/962

KAFKA-2698: Add paused() method to o.a.k.c.c.Consumer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2698

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/962.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #962


commit 21c3cafc714c4b25673b87a9b62c81a87d720f65
Author: Tom Lee 
Date:   2015-11-02T00:58:38Z

KAFKA-2698: Add paused() method to o.a.k.c.c.Consumer






[jira] [Commented] (KAFKA-1382) Update zkVersion on partition state update failures

2016-02-23 Thread dude (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160030#comment-15160030
 ] 

dude commented on KAFKA-1382:
-

We hit this when there is a network problem in which Kafka broker 3 cannot 
connect to ZooKeeper. After the network returns to normal, the broker cannot 
update ZooKeeper: it loops on "cached zkVersion not equal to that in zookeeper" 
indefinitely, and the broker cannot recover until it is restarted.

> Update zkVersion on partition state update failures
> ---
>
> Key: KAFKA-1382
> URL: https://issues.apache.org/jira/browse/KAFKA-1382
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.1.2, 0.8.2.0
>
> Attachments: KAFKA-1382.patch, KAFKA-1382_2014-05-30_21:19:21.patch, 
> KAFKA-1382_2014-05-31_15:50:25.patch, KAFKA-1382_2014-06-04_12:30:40.patch, 
> KAFKA-1382_2014-06-07_09:00:56.patch, KAFKA-1382_2014-06-09_18:23:42.patch, 
> KAFKA-1382_2014-06-11_09:37:22.patch, KAFKA-1382_2014-06-16_13:50:16.patch, 
> KAFKA-1382_2014-06-16_14:19:27.patch
>
>
> Our updateIsr code is currently:
>   private def updateIsr(newIsr: Set[Replica]) {
> debug("Updated ISR for partition [%s,%d] to %s".format(topic, 
> partitionId, newIsr.mkString(",")))
> val newLeaderAndIsr = new LeaderAndIsr(localBrokerId, leaderEpoch, 
> newIsr.map(r => r.brokerId).toList, zkVersion)
> // use the epoch of the controller that made the leadership decision, 
> instead of the current controller epoch
> val (updateSucceeded, newVersion) = 
> ZkUtils.conditionalUpdatePersistentPath(zkClient,
>   ZkUtils.getTopicPartitionLeaderAndIsrPath(topic, partitionId),
>   ZkUtils.leaderAndIsrZkData(newLeaderAndIsr, controllerEpoch), zkVersion)
> if (updateSucceeded){
>   inSyncReplicas = newIsr
>   zkVersion = newVersion
>   trace("ISR updated to [%s] and zkVersion updated to 
> [%d]".format(newIsr.mkString(","), zkVersion))
> } else {
>   info("Cached zkVersion [%d] not equal to that in zookeeper, skip 
> updating ISR".format(zkVersion))
> }
> We encountered an interesting scenario recently when a large producer fully
> saturated the broker's NIC for over an hour. The large volume of data led to
> a number of ISR shrinks (and subsequent expands). The NIC saturation
> affected the zookeeper client heartbeats and led to a session timeout. The
> timeline was roughly as follows:
> - Attempt to expand ISR
> - Expansion written to zookeeper (confirmed in zookeeper transaction logs)
> - Session timeout after around 13 seconds (the configured timeout is 20
>   seconds) so that lines up.
> - zkclient reconnects to zookeeper (with the same session ID) and retries
>   the write - but uses the old zkVersion. This fails because the zkVersion
>   has already been updated (above).
> - The ISR expand keeps failing after that and the only way to get out of it
>   is to bounce the broker.
> In the above code, if the zkVersion is different we should probably update
> the cached version and even retry the expansion until it succeeds.
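The suggested fix can be sketched with a minimal model of ZooKeeper's version-checked write: on a version mismatch, refresh the cached version and retry instead of looping forever on the stale one. All names below are illustrative stand-ins, not Kafka's or ZooKeeper's actual APIs:

```java
// Minimal stand-in for ZooKeeper's conditional update semantics, as used
// by conditionalUpdatePersistentPath in the quoted code.
class VersionedStore {
    private String data = "";
    private int version = 0;

    synchronized boolean conditionalUpdate(String newData, int expectedVersion) {
        if (expectedVersion != version) {
            return false; // caller held a stale version
        }
        data = newData;
        version++;
        return true;
    }

    synchronized int version() { return version; }

    synchronized String data() { return data; }
}

class IsrUpdater {
    // On failure, refresh the cached version and retry, as the ticket's
    // final paragraph suggests, rather than skipping the update forever.
    static int updateWithRetry(VersionedStore store, String newIsr, int cachedVersion) {
        while (!store.conditionalUpdate(newIsr, cachedVersion)) {
            cachedVersion = store.version();
        }
        return store.version();
    }
}
```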





[GitHub] kafka pull request: KAFKA-3018: added topic name validator to Prod...

2016-02-23 Thread choang
GitHub user choang opened a pull request:

https://github.com/apache/kafka/pull/961

KAFKA-3018: added topic name validator to ProducerRecord

Added validation for topic name when creating a `ProducerRecord`, and added 
corresponding tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/choang/kafka kafka-3018

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/961.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #961


commit de2e599268189921bf768de66bf003e56b9e879f
Author: Chi Hoang 
Date:   2016-02-23T22:06:58Z

KAFKA-3018: added topic name validator to ProducerRecord






[jira] [Commented] (KAFKA-3018) Kafka producer hangs on producer.close() call if the producer topic contains single quotes in the topic name

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160018#comment-15160018
 ] 

ASF GitHub Bot commented on KAFKA-3018:
---

GitHub user choang opened a pull request:

https://github.com/apache/kafka/pull/961

KAFKA-3018: added topic name validator to ProducerRecord

Added validation for topic name when creating a `ProducerRecord`, and added 
corresponding tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/choang/kafka kafka-3018

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/961.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #961


commit de2e599268189921bf768de66bf003e56b9e879f
Author: Chi Hoang 
Date:   2016-02-23T22:06:58Z

KAFKA-3018: added topic name validator to ProducerRecord




> Kafka producer hangs on producer.close() call if the producer topic contains 
> single quotes in the topic name
> 
>
> Key: KAFKA-3018
> URL: https://issues.apache.org/jira/browse/KAFKA-3018
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: kanav anand
>Assignee: Jun Rao
>
> While creating a topic with quotes in the name throws an exception, trying to 
> close a producer configured with such a topic name causes the producer to hang.
> This can be easily replicated and verified by setting topic.name for a producer 
> to a string containing single quotes.





[jira] [Commented] (KAFKA-3201) Add system test for KIP-31 and KIP-32 - Upgrade Test

2016-02-23 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15160007#comment-15160007
 ] 

Jiangjie Qin commented on KAFKA-3201:
-

[~apovzner] Yes, that was what I had in mind. Thanks for confirming.

> Add system test for KIP-31 and KIP-32 - Upgrade Test
> 
>
> Key: KAFKA-3201
> URL: https://issues.apache.org/jira/browse/KAFKA-3201
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Jiangjie Qin
>Assignee: Anna Povzner
> Fix For: 0.10.0.0
>
>
> This system test should test the procedure to upgrade a Kafka broker from 
> 0.8.x and 0.9.0 to 0.10.0
> The procedure is documented in KIP-32:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message





Build failed in Jenkins: kafka-trunk-jdk8 #386

2016-02-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3245: config for changelog replication factor

[wangguoz] HOTFIX: fix consumer config for streams

[wangguoz] KAFKA-3046: Add ByteBuffer Serializer and Deserializer

[cshapi] MINOR: KTable.count() to only take a selector for key

--
[...truncated 3548 lines...]

org.apache.kafka.common.SerializeCompatibilityTopicPartitionTest > 
testTopiPartitionSerializationCompatibility PASSED

org.apache.kafka.common.serialization.SerializationTest > testStringSerializer 
PASSED

org.apache.kafka.common.serialization.SerializationTest > testIntegerSerializer 
PASSED

org.apache.kafka.common.serialization.SerializationTest > 
testByteBufferSerializer PASSED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testBasicTypes PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultRange PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidators PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionDefault PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueExceptions PASSED

org.apache.kafka.common.protocol.ErrorsTest > testForExceptionInheritance PASSED

org.apache.kafka.common.protocol.ErrorsTest > testNoneException PASSED

org.apache.kafka.common.protocol.ErrorsTest > testUniqueErrorCodes PASSED

org.apache.kafka.common.protocol.ErrorsTest > testExceptionsAreNotGeneric PASSED

org.apache.kafka.common.protocol.ProtoUtilsTest > schemaVersionOutOfRange PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testArray 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > 
testNullableDefault PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testSimple 
PASSED

org.apache.kafka.common.requests.RequestResponseTest > testSerialization PASSED

org.apache.kafka.common.requests.RequestResponseTest > fetchResponseVersionTest 
PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
produceResponseVersionTest PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
testControlledShutdownResponse PASSED

org.apache.kafka.common.requests.RequestResponseTest > 
testRequestHeaderWithNullClientId PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.metrics.MetricsTest > testSimpleStats PASSED

org.apache.kafka.common.metrics.MetricsTest > testOldDataHasNoEffect PASSED

org.apache.kafka.common.metrics.MetricsTest > testQuotasEquality PASSED

org.apache.kafka.common.metrics.MetricsTest > testRemoveInactiveMetrics PASSED

org.apache.kafka.common.metrics.MetricsTest > testMetricName PASSED

org.apache.kafka.common.metrics.MetricsTest > testRateWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testTimeWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testEventWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testRemoveMetric PASSED

org.apache.kafka.common.metrics.MetricsTest > testBadSensorHierarchy PASSED

org.apache.kafka.common.metrics.MetricsTest > testRemoveSensor PASSED

org.apache.kafka.common.metrics.MetricsTest > testPercentiles PASSED

org.apache.kafka.common.metrics.MetricsTest > testDuplicateMetricName PASSED

org.apache.kafka.common.metrics.MetricsTest > testQuotas PASSED

org.apache.kafka.common.metrics.MetricsTest > testHierarchicalSensors PASSED

org.apache.kafka.common.metrics.JmxReporterTest > testJmxRegistration PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testHistogram PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testConstantBinScheme 
PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testLinearBinScheme PASSED


Re: [VOTE] Make next Kafka release 0.10.0.0 instead of 0.9.1.0

2016-02-23 Thread Becket Qin
Thanks everyone for voting.

The vote has passed with +6 (binding) and +5 (non-binding).

Jiangjie (Becket) Qin

On Tue, Feb 23, 2016 at 2:38 PM, Harsha  wrote:

> +1
>
> On Tue, Feb 23, 2016, at 02:25 PM, Christian Posta wrote:
> > +1 non binding
> >
> > On Tue, Feb 23, 2016 at 3:18 PM, Gwen Shapira  wrote:
> >
> > > +1
> > >
> > > On Tue, Feb 23, 2016 at 1:58 PM, Jun Rao  wrote:
> > >
> > > > +1.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Fri, Feb 19, 2016 at 11:00 AM, Becket Qin 
> > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > We would like to start this voting thread on making next Kafka
> release
> > > > > 0.10.0.0 instead of 0.9.1.0.
> > > > >
> > > > > The next Kafka release will have several significant important new
> > > > > features/changes such as Kafka Stream, Message Format Change,
> Client
> > > > > Interceptors and several new consumer API changes, etc. We feel it
> is
> > > > > better to make next Kafka release 0.10.0.0 instead of 0.9.1.0.
> > > > >
> > > > > Some previous discussions are in the following thread.
> > > > >
> > > > >
> > > >
> > >
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201602.mbox/%3ccabtagwfzigx1frzd020vk9fanj0s9nkszfuwk677bqxfuuc...@mail.gmail.com%3E
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jiangjie (Becket) Qin
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > *Christian Posta*
> > twitter: @christianposta
> > http://www.christianposta.com/blog
> > http://fabric8.io
>


Re: [VOTE] KIP-33 - Add a time based log index to Kafka

2016-02-23 Thread Becket Qin
Bump.

Per Jun's comments during the KIP hangout, I have updated the wiki with the
upgrade plan for KIP-33.

Let's vote!

Thanks,

Jiangjie (Becket) Qin

On Wed, Feb 3, 2016 at 10:32 AM, Becket Qin  wrote:

> Hi all,
>
> I would like to initiate the vote for KIP-33.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index
>
> A good amount of the KIP has been touched during the discussion on KIP-32.
> So I also put the link to KIP-32 here for reference.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message
>
> Thanks,
>
> Jiangjie (Becket) Qin
>


Build failed in Jenkins: kafka-trunk-jdk7 #1060

2016-02-23 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3245: config for changelog replication factor

[wangguoz] HOTFIX: fix consumer config for streams

[wangguoz] KAFKA-3046: Add ByteBuffer Serializer and Deserializer

[cshapi] MINOR: KTable.count() to only take a selector for key

[cshapi] HOTFIX: Add missing file for KeyValue unit test

[me] KAFKA-3007: implement max.poll.records (KIP-41)

[cshapi] KAFKA-3272: Add debugging options to kafka-run-class.sh so we can 
easily

--
[...truncated 2986 lines...]

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED


[jira] [Commented] (KAFKA-3274) Document command line tools

2016-02-23 Thread Christian Posta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159966#comment-15159966
 ] 

Christian Posta commented on KAFKA-3274:


Where's the best place to put this? In the docs collocated with the src (i.e., 
https://github.com/christian-posta/kafka/tree/trunk/docs) or in the confluence 
wiki?

> Document command line tools
> ---
>
> Key: KAFKA-3274
> URL: https://issues.apache.org/jira/browse/KAFKA-3274
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> While the command line tools have a "help" option, it is fairly brief (by 
> design), and they also lack documentation outside the command line options 
> (things from environment variables to design patterns).
> It would be nice to add this to the docs.





[jira] [Created] (KAFKA-3274) Document command line tools

2016-02-23 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-3274:
---

 Summary: Document command line tools
 Key: KAFKA-3274
 URL: https://issues.apache.org/jira/browse/KAFKA-3274
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


While the command line tools have a "help" option, it is fairly brief (by design), 
and they also lack documentation outside the command line options (things from 
environment variables to design patterns).

It would be nice to add this to the docs.





[jira] [Commented] (KAFKA-3273) MessageFormatter and MessageReader interfaces should be resilient to changes

2016-02-23 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159906#comment-15159906
 ] 

Ismael Juma commented on KAFKA-3273:


Thanks for clarifying [~ewencp].

> MessageFormatter and MessageReader interfaces should be resilient to changes
> 
>
> Key: KAFKA-3273
> URL: https://issues.apache.org/jira/browse/KAFKA-3273
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> They should use `ConsumerRecord` and `ProducerRecord` as parameters and 
> return types respectively in order to avoid breaking users each time a new 
> parameter is added.
> An additional question is whether we need to maintain compatibility with 
> previous releases. [~junrao] suggested that we do not, but [~ewencp] thought 
> we should.
> Note that the KIP-31/32 change has broken compatibility for 
> `MessageFormatter` so we need to do _something_ for the next release.





[jira] [Commented] (KAFKA-3272) Add debugging options to kafka-run-class.sh so we can easily run remote debugging

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159905#comment-15159905
 ] 

ASF GitHub Bot commented on KAFKA-3272:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/955


> Add debugging options to kafka-run-class.sh so we can easily run remote 
> debugging
> -
>
> Key: KAFKA-3272
> URL: https://issues.apache.org/jira/browse/KAFKA-3272
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.1
>Reporter: Christian Posta
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> Add a KAFKA_DEBUG environment variable to easily enable remote debugging





[jira] [Resolved] (KAFKA-3272) Add debugging options to kafka-run-class.sh so we can easily run remote debugging

2016-02-23 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-3272.
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 955
[https://github.com/apache/kafka/pull/955]

> Add debugging options to kafka-run-class.sh so we can easily run remote 
> debugging
> -
>
> Key: KAFKA-3272
> URL: https://issues.apache.org/jira/browse/KAFKA-3272
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.1
>Reporter: Christian Posta
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> Add a KAFKA_DEBUG environment variable to easily enable remote debugging





[GitHub] kafka pull request: KAFKA-3272: Add debugging options to kafka-run...

2016-02-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/955




[jira] [Commented] (KAFKA-3273) MessageFormatter and MessageReader interfaces should be resilient to changes

2016-02-23 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159899#comment-15159899
 ] 

Ewen Cheslack-Postava commented on KAFKA-3273:
--

To clarify my thoughts, I don't think these interfaces are deal breakers wrt 
compatibility, but it is inconvenient to change them if you have custom 
formatters (which folks using different serialization formats might) or if you 
want to use a single class across multiple Kafka versions for compatibility 
tests. The latter becomes more annoying if the relevant formatter wasn't 
already available in the original version. It can be worked around, but is 
inconvenient.

If we're going to break compatibility, changing to an interface with the 
Consumer or ProducerRecord is definitely a better choice than just continuing 
to add parameters.

> MessageFormatter and MessageReader interfaces should be resilient to changes
> 
>
> Key: KAFKA-3273
> URL: https://issues.apache.org/jira/browse/KAFKA-3273
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> They should use `ConsumerRecord` and `ProducerRecord` as parameters and 
> return types respectively in order to avoid breaking users each time a new 
> parameter is added.
> An additional question is whether we need to maintain compatibility with 
> previous releases. [~junrao] suggested that we do not, but [~ewencp] thought 
> we should.
> Note that the KIP-31/32 change has broken compatibility for 
> `MessageFormatter` so we need to do _something_ for the next release.





[jira] [Created] (KAFKA-3273) MessageFormatter and MessageReader interfaces should be resilient to changes

2016-02-23 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3273:
--

 Summary: MessageFormatter and MessageReader interfaces should be 
resilient to changes
 Key: KAFKA-3273
 URL: https://issues.apache.org/jira/browse/KAFKA-3273
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.9.1.0


They should use `ConsumerRecord` and `ProducerRecord` as parameters and return 
types respectively in order to avoid breaking clients each time a new parameter 
is added.

An additional question is whether we need to maintain compatibility with 
previous releases. [~junrao] suggested that we do not, but [~ewencp] thought we 
should.

Note that the KIP-31/32 change has broken compatibility for `MessageFormatter` 
so we need to do _something_ for the next release.
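The compatibility argument can be illustrated with a toy sketch: passing a record object instead of a growing positional parameter list means new fields (e.g. the KIP-32 timestamp) can be added without breaking existing implementations. The class names below are illustrative, not Kafka's actual interfaces.

```python
class ConsumerRecord:
    def __init__(self, topic, key, value, timestamp=None):
        self.topic, self.key, self.value = topic, key, value
        self.timestamp = timestamp   # newly added field: old formatters unaffected

class DefaultFormatter:
    # Written before 'timestamp' existed, yet still works unchanged, because
    # it receives the whole record rather than a fixed list of parameters.
    def write(self, record):
        return "%s: %s" % (record.topic, record.value)

print(DefaultFormatter().write(ConsumerRecord("t", None, "hello", timestamp=123)))
# -> t: hello
```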





[jira] [Updated] (KAFKA-3273) MessageFormatter and MessageReader interfaces should be resilient to changes

2016-02-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3273:
---
Description: 
They should use `ConsumerRecord` and `ProducerRecord` as parameters and return 
types respectively in order to avoid breaking users each time a new parameter 
is added.

An additional question is whether we need to maintain compatibility with 
previous releases. [~junrao] suggested that we do not, but [~ewencp] thought we 
should.

Note that the KIP-31/32 change has broken compatibility for `MessageFormatter` 
so we need to do _something_ for the next release.

  was:
They should use `ConsumerRecord` and `ProducerRecord` as parameters and return 
types respectively in order to avoid breaking clients each time a new parameter 
is added.

An additional question is whether we need to maintain compatibility with 
previous releases. [~junrao] suggested that we do not, but [~ewencp] thought we 
should.

Note that the KIP-31/32 change has broken compatibility for `MessageFormatter` 
so we need to do _something_ for the next release.


> MessageFormatter and MessageReader interfaces should be resilient to changes
> 
>
> Key: KAFKA-3273
> URL: https://issues.apache.org/jira/browse/KAFKA-3273
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.1.0
>
>
> They should use `ConsumerRecord` and `ProducerRecord` as parameters and 
> return types respectively in order to avoid breaking users each time a new 
> parameter is added.
> An additional question is whether we need to maintain compatibility with 
> previous releases. [~junrao] suggested that we do not, but [~ewencp] thought 
> we should.
> Note that the KIP-31/32 change has broken compatibility for 
> `MessageFormatter` so we need to do _something_ for the next release.





[jira] [Resolved] (KAFKA-3007) Implement max.poll.records for new consumer (KIP-41)

2016-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3007.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 931
[https://github.com/apache/kafka/pull/931]

> Implement max.poll.records for new consumer (KIP-41)
> 
>
> Key: KAFKA-3007
> URL: https://issues.apache.org/jira/browse/KAFKA-3007
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: aarti gupta
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> Currently, consumer.poll(timeout) returns all messages that have not been 
> acked since the last fetch. The only way to process a single message is to 
> throw away all but the first message in the list.
> This means we are required to fetch all messages into memory; coupled with 
> the client not being thread-safe (i.e. we cannot use a different thread to 
> ack messages), this makes it hard to consume messages when the order of 
> message arrival is important and a large number of messages are pending.
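What max.poll.records (KIP-41) gives the application can be sketched as follows: each poll() returns at most N records from the fetched data, with the remainder buffered for the next call, instead of handing back everything fetched at once. This is an illustration of the behavior, not the consumer's actual implementation.

```python
from collections import deque

class BoundedPoller:
    def __init__(self, max_poll_records):
        self.max_poll_records = max_poll_records
        self.buffer = deque()

    def _fetch(self):
        # Stand-in for a network fetch that may return a large batch.
        return ["m%d" % i for i in range(7)]

    def poll(self):
        # Only fetch when the buffer is drained; otherwise serve buffered data.
        if not self.buffer:
            self.buffer.extend(self._fetch())
        return [self.buffer.popleft()
                for _ in range(min(self.max_poll_records, len(self.buffer)))]

p = BoundedPoller(max_poll_records=3)
print(p.poll())  # -> ['m0', 'm1', 'm2']
print(p.poll())  # -> ['m3', 'm4', 'm5']
print(p.poll())  # -> ['m6']
```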





[GitHub] kafka pull request: KAFKA-3007: implement max.poll.records (KIP-41...

2016-02-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/931




[jira] [Commented] (KAFKA-3007) Implement max.poll.records for new consumer (KIP-41)

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159888#comment-15159888
 ] 

ASF GitHub Bot commented on KAFKA-3007:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/931


> Implement max.poll.records for new consumer (KIP-41)
> 
>
> Key: KAFKA-3007
> URL: https://issues.apache.org/jira/browse/KAFKA-3007
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: aarti gupta
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> Currently, consumer.poll(timeout) returns all messages that have not been 
> acked since the last fetch. The only way to process a single message is to 
> throw away all but the first message in the list.
> This means we are required to fetch all messages into memory; coupled with 
> the client not being thread-safe (i.e. we cannot use a different thread to 
> ack messages), this makes it hard to consume messages when the order of 
> message arrival is important and a large number of messages are pending.





Re: [DISCUSS] KIP-47 - Add timestamp-based log deletion policy

2016-02-23 Thread Joel Koshy
Great - thanks for clarifying.

Joel

On Tue, Feb 23, 2016 at 1:47 PM, Bill Warshaw  wrote:

> Sorry that I didn't see this comment before the meeting Joel.  I'll try to
> clarify what I said at the meeting:
>
> - The KIP currently states that timestamp-based log deletion will only work
> with LogAppendTime.  I need to update the KIP to reflect that, after the
> work is done for KIP-33, it will work with both LogAppendTime and
> CreateTime.
> - To use the existing time-based retention mechanism to delete a precise
> range of messages, a client application would need to do the following:
>   - by default, turn off these retention mechanisms
>   - when the application wishes to delete a range of messages which were
> sent before a certain time, compute an approximate value to set
> "log.retention.minutes" to, to create a window of messages based on that
> timestamp that are ok to delete.  There is some degree of imprecision
> implied here.
>   - wait until we are confident that the log retention mechanism has been
> run and deleted any stale segments
>   - reset "log.retention.minutes" to turn off time-based log retention
> until the next time the client application wants to delete something
>
> - To use the proposed timestamp-based retention mechanism, there is only
> one step: the application just has to set "log.retention.min.timestamp" to
> whatever time boundary it deems fit.  It doesn't need to compute any fuzzy
> windows, try to wait until asynchronous processes have been completed or
> continually flip settings between enabled and disabled.
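The one-step workflow described above can be sketched as the following segment-eligibility check (illustrative only; per the discussion, restricted to LogAppendTime until KIP-33 lands, and the `Segment` shape and the retention-cutoff parameter name are assumptions): a segment is deletable once the timestamp of its *last* message predates the configured cutoff.

```python
class Segment:
    def __init__(self, name, last_message_timestamp):
        self.name = name
        self.last_message_timestamp = last_message_timestamp

def deletable_segments(segments, retention_min_timestamp):
    # Every message in a segment is no newer than its last message, so the
    # whole segment can go once the last message predates the cutoff.
    return [s for s in segments
            if s.last_message_timestamp < retention_min_timestamp]

segments = [Segment("00000000.log", 1000), Segment("00001000.log", 5000)]
print([s.name for s in deletable_segments(segments, retention_min_timestamp=2000)])
# -> ['00000000.log']
```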
>
> I will update the KIP to reflect the discussion around LogAppendTime vs
> CreateTime and the work being done in KIP-33.
>
> Thanks,
> Bill
>
>
> On Tue, Feb 23, 2016 at 1:22 PM, Joel Koshy  wrote:
>
> > I'm having some trouble reconciling the current proposal with your
> > original requirement, which was essentially being able to purge log data
> > up to a precise point (an offset). The KIP currently suggests that
> > timestamp-based deletion would only work with LogAppendTime, so it does
> > not seem significantly different from time-based retention (after
> > KIP-32/33) - IOW to me it appears that you would need to use CreateTime
> > and not LogAppendTime. Also, one of the rejected alternatives observes
> > that changing the existing configuration settings to try to flush ranges
> > of a given partition's log is problematic, but it seems to me you would
> > have to do this with timestamp-based deletion as well, right? I think it
> > would be useful for me if you or anyone else can go over the exact
> > mechanics/workflow for accomplishing precise purges at today's KIP
> > meeting.
> >
> > Thanks,
> >
> > Joel
> >
> > On Monday, February 22, 2016, Bill Warshaw  wrote:
> >
> > > Sounds good.  I'll hold off on sending out a VOTE thread until after
> > > the KIP meeting tomorrow.
> > >
> > > On Mon, Feb 22, 2016 at 12:56 PM, Becket Qin  wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > I think it makes sense to implement KIP-47 after KIP-33 so we can
> > > > make it work for both LogAppendTime and CreateTime.
> > > >
> > > > And yes, I'm actively working on KIP-33. I had a voting thread on
> > > > KIP-33 before and I'll bump it up.
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > > >
> > > >
> > > > On Mon, Feb 22, 2016 at 9:11 AM, Jun Rao  wrote:
> > > >
> > > > > Becket,
> > > > >
> > > > > Since you submitted KIP-33, are you actively working on that? If
> > > > > so, it would make sense to implement KIP-47 after KIP-33 so that
> > > > > it works for both CreateTime and LogAppendTime.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Feb 19, 2016 at 6:25 PM, Bill Warshaw  wrote:
> > > > >
> > > > > > Hi Jun,
> > > > > >
> > > > > > 1.  I thought more about Andrew's comment about LogAppendTime.
> > > > > > The time-based index you are referring to is associated with
> > > > > > KIP-33, correct?  Currently my implementation is just checking
> > > > > > the last message in a segment, so we're restricted to
> > > > > > LogAppendTime.  When the work for KIP-33 is completed, it sounds
> > > > > > like CreateTime would also be valid.  Do you happen to know if
> > > > > > anyone is currently working on KIP-33?
> > > > > >
> > > > > > 2. I did update the wiki after reading your original comment,
> > > > > > but reading over it again I realize I could word a couple things
> > > > > > more clearly.  I will do that tonight.
> > > > > >
> > > > > > Bill
> > > > > >
> > > > > > On Fri, Feb 19, 2016 at 7:02 PM, Jun Rao  wrote:
> > > > > >
> > > > > > > Hi, Bill,
> > > > > > >
> > > > > > > I replied with the following comments earlier to the thread.
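
For reference, the "check the last message in a segment" rule Bill describes can be sketched as follows. This is a simplified, hypothetical model (the `Segment` class and its field names are stand-ins, not Kafka's actual log classes): a segment is safe to purge only if even its newest message is older than the requested cutoff, which yields a precise purge point when timestamps are monotonic, as with LogAppendTime.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TimestampRetentionSketch {
    // Hypothetical stand-in for a log segment; only the timestamp of its
    // last (newest) message matters for this check.
    static class Segment {
        final String name;
        final long lastMessageTimestampMs;
        Segment(String name, long ts) {
            this.name = name;
            this.lastMessageTimestampMs = ts;
        }
    }

    // A segment is deletable only when its newest message is older than
    // the cutoff. With LogAppendTime, timestamps are monotonic within a
    // log, so this never deletes data newer than the cutoff.
    static List<Segment> deletableBefore(List<Segment> segments, long cutoffMs) {
        List<Segment> deletable = new ArrayList<>();
        for (Segment s : segments) {
            if (s.lastMessageTimestampMs < cutoffMs)
                deletable.add(s);
        }
        return deletable;
    }

    public static void main(String[] args) {
        List<Segment> segs = Arrays.asList(
                new Segment("00000000.log", 100L),
                new Segment("00001000.log", 200L),
                new Segment("00002000.log", 300L));
        System.out.println(deletableBefore(segs, 250L).size()); // prints 2
    }
}
```

With CreateTime, timestamps need not be monotonic, so a single last-message check no longer bounds what a segment contains; that is the gap KIP-33's time index addresses.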
> 

[GitHub] kafka pull request: HOTFIX: Add missing file for KeyValue unit tes...

2016-02-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/960


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-23 Thread Ismael Juma
Hi Andrew,

Thanks for your input.

On Tue, Feb 23, 2016 at 11:16 AM, Andrew Schofield <
andrew_schofield_j...@outlook.com> wrote:

> From my point of view, it seems very odd to deprecate the Scala producer
> but not the consumer. So, I would vote to deprecate them both in 0.10.
>

I explained in other emails why I think we should not deprecate the old
consumers in 0.10.0.0.

> It doesn't sound like there's an established mechanism for deprecation. So,
> for the sake of discussion, how about:
> * Start with deprecation annotations. It's just a marker that they're now
> living on borrowed time.
> * Remove the ability to connect from these deprecated clients two releases
> later - so I mean 0.12, not 0.10.0.2.
>

I don't think we have a good reason to remove the ability of these clients
to connect to Kafka brokers. We do want to remove the classes from the core
JAR at some point though.

Ismael


[GitHub] kafka pull request: HOTFIX: Add missing file for KeyValue unit tes...

2016-02-23 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/960

HOTFIX: Add missing file for KeyValue unit test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KCountP1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/960.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #960


commit 913a617240eadb853ca8196df97e9578eca8f729
Author: Guozhang Wang 
Date:   2016-02-04T22:37:31Z

first version

commit 10eaddb32426e00c792b5856ba73c48becd268f4
Author: Guozhang Wang 
Date:   2016-02-05T18:30:21Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
KCount

commit cfb51035014f1ce5e9bb683fa74730f35a128dfa
Author: Guozhang Wang 
Date:   2016-02-05T18:51:04Z

github comments

commit c41b4b13d301946523b5f3c5716757b8cf09cdb9
Author: Guozhang Wang 
Date:   2016-02-23T22:02:33Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
KCount

commit be6ca07c817b72c087155e089d7d015a17c6cc93
Author: Guozhang Wang 
Date:   2016-02-23T22:48:26Z

add unit tests

commit ea2896e94451d90cf05fe6eb849416e84ff1649b
Author: Guozhang Wang 
Date:   2016-02-23T23:26:06Z

add missing file

commit 0910ef18cfddf9ab3e761fc660f375301365b91d
Author: Guozhang Wang 
Date:   2016-02-23T23:26:39Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
KCountP1






[GitHub] kafka pull request: MINOR: KTable.count() to only take a selector ...

2016-02-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/872




[jira] [Commented] (KAFKA-3046) add ByteBuffer Serializer

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159831#comment-15159831
 ] 

ASF GitHub Bot commented on KAFKA-3046:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/718


> add ByteBuffer Serializer
> --
>
> Key: KAFKA-3046
> URL: https://issues.apache.org/jira/browse/KAFKA-3046
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Xin Wang
> Fix For: 0.10.0.0
>
>
> ByteBuffer is widely used in many scenarios. (e.g., storm-sql can specify
> Kafka as the external data source, and we can use ByteBuffer as the value
> serializer.) Adding an official ByteBuffer serializer will be convenient
> for users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
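
A minimal sketch of what such a ByteBuffer serializer does. The `Serializer` interface is re-declared here in simplified form so the example is self-contained (the real one lives in `org.apache.kafka.common.serialization` and also has `configure` and `close` methods); the buffer-handling details are illustrative, not a verbatim copy of the merged patch.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Simplified stand-in for Kafka's Serializer interface.
interface Serializer<T> {
    byte[] serialize(String topic, T data);
}

class ByteBufferSerializer implements Serializer<ByteBuffer> {
    @Override
    public byte[] serialize(String topic, ByteBuffer data) {
        if (data == null)
            return null;
        data.rewind();
        // If the buffer is heap-backed and exactly covers its array,
        // reuse the backing array instead of copying.
        if (data.hasArray()) {
            byte[] arr = data.array();
            if (data.arrayOffset() == 0 && arr.length == data.remaining())
                return arr;
        }
        // Otherwise copy the remaining bytes out.
        byte[] ret = new byte[data.remaining()];
        data.get(ret, 0, ret.length);
        data.rewind();
        return ret;
    }
}

public class ByteBufferSerializerDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {1, 2, 3});
        byte[] out = new ByteBufferSerializer().serialize("topic", buf);
        System.out.println(Arrays.toString(out)); // prints [1, 2, 3]
    }
}
```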


[jira] [Resolved] (KAFKA-3046) add ByteBuffer Serializer

2016-02-23 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3046.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 718
[https://github.com/apache/kafka/pull/718]

> add ByteBuffer Serializer
> --
>
> Key: KAFKA-3046
> URL: https://issues.apache.org/jira/browse/KAFKA-3046
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Xin Wang
> Fix For: 0.10.0.0
>
>
> ByteBuffer is widely used in many scenarios. (e.g., storm-sql can specify
> Kafka as the external data source, and we can use ByteBuffer as the value
> serializer.) Adding an official ByteBuffer serializer will be convenient
> for users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3046: add ByteBuffer Serializer.

2016-02-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/718




Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-23 Thread Ismael Juma
Hi Flavio,

On Tue, Feb 23, 2016 at 10:46 AM, Flavio Junqueira  wrote:

> It does make sense, thanks for the clarification. If we deprecate the
> producer first, does it mean that the following release won't have a scala
> producer but will have a scala consumer? Actually I should have asked this
> question first: what's the deprecation path precisely?
>

Not necessarily. I think I've covered these questions in my reply to Jay.

Ismael


Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-23 Thread Ismael Juma
Hi Jay,

Comments inline.

On Tue, Feb 23, 2016 at 10:38 AM, Jay Kreps  wrote:

> Would it make more sense to be able to announce both new clients stable and
> deprecate both the scala clients at the same time (irrespective of whether
> that is next release or one after)?


I agree that there are benefits if we do it this way. However, I also think
that there are benefits in giving advance warning if we have an alternative
that is fully featured, production-ready and where we have given users some
time to migrate (ie the 0.9.0.0 cycle). We did not wait for the new
consumer to be ready before we added the new producer after all. The
trade-offs are similar, in my opinion.

> Otherwise, is an intermediate state
> where we recommend you get off the scala producer but not the scala
> consumer a bit awkward? I kind of think it is but don't have a strong
> feeling either way so I'm +0.
>

We would be recommending you get off both, but we would only add the
deprecation warning for the old producers for 0.10.0.0. I'd like us to be
nice to our users and avoid spamming them with warnings without giving them
a cycle to move to the new and recommended implementation. We could remove
all the old clients at the same time, if we think that's better (ie the
release after 0.10.0.0 at the earliest).

> Also what does deprecate mean? Does it mean we announce a schedule for
> their removal or does it mean we will add the @deprecated annoyance markers
> or both?
>

@deprecated markers for sure. With regards to the schedule for removal, I
am not sure and I would like to hear opinions from people who are still
using the old producers. Removing them is great for us from a
maintainability perspective, but we also need to take into account how it
will affect people who want to upgrade to the release where they are
removed. I am not sure if we _need_ to commit to a particular release for
removal. It may make sense to make that call after 0.10.0.0 goes out.

> It would be nice to avoid the MapReduce api conversion thing where the old
> api is @deprecated but the new api has a bunch of gaps that render it
> unusable for a year or so...that was kind of annoying. :-)
>

Yes, definitely. This is why I don't think we should deprecate the old
consumers just yet.

Ismael
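
Concretely, "adding @deprecated markers" means annotating the old classes while leaving their behavior intact, so callers get a compile-time warning for at least one release cycle before any removal. A toy illustration (`OldProducer` is a stand-in name, not the actual Scala producer entry point):

```java
// Illustration only: a deprecation marker signals "living on borrowed
// time" without changing what the class does.
@Deprecated
class OldProducer {
    void send(String topic, byte[] message) {
        // Existing behavior is unchanged; compiling against this class
        // merely emits a deprecation warning.
    }
}

public class DeprecationDemo {
    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        // @Deprecated has runtime retention, so tooling can detect it too.
        boolean deprecated = OldProducer.class.isAnnotationPresent(Deprecated.class);
        System.out.println(deprecated); // prints true
    }
}
```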


[jira] [Updated] (KAFKA-3245) need a way to specify the number of replicas for change log topics

2016-02-23 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3245:
-
Assignee: Yasuhiro Matsuda

> need a way to specify the number of replicas for change log topics
> --
>
> Key: KAFKA-3245
> URL: https://issues.apache.org/jira/browse/KAFKA-3245
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.10.0.0
>
>
> Currently the number of replicas of auto-created change log topics is one. 
> This makes stream processing not fault tolerant. A way to specify the number 
> of replicas in config is desired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
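
A sketch of what configuring this looks like on the application side. The property name `replication.factor` is an assumption based on the config the Streams API later exposed; treat it as illustrative rather than a verbatim quote of the patch.

```java
import java.util.Properties;

public class StreamsReplicationConfig {
    public static Properties baseConfig() {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");      // hypothetical app id
        props.put("bootstrap.servers", "localhost:9092");   // hypothetical cluster
        // Replicate auto-created changelog topics 3 ways so state
        // restoration survives a broker failure (the default was 1,
        // which is what made stream processing non-fault-tolerant).
        props.put("replication.factor", "3");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(baseConfig().getProperty("replication.factor")); // prints 3
    }
}
```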


Build failed in Jenkins: kafka-trunk-jdk7 #1059

2016-02-23 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3242: minor rename / logging change to Controller

--
[...truncated 2972 lines...]

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > 

[jira] [Resolved] (KAFKA-3245) need a way to specify the number of replicas for change log topics

2016-02-23 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3245.
--
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 948
[https://github.com/apache/kafka/pull/948]

> need a way to specify the number of replicas for change log topics
> --
>
> Key: KAFKA-3245
> URL: https://issues.apache.org/jira/browse/KAFKA-3245
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
> Fix For: 0.10.0.0
>
>
> Currently the number of replicas of auto-created change log topics is one. 
> This makes stream processing not fault tolerant. A way to specify the number 
> of replicas in config is desired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: fix consumer config for streams

2016-02-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/959




[jira] [Commented] (KAFKA-3245) need a way to specify the number of replicas for change log topics

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159815#comment-15159815
 ] 

ASF GitHub Bot commented on KAFKA-3245:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/948


> need a way to specify the number of replicas for change log topics
> --
>
> Key: KAFKA-3245
> URL: https://issues.apache.org/jira/browse/KAFKA-3245
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
> Fix For: 0.10.0.0
>
>
> Currently the number of replicas of auto-created change log topics is one. 
> This makes stream processing not fault tolerant. A way to specify the number 
> of replicas in config is desired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3245: config for changelog replication f...

2016-02-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/948




[jira] [Commented] (KAFKA-2832) support exclude.internal.topics in new consumer

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159808#comment-15159808
 ] 

ASF GitHub Bot commented on KAFKA-2832:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/932


> support exclude.internal.topics in new consumer
> ---
>
> Key: KAFKA-2832
> URL: https://issues.apache.org/jira/browse/KAFKA-2832
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> The old consumer supports exclude.internal.topics that prevents internal 
> topics from being consumed by default. It would be useful to add that in the 
> new consumer, especially when wildcards are used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2832) support exclude.internal.topics in new consumer

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159809#comment-15159809
 ] 

ASF GitHub Bot commented on KAFKA-2832:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/932

KAFKA-2832: Add a consumer config option to exclude internal topics

A new consumer config option 'exclude.internal.topics' was added to allow 
excluding internal topics when wildcards are used to specify consumed topics.
The new option takes a boolean value, with a default 'false' value (i.e. no 
exclusion).

This patch is co-authored with @rajinisivaram.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-2832

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/932.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #932


commit 1b03ad9d1e8dd0a40adb3272ff857c48289b1e05
Author: Vahid Hashemian 
Date:   2016-02-18T15:53:39Z

KAFKA-2832: Add a consumer config option to exclude internal topics

A new consumer config option 'exclude.internal.topics' was added to allow 
excluding internal topics when wildcards are used to specify consumed topics.
The new option takes a boolean value, with a default of 'true' (i.e. 
exclude internal topics).

This patch is co-authored with @rajinisivaram.




> support exclude.internal.topics in new consumer
> ---
>
> Key: KAFKA-2832
> URL: https://issues.apache.org/jira/browse/KAFKA-2832
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
> Fix For: 0.9.1.0
>
>
> The old consumer supports exclude.internal.topics that prevents internal 
> topics from being consumed by default. It would be useful to add that in the 
> new consumer, especially when wildcards are used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
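
The behavior being added can be sketched as a filtering step during wildcard subscription. The double-underscore internal-topic convention below (e.g. `__consumer_offsets`) is an illustration; the real consumer consults the broker's notion of internal topics rather than a name prefix.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class TopicFilter {
    // Sketch of what exclude.internal.topics enables: match the
    // subscription pattern, but skip topics marked internal (illustrated
    // here by a "__" name prefix, as used by __consumer_offsets).
    static List<String> matching(List<String> allTopics, Pattern pattern,
                                 boolean excludeInternal) {
        List<String> result = new ArrayList<>();
        for (String topic : allTopics) {
            if (excludeInternal && topic.startsWith("__"))
                continue;
            if (pattern.matcher(topic).matches())
                result.add(topic);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> topics = Arrays.asList("orders", "__consumer_offsets", "orders-audit");
        System.out.println(matching(topics, Pattern.compile(".*"), true));
        // prints [orders, orders-audit]
    }
}
```

Without the exclusion, a broad pattern like `.*` would silently subscribe the group to internal topics as well, which is rarely what the user intends.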


[GitHub] kafka pull request: HOTFIX: fix consumer config for streams

2016-02-23 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/959

HOTFIX: fix consumer config for streams

@guozhangwang 
My bad. I removed ZOOKEEPER_CONNECT_CONFIG from consumer's config by 
mistake. It is needed by our own partition assigner running in consumers.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka hotfix3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/959.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #959


commit b01d80d10548647e15f70c7fcaf34a0b8d8c5742
Author: Yasuhiro Matsuda 
Date:   2016-02-23T22:47:55Z

HOTFIX: fix consumer config for streams






[GitHub] kafka pull request: KAFKA-2832: Add a consumer config option to ex...

2016-02-23 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/932

KAFKA-2832: Add a consumer config option to exclude internal topics

A new consumer config option 'exclude.internal.topics' was added to allow 
excluding internal topics when wildcards are used to specify consumed topics.
The new option takes a boolean value, with a default 'false' value (i.e. no 
exclusion).

This patch is co-authored with @rajinisivaram.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-2832

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/932.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #932


commit 1b03ad9d1e8dd0a40adb3272ff857c48289b1e05
Author: Vahid Hashemian 
Date:   2016-02-18T15:53:39Z

KAFKA-2832: Add a consumer config option to exclude internal topics

A new consumer config option 'exclude.internal.topics' was added to allow 
excluding internal topics when wildcards are used to specify consumed topics.
The new option takes a boolean value, with a default of 'true' (i.e. 
exclude internal topics).

This patch is co-authored with @rajinisivaram.






[GitHub] kafka pull request: KAFKA-2832: Add a consumer config option to ex...

2016-02-23 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/932




Re: [VOTE] Make next Kafka release 0.10.0.0 instead of 0.9.1.0

2016-02-23 Thread Harsha
+1

On Tue, Feb 23, 2016, at 02:25 PM, Christian Posta wrote:
> +1 non binding
> 
> On Tue, Feb 23, 2016 at 3:18 PM, Gwen Shapira  wrote:
> 
> > +1
> >
> > On Tue, Feb 23, 2016 at 1:58 PM, Jun Rao  wrote:
> >
> > > +1.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Fri, Feb 19, 2016 at 11:00 AM, Becket Qin 
> > wrote:
> > >
> > > > Hi All,
> > > >
> > > > We would like to start this voting thread on making the next Kafka
> > > > release 0.10.0.0 instead of 0.9.1.0.
> > > >
> > > > The next Kafka release will have several significant new
> > > > features/changes such as Kafka Streams, Message Format Change, Client
> > > > Interceptors, and new consumer API changes. We feel it is better to
> > > > make the next Kafka release 0.10.0.0 instead of 0.9.1.0.
> > > >
> > > > Some previous discussions are in the following thread.
> > > >
> > > >
> > >
> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201602.mbox/%3ccabtagwfzigx1frzd020vk9fanj0s9nkszfuwk677bqxfuuc...@mail.gmail.com%3E
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > >
> >
> 
> 
> 
> -- 
> *Christian Posta*
> twitter: @christianposta
> http://www.christianposta.com/blog
> http://fabric8.io


Jenkins build is back to normal : kafka-trunk-jdk8 #385

2016-02-23 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3214) Add consumer system tests for compressed topics

2016-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159778#comment-15159778
 ] 

ASF GitHub Bot commented on KAFKA-3214:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/958

KAFKA-3214: Added system tests for compressed topics

Added the following tests:
1. Extended TestVerifiableProducer (sanity check test) to test Trunk with 
snappy compression (one producer/one topic).
2. Added CompressionTest that tests 3 producers: 2a) each uses a different 
compression; 2b) each either uses snappy compression or no compression.

Enabled VerifiableProducer to run producers with different compression 
types (passed in the constructor).



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3214

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/958.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #958


commit 588d4caf3f8830dfcc185da30dfdb40de04cd7cd
Author: Anna Povzner 
Date:   2016-02-23T22:22:34Z

KAFKA-3214: Added system tests for compressed topics




> Add consumer system tests for compressed topics
> ---
>
> Key: KAFKA-3214
> URL: https://issues.apache.org/jira/browse/KAFKA-3214
> Project: Kafka
>  Issue Type: Test
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Anna Povzner
>
> As far as I can tell, we don't have any ducktape tests which verify 
> correctness when compression is enabled. If we did, we might have caught 
> KAFKA-3179 earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3214: Added system tests for compressed ...

2016-02-23 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/958

KAFKA-3214: Added system tests for compressed topics

Added the following tests:
1. Extended TestVerifiableProducer (sanity check test) to test Trunk with 
snappy compression (one producer/one topic).
2. Added CompressionTest that tests 3 producers: 2a) each uses a different 
compression; 2b) each either uses snappy compression or no compression.

Enabled VerifiableProducer to run producers with different compression 
types (passed in the constructor).



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-3214

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/958.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #958


commit 588d4caf3f8830dfcc185da30dfdb40de04cd7cd
Author: Anna Povzner 
Date:   2016-02-23T22:22:34Z

KAFKA-3214: Added system tests for compressed topics






Re: [VOTE] Make next Kafka release 0.10.0.0 instead of 0.9.1.0

2016-02-23 Thread Christian Posta
+1 non binding

On Tue, Feb 23, 2016 at 3:18 PM, Gwen Shapira  wrote:

> +1
>
> On Tue, Feb 23, 2016 at 1:58 PM, Jun Rao  wrote:
>
> > +1.
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Feb 19, 2016 at 11:00 AM, Becket Qin 
> wrote:
> >
> > > Hi All,
> > >
> > > We would like to start this voting thread on making the next Kafka
> > > release 0.10.0.0 instead of 0.9.1.0.
> > >
> > > The next Kafka release will have several significant new
> > > features/changes such as Kafka Streams, Message Format Change, Client
> > > Interceptors, and new consumer API changes. We feel it is better to
> > > make the next Kafka release 0.10.0.0 instead of 0.9.1.0.
> > >
> > > Some previous discussions are in the following thread.
> > >
> > >
> >
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201602.mbox/%3ccabtagwfzigx1frzd020vk9fanj0s9nkszfuwk677bqxfuuc...@mail.gmail.com%3E
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> >
>



-- 
*Christian Posta*
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io


Re: [VOTE] Make next Kafka release 0.10.0.0 instead of 0.9.1.0

2016-02-23 Thread Gwen Shapira
+1

On Tue, Feb 23, 2016 at 1:58 PM, Jun Rao  wrote:

> +1.
>
> Thanks,
>
> Jun
>
> On Fri, Feb 19, 2016 at 11:00 AM, Becket Qin  wrote:
>
> > Hi All,
> >
> > We would like to start this voting thread on making next Kafka release
> > 0.10.0.0 instead of 0.9.1.0.
> >
> > The next Kafka release will have several significant new
> > features/changes such as Kafka Streams, the message format change, client
> > interceptors and several new consumer API changes, etc. We feel it is
> > better to make the next Kafka release 0.10.0.0 instead of 0.9.1.0.
> >
> > Some previous discussions are in the following thread.
> >
> >
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201602.mbox/%3ccabtagwfzigx1frzd020vk9fanj0s9nkszfuwk677bqxfuuc...@mail.gmail.com%3E
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
>


[GitHub] kafka pull request: Minor: add useful debug log messages to KConne...

2016-02-23 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/957

Minor: add useful debug log messages to KConnect source task execution



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka source_worker_debug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/957.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #957


commit aba9c1899c9c6b99747731fc65e67d09bf80d14a
Author: Gwen Shapira 
Date:   2016-02-23T22:06:24Z

Minor: add useful debug log messages to KConnect source task execution






[jira] [Commented] (KAFKA-3271) Notification upon unclean leader election

2016-02-23 Thread Monal Daxini (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15159732#comment-15159732
 ] 

Monal Daxini commented on KAFKA-3271:
-

An API to query a broker (the leader of a partition) for the offset at which it 
became leader would be good enough, even if the consumer is not notified. When 
an invalid-offset exception is encountered, a client can then make a 
leader-offset metadata request for the specific topic and partition.

> Notification upon unclean leader election
> -
>
> Key: KAFKA-3271
> URL: https://issues.apache.org/jira/browse/KAFKA-3271
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Yasuhiro Matsuda
>Priority: Minor
>
> It is a legitimate restriction that unclean leader election results in some 
> message loss. That said, it is always good to try to minimize the message 
> loss. A notification of unclean leader election can reduce message loss in 
> the following scenario.
> 1. The latest offset is L.
> 2. A consumer is at C, where C < L
> 3. A slow broker (not in ISR) is at S, where S < C
> 4. All brokers in ISR die.
> 5. The slow broker becomes a leader by unclean leader election.
> 6. Now the latest offset is S.
> 7. The new messages get offsets S, S+1, S+2, and so on.
> Currently the consumer won't receive new messages of offsets between S and C. 
> However, if the consumer is notified when unclean leader election happened 
> and resets its offset to S, it can receive new messages between S and C.
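
The recovery described above — resetting the consumer to the new leader's 
latest offset S when its position C has run past the leader's log end — can be 
sketched as pure logic. This is an illustrative sketch only; the function name 
and parameters are hypothetical, not part of Kafka's consumer API:

```python
def offset_after_unclean_election(consumer_position, leader_log_end_offset):
    """Return the offset a consumer should resume from after an unclean
    leader election.

    consumer_position: the consumer's committed position C.
    leader_log_end_offset: the new (previously slow) leader's log end offset S.

    If C > S, the consumer is ahead of everything the new leader has, and new
    messages will be appended starting at S; resuming from S avoids skipping
    the range [S, C). Otherwise the consumer's position is still valid.
    """
    if consumer_position > leader_log_end_offset:
        return leader_log_end_offset
    return consumer_position


# Scenario from the issue: consumer at C=100, slow broker elected with S=70.
# Without a reset the consumer would miss offsets 70..99 of the new log.
print(offset_after_unclean_election(100, 70))  # resumes from 70
print(offset_after_unclean_election(50, 70))   # position still valid: 50
```

In a real client this decision would be triggered by an out-of-range-offset 
error (or, per the proposal in this issue, by an explicit unclean-election 
notification) followed by a seek to the returned offset.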



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Make next Kafka release 0.10.0.0 instead of 0.9.1.0

2016-02-23 Thread Jun Rao
+1.

Thanks,

Jun

On Fri, Feb 19, 2016 at 11:00 AM, Becket Qin  wrote:

> Hi All,
>
> We would like to start this voting thread on making next Kafka release
> 0.10.0.0 instead of 0.9.1.0.
>
> The next Kafka release will have several significant new
> features/changes such as Kafka Streams, the message format change, client
> interceptors and several new consumer API changes, etc. We feel it is
> better to make the next Kafka release 0.10.0.0 instead of 0.9.1.0.
>
> Some previous discussions are in the following thread.
>
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201602.mbox/%3ccabtagwfzigx1frzd020vk9fanj0s9nkszfuwk677bqxfuuc...@mail.gmail.com%3E
>
> Thanks,
>
> Jiangjie (Becket) Qin
>

