[jira] [Commented] (KAFKA-3267) Describe/Alter Configs protocol, server and client (KIP-133)

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015247#comment-16015247
 ] 

ASF GitHub Bot commented on KAFKA-3267:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3076


> Describe/Alter Configs protocol, server and client (KIP-133)
> 
>
> Key: KAFKA-3267
> URL: https://issues.apache.org/jira/browse/KAFKA-3267
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3267) Describe/Alter Configs protocol, server and client (KIP-133)

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3267:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3076
[https://github.com/apache/kafka/pull/3076]

> Describe/Alter Configs protocol, server and client (KIP-133)
> 
>
> Key: KAFKA-3267
> URL: https://issues.apache.org/jira/browse/KAFKA-3267
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3076: KAFKA-3267: Describe and Alter Configs Admin APIs ...

2017-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3076


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : kafka-trunk-jdk8 #1550

2017-05-17 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5277) Sticky Assignor should not cache the calculated assignment (KIP-54 follow-up)

2017-05-17 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-5277:
--

 Summary: Sticky Assignor should not cache the calculated 
assignment (KIP-54 follow-up)
 Key: KAFKA-5277
 URL: https://issues.apache.org/jira/browse/KAFKA-5277
 Project: Kafka
  Issue Type: Improvement
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian
Priority: Minor


As a follow-up to KIP-54, remove the Sticky Assignor's dependency on the 
previously calculated assignment. This dependency is no longer required because 
each consumer participating in the rebalance now notifies the group leader of 
its current assignment prior to the rebalance, so the leader can reconstruct 
the previous assignment of the whole group from that information.
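
A minimal sketch of the leader-side reconstruction this enables, assuming 
(hypothetically) that each member's subscription userData encodes the 
partitions it owned before the rebalance; this is illustrative only, not the 
actual StickyAssignor code:

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;

// Sketch only: the leader rebuilds the previous group-wide assignment from the
// userData each member attaches to its subscription. decodeOwnedPartitions is a
// hypothetical helper standing in for whatever serialization the assignor uses.
public class PreviousAssignmentSketch {

    public static Map<String, List<TopicPartition>> reconstruct(Map<String, ByteBuffer> userDataPerMember) {
        Map<String, List<TopicPartition>> previousAssignment = new HashMap<>();
        for (Map.Entry<String, ByteBuffer> entry : userDataPerMember.entrySet()) {
            // Each member reported the partitions it owned before this rebalance,
            // so no assignment needs to be cached on the leader between rebalances.
            previousAssignment.put(entry.getKey(), decodeOwnedPartitions(entry.getValue()));
        }
        return previousAssignment;
    }

    // Hypothetical decoder: length-prefixed topic name followed by partition id, repeated.
    private static List<TopicPartition> decodeOwnedPartitions(ByteBuffer data) {
        List<TopicPartition> owned = new ArrayList<>();
        while (data.hasRemaining()) {
            int topicLength = data.getInt();
            byte[] topicBytes = new byte[topicLength];
            data.get(topicBytes);
            owned.add(new TopicPartition(new String(topicBytes, StandardCharsets.UTF_8), data.getInt()));
        }
        return owned;
    }
}
{code}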



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #1020: KAFKA-2273: Sticky partition assignment strategy (...

2017-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1020


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2273) KIP-54: Add rebalance with a minimal number of reassignments to server-defined strategy list

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015139#comment-16015139
 ] 

ASF GitHub Bot commented on KAFKA-2273:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1020


> KIP-54: Add rebalance with a minimal number of reassignments to 
> server-defined strategy list
> 
>
> Key: KAFKA-2273
> URL: https://issues.apache.org/jira/browse/KAFKA-2273
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Olof Johansson
>Assignee: Vahid Hashemian
>  Labels: kip
> Fix For: 0.11.0.0
>
>
> Add a new partitions.assignment.strategy to the server-defined list that will 
> do reassignments based on moving as few partitions as possible. This should 
> be quite a common reassignment strategy, especially for cases where the 
> consumer has to maintain state, either in memory or on disk.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-2273) KIP-54: Add rebalance with a minimal number of reassignments to server-defined strategy list

2017-05-17 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2273:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1020
[https://github.com/apache/kafka/pull/1020]

> KIP-54: Add rebalance with a minimal number of reassignments to 
> server-defined strategy list
> 
>
> Key: KAFKA-2273
> URL: https://issues.apache.org/jira/browse/KAFKA-2273
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Olof Johansson
>Assignee: Vahid Hashemian
>  Labels: kip
> Fix For: 0.11.0.0
>
>
> Add a new partitions.assignment.strategy to the server-defined list that will 
> do reassignments based on moving as few partitions as possible. This should 
> be quite a common reassignment strategy, especially for cases where the 
> consumer has to maintain state, either in memory or on disk.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-3995) Split the ProducerBatch and resend when received RecordTooLargeException

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-3995:
--

Assignee: Jiangjie Qin  (was: Mayuresh Gharat)

> Split the ProducerBatch and resend when received RecordTooLargeException
> 
>
> Key: KAFKA-3995
> URL: https://issues.apache.org/jira/browse/KAFKA-3995
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> We have recently seen a few cases where RecordTooLargeException is thrown because 
> the compressed message sent by KafkaProducer exceeded the max message size.
> The root cause of this issue is that the compressor estimates the 
> batch size using an estimated compression ratio based on heuristic 
> compression ratio statistics. This does not quite work for traffic with 
> highly variable compression ratios. 
> For example, suppose the batch size is set to 1MB and the max message size is 1MB. 
> Initially the producer is sending messages (each message is 1MB) to topic_1 
> whose data can be compressed to 1/10 of the original size. After a while the 
> estimated compression ratio in the compressor will be trained to 1/10 and the 
> producer would put 10 messages into one batch. Now the producer starts to 
> send messages (each message is also 1MB) to topic_2 whose messages can only be 
> compressed to 1/5 of the original size. The producer would still use 1/10 as 
> the estimated compression ratio and put 10 messages into a batch. That batch 
> would be 2 MB after compression, which exceeds the maximum message size. In 
> this case the user does not have many options other than resending everything or 
> closing the producer if they care about ordering.
> This is especially an issue for services like MirrorMaker whose producer is 
> shared by many different topics.
> To solve this issue, we can probably add a configuration 
> "enable.compression.ratio.estimation" to the producer. When this 
> configuration is set to false, we stop estimating the compressed size and 
> instead close the batch once the uncompressed bytes in the batch reach the 
> batch size.
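
As a rough, self-contained illustration of the arithmetic in the example above 
(not the actual RecordAccumulator/compressor code; all names here are made up):

{code}
// Illustrative sketch of the sizing problem described above; the names and the
// fixed estimated ratio are assumptions, not KafkaProducer internals.
public class CompressionEstimateSketch {

    public static void main(String[] args) {
        long batchSizeBytes = 1_000_000;      // producer batch.size
        long maxMessageBytes = 1_000_000;     // broker-side max message size
        long recordBytes = 1_000_000;         // each uncompressed record is ~1MB

        double estimatedRatio = 0.10;         // learned from topic_1 (compresses to 1/10)
        double actualRatioTopic2 = 0.20;      // topic_2 only compresses to 1/5

        // With a 1/10 estimate, the producer thinks 10 records fit in one 1MB batch.
        int recordsPerBatch = (int) (batchSizeBytes / (recordBytes * estimatedRatio));
        long compressedBatchBytes = (long) (recordsPerBatch * recordBytes * actualRatioTopic2);

        System.out.printf("records per batch: %d, compressed batch: %d bytes%n",
                recordsPerBatch, compressedBatchBytes);
        if (compressedBatchBytes > maxMessageBytes) {
            // 10 * 1MB * 1/5 = 2MB > 1MB, so the broker rejects the batch
            // with RecordTooLargeException.
            System.out.println("batch would exceed the max message size");
        }
    }
}
{code}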



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3995) Split the ProducerBatch and resend when received RecordTooLargeException

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3995:
---
Status: Patch Available  (was: Reopened)

> Split the ProducerBatch and resend when received RecordTooLargeException
> 
>
> Key: KAFKA-3995
> URL: https://issues.apache.org/jira/browse/KAFKA-3995
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: Jiangjie Qin
>Assignee: Mayuresh Gharat
>
> We have recently seen a few cases where RecordTooLargeException is thrown because 
> the compressed message sent by KafkaProducer exceeded the max message size.
> The root cause of this issue is that the compressor estimates the 
> batch size using an estimated compression ratio based on heuristic 
> compression ratio statistics. This does not quite work for traffic with 
> highly variable compression ratios. 
> For example, suppose the batch size is set to 1MB and the max message size is 1MB. 
> Initially the producer is sending messages (each message is 1MB) to topic_1 
> whose data can be compressed to 1/10 of the original size. After a while the 
> estimated compression ratio in the compressor will be trained to 1/10 and the 
> producer would put 10 messages into one batch. Now the producer starts to 
> send messages (each message is also 1MB) to topic_2 whose messages can only be 
> compressed to 1/5 of the original size. The producer would still use 1/10 as 
> the estimated compression ratio and put 10 messages into a batch. That batch 
> would be 2 MB after compression, which exceeds the maximum message size. In 
> this case the user does not have many options other than resending everything or 
> closing the producer if they care about ordering.
> This is especially an issue for services like MirrorMaker whose producer is 
> shared by many different topics.
> To solve this issue, we can probably add a configuration 
> "enable.compression.ratio.estimation" to the producer. When this 
> configuration is set to false, we stop estimating the compressed size and 
> instead close the batch once the uncompressed bytes in the batch reach the 
> batch size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3266) Implement KIP-140 RPCs and APIs for creating, altering, and listing ACLs

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3266:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Implement KIP-140 RPCs and APIs for creating, altering, and listing ACLs
> 
>
> Key: KAFKA-3266
> URL: https://issues.apache.org/jira/browse/KAFKA-3266
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-4830) Augment KStream.print() to allow users pass in extra parameters in the printed string

2017-05-17 Thread james chien (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015110#comment-16015110
 ] 

james chien edited comment on KAFKA-4830 at 5/18/17 2:41 AM:
-

I think if we do that then we will introduce a new API called 
`{KStream#print(KeyValueMapper)}`.


was (Author: james.c):
I think if we do that then we will introduce a new API called 
```KStream#print(KeyValueMapper)```.

> Augment KStream.print() to allow users pass in extra parameters in the 
> printed string
> -
>
> Key: KAFKA-4830
> URL: https://issues.apache.org/jira/browse/KAFKA-4830
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: james chien
>  Labels: needs-kip, newbie
>
> Today {{KStream.print}} uses the hard-coded result string as:
> {code}
> "[" + this.streamName + "]: " + keyToPrint + " , " + valueToPrint
> {code}
> And some users are asking to augment this so that they can customize the 
> output string as {{KStream.print(KeyValueMapper)}} :
> {code}
> "[" + this.streamName + "]: " + mapper.apply(keyToPrint, valueToPrint)
> {code}
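
For illustration, a hypothetical application-side use of the requested overload 
might look like the following; `KStream#print(KeyValueMapper)` does not exist 
yet, so the actual call is shown commented out:

{code}
import org.apache.kafka.streams.kstream.KeyValueMapper;

// Hypothetical illustration of the requested customization. Today the printed
// line is hard-coded as "[streamName]: key , value"; with the proposed
// KStream#print(KeyValueMapper) overload the application would supply the
// formatting itself.
public class PrintMapperSketch {

    public static void main(String[] args) {
        KeyValueMapper<String, Long, String> formatter =
                (key, value) -> "[word-counts]: " + key + " => " + value;

        // Shows the string the mapper would produce for one record.
        System.out.println(formatter.apply("kafka", 42L));
        // stream.print(formatter);   // proposed API from this ticket, not yet available
    }
}
{code}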



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-4830) Augment KStream.print() to allow users pass in extra parameters in the printed string

2017-05-17 Thread james chien (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015110#comment-16015110
 ] 

james chien edited comment on KAFKA-4830 at 5/18/17 2:41 AM:
-

I think if we do that then we will introduce a new API called 
{{KStream#print(KeyValueMapper)}}.


was (Author: james.c):
I think if we do that then we will introduce a new API called 
`{KStream#print(KeyValueMapper)}`.

> Augment KStream.print() to allow users pass in extra parameters in the 
> printed string
> -
>
> Key: KAFKA-4830
> URL: https://issues.apache.org/jira/browse/KAFKA-4830
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: james chien
>  Labels: needs-kip, newbie
>
> Today {{KStream.print}} uses the hard-coded result string as:
> {code}
> "[" + this.streamName + "]: " + keyToPrint + " , " + valueToPrint
> {code}
> And some users are asking to augment this so that they can customize the 
> output string as {{KStream.print(KeyValueMapper)}} :
> {code}
> "[" + this.streamName + "]: " + mapper.apply(keyToPrint, valueToPrint)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4830) Augment KStream.print() to allow users pass in extra parameters in the printed string

2017-05-17 Thread james chien (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015110#comment-16015110
 ] 

james chien commented on KAFKA-4830:


I think if we do that then we will introduce a new API called 
`KStream#print(KeyValueMapper)`.

> Augment KStream.print() to allow users pass in extra parameters in the 
> printed string
> -
>
> Key: KAFKA-4830
> URL: https://issues.apache.org/jira/browse/KAFKA-4830
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: james chien
>  Labels: needs-kip, newbie
>
> Today {{KStream.print}} uses the hard-coded result string as:
> {code}
> "[" + this.streamName + "]: " + keyToPrint + " , " + valueToPrint
> {code}
> And some users are asking to augment this so that they can customize the 
> output string as {{KStream.print(KeyValueMapper)}} :
> {code}
> "[" + this.streamName + "]: " + mapper.apply(keyToPrint, valueToPrint)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk8 #1549

2017-05-17 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5231: Bump up producer epoch when sending abort txn markers on

--
[...truncated 1.36 MB...]

org.apache.kafka.common.security.scram.ScramMessagesTest: 
invalidClientFirstMessage, validClientFinalMessage, invalidServerFirstMessage, 
validServerFinalMessage (all STARTED and PASSED)

org.apache.kafka.common.security.ssl.SslFactoryTest: 
testSslFactoryWithoutPasswordConfiguration, testClientMode, 
testSslFactoryConfiguration (all STARTED and PASSED)

org.apache.kafka.common.security.kerberos.KerberosNameTest: testParse (STARTED and PASSED)

org.apache.kafka.common.security.auth.KafkaPrincipalTest: 
testPrincipalNameCanContainSeparator, testEqualsAndHashCode (all STARTED and PASSED)

org.apache.kafka.common.security.JaasContextTest: 
testLoadForServerWithListenerNameOverride, testMissingOptionValue, 
testSingleOption, testNumericOptionWithoutQuotes, testConfigNoOptions, 
testLoadForServerWithWrongListenerName, testNumericOptionWithQuotes, 
testQuotedOptionValue, testMissingLoginModule, testMissingSemicolon, 
testMultipleOptions, testLoadForClientWithListenerName, 
testMultipleLoginModules, testMissingControlFlag, 
testLoadForServerWithListenerNameAndFallback, testQuotedOptionName 
(all STARTED and PASSED); log ends at testControlFlag STARTED

[jira] [Comment Edited] (KAFKA-4830) Augment KStream.print() to allow users pass in extra parameters in the printed string

2017-05-17 Thread james chien (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015110#comment-16015110
 ] 

james chien edited comment on KAFKA-4830 at 5/18/17 2:40 AM:
-

I think if we do that then we will introduce a new API called 
```KStream#print(KeyValueMapper)```.


was (Author: james.c):
I think if we do that then we will introduce a new API called 
`KStream#print(KeyValueMapper)`.

> Augment KStream.print() to allow users pass in extra parameters in the 
> printed string
> -
>
> Key: KAFKA-4830
> URL: https://issues.apache.org/jira/browse/KAFKA-4830
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: james chien
>  Labels: needs-kip, newbie
>
> Today {{KStream.print}} uses the hard-coded result string as:
> {code}
> "[" + this.streamName + "]: " + keyToPrint + " , " + valueToPrint
> {code}
> And some users are asking to augment this so that they can customize the 
> output string as {{KStream.print(KeyValueMapper)}} :
> {code}
> "[" + this.streamName + "]: " + mapper.apply(keyToPrint, valueToPrint)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-3266) Implement KIP-140 RPCs and APIs for creating, altering, and listing ACLs

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015103#comment-16015103
 ] 

ASF GitHub Bot commented on KAFKA-3266:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2941


> Implement KIP-140 RPCs and APIs for creating, altering, and listing ACLs
> 
>
> Key: KAFKA-3266
> URL: https://issues.apache.org/jira/browse/KAFKA-3266
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2941: KAFKA-3266: Implement KIP-140 RPCs and APIs for cr...

2017-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2941


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (KAFKA-5275) Review and potentially tweak AdminClient API for the initial release (KIP-117)

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-5275:
--

Assignee: Ismael Juma

> Review and potentially tweak AdminClient API for the initial release (KIP-117)
> --
>
> Key: KAFKA-5275
> URL: https://issues.apache.org/jira/browse/KAFKA-5275
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>
> Once all the pieces are in, we should take a pass and ensure that the APIs 
> work well together and that they are consistent.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5276) Support derived and prefixed configs in DescribeConfigs (KIP-133)

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5276:
---
Summary: Support derived and prefixed configs in DescribeConfigs (KIP-133)  
(was: Support derived and prefixed configs in DescribeConfigs)

> Support derived and prefixed configs in DescribeConfigs (KIP-133)
> -
>
> Key: KAFKA-5276
> URL: https://issues.apache.org/jira/browse/KAFKA-5276
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>
> The broker supports config overrides per listener. The way we do that is by 
> prefixing the configs with the listener name. These configs are not defined 
> by ConfigDef and they don't appear in `values()`. They do appear in 
> `originals()`. We should change the code so that we return these configs. 
> Because these configs are read-only, nothing needs to be done for 
> AlterConfigs.
> With regards to derived configs, an example is advertised.listeners, which 
> falls back to listeners. This is currently done outside AbstractConfig. We 
> should look into including these into AbstractConfig so that the fallback 
> happens for the returned configs.
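
A minimal sketch of the prefix filtering this would require, assuming a 
standalone helper and an example prefix rather than the eventual AbstractConfig 
change:

{code}
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-listener override lookup described above: configs prefixed
// with the listener name live in originals() but not in values(), so returning
// them requires scanning originals() for the prefix. This helper is an
// illustration, not the actual AbstractConfig change.
public class ListenerPrefixSketch {

    public static Map<String, Object> withListenerPrefix(Map<String, ?> originals, String prefix) {
        Map<String, Object> overrides = new HashMap<>();
        for (Map.Entry<String, ?> entry : originals.entrySet()) {
            if (entry.getKey().startsWith(prefix)) {
                // Strip the prefix so the override maps onto the base config name.
                overrides.put(entry.getKey().substring(prefix.length()), entry.getValue());
            }
        }
        return overrides;
    }

    public static void main(String[] args) {
        Map<String, Object> originals = new HashMap<>();
        originals.put("listener.name.internal.ssl.keystore.location", "/etc/kafka/internal.jks");
        originals.put("ssl.keystore.location", "/etc/kafka/default.jks");

        // Prints {ssl.keystore.location=/etc/kafka/internal.jks}
        System.out.println(withListenerPrefix(originals, "listener.name.internal."));
    }
}
{code}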



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5275) Review and potentially tweak AdminClient API for the initial release (KIP-117)

2017-05-17 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015081#comment-16015081
 ] 

Ismael Juma commented on KAFKA-5275:


Because the ACLs and Configs PRs were developed in parallel, there's a bit of 
overlap. We should clean that up too.

> Review and potentially tweak AdminClient API for the initial release (KIP-117)
> --
>
> Key: KAFKA-5275
> URL: https://issues.apache.org/jira/browse/KAFKA-5275
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> Once all the pieces are in, we should take a pass and ensure that the APIs 
> work well together and that they are consistent.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3074: KAFKA-5036: hold onto the leader lock in Partition...

2017-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3074


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5036) Followups from KIP-101

2017-05-17 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-5036.

Resolution: Fixed

Issue resolved by pull request 3074
[https://github.com/apache/kafka/pull/3074]

> Followups from KIP-101
> --
>
> Key: KAFKA-5036
> URL: https://issues.apache.org/jira/browse/KAFKA-5036
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.11.0.0
>Reporter: Jun Rao
>Assignee: Jun Rao
> Fix For: 0.11.0.0
>
>
> 1. It would be safer to hold onto the leader lock in Partition while serving 
> an OffsetForLeaderEpoch request.
> 2. Currently, we update the leader epoch in epochCache after log append in 
> the follower but before log append in the leader. It would be more consistent 
> to always do this after log append. This also avoids issues related to 
> failure in log append.
> 3. OffsetsForLeaderEpochRequest/OffsetsForLeaderEpochResponse:
> The code that does grouping can probably be replaced by calling 
> CollectionUtils.groupDataByTopic(). Done: 
> https://github.com/apache/kafka/commit/359a68510801a22630a7af275c9935fb2d4c8dbf
> 4. The following line in LeaderEpochFileCache is hit several times when 
> LogTest is executed:
> {code}
>if (cachedLatestEpoch == None) error("Attempt to assign log end offset 
> to epoch before epoch has been set. This should never happen.")
> {code}
> This should be an assert (with the tests fixed up)
> 5. The constructor of LeaderEpochFileCache has the following:
> {code}
> lock synchronized { ListBuffer(checkpoint.read(): _*) }
> {code}
> But everywhere else uses a read or write lock. We should use consistent 
> locking.
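
For illustration, the "consistent locking" suggestion in item 5 could look like 
the following sketch (written in Java for brevity; LeaderEpochFileCache itself 
is Scala), where the constructor takes the same write lock used by every other 
mutation instead of a separate synchronized block:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: every access, including the initial load from the checkpoint,
// goes through the same ReadWriteLock rather than mixing synchronized and
// read/write lock usage.
public class EpochCacheSketch {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<Integer> epochs = new ArrayList<>();

    public EpochCacheSketch(List<Integer> checkpointed) {
        lock.writeLock().lock();          // write lock, matching all other mutations
        try {
            epochs.addAll(checkpointed);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public int latestEpoch() {
        lock.readLock().lock();           // read lock for lookups
        try {
            return epochs.isEmpty() ? -1 : epochs.get(epochs.size() - 1);
        } finally {
            lock.readLock().unlock();
        }
    }
}
{code}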



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5036) Followups from KIP-101

2017-05-17 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao reassigned KAFKA-5036:
--

Assignee: Ben Stopford  (was: Jun Rao)

> Followups from KIP-101
> --
>
> Key: KAFKA-5036
> URL: https://issues.apache.org/jira/browse/KAFKA-5036
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.11.0.0
>Reporter: Jun Rao
>Assignee: Ben Stopford
> Fix For: 0.11.0.0
>
>
> 1. It would be safer to hold onto the leader lock in Partition while serving 
> an OffsetForLeaderEpoch request.
> 2. Currently, we update the leader epoch in epochCache after log append in 
> the follower but before log append in the leader. It would be more consistent 
> to always do this after log append. This also avoids issues related to 
> failure in log append.
> 3. OffsetsForLeaderEpochRequest/OffsetsForLeaderEpochResponse:
> The code that does grouping can probably be replaced by calling 
> CollectionUtils.groupDataByTopic(). Done: 
> https://github.com/apache/kafka/commit/359a68510801a22630a7af275c9935fb2d4c8dbf
> 4. The following line in LeaderEpochFileCache is hit several times when 
> LogTest is executed:
> {code}
>if (cachedLatestEpoch == None) error("Attempt to assign log end offset 
> to epoch before epoch has been set. This should never happen.")
> {code}
> This should be an assert (with the tests fixed up)
> 5. The constructor of LeaderEpochFileCache has the following:
> {code}
> lock synchronized { ListBuffer(checkpoint.read(): _*) }
> {code}
> But everywhere else uses a read or write lock. We should use consistent 
> locking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5036) Followups from KIP-101

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015074#comment-16015074
 ] 

ASF GitHub Bot commented on KAFKA-5036:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3074


> Followups from KIP-101
> --
>
> Key: KAFKA-5036
> URL: https://issues.apache.org/jira/browse/KAFKA-5036
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.11.0.0
>Reporter: Jun Rao
>Assignee: Ben Stopford
> Fix For: 0.11.0.0
>
>
> 1. It would be safer to hold onto the leader lock in Partition while serving 
> an OffsetForLeaderEpoch request.
> 2. Currently, we update the leader epoch in epochCache after log append in 
> the follower but before log append in the leader. It would be more consistent 
> to always do this after log append. This also avoids issues related to 
> failure in log append.
> 3. OffsetsForLeaderEpochRequest/OffsetsForLeaderEpochResponse:
> The code that does grouping can probably be replaced by calling 
> CollectionUtils.groupDataByTopic(). Done: 
> https://github.com/apache/kafka/commit/359a68510801a22630a7af275c9935fb2d4c8dbf
> 4. The following line in LeaderEpochFileCache is hit several times when 
> LogTest is executed:
> {code}
>if (cachedLatestEpoch == None) error("Attempt to assign log end offset 
> to epoch before epoch has been set. This should never happen.")
> {code}
> This should be an assert (with the tests fixed up)
> 5. The constructor of LeaderEpochFileCache has the following:
> {code}
> lock synchronized { ListBuffer(checkpoint.read(): _*) }
> {code}
> But everywhere else uses a read or write lock. We should use consistent 
> locking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5276) Support derived and prefixed configs in DescribeConfigs

2017-05-17 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5276:
--

 Summary: Support derived and prefixed configs in DescribeConfigs
 Key: KAFKA-5276
 URL: https://issues.apache.org/jira/browse/KAFKA-5276
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.11.0.0


The broker supports config overrides per listener. The way we do that is by 
prefixing the configs with the listener name. These configs are not defined by 
ConfigDef and they don't appear in `values()`. They do appear in `originals()`. 
We should change the code so that we return these configs. Because these 
configs are read-only, nothing needs to be done for AlterConfigs.

With regards to derived configs, an example is advertised.listeners, which 
falls back to listeners. This is currently done outside AbstractConfig. We 
should look into including these into AbstractConfig so that the fallback 
happens for the returned configs.





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5275) Review and potentially tweak AdminClient API for the initial release (KIP-117)

2017-05-17 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5275:
--

 Summary: Review and potentially tweak AdminClient API for the 
initial release (KIP-117)
 Key: KAFKA-5275
 URL: https://issues.apache.org/jira/browse/KAFKA-5275
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
 Fix For: 0.11.0.0


Once all the pieces are in, we should take a pass and ensure that the APIs work 
well together and that they are consistent.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5274) Review and improve AdminClient Javadoc for the first release (KIP-117)

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5274:
---
Summary: Review and improve AdminClient Javadoc for the first release 
(KIP-117)  (was: Review and improve AdminClient Javadoc for the first release 
(KIP-133))

> Review and improve AdminClient Javadoc for the first release (KIP-117)
> --
>
> Key: KAFKA-5274
> URL: https://issues.apache.org/jira/browse/KAFKA-5274
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> Once all the AdminClient pieces are in, we should take a pass at the Javadoc 
> and improve it wherever possible.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-1955) Explore disk-based buffering in new Kafka Producer

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015046#comment-16015046
 ] 

ASF GitHub Bot commented on KAFKA-1955:
---

GitHub user blbradley opened a pull request:

https://github.com/apache/kafka/pull/3083

KAFKA-1955: [WIP] Disk based buffer in Producer

Based on patch from @jkreps in [this JIRA 
ticket](https://issues.apache.org/jira/browse/KAFKA-1955).

- [ ] Get some unit tests that would cover disk-backed usage
- [ ] Do some manual performance testing of this usage and understand the 
impact on throughput.
- [ ] Do some manual testing of failure cases (i.e. if the broker goes down 
for 30 seconds we should be able to keep taking writes) and observe how well 
the producer handles the catch up time when it has a large backlog to get rid 
of.
- [ ] Add a new configuration for the producer to enable this, something 
like use.file.buffers=true/false.
- [ ] Add documentation that covers these new options.

I've brought the patch into sync with trunk. Testing is next, which I've 
started on. I am flexible on how this can be implemented.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/blbradley/kafka kafka-disk-buffer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3083.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3083


commit 6b29fc95c394283ff4f2410ad37f7c8fcbd0d8d7
Author: Brandon Bradley 
Date:   2017-05-17T17:12:53Z

WIP: KAFKA-1955 August 8th 2015 rebase

commit 75d2af1d7f8dda4e2fe41da60455d813d655edd0
Author: Brandon Bradley 
Date:   2017-05-17T22:43:47Z

Merge branch 'trunk' into kafka-disk-buffer

patch works against trunk test suite

commit d3c765db789eef2fe71eca7a45dbca72e356f346
Author: Brandon Bradley 
Date:   2017-05-17T23:14:34Z

fix imports, add whitespace from diff

commit b58118c6413a5e900f5c1ebee112bd24e8d4b119
Author: Brandon Bradley 
Date:   2017-05-17T23:34:04Z

simple file buffer test

commit cd389f073eca18effa6449d9934aea0f90e84139
Author: Brandon Bradley 
Date:   2017-05-17T23:35:21Z

failing unallocated memory check

commit 49b6860e6c3be4bac62937dc835d5b6f97c7ff11
Author: Brandon Bradley 
Date:   2017-05-18T00:35:29Z

allocate buffer dynamically, passing tests

commit ed7aab5357fe9d7805dcb305d0318fb4ea770550
Author: Brandon Bradley 
Date:   2017-05-18T00:46:10Z

failing allocated memory check

commit 875ac83096199e35307a7ef47772907607aba1f1
Author: Brandon Bradley 
Date:   2017-05-18T00:56:47Z

do not add to free list during allocation

commit 4223e14896f4609d5bef80e97ee6d9982d2127a5
Author: Brandon Bradley 
Date:   2017-05-18T01:20:46Z

add license




> Explore disk-based buffering in new Kafka Producer
> --
>
> Key: KAFKA-1955
> URL: https://issues.apache.org/jira/browse/KAFKA-1955
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1955.patch, 
> KAFKA-1955-RABASED-TO-8th-AUG-2015.patch
>
>
> There are two approaches to using Kafka for capturing event data that has no 
> other "source of truth store":
> 1. Just write to Kafka and try hard to keep the Kafka cluster up as you would 
> a database.
> 2. Write to some kind of local disk store and copy from that to Kafka.
> The cons of the second approach are the following:
> 1. You end up depending on disks on all the producer machines. If you have 
> 10,000 producers, that is 10k places state is kept. These tend to fail a lot.
> 2. You can get data arbitrarily delayed
> 3. You still don't tolerate hard outages since there is no replication in the 
> producer tier
> 4. This tends to make problems with duplicates more common in certain failure 
> scenarios.
> There is one big pro, though: you don't have to keep Kafka running all the 
> time.
> So far we have done nothing in Kafka to help support approach (2), but people 
> have built a lot of buffering things. It's not clear that this is necessarily 
> bad.
> However implementing this in the new Kafka producer might actually be quite 
> easy. Here is an idea for how to do it. Implementation of this idea is 
> probably pretty easy but it would require some pretty thorough testing to see 
> if it was a success.
> The new producer maintains a pool of ByteBuffer instances which it 

[jira] [Created] (KAFKA-5274) Review and improve AdminClient Javadoc for the first release

2017-05-17 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5274:
--

 Summary: Review and improve AdminClient Javadoc for the first 
release
 Key: KAFKA-5274
 URL: https://issues.apache.org/jira/browse/KAFKA-5274
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
 Fix For: 0.11.0.0


Once all the AdminClient pieces are in, we should take a pass at the Javadoc 
and improve it wherever possible.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5274) Review and improve AdminClient Javadoc for the first release (KIP-133)

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5274:
---
Summary: Review and improve AdminClient Javadoc for the first release 
(KIP-133)  (was: Review and improve AdminClient Javadoc for the first release)

> Review and improve AdminClient Javadoc for the first release (KIP-133)
> --
>
> Key: KAFKA-5274
> URL: https://issues.apache.org/jira/browse/KAFKA-5274
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> Once all the AdminClient pieces are in, we should take a pass at the Javadoc 
> and improve it wherever possible.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3083: KAFKA-1955: [WIP] Disk based buffer in Producer

2017-05-17 Thread blbradley
GitHub user blbradley opened a pull request:

https://github.com/apache/kafka/pull/3083

KAFKA-1955: [WIP] Disk based buffer in Producer

Based on patch from @jkreps in [this JIRA 
ticket](https://issues.apache.org/jira/browse/KAFKA-1955).

- [ ] Get some unit tests that would cover disk-backed usage
- [ ] Do some manual performance testing of this usage and understand the 
impact on throughput.
- [ ] Do some manual testing of failure cases (i.e. if the broker goes down 
for 30 seconds we should be able to keep taking writes) and observe how well 
the producer handles the catch up time when it has a large backlog to get rid 
of.
- [ ] Add a new configuration for the producer to enable this, something 
like use.file.buffers=true/false.
- [ ] Add documentation that covers these new options.

I've brought the patch into sync with trunk. Testing is next, which I've 
started on. I am flexible on how this can be implemented.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/blbradley/kafka kafka-disk-buffer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3083.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3083


commit 6b29fc95c394283ff4f2410ad37f7c8fcbd0d8d7
Author: Brandon Bradley 
Date:   2017-05-17T17:12:53Z

WIP: KAFKA-1955 August 8th 2015 rebase

commit 75d2af1d7f8dda4e2fe41da60455d813d655edd0
Author: Brandon Bradley 
Date:   2017-05-17T22:43:47Z

Merge branch 'trunk' into kafka-disk-buffer

patch works against trunk test suite

commit d3c765db789eef2fe71eca7a45dbca72e356f346
Author: Brandon Bradley 
Date:   2017-05-17T23:14:34Z

fix imports, add whitespace from diff

commit b58118c6413a5e900f5c1ebee112bd24e8d4b119
Author: Brandon Bradley 
Date:   2017-05-17T23:34:04Z

simple file buffer test

commit cd389f073eca18effa6449d9934aea0f90e84139
Author: Brandon Bradley 
Date:   2017-05-17T23:35:21Z

failing unallocated memory check

commit 49b6860e6c3be4bac62937dc835d5b6f97c7ff11
Author: Brandon Bradley 
Date:   2017-05-18T00:35:29Z

allocate buffer dynamically, passing tests

commit ed7aab5357fe9d7805dcb305d0318fb4ea770550
Author: Brandon Bradley 
Date:   2017-05-18T00:46:10Z

failing allocated memory check

commit 875ac83096199e35307a7ef47772907607aba1f1
Author: Brandon Bradley 
Date:   2017-05-18T00:56:47Z

do not add to free list during allocation

commit 4223e14896f4609d5bef80e97ee6d9982d2127a5
Author: Brandon Bradley 
Date:   2017-05-18T01:20:46Z

add license




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4830) Augment KStream.print() to allow users pass in extra parameters in the printed string

2017-05-17 Thread james chien (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015043#comment-16015043
 ] 

james chien commented on KAFKA-4830:


great, I will work on this :)

> Augment KStream.print() to allow users pass in extra parameters in the 
> printed string
> -
>
> Key: KAFKA-4830
> URL: https://issues.apache.org/jira/browse/KAFKA-4830
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: needs-kip, newbie
>
> Today {{KStream.print}} uses the hard-coded result string as:
> {code}
> "[" + this.streamName + "]: " + keyToPrint + " , " + valueToPrint
> {code}
> And some users are asking to augment this so that they can customize the 
> output string as {{KStream.print(KeyValueMapper)}} :
> {code}
> "[" + this.streamName + "]: " + mapper.apply(keyToPrint, valueToPrint)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4830) Augment KStream.print() to allow users pass in extra parameters in the printed string

2017-05-17 Thread james chien (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

james chien reassigned KAFKA-4830:
--

Assignee: james chien

> Augment KStream.print() to allow users pass in extra parameters in the 
> printed string
> -
>
> Key: KAFKA-4830
> URL: https://issues.apache.org/jira/browse/KAFKA-4830
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: james chien
>  Labels: needs-kip, newbie
>
> Today {{KStream.print}} uses the hard-coded result string as:
> {code}
> "[" + this.streamName + "]: " + keyToPrint + " , " + valueToPrint
> {code}
> And some users are asking to augment this so that they can customize the 
> output string as {{KStream.print(KeyValueMapper)}} :
> {code}
> "[" + this.streamName + "]: " + mapper.apply(keyToPrint, valueToPrint)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5231) TransactinoCoordinator does not bump epoch when aborting open transactions

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015019#comment-16015019
 ] 

ASF GitHub Bot commented on KAFKA-5231:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/3082

KAFKA-5231: Protect txn metadata map with read-write lock



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K5231-read-write-lock

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3082.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3082


commit 6b5c6cf6042c61e785e9f005ea0b85ff8e5246c1
Author: Guozhang Wang 
Date:   2017-05-16T06:12:12Z

bump up producer epoch

commit b7884d106b8b8c10187b8748f5120728464acfc9
Author: Guozhang Wang 
Date:   2017-05-17T00:07:11Z

Jason's comments

commit 8f78d601beb52c90507c1307a5073e8100e98631
Author: Guozhang Wang 
Date:   2017-05-17T19:31:07Z

Jun's comments

commit 7c7f4da31191fce9935eecc65b6e3273bae520d1
Author: Guozhang Wang 
Date:   2017-05-17T19:52:50Z

bump up limit for class fanout to 40 for Sender class

commit c3aef033207f5f7da5ccc84c5dd6f71165d94f62
Author: Guozhang Wang 
Date:   2017-05-17T20:36:05Z

Jun's comments round two

commit ccf8fdc51c7aa066bf2f8098f0906ebf9f937a2d
Author: Guozhang Wang 
Date:   2017-05-17T23:44:48Z

rebased from trunk

commit 52ef07b060f59f7e0fe342faa5c040448ed1b2be
Author: Guozhang Wang 
Date:   2017-05-17T20:19:20Z

change the state lock to read-write lock

commit dd11d5d457ba09ed5a2a1a0d2d35712a0420b722
Author: Guozhang Wang 
Date:   2017-05-17T23:29:22Z

grab the read lock until append to local is done

commit 16c1791236d8bcf851fad93ef1b7a61170f62aa5
Author: Guozhang Wang 
Date:   2017-05-18T01:04:18Z

put the validation and return of metadata under the same lock




> TransactinoCoordinator does not bump epoch when aborting open transactions
> --
>
> Key: KAFKA-5231
> URL: https://issues.apache.org/jira/browse/KAFKA-5231
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>Assignee: Guozhang Wang
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When the TransactionCoordinator receives an InitPidRequest when there is an 
> open transaction for a transactional id, it should first bump the epoch and 
> then abort the open transaction.
> Currently, it aborts the open transaction with the existing epoch, hence the 
> old producer is never fenced.
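
An illustrative sketch of the ordering the ticket asks for (hypothetical names; 
the real logic lives in the Scala TransactionCoordinator):

{code}
// Sketch only: bump the epoch before writing the abort markers so anything the
// old producer sends afterwards (still on the previous epoch) is fenced.
class EpochFencingSketch {
    short currentEpoch = 5;

    short handleInitPid(boolean openTransaction) {
        if (openTransaction) {
            // Bump first, then abort with the new epoch; aborting with the old
            // epoch (the current behaviour) never fences the old producer.
            currentEpoch += 1;
            abortOpenTransaction(currentEpoch);
        }
        return currentEpoch;
    }

    void abortOpenTransaction(short epoch) {
        // write abort markers with the given epoch (omitted)
    }
}
{code}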



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5231) TransactinoCoordinator does not bump epoch when aborting open transactions

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015016#comment-16015016
 ] 

ASF GitHub Bot commented on KAFKA-5231:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3066


> TransactinoCoordinator does not bump epoch when aborting open transactions
> --
>
> Key: KAFKA-5231
> URL: https://issues.apache.org/jira/browse/KAFKA-5231
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>Assignee: Guozhang Wang
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When the TransactionCoordinator receives an InitPidRequest when there is an 
> open transaction for a transactional id, it should first bump the epoch and 
> then abort the open transaction.
> Currently, it aborts the open transaction with the existing epoch, hence the 
> old producer is never fenced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3082: KAFKA-5231: Protect txn metadata map with read-wri...

2017-05-17 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/3082

KAFKA-5231: Protect txn metadata map with read-write lock



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K5231-read-write-lock

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3082.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3082


commit 6b5c6cf6042c61e785e9f005ea0b85ff8e5246c1
Author: Guozhang Wang 
Date:   2017-05-16T06:12:12Z

bump up producer epoch

commit b7884d106b8b8c10187b8748f5120728464acfc9
Author: Guozhang Wang 
Date:   2017-05-17T00:07:11Z

Jason's comments

commit 8f78d601beb52c90507c1307a5073e8100e98631
Author: Guozhang Wang 
Date:   2017-05-17T19:31:07Z

Jun's comments

commit 7c7f4da31191fce9935eecc65b6e3273bae520d1
Author: Guozhang Wang 
Date:   2017-05-17T19:52:50Z

bump up limit for class fanout to 40 for Sender class

commit c3aef033207f5f7da5ccc84c5dd6f71165d94f62
Author: Guozhang Wang 
Date:   2017-05-17T20:36:05Z

Jun's comments round two

commit ccf8fdc51c7aa066bf2f8098f0906ebf9f937a2d
Author: Guozhang Wang 
Date:   2017-05-17T23:44:48Z

rebased from trunk

commit 52ef07b060f59f7e0fe342faa5c040448ed1b2be
Author: Guozhang Wang 
Date:   2017-05-17T20:19:20Z

change the state lock to read-write lock

commit dd11d5d457ba09ed5a2a1a0d2d35712a0420b722
Author: Guozhang Wang 
Date:   2017-05-17T23:29:22Z

grab the read lock until append to local is done

commit 16c1791236d8bcf851fad93ef1b7a61170f62aa5
Author: Guozhang Wang 
Date:   2017-05-18T01:04:18Z

put the validation and return of metadata under the same lock




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3066: KAFKA-5231: Bump up producer epoch when sending ab...

2017-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3066


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5231) TransactinoCoordinator does not bump epoch when aborting open transactions

2017-05-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5231.
--
Resolution: Fixed

Issue resolved by pull request 3066
[https://github.com/apache/kafka/pull/3066]

> TransactinoCoordinator does not bump epoch when aborting open transactions
> --
>
> Key: KAFKA-5231
> URL: https://issues.apache.org/jira/browse/KAFKA-5231
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>Assignee: Guozhang Wang
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When the TransactionCoordinator receives an InitPidRequest when there is an 
> open transaction for a transactional id, it should first bump the epoch and 
> then abort the open transaction.
> Currently, it aborts the open transaction with the existing epoch, hence the 
> old producer is never fenced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3081: MINOR: Log transaction metadata state transitions ...

2017-05-17 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3081

MINOR: Log transaction metadata state transitions plus a few cleanups



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka 
minor-add-txn-transition-logging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3081.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3081


commit a68f04245a4bb192a0f0ea258b803f7bfd544ad0
Author: Jason Gustafson 
Date:   2017-05-18T00:46:14Z

MINOR: Log transaction metadata state transitions plus a few cleanups




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-5273) KafkaConsumer.committed() should get latest committed offsets from the server

2017-05-17 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5273:
---

 Summary: KafkaConsumer.committed() should get latest committed 
offsets from the server
 Key: KAFKA-5273
 URL: https://issues.apache.org/jira/browse/KAFKA-5273
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apurva Mehta


Currently, `KafkaConsumer.committed(topicPartition)` will return the current 
position of the consumer for that partition if the consumer has been assigned 
the partition. Otherwise, it will look up the committed position from the 
server. 

With the new producer `sendOffsetsToTransaction` API, we get into a state where 
we can commit the offsets for an assigned partition through the producer. The 
consumer doesn't update its cached view and subsequently returns a stale 
committed offset for its assigned partition. 

We should either update the consumer's cache when offsets are committed through 
the producer, or drop the cache entirely and always query the server for the 
committed offset. That way the `committed` method will always return the latest 
committed offset for any partition.
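
A sketch of the interaction described above, assuming an already-initialized 
transactional producer and a consumer that is assigned the partition:

{code}
import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.TopicPartition;

// Sketch of the staleness described above; configuration and initTransactions()
// are assumed to have happened elsewhere.
public class CommittedCacheSketch {

    static void illustrate(KafkaProducer<String, String> producer,
                           KafkaConsumer<String, String> consumer,
                           String groupId) {
        TopicPartition input = new TopicPartition("input", 0);

        producer.beginTransaction();
        // Offsets for the consumer's assigned partition are committed via the producer...
        producer.sendOffsetsToTransaction(
                Collections.singletonMap(input, new OffsetAndMetadata(42L)), groupId);
        producer.commitTransaction();

        // ...but committed() may answer from the consumer's local cache and miss
        // the offset committed above, which is the behaviour this ticket wants fixed.
        OffsetAndMetadata committed = consumer.committed(input);
        System.out.println("committed() returned: " + committed);
    }
}
{code}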



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (KAFKA-5272) Improve validation for Describe/Alter Configs (KIP-133)

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-5272 started by Ismael Juma.
--
> Improve validation for Describe/Alter Configs (KIP-133)
> ---
>
> Key: KAFKA-5272
> URL: https://issues.apache.org/jira/browse/KAFKA-5272
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>
> TopicConfigHandler.processConfigChanges() warns about certain topic configs. 
> We should include such validations in alter configs and reject the change if 
> the validation fails. Note that this should be done without changing the 
> behaviour of the ConfigCommand (as it does not have access to the broker 
> configs).
> We should consider adding other validations like KAFKA-4092 and KAFKA-4680.
> Finally, the AlterConfigsPolicy mentioned in KIP-133 will be added at the 
> same time.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5272) Improve validation for Describe/Alter Configs (KIP-133)

2017-05-17 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5272:
--

 Summary: Improve validation for Describe/Alter Configs (KIP-133)
 Key: KAFKA-5272
 URL: https://issues.apache.org/jira/browse/KAFKA-5272
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
Assignee: Ismael Juma


TopicConfigHandler.processConfigChanges() warns about certain topic configs. We 
should include such validations in alter configs and reject the change if the 
validation fails. Note that this should be done without changing the behaviour 
of the ConfigCommand (as it does not have access to the broker configs).

We should consider adding other validations like KAFKA-4092 and KAFKA-4680.

Finally, the AlterConfigsPolicy mentioned in KIP-133 will be added at the same 
time.
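
Since the policy hook has not landed yet, the following is only a rough sketch 
of the kind of pluggable check such validation could use; the trait name and 
method signature are assumptions, not the final KIP-133 API:
{code}
import org.apache.kafka.common.errors.PolicyViolationException

// Hypothetical: the trait name and signature are assumptions, not the final
// KIP-133 interface. It only illustrates a pluggable check that rejects an
// invalid topic config alteration.
trait AlterConfigsPolicySketch {
  def validate(resourceType: String, resourceName: String,
               configs: Map[String, String]): Unit
}

object RejectTinySegments extends AlterConfigsPolicySketch {
  override def validate(resourceType: String, resourceName: String,
                        configs: Map[String, String]): Unit =
    configs.get("segment.bytes").map(_.toLong).foreach { bytes =>
      if (bytes < 1024 * 1024)
        throw new PolicyViolationException(
          s"segment.bytes=$bytes for $resourceName is below the allowed minimum")
    }
}
{code}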



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3267) Describe/Alter Configs protocol, server and client (KIP-133)

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3267:
---
Summary: Describe/Alter Configs protocol, server and client (KIP-133)  
(was: Describe/Alter Configs protocol and server side implementation (KIP-133))

> Describe/Alter Configs protocol, server and client (KIP-133)
> 
>
> Key: KAFKA-3267
> URL: https://issues.apache.org/jira/browse/KAFKA-3267
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5261) Performance improvement of SimpleAclAuthorizer

2017-05-17 Thread Stephane Maarek (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014945#comment-16014945
 ] 

Stephane Maarek commented on KAFKA-5261:


This, and possibly the following:
{code}

  private def aclMatch(session: Session, operations: Operation, resource: 
Resource, principal: KafkaPrincipal, host: String, permissionType: 
PermissionType, acls: Set[Acl]): Boolean = {
acls.find { acl =>
  acl.permissionType == permissionType &&
(acl.principal == principal || acl.principal == Acl.WildCardPrincipal) 
&&
(operations == acl.operation || acl.operation == All) &&
(acl.host == host || acl.host == Acl.WildCardHost)
}.exists { acl =>
  authorizerLogger.debug(s"operation = $operations on resource = $resource 
from host = $host is $permissionType based on acl = $acl")
  true
}
  }
{code}

If the acls set is big, this could be costly to do over and over again.
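
A minimal sketch of the kind of result cache being suggested, assuming it can 
simply be cleared whenever aclCache changes; the class and names below are 
illustrative, not actual SimpleAclAuthorizer code:
{code}
import scala.collection.concurrent.TrieMap

// Illustrative only, not actual SimpleAclAuthorizer code: cache the result per
// (principal, host, operation, resource) and clear it on any ACL change.
class CachingAuthorizeSketch(underlying: (String, String, String, String) => Boolean) {
  private case class Key(principal: String, host: String, operation: String, resource: String)
  private val results = TrieMap.empty[Key, Boolean]

  def authorize(principal: String, host: String, operation: String, resource: String): Boolean =
    results.getOrElseUpdate(Key(principal, host, operation, resource),
      underlying(principal, host, operation, resource))

  // call this from the ACL change notification path, i.e. whenever aclCache is updated
  def invalidateAll(): Unit = results.clear()
}
{code}
The unbounded map is the obvious caveat; a real version would want to bound it 
or key it per resource so that an ACL change only invalidates the affected 
entries.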

> Performance improvement of SimpleAclAuthorizer
> --
>
> Key: KAFKA-5261
> URL: https://issues.apache.org/jira/browse/KAFKA-5261
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.2.1
>Reporter: Stephane Maarek
>
> Currently, looking at the KafkaApis class, it seems that every request going 
> through Kafka is also going through an authorize check:
> {code}
>   private def authorize(session: Session, operation: Operation, resource: 
> Resource): Boolean =
> authorizer.forall(_.authorize(session, operation, resource))
> {code}
> The SimpleAclAuthorizer logic runs through checks which all look to be done 
> in linear time (except on first run) proportional to the number of acls on a 
> specific resource. This operation is re-run every time a client tries to use 
> a Kafka Api, especially on the very often called `handleProducerRequest` and  
> `handleFetchRequest`
> I believe a cache could be built to store the result of the authorize call, 
> possibly allowing more expensive authorize() calls to happen, and reducing 
> greatly the CPU usage in the long run. The cache would be invalidated every 
> time a change happens to aclCache
> Thoughts before I try giving it a go with a PR? 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4278) Undocumented REST resources

2017-05-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned KAFKA-4278:
-

Assignee: (was: Bharat Viswanadham)

> Undocumented REST resources
> ---
>
> Key: KAFKA-4278
> URL: https://issues.apache.org/jira/browse/KAFKA-4278
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Gwen Shapira
>  Labels: newbie
>
> We've added some REST resources and I think we didn't document them.
> / - get version
> /connector-plugins - show installed connectors
> Those are the ones I've found (or rather, failed to find) - there could be 
> more.
> Perhaps the best solution is to auto-generate the REST documentation the way 
> we generate configuration docs?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5188) Add Integration tests for transactional producer

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014929#comment-16014929
 ] 

ASF GitHub Bot commented on KAFKA-5188:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2994


> Add Integration tests for transactional producer
> 
>
> Key: KAFKA-5188
> URL: https://issues.apache.org/jira/browse/KAFKA-5188
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2994: KAFKA-5188: Integration tests for transactions

2017-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2994


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5188) Add Integration tests for transactional producer

2017-05-17 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5188.

Resolution: Fixed

Issue resolved by pull request 2994
[https://github.com/apache/kafka/pull/2994]

> Add Integration tests for transactional producer
> 
>
> Key: KAFKA-5188
> URL: https://issues.apache.org/jira/browse/KAFKA-5188
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5268) TransactionsBounceTest occasionally sees INVALID_TXN_STATE errors

2017-05-17 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-5268:
--

Assignee: Jason Gustafson

> TransactionsBounceTest occasionally sees INVALID_TXN_STATE errors
> -
>
> Key: KAFKA-5268
> URL: https://issues.apache.org/jira/browse/KAFKA-5268
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>Assignee: Jason Gustafson
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> This test occasionally fails because we hit an invalid state on the 
> `TransactionCoordinator` when processing an EndTxnRequest.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5269) TransactionBounceTest occasionally fails due to partition errors

2017-05-17 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta reassigned KAFKA-5269:
---

Assignee: Apurva Mehta

> TransactionBounceTest occasionally fails due to partition errors
> 
>
> Key: KAFKA-5269
> URL: https://issues.apache.org/jira/browse/KAFKA-5269
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>
> The test sometimes encounters a partition level error 
> `UNKNOWN_TOPIC_OR_PARTITION` for the output topic. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-5271) Producer only needs to send AddOffsetsToTxn one time for each group

2017-05-17 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5271.

Resolution: Duplicate

Sorry for the noise. Created this just as Apurva was creating a similar issue.

> Producer only needs to send AddOffsetsToTxn one time for each group
> ---
>
> Key: KAFKA-5271
> URL: https://issues.apache.org/jira/browse/KAFKA-5271
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
> Fix For: 0.11.0.0
>
>
> Currently we send the AddOffsetsToTxn request every time the user calls 
> {{sendOffsets}}. We really only need to send it one time for each groupId 
> included in the transaction. This is a minor optimization since we don't 
> expect that it would be a common use case to add offsets more than once to a 
> transaction.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5271) Producer only needs to send AddOffsetsToTxn one time for each group

2017-05-17 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5271:
--

 Summary: Producer only needs to send AddOffsetsToTxn one time for 
each group
 Key: KAFKA-5271
 URL: https://issues.apache.org/jira/browse/KAFKA-5271
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


Currently we send the AddOffsetsToTxn request every time the user calls 
{{sendOffsets}}. We really only need to send it one time for each groupId 
included in the transaction. This is a minor optimization since we don't expect 
that it would be a common use case to add offsets more than once to a 
transaction.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5270) TransactionManager should send an `AddOffsetsToTxn` request only once per group per transaction

2017-05-17 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5270:
---

 Summary: TransactionManager should send an `AddOffsetsToTxn` 
request only once per group per transaction
 Key: KAFKA-5270
 URL: https://issues.apache.org/jira/browse/KAFKA-5270
 Project: Kafka
  Issue Type: Sub-task
Reporter: Apurva Mehta


Currently, we send the `AddOffsetsToTxn` request unconditionally every time, 
even if we receive multiple sendOffsets for the same group. We could keep track 
of the added groups in the TransactionManager and not resend this RPC multiple 
times for the same transaction as the subsequent instances add no new 
information.
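
A minimal sketch of the proposed bookkeeping, assuming the set can be reset when 
the transaction completes; the names are illustrative, not actual 
TransactionManager code:
{code}
import scala.collection.mutable

// Illustrative only, not actual TransactionManager code: remember which groups
// already had an AddOffsetsToTxn sent in the current transaction.
class AddOffsetsDedupSketch(sendAddOffsetsToTxn: String => Unit) {
  private val groupsInTransaction = mutable.Set.empty[String]

  def sendOffsets(groupId: String): Unit = {
    if (groupsInTransaction.add(groupId)) // true only the first time this group is seen
      sendAddOffsetsToTxn(groupId)
    // the TxnOffsetCommit carrying the offsets themselves would still be sent every call
  }

  def onTransactionCompleted(): Unit = groupsInTransaction.clear()
}
{code}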



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5269) TransactionBounceTest occasionally fails due to partition errors

2017-05-17 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014864#comment-16014864
 ] 

Apurva Mehta commented on KAFKA-5269:
-

A relevant stack trace:
{noformat}
[2017-05-17 15:15:38,145] ERROR aborting producer batches because the 
transaction manager is in an error state. 
(org.apache.kafka.clients.producer.internals.Sender:208)
org.apache.kafka.common.KafkaException: Unexpected error in 
TxnOffsetCommitResponse: This server does not host this topic-partition.
at 
org.apache.kafka.clients.producer.internals.TransactionManager$TxnOffsetCommitHandler.handleResponse(TransactionManager.java:855)
at 
org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:529)
at 
org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:100)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:378)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:196)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:152)
at java.lang.Thread.run(Thread.java:745)
[2017-05-17 15:15:38,150] ERR
{noformat}

> TransactionBounceTest occasionally fails due to partition errors
> 
>
> Key: KAFKA-5269
> URL: https://issues.apache.org/jira/browse/KAFKA-5269
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>
> The test sometimes encounters a partition level error 
> `UNKNOWN_TOPIC_OR_PARTITION` for the output topic. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5269) TransactionBounceTest occasionally fails due to partition errors

2017-05-17 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5269:

Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-4815

> TransactionBounceTest occasionally fails due to partition errors
> 
>
> Key: KAFKA-5269
> URL: https://issues.apache.org/jira/browse/KAFKA-5269
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>
> The test sometimes encounters a partition level error 
> `UNKNOWN_TOPIC_OR_PARTITION` for the output topic. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5269) TransactionBounceTest occasionally fails due to partition errors

2017-05-17 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5269:
---

 Summary: TransactionBounceTest occasionally fails due to partition 
errors
 Key: KAFKA-5269
 URL: https://issues.apache.org/jira/browse/KAFKA-5269
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta


The test sometimes encounters a partition level error 
`UNKNOWN_TOPIC_OR_PARTITION` for the output topic. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5268) TransactionsBounceTest occasionally sees INVALID_TXN_STATE errors

2017-05-17 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta updated KAFKA-5268:

Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-4815

> TransactionsBounceTest occasionally sees INVALID_TXN_STATE errors
> -
>
> Key: KAFKA-5268
> URL: https://issues.apache.org/jira/browse/KAFKA-5268
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> This test occasionally fails because we hit an invalid state on the 
> `TransactionCoordinator` when processing an EndTxnRequest.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5268) TransactionsBounceTest occasionally sees INVALID_TXN_STATE errors

2017-05-17 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5268:
---

 Summary: TransactionsBounceTest occasionally sees 
INVALID_TXN_STATE errors
 Key: KAFKA-5268
 URL: https://issues.apache.org/jira/browse/KAFKA-5268
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta
 Fix For: 0.11.0.0


This test occasionally fails because we hit an invalid state on the 
`TransactionCoordinator` when processing an EndTxnRequest.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5267) Simplify KStreamPrint using RichFunctions

2017-05-17 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5267:
--

 Summary: Simplify KStreamPrint using RichFunctions
 Key: KAFKA-5267
 URL: https://issues.apache.org/jira/browse/KAFKA-5267
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax
Priority: Minor


Currently, we have an own processor {{KStreamPrint}} in order to initialize 
Serdes. If we get support for {{RichFunctions}} as proposed in KAFKA-4125 we 
can remove {{KStreamPrint}} and implement {{PrintForeachAction}} as a rich 
function to do the Serde initialization.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4743) Add a tool to Reset Consumer Group Offsets

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014804#comment-16014804
 ] 

ASF GitHub Bot commented on KAFKA-4743:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2624


> Add a tool to Reset Consumer Group Offsets
> --
>
> Key: KAFKA-4743
> URL: https://issues.apache.org/jira/browse/KAFKA-4743
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer, core, tools
>Reporter: Jorge Quilcate
>Assignee: Jorge Quilcate
>  Labels: kip
> Fix For: 0.11.0.0
>
>
> Add an external tool to reset Consumer Group offsets, and achieve rewind over 
> the topics, without changing client-side code.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-122%3A+Add+Reset+Consumer+Group+Offsets+tooling



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2624: KAFKA-4743: Add Reset Consumer Group Offsets tooli...

2017-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2624


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4743) Add a tool to Reset Consumer Group Offsets

2017-05-17 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4743:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2624
[https://github.com/apache/kafka/pull/2624]

> Add a tool to Reset Consumer Group Offsets
> --
>
> Key: KAFKA-4743
> URL: https://issues.apache.org/jira/browse/KAFKA-4743
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer, core, tools
>Reporter: Jorge Quilcate
>Assignee: Jorge Quilcate
>  Labels: kip
> Fix For: 0.11.0.0
>
>
> Add an external tool to reset Consumer Group offsets, and achieve rewind over 
> the topics, without changing client-side code.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-122%3A+Add+Reset+Consumer+Group+Offsets+tooling



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5266) Follow-up improvements for consumer offset reset tool (KIP-122)

2017-05-17 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5266:
--

 Summary: Follow-up improvements for consumer offset reset tool 
(KIP-122)
 Key: KAFKA-5266
 URL: https://issues.apache.org/jira/browse/KAFKA-5266
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Jason Gustafson
 Fix For: 0.11.0.0


1. We should try to ensure that offsets are in range for the topic partition. 
We currently only verify this for the shift option.
2. If you provide a CSV file, you shouldn't need to specify one of the 
--all-topics or --topic options.
3. We currently support a "reset to current offsets" option if none of the 
supported reset options are provided. This seems kind of useless. Perhaps we 
should just enforce that one of the reset options is provided.
4. The command fails with an NPE if we cannot find one of the offsets we are 
trying to reset. It would be better to raise an exception with a friendlier 
message.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4830) Augment KStream.print() to allow users pass in extra parameters in the printed string

2017-05-17 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014715#comment-16014715
 ] 

Matthias J. Sax commented on KAFKA-4830:


[~james.c] KAFKA-4772 got merged. So feel free to work on this JIRA now :)

> Augment KStream.print() to allow users pass in extra parameters in the 
> printed string
> -
>
> Key: KAFKA-4830
> URL: https://issues.apache.org/jira/browse/KAFKA-4830
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: needs-kip, newbie
>
> Today {{KStream.print}} use the hard-coded result string as:
> {code}
> "[" + this.streamName + "]: " + keyToPrint + " , " + valueToPrint
> {code}
> And some users are asking to augment this so that they can customize the 
> output string as {{KStream.print(KeyValueMapper)}} :
> {code}
> "[" + this.streamName + "]: " + mapper.apply(keyToPrint, valueToPrint)
> {code}
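
A small sketch of what the proposed customization could look like from the user 
side; KeyValueMapper exists today, but the print(KeyValueMapper) overload is 
only proposed, so its use in the last comment line is hypothetical:
{code}
import org.apache.kafka.streams.kstream.KeyValueMapper

object PrintMapperSketch {
  // KeyValueMapper exists today; only the print(KeyValueMapper) overload is proposed.
  val mapper: KeyValueMapper[String, java.lang.Long, String] =
    new KeyValueMapper[String, java.lang.Long, String] {
      override def apply(key: String, value: java.lang.Long): String = s"$key -> $value"
    }

  // today:    "[" + streamName + "]: " + keyToPrint + " , " + valueToPrint
  // proposed: "[" + streamName + "]: " + mapper.apply(keyToPrint, valueToPrint)
}
{code}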



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5255) Auto generate request/response classes

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5255:
---
Fix Version/s: 0.11.1.0

> Auto generate request/response classes
> --
>
> Key: KAFKA-5255
> URL: https://issues.apache.org/jira/browse/KAFKA-5255
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
> Fix For: 0.11.1.0
>
>
> We should automatically generate the request/response classes from the 
> protocol definition. This is a major source of boilerplate, development 
> effort and inconsistency at the moment. If we auto-generate the classes, we 
> may also be able to avoid the intermediate `Struct` representation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-17 Thread BigData dev
Hi,
While trying to find more info, I found there is already a proposed KIP for
this:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-14+-+Tools+Standardization


Thanks,
Bharat

On Wed, May 17, 2017 at 12:38 PM, BigData dev 
wrote:

> Hi Ewen, Matthias,
> For common configuration across all the tools, I will work on that as part
> of other KIP by looking into all Kafka tools.
>
>
> Thanks,
> Bharat
>
>
> On Wed, May 17, 2017 at 9:40 AM, Matthias J. Sax 
> wrote:
>
>> +1
>>
>> I also second Ewen comment -- standardizing the common supported
>> parameters over all tools would be great!
>>
>>
>> -Matthias
>>
>> On 5/17/17 12:57 AM, Damian Guy wrote:
>> > +1
>> >
>> > On Wed, 17 May 2017 at 05:40 Ewen Cheslack-Postava 
>> > wrote:
>> >
>> >> +1 (binding)
>> >>
>> >> I mentioned this in the PR that triggered this:
>> >>
>> >>> KIP is accurate, though this is one of those things that we should
>> >> probably get a KIP for a standard set of config options across all
>> tools so
>> >> additions like this can just fall under the umbrella of that KIP...
>> >>
>> >> I think it would be great if someone wrote up a small KIP providing
>> some
>> >> standardized settings that we could get future additions automatically
>> >> umbrella'd under, e.g. no need to do a KIP if just adding a
>> consumer.config
>> >> or consumer-property config conforming to existing expectations for
>> other
>> >> tools. We could also standardize on a few other settings names that are
>> >> inconsistent across different tools and set out a clear path forward
>> for
>> >> future tools.
>> >>
>> >> I think I still have at least one open PR from when I first started on
>> the
>> >> project where I was trying to clean up some command line stuff to be
>> more
>> >> consistent. This has been an issue for many years now...
>> >>
>> >> -Ewen
>> >>
>> >>
>> >>
>> >> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska 
>> >> wrote:
>> >>
>> >>> +1 thanks.
>> >>>
>> >>> Eno
>>  On 16 May 2017, at 04:20, BigData dev 
>> wrote:
>> 
>>  Hi All,
>>  Given the simple and non-controversial nature of the KIP, I would
>> like
>> >> to
>>  start the voting process for KIP-157: Add consumer config options to
>>  streams reset tool
>> 
>>  *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
>> >>> 157+-+Add+consumer+config+options+to+streams+reset+tool
>>  > >>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
>> 
>> 
>>  The vote will run for a minimum of 72 hours.
>> 
>>  Thanks,
>> 
>>  Bharat
>> >>>
>> >>>
>> >>
>> >
>>
>>
>


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-17 Thread BigData dev
Hi Ewen, Matthias,
For common configuration across all the tools, I will work on that as part
of other KIP by looking into all Kafka tools.


Thanks,
Bharat


On Wed, May 17, 2017 at 9:40 AM, Matthias J. Sax 
wrote:

> +1
>
> I also second Ewen comment -- standardizing the common supported
> parameters over all tools would be great!
>
>
> -Matthias
>
> On 5/17/17 12:57 AM, Damian Guy wrote:
> > +1
> >
> > On Wed, 17 May 2017 at 05:40 Ewen Cheslack-Postava 
> > wrote:
> >
> >> +1 (binding)
> >>
> >> I mentioned this in the PR that triggered this:
> >>
> >>> KIP is accurate, though this is one of those things that we should
> >> probably get a KIP for a standard set of config options across all
> tools so
> >> additions like this can just fall under the umbrella of that KIP...
> >>
> >> I think it would be great if someone wrote up a small KIP providing some
> >> standardized settings that we could get future additions automatically
> >> umbrella'd under, e.g. no need to do a KIP if just adding a
> consumer.config
> >> or consumer-property config conforming to existing expectations for
> other
> >> tools. We could also standardize on a few other settings names that are
> >> inconsistent across different tools and set out a clear path forward for
> >> future tools.
> >>
> >> I think I still have at least one open PR from when I first started on
> the
> >> project where I was trying to clean up some command line stuff to be
> more
> >> consistent. This has been an issue for many years now...
> >>
> >> -Ewen
> >>
> >>
> >>
> >> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska 
> >> wrote:
> >>
> >>> +1 thanks.
> >>>
> >>> Eno
>  On 16 May 2017, at 04:20, BigData dev 
> wrote:
> 
>  Hi All,
>  Given the simple and non-controversial nature of the KIP, I would like
> >> to
>  start the voting process for KIP-157: Add consumer config options to
>  streams reset tool
> 
>  *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >>> 157+-+Add+consumer+config+options+to+streams+reset+tool
>   >>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
> 
> 
>  The vote will run for a minimum of 72 hours.
> 
>  Thanks,
> 
>  Bharat
> >>>
> >>>
> >>
> >
>
>


[jira] [Work started] (KAFKA-4278) Undocumented REST resources

2017-05-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4278 started by Bharat Viswanadham.
-
> Undocumented REST resources
> ---
>
> Key: KAFKA-4278
> URL: https://issues.apache.org/jira/browse/KAFKA-4278
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Gwen Shapira
>Assignee: Bharat Viswanadham
>  Labels: newbie
>
> We've added some REST resources and I think we didn't document them.
> / - get version
> /connector-plugins - show installed connectors
> Those are the ones I've found (or rather, failed to find) - there could be 
> more.
> Perhaps the best solution is to auto-generate the REST documentation the way 
> we generate configuration docs?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4278) Undocumented REST resources

2017-05-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned KAFKA-4278:
-

Assignee: Bharat Viswanadham

> Undocumented REST resources
> ---
>
> Key: KAFKA-4278
> URL: https://issues.apache.org/jira/browse/KAFKA-4278
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Gwen Shapira
>Assignee: Bharat Viswanadham
>  Labels: newbie
>
> We've added some REST resources and I think we didn't document them.
> / - get version
> /connector-plugins - show installed connectors
> Those are the ones I've found (or rather, failed to find) - there could be 
> more.
> Perhaps the best solution is to auto-generate the REST documentation the way 
> we generate configuration docs?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-3267) Describe/Alter Configs protocol and server side implementation (KIP-133)

2017-05-17 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3267:
---
Status: Patch Available  (was: In Progress)

> Describe/Alter Configs protocol and server side implementation (KIP-133)
> 
>
> Key: KAFKA-3267
> URL: https://issues.apache.org/jira/browse/KAFKA-3267
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Ismael Juma
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-1955) Explore disk-based buffering in new Kafka Producer

2017-05-17 Thread Brandon Bradley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014550#comment-16014550
 ] 

Brandon Bradley commented on KAFKA-1955:


The rebased patch applies cleanly to 68ad80f8. I'm trying to get it updated to 
trunk and submit a proper pull request.

> Explore disk-based buffering in new Kafka Producer
> --
>
> Key: KAFKA-1955
> URL: https://issues.apache.org/jira/browse/KAFKA-1955
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1955.patch, 
> KAFKA-1955-RABASED-TO-8th-AUG-2015.patch
>
>
> There are two approaches to using Kafka for capturing event data that has no 
> other "source of truth store":
> 1. Just write to Kafka and try hard to keep the Kafka cluster up as you would 
> a database.
> 2. Write to some kind of local disk store and copy from that to Kafka.
> The cons of the second approach are the following:
> 1. You end up depending on disks on all the producer machines. If you have 
> 10,000 producers, that is 10k places state is kept. These tend to fail a lot.
> 2. You can get data arbitrarily delayed
> 3. You still don't tolerate hard outages since there is no replication in the 
> producer tier
> 4. This tends to make problems with duplicates more common in certain failure 
> scenarios.
> There is one big pro, though: you don't have to keep Kafka running all the 
> time.
> So far we have done nothing in Kafka to help support approach (2), but people 
> have built a lot of buffering things. It's not clear that this is necessarily 
> bad.
> However implementing this in the new Kafka producer might actually be quite 
> easy. Here is an idea for how to do it. Implementation of this idea is 
> probably pretty easy but it would require some pretty thorough testing to see 
> if it was a success.
> The new producer maintains a pool of ByteBuffer instances which it attempts 
> to recycle and uses to buffer and send messages. When unsent data is queuing 
> waiting to be sent to the cluster it is hanging out in this pool.
> One approach to implementing a disk-backed buffer would be to slightly 
> generalize this so that the buffer pool has the option to use a mmap'd file 
> backend for its ByteBuffers. When the BufferPool was created with a 
> totalMemory setting of 1GB it would preallocate a 1GB sparse file and memory 
> map it, then chop the file into batchSize MappedByteBuffer pieces and 
> populate its buffer with those.
> Everything else would work normally except now all the buffered data would be 
> disk backed and in cases where there was significant backlog these would 
> start to fill up and page out.
> We currently allow messages larger than batchSize and to handle these we do a 
> one-off allocation of the necessary size. We would have to disallow this when 
> running in mmap mode. However since the disk buffer will be really big this 
> should not be a significant limitation as the batch size can be pretty big.
> We would want to ensure that the pooling always gives out the most recently 
> used ByteBuffer (I think it does). This way under normal operation where 
> requests are processed quickly a given buffer would be reused many times 
> before any physical disk write activity occurred.
> Note that although this lets the producer buffer very large amounts of data 
> the buffer isn't really fault-tolerant, since the ordering in the file isn't 
> known so there is no easy way to recover the producer's buffer in a failure. 
> So the scope of this feature would just be to provide a bigger buffer for 
> short outages or latency spikes in the Kafka cluster during which you would 
> hope you don't also experience failures in your producer processes.
> To complete the feature we would need to:
> a. Get some unit tests that would cover disk-backed usage
> b. Do some manual performance testing of this usage and understand the impact 
> on throughput.
> c. Do some manual testing of failure cases (i.e. if the broker goes down for 
> 30 seconds we should be able to keep taking writes) and observe how well the 
> producer handles the catch up time when it has a large backlog to get rid of.
> d. Add a new configuration for the producer to enable this, something like 
> use.file.buffers=true/false.
> e. Add documentation that covers these new options.
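
A minimal sketch of the preallocate-and-map idea described above, assuming plain 
java.nio mapping; it maps each slice separately rather than slicing one large 
mapping, and the path and sizes are made up for the example:
{code}
import java.io.RandomAccessFile
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

object MmapBufferSketch {
  // Preallocate a sparse file, memory-map it, and carve it into batchSize pieces
  // that a buffer pool could hand out. Path and sizes are made up for the example.
  def mmapBackedBuffers(path: String, totalMemory: Long, batchSize: Int): Seq[MappedByteBuffer] = {
    val file = new RandomAccessFile(path, "rw")
    try {
      file.setLength(totalMemory) // sparse preallocation, nothing written yet
      val channel = file.getChannel
      (0L until totalMemory by batchSize.toLong).map { offset =>
        val size = math.min(batchSize.toLong, totalMemory - offset)
        channel.map(FileChannel.MapMode.READ_WRITE, offset, size)
      }
    } finally {
      file.close() // the mappings stay valid after the channel is closed
    }
  }
}
{code}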



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5265) Move ACLs, Config, NodeVersions classes into org.apache.kafka.common

2017-05-17 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-5265:
---
Fix Version/s: 0.11.0.0

> Move ACLs, Config, NodeVersions classes into org.apache.kafka.common
> 
>
> Key: KAFKA-5265
> URL: https://issues.apache.org/jira/browse/KAFKA-5265
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>
> We should move the `ACLs`, `Config`, and `NodeVersions` classes into 
> `org.apache.kafka.common`.  That will make them easier to use in server code 
> as well as admin client code.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5265) Move ACLs, Config, NodeVersions classes into org.apache.kafka.common

2017-05-17 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-5265:
---
Affects Version/s: 0.11.0.0

> Move ACLs, Config, NodeVersions classes into org.apache.kafka.common
> 
>
> Key: KAFKA-5265
> URL: https://issues.apache.org/jira/browse/KAFKA-5265
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>
> We should move the `ACLs`, `Config`, and `NodeVersions` classes into 
> `org.apache.kafka.common`.  That will make them easier to use in server code 
> as well as admin client code.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5265) Move ACLs, Config, NodeVersions classes into org.apache.kafka.common

2017-05-17 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5265:
--

 Summary: Move ACLs, Config, NodeVersions classes into 
org.apache.kafka.common
 Key: KAFKA-5265
 URL: https://issues.apache.org/jira/browse/KAFKA-5265
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


We should move the `ACLs`, `Config`, and `NodeVersions` classes into 
`org.apache.kafka.common`.  That will make them easier to use in server code as 
well as admin client code.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4772) Exploit #peek to implement #print() and other methods

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014539#comment-16014539
 ] 

ASF GitHub Bot commented on KAFKA-4772:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2955


> Exploit #peek to implement #print() and other methods
> -
>
> Key: KAFKA-4772
> URL: https://issues.apache.org/jira/browse/KAFKA-4772
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: james chien
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 0.11.0.0
>
>
> From: https://github.com/apache/kafka/pull/2493#pullrequestreview-22157555
> Things that I can think of:
> - print / writeAsTest can be a special impl of peek; KStreamPrint etc can be 
> removed.
> - consider collapse KStreamPeek with KStreamForeach with a flag parameter 
> indicating if the acted key-value pair should still be forwarded.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2955: KAFKA-4772: Exploit #peek to implement #print() an...

2017-05-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2955


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-4772) Exploit #peek to implement #print() and other methods

2017-05-17 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4772.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 2955
[https://github.com/apache/kafka/pull/2955]

> Exploit #peek to implement #print() and other methods
> -
>
> Key: KAFKA-4772
> URL: https://issues.apache.org/jira/browse/KAFKA-4772
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: james chien
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 0.11.0.0
>
>
> From: https://github.com/apache/kafka/pull/2493#pullrequestreview-22157555
> Things that I can think of:
> - print / writeAsTest can be a special impl of peek; KStreamPrint etc can be 
> removed.
> - consider collapse KStreamPeek with KStreamForeach with a flag parameter 
> indicating if the acted key-value pair should still be forwarded.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5226) NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize

2017-05-17 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-5226:
--

Assignee: Bill Bejeck  (was: Matthias J. Sax)

> NullPointerException (NPE) in SourceNodeRecordDeserializer.deserialize
> --
>
> Key: KAFKA-5226
> URL: https://issues.apache.org/jira/browse/KAFKA-5226
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1
> Environment: 64-bit Amazon Linux, JDK8
>Reporter: Ian Springer
>Assignee: Bill Bejeck
> Attachments: kafka.log
>
>
> I saw the following NPE in our Kafka Streams app, which has 3 nodes running 
> on 3 separate machines.. Out of hundreds of messages processed, the NPE only 
> occurred twice. I are not sure of the cause, so I am unable to reproduce it. 
> I'm hoping the Kafka Streams team can guess the cause based on the stack 
> trace. If I can provide any additional details about our app, please let me 
> know.
>  
> {code}
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka version : 0.10.2.1
> INFO  2017-05-10 02:58:26,021 org.apache.kafka.common.utils.AppInfoParser  
> Kafka commitId : e89bffd6b2eff799
> INFO  2017-05-10 02:58:26,031 o.s.context.support.DefaultLifecycleProcessor  
> Starting beans in phase 0
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from CREATED to RUNNING.
> INFO  2017-05-10 02:58:26,075 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] Started 
> Kafka Stream process
> INFO  2017-05-10 02:58:26,086 o.a.k.c.consumer.internals.AbstractCoordinator  
> Discovered coordinator p1kaf1.prod.apptegic.com:9092 (id: 2147482646 rack: 
> null) for group evergage-app.
> INFO  2017-05-10 02:58:26,126 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [] for group evergage-app
> INFO  2017-05-10 02:58:26,126 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 02:58:26,127 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 02:58:27,712 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 18
> INFO  2017-05-10 02:58:27,716 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 02:58:27,716 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 02:58:27,729 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> state stores
> INFO  2017-05-10 02:58:27,731 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 02:58:27,742 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> [14 hours pass...]
> INFO  2017-05-10 16:21:27,476 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [us.app.Trigger-0] for group 
> evergage-app
> INFO  2017-05-10 16:21:27,477 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from RUNNING to REBALANCING.
> INFO  2017-05-10 16:21:27,482 o.a.k.c.consumer.internals.AbstractCoordinator  
> (Re-)joining group evergage-app
> INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.AbstractCoordinator  
> Successfully joined group evergage-app with generation 19
> INFO  2017-05-10 16:21:27,489 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Setting newly assigned partitions [us.app.Trigger-0] for group evergage-app
> INFO  2017-05-10 16:21:27,489 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to REBALANCING.
> INFO  2017-05-10 16:21:27,489 
> o.a.kafka.streams.processor.internals.StreamTask  task [0_0] Initializing 
> processor nodes of the topology
> INFO  2017-05-10 16:21:27,493 org.apache.kafka.streams.KafkaStreams  
> stream-client [evergage-app-bd9c9868-4b9b-4d2e-850f-9b5bec1fc0a9] State 
> transition from REBALANCING to RUNNING.
> INFO  2017-05-10 16:21:30,584 o.a.k.c.consumer.internals.ConsumerCoordinator  
> Revoking previously assigned partitions [us.app.Trigger-0] for group 
> evergage-app
> INFO  2017-05-10 16:21:30,584 

[jira] [Work started] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2

2017-05-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4171 started by Bharat Viswanadham.
-
> Kafka-connect prints outs keystone and truststore password in log2
> --
>
> Key: KAFKA-4171
> URL: https://issues.apache.org/jira/browse/KAFKA-4171
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Akshath Patkar
>Assignee: Bharat Viswanadham
>
> Kafka-connect prints outs keystone and truststore password in log
> [2016-09-14 16:30:33,971] WARN The configuration 
> consumer.ssl.truststore.password = X was supplied but isn't a known 
> config. (org.apache.kafka.clients.consumer.ConsumerConfig:186)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5264) kafka.api.SslProducerSendTest.testSendToPartition transient failure

2017-05-17 Thread Xavier Léauté (JIRA)
Xavier Léauté created KAFKA-5264:


 Summary: kafka.api.SslProducerSendTest.testSendToPartition 
transient failure
 Key: KAFKA-5264
 URL: https://issues.apache.org/jira/browse/KAFKA-5264
 Project: Kafka
  Issue Type: Sub-task
Reporter: Xavier Léauté


Error Message

{code}
org.apache.kafka.common.errors.TopicExistsException: Topic 'topic' already 
exists.
{code}

Stacktrace
{code}
org.apache.kafka.common.errors.TopicExistsException: Topic 'topic' already 
exists.
{code}

Standard Output
{code}
[2017-05-17 17:24:31,041] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-05-17 17:24:32,898] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-05-17 17:24:32,907] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-05-17 17:24:34,792] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-05-17 17:24:34,798] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-05-17 17:24:36,665] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-05-17 17:24:36,673] ERROR ZKShutdownHandler is not registered, so 
ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
changes (org.apache.zookeeper.server.ZooKeeperServer:472)
[2017-05-17 17:24:38,657] ERROR [Controller 1]: Error while electing or 
becoming controller on broker 1 (kafka.controller.KafkaController:105)
org.I0Itec.zkclient.exception.ZkInterruptedException: 
java.lang.InterruptedException
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1003)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1100)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1095)
at kafka.utils.ZkUtils.readDataMaybeNull(ZkUtils.scala:659)
at 
kafka.utils.ReplicationUtils$.getLeaderIsrAndEpochForPartition(ReplicationUtils.scala:75)
at 
kafka.utils.ZkUtils$$anonfun$getPartitionLeaderAndIsrForTopics$1.apply(ZkUtils.scala:708)
at 
kafka.utils.ZkUtils$$anonfun$getPartitionLeaderAndIsrForTopics$1.apply(ZkUtils.scala:707)
at 
scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
at 
scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
at 
kafka.utils.ZkUtils.getPartitionLeaderAndIsrForTopics(ZkUtils.scala:707)
at 
kafka.controller.KafkaController.updateLeaderAndIsrCache(KafkaController.scala:736)
at 
kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:664)
at 
kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:255)
at kafka.controller.KafkaController.elect(KafkaController.scala:1565)
at 
kafka.controller.KafkaController$Reelect$.process(KafkaController.scala:1512)
at 
kafka.controller.KafkaController$ControllerEventThread.doWork(KafkaController.scala:1154)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1406)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1210)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1241)
at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:125)
at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1104)
at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1100)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:991)
... 19 more
[2017-05-17 17:24:38,660] ERROR [controller-event-thread]: Error processing 
event Reelect (kafka.controller.KafkaController$ControllerEventThread:105)
org.I0Itec.zkclient.exception.ZkInterruptedException: 
java.lang.InterruptedException
at 

[jira] [Commented] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2

2017-05-17 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014528#comment-16014528
 ] 

Bharat Viswanadham commented on KAFKA-4171:
---

Hi Ewen,
Please let me know if any more work is pending for this work item.

> Kafka-connect prints outs keystone and truststore password in log2
> --
>
> Key: KAFKA-4171
> URL: https://issues.apache.org/jira/browse/KAFKA-4171
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Akshath Patkar
>Assignee: Bharat Viswanadham
>
> Kafka-connect prints outs keystone and truststore password in log
> [2016-09-14 16:30:33,971] WARN The configuration 
> consumer.ssl.truststore.password = X was supplied but isn't a known 
> config. (org.apache.kafka.clients.consumer.ConsumerConfig:186)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2

2017-05-17 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014526#comment-16014526
 ] 

Bharat Viswanadham commented on KAFKA-4171:
---

Hi,
I think this issue has been resolved. The logUnused() method now logs only the 
key and does not print the value, so this issue will not be seen.


> Kafka-connect prints outs keystone and truststore password in log2
> --
>
> Key: KAFKA-4171
> URL: https://issues.apache.org/jira/browse/KAFKA-4171
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Akshath Patkar
>Assignee: Bharat Viswanadham
>
> Kafka-connect prints outs keystone and truststore password in log
> [2016-09-14 16:30:33,971] WARN The configuration 
> consumer.ssl.truststore.password = X was supplied but isn't a known 
> config. (org.apache.kafka.clients.consumer.ConsumerConfig:186)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2

2017-05-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned KAFKA-4171:
-

Assignee: Bharat Viswanadham

> Kafka-connect prints outs keystone and truststore password in log2
> --
>
> Key: KAFKA-4171
> URL: https://issues.apache.org/jira/browse/KAFKA-4171
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Akshath Patkar
>Assignee: Bharat Viswanadham
>
> Kafka-connect prints outs keystone and truststore password in log
> [2016-09-14 16:30:33,971] WARN The configuration 
> consumer.ssl.truststore.password = X was supplied but isn't a known 
> config. (org.apache.kafka.clients.consumer.ConsumerConfig:186)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-1313) Support adding replicas to existing topic partitions via kafka-topics tool without manually setting broker assignments

2017-05-17 Thread Alexis Lesieur (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014491#comment-16014491
 ] 

Alexis Lesieur commented on KAFKA-1313:
---

I feel like an "easy" fix for this would at least be to enable 
{code}bin/kafka-reassign-partitions.sh --generate 
--topics-to-move-json-file{code} to take an optional parameter for each topic 
that would be the replication factor (rf).
That way, at least, we would have an easy way to generate assignments with a 
higher rf.

> Support adding replicas to existing topic partitions via kafka-topics tool 
> without manually setting broker assignments
> --
>
> Key: KAFKA-1313
> URL: https://issues.apache.org/jira/browse/KAFKA-1313
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Affects Versions: 0.8.0
>Reporter: Marc Labbe
>Assignee: Sreepathi Prasanna
>  Labels: newbie++
>
> There is currently no easy way to add replicas to an existing topic 
> partitions.
> For example, topic create-test has been created with ReplicationFactor=1: 
> Topic:create-test  PartitionCount:3ReplicationFactor:1 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1 Isr: 1
> Topic: create-test Partition: 1Leader: 2   Replicas: 2 Isr: 2
> Topic: create-test Partition: 2Leader: 3   Replicas: 3 Isr: 3
> I would like to increase the ReplicationFactor=2 (or more) so it shows up 
> like this instead.
> Topic:create-test  PartitionCount:3ReplicationFactor:2 Configs:
> Topic: create-test Partition: 0Leader: 1   Replicas: 1,2 Isr: 1,2
> Topic: create-test Partition: 1Leader: 2   Replicas: 2,3 Isr: 2,3
> Topic: create-test Partition: 2Leader: 3   Replicas: 3,1 Isr: 3,1
> Use cases for this:
> - adding brokers and thus increase fault tolerance
> - fixing human errors for topics created with wrong values



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5258) move all partition and replica state transition rules into their states

2017-05-17 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-5258:

Description: 
Today the PartitionStateMachine and ReplicaStateMachine defines and asserts the 
valid state transitions inline for each state, looking something like:
{code}
private def handleStateChange(...) {
  targetState match {
case stateA => {
  assertValidPreviousStates(topicAndPartition, List(stateX, stateY, 
stateZ), stateA)
  // actual work
}
case stateB => {
  assertValidPreviousStates(topicAndPartition, List(stateD, stateE), stateB)
  // actual work
}
  }
}
{code}
It would be cleaner to move all partition and replica state transition rules 
into their states and simply do the assertion at the top of the 
handleStateChange method like so:
{code}
private def handleStateChange(...) {
  assertValidTransition(targetState)
  targetState match {
case stateA => {
  // actual work
}
case stateB => {
  // actual work
}
  }
}

sealed trait State {
  def state: Byte
  def validPreviousStates: Set[State]
}

case object StateA extends State {
  val state: Byte = 1
  val validPreviousStates: Set[State] = Set(StateX)
}

case object StateB extends State {
  val state: Byte = 2
  val validPreviousStates: Set[State] = Set(StateX, StateY, StateZ)
}
{code}

  was:
Today the PartitionStateMachine and ReplicaStateMachine defines and asserts the 
valid state transitions inline for each state, looking something like:
{code}
private def handleStateChange(...) {
  targetState match {
case stateA => {
  assertValidPreviousStates(topicAndPartition, List(stateX, stateY, 
stateZ), stateA)
  // actual work
}
case stateB => {
  assertValidPreviousStates(topicAndPartition, List(stateD, stateE), stateB)
  // actual work
}
  }
}
{code}
It would be cleaner to move all partition and replica state transition rules 
into their states and simply do the assertion at the top of the handleStateChange 
method like so:
{code}
private def handleStateChange(...) {
  assertValidTransition(targetState)
  targetState match {
case stateA => {
  // actual work
}
case stateB => {
  // actual work
}
  }
}

sealed trait State {
  def state: Byte
  def validPreviousStates: Set[State]
}

case object StateA extends State {
  val state: Byte = 1
  val validPreviousStates: Set[State] = Set(StateX)
}

case object StateB extends State {
  val state: Byte = 2
  val validPreviousStates: Set[State] = Set(StateX, StateY, StateZ)
}
{code}


> move all partition and replica state transition rules into their states
> ---
>
> Key: KAFKA-5258
> URL: https://issues.apache.org/jira/browse/KAFKA-5258
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Minor
>
> Today the PartitionStateMachine and ReplicaStateMachine define and assert 
> the valid state transitions inline for each state, looking something like:
> {code}
> private def handleStateChange(...) {
>   targetState match {
> case stateA => {
>   assertValidPreviousStates(topicAndPartition, List(stateX, stateY, 
> stateZ), stateA)
>   // actual work
> }
> case stateB => {
>   assertValidPreviousStates(topicAndPartition, List(stateD, stateE), 
> stateB)
>   // actual work
> }
>   }
> }
> {code}
> It would be cleaner to move all partition and replica state transition rules 
> into their states and simply do the assertion at the top of the 
> handleStateChange method like so:
> {code}
> private def handleStateChange(...) {
>   assertValidTransition(targetState)
>   targetState match {
> case stateA => {
>   // actual work
> }
> case stateB => {
>   // actual work
> }
>   }
> }
> sealed trait State {
>   def state: Byte
>   def validPreviousStates: Set[State]
> }
> case object StateA extends State {
>   val state: Byte = 1
>   val validPreviousStates: Set[State] = Set(StateX)
> }
> case object StateB extends State {
>   val state: Byte = 2
>   val validPreviousStates: Set[State] = Set(StateX, StateY, StateZ)
> }
> {code}
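As a rough illustration of the proposed shape, the shared assertion could be built 
directly on each state's validPreviousStates set, along these lines (the signature 
and message wording are illustrative, not the actual implementation):
{code}
// Hypothetical helper: reject a transition unless the current state is one of
// the target state's declared valid previous states.
private def assertValidTransition(topicAndPartition: TopicAndPartition,
                                  currentState: State,
                                  targetState: State): Unit = {
  if (!targetState.validPreviousStates.contains(currentState))
    throw new IllegalStateException(
      s"$topicAndPartition should be in one of " +
      s"${targetState.validPreviousStates.mkString(", ")} states before moving to " +
      s"$targetState, but it is currently in $currentState")
}
{code}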



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5258) move all partition and replica state transition rules into their states

2017-05-17 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-5258:

Description: 
Today the PartitionStateMachine and ReplicaStateMachine define and assert the 
valid state transitions inline for each state, looking something like:
{code}
private def handleStateChange(...) {
  targetState match {
case stateA => {
  assertValidPreviousStates(topicAndPartition, List(stateX, stateY, 
stateZ), stateA)
  // actual work
}
case stateB => {
  assertValidPreviousStates(topicAndPartition, List(stateD, stateE), stateB)
  // actual work
}
  }
}
{code}
It would be cleaner to move all partition and replica state transition rules 
into their states and simply do the assertion at the top of the handleStateChange 
method like so:
{code}
private def handleStateChange(...) {
  assertValidTransition(targetState)
  targetState match {
case stateA => {
  // actual work
}
case stateB => {
  // actual work
}
  }
}

sealed trait State {
  def state: Byte
  def validPreviousStates: Set[State]
}

case object StateA extends State {
  val state: Byte = 1
  val validPreviousStates: Set[State] = Set(StateX)
}

case object StateB extends State {
  val state: Byte = 2
  val validPreviousStates: Set[State] = Set(StateX, StateY, StateZ)
}
{code}

  was:
Today the PartitionStateMachine and ReplicaStateMachine define and assert the 
valid state transitions inline for each state, looking something like:
{code}
private def handleStateChange(...) {
  targetState match {
case stateA => {
  assertValidPreviousStates(topicAndPartition, List(stateX, stateY, 
stateZ), stateA)
  // actual work
}
case stateB => {
  assertValidPreviousStates(topicAndPartition, List(stateD, stateE), stateB)
  // actual work
}
  }
}
{code}
It would be cleaner to move all partition and replica state transition rules 
into a map and simply do the assertion at the top of the handleStateChange 
method like so:
{code}
private val validPreviousStates: Map[State, Set[State]] = ...
private def handleStateChange(...) {
  assertValidTransition(targetState)
  targetState match {
case stateA => {
  // actual work
}
case stateB => {
  // actual work
}
  }
}
{code}


> move all partition and replica state transition rules into their states
> ---
>
> Key: KAFKA-5258
> URL: https://issues.apache.org/jira/browse/KAFKA-5258
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Minor
>
> Today the PartitionStateMachine and ReplicaStateMachine define and assert 
> the valid state transitions inline for each state, looking something like:
> {code}
> private def handleStateChange(...) {
>   targetState match {
> case stateA => {
>   assertValidPreviousStates(topicAndPartition, List(stateX, stateY, 
> stateZ), stateA)
>   // actual work
> }
> case stateB => {
>   assertValidPreviousStates(topicAndPartition, List(stateD, stateE), 
> stateB)
>   // actual work
> }
>   }
> }
> {code}
> It would be cleaner to move all partition and replica state transition rules 
> into their states and simply do the assertion at the top of the handleStateChange 
> method like so:
> {code}
> private def handleStateChange(...) {
>   assertValidTransition(targetState)
>   targetState match {
> case stateA => {
>   // actual work
> }
> case stateB => {
>   // actual work
> }
>   }
> }
> sealed trait State {
>   def state: Byte
>   def validPreviousStates: Set[State]
> }
> case object StateA extends State {
>   val state: Byte = 1
>   val validPreviousStates: Set[State] = Set(StateX)
> }
> case object StateB extends State {
>   val state: Byte = 2
>   val validPreviousStates: Set[State] = Set(StateX, StateY, StateZ)
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5258) move all partition and replica state transition rules into their states

2017-05-17 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-5258:

Summary: move all partition and replica state transition rules into their 
states  (was: move all partition and replica state transition rules into a map)

> move all partition and replica state transition rules into their states
> ---
>
> Key: KAFKA-5258
> URL: https://issues.apache.org/jira/browse/KAFKA-5258
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Minor
>
> Today the PartitionStateMachine and ReplicaStateMachine define and assert 
> the valid state transitions inline for each state, looking something like:
> {code}
> private def handleStateChange(...) {
>   targetState match {
> case stateA => {
>   assertValidPreviousStates(topicAndPartition, List(stateX, stateY, 
> stateZ), stateA)
>   // actual work
> }
> case stateB => {
>   assertValidPreviousStates(topicAndPartition, List(stateD, stateE), 
> stateB)
>   // actual work
> }
>   }
> }
> {code}
> It would be cleaner to move all partition and replica state transition rules 
> into a map and simply do the assertion at the top of the handleStateChange 
> method like so:
> {code}
> private val validPreviousStates: Map[State, Set[State]] = ...
> private def handleStateChange(...) {
>   assertValidTransition(targetState)
>   targetState match {
> case stateA => {
>   // actual work
> }
> case stateB => {
>   // actual work
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-17 Thread Matthias J. Sax
+1

I also second Ewen comment -- standardizing the common supported
parameters over all tools would be great!


-Matthias

On 5/17/17 12:57 AM, Damian Guy wrote:
> +1
> 
> On Wed, 17 May 2017 at 05:40 Ewen Cheslack-Postava 
> wrote:
> 
>> +1 (binding)
>>
>> I mentioned this in the PR that triggered this:
>>
>>> KIP is accurate, though this is one of those things that we should
>> probably get a KIP for a standard set of config options across all tools so
>> additions like this can just fall under the umbrella of that KIP...
>>
>> I think it would be great if someone wrote up a small KIP providing some
>> standardized settings that we could get future additions automatically
>> umbrella'd under, e.g. no need to do a KIP if just adding a consumer.config
>> or consumer-property config conforming to existing expectations for other
>> tools. We could also standardize on a few other settings names that are
>> inconsistent across different tools and set out a clear path forward for
>> future tools.
>>
>> I think I still have at least one open PR from when I first started on the
>> project where I was trying to clean up some command line stuff to be more
>> consistent. This has been an issue for many years now...
>>
>> -Ewen
>>
>>
>>
>> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska 
>> wrote:
>>
>>> +1 thanks.
>>>
>>> Eno
 On 16 May 2017, at 04:20, BigData dev  wrote:

 Hi All,
 Given the simple and non-controversial nature of the KIP, I would like
>> to
 start the voting process for KIP-157: Add consumer config options to
 streams reset tool

 https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-+Add+consumer+config+options+to+streams+reset+tool


 The vote will run for a minimum of 72 hours.

 Thanks,

 Bharat
>>>
>>>
>>
> 





[jira] [Commented] (KAFKA-5245) KStream builder should capture serdes

2017-05-17 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014328#comment-16014328
 ] 

Matthias J. Sax commented on KAFKA-5245:


You can just assign the JIRA to yourself and get started :) This might help 
too: https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes

If you have any further question, just let us know.

> KStream builder should capture serdes 
> --
>
> Key: KAFKA-5245
> URL: https://issues.apache.org/jira/browse/KAFKA-5245
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Yeva Byzek
>Priority: Minor
>  Labels: beginner, newbie
>
> Even if one specifies a serdes in `builder.stream`, later a call to 
> `groupByKey` may require the serdes again if it differs from the configured 
> streams app serdes. The preferred behavior is that if no serdes is provided 
> to `groupByKey`, it should use whatever was provided in `builder.stream` and 
> not what was in the app.
> From the current docs:
> “When to set explicit serdes: Variants of groupByKey exist to override the 
> configured default serdes of your application, which you must do if the key 
> and/or value types of the resulting KGroupedStream do not match the 
> configured default serdes.”
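
For illustration, a small standalone sketch of the behavior being described, against 
the 0.10.2-era DSL (topic and class names are placeholders):
{code}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class GroupByKeySerdesExample {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();

        // Serdes are passed explicitly when the stream is created ...
        KStream<String, Long> stream =
                builder.stream(Serdes.String(), Serdes.Long(), "input-topic");

        // ... but they are not captured: a plain groupByKey() falls back to the
        // application's configured default serdes, so they have to be repeated
        // here whenever the defaults don't match <String, Long>.
        KGroupedStream<String, Long> grouped =
                stream.groupByKey(Serdes.String(), Serdes.Long());
    }
}
{code}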



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2017-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014322#comment-16014322
 ] 

ASF GitHub Bot commented on KAFKA-4222:
---

GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3080

KAFKA-4222: QueryableIntegrationTest.queryOnRebalance transient failure

Don't produce messages on a separate thread continuously. Just produce one 
of each value and stop.
Close the producer once finished.
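
Roughly the pattern being described, as a self-contained sketch rather than the 
actual test code (topic, values and class name are placeholders):
{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceOnceAndStop {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Send one record per value synchronously instead of producing on a
        // background thread, then close the producer via try-with-resources.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (String value : new String[] {"a", "b", "c"}) {
                producer.send(new ProducerRecord<>("output-topic", value)).get();
            }
        }
    }
}
{code}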

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka qs-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3080.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3080






> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Eno Thereska
> Fix For: 0.11.0.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-5245) KStream builder should capture serdes

2017-05-17 Thread anugrah (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014310#comment-16014310
 ] 

anugrah edited comment on KAFKA-5245 at 5/17/17 4:09 PM:
-

Hey, I would like to work on this. Any pointers as to how I am supposed to 
proceed? I am not clear on what needs to be done here. Can you please help?


was (Author: anukin):
Hey, I would like to work on this. Any pointers as to how I am supposed to 
proceed?

> KStream builder should capture serdes 
> --
>
> Key: KAFKA-5245
> URL: https://issues.apache.org/jira/browse/KAFKA-5245
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Yeva Byzek
>Priority: Minor
>  Labels: beginner, newbie
>
> Even if one specifies a serdes in `builder.stream`, later a call to 
> `groupByKey` may require the serdes again if it differs from the configured 
> streams app serdes. The preferred behavior is that if no serdes is provided 
> to `groupByKey`, it should use whatever was provided in `builder.stream` and 
> not what was in the app.
> From the current docs:
> “When to set explicit serdes: Variants of groupByKey exist to override the 
> configured default serdes of your application, which you must do if the key 
> and/or value types of the resulting KGroupedStream do not match the 
> configured default serdes.”



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3080: KAFKA-4222: QueryableIntegrationTest.queryOnRebala...

2017-05-17 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3080

KAFKA-4222: QueryableIntegrationTest.queryOnRebalance transient failure

Don't produce messages on a separate thread continuously. Just produce one 
of each value and stop.
Close the producer once finished.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka qs-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3080.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3080






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5245) KStream builder should capture serdes

2017-05-17 Thread anugrah (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014310#comment-16014310
 ] 

anugrah commented on KAFKA-5245:


Hey, I would like to work on this. Any pointers as to how I am supposed to 
proceed?

> KStream builder should capture serdes 
> --
>
> Key: KAFKA-5245
> URL: https://issues.apache.org/jira/browse/KAFKA-5245
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Yeva Byzek
>Priority: Minor
>  Labels: beginner, newbie
>
> Even if one specifies a serdes in `builder.stream`, later a call to 
> `groupByKey` may require the serdes again if it differs from the configured 
> streams app serdes. The preferred behavior is that if no serdes is provided 
> to `groupByKey`, it should use whatever was provided in `builder.stream` and 
> not what was in the app.
> From the current docs:
> “When to set explicit serdes: Variants of groupByKey exist to override the 
> configured default serdes of your application, which you must do if the key 
> and/or value types of the resulting KGroupedStream do not match the 
> configured default serdes.”



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5263) kafka-clients consume 100% CPU with manual partition assignment when network connection is lost

2017-05-17 Thread Konstantin Smirnov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Smirnov updated KAFKA-5263:
--
Attachment: cpu_consuming_profile.csv

Captured profile

> kafka-clients consume 100% CPU with manual partition assignment when network 
> connection is lost
> ---
>
> Key: KAFKA-5263
> URL: https://issues.apache.org/jira/browse/KAFKA-5263
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0, 0.10.1.1, 0.10.2.0, 0.10.2.1
>Reporter: Konstantin Smirnov
> Attachments: cpu_consuming.log, cpu_consuming_profile.csv
>
>
> Noticed that loss of the connection to the Kafka broker leads kafka-clients to 
> consume 100% CPU. The bug only appears when manual partition assignment is 
> used. It has been present since version 0.10.1.0. The bug is quite similar to 
> KAFKA-1642.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5263) kafka-clients consume 100% CPU with manual partition assignment when network connection is lost

2017-05-17 Thread Konstantin Smirnov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Smirnov updated KAFKA-5263:
--
Attachment: cpu_consuming.log

Sample application output

> kafka-clients consume 100% CPU with manual partition assignment when network 
> connection is lost
> ---
>
> Key: KAFKA-5263
> URL: https://issues.apache.org/jira/browse/KAFKA-5263
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0, 0.10.1.1, 0.10.2.0, 0.10.2.1
>Reporter: Konstantin Smirnov
> Attachments: cpu_consuming.log
>
>
> Noticed that loss of the connection to the Kafka broker leads kafka-clients to 
> consume 100% CPU. The bug only appears when manual partition assignment is 
> used. It has been present since version 0.10.1.0. The bug is quite similar to 
> KAFKA-1642.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5263) kafka-clients consume 100% CPU with manual partition assignment when network connection is lost

2017-05-17 Thread Konstantin Smirnov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Smirnov updated KAFKA-5263:
--
Summary: kafka-clients consume 100% CPU with manual partition assignment 
when network connection is lost  (was: kafka-clients consume 100% CPU with 
manual partition assignment)

> kafka-clients consume 100% CPU with manual partition assignment when network 
> connection is lost
> ---
>
> Key: KAFKA-5263
> URL: https://issues.apache.org/jira/browse/KAFKA-5263
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0, 0.10.1.1, 0.10.2.0, 0.10.2.1
>Reporter: Konstantin Smirnov
>
> Noticed that loss of the connection to the Kafka broker leads kafka-clients to 
> consume 100% CPU. The bug only appears when manual partition assignment is 
> used. It has been present since version 0.10.1.0. The bug is quite similar to 
> KAFKA-1642.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-5263) kafka-clients consume 100% CPU with manual partition assignment

2017-05-17 Thread Konstantin Smirnov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014226#comment-16014226
 ] 

Konstantin Smirnov edited comment on KAFKA-5263 at 5/17/17 3:36 PM:


Sample code leading to the trouble:
{code}
public static void main(String[] args) {
    System.out.println("Starting consumer");
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "test12345");
    props.put("reconnect.backoff.ms", "1000");
    props.put("retry.backoff.ms", "1000");
    props.put("session.timeout.ms", "1");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        String topic = "test-topic";
        //consumer.subscribe(Arrays.asList(topic));
        List<TopicPartition> partitions = new ArrayList<>();
        for (PartitionInfo partition : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(topic, partition.partition()));
        }
        consumer.assign(partitions);
        for (;;) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            if (!records.isEmpty()) {
                System.out.println("Records aren't empty!");
            }
            records.forEach(System.out::println);
        }
    }
}
{code}
Steps to reproduce:
* Start Kafka broker
* Run the sample code
* Stop Kafka broker


was (Author: kosm):
Sample code leading to the trouble:
{code}
public static void main(String[] args) {
    System.out.println("Starting consumer");
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "test12345");
    props.put("reconnect.backoff.ms", "1000");
    props.put("retry.backoff.ms", "1000");
    props.put("session.timeout.ms", "1");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        String topic = "test-topic";
        //consumer.subscribe(Arrays.asList(topic));
        List<TopicPartition> partitions = new ArrayList<>();
        for (PartitionInfo partition : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(topic, partition.partition()));
        }
        consumer.assign(partitions);
        for (;;) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            if (!records.isEmpty()) {
                System.out.println("Records aren't empty!");
            }
            records.forEach(System.out::println);
        }
    }
}
{code}

> kafka-clients consume 100% CPU with manual partition assignment
> ---
>
> Key: KAFKA-5263
> URL: https://issues.apache.org/jira/browse/KAFKA-5263
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0, 0.10.1.1, 0.10.2.0, 0.10.2.1
>Reporter: Konstantin Smirnov
>
> Noticed that loss of the connection to the Kafka broker leads kafka-clients to 
> consume 100% CPU. The bug only appears when manual partition assignment is 
> used. It has been present since version 0.10.1.0. The bug is quite similar to 
> KAFKA-1642.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5263) kafka-clients consume 100% CPU with manual partition assignment

2017-05-17 Thread Konstantin Smirnov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16014226#comment-16014226
 ] 

Konstantin Smirnov commented on KAFKA-5263:
---

Sample code leading to the trouble:
{code}
public static void main(String[] args) {
    System.out.println("Starting consumer");
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "test12345");
    props.put("reconnect.backoff.ms", "1000");
    props.put("retry.backoff.ms", "1000");
    props.put("session.timeout.ms", "1");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        String topic = "test-topic";
        //consumer.subscribe(Arrays.asList(topic));
        List<TopicPartition> partitions = new ArrayList<>();
        for (PartitionInfo partition : consumer.partitionsFor(topic)) {
            partitions.add(new TopicPartition(topic, partition.partition()));
        }
        consumer.assign(partitions);
        for (;;) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            if (!records.isEmpty()) {
                System.out.println("Records aren't empty!");
            }
            records.forEach(System.out::println);
        }
    }
}
{code}

> kafka-clients consume 100% CPU with manual partition assignment
> ---
>
> Key: KAFKA-5263
> URL: https://issues.apache.org/jira/browse/KAFKA-5263
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0, 0.10.1.1, 0.10.2.0, 0.10.2.1
>Reporter: Konstantin Smirnov
>
> Noticed that loss of the connection to the Kafka broker leads kafka-clients to 
> consume 100% CPU. The bug only appears when manual partition assignment is 
> used. It has been present since version 0.10.1.0. The bug is quite similar to 
> KAFKA-1642.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5263) kafka-clients consume 100% CPU with manual partition assignment

2017-05-17 Thread Konstantin Smirnov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Smirnov updated KAFKA-5263:
--
Affects Version/s: 0.10.1.0
   0.10.1.1
  Description: Noticed that loss of the connection to the Kafka broker 
leads kafka-clients to consume 100% CPU. The bug only appears when manual 
partition assignment is used. It has been present since version 0.10.1.0. The bug 
is quite similar to KAFKA-1642.

> kafka-clients consume 100% CPU with manual partition assignment
> ---
>
> Key: KAFKA-5263
> URL: https://issues.apache.org/jira/browse/KAFKA-5263
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0, 0.10.1.1, 0.10.2.0, 0.10.2.1
>Reporter: Konstantin Smirnov
>
> Noticed that loss of the connection to the Kafka broker leads kafka-clients to 
> consume 100% CPU. The bug only appears when manual partition assignment is 
> used. It has been present since version 0.10.1.0. The bug is quite similar to 
> KAFKA-1642.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

