[jira] [Updated] (KAFKA-8854) Optimize bulk loading of RocksDB during state restoration of Streams

2019-09-02 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8854:
---
Issue Type: Improvement  (was: Bug)

> Optimize bulk loading of RocksDB during state restoration of Streams
> 
>
> Key: KAFKA-8854
> URL: https://issues.apache.org/jira/browse/KAFKA-8854
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>
> We've done some optimizations of RocksDB bulk loading in the past, but there 
> is more room to explore; for reference: 
> https://www.rockset.com/blog/optimizing-bulk-load-in-rocksdb/
> Reducing RocksDB restoration time is key to improving the availability of 
> Kafka Streams tasks.
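As one concrete direction, here is a minimal sketch (assuming the RocksJava API; the store path and key/value are placeholders) of opening a store with bulk-load-friendly options before replaying changelog records, then compacting once restoration finishes:
{code:java}
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class BulkLoadSketch {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true)) {
            // prepareForBulkLoad() disables auto-compaction and enlarges write
            // buffers so the sequential writes of restoration avoid compaction stalls.
            options.prepareForBulkLoad();
            try (RocksDB db = RocksDB.open(options, "/tmp/state-restore-sketch")) {
                // Stands in for replaying changelog records into the store.
                db.put("key".getBytes(), "value".getBytes());
                // One manual compaction after the bulk load restores read performance.
                db.compactRange();
            }
        }
    }
}
{code}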



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (KAFKA-8860) SslPrincipalMapper should handle distinguished names with spaces

2019-09-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-8860.
--
Resolution: Fixed

Issue resolved by pull request 7140
[https://github.com/apache/kafka/pull/7140]

> SslPrincipalMapper should handle distinguished names with spaces
> 
>
> Key: KAFKA-8860
> URL: https://issues.apache.org/jira/browse/KAFKA-8860
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Manikumar
>Priority: Major
> Fix For: 2.4.0
>
>
> This Jira is to track the issue reported by 
> [t...@teebee.de|mailto:t...@teebee.de] in PR 
> [#7140|https://github.com/apache/kafka/pull/7140].
> PR [#6099|https://github.com/apache/kafka/pull/6099] tried to undo the 
> splitting of the {{ssl.principal.mapper.rules}} 
> [list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L1054]
>  on [comma with 
> whitespace|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L78]
>  by [sophisticated 
> rejoining|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
>  of the split list, using a comma as separator. However, since possibly 
> surrounding whitespace is not reconstructed, this approach fails in general. 
> Consider the following test case:
> {code:java}
> @Test
> public void testCommaWithWhitespace() throws Exception {
>     String value = "RULE:^CN=((, *|\\w)+)(,.*|$)/$1/,DEFAULT";
>     @SuppressWarnings("unchecked")
>     List<String> rules = (List<String>) ConfigDef.parseType("ssl.principal.mapper.rules", value, Type.LIST);
>     SslPrincipalMapper mapper = SslPrincipalMapper.fromRules(rules);
>     // The space after the escaped comma must survive splitting and rejoining.
>     assertEquals("Tkac\\, Adam", mapper.getName("CN=Tkac\\, Adam,OU=ITZ,DC=geodis,DC=cz"));
> }
> {code}
> The space after the escaped comma is 
> [essential|https://sogo.nu/bugs/view.php?id=2152]. Unfortunately, it has 
> disappeared after splitting and rejoining.
> Moreover, in 
> [{{joinSplitRules}}|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
>  the decision to rejoin list elements is based on local information only, 
> which might not be sufficient. It works for 
> {quote}"RULE:^CN=([^,ADEFLTU,]+)(,.*|$)/$1/"{quote} but fails for the 
> _equivalent_ regular expression 
> {quote}"RULE:^CN=([^,DEFAULT,]+)(,.*|$)/$1/"{quote}
> The approach of the current PR is to change the type of the 
> {{ssl.principal.mapper.rules}} attribute from 
> [LIST|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
>  to 
> [STRING|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
>  and to delegate the splitting of the rules to the 
> [SslPrincipalMapper|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java],
>  which knows the structure of the rules and can split them based on context.
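To illustrate the idea of context-based splitting (a sketch only, not the PR's actual code; the rule grammar below is an assumption): instead of splitting on every comma, match complete rules from left to right, so an escaped comma inside a pattern never acts as a separator.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RuleSplitSketch {
    // Assumed grammar: a rule is either the literal DEFAULT or
    // RULE:<pattern>/<replacement>/ with an optional L or U flag, where
    // pattern and replacement may contain escape sequences such as "\,".
    private static final Pattern RULE = Pattern.compile(
            "\\s*(DEFAULT|RULE:(\\\\.|[^\\\\/])*/(\\\\.|[^\\\\/])*/[LU]?)\\s*(,|$)");

    static List<String> splitRules(String raw) {
        List<String> rules = new ArrayList<>();
        Matcher matcher = RULE.matcher(raw);
        int pos = 0;
        // Consume one complete rule at a time; a comma is a separator only
        // when it follows a syntactically complete rule.
        while (pos < raw.length() && matcher.find(pos) && matcher.start() == pos) {
            rules.add(matcher.group(1));
            pos = matcher.end();
        }
        if (pos < raw.length()) {
            throw new IllegalArgumentException("Invalid rule at position " + pos + ": " + raw);
        }
        return rules;
    }

    public static void main(String[] args) {
        // The escaped comma and its trailing space stay inside the first rule:
        // [RULE:^CN=((, *|\w)+)(,.*|$)/$1/, DEFAULT]
        System.out.println(splitRules("RULE:^CN=((, *|\\w)+)(,.*|$)/$1/,DEFAULT"));
    }
}
{code}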



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8860) SslPrincipalMapper should handle distinguished names with spaces

2019-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920998#comment-16920998
 ] 

ASF GitHub Bot commented on KAFKA-8860:
---

omkreddy commented on pull request #7140: KAFKA-8860: Let SslPrincipalMapper 
split SSL principal mapping rules
URL: https://github.com/apache/kafka/pull/7140
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> SslPrincipalMapper should handle distinguished names with spaces
> 
>
> Key: KAFKA-8860
> URL: https://issues.apache.org/jira/browse/KAFKA-8860
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Manikumar
>Priority: Major
> Fix For: 2.4.0
>
>
> This Jira is to track the issue reported by 
> [t...@teebee.de|mailto:t...@teebee.de] in PR 
> [#7140|https://github.com/apache/kafka/pull/7140].
> PR [#6099|https://github.com/apache/kafka/pull/6099] tried to undo the 
> splitting of the {{ssl.principal.mapper.rules}} 
> [list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L1054]
>  on [comma with 
> whitespace|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L78]
>  by [sophisticated 
> rejoining|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
>  of the split list, using a comma as separator. However, since possibly 
> surrounding whitespace is not reconstructed, this approach fails in general. 
> Consider the following test case:
> {code:java}
> @Test
> public void testCommaWithWhitespace() throws Exception {
>     String value = "RULE:^CN=((, *|\\w)+)(,.*|$)/$1/,DEFAULT";
>     @SuppressWarnings("unchecked")
>     List<String> rules = (List<String>) ConfigDef.parseType("ssl.principal.mapper.rules", value, Type.LIST);
>     SslPrincipalMapper mapper = SslPrincipalMapper.fromRules(rules);
>     // The space after the escaped comma must survive splitting and rejoining.
>     assertEquals("Tkac\\, Adam", mapper.getName("CN=Tkac\\, Adam,OU=ITZ,DC=geodis,DC=cz"));
> }
> {code}
> The space after the escaped comma is 
> [essential|https://sogo.nu/bugs/view.php?id=2152]. Unfortunately, it has 
> disappeared after splitting and rejoining.
> Moreover, in 
> [{{joinSplitRules}}|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
>  the decision to rejoin list elements is based on local information only, 
> which might not be sufficient. It works for 
> {quote}"RULE:^CN=([^,ADEFLTU,]+)(,.*|$)/$1/"{quote} but fails for the 
> _equivalent_ regular expression 
> {quote}"RULE:^CN=([^,DEFAULT,]+)(,.*|$)/$1/"{quote}
> The approach of the current PR is to change the type of the 
> {{ssl.principal.mapper.rules}} attribute from 
> [LIST|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
>  to 
> [STRING|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
>  and to delegate the splitting of the rules to the 
> [SslPrincipalMapper|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java],
>  which knows the structure of the rules and can split them based on context.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8860) SslPrincipalMapper should handle distinguished names with spaces

2019-09-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-8860:
-
Fix Version/s: 2.4.0
Affects Version/s: 2.2.0

> SslPrincipalMapper should handle distinguished names with spaces
> 
>
> Key: KAFKA-8860
> URL: https://issues.apache.org/jira/browse/KAFKA-8860
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Manikumar
>Priority: Major
> Fix For: 2.4.0
>
>
> This Jira is to track the issue reported by 
> [t...@teebee.de|mailto:t...@teebee.de] in PR 
> [#7140|https://github.com/apache/kafka/pull/7140].
> PR [#6099|https://github.com/apache/kafka/pull/6099] tried to undo the 
> splitting of the {{ssl.principal.mapper.rules}} 
> [list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L1054]
>  on [comma with 
> whitespace|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L78]
>  by [sophisticated 
> rejoining|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
>  of the split list, using a comma as separator. However, since possibly 
> surrounding whitespace is not reconstructed, this approach fails in general. 
> Consider the following test case:
> {code:java}
> @Test
> public void testCommaWithWhitespace() throws Exception {
>     String value = "RULE:^CN=((, *|\\w)+)(,.*|$)/$1/,DEFAULT";
>     @SuppressWarnings("unchecked")
>     List<String> rules = (List<String>) ConfigDef.parseType("ssl.principal.mapper.rules", value, Type.LIST);
>     SslPrincipalMapper mapper = SslPrincipalMapper.fromRules(rules);
>     // The space after the escaped comma must survive splitting and rejoining.
>     assertEquals("Tkac\\, Adam", mapper.getName("CN=Tkac\\, Adam,OU=ITZ,DC=geodis,DC=cz"));
> }
> {code}
> The space after the escaped comma is 
> [essential|https://sogo.nu/bugs/view.php?id=2152]. Unfortunately, it has 
> disappeared after splitting and rejoining.
> Moreover, in 
> [{{joinSplitRules}}|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
>  the decision to rejoin list elements is based on local information only, 
> which might not be sufficient. It works for 
> {quote}"RULE:^CN=([^,ADEFLTU,]+)(,.*|$)/$1/"{quote} but fails for the 
> _equivalent_ regular expression 
> {quote}"RULE:^CN=([^,DEFAULT,]+)(,.*|$)/$1/"{quote}
> The approach of the current PR is to change the type of the 
> {{ssl.principal.mapper.rules}} attribute from 
> [LIST|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
>  to 
> [STRING|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
>  and to delegate the splitting of the rules to the 
> [SslPrincipalMapper|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java],
>  which knows the structure of the rules and can split them based on context.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8860) SslPrincipalMapper should handle distinguished names with spaces

2019-09-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-8860:
-
Description: 
This Jira is to track the issue reported by 
[t...@teebee.de|mailto:t...@teebee.de] in PR 
[#7140|https://github.com/apache/kafka/pull/7140].

PR [#6099|https://github.com/apache/kafka/pull/6099] tried to undo the 
splitting of the {{ssl.principal.mapper.rules}} 
[list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L1054]
 on [comma with 
whitespace|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L78]
 by [sophisticated 
rejoining|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
 of the split list, using a comma as separator. However, since possibly 
surrounding whitespace is not reconstructed, this approach fails in general. 
Consider the following test case:
{code:java}
@Test
public void testCommaWithWhitespace() throws Exception {
    String value = "RULE:^CN=((, *|\\w)+)(,.*|$)/$1/,DEFAULT";

    @SuppressWarnings("unchecked")
    List<String> rules = (List<String>) ConfigDef.parseType("ssl.principal.mapper.rules", value, Type.LIST);

    SslPrincipalMapper mapper = SslPrincipalMapper.fromRules(rules);
    // The space after the escaped comma must survive splitting and rejoining.
    assertEquals("Tkac\\, Adam", mapper.getName("CN=Tkac\\, Adam,OU=ITZ,DC=geodis,DC=cz"));
}
{code}
The space after the escaped comma is 
[essential|https://sogo.nu/bugs/view.php?id=2152]. Unfortunately, it has 
disappeared after splitting and rejoining.

Moreover, in 
[{{joinSplitRules}}|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
 the decision to rejoin list elements is based on local information only, 
which might not be sufficient. It works for 
{quote}"RULE:^CN=([^,ADEFLTU,]+)(,.*|$)/$1/"{quote} but fails for the 
_equivalent_ regular expression 
{quote}"RULE:^CN=([^,DEFAULT,]+)(,.*|$)/$1/"{quote}

The approach of the current PR is to change the type of the 
{{ssl.principal.mapper.rules}} attribute from 
[LIST|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
 to 
[STRING|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
 and to delegate the splitting of the rules to the 
[SslPrincipalMapper|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java],
 which knows the structure of the rules and can split them based on context.

  was:
This Jira is to track the issue reported by 
[t...@teebee.de|mailto:t...@teebee.de] in PR 
[#7140|https://github.com/apache/kafka/pull/7140].

PR [#6099|https://github.com/apache/kafka/pull/6099] tried to undo the 
splitting of the {{ssl.principal.mapper.rules}} 
[list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L1054]
 on [comma with 
whitespace|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L78]
 by [sophisticated 
rejoining|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
 of the split list, using a comma as separator. However, since possibly 
surrounding whitespace is not reconstructed, this approach fails in general. 
Consider the following test case:
{code:java}
@Test
public void testCommaWithWhitespace() throws Exception {
    String value = "RULE:^CN=((, *|\\w)+)(,.*|$)/$1/,DEFAULT";

    @SuppressWarnings("unchecked")
    List<String> rules = (List<String>) ConfigDef.parseType("ssl.principal.mapper.rules", value, Type.LIST);

    SslPrincipalMapper mapper = SslPrincipalMapper.fromRules(rules);
    // The space after the escaped comma must survive splitting and rejoining.
    assertEquals("Tkac\\, Adam", mapper.getName("CN=Tkac\\, Adam,OU=ITZ,DC=geodis,DC=cz"));
}
{code}
The space after the escaped comma is 
[essential|https://sogo.nu/bugs/view.php?id=2152]. Unfortunately, it has 
disappeared after splitting and rejoining.

Moreover, in 
[{{joinSplitRules}}|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
 the decision to rejoin list elements is based on local information only, 
which might not be sufficient. It works for 
{{"RULE:^CN=([^,ADEFLTU,]+)(,.*|$)/$1/"}} but fails for the _equivalent_ 
regular expression {{"RULE:^CN=([^,DEFAULT,]+)(,.*|$)/$1/"}}.

The approach of the current PR is to change the type of the 
{{ssl.principal.mapper.rules}} attribute from 
[LIST|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
 to 

[jira] [Commented] (KAFKA-8860) SslPrincipalMapper should handle distinguished names with spaces

2019-09-02 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920987#comment-16920987
 ] 

Manikumar commented on KAFKA-8860:
--

[https://github.com/apache/kafka/pull/7140]

> SslPrincipalMapper should handle distinguished names with spaces
> 
>
> Key: KAFKA-8860
> URL: https://issues.apache.org/jira/browse/KAFKA-8860
> Project: Kafka
>  Issue Type: Bug
>Reporter: Manikumar
>Priority: Major
>
> This Jira is to track the issue reported by 
> [t...@teebee.de|mailto:t...@teebee.de] in PR 
> [#7140|https://github.com/apache/kafka/pull/7140].
> PR [#6099|https://github.com/apache/kafka/pull/6099] tried to undo the 
> splitting of the {{ssl.principal.mapper.rules}} 
> [list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L1054]
>  on [comma with 
> whitespace|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L78]
>  by [sophisticated 
> rejoining|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
>  of the split list, using a comma as separator. However, since possibly 
> surrounding whitespace is not reconstructed, this approach fails in general. 
> Consider the following test case:
> {code:java}
> @Test
> public void testCommaWithWhitespace() throws Exception {
>     String value = "RULE:^CN=((, *|\\w)+)(,.*|$)/$1/,DEFAULT";
>     @SuppressWarnings("unchecked")
>     List<String> rules = (List<String>) ConfigDef.parseType("ssl.principal.mapper.rules", value, Type.LIST);
>     SslPrincipalMapper mapper = SslPrincipalMapper.fromRules(rules);
>     // The space after the escaped comma must survive splitting and rejoining.
>     assertEquals("Tkac\\, Adam", mapper.getName("CN=Tkac\\, Adam,OU=ITZ,DC=geodis,DC=cz"));
> }
> {code}
> The space after the escaped comma is 
> [essential|https://sogo.nu/bugs/view.php?id=2152]. Unfortunately, it has 
> disappeared after splitting and rejoining.
> Moreover, in 
> [{{joinSplitRules}}|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
>  the decision to rejoin list elements is based on local information only, 
> which might not be sufficient. It works for 
> {{"RULE:^CN=([^,ADEFLTU,]+)(,.*|$)/$1/"}} but fails for the _equivalent_ 
> regular expression {{"RULE:^CN=([^,DEFAULT,]+)(,.*|$)/$1/"}}.
> The approach of the current PR is to change the type of the 
> {{ssl.principal.mapper.rules}} attribute from 
> [LIST|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
>  to 
> [STRING|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
>  and to delegate the splitting of the rules to the 
> [SslPrincipalMapper|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java],
>  which knows the structure of the rules and can split them based on context.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8860) SslPrincipalMapper should handle distinguished names with spaces

2019-09-02 Thread Manikumar (Jira)
Manikumar created KAFKA-8860:


 Summary: SslPrincipalMapper should handle distinguished names with 
spaces
 Key: KAFKA-8860
 URL: https://issues.apache.org/jira/browse/KAFKA-8860
 Project: Kafka
  Issue Type: Bug
Reporter: Manikumar


This Jira is to track the issue reported by 
[t...@teebee.de|mailto:t...@teebee.de] in PR 
[#7140|https://github.com/apache/kafka/pull/7140].

PR [#6099|https://github.com/apache/kafka/pull/6099] tried to undo the 
splitting of the {{ssl.principal.mapper.rules}} 
[list|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L1054]
 on [comma with 
whitespace|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L78]
 by [sophisticated 
rejoining|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
 of the split list, using a comma as separator. However, since possibly 
surrounding whitespace is not reconstructed, this approach fails in general. 
Consider the following test case:
{code:java}
@Test
public void testCommaWithWhitespace() throws Exception {
    String value = "RULE:^CN=((, *|\\w)+)(,.*|$)/$1/,DEFAULT";

    @SuppressWarnings("unchecked")
    List<String> rules = (List<String>) ConfigDef.parseType("ssl.principal.mapper.rules", value, Type.LIST);

    SslPrincipalMapper mapper = SslPrincipalMapper.fromRules(rules);
    // The space after the escaped comma must survive splitting and rejoining.
    assertEquals("Tkac\\, Adam", mapper.getName("CN=Tkac\\, Adam,OU=ITZ,DC=geodis,DC=cz"));
}
{code}
The space after the escaped comma is 
[essential|https://sogo.nu/bugs/view.php?id=2152]. Unfortunately, it has 
disappeared after splitting and rejoining.

Moreover, in 
[{{joinSplitRules}}|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java#L42]
 the decision to rejoin list elements is based on local information only, 
which might not be sufficient. It works for 
{{"RULE:^CN=([^,ADEFLTU,]+)(,.*|$)/$1/"}} but fails for the _equivalent_ 
regular expression {{"RULE:^CN=([^,DEFAULT,]+)(,.*|$)/$1/"}}.

The approach of the current PR is to change the type of the 
{{ssl.principal.mapper.rules}} attribute from 
[LIST|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
 to 
[STRING|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L781]
 and to delegate the splitting of the rules to the 
[SslPrincipalMapper|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/SslPrincipalMapper.java],
 which knows the structure of the rules and can split them based on context.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8859) Refactor Cache-level Streams Metrics

2019-09-02 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-8859:


 Summary: Refactor Cache-level Streams Metrics
 Key: KAFKA-8859
 URL: https://issues.apache.org/jira/browse/KAFKA-8859
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bruno Cadonna


Refactoring of cache-level Streams metrics according to KIP-444.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8760) KIP-504: Add new Java Authorizer API

2019-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920872#comment-16920872
 ] 

ASF GitHub Bot commented on KAFKA-8760:
---

rajinisivaram commented on pull request #7268: KAFKA-8760; New Java Authorizer 
API (KIP-504)
URL: https://github.com/apache/kafka/pull/7268
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KIP-504: Add new Java Authorizer API 
> -
>
> Key: KAFKA-8760
> URL: https://issues.apache.org/jira/browse/KAFKA-8760
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.4.0
>
>
> See 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-504+-+Add+new+Java+Authorizer+Interface]
>  for details.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2019-09-02 Thread Ante B. (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ante B. updated KAFKA-8858:
---
Labels: Stream consumer corrupt offset rebalance transactions  (was: Stream 
consumer corrupt offset transactions)

> Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some 
> reason
> ---
>
> Key: KAFKA-8858
> URL: https://issues.apache.org/jira/browse/KAFKA-8858
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Apache Kafka 2.1.1
>Reporter: Ante B.
>Priority: Major
>  Labels: Stream, consumer, corrupt, offset, rebalance, 
> transactions
>
> I have a basic Kafka Streams application that reads from a {{topic}}, 
> performs a rolling aggregate, and performs a join to publish to an 
> {{agg_topic}}. Our project hits the timeout failure in a Kafka 2.1.1 
> environment, and we don't know the reason yet.
> Our stream consumer is stuck for some reason.
> After we changed our group id to another one, it became normal, so it seems 
> the offset data for this consumer is corrupted.
> Can you please help us resolve this problem so that we can revert to the 
> previous consumer name? We have many inconveniences due to this.
> Please ping me if you need any additional information.
> Our temporary workaround is to disable the {{exactly_once}} config, which 
> skips initializing the transactional state. We also reset the offset of the 
> corrupted partition, with no effect.
> Full problem description in the log:
> {code:java}
> [2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [abc-streamer-StreamThread-21] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [abc-streamer-StreamThread-14] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [abc-streamer-StreamThread-13] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> {code}
>  
>  
>  
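As a sketch of the workaround described above, using the public Streams config API (the application id and bootstrap servers are illustrative placeholders):
{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceWorkaroundSketch {
    public static Properties workaroundConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "abc-streamer");       // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        // Workaround: fall back from exactly_once to at_least_once so that
        // rebalances no longer initialize transactional state (the step that
        // times out in the logs above).
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.AT_LEAST_ONCE);
        return props;
    }
}
{code}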



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2019-09-02 Thread Ante B. (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ante B. updated KAFKA-8858:
---
Description: 
I have a basic Kafka Streams application that reads from a {{topic}}, performs 
a rolling aggregate, and performs a join to publish to an {{agg_topic}}. Our 
project hits the timeout failure in a Kafka 2.1.1 environment, and we don't 
know the reason yet.

Our stream consumer is stuck for some reason.

After we changed our group id to another one, it became normal, so it seems the 
offset data for this consumer is corrupted.

Can you please help us resolve this problem so that we can revert to the 
previous consumer name? We have many inconveniences due to this.

Please ping me if you need any additional information.

Our temporary workaround is to disable the {{exactly_once}} config, which skips 
initializing the transactional state. We also reset the offset of the corrupted 
partition, with no effect.

Full problem description in the log:
{code:java}
[2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[abc-streamer-StreamThread-21] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[abc-streamer-StreamThread-14] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[abc-streamer-StreamThread-13] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
{code}
 

 

 

  was:
I have a basic Kafka Streams application that reads from a {{topic}}, performs 
a rolling aggregate, and performs a join to publish to an {{agg_topic}}. Our 
project hits the timeout failure in a Kafka 2.1.1 environment, and we don't 
know the reason yet.

Our stream consumer is stuck for some reason.

After we changed our group id to another one, it became normal, so it seems the 
offset data for this consumer is corrupted.

Can you please help us resolve this problem so that we can revert to the 
previous consumer name? We have many inconveniences due to this.

Please ping me if you need any additional information.

Our temporary workaround is to disable the {{exactly_once}} config, which skips 
initializing the transactional state. We also reset the offset of the corrupted 
partition, with no effect.

Full problem description in the log:
{code:java}
[2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-21] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-14] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-13] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
{code}
 

 

 


> Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some 
> reason
> ---
>
> Key: KAFKA-8858
> URL: https://issues.apache.org/jira/browse/KAFKA-8858
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Apache Kafka 2.1.1
>Reporter: Ante B.
>Priority: Major
>  Labels: Stream, consumer, corrupt, offset, transactions
>
> I have a basic Kafka Streams application that 

[jira] [Updated] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2019-09-02 Thread Ante B. (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ante B. updated KAFKA-8858:
---
Labels: Stream consumer corrupt offset transactions  (was: Stream consumer 
corrupt offset)

> Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some 
> reason
> ---
>
> Key: KAFKA-8858
> URL: https://issues.apache.org/jira/browse/KAFKA-8858
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Apache Kafka 2.1.1
>Reporter: Ante B.
>Priority: Major
>  Labels: Stream, consumer, corrupt, offset, transactions
>
> I have a basic Kafka Streams application that reads from a {{topic}}, 
> performs a rolling aggregate, and performs a join to publish to an 
> {{agg_topic}}. Our project hits the timeout failure in a Kafka 2.1.1 
> environment, and we don't know the reason yet.
> Our stream consumer is stuck for some reason.
> After we changed our group id to another one, it became normal, so it seems 
> the offset data for this consumer is corrupted.
> Can you please help us resolve this problem so that we can revert to the 
> previous consumer name? We have many inconveniences due to this.
> Please ping me if you need any additional information.
> Our temporary workaround is to disable the {{exactly_once}} config, which 
> skips initializing the transactional state. We also reset the offset of the 
> corrupted partition, with no effect.
> Full problem description in the log:
> {code:java}
> [2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-21] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-14] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-13] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2019-09-02 Thread Ante B. (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ante B. updated KAFKA-8858:
---
Labels: Stream consumer corrupt offset  (was: )

> Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some 
> reason
> ---
>
> Key: KAFKA-8858
> URL: https://issues.apache.org/jira/browse/KAFKA-8858
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Apache Kafka 2.1.1
>Reporter: Ante B.
>Priority: Major
>  Labels: Stream, consumer, corrupt, offset
>
> I have a basic Kafka Streams application that reads from a {{topic}}, 
> performs a rolling aggregate, and performs a join to publish to an 
> {{agg_topic}}. Our project hits the timeout failure after we upgraded to 
> Kafka 2.1.1, and we don't know the reason yet.
> Our stream consumer is stuck for some reason.
> After we changed our group id to another one, it became normal, so it seems 
> the offset data for this consumer is corrupted.
> Can you please help us resolve this problem so that we can revert to the 
> previous consumer name? We have many inconveniences due to this.
> Please ping me if you need any additional information.
> Our temporary workaround is to disable the {{exactly_once}} config, which 
> skips initializing the transactional state. We also reset the offset of the 
> corrupted partition, with no effect.
> Full problem description in the log:
> {code:java}
> [2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-21] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-14] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-13] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2019-09-02 Thread Ante B. (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ante B. updated KAFKA-8858:
---
Description: 
I have a basic Kafka Streams application that reads from a {{topic}}, performs 
a rolling aggregate, and performs a join to publish to an {{agg_topic}}. Our 
project hits the timeout failure in a Kafka 2.1.1 environment, and we don't 
know the reason yet.

Our stream consumer is stuck for some reason.

After we changed our group id to another one, it became normal, so it seems the 
offset data for this consumer is corrupted.

Can you please help us resolve this problem so that we can revert to the 
previous consumer name? We have many inconveniences due to this.

Please ping me if you need any additional information.

Our temporary workaround is to disable the {{exactly_once}} config, which skips 
initializing the transactional state. We also reset the offset of the corrupted 
partition, with no effect.

Full problem description in the log:
{code:java}
[2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-21] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-14] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-13] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
{code}
 

 

 

  was:
I have a basic Kafka Streams application that reads from a {{topic}}, performs 
a rolling aggregate, and performs a join to publish to an {{agg_topic}}. Our 
project hits the timeout failure after we upgraded to Kafka 2.1.1, and we don't 
know the reason yet.

Our stream consumer is stuck for some reason.

After we changed our group id to another one, it became normal, so it seems the 
offset data for this consumer is corrupted.

Can you please help us resolve this problem so that we can revert to the 
previous consumer name? We have many inconveniences due to this.

Please ping me if you need any additional information.

Our temporary workaround is to disable the {{exactly_once}} config, which skips 
initializing the transactional state. We also reset the offset of the corrupted 
partition, with no effect.

Full problem description in the log:
{code:java}
[2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-21] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-14] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-13] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
{code}
 

 

 


> Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some 
> reason
> ---
>
> Key: KAFKA-8858
> URL: https://issues.apache.org/jira/browse/KAFKA-8858
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Apache Kafka 2.1.1
>Reporter: Ante B.
>Priority: Major
>  Labels: Stream, consumer, corrupt, offset
>
> I have a basic Kafka Streams application that 

[jira] [Updated] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2019-09-02 Thread Ante B. (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ante B. updated KAFKA-8858:
---
Component/s: streams

> Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some 
> reason
> ---
>
> Key: KAFKA-8858
> URL: https://issues.apache.org/jira/browse/KAFKA-8858
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Apache Kafka 2.1.1
>Reporter: Ante B.
>Priority: Major
>
> I have a basic Kafka Streams application that reads from a {{topic}}, 
> performs a rolling aggregate, and performs a join to publish to an 
> {{agg_topic}}. Our project hits the timeout failure after we upgraded to 
> Kafka 2.1.1, and we don't know the reason yet.
> Our stream consumer is stuck for some reason.
> After we changed our group id to another one, it became normal, so it seems 
> the offset data for this consumer is corrupted.
> Can you please help us resolve this problem so that we can revert to the 
> previous consumer name? We have many inconveniences due to this.
> Please ping me if you need any additional information.
> Our temporary workaround is to disable the {{exactly_once}} config, which 
> skips initializing the transactional state. We also reset the offset of the 
> corrupted partition, with no effect.
> Full problem description in the log:
> {code:java}
> [2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-21] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-14] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-13] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8856) Add Streams Config for Backward-compatible Metrics

2019-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920859#comment-16920859
 ] 

ASF GitHub Bot commented on KAFKA-8856:
---

cadonna commented on pull request #7279: KAFKA-8856: Add Streams config for 
backward-compatible metrics
URL: https://github.com/apache/kafka/pull/7279
 
 
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Streams Config for Backward-compatible Metrics
> --
>
> Key: KAFKA-8856
> URL: https://issues.apache.org/jira/browse/KAFKA-8856
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> With KIP-444 the tag names and names of Streams metrics change. To give 
> users a grace period for adapting their corresponding monitoring / alerting 
> ecosystems, a config shall be added that specifies which version of the 
> metrics names is exposed.
> The definition of the new config is:
> name: built.in.metrics.version
> type: Enum
> values: {"0.10.0", "0.10.1", ... "2.3", "2.4"}
> default: "2.4"
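A minimal sketch of how such a config could be declared with the {{ConfigDef}} API (an illustration only; the accepted-values list is abbreviated, and the importance and doc string are assumptions):
{code:java}
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;

public class MetricsVersionConfigSketch {
    static ConfigDef withMetricsVersion(ConfigDef configDef) {
        return configDef.define(
                "built.in.metrics.version",
                Type.STRING,
                "2.4",                                       // default: the latest metric names
                ConfigDef.ValidString.in("0.10.0", "0.10.1", // version list abbreviated
                                         "2.3", "2.4"),
                Importance.LOW,                              // assumed importance
                "Version of the built-in metrics to expose."); // assumed doc string
    }
}
{code}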



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2019-09-02 Thread Ante B. (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ante B. updated KAFKA-8858:
---
Affects Version/s: 2.1.1

> Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some 
> reason
> ---
>
> Key: KAFKA-8858
> URL: https://issues.apache.org/jira/browse/KAFKA-8858
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
> Environment: Apache Kafka 2.1.1
>Reporter: Ante B.
>Priority: Major
>
> I have a basic Kafka Streams application that reads from a {{topic}}, 
> performs a rolling aggregate, and performs a join to publish to an 
> {{agg_topic}}. Our project hits the timeout failure after we upgraded to 
> Kafka 2.1.1, and we don't know the reason yet.
> Our stream consumer is stuck for some reason.
> After we changed our group id to another one, it became normal, so it seems 
> the offset data for this consumer is corrupted.
> Can you please help us resolve this problem so that we can revert to the 
> previous consumer name? We have many inconveniences due to this.
> Please ping me if you need any additional information.
> Our temporary workaround is to disable the {{exactly_once}} config, which 
> skips initializing the transactional state. We also reset the offset of the 
> corrupted partition, with no effect.
> Full problem description in the log:
> {code:java}
> [2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-21] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-14] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [fds-streamer-StreamThread-13] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2019-09-02 Thread Ante B. (Jira)
Ante B. created KAFKA-8858:
--

 Summary: Kafka Streams - Failed to Rebalance Error and stream 
consumer stuck for some reason
 Key: KAFKA-8858
 URL: https://issues.apache.org/jira/browse/KAFKA-8858
 Project: Kafka
  Issue Type: Bug
 Environment: Apache Kafka 2.1.1
Reporter: Ante B.


I have a basic Kafka Streams application that reads from a {{topic}}, performs 
a rolling aggregate, and performs a join to publish to an {{agg_topic}}. Our 
project hits the timeout failure after we upgraded to Kafka 2.1.1, and we don't 
know the reason yet.

Our stream consumer is stuck for some reason.

After we changed our group id to another one, it became normal, so it seems the 
offset data for this consumer is corrupted.

Can you please help us resolve this problem so that we can revert to the 
previous consumer name? We have many inconveniences due to this.

Please ping me if you need any additional information.

Our temporary workaround is to disable the {{exactly_once}} config, which skips 
initializing the transactional state. We also reset the offset of the corrupted 
partition, with no effect.

Full problem description in the log:
{code:java}
[2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-21] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-14] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
[2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
[fds-streamer-StreamThread-13] Error caught during partition assignment, will 
abort the current process and re-throw at the end of rebalance: {} 
 org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
initializing transactional state in 6ms.
{code}
 

 

 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8857) Config describe should not return isReadOnly=false based on synonyms

2019-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920833#comment-16920833
 ] 

ASF GitHub Bot commented on KAFKA-8857:
---

rajinisivaram commented on pull request #7278: KAFKA-8857; Don't check synonyms 
while determining if config is readOnly
URL: https://github.com/apache/kafka/pull/7278
 
 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Config describe should not return isReadOnly=false based on synonyms
> 
>
> Key: KAFKA-8857
> URL: https://issues.apache.org/jira/browse/KAFKA-8857
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.4.0
>
>
> At the moment, for configs like log.retention.hours which have multiple 
> synonyms (log.retention.ms, log.retention.minutes), we return 
> `isReadOnly=false` in the describeConfigs response for all the synonyms even 
> though only log.retention.ms can be updated dynamically. We should return 
> isReadOnly=false for log.retention.ms and isReadOnly=true for 
> log.retention.hours and log.retention.minutes to avoid confusion. Users can 
> still determine if there are updatable synonyms by looking at the synonyms 
> returned in the describeConfigs response.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8857) Config describe should not return isReadOnly=false based on synonyms

2019-09-02 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-8857:
-

 Summary: Config describe should not return isReadOnly=false based 
on synonyms
 Key: KAFKA-8857
 URL: https://issues.apache.org/jira/browse/KAFKA-8857
 Project: Kafka
  Issue Type: Bug
  Components: admin
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.4.0


At the moment, for configs like log.retention.hours which have multiple 
synonyms (log.retention.ms, log.retention.minutes), we return 
`isReadOnly=false` in the describeConfigs response for all the synonyms even 
though only log.retention.ms can be updated dynamically. We should return 
isReadOnly=false for log.retention.ms and isReadOnly=true for 
log.retention.hours and log.retention.minutes to avoid confusion. Users can 
still determine if there are updatable synonyms by looking at the synonyms 
returned in the describeConfigs response.
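For illustration, a sketch of how a client can check {{isReadOnly}} and inspect synonyms with the Java AdminClient (broker id and bootstrap servers are placeholders):
{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0"); // broker id placeholder
            Config config = admin.describeConfigs(Collections.singleton(broker))
                    .all().get().get(broker);
            ConfigEntry entry = config.get("log.retention.hours");
            // With this fix, isReadOnly() is true for log.retention.hours; the
            // dynamically updatable synonym (log.retention.ms) is still
            // discoverable through synonyms().
            System.out.println(entry.name() + " isReadOnly=" + entry.isReadOnly());
            entry.synonyms().forEach(s ->
                    System.out.println("synonym: " + s.name() + " source=" + s.source()));
        }
    }
}
{code}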



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8032) Flaky Test UserQuotaTest#testQuotaOverrideDelete

2019-09-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920812#comment-16920812
 ] 

ASF GitHub Bot commented on KAFKA-8032:
---

omkreddy commented on pull request #6361: KAFKA-8032: Wait for quota override 
removal in Quota tests 
URL: https://github.com/apache/kafka/pull/6361
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test UserQuotaTest#testQuotaOverrideDelete
> 
>
> Key: KAFKA-8032
> URL: https://issues.apache.org/jira/browse/KAFKA-8032
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.api/UserQuotaTest/testQuotaOverrideDelete/]
> {quote}java.lang.AssertionError: Client with id=QuotasTestProducer-1 should 
> have been throttled at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229) 
> at kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215) 
> at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124){quote}
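
The direction of the linked PR, per its title, is to wait for the quota override removal to take effect before asserting throttling. A generic poll-until-true helper of the kind such fixes rely on (a sketch under that assumption, not the PR's actual code):

{code:java}
import java.util.function.BooleanSupplier;

public final class WaitUtil {
    // Poll a condition until it holds or the timeout elapses, instead of
    // asserting immediately on state that propagates asynchronously;
    // polling is the usual cure for this kind of flakiness.
    public static void waitUntilTrue(final BooleanSupplier condition,
                                     final String failureMessage,
                                     final long timeoutMs) throws InterruptedException {
        final long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(failureMessage);
            }
            Thread.sleep(100L);
        }
    }
}
{code}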



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (KAFKA-8856) Add Streams Config for Backward-compatible Metrics

2019-09-02 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna reassigned KAFKA-8856:


Assignee: Bruno Cadonna

> Add Streams Config for Backward-compatible Metrics
> --
>
> Key: KAFKA-8856
> URL: https://issues.apache.org/jira/browse/KAFKA-8856
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> With KIP-444 the tag names and names of Streams metrics change. To give 
> users a grace period to adapt their corresponding monitoring / 
> alerting ecosystems, a config shall be added that specifies which version of 
> the metrics names will be exposed.
> The definition of the new config is:
> name: built.in.metrics.version
> type: Enum
> values: {"0.10.0", "0.10.1", ... "2.3", "2.4"}
> default: "2.4" 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8856) Add Streams Config for Backward-compatible Metrics

2019-09-02 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-8856:


 Summary: Add Streams Config for Backward-compatible Metrics
 Key: KAFKA-8856
 URL: https://issues.apache.org/jira/browse/KAFKA-8856
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bruno Cadonna


With KIP-444 the tag names and names of Streams metrics change. To give users 
a grace period to adapt their corresponding monitoring / alerting 
ecosystems, a config shall be added that specifies which version of the 
metrics names will be exposed.

The definition of the new config is:
name: built.in.metrics.version
type: Enum
values: {"0.10.0", "0.10.1", ... "2.3", "2.4"}
default: "2.4" 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8705) NullPointerException was thrown by topology optimization when two MergeNodes have common KeyChangingNode

2019-09-02 Thread Jira


[ 
https://issues.apache.org/jira/browse/KAFKA-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920779#comment-16920779
 ] 

Reynir Hübner commented on KAFKA-8705:
--

I get the same exception on version *2.2.1* when TOPOLOGY_OPTIMIZATION is set 
to OPTIMIZE.

> NullPointerException was thrown by topology optimization when two MergeNodes 
> have common KeyChangingNode
> -
>
> Key: KAFKA-8705
> URL: https://issues.apache.org/jira/browse/KAFKA-8705
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Hiroshi Nakahara
>Assignee: Bill Bejeck
>Priority: Major
>
> NullPointerException was thrown by topology optimization when two MergeNodes 
> have a common KeyChangingNode.
> Kafka Streams version: 2.3.0
> h3. Code
> {code:java}
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.streams.StreamsBuilder;
> import org.apache.kafka.streams.StreamsConfig;
> import org.apache.kafka.streams.kstream.Consumed;
> import org.apache.kafka.streams.kstream.KStream;
> import java.util.Properties;
> 
> public class Main {
>     public static void main(String[] args) {
>         final StreamsBuilder streamsBuilder = new StreamsBuilder();
>         final KStream<Integer, Integer> parentStream = streamsBuilder
>             .stream("parentTopic", Consumed.with(Serdes.Integer(), Serdes.Integer()))
>             .selectKey(Integer::sum);  // To make parentStream a key-changing point
>         final KStream<Integer, Integer> childStream1 = parentStream.mapValues(v -> v + 1);
>         final KStream<Integer, Integer> childStream2 = parentStream.mapValues(v -> v + 2);
>         final KStream<Integer, Integer> childStream3 = parentStream.mapValues(v -> v + 3);
>         childStream1
>             .merge(childStream2)
>             .merge(childStream3)
>             .to("outputTopic");
>         final Properties properties = new Properties();
>         properties.setProperty(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
>         streamsBuilder.build(properties);
>     }
> }
> {code}
> h3. Expected result
> streamsBuilder.build should create a Topology without throwing an exception. 
> The expected topology is:
> {code:java}
> Topologies:
>Sub-topology: 0
> Source: KSTREAM-SOURCE-00 (topics: [parentTopic])
>   --> KSTREAM-KEY-SELECT-01
> Processor: KSTREAM-KEY-SELECT-01 (stores: [])
>   --> KSTREAM-MAPVALUES-02, KSTREAM-MAPVALUES-03, 
> KSTREAM-MAPVALUES-04
>   <-- KSTREAM-SOURCE-00
> Processor: KSTREAM-MAPVALUES-02 (stores: [])
>   --> KSTREAM-MERGE-05
>   <-- KSTREAM-KEY-SELECT-01
> Processor: KSTREAM-MAPVALUES-03 (stores: [])
>   --> KSTREAM-MERGE-05
>   <-- KSTREAM-KEY-SELECT-01
> Processor: KSTREAM-MAPVALUES-04 (stores: [])
>   --> KSTREAM-MERGE-06
>   <-- KSTREAM-KEY-SELECT-01
> Processor: KSTREAM-MERGE-05 (stores: [])
>   --> KSTREAM-MERGE-06
>   <-- KSTREAM-MAPVALUES-02, KSTREAM-MAPVALUES-03
> Processor: KSTREAM-MERGE-06 (stores: [])
>   --> KSTREAM-SINK-07
>   <-- KSTREAM-MERGE-05, KSTREAM-MAPVALUES-04
> Sink: KSTREAM-SINK-07 (topic: outputTopic)
>   <-- KSTREAM-MERGE-06
> {code}
> h3. Actual result
> NullPointerException was thrown with the following stacktrace.
> {code:java}
> Exception in thread "main" java.lang.NullPointerException
>   at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
>   at 
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.maybeUpdateKeyChangingRepartitionNodeMap(InternalStreamsBuilder.java:397)
>   at 
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.maybeOptimizeRepartitionOperations(InternalStreamsBuilder.java:315)
>   at 
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.maybePerformOptimizations(InternalStreamsBuilder.java:304)
>   at 
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:275)
>   at 
> org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:558)
>   at Main.main(Main.java:24){code}
> h3. Cause
> This exception occurs in 
> InternalStreamsBuilder#maybeUpdateKeyChangingRepartitionNodeMap.
> {code:java}
> private void maybeUpdateKeyChangingRepartitionNodeMap() {
>     final Map<StreamsGraphNode, Set<StreamsGraphNode>> mergeNodesToKeyChangers = new HashMap<>();
>     for (final StreamsGraphNode mergeNode : mergeNodes) {
>         mergeNodesToKeyChangers.put(mergeNode, new LinkedHashSet<>());
>         final Collection<StreamsGraphNode> keys = 
> 

[jira] [Created] (KAFKA-8855) Collect and Expose Client's Name and Version in the Brokers

2019-09-02 Thread David Jacot (Jira)
David Jacot created KAFKA-8855:
--

 Summary: Collect and Expose Client's Name and Version in the 
Brokers
 Key: KAFKA-8855
 URL: https://issues.apache.org/jira/browse/KAFKA-8855
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot


Implements KIP-511 as documented here: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (KAFKA-8719) kafka-console-consumer bypassing sentry evaluations while specifying --partition option

2019-09-02 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-8719.
---
Resolution: Cannot Reproduce

> kafka-console-consumer bypassing sentry evaluations while specifying 
> --partition option
> ---
>
> Key: KAFKA-8719
> URL: https://issues.apache.org/jira/browse/KAFKA-8719
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, tools
>Reporter: Sathish
>Priority: Major
>  Labels: kafka-console-cons
>
> When the --partition option is specified on kafka-console-consumer, it 
> bypasses the Sentry evaluations and lets users consume messages 
> freely. Even when a consumer group does not have access to consume messages 
> from a topic, the --partition option bypasses the evaluation.
> Example command used:
> #kafka-console-consumer --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111 --partition 0
> This succeeds even if spark-kafka-111 does not have any access to 
> topic booktopic1,
> whereas 
> #kafka-console-consumer --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111
> fails with topic authorisation issues.
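
A plausible mechanism, assumed from general Kafka consumer semantics rather than stated in the ticket: --partition makes the console consumer use manual partition assignment instead of group-managed subscription, so no consumer group is joined and group-level authorization is never exercised. A minimal sketch of the two paths:

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignVsSubscribe {
    public static void main(String[] args) {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address
        props.put("group.id", "spark-kafka-111");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // What --partition 0 translates to: manual assignment, no group
            // join, so group authorization never runs.
            consumer.assign(Collections.singletonList(new TopicPartition("booktopic1", 0)));
            // The group-managed path instead joins the consumer group and is
            // subject to group-level authorization:
            // consumer.subscribe(Collections.singletonList("booktopic1"));
        }
    }
}
{code}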



--
This message was sent by Atlassian Jira
(v8.3.2#803003)