[GitHub] [kafka] ableegoldman commented on pull request #8697: [WIP] KAFKA-9983: KIP-613, add task-level e2e latency metrics (min and max)

2020-05-21 Thread GitBox


ableegoldman commented on pull request #8697:
URL: https://github.com/apache/kafka/pull/8697#issuecomment-632468421


   Actually, this needs to be reworked to account for the case where there is no sink 
node. I also got the Percentiles working, so I'll add them back to this PR and 
call for review again when ready



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] guozhangwang commented on pull request #8248: KAFKA-9501: convert between active and standby without closing stores

2020-05-21 Thread GitBox


guozhangwang commented on pull request #8248:
URL: https://github.com/apache/kafka/pull/8248#issuecomment-632459600


   test this please







[GitHub] [kafka] ijuma commented on pull request #8703: MINOR: add a getOrCreate function to KRPC collections

2020-05-21 Thread GitBox


ijuma commented on pull request #8703:
URL: https://github.com/apache/kafka/pull/8703#issuecomment-632454292


   Why is it the case that non-generated code doesn't know about keys? It seems 
pretty awkward to implement methods like this that are not related to the 
protocol 







[GitHub] [kafka] bbejeck commented on pull request #8504: KAFKA-9298: reuse mapped stream error in joins

2020-05-21 Thread GitBox


bbejeck commented on pull request #8504:
URL: https://github.com/apache/kafka/pull/8504#issuecomment-632447670


   @mjsax @vvcephei updated (and rebased) this to the original PR state: the 
repartition node is only reused when the name is generated.  







[GitHub] [kafka] cmccabe commented on a change in pull request #8690: KAFKA-9965: Fix uneven distribution in RoundRobinPartitioner

2020-05-21 Thread GitBox


cmccabe commented on a change in pull request #8690:
URL: https://github.com/apache/kafka/pull/8690#discussion_r428998676



File path: clients/src/main/java/org/apache/kafka/clients/producer/RoundRobinPartitioner.java

@@ -65,12 +65,20 @@ public int partition(String topic, Object key, byte[] keyBytes, Object value, by
 }
 
 private int nextValue(String topic) {
-AtomicInteger counter = topicCounterMap.computeIfAbsent(topic, k -> {
-return new AtomicInteger(0);
-});
+AtomicInteger counter = topicCounterMap.
+computeIfAbsent(topic, k -> new AtomicInteger(0));
 return counter.getAndIncrement();
 }
 
+@Override
+public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
+// After onNewBatch is called, we will call partition() again.
+// So 'rewind' the counter for this topic.
+AtomicInteger counter = topicCounterMap.
+computeIfAbsent(topic, k -> new AtomicInteger(0));
+counter.getAndDecrement();

Review comment:
   It's possible.  There's no easy fix, though, other than rethinking the 
`Partitioner` API.  That would be an incompatible change.
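
   To make the rewind behavior under discussion concrete, here is a self-contained 
sketch (not the actual Kafka class; the structure is simplified from the diff 
above) of the per-topic counter: `nextValue()` draws the next index, and 
`onNewBatch()` steps the counter back so the follow-up `partition()` call lands 
on the same partition index instead of skipping one.
   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.atomic.AtomicInteger;

   // Simplified sketch of the counter logic in the RoundRobinPartitioner patch.
   public class CounterRewindSketch {
       private final Map<String, AtomicInteger> topicCounterMap = new ConcurrentHashMap<>();

       int nextValue(String topic) {
           // Hand out the next round-robin index for this topic.
           return topicCounterMap
               .computeIfAbsent(topic, k -> new AtomicInteger(0))
               .getAndIncrement();
       }

       void onNewBatch(String topic) {
           // partition() is called again after onNewBatch, so step back once
           // to avoid consuming two indices for a single record.
           topicCounterMap
               .computeIfAbsent(topic, k -> new AtomicInteger(0))
               .getAndDecrement();
       }

       public static void main(String[] args) {
           CounterRewindSketch s = new CounterRewindSketch();
           int first = s.nextValue("t");    // 0
           s.onNewBatch("t");               // rewind
           int retried = s.nextValue("t");  // 0 again: same partition index
           if (first != retried) throw new AssertionError("rewind failed");
       }
   }
   ```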









[GitHub] [kafka] cmccabe edited a comment on pull request #8703: MINOR: add a getOrCreate function to KRPC collections

2020-05-21 Thread GitBox


cmccabe edited a comment on pull request #8703:
URL: https://github.com/apache/kafka/pull/8703#issuecomment-632428809


   > What is the purpose of this new method? Should we add getOrCreate to 
ImplicitLinkedHashMultiCollection as well?
   
   The purpose is to make it easy to access map elements by their keys.  An 
example might help:
   
   ```
   int brokerId = 2;
   Broker broker = brokerCollection.getOrCreate(brokerId);
   ```
   
   Unfortunately, we can't add this to `ImplicitLinkedHashMultiCollection` 
because only the generated code has the notion of keys.  In 
`ImplicitLinkedHashMultiCollection`, we just have elements, which is more 
awkward to deal with.
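
   For comparison, the same convenience can be sketched over a plain 
`java.util.Map` (illustrative only; the generated KRPC collections key their 
elements by fields the code generator knows about, which a generic map stands 
in for here, and the names below are made up):
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.function.Function;

   // Generic stand-in for the getOrCreate pattern on the generated collections.
   public class GetOrCreateSketch {
       static <K, V> V getOrCreate(Map<K, V> map, K key, Function<K, V> factory) {
           // Return the existing element for the key, creating and inserting
           // a new one if absent -- the convenience getOrCreate provides.
           return map.computeIfAbsent(key, factory);
       }

       public static void main(String[] args) {
           Map<Integer, StringBuilder> brokers = new HashMap<>();
           StringBuilder b1 = getOrCreate(brokers, 2, id -> new StringBuilder("broker-" + id));
           StringBuilder b2 = getOrCreate(brokers, 2, id -> new StringBuilder("broker-" + id));
           // The second lookup returns the element created by the first.
           if (b1 != b2) throw new AssertionError("expected the same instance");
       }
   }
   ```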







[GitHub] [kafka] cmccabe commented on pull request #8703: MINOR: add a getOrCreate function to KRPC collections

2020-05-21 Thread GitBox


cmccabe commented on pull request #8703:
URL: https://github.com/apache/kafka/pull/8703#issuecomment-632428809


   bq. What is the purpose of this new method? Should we add getOrCreate to 
ImplicitLinkedHashMultiCollection as well?
   
   The purpose is to make it easy to access map elements by their keys.
   
   We can't add it to `ImplicitLinkedHashMultiCollection` because only the 
generated code has the notion of keys.  In `ImplicitLinkedHashMultiCollection`, 
we just have elements, which is more awkward to deal with.







[GitHub] [kafka] cmccabe commented on pull request #8703: MINOR: add a getOrCreate function to KRPC collections

2020-05-21 Thread GitBox


cmccabe commented on pull request #8703:
URL: https://github.com/apache/kafka/pull/8703#issuecomment-632428282


   Looks like Jenkins is flaking out a bit.
   ```
   FATAL: Unable to delete script file /tmp/jenkins3984496192919521109.sh
   java.io.EOFException
   ```
   Will re-push to get a clean test run.







[jira] [Updated] (KAFKA-10004) KAFKA-10004: ConfigCommand fails to find default broker configs without ZK

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe updated KAFKA-10004:
-
Fix Version/s: 2.5.1

> KAFKA-10004: ConfigCommand fails to find default broker configs without ZK
> --
>
> Key: KAFKA-10004
> URL: https://issues.apache.org/jira/browse/KAFKA-10004
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 2.6.0, 2.5.1
>
>
> When running
> {code:java}
> bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 
> --entity-type brokers
>  {code}
> the output will be:
>  Dynamic configs for broker 0 are: 
>  Dynamic configs for broker  are:
>  *The entity name for brokers must be a valid integer broker id, found: 
> *
>  
> The configs for the default entity cannot be retrieved successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] cmccabe commented on pull request #8675: KAFKA-10004: ConfigCommand fails to find default broker configs without ZK

2020-05-21 Thread GitBox


cmccabe commented on pull request #8675:
URL: https://github.com/apache/kafka/pull/8675#issuecomment-632426965


   Backported to 2.5.1







[jira] [Assigned] (KAFKA-9942) ConfigCommand fails to set client quotas for default users with --bootstrap-server.

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe reassigned KAFKA-9942:
---

Assignee: Brian Byrne  (was: Cheng Tan)

> ConfigCommand fails to set client quotas for default users with 
> --bootstrap-server.
> ---
>
> Key: KAFKA-9942
> URL: https://issues.apache.org/jira/browse/KAFKA-9942
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Cheng Tan
>Assignee: Brian Byrne
>Priority: Major
> Fix For: 2.6.0
>
>
> {quote}$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter 
> --add-config producer_byte_rate=10,consumer_byte_rate=10 
> --entity-type clients --entity-default
> {quote}
> This usage of --entity-default with --bootstrap-server for altering 
> configs will trigger the exception below. Similar for --describe:
> {quote}/opt/kafka-dev/bin/kafka-configs.sh --bootstrap-server ducker04:9093 
> --describe --entity-type clients --entity-default --command-config 
> /opt/kafka-dev/bin/hi.properties
> {quote}
>  
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownServerException: Path must not end with 
> / character
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
> at 
> kafka.admin.ConfigCommand$.getAllClientQuotasConfigs(ConfigCommand.scala:501)
> at kafka.admin.ConfigCommand$.getClientQuotasConfig(ConfigCommand.scala:487)
> at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:361)
> at kafka.admin.ConfigCommand$.processCommand(ConfigCommand.scala:292)
> at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:91)
> at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> Caused by: org.apache.kafka.common.errors.UnknownServerException: Path must 
> not end with / character
> {quote}
> However, if the --entity-type is brokers, the alteration works fine. 
> {quote}$ No exception, works properly
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-default --alter --add-config unclean.leader.election.enable=true
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type brokers --entity-default
> {quote}
>  
> Update:
>  
> For --describe:
> Commands work properly:
> {quote}bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type brokers --entity-default
> bin/kafka-configs.sh --zookeeper localhost:2181 --describe --broker-defaults
> {quote}
> Commands do not work:
> {quote}bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type topics --entity-default
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type users --entity-default
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type clients --entity-default
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --client-defaults
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --user-defaults
>  
> {quote}
>  
> For --alter:
> Commands work properly:
> {quote}bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 
> max.messages.bytes=128000 --entity-type topics --entity-default (an entity 
> name must be specified with --alter of topics)
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 
> unclean.leader.election.enable=true --entity-type brokers --entity-default
> {quote}
>  
> Commands do not work:
> {quote}bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter 
> --add-config producer_byte_rate=10,consumer_byte_rate=10 
> --entity-type clients --entity-default
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 
> producer_byte_rate=4 --entity-type users --entity-default (No exception 
> thrown but failed to add the config)
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 
> producer_byte_rate=10,consumer_byte_rate=10 --client-defaults
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 
> producer_byte_rate=4 --user-defaults (No exception thrown but failed to 
> add the config)
>  
> {quote}
>  
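>
> A hedged illustration of the failure mode the stack trace suggests (the
> path scheme below is hypothetical, not Kafka's actual ZooKeeper layout):
> with --entity-default the entity name is empty, so a naively built config
> path ends in '/', matching the "Path must not end with / character" error.
>
> ```java
> // Hypothetical sketch: an empty (default) entity name leaves a trailing '/'.
> public class DefaultEntityPathSketch {
>     static String configPath(String entityType, String entityName) {
>         return "/config/" + entityType + "/" + entityName; // made-up scheme
>     }
>
>     public static void main(String[] args) {
>         String path = configPath("clients", ""); // --entity-default -> empty name
>         if (!path.endsWith("/")) throw new AssertionError();
>         System.out.println(path); // "/config/clients/" -- rejected by the server
>     }
> }
> ```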





[jira] [Issue Comment Deleted] (KAFKA-9942) ConfigCommand fails to set client quotas for default users with --bootstrap-server.

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe updated KAFKA-9942:

Comment: was deleted

(was: [https://github.com/apache/kafka/pull/8610] Patch here)



[jira] [Updated] (KAFKA-9942) ConfigCommand fails to set client quotas for default users with --bootstrap-server.

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe updated KAFKA-9942:

Fix Version/s: (was: 2.5.1)
Affects Version/s: 2.6.0



[jira] [Resolved] (KAFKA-9942) ConfigCommand fails to set client quotas for default users with --bootstrap-server.

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-9942.
-
Fix Version/s: 2.5.1
   2.6.0
   Resolution: Fixed



[jira] [Updated] (KAFKA-9942) ConfigCommand fails to set client quotas for default users with --bootstrap-server.

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe updated KAFKA-9942:

Summary: ConfigCommand fails to set client quotas for default users with 
--bootstrap-server.  (was: --entity-default flag is not working for alternating 
/ describing configs in AdminClient)



[jira] [Updated] (KAFKA-9980) Fix bug where alterClientQuotas could not set default client quotas

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe updated KAFKA-9980:

Fix Version/s: 2.6.0 (was: 2.5.1)
Affects Version/s: 2.6.0

> Fix bug where alterClientQuotas could not set default client quotas
> ---
>
> Key: KAFKA-9980
> URL: https://issues.apache.org/jira/browse/KAFKA-9980
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Cheng Tan
>Assignee: Brian Byrne
>Priority: Major
> Fix For: 2.6.0
>
>
> quota_tests.py is failing. Specifically for this test:
> {quote}
>  [INFO:2020-05-11 19:22:47,493]: RunnerClient: Loading test \{'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/client', 'file_name': 'quota_test.py', 
> 'method_name': 'test_quota', 'cls_name': 'QuotaTest', 'injected_args': 
> {'quota_type': 'client-id', 'override_quota': False}}
> {quote}
>  
> I log into the docker container and do
>  
> {quote}
>  /opt/kafka-dev/bin/kafka-configs.sh --bootstrap-server ducker03:9093 
> --describe --entity-type clients --command-config 
> /opt/kafka-dev/bin/hi.properties
> {quote}
>  
>  and the command returns
>  
> {quote}Configs for the default client-id are consumer_byte_rate=200.0, 
> producer_byte_rate=250.0
>  Configs for client-id 'overridden_id' are consumer_byte_rate=1.0E9, 
> producer_byte_rate=1.0E9
>  Seems like the config is set properly but the quota is not effective
>   
> {quote}
>  For investigation, I added logging at 
> {quote}{{AdminZKClient.changeConfigs()}}
> {quote}
>  
>  
> {quote}def changeConfigs(entityType: String, entityName: String, configs: Properties): Unit = {
>   warn(s"entityType = $entityType entityName = $entityName configs = $configs")
>   ...
> }
> {quote}
> And use --bootstrap-server and --zookeeper to --alter the default client 
> quota. I got
>  
> {quote}
>  Alter with --zookeeper:WARN entityType = clients entityName = <default> 
> configs = \{producer_byte_rate=10, consumer_byte_rate=10} 
> (kafka.zk.AdminZkClient)
> {quote}
>  
>  and
>  
> {quote}
>  Alter with --bootstrap-server:WARN entityType = clients entityName = 
> %3Cdefault%3E configs = \{producer_byte_rate=10, 
> consumer_byte_rate=10} (kafka.zk.AdminZkClient)
> {quote}
>  
>  I guess the encoding difference might cause the issue. The encoding happens 
> in
>  
> {quote}
>  Sanitizer.sanitize()
> {quote}
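The encoding mismatch described above can be reproduced with plain `java.net.URLEncoder`; this is an illustrative sketch only (Kafka's actual `Sanitizer.sanitize()` applies a few extra substitutions beyond standard URL encoding, and the entity names here are assumptions based on the log lines quoted above):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DefaultEntityEncoding {
    public static void main(String[] args) throws Exception {
        // Raw name as written by the --zookeeper path (illustrative)
        String rawDefault = "<default>";
        // URL encoding maps '<' to %3C and '>' to %3E, matching the
        // entity name logged by the --bootstrap-server path
        String encoded = URLEncoder.encode(rawDefault, StandardCharsets.UTF_8.name());
        System.out.println(encoded);                    // %3Cdefault%3E
        System.out.println(rawDefault.equals(encoded)); // false
    }
}
```

A quota lookup keyed on one form will not find an entry stored under the other, which would explain why the configured default quota never takes effect.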





[jira] [Resolved] (KAFKA-9980) Fix bug where alterClientQuotas could not set default client quotas

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-9980.
-
Fix Version/s: 2.5.1
   Resolution: Fixed

> Fix bug where alterClientQuotas could not set default client quotas
> ---
>
> Key: KAFKA-9980
> URL: https://issues.apache.org/jira/browse/KAFKA-9980
> Project: Kafka
>  Issue Type: Bug
>Reporter: Cheng Tan
>Assignee: Brian Byrne
>Priority: Major
> Fix For: 2.5.1
>
>





[GitHub] [kafka] cmccabe merged pull request #8658: KAFKA-9980: Fix bug where alterClientQuotas could not set default client quotas

2020-05-21 Thread GitBox


cmccabe merged pull request #8658:
URL: https://github.com/apache/kafka/pull/8658


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] cmccabe commented on pull request #8658: KAFKA-9980: Fix bug where alterClientQuotas could not set default client quotas

2020-05-21 Thread GitBox


cmccabe commented on pull request #8658:
URL: https://github.com/apache/kafka/pull/8658#issuecomment-632418566


   LGTM.  Thanks, @bdbyrne .







[jira] [Updated] (KAFKA-9980) Fix bug where alterClientQuotas could not set default client quotas

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe updated KAFKA-9980:

Summary: Fix bug where alterClientQuotas could not set default client 
quotas  (was: Text encoding bug prevents correctly setting client quotas for 
default entities)

> Fix bug where alterClientQuotas could not set default client quotas
> ---
>
> Key: KAFKA-9980
> URL: https://issues.apache.org/jira/browse/KAFKA-9980
> Project: Kafka
>  Issue Type: Bug
>Reporter: Cheng Tan
>Assignee: Brian Byrne
>Priority: Major
>





[GitHub] [kafka] abbccdda commented on a change in pull request #8702: MINOR: Fix join group request timeout lower bound

2020-05-21 Thread GitBox


abbccdda commented on a change in pull request #8702:
URL: https://github.com/apache/kafka/pull/8702#discussion_r428986739



##
File path: 
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
##
@@ -565,8 +566,8 @@ private void recordRebalanceFailure() {
 
 // Note that we override the request timeout using the rebalance 
timeout since that is the
 // maximum time that it may block on the coordinator. We add an extra 
5 seconds for small delays.
-
-int joinGroupTimeoutMs = Math.max(rebalanceConfig.rebalanceTimeoutMs, 
rebalanceConfig.rebalanceTimeoutMs + 5000);
+int joinGroupTimeoutMs = Math.max(client.defaultRequestTimeoutMs(),

Review comment:
   Could we log a debug message here for the timeout we used?

##
File path: 
clients/src/test/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinatorTest.java
##
@@ -312,8 +313,27 @@ public void testJoinGroupRequestTimeout() {
 mockTime.sleep(REQUEST_TIMEOUT_MS + 1);
 assertFalse(consumerClient.poll(future, mockTime.timer(0)));
 
-mockTime.sleep(REBALANCE_TIMEOUT_MS - REQUEST_TIMEOUT_MS + 5000);
+mockTime.sleep(REBALANCE_TIMEOUT_MS - REQUEST_TIMEOUT_MS + 
AbstractCoordinator.JOIN_GROUP_TIMEOUT_LAPSE);
 assertTrue(consumerClient.poll(future, mockTime.timer(0)));
+assertTrue(future.exception() instanceof DisconnectException);
+}
+
+@Test
+public void 
testJoinGroupRequestTimeoutLowerBoundedByDefaultRequestTimeout() {
+int rebalanceTimeoutMs = REQUEST_TIMEOUT_MS - 1;
+setupCoordinator(RETRY_BACKOFF_MS, rebalanceTimeoutMs, 
Optional.empty());
+mockClient.prepareResponse(groupCoordinatorResponse(node, 
Errors.NONE));
+coordinator.ensureCoordinatorReady(mockTime.timer(0));
+
+RequestFuture future = coordinator.sendJoinGroupRequest();
+
+long expectedRequestDeadline = mockTime.milliseconds() + 
REQUEST_TIMEOUT_MS;
+mockTime.sleep(rebalanceTimeoutMs + 
AbstractCoordinator.JOIN_GROUP_TIMEOUT_LAPSE + 1);
+assertFalse(consumerClient.poll(future, mockTime.timer(0)));
+
+mockTime.sleep(expectedRequestDeadline - mockTime.milliseconds() + 1);

Review comment:
   We could just test `mockTime.sleep(REQUEST_TIMEOUT_MS + 1)` for this 
case and get rid of `expectedRequestDeadline`
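The computation under review lower-bounds the JoinGroup request timeout by the client's default request timeout (the old code took `max(rebalanceTimeoutMs, rebalanceTimeoutMs + 5000)`, which always resolves to the second operand). A minimal sketch of the fixed logic, with illustrative names and constants rather than the exact Kafka internals:

```java
public class JoinGroupTimeout {
    // Extra allowance for small delays, per the comment in AbstractCoordinator
    static final int JOIN_GROUP_TIMEOUT_LAPSE = 5000;

    static int joinGroupTimeoutMs(int defaultRequestTimeoutMs, int rebalanceTimeoutMs) {
        return Math.max(defaultRequestTimeoutMs, rebalanceTimeoutMs + JOIN_GROUP_TIMEOUT_LAPSE);
    }

    public static void main(String[] args) {
        // A short rebalance timeout is bounded below by the request timeout
        System.out.println(joinGroupTimeoutMs(30_000, 10_000));  // 30000
        // A long rebalance timeout dominates
        System.out.println(joinGroupTimeoutMs(30_000, 300_000)); // 305000
    }
}
```

This is what the new `testJoinGroupRequestTimeoutLowerBoundedByDefaultRequestTimeout` case exercises: a rebalance timeout smaller than the request timeout must not shorten the request deadline.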









[GitHub] [kafka] ableegoldman commented on pull request #8697: KAFKA-9983: KIP-613, add task-level e2e latency metrics (min and max)

2020-05-21 Thread GitBox


ableegoldman commented on pull request #8697:
URL: https://github.com/apache/kafka/pull/8697#issuecomment-632404608


   Alright I fixed the latency measurement to record the right thing @mjsax 







[GitHub] [kafka] fhussonnois commented on pull request #2604: KAFKA-4794: Add access to OffsetStorageReader from SourceConnector

2020-05-21 Thread GitBox


fhussonnois commented on pull request #2604:
URL: https://github.com/apache/kafka/pull/2604#issuecomment-632396304


   @rhauch thank you very much for finalizing this PR; I apologize for not 
finding any free time to work on it these past few weeks.







[GitHub] [kafka] rhauch removed a comment on pull request #2604: KAFKA-4794: Add access to OffsetStorageReader from SourceConnector

2020-05-21 Thread GitBox


rhauch removed a comment on pull request #2604:
URL: https://github.com/apache/kafka/pull/2604#issuecomment-632392110


   @fhussonnois, thanks again for doing the brunt of the work on this feature. 
We're getting close to AK 2.6.0 feature freeze, and it'd be great to merge this 
feature now that KIP-131 has been adopted. Since my previous review was some 
time ago, I hope you don't mind that I pushed the remaining changes I suggested 
in my previous review. I've also dismissed my previous "Request Changes" review.
   
   @kkonstantine would you mind taking a look, since I'd rather have an 
independent review of my changes? Thanks!







[GitHub] [kafka] rhauch commented on pull request #2604: KAFKA-4794: Add access to OffsetStorageReader from SourceConnector

2020-05-21 Thread GitBox


rhauch commented on pull request #2604:
URL: https://github.com/apache/kafka/pull/2604#issuecomment-632392219


   ok to test







[GitHub] [kafka] rhauch commented on pull request #2604: KAFKA-4794: Add access to OffsetStorageReader from SourceConnector

2020-05-21 Thread GitBox


rhauch commented on pull request #2604:
URL: https://github.com/apache/kafka/pull/2604#issuecomment-632392110











[GitHub] [kafka] JimGalasyn opened a new pull request #8710: DOCS-4435: Migrate new security note from CP to AK docs

2020-05-21 Thread GitBox


JimGalasyn opened a new pull request #8710:
URL: https://github.com/apache/kafka/pull/8710


   Migrate content from https://github.com/confluentinc/docs/pull/4655.
   
   Also restore content from https://github.com/apache/kafka/pull/4532, which 
seems to have been lost along the way. 







[GitHub] [kafka] mjsax commented on pull request #8679: KAFKA-10003: Mark KStream.through() as deprecated

2020-05-21 Thread GitBox


mjsax commented on pull request #8679:
URL: https://github.com/apache/kafka/pull/8679#issuecomment-632363696


   Java 8 passed.
   Java 11:
   ```
   
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta[false]
   ```
   Java 14:
   ```
   
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta[false]
   ```







[GitHub] [kafka] ableegoldman commented on a change in pull request #8697: KAFKA-9983: KIP-613, add task-level e2e latency metrics (min and max)

2020-05-21 Thread GitBox


ableegoldman commented on a change in pull request #8697:
URL: https://github.com/apache/kafka/pull/8697#discussion_r428933011



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/metrics/TaskMetrics.java
##
@@ -86,6 +87,14 @@ private TaskMetrics() {}
 private static final String NUM_BUFFERED_RECORDS_DESCRIPTION = "The count 
of buffered records that are polled " +
 "from consumer and not yet processed for this active task";
 
+private static final String RECORD_E2E_LATENCY = "record-e2e-latency";
+static final String RECORD_E2E_LATENCY_MAX_DESCRIPTION =
+"The maximum end-to-end latency of a record, measured by comparing 
the record timestamp with the "
++ "system time when it has been fully processed by the task";

Review comment:
   Oh right, I put the `record` in the wrong place but this description is 
correct. It should record at the `RecordCollector` for the task-level metrics
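The metric semantics being discussed (record timestamp compared against the wall-clock time once the record has been fully processed, with NaN before any record is seen) can be sketched as follows; the class and method names are illustrative, not the Streams internals:

```java
// Illustrative min/max tracker for record end-to-end latency.
public class E2eLatencyTracker {
    private double maxMs = Double.NaN;
    private double minMs = Double.NaN;

    // e2e latency = system time at the end of processing - record timestamp
    public void record(long recordTimestampMs, long systemTimeMs) {
        double latencyMs = systemTimeMs - recordTimestampMs;
        maxMs = Double.isNaN(maxMs) ? latencyMs : Math.max(maxMs, latencyMs);
        minMs = Double.isNaN(minMs) ? latencyMs : Math.min(minMs, latencyMs);
    }

    public double maxMs() { return maxMs; }
    public double minMs() { return minMs; }
}
```

For example, records timestamped 0, 10, 5, and 20 that are all fully processed at system time 100 would yield max = 100 and min = 80.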









[GitHub] [kafka] mjsax commented on pull request #8697: KAFKA-9983: KIP-613, add task-level e2e latency metrics (min and max)

2020-05-21 Thread GitBox


mjsax commented on pull request #8697:
URL: https://github.com/apache/kafka/pull/8697#issuecomment-632360387


   Retest this please.







[GitHub] [kafka] mjsax commented on a change in pull request #8697: KAFKA-9983: KIP-613, add task-level e2e latency metrics (min and max)

2020-05-21 Thread GitBox


mjsax commented on a change in pull request #8697:
URL: https://github.com/apache/kafka/pull/8697#discussion_r428930276



##
File path: 
streams/src/test/java/org/apache/kafka/streams/processor/internals/StreamTaskTest.java
##
@@ -420,6 +422,50 @@ public void shouldRecordProcessRatio() {
 assertThat(metric.metricValue(), equalTo(1.0d));
 }
 
+@Test
+public void shouldRecordE2ELatency() {
+time = new MockTime(0L, 0L, 0L);
+metrics = new Metrics(new 
MetricConfig().recordLevel(Sensor.RecordingLevel.DEBUG), time);
+
+task = createStatelessTask(createConfig(false, "0"), 
StreamsConfig.METRICS_LATEST);
+
+final KafkaMetric maxMetric = getMetric("record-e2e-latency", 
"%s-max", task.id().toString(), StreamsConfig.METRICS_LATEST);
+final KafkaMetric minMetric = getMetric("record-e2e-latency", 
"%s-min", task.id().toString(), StreamsConfig.METRICS_LATEST);
+
+assertThat(maxMetric.metricValue(), equalTo(Double.NaN));
+
+task.addRecords(partition1, asList(
+getConsumerRecord(partition1, 0L),
+getConsumerRecord(partition1, 10L),
+getConsumerRecord(partition1, 5L),
+getConsumerRecord(partition1, 20L)

Review comment:
   Can we increase this ts to 35? This would allow us to test min better in the last step.

##
File path: 
streams/src/test/java/org/apache/kafka/streams/processor/internals/metrics/TaskMetricsTest.java
##
@@ -14,15 +14,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.kafka.streams.kstream.internals.metrics;
+package org.apache.kafka.streams.processor.internals.metrics;

Review comment:
   Nice one!

##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/metrics/TaskMetrics.java
##
@@ -86,6 +87,14 @@ private TaskMetrics() {}
 private static final String NUM_BUFFERED_RECORDS_DESCRIPTION = "The count 
of buffered records that are polled " +
 "from consumer and not yet processed for this active task";
 
+private static final String RECORD_E2E_LATENCY = "record-e2e-latency";
+static final String RECORD_E2E_LATENCY_MAX_DESCRIPTION =
+"The maximum end-to-end latency of a record, measured by comparing 
the record timestamp with the "
++ "system time when it has been fully processed by the task";

Review comment:
   Assuming that a task might have a cache, is this correct, ie, `has been 
fully processed by the task`)?









[GitHub] [kafka] guozhangwang commented on pull request #8702: MINOR: Fix join group request timeout lower bound

2020-05-21 Thread GitBox


guozhangwang commented on pull request #8702:
URL: https://github.com/apache/kafka/pull/8702#issuecomment-632360522


   retest this please
   
   







[GitHub] [kafka] guozhangwang commented on pull request #8698: KAFKA-10022:console-producer supports the setting of client.id

2020-05-21 Thread GitBox


guozhangwang commented on pull request #8698:
URL: https://github.com/apache/kafka/pull/8698#issuecomment-632358479


   @xinzhuxiansheng could you run the whole unit test suite locally since 
jenkins seems a bit unstable at the moment? You can run `./gradlew 
cleanTest test`







[GitHub] [kafka] hachikuji opened a new pull request #8709: KAFKA-9952; Remove immediate fetch completion logic on high watermark updates

2020-05-21 Thread GitBox


hachikuji opened a new pull request #8709:
URL: https://github.com/apache/kafka/pull/8709


   For KIP-392, we added logic to make sure that high watermark changes are 
propagated to followers without delay in order to improve end to end latency 
when fetching from followers. The downside of this change is that it increases 
the rate of fetch requests from followers which can have a noticeable impact on 
performance (see KAFKA-9731). 
   
   To fix that problem, we have already modified the code so that we only 
propagate high watermark changes immediately when a replica selector is used 
(which is not the default). However, leaving this logic around means that it is 
risky to enable follower fetching since it changes the follower request rate, 
which can have a big impact on overall broker performance. 
   
   This patch disables immediate propagation of the high watermark more 
generally. Instead, users can use the max wait time in order to control the 
worst-case latency. Note that this is typically only a problem anyway for 
low-throughput clusters since otherwise we will have a steady rate of high 
watermark updates.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   







[GitHub] [kafka] bdbyrne edited a comment on pull request #8658: KAFKA-9980: Fix client quotas default entity name handling in broker.

2020-05-21 Thread GitBox


bdbyrne edited a comment on pull request #8658:
URL: https://github.com/apache/kafka/pull/8658#issuecomment-632348072


   Both test failures in 
`org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta[true]`
 appear unrelated:
   
   ```
   Failing for the past 1 build (Since Failed#6410 )
   Took 2 min 32 sec.
   Error Message
   java.lang.AssertionError: Did not receive all 20 records from topic 
multiPartitionOutputTopic within 6 ms
   Expected: is a value equal to or greater than <20>
but: <0> was less than <20>
   ```







[GitHub] [kafka] bdbyrne commented on pull request #8658: KAFKA-9980: Fix client quotas default entity name handling in broker.

2020-05-21 Thread GitBox


bdbyrne commented on pull request #8658:
URL: https://github.com/apache/kafka/pull/8658#issuecomment-632348072


   Both errors in 
`org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta[true]`
 appear unrelated:
   
   ```
   Failing for the past 1 build (Since Failed#6410 )
   Took 2 min 32 sec.
   Error Message
   java.lang.AssertionError: Did not receive all 20 records from topic 
multiPartitionOutputTopic within 6 ms
   Expected: is a value equal to or greater than <20>
but: <0> was less than <20>
   ```







[GitHub] [kafka] dima5rr commented on a change in pull request #8706: KAFKA-10030 allow fetching a key from a single partition

2020-05-21 Thread GitBox


dima5rr commented on a change in pull request #8706:
URL: https://github.com/apache/kafka/pull/8706#discussion_r428899383



##
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/StoreQueryIntegrationTest.java
##
@@ -332,6 +332,7 @@ private Properties streamsConfiguration() {
 config.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 200);
 config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 1000);
 config.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100);
+config.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);

Review comment:
   Added dedicated test









[GitHub] [kafka] C0urante commented on a change in pull request #8118: KAFKA-9472: Remove deleted tasks from status store

2020-05-21 Thread GitBox


C0urante commented on a change in pull request #8118:
URL: https://github.com/apache/kafka/pull/8118#discussion_r428897363



##
File path: 
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TaskStatus.java
##
@@ -61,5 +61,16 @@ public TaskStatus(ConnectorTaskId id, State state, String 
workerUrl, int generat
  */
 void onShutdown(ConnectorTaskId id);
 
+/**
+ * Invoked after the task is no longer needed. This differs from
+ * {@link #onShutdown(ConnectorTaskId)} in that a shut down task may 
be expected to restart
+ * soon (as in the case of a rebalance), whereas a destroyed task will 
not be restarted
+ * until and unless a reconfiguration of its connector occurs.
+ * 
+ * This may occur after the number of tasks for a connector is reduced.

Review comment:
   Ack, addressed









[GitHub] [kafka] kkonstantine commented on a change in pull request #8118: KAFKA-9472: Remove deleted tasks from status store

2020-05-21 Thread GitBox


kkonstantine commented on a change in pull request #8118:
URL: https://github.com/apache/kafka/pull/8118#discussion_r428455039



##
File path: 
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TaskStatus.java
##
@@ -61,5 +61,16 @@ public TaskStatus(ConnectorTaskId id, State state, String 
workerUrl, int generat
  */
 void onShutdown(ConnectorTaskId id);
 
+/**
+ * Invoked after the task is no longer needed. This differs from
+ * {@link #onShutdown(ConnectorTaskId)} in that a shut down task may 
be expected to restart
+ * soon (as in the case of a rebalance), whereas a destroyed task will 
not be restarted
+ * until and unless a reconfiguration of its connector occurs.
+ * 
+ * This may occur after the number of tasks for a connector is reduced.

Review comment:
   Invoked when the task is deleted because the connector's tasks were reduced or the connector itself was deleted.









[GitHub] [kafka] kkonstantine commented on pull request #8511: KAFKA-9888: Copy connector configs before passing to REST extensions

2020-05-21 Thread GitBox


kkonstantine commented on pull request #8511:
URL: https://github.com/apache/kafka/pull/8511#issuecomment-632317517


   retest this please







[GitHub] [kafka] kkonstantine commented on pull request #8511: KAFKA-9888: Copy connector configs before passing to REST extensions

2020-05-21 Thread GitBox


kkonstantine commented on pull request #8511:
URL: https://github.com/apache/kafka/pull/8511#issuecomment-632317445


   2/3 builds green, yet a failure that seems unrelated but happened during a Connect IT. Given that, I'll rerun once more. 







[GitHub] [kafka] rhauch commented on pull request #8357: KAFKA-9767: Add logging to basic auth rest extension

2020-05-21 Thread GitBox


rhauch commented on pull request #8357:
URL: https://github.com/apache/kafka/pull/8357#issuecomment-632311965


   retest this please







[GitHub] [kafka] rhauch commented on pull request #8357: KAFKA-9767: Add logging to basic auth rest extension

2020-05-21 Thread GitBox


rhauch commented on pull request #8357:
URL: https://github.com/apache/kafka/pull/8357#issuecomment-632312121


   ok to test







[GitHub] [kafka] rhauch commented on pull request #8620: KAFKA-9944: Added supporting customized HTTP response headers for Kafka Connect.

2020-05-21 Thread GitBox


rhauch commented on pull request #8620:
URL: https://github.com/apache/kafka/pull/8620#issuecomment-632309779


   ok to test







[GitHub] [kafka] rhauch commented on pull request #8620: KAFKA-9944: Added supporting customized HTTP response headers for Kafka Connect.

2020-05-21 Thread GitBox


rhauch commented on pull request #8620:
URL: https://github.com/apache/kafka/pull/8620#issuecomment-632309635


   retest this please







[GitHub] [kafka] rhauch commented on a change in pull request #8620: KAFKA-9944: Added supporting customized HTTP response headers for Kafka Connect.

2020-05-21 Thread GitBox


rhauch commented on a change in pull request #8620:
URL: https://github.com/apache/kafka/pull/8620#discussion_r428877515



##
File path: checkstyle/import-control.xml
##
@@ -362,6 +362,7 @@
   
   
   
+  

Review comment:
   FYI: this new dependency has no transitive dependencies









[GitHub] [kafka] ijuma commented on pull request #8695: KAFKA-9320: KIP-573 - Enable TLSv1.3 by default

2020-05-21 Thread GitBox


ijuma commented on pull request #8695:
URL: https://github.com/apache/kafka/pull/8695#issuecomment-632308200


   Since the vote passed, can we flesh out the PR to include more tests that 
exercise TLS 1.3? A few things to think about:
   
   1. Unit tests like the ones included in the PR currently. Can we go through 
the various possible combinations of client and server configuration and check 
that they all work or fail in the way we expect?
   
   2. Make sure the integration tests use the same TLS configuration we use by 
default (if they don't already). Since Java 8 sticks to TLS 1.2 for now, we 
will get coverage of the old and new approach this way.
   
   3. Adjust system tests to use TLS 1.3 by default, but also include variants 
where client uses TLS 1.2 and broker uses 1.3, the reverse and finally where 
TLS 1.2 is used for both.
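
   The mixed-version variants in (3) come down to the standard Kafka SSL 
settings on each side. A hedged sketch (the config names `ssl.protocol` and 
`ssl.enabled.protocols` are the standard Kafka settings; the exact values each 
test variant would use are assumptions):

```properties
# Broker side (server.properties): allow both protocols so TLS 1.2 clients can still connect
ssl.enabled.protocols=TLSv1.3,TLSv1.2
ssl.protocol=TLSv1.3

# Client side, pinned to TLS 1.2 to exercise the client-1.2 / broker-1.3 combination
# (swap the two blocks to test the reverse; set both to TLSv1.2 for the final variant)
ssl.enabled.protocols=TLSv1.2
ssl.protocol=TLSv1.2
```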







[GitHub] [kafka] abbccdda commented on pull request #8677: KAFKA-9999: Make internal topic creation error non-fatal

2020-05-21 Thread GitBox


abbccdda commented on pull request #8677:
URL: https://github.com/apache/kafka/pull/8677#issuecomment-632305083


   Have updated the PR with my current understanding of the proposal @vvcephei 
@ableegoldman; the part that needs more discussion is the case for 
`prepareRepartitionTopics`, which could also fail to create any internal topic. 
Should we continue in that case?







[GitHub] [kafka] rhauch commented on pull request #6284: KAFKA-6755: Allow literal value for MaskField SMT

2020-05-21 Thread GitBox


rhauch commented on pull request #6284:
URL: https://github.com/apache/kafka/pull/6284#issuecomment-632301873


   ok to test







[GitHub] [kafka] rhauch commented on pull request #6284: KAFKA-6755: Allow literal value for MaskField SMT

2020-05-21 Thread GitBox


rhauch commented on pull request #6284:
URL: https://github.com/apache/kafka/pull/6284#issuecomment-632302071


   Rebased on latest `trunk` to correct a conflict in the `docs/connect.html` 
file.







[jira] [Resolved] (KAFKA-9780) Deprecate commit records without record metadata

2020-05-21 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9780.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk` after 
[KIP-586|https://cwiki.apache.org/confluence/display/KAFKA/KIP-586%3A+Deprecate+commit+records+without+record+metadata]
 has been adopted.

> Deprecate commit records without record metadata
> 
>
> Key: KAFKA-9780
> URL: https://issues.apache.org/jira/browse/KAFKA-9780
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.4.1
>Reporter: Mario Molina
>Assignee: Mario Molina
>Priority: Minor
> Fix For: 2.6.0
>
>
> Since KIP-382 (MirrorMaker 2.0) a new method {{commitRecord}} was included in 
> {{SourceTask}} class to be called by the worker adding a new parameter with 
> the record metadata. The old {{commitRecord}} method is called from the 
> new one and is preserved just for backwards compatibility.
> The idea is to deprecate this method so that we could remove it in a future 
> release.
> There is a KIP for this ticket: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-586%3A+Deprecate+commit+records+without+record+metadata]
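
The delegation pattern described above can be sketched as follows (stand-in
types, not the actual Connect `SourceTask` API; the `oldHookCalled` flag is an
illustrative addition to make the behaviour observable):

```java
// Sketch of deprecating the old commitRecord hook while keeping it working:
// the new metadata-aware overload delegates to the old one, so legacy
// subclass overrides still run.
public class SourceTaskSketch {

    static class SourceRecord { }
    static class RecordMetadata { }

    boolean oldHookCalled = false;

    /** Old hook: deprecated, kept so existing tasks keep working. */
    @Deprecated
    public void commitRecord(final SourceRecord record) {
        oldHookCalled = true;   // legacy subclasses would override this
    }

    /** New hook (per KIP-382): delegates to the old one for compatibility. */
    public void commitRecord(final SourceRecord record, final RecordMetadata metadata) {
        commitRecord(record);   // old overrides still run
    }

    public static void main(final String[] args) {
        final SourceTaskSketch task = new SourceTaskSketch();
        task.commitRecord(new SourceRecord(), new RecordMetadata());
        System.out.println(task.oldHookCalled);  // prints: true
    }
}
```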



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] rhauch merged pull request #8379: KAFKA-9780: Deprecate commit records without record metadata

2020-05-21 Thread GitBox


rhauch merged pull request #8379:
URL: https://github.com/apache/kafka/pull/8379


   







[GitHub] [kafka] rhauch commented on pull request #8654: KAFKA-9931: Implement KIP-605 to expand support for Connect worker internal topic configurations

2020-05-21 Thread GitBox


rhauch commented on pull request #8654:
URL: https://github.com/apache/kafka/pull/8654#issuecomment-632288718


   FYI: previous builds had no failures related to Connect.







[GitHub] [kafka] rhauch commented on pull request #8654: KAFKA-9931: Implement KIP-605 to expand support for Connect worker internal topic configurations

2020-05-21 Thread GitBox


rhauch commented on pull request #8654:
URL: https://github.com/apache/kafka/pull/8654#issuecomment-632288310


   Incorporated @kkonstantine's more recent suggestions, and further changed 
`TopicAdmin` to use existing `TopicConfig` constants rather than string 
literals. 
   
   Rebased to pick up the changes from #8653 rather than incorporating the same 
commit in this PR.







[GitHub] [kafka] guozhangwang commented on pull request #8698: KAFKA-10022:console-producer supports the setting of client.id

2020-05-21 Thread GitBox


guozhangwang commented on pull request #8698:
URL: https://github.com/apache/kafka/pull/8698#issuecomment-632288176


   `test this` is actually a keyword to trigger the Jenkins job :) But thank you.
   
   







[GitHub] [kafka] guozhangwang commented on pull request #8698: KAFKA-10022:console-producer supports the setting of client.id

2020-05-21 Thread GitBox


guozhangwang commented on pull request #8698:
URL: https://github.com/apache/kafka/pull/8698#issuecomment-632288255


   test this jenkins







[GitHub] [kafka] xinzhuxiansheng commented on pull request #8698: KAFKA-10022:console-producer supports the setting of client.id

2020-05-21 Thread GitBox


xinzhuxiansheng commented on pull request #8698:
URL: https://github.com/apache/kafka/pull/8698#issuecomment-632275998


   > 
   > 
   > test this please
   
   I just added a unit test to validate the code







[jira] [Commented] (KAFKA-10030) Throw exception while fetching a key from a single partition

2020-05-21 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113449#comment-17113449
 ] 

Matthias J. Sax commented on KAFKA-10030:
-

Thanks for the bug report and PR! – I added you to the list of contributors and 
assigned the ticket to you. You can now also self-assign tickets.

> Throw exception while fetching a key from a single partition
> 
>
> Key: KAFKA-10030
> URL: https://issues.apache.org/jira/browse/KAFKA-10030
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0
> Environment: StreamsConfig.NUM_STREAM_THREADS_CONFIG=2
>Reporter: Dima R
>Assignee: Dima R
>Priority: Major
>  Labels: KAFKA-9445, KIP-562
> Fix For: 2.6.0, 2.5.1
>
>
> StreamThreadStateStoreProvider#stores throws an exception whenever the taskId is 
> not found, which is not correct behaviour in a multi-threaded environment where 
> state store partitions are distributed among several StreamTasks. 
> {code:java}
> final Task task = tasks.get(keyTaskId);
> if (task == null) {
>  throw new InvalidStateStoreException(
>  String.format("The specified partition %d for store %s does not exist.",
>  storeQueryParams.partition(),
>  storeName));
> }{code}
> Reproducible with KStream number of threads more than 1 
> StoreQueryIntegrationTest#streamsConfiguration
> config.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
>  
> Suggested solution is to not throw exception if at least one state store is 
> found
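
A minimal sketch of the suggested tolerant lookup, using stand-in types rather
than the real `StreamThreadStateStoreProvider`/`Task` classes (the names and
shapes below are illustrative assumptions, not the actual Streams internals):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Instead of throwing as soon as one stream thread misses the taskId,
// collect stores across all threads and fail only if NO thread hosts
// the requested partition.
public class StoreLookupSketch {

    static class InvalidStateStoreException extends RuntimeException {
        InvalidStateStoreException(final String msg) { super(msg); }
    }

    // tasksPerThread: for each stream thread, the partitions it hosts,
    // mapped to a placeholder "store" object (here just a String).
    static List<String> stores(final List<Map<Integer, String>> tasksPerThread,
                               final int partition, final String storeName) {
        final List<String> found = new ArrayList<>();
        for (final Map<Integer, String> tasks : tasksPerThread) {
            final String store = tasks.get(partition);
            if (store != null) {       // this thread hosts the partition
                found.add(store);
            }                          // otherwise: keep looking, don't throw
        }
        if (found.isEmpty()) {         // throw only if no thread has it
            throw new InvalidStateStoreException(
                String.format("The specified partition %d for store %s does not exist.",
                              partition, storeName));
        }
        return found;
    }

    public static void main(final String[] args) {
        final Map<Integer, String> thread1 = new HashMap<>();
        thread1.put(0, "store-p0");
        final Map<Integer, String> thread2 = new HashMap<>();
        thread2.put(1, "store-p1");
        // Partition 1 lives only on thread2; the lookup should still succeed.
        System.out.println(stores(List.of(thread1, thread2), 1, "my-store")); // prints: [store-p1]
    }
}
```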





[GitHub] [kafka] mjsax commented on pull request #8706: KAFKA-10030 allow fetching a key from a single partition

2020-05-21 Thread GitBox


mjsax commented on pull request #8706:
URL: https://github.com/apache/kafka/pull/8706#issuecomment-632267029


   @vinothchandar @brary for review







[jira] [Assigned] (KAFKA-10030) Throw exception while fetching a key from a single partition

2020-05-21 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-10030:
---

Assignee: Dima R

> Throw exception while fetching a key from a single partition
> 
>
> Key: KAFKA-10030
> URL: https://issues.apache.org/jira/browse/KAFKA-10030
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0
> Environment: StreamsConfig.NUM_STREAM_THREADS_CONFIG=2
>Reporter: Dima R
>Assignee: Dima R
>Priority: Major
>  Labels: KAFKA-9445, KIP-562
> Fix For: 2.6.0, 2.5.1
>
>
> StreamThreadStateStoreProvider#stores throws an exception whenever the taskId is 
> not found, which is not correct behaviour in a multi-threaded environment where 
> state store partitions are distributed among several StreamTasks. 
> {code:java}
> final Task task = tasks.get(keyTaskId);
> if (task == null) {
>  throw new InvalidStateStoreException(
>  String.format("The specified partition %d for store %s does not exist.",
>  storeQueryParams.partition(),
>  storeName));
> }{code}
> Reproducible with KStream number of threads more than 1 
> StoreQueryIntegrationTest#streamsConfiguration
> config.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
>  
> Suggested solution is to not throw exception if at least one state store is 
> found





[GitHub] [kafka] kkonstantine commented on pull request #8612: KAFKA-8120 Getting NegativeArraySizeException when using Kafka Connect

2020-05-21 Thread GitBox


kkonstantine commented on pull request #8612:
URL: https://github.com/apache/kafka/pull/8612#issuecomment-632265994


   @wj1918 thanks for opening a PR! 
   
   I'd definitely recommend fleshing out the bugfix in a separate PR from the 
rest of the refactoring since the latter does not seem trivial. Additionally, 
w/r/t the refactoring, I'd suggest thinking about whether it is essential. Keep in 
mind that this connector is available for demonstration purposes only and 
therefore its maintenance is not very heavy. But, back to the first point, 
before reviewing any changes, it'd be good to have them in separate PRs, since 
that will significantly help - at least myself - to review. 







[GitHub] [kafka] mjsax commented on pull request #8706: KAFKA-10030 allow fetching a key from a single partition

2020-05-21 Thread GitBox


mjsax commented on pull request #8706:
URL: https://github.com/apache/kafka/pull/8706#issuecomment-632265933


   Retest this please.







[GitHub] [kafka] mjsax commented on a change in pull request #8706: KAFKA-10030 allow fetching a key from a single partition

2020-05-21 Thread GitBox


mjsax commented on a change in pull request #8706:
URL: https://github.com/apache/kafka/pull/8706#discussion_r428831152



##
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/StoreQueryIntegrationTest.java
##
@@ -332,6 +332,7 @@ private Properties streamsConfiguration() {
 config.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 200);
 config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 1000);
 config.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100);
+config.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);

Review comment:
   We should write a proper test case instead of "piggy-backing" it into an 
existing test.









[GitHub] [kafka] chia7712 commented on pull request #8605: MINOR: align the constructor of KafkaConsumer to KafkaProducer

2020-05-21 Thread GitBox


chia7712 commented on pull request #8605:
URL: https://github.com/apache/kafka/pull/8605#issuecomment-632256376


   @nresare This "coincidence" is even better than an "approved" :)







[GitHub] [kafka] hachikuji commented on a change in pull request #8701: MINOR: Add reason to log message when incrementing the log start offset

2020-05-21 Thread GitBox


hachikuji commented on a change in pull request #8701:
URL: https://github.com/apache/kafka/pull/8701#discussion_r428819911



##
File path: core/src/main/scala/kafka/log/Log.scala
##
@@ -180,6 +180,17 @@ object RollParams {
   }
 }
 
+sealed trait LogStartIncrementCause

Review comment:
   I knew someone was going to ask this. I guess I'll do it.









[GitHub] [kafka] nresare commented on pull request #8605: MINOR: align the constructor of KafkaConsumer to KafkaProducer

2020-05-21 Thread GitBox


nresare commented on pull request #8605:
URL: https://github.com/apache/kafka/pull/8605#issuecomment-632253490


   Please note that `addDeserializerToConfig(Properties, Deserializer, 
Deserializer)` is now only used in test cases and can be removed along with 
the two tests of that method.







[GitHub] [kafka] bbejeck commented on a change in pull request #8504: KAFKA-9298: reuse mapped stream error in joins

2020-05-21 Thread GitBox


bbejeck commented on a change in pull request #8504:
URL: https://github.com/apache/kafka/pull/8504#discussion_r428815724



##
File path: 
streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamKStreamJoinTest.java
##
@@ -77,6 +79,38 @@ public void shouldLogAndMeterOnSkippedRecordsWithNullValueWithBuiltInMetricsVers
         shouldLogAndMeterOnSkippedRecordsWithNullValue(StreamsConfig.METRICS_LATEST);
     }
 
+
+    @Test
+    public void shouldReuseRepartitionTopicWithGeneratedName() {
+        final StreamsBuilder builder = new StreamsBuilder();
+        final Properties props = new Properties();
+        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.NO_OPTIMIZATION);
+        final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
+        newStream.join(stream2, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100))).to("out-one");
+        newStream.join(stream3, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100))).to("out-to");
+        assertEquals(expectedTopologyWithGeneratedRepartitionTopic, builder.build(props).describe().toString());
+    }
+
+    @Test
+    public void shouldCreateRepartitionTopicsWithUserProvidedName() {
+        final StreamsBuilder builder = new StreamsBuilder();
+        final Properties props = new Properties();
+        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.NO_OPTIMIZATION);
+        final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
+        final StreamJoined<String, String, String> streamJoined = StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String());
+        newStream.join(stream2, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100)), streamJoined.withName("first-join")).to("out-one");
+        newStream.join(stream3, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100)), streamJoined.withName("second-join")).to("out-two");
+        final Topology topology = builder.build(props);
+        System.out.println(topology.describe().toString());
+        assertEquals(expectedTopologyWithUserNamedRepartitionTopics, topology.describe().toString());
+    }

Review comment:
   Thanks for the discussion @vvcephei and @mjsax. I'll revert this PR to 
its original state which conforms to @vvcephei's comments above. 









[GitHub] [kafka] nresare commented on pull request #8605: MINOR: align the constructor of KafkaConsumer to KafkaProducer

2020-05-21 Thread GitBox


nresare commented on pull request #8605:
URL: https://github.com/apache/kafka/pull/8605#issuecomment-632250830


   @ijuma please note that this diff has changed fundamentally since you 
reviewed it, and it now modifies KafkaConsumer and not KafkaProducer. 
   
   I would like to endorse this PR. Funnily enough I came to very similar 
conclusions when I worked on the same code yesterday and raised 
https://github.com/apache/kafka/pull/8707 without any knowledge that this one 
existed.







[GitHub] [kafka] bbejeck commented on pull request #8637: KAFKA-9976: Reuse repartition node in all cases for KGroupedStream and KGroupedTable aggregates

2020-05-21 Thread GitBox


bbejeck commented on pull request #8637:
URL: https://github.com/apache/kafka/pull/8637#issuecomment-632249736


   >I think it's best to close this PR and also the ticker (either as "not a 
problem" or "won't fix")?
   
   ack







[GitHub] [kafka] bbejeck closed pull request #8637: KAFKA-9976: Reuse repartition node in all cases for KGroupedStream and KGroupedTable aggregates

2020-05-21 Thread GitBox


bbejeck closed pull request #8637:
URL: https://github.com/apache/kafka/pull/8637


   







[GitHub] [kafka] mjsax commented on a change in pull request #8504: KAFKA-9298: reuse mapped stream error in joins

2020-05-21 Thread GitBox


mjsax commented on a change in pull request #8504:
URL: https://github.com/apache/kafka/pull/8504#discussion_r428812516



##
File path: 
streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamKStreamJoinTest.java
##
@@ -77,6 +79,38 @@ public void shouldLogAndMeterOnSkippedRecordsWithNullValueWithBuiltInMetricsVers
         shouldLogAndMeterOnSkippedRecordsWithNullValue(StreamsConfig.METRICS_LATEST);
     }
 
+
+    @Test
+    public void shouldReuseRepartitionTopicWithGeneratedName() {
+        final StreamsBuilder builder = new StreamsBuilder();
+        final Properties props = new Properties();
+        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.NO_OPTIMIZATION);
+        final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
+        newStream.join(stream2, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100))).to("out-one");
+        newStream.join(stream3, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100))).to("out-to");
+        assertEquals(expectedTopologyWithGeneratedRepartitionTopic, builder.build(props).describe().toString());
+    }
+
+    @Test
+    public void shouldCreateRepartitionTopicsWithUserProvidedName() {
+        final StreamsBuilder builder = new StreamsBuilder();
+        final Properties props = new Properties();
+        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.NO_OPTIMIZATION);
+        final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
+        final StreamJoined<String, String, String> streamJoined = StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String());
+        newStream.join(stream2, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100)), streamJoined.withName("first-join")).to("out-one");
+        newStream.join(stream3, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100)), streamJoined.withName("second-join")).to("out-two");
+        final Topology topology = builder.build(props);
+        System.out.println(topology.describe().toString());
+        assertEquals(expectedTopologyWithUserNamedRepartitionTopics, topology.describe().toString());
+    }

Review comment:
   Thanks @vvcephei -- that is convincing.









[GitHub] [kafka] nresare commented on pull request #8707: Simplify KafkaConsumer constructor logic

2020-05-21 Thread GitBox


nresare commented on pull request #8707:
URL: https://github.com/apache/kafka/pull/8707#issuecomment-632248424


   @chia7712 what an interesting coincidence that our changes looks almost 
identical although I can assure you that they have been developed completely 
independently







[GitHub] [kafka] abbccdda commented on pull request #8486: KAFKA-9840: Skip End Offset validation when the leader epoch is not reliable

2020-05-21 Thread GitBox


abbccdda commented on pull request #8486:
URL: https://github.com/apache/kafka/pull/8486#issuecomment-632246269


   @ijuma will ping Jason for another review 







[GitHub] [kafka] feyman2016 commented on pull request #8589: KAFKA-9146: KIP-571 Add option to force delete active members in StreamsResetter

2020-05-21 Thread GitBox


feyman2016 commented on pull request #8589:
URL: https://github.com/apache/kafka/pull/8589#issuecomment-632246270


   @abbccdda Thanks so much for the timely and detailed comments, I will update 
soon.







[jira] [Commented] (KAFKA-9747) No tasks created for a connector

2020-05-21 Thread Andrew Garrett (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113414#comment-17113414
 ] 

Andrew Garrett commented on KAFKA-9747:
---

I'm seeing similar behavior, getting random tasks that are coming back as blank 
when checking status through the REST API. Then a second later, these tasks are 
showing as running on some machine after re-requesting the status. We have 10 
Connect pods running in distributed mode with a load balancer pointing to the 
overarching k8s service for the Connect pods.

> No tasks created for a connector
> 
>
> Key: KAFKA-9747
> URL: https://issues.apache.org/jira/browse/KAFKA-9747
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
> Environment: OS: Ubuntu 18.04 LTS
> Platform: Confluent Platform 5.4
> HW: The same behaviour on various AWS instances - from t3.small to c5.xlarge
>Reporter: Vit Koma
>Priority: Major
> Attachments: connect-distributed.properties, connect.log
>
>
> We are running Kafka Connect in a distributed mode on 3 nodes using Debezium 
> (MongoDB) and Confluent S3 connectors. When adding a new connector via the 
> REST API the connector is created in RUNNING state, but no tasks are created 
> for the connector.
> Pausing and resuming the connector does not help. When we stop all workers 
> and then start them again, the tasks are created and everything runs as it 
> should.
> The issue does not show up if we run only a single node.
> The issue is not caused by the connector plugins, because we see the same 
> behaviour for both Debezium and S3 connectors. Also in debug logs I can see 
> that Debezium is correctly returning a task configuration from the 
> Connector.taskConfigs() method.
> Connector configuration examples
> Debezium:
> {
>   "name": "qa-mongodb-comp-converter-task|1",
>   "config": {
> "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
> "mongodb.hosts": 
> "mongodb-qa-001:27017,mongodb-qa-002:27017,mongodb-qa-003:27017",
> "mongodb.name": "qa-debezium-comp",
> "mongodb.ssl.enabled": true,
> "collection.whitelist": "converter[.]task",
> "tombstones.on.delete": true
>   }
> }
> S3 Connector:
> {
>   "name": "qa-s3-sink-task|1",
>   "config": {
> "connector.class": "io.confluent.connect.s3.S3SinkConnector",
> "topics": "qa-debezium-comp.converter.task",
> "topics.dir": "data/env/qa",
> "s3.region": "eu-west-1",
> "s3.bucket.name": "",
> "flush.size": "15000",
> "rotate.interval.ms": "360",
> "storage.class": "io.confluent.connect.s3.storage.S3Storage",
> "format.class": 
> "custom.kafka.connect.s3.format.plaintext.PlaintextFormat",
> "schema.generator.class": 
> "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
> "partitioner.class": 
> "io.confluent.connect.storage.partitioner.DefaultPartitioner",
> "schema.compatibility": "NONE",
> "key.converter": "org.apache.kafka.connect.json.JsonConverter",
> "value.converter": "org.apache.kafka.connect.json.JsonConverter",
> "key.converter.schemas.enable": false,
> "value.converter.schemas.enable": false,
> "transforms": "ExtractDocument",
> 
> "transforms.ExtractDocument.type":"custom.kafka.connect.transforms.ExtractDocument$Value"
>   }
> }
> The connectors are created using curl: {{curl -X POST -H "Content-Type: 
> application/json" --data @ http:/:10083/connectors}}





[GitHub] [kafka] rajinisivaram commented on pull request #8705: KAFKA-10029; Don't update completedReceives when channels are closed to avoid ConcurrentModificationException

2020-05-21 Thread GitBox


rajinisivaram commented on pull request #8705:
URL: https://github.com/apache/kafka/pull/8705#issuecomment-632241769


   @chia7712 I was tempted to do that initially, but that is not the pattern we 
use for everything else in Selector and it has always been this way (for 
several years), so adding tests to make sure we don't break it made more sense.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on pull request #8708: MINOR: avoid unnecessary seq iteration in ApiVersion.lastVersion

2020-05-21 Thread GitBox


ijuma commented on pull request #8708:
URL: https://github.com/apache/kafka/pull/8708#issuecomment-632234746


   ok to test







[GitHub] [kafka] hachikuji merged pull request #8651: kafkatest: Deploy VerifiableClient in constructor to avoid test timeouts

2020-05-21 Thread GitBox


hachikuji merged pull request #8651:
URL: https://github.com/apache/kafka/pull/8651


   







[GitHub] [kafka] hachikuji commented on pull request #8651: kafkatest: Deploy VerifiableClient in constructor to avoid test timeouts

2020-05-21 Thread GitBox


hachikuji commented on pull request #8651:
URL: https://github.com/apache/kafka/pull/8651#issuecomment-632223688


   Link to tests: 
http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/2020-05-20--001.1589989080--edenhill--verifclt_deploy--227c13b64/report.html







[GitHub] [kafka] abbccdda commented on a change in pull request #8589: KAFKA-9146: KIP-571 Add option to force delete active members in StreamsResetter

2020-05-21 Thread GitBox


abbccdda commented on a change in pull request #8589:
URL: https://github.com/apache/kafka/pull/8589#discussion_r428745517



##
File path: 
clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java
##
@@ -3612,6 +3611,26 @@ private boolean dependsOnSpecificNode(ConfigResource 
resource) {
 || resource.type() == ConfigResource.Type.BROKER_LOGGER;
 }
 
+    private List<MemberIdentity> getMembersFromGroup(String groupId) {
+        Collection<MemberDescription> members;
+        try {
+            members = describeConsumerGroups(Collections.singleton(groupId)).describedGroups().get(groupId).get().members();
+        } catch (Throwable ex) {

Review comment:
   I think we should catch `Exception` here:
   
https://stackoverflow.com/questions/2274102/difference-between-using-throwable-and-exception-in-a-try-catch
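   The distinction can be demonstrated with a minimal, self-contained sketch (hypothetical class name, not the KafkaAdminClient code): `catch (Exception)` handles recoverable failures, while `catch (Throwable)` would also swallow JVM `Error`s such as `OutOfMemoryError` that should propagate.

```java
// Hedged sketch: why `catch (Exception)` is preferable to `catch (Throwable)`.
// Errors (e.g. OutOfMemoryError, AssertionError) should propagate rather than
// be swallowed by an over-broad handler.
public class CatchScopeDemo {
    // Returns "handled" for any Exception, but lets Errors propagate to the caller.
    public static String call(Runnable task) {
        try {
            task.run();
            return "ok";
        } catch (Exception e) {          // does NOT catch java.lang.Error
            return "handled";
        }
    }

    public static void main(String[] args) {
        System.out.println(call(() -> { }));                                    // normal path
        System.out.println(call(() -> { throw new IllegalStateException(); })); // Exception: handled
        try {
            call(() -> { throw new AssertionError("fatal"); });                 // an Error: not handled
        } catch (Error fatal) {
            System.out.println("error propagated");
        }
    }
}
```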
   

##
File path: 
clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java
##
@@ -3621,24 +3640,31 @@ public RemoveMembersFromConsumerGroupResult 
removeMembersFromConsumerGroup(Strin
     KafkaFutureImpl<Map<MemberIdentity, Errors>> future = new KafkaFutureImpl<>();
 
     ConsumerGroupOperationContext<Map<MemberIdentity, Errors>, RemoveMembersFromConsumerGroupOptions> context =
-        new ConsumerGroupOperationContext<>(groupId, options, deadline, future);
+            new ConsumerGroupOperationContext<>(groupId, options, deadline, future);

Review comment:
   Let's get back the original indentation.

##
File path: core/src/main/scala/kafka/tools/StreamsResetter.java
##
@@ -186,9 +190,15 @@ private void validateNoActiveConsumers(final String 
groupId,
         final List<MemberDescription> members =
             new ArrayList<>(describeResult.describedGroups().get(groupId).get().members());
         if (!members.isEmpty()) {
-            throw new IllegalStateException("Consumer group '" + groupId + "' is still active "
-                + "and has following members: " + members + ". "
-                + "Make sure to stop all running application instances before running the reset tool.");
+            if (options.has(forceOption)) {
+                System.out.println("Force deleting all active members in the group: " + groupId);
+                adminClient.removeMembersFromConsumerGroup(groupId, new RemoveMembersFromConsumerGroupOptions()).all();

Review comment:
   Should we check the member removal result here before proceeding? If that call failed, the whole operation should fail with an error message containing the result, IMHO.
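   The reviewer's suggestion, sketched with stand-in types (`CompletableFuture` in place of the `KafkaFuture` the admin client actually returns): block on the removal result and fail the whole reset if it completed exceptionally, instead of silently proceeding.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Hedged sketch (stand-in types): surface a failed "remove members" call.
// In the real tool the future would come from
// Admin#removeMembersFromConsumerGroup(...).all().
public class RemoveMembersCheckDemo {
    public static String reset(CompletableFuture<Void> removeAllResult) {
        try {
            removeAllResult.get();   // block until removal completes, or throw
        } catch (InterruptedException | ExecutionException e) {
            // Fail the whole operation, carrying the cause in the message.
            return "failed: " + e.getCause();
        }
        return "proceed";
    }

    // Helpers to build sample results for demonstration.
    public static CompletableFuture<Void> completed() {
        return CompletableFuture.completedFuture(null);
    }

    public static CompletableFuture<Void> failedFuture(Throwable t) {
        CompletableFuture<Void> f = new CompletableFuture<>();
        f.completeExceptionally(t);
        return f;
    }

    public static void main(String[] args) {
        System.out.println(reset(completed()));
        System.out.println(reset(failedFuture(new IllegalStateException("removal rejected"))));
    }
}
```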

##
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java
##
@@ -507,7 +544,7 @@ private Topology 
setupTopologyWithoutIntermediateUserTopic() {
 return builder.build();
 }
 
-private void cleanGlobal(final boolean withIntermediateTopics,
+private int tryCleanGlobal(final boolean withIntermediateTopics,

Review comment:
   We could add a meta comment for the return value here, and instead of returning an exit code, I feel a boolean would suffice to indicate whether the clean operation was successful or not.

##
File path: 
clients/src/test/java/org/apache/kafka/clients/admin/KafkaAdminClientTest.java
##
@@ -2411,6 +2411,50 @@ public void testRemoveMembersFromGroup() throws 
Exception {
 assertNull(noErrorResult.all().get());
 assertNull(noErrorResult.memberResult(memberOne).get());
 assertNull(noErrorResult.memberResult(memberTwo).get());
+
+// Return with success for "removeAll" scenario

Review comment:
   This test looks good, but it seems that we didn't test the case where some members are deleted successfully while others are not?

##
File path: 
clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java
##
@@ -3621,24 +3640,31 @@ public RemoveMembersFromConsumerGroupResult 
removeMembersFromConsumerGroup(Strin
     KafkaFutureImpl<Map<MemberIdentity, Errors>> future = new KafkaFutureImpl<>();
 
     ConsumerGroupOperationContext<Map<MemberIdentity, Errors>, RemoveMembersFromConsumerGroupOptions> context =
-        new ConsumerGroupOperationContext<>(groupId, options, deadline, future);
+            new ConsumerGroupOperationContext<>(groupId, options, deadline, future);
 
+    List<MemberIdentity> members;
+    if (options.removeAll()) {
+        members = getMembersFromGroup(groupId);
+    } else {
+        members = options.members().stream().map(MemberToRemove::toMemberIdentity).collect(Collectors.toList());
+    }
 Call findCoordinatorCall = getFindCoordinatorCall(context,
-() -> getRemoveMembersFromGroupCall(context));
+() -> getRemoveMembersFromGroupCall(context, members));
 runnable.call(findCoordinatorCall, startFindCoordinatorMs);
 
 return new RemoveMembersFromConsumerGroupResult(future, 
options.members());
 }
 
-private Call 

[GitHub] [kafka] chia7712 commented on pull request #8705: KAFKA-10029; Don't update completedReceives when channels are closed to avoid ConcurrentModificationException

2020-05-21 Thread GitBox


chia7712 commented on pull request #8705:
URL: https://github.com/apache/kafka/pull/8705#issuecomment-632196447


   How about making a copy of ```completedReceives``` before traversing it?
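   A minimal sketch (stand-in types, not `Selector` itself) of the failure mode and of the suggested copy: mutating a `HashMap` while `forEach`-ing it throws `ConcurrentModificationException`, whereas traversing a snapshot of its entries is safe. (The PR's actual fix is to stop mutating `completedReceives` during channel close; the copy is an alternative.)

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: concurrent-modification hazard when "closing a channel"
// removes a map entry in the middle of traversing the same map.
public class CompletedReceivesDemo {
    public static boolean directTraversalFails() {
        Map<String, String> receives = new HashMap<>();
        receives.put("ch-1", "r1");
        receives.put("ch-2", "r2");
        try {
            // Simulates closing a channel (removing its entry) mid-traversal.
            receives.forEach((channel, receive) -> receives.remove(channel));
            return false;
        } catch (ConcurrentModificationException expected) {
            return true;
        }
    }

    public static int copyTraversalProcesses() {
        Map<String, String> receives = new HashMap<>();
        receives.put("ch-1", "r1");
        receives.put("ch-2", "r2");
        int processed = 0;
        // Iterate a snapshot; removals from the backing map are now safe.
        for (Map.Entry<String, String> e : new ArrayList<>(receives.entrySet())) {
            receives.remove(e.getKey());
            processed++;
        }
        return processed;
    }
}
```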



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] rnpridgeon commented on a change in pull request #8691: KAFKA-9960: implement KIP-606 to add metadata context to MetricsReporter

2020-05-21 Thread GitBox


rnpridgeon commented on a change in pull request #8691:
URL: https://github.com/apache/kafka/pull/8691#discussion_r428760555



##
File path: 
clients/src/main/java/org/apache/kafka/common/metrics/KafkaMetricsContext.java
##
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.metrics;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * An implementation of MetricsContext that encapsulates the required metrics context properties for Kafka services and clients
+ */
+public class KafkaMetricsContext implements MetricsContext {
+/**
+ * Client or Service's metadata map.
+ */
+    private Map<String, String> metadata = new HashMap<>();
+
+/**
+ * Create a MetricsContext with namespace, no service or client properties
+ * @param namespace value for _namespace key
+ */
+public KafkaMetricsContext(String namespace) {
+this(namespace, new HashMap<>());
+}
+
+/**
+ * Create a MetricsContext with namespace, service or client properties
+ * @param namespace value for _namespace key
+ * @param metadata  metadata additional entries to add to the context.
+ *  values will be converted to string using 
Object.toString()
+ */
+    public KafkaMetricsContext(String namespace, Map<String, ?> metadata) {
+        this.metadata.put(MetricsContext.NAMESPACE, namespace);
+        metadata.forEach((key, value) -> this.metadata.put(key, value.toString()));

Review comment:
   Use `putIfAbsent` to avoid silently overwriting labels set upstream.
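   The difference the reviewer is pointing at, as a small self-contained sketch (simplified from the constructor above, with a hypothetical colliding label): `putIfAbsent` keeps the value that was set first, while `put` silently overwrites it.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: merging caller-supplied metadata into a context map that
// already holds a "_namespace" entry. put() clobbers it; putIfAbsent() keeps it.
public class PutIfAbsentDemo {
    public static String merge(boolean useIfAbsent) {
        Map<String, String> metadata = new HashMap<>();
        metadata.put("_namespace", "kafka.producer");   // set first, like the constructor
        Map<String, Object> incoming = new HashMap<>();
        incoming.put("_namespace", "something.else");   // hypothetical colliding label
        incoming.forEach((k, v) -> {
            if (useIfAbsent) {
                metadata.putIfAbsent(k, v.toString());  // existing entry wins
            } else {
                metadata.put(k, v.toString());          // incoming entry silently wins
            }
        });
        return metadata.get("_namespace");
    }
}
```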









[GitHub] [kafka] vvcephei commented on a change in pull request #8504: KAFKA-9298: reuse mapped stream error in joins

2020-05-21 Thread GitBox


vvcephei commented on a change in pull request #8504:
URL: https://github.com/apache/kafka/pull/8504#discussion_r428764142



##
File path: 
streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamKStreamJoinTest.java
##
@@ -77,6 +79,38 @@ public void 
shouldLogAndMeterOnSkippedRecordsWithNullValueWithBuiltInMetricsVers
 
shouldLogAndMeterOnSkippedRecordsWithNullValue(StreamsConfig.METRICS_LATEST);
 }
 
+
+    @Test
+    public void shouldReuseRepartitionTopicWithGeneratedName() {
+        final StreamsBuilder builder = new StreamsBuilder();
+        final Properties props = new Properties();
+        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.NO_OPTIMIZATION);
+        final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
+        newStream.join(stream2, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100))).to("out-one");
+        newStream.join(stream3, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100))).to("out-to");
+        assertEquals(expectedTopologyWithGeneratedRepartitionTopic, builder.build(props).describe().toString());
+    }
+
+    @Test
+    public void shouldCreateRepartitionTopicsWithUserProvidedName() {
+        final StreamsBuilder builder = new StreamsBuilder();
+        final Properties props = new Properties();
+        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.NO_OPTIMIZATION);
+        final KStream<String, String> stream1 = builder.stream("topic", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream2 = builder.stream("topic2", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> stream3 = builder.stream("topic3", Consumed.with(Serdes.String(), Serdes.String()));
+        final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
+        final StreamJoined<String, String, String> streamJoined = StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String());
+        newStream.join(stream2, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100)), streamJoined.withName("first-join")).to("out-one");
+        newStream.join(stream3, (value1, value2) -> value1 + value2, JoinWindows.of(ofMillis(100)), streamJoined.withName("second-join")).to("out-two");
+        final Topology topology = builder.build(props);
+        System.out.println(topology.describe().toString());
+        assertEquals(expectedTopologyWithUserNamedRepartitionTopics, topology.describe().toString());

Review comment:
   Thanks for the discussion, all.
   
   Coming back to this proposal, and considering the points you've raised, it 
seems like we should re-use the generated repartition node when the name is 
generated, and create two repartition nodes when they are named.
   
   The purpose of re-using the repartition node in this PR isn't exactly to 
optimize anything, just to avoid throwing the exception that happens when we 
currently try to create the exact same repartition node twice. We could instead 
_always_ create two nodes, but this is needlessly wasteful. Reusing the 
same-named node makes perfect sense.
   
   When the operations are named, on the other hand, there is no problem right 
now, since we are creating differently named nodes. Since there's no problem, 
we shouldn't "solve" it ;)
   
   It's true that this isn't the most optimal physical plan, but for anyone who 
cares enough to look into it, they can just add the repartition node first, as 
you suggested @mjsax; we don't need to throw an exception to force them to 
fine-tune their program.
   
   The other option is that they can enable topology optimization, which will 
also collapse the named repartition nodes in a well-defined way.
   
   Compatibility is a concern, and it seems like it's satisfied if we follow 
this path:
   1. You currently cannot reuse the same stream in two anonymous joins, so we 
can share the node without breaking any program
   2. You currently _can_ reuse the same stream in two _named_ joins, and we 
will create two (named) repartition topics. We have no choice but to maintain 
this, or we will break compatibility.
   3. Inserting a repartition node is well defined to break compatibility, so 
people will know they have to reset.
   4. Adding Optimization is well defined to break compatibility, so people 
will know they have to reset.
   
   Have I missed some consideration?
   Thanks,
   -John






[GitHub] [kafka] rajinisivaram merged pull request #8650: MINOR: Added unit tests for ConnectionQuotas

2020-05-21 Thread GitBox


rajinisivaram merged pull request #8650:
URL: https://github.com/apache/kafka/pull/8650


   







[GitHub] [kafka] rajinisivaram commented on pull request #8650: MINOR: Added unit tests for ConnectionQuotas

2020-05-21 Thread GitBox


rajinisivaram commented on pull request #8650:
URL: https://github.com/apache/kafka/pull/8650#issuecomment-632186914


   Test failures unrelated, merging to trunk.







[GitHub] [kafka] chia7712 commented on pull request #8707: Simplify KafkaConsumer constructor logic

2020-05-21 Thread GitBox


chia7712 commented on pull request #8707:
URL: https://github.com/apache/kafka/pull/8707#issuecomment-632182363


   It seems #8605 is a duplicate of this issue.







[GitHub] [kafka] rhauch merged pull request #8653: MINOR: Correct MirrorMaker2 integration test configs for Connect internal topics

2020-05-21 Thread GitBox


rhauch merged pull request #8653:
URL: https://github.com/apache/kafka/pull/8653


   







[GitHub] [kafka] chia7712 commented on pull request #8703: MINOR: add a getOrCreate function to KRPC collections

2020-05-21 Thread GitBox


chia7712 commented on pull request #8703:
URL: https://github.com/apache/kafka/pull/8703#issuecomment-632170085


   What is the purpose of this new method? Should we add ```getOrCreate``` to 
```ImplicitLinkedHashMultiCollection``` as well?
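   For readers of the archive, a plausible reading of such a method (a hypothetical sketch, not the generated KRPC collection code): look the element up by key and, if it is absent, construct it, insert it, and return it, so repeated calls for the same key yield the same element.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of getOrCreate semantics over a plain map; the real KRPC
// collections are keyed differently, but the contract is the same idea.
public class GetOrCreateDemo {
    private final Map<String, StringBuilder> elements = new HashMap<>();

    public StringBuilder getOrCreate(String key) {
        // Create and insert on first use; return the existing element afterwards.
        return elements.computeIfAbsent(key, k -> new StringBuilder());
    }

    public static boolean sameInstanceOnSecondCall() {
        GetOrCreateDemo c = new GetOrCreateDemo();
        return c.getOrCreate("topic-a") == c.getOrCreate("topic-a");
    }

    public static void main(String[] args) {
        GetOrCreateDemo c = new GetOrCreateDemo();
        c.getOrCreate("topic-a").append("x");
        System.out.println(c.getOrCreate("topic-a"));  // same element, not a new one
    }
}
```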







[GitHub] [kafka] cmccabe commented on pull request #8658: KAFKA-9980: Fix client quotas default entity name handling in broker.

2020-05-21 Thread GitBox


cmccabe commented on pull request #8658:
URL: https://github.com/apache/kafka/pull/8658#issuecomment-632144774


   ok to test







[jira] [Commented] (KAFKA-9980) Text encoding bug prevents correctly setting client quotas for default entities

2020-05-21 Thread Brian Byrne (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113260#comment-17113260
 ] 

Brian Byrne commented on KAFKA-9980:


Updated patch here: https://github.com/apache/kafka/pull/8658

> Text encoding bug prevents correctly setting client quotas for default 
> entities
> ---
>
> Key: KAFKA-9980
> URL: https://issues.apache.org/jira/browse/KAFKA-9980
> Project: Kafka
>  Issue Type: Bug
>Reporter: Cheng Tan
>Assignee: Brian Byrne
>Priority: Major
>
> quota_tests.py is failing. Specifically for this test:
> {quote}
>  [INFO:2020-05-11 19:22:47,493]: RunnerClient: Loading test \{'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/client', 'file_name': 'quota_test.py', 
> 'method_name': 'test_quota', 'cls_name': 'QuotaTest', 'injected_args': 
> {'quota_type': 'client-id', 'override_quota': False}}
> {quote}
>  
> I log into the docker container and do
>  
> {quote}
>  /opt/kafka-dev/bin/kafka-configs.sh --bootstrap-server ducker03:9093 
> --describe --entity-type clients --command-config 
> /opt/kafka-dev/bin/hi.properties
> {quote}
>  
>  and the command returns
>  
> {quote}Configs for the default client-id are consumer_byte_rate=200.0, 
> producer_byte_rate=250.0
>  Configs for client-id 'overridden_id' are consumer_byte_rate=1.0E9, 
> producer_byte_rate=1.0E9
>  Seems like the config is set properly but the quota is not effective
>   
> {quote}
>  For investigation, I added logging at 
> {quote}{{AdminZKClient.changeConfigs()}}
> {quote}
>  
>  
> {quote}def changeConfigs(entityType: String, entityName: String, configs: 
> Properties): Unit =
> {
>         warn(s"entityType = $entityType entityName = $entityName configs = 
> $configs") ...
> }
> {quote}
> And use --bootstrap-server and --zookeeper to --alter the default client 
> quota. I got
>  
> {quote}
>  Alter with --zookeeper:WARN entityType = clients entityName =  
> configs = \{producer_byte_rate=10, consumer_byte_rate=10} 
> (kafka.zk.AdminZkClient)
> {quote}
>  
>  and
>  
> {quote}
>  Alter with --bootstrap-server:WARN entityType = clients entityName = 
> %3Cdefault%3E configs = \{producer_byte_rate=10, 
> consumer_byte_rate=10} (kafka.zk.AdminZkClient)
> {quote}
>  
>  I guess the encoding difference might cause the issue. The encoding happens 
> in
>  
> {quote}
>  Sanitizer.sanitize()
> {quote}
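> The two WARN lines above differ only in encoding: {{%3Cdefault%3E}} is the URL-encoded form of {{}}. A small sketch (a reasonable approximation of what a URL-encoding sanitizer does, not the actual Sanitizer source) reproduces the logged value:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Hedged sketch: URL-encoding "<default>" yields the entity name seen in the
// --bootstrap-server log line, illustrating how the two code paths can
// disagree on whether the stored name is encoded.
public class SanitizeDemo {
    public static String sanitize(String name) {
        try {
            return URLEncoder.encode(name, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(sanitize("<default>")); // %3Cdefault%3E
    }
}
```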





[jira] [Assigned] (KAFKA-9980) Text encoding bug prevents correctly setting client quotas for default entities

2020-05-21 Thread Brian Byrne (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Byrne reassigned KAFKA-9980:
--

Assignee: Brian Byrne  (was: Cheng Tan)






[GitHub] [kafka] lbradstreet removed a comment on pull request #8708: MINOR: avoid unnecessary list iteration in ApiVersion.lastVersion

2020-05-21 Thread GitBox


lbradstreet removed a comment on pull request #8708:
URL: https://github.com/apache/kafka/pull/8708#issuecomment-632110111


   Temporarily closing this. I'm surprised that we actually see latestVersion 
being called here.







[jira] [Commented] (KAFKA-10029) Selector.completedReceives should not be modified when channel is closed

2020-05-21 Thread Rajini Sivaram (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113226#comment-17113226
 ] 

Rajini Sivaram commented on KAFKA-10029:


[~ijuma] It was a regression in 2.5.0 introduced by KAFKA-7639.

> Selector.completedReceives should not be modified when channel is closed
> 
>
> Key: KAFKA-10029
> URL: https://issues.apache.org/jira/browse/KAFKA-10029
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 2.5.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.6.0, 2.5.1
>
>
> Selector.completedReceives are processed using `forEach` by SocketServer and 
> NetworkClient when processing receives from a poll. Since we may close 
> channels while processing receives, changes to the map while closing channels 
> can result in ConcurrentModificationException. We clear the entire map after 
> each poll anyway, so we don't need to remove channel from the map while 
> closing channels.





[GitHub] [kafka] lbradstreet commented on pull request #8708: MINOR: avoid unnecessary list iteration in ApiVersion.lastVersion

2020-05-21 Thread GitBox


lbradstreet commented on pull request #8708:
URL: https://github.com/apache/kafka/pull/8708#issuecomment-632110111


   Temporarily closing this. I'm surprised that we actually see latestVersion 
being called here.







[GitHub] [kafka] lbradstreet closed pull request #8708: MINOR: avoid unnecessary list iteration in ApiVersion.lastVersion

2020-05-21 Thread GitBox


lbradstreet closed pull request #8708:
URL: https://github.com/apache/kafka/pull/8708


   







[GitHub] [kafka] lbradstreet opened a new pull request #8708: MINOR: avoid unnecessary list iteration in ApiVersion.lastVersion

2020-05-21 Thread GitBox


lbradstreet opened a new pull request #8708:
URL: https://github.com/apache/kafka/pull/8708


   We iterate the versions list each time we look up lastVersion, including in the hot path Log.appendAsFollower. Given that allVersions is a constant, this is unnecessary. See the profile below from a relatively low replica count cluster. Higher replica count clusters with smaller batches may see a slightly larger effect.
   
   
![image](https://user-images.githubusercontent.com/252189/82567449-3d757780-9b32-11ea-9a04-9dc448c0ae3a.png)
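   The fix described above can be sketched as follows (a simplified model, not the actual Scala `ApiVersion` code): compute the latest version once at class-load time so every subsequent lookup is a constant field read instead of a list scan.

```java
import java.util.Arrays;
import java.util.List;

// Hedged sketch: hoisting a per-call scan of a constant list into a
// precomputed static field, the shape of the optimization in this PR.
public class LatestVersionDemo {
    private static final List<Integer> ALL_VERSIONS = Arrays.asList(0, 1, 2, 3, 4);

    // Hot-path-unfriendly: scans the whole list on every call.
    public static int latestVersionSlow() {
        return ALL_VERSIONS.stream().max(Integer::compare).get();
    }

    // Computed once at class initialization; calls are just a field read.
    private static final int LATEST = latestVersionSlow();

    public static int latestVersion() {
        return LATEST;
    }

    public static void main(String[] args) {
        System.out.println(latestVersion()); // 4
    }
}
```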
   







[GitHub] [kafka] nizhikov commented on pull request #8695: KAFKA-9320: KIP-573 - Enable TLSv1.3 by default

2020-05-21 Thread GitBox


nizhikov commented on pull request #8695:
URL: https://github.com/apache/kafka/pull/8695#issuecomment-632087277


   @ijuma 
[KIP-573](https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default)
 updated.
   Actually, I started one 
([link](https://mail-archives.apache.org/mod_mbox/kafka-dev/202003.mbox/%3CC741B223-739D-4FDE-B8F9-63DD4ACC433F%40gmail.com%3E)),
 but didn't get any votes :)
   Should I start another?
   
   







[GitHub] [kafka] ijuma commented on pull request #8695: KAFKA-9320: KIP-573 - Enable TLSv1.3 by default

2020-05-21 Thread GitBox


ijuma commented on pull request #8695:
URL: https://github.com/apache/kafka/pull/8695#issuecomment-632073022


   @nizhikov Thanks. Can you update the KIP and start the voting on it?







[jira] [Commented] (KAFKA-10029) Selector.completedReceives should not be modified when channel is closed

2020-05-21 Thread Ismael Juma (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113165#comment-17113165
 ] 

Ismael Juma commented on KAFKA-10029:
-

Good catch. Is this a recent regression?

> Selector.completedReceives should not be modified when channel is closed
> 
>
> Key: KAFKA-10029
> URL: https://issues.apache.org/jira/browse/KAFKA-10029
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 2.5.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.6.0, 2.5.1
>
>
> Selector.completedReceives are processed using `forEach` by SocketServer and 
> NetworkClient when processing receives from a poll. Since we may close 
> channels while processing receives, changes to the map while closing channels 
> can result in ConcurrentModificationException. We clear the entire map after 
> each poll anyway, so we don't need to remove channel from the map while 
> closing channels.





[GitHub] [kafka] nresare opened a new pull request #8707: Simplify KafkaConsumer constructor logic

2020-05-21 Thread GitBox


nresare opened a new pull request #8707:
URL: https://github.com/apache/kafka/pull/8707


   This mirrors how the KafkaProducer constructors are organised and reduces
   code duplication.
   
   More specifically this PR makes the following changes without changing any 
behaviour or public API:
   
   1. Move the helper method `propsToMap()` from `KafkaProducer` to `Utils` 
with the other similar helpers `mkMap()` and `mkProperties()` (to make it available to 
KafkaConsumer).
   2. Move the `ConsumerConfig` instantiation to the constructor that actually 
does the work.
   3. Remove the `addDeserializerToConfig()` overload that operates on `Properties` and 
its two tests, as all of those are very similar to the ones that operate on 
`Map` instances.
   4. Have the `KafkaConsumer` constructors that accept `Properties` parameter 
convert those early to `Map` instances.
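   A sketch of the `propsToMap()` helper described in step 1 (an approximation: the real helper throws Kafka's ConfigException on a non-String key; IllegalArgumentException stands in here to keep the sketch dependency-free):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hedged sketch: copy a Properties object into a Map<String, Object>,
// rejecting non-String keys, so both client constructors can share one
// Map-based code path.
public class PropsToMapDemo {
    public static Map<String, Object> propsToMap(Properties properties) {
        Map<String, Object> map = new HashMap<>(properties.size());
        for (Map.Entry<Object, Object> entry : properties.entrySet()) {
            if (entry.getKey() instanceof String) {
                map.put((String) entry.getKey(), entry.getValue());
            } else {
                throw new IllegalArgumentException("Key must be a String: " + entry.getKey());
            }
        }
        return map;
    }

    // Small fixture used by the demo and assertions below.
    public static Map<String, Object> sample() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        return propsToMap(props);
    }

    public static void main(String[] args) {
        System.out.println(sample());
    }
}
```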
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   







  1   2   >