[jira] [Commented] (KAFKA-8729) Augment ProduceResponse error messaging for specific culprit records

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941477#comment-16941477
 ] 

ASF GitHub Bot commented on KAFKA-8729:
---

guozhangwang commented on pull request #7150: KAFKA-8729, pt 2: Add 
error_records and error_message to PartitionResponse
URL: https://github.com/apache/kafka/pull/7150
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Augment ProduceResponse error messaging for specific culprit records
> 
>
> Key: KAFKA-8729
> URL: https://issues.apache.org/jira/browse/KAFKA-8729
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Reporter: Guozhang Wang
>Assignee: Tu Tran
>Priority: Major
>
> 1. We should replace the misleading CORRUPT_RECORD error code with a new 
> INVALID_RECORD.
> 2. We should augment the ProduceResponse with customizable error message and 
> indicators of culprit records.
> 3. We should change the client-side handling logic of non-retriable 
> INVALID_RECORD to re-batch the records.
> For details, see: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-467%3A+Augment+ProduceResponse+error+messaging+for+specific+culprit+records
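The response shape and client behavior described in the three points above can be sketched as follows. This is a hypothetical illustration of the KIP-467 idea (a partition-level error message plus per-record culprit indicators, with the client re-batching around them); the class and field names are illustrative, not Kafka's actual wire format or client code.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RecordErrorSketch {

    // One culprit record, identified by its index in the produced batch.
    public static class RecordError {
        public final int batchIndex;
        public final String message;
        public RecordError(int batchIndex, String message) {
            this.batchIndex = batchIndex;
            this.message = message;
        }
    }

    // Partition-level response carrying a customizable message plus culprit indicators.
    public static class PartitionResponse {
        public final String errorCode;            // e.g. "INVALID_RECORD"
        public final String errorMessage;         // customizable top-level message
        public final List<RecordError> recordErrors;
        public PartitionResponse(String errorCode, String errorMessage,
                                 List<RecordError> recordErrors) {
            this.errorCode = errorCode;
            this.errorMessage = errorMessage;
            this.recordErrors = recordErrors;
        }
    }

    // Client-side handling of the non-retriable INVALID_RECORD error:
    // strip the culprit records and return the remainder for a new batch.
    public static <T> List<T> rebatchWithoutCulprits(List<T> batch, PartitionResponse resp) {
        if (!"INVALID_RECORD".equals(resp.errorCode)) {
            return batch;  // other errors are handled elsewhere (retry, fail, ...)
        }
        Set<Integer> culprits = new HashSet<>();
        for (RecordError e : resp.recordErrors) {
            culprits.add(e.batchIndex);
        }
        List<T> retry = new ArrayList<>();
        for (int i = 0; i < batch.size(); i++) {
            if (!culprits.contains(i)) {
                retry.add(batch.get(i));
            }
        }
        return retry;
    }
}
```

A batch of three records with record 1 flagged would be re-batched as the remaining two records.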



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8927) Remove config `partition.grouper` and interface `PartitionGrouper`

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941443#comment-16941443
 ] 

ASF GitHub Bot commented on KAFKA-8927:
---

guozhangwang commented on pull request #7376: KAFKA-8927: Deprecate 
PartitionGrouper interface
URL: https://github.com/apache/kafka/pull/7376
 
 
   
 



> Remove config `partition.grouper` and interface `PartitionGrouper`
> --
>
> Key: KAFKA-8927
> URL: https://issues.apache.org/jira/browse/KAFKA-8927
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
>  Labels: kip
>
> The `PartitionGrouper` interface was originally exposed to give users a 
> higher degree of flexibility in the partition-to-task mapping. However, the 
> Kafka Streams runtime imposes many undocumented restrictions on what a 
> correct `PartitionGrouper` may do, and hence it is easy for users to break 
> the runtime that way.
> In practice, we have not seen the interface used. Hence, we should consider 
> deprecating and eventually removing it.
> KIP-528: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-528%3A+Deprecate+PartitionGrouper+configuration+and+interface]
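To make the partition-to-task mapping concrete, here is a hedged sketch of what the default grouping does: partitions with the same partition id across co-partitioned topics land in the same task. This is illustrative only; it is not Kafka's actual `PartitionGrouper` API.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

public class GrouperSketch {

    // topics: topic name -> partition count.
    // Returns: task id (== partition id here) -> "topic-partition" members.
    public static Map<Integer, Set<String>> groupByPartitionId(Map<String, Integer> topics) {
        Map<Integer, Set<String>> tasks = new TreeMap<>();
        for (Map.Entry<String, Integer> e : topics.entrySet()) {
            for (int p = 0; p < e.getValue(); p++) {
                tasks.computeIfAbsent(p, k -> new TreeSet<>())
                     .add(e.getKey() + "-" + p);
            }
        }
        return tasks;
    }
}
```

A custom grouper that deviates from this shape (for example, by putting partitions of joined topics into different tasks) is exactly the kind of thing the runtime's undocumented restrictions can break.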





[jira] [Commented] (KAFKA-8700) Flaky Test QueryableStateIntegrationTest#queryOnRebalance

2019-09-30 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941440#comment-16941440
 ] 

Guozhang Wang commented on KAFKA-8700:
--

Thanks for your interest in this and KAFKA-8059! I left you some comments about 
how to contribute and conquer the flakiness.

> Flaky Test QueryableStateIntegrationTest#queryOnRebalance
> -
>
> Key: KAFKA-8700
> URL: https://issues.apache.org/jira/browse/KAFKA-8700
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.4.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3807/tests]
> {quote}java.lang.AssertionError: Condition not met within timeout 12. 
> waiting for metadata, store and value to be non null
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:353)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:292)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:382){quote}
>  
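The assertion in the quoted stack trace comes from a poll-until-true helper giving up at a deadline. A minimal sketch of how such a helper works (hypothetical; Kafka's real `TestUtils.waitForCondition` differs in details):

```java
import java.util.function.BooleanSupplier;

public class WaitSketch {

    // Polls cond every pollMs until it returns true or timeoutMs elapses.
    public static boolean waitForCondition(BooleanSupplier cond, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMs);
        }
        // One last check at the deadline: a condition that became true during
        // the final sleep should still count as met.
        return cond.getAsBoolean();
    }
}
```

Flakiness of this shape usually means the condition (here, metadata, store, and value becoming non-null after a rebalance) sometimes takes longer than the configured timeout, rather than never becoming true.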





[jira] [Commented] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota

2019-09-30 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941438#comment-16941438
 ] 

Guozhang Wang commented on KAFKA-8059:
--

[~xujianhai] that would be great. You can find instructions on how to file a PR 
in the "How to Contribute" guide: 
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes

If you are using an IDE, you can first try to reproduce the issue locally with 
repeated runs, and then follow the logs to see if it is indeed a flaky test 
and how to fix it.

> Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
> -
>
> Key: KAFKA-8059
> URL: https://issues.apache.org/jira/browse/KAFKA-8059
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests]
> {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
> java.io.IOException to be thrown, but no exception was thrown
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
> at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
> at org.scalatest.Assertions$class.intercept(Assertions.scala:822)
> at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote}





[jira] [Updated] (KAFKA-8609) Add consumer metrics for rebalances (part 9)

2019-09-30 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-8609:
-
Description: 
We would like to track some additional metrics on the consumer side related to 
rebalancing as part of this KIP, including
 # listener callback latency
 ## partitions-revoked-time-avg
 ## partitions-revoked-time-max
 ## partitions-assigned-time-avg
 ## partitions-assigned-time-max
 ## partitions-lost-time-avg
 ## partitions-lost-time-max
 # rebalance events
 ## rebalance-rate-per-day
 ## rebalance-total
 ## rebalance-latency-avg
 ## rebalance-latency-max
 ## rebalance-latency-total
 ## failed-rebalance-rate-per-hour
 ## failed-rebalance-total
 # last-rebalance-seconds-ago

  was:
We would like to track some additional metrics on the consumer side related to 
rebalancing as part of this KIP, including
 # listener callback latency
 ## partitions-revoked-time-avg
 ## partitions-revoked-time-max
 ## partitions-assigned-time-avg
 ## partitions-assigned-time-max
 ## partitions-lost-time-avg
 ## partitions-lost-time-max
 # rebalance rate (# rebalances per day)
 ## rebalance-rate-per-day


> Add consumer metrics for rebalances (part 9)
> 
>
> Key: KAFKA-8609
> URL: https://issues.apache.org/jira/browse/KAFKA-8609
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major
>
> We would like to track some additional metrics on the consumer side related 
> to rebalancing as part of this KIP, including
>  # listener callback latency
>  ## partitions-revoked-time-avg
>  ## partitions-revoked-time-max
>  ## partitions-assigned-time-avg
>  ## partitions-assigned-time-max
>  ## partitions-lost-time-avg
>  ## partitions-lost-time-max
>  # rebalance events
>  ## rebalance-rate-per-day
>  ## rebalance-total
>  ## rebalance-latency-avg
>  ## rebalance-latency-max
>  ## rebalance-latency-total
>  ## failed-rebalance-rate-per-hour
>  ## failed-rebalance-total
>  # last-rebalance-seconds-ago
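As a hedged illustration of what the avg/max/total latency metrics in the list above compute, here is the arithmetic in plain Java rather than Kafka's actual `Metrics`/`Sensor` machinery (class and method names are illustrative):

```java
public class RebalanceMetricsSketch {
    private long count = 0;
    private long totalMs = 0;
    private long maxMs = 0;

    // Called once per completed rebalance with its observed latency.
    public void record(long latencyMs) {
        count++;
        totalMs += latencyMs;
        maxMs = Math.max(maxMs, latencyMs);
    }

    public double latencyAvg()    { return count == 0 ? Double.NaN : (double) totalMs / count; }
    public long   latencyMax()    { return maxMs; }
    public long   latencyTotal()  { return totalMs; }
    public long   rebalanceTotal() { return count; }
}
```

Rate metrics such as rebalance-rate-per-day would additionally divide the event count by the elapsed measurement window.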





[jira] [Updated] (KAFKA-8609) Add consumer metrics for rebalances (part 9)

2019-09-30 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-8609:
-
Fix Version/s: 2.4.0

> Add consumer metrics for rebalances (part 9)
> 
>
> Key: KAFKA-8609
> URL: https://issues.apache.org/jira/browse/KAFKA-8609
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.4.0
>





[jira] [Resolved] (KAFKA-8609) Add consumer metrics for rebalances (part 9)

2019-09-30 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8609.
--
Resolution: Fixed

> Add consumer metrics for rebalances (part 9)
> 
>
> Key: KAFKA-8609
> URL: https://issues.apache.org/jira/browse/KAFKA-8609
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major





[jira] [Commented] (KAFKA-8609) Add consumer metrics for rebalances (part 9)

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941431#comment-16941431
 ] 

ASF GitHub Bot commented on KAFKA-8609:
---

guozhangwang commented on pull request #7401: KAFKA-8609: add 
rebalance-total-time
URL: https://github.com/apache/kafka/pull/7401
 
 
   
 



> Add consumer metrics for rebalances (part 9)
> 
>
> Key: KAFKA-8609
> URL: https://issues.apache.org/jira/browse/KAFKA-8609
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major





[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941432#comment-16941432
 ] 

ASF GitHub Bot commented on KAFKA-8649:
---

ableegoldman commented on pull request #7427: KAFKA-8649: send latest commonly 
supported version in assignment
URL: https://github.com/apache/kafka/pull/7427
 
 
   [PR 7423](https://github.com/apache/kafka/pull/7423) but targeted at 2.1
 



> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Assignee: Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> While doing a rolling update of a cluster of nodes running Kafka Streams 
> application, the stream threads in the nodes running the old version of the 
> library (2.0.0), fail with the following error: 
> {code:java}
> [ERROR] [application-existing-StreamThread-336] 
> [o.a.k.s.p.internals.StreamThread] - stream-thread 
> [application-existing-StreamThread-336] Encountered the following error 
> during processing:
> java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
> #011at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.(SubscriptionInfo.java:67)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
> {code}
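The `IllegalArgumentException` in the trace above is a hard bounds check on the subscription version, which old members trip over during a rolling upgrade when a newer leader hands them a version they do not know. The fix direction in the PR titles ("send latest commonly supported version in assignment") amounts to taking the minimum over all members' supported versions. A hedged sketch with illustrative names, not Kafka's actual code:

```java
import java.util.Collection;

public class VersionNegotiationSketch {
    static final int LATEST_SUPPORTED_VERSION = 4;

    // The old-style check that a 2.0.0 member trips over when a newer
    // leader sends it version 4:
    public static void requireInRange(int version, int min, int max) {
        if (version < min || version > max) {
            throw new IllegalArgumentException(
                "version must be between " + min + " and " + max + "; was: " + version);
        }
    }

    // Fix direction: the leader picks the newest version every member
    // supports, i.e. the minimum of the members' maxima.
    public static int commonlySupportedVersion(Collection<Integer> memberVersions) {
        int v = LATEST_SUPPORTED_VERSION;
        for (int m : memberVersions) {
            v = Math.min(v, m);
        }
        return v;
    }
}
```

With a mixed group of members supporting up to versions 3, 4, and 4, the leader would assign version 3, which the 2.0.0 members can decode.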





[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941430#comment-16941430
 ] 

ASF GitHub Bot commented on KAFKA-8649:
---

ableegoldman commented on pull request #7426: KAFKA-8649: send latest commonly 
supported version in assignment
URL: https://github.com/apache/kafka/pull/7426
 
 
   [PR 7423](https://github.com/apache/kafka/pull/7423) but targeted at 2.2
 



> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Assignee: Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1





[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941428#comment-16941428
 ] 

ASF GitHub Bot commented on KAFKA-8649:
---

ableegoldman commented on pull request #7425: KAFKA-8649: send latest commonly 
supported version in assignment
URL: https://github.com/apache/kafka/pull/7425
 
 
   [PR 7423](https://github.com/apache/kafka/pull/7423) but targeted at 2.3
 



> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Assignee: Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1





[jira] [Updated] (KAFKA-8911) Implicit TimeWindowedSerde creates Serde with null inner serializer

2019-09-30 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8911:
---
Fix Version/s: 2.4.0

> Implicit TimeWindowedSerde creates Serde with null inner serializer
> ---
>
> Key: KAFKA-8911
> URL: https://issues.apache.org/jira/browse/KAFKA-8911
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Michał
>Assignee: Michał
>Priority: Major
> Fix For: 2.4.0
>
>
> {{Serdes.scala}} contains an implicit def timeWindowedSerde as below:
>  
> {code:java}
> implicit def timeWindowedSerde[T]: WindowedSerdes.TimeWindowedSerde[T] = new 
> WindowedSerdes.TimeWindowedSerde[T]()
> {code}
> It creates a new {{TimeWindowedSerde}} without inner serializer, which is a 
> bug. Even in {{WindowedSerdes.java}} it says that empty constructor is for 
> reflection.
> {code:java}
> // Default constructor needed for reflection object creation
> public TimeWindowedSerde() {
> super(new TimeWindowedSerializer<>(), new TimeWindowedDeserializer<>());
> }
> public TimeWindowedSerde(final Serde inner) {
>  super(new TimeWindowedSerializer<>(inner.serializer()), new 
> TimeWindowedDeserializer<>(inner.deserializer()));
> }
> {code}
> All of the above fails for me when I try to implicitly access the right Serde:
> {code:java}
> private val twSerde = implicitly[TimeWindowedSerde[String]]
> {code}
> and I have to create the object properly on my own
> {code}
>   private val twSerde = new 
> WindowedSerdes.TimeWindowedSerde[String](implicitly[Serde[String]])
> {code}
> it could be fixed with a proper call in {{Serdes.scala}}
> {code}
>   implicit def timeWindowedSerde[T](implicit tSerde: Serde[T]): 
> WindowedSerdes.TimeWindowedSerde[T] =
> new WindowedSerdes.TimeWindowedSerde[T](tSerde)
> {code}
> But maybe also the scope of the default constructor for {{TimeWindowedSerde}} 
> should be changed?
> BR, Michał
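The failure mode described above can be rendered in plain Java with toy local types (these are not Kafka's actual `WindowedSerdes` classes): a wrapper serde built via its no-arg constructor carries a null inner serializer and fails on first use, while the constructor that takes the inner serde works.

```java
import java.util.function.Function;

public class WindowedSerdeSketch {

    // Toy stand-in for a serde: just a serializer function.
    public static class Serde<T> {
        public final Function<T, String> serializer;
        public Serde(Function<T, String> serializer) { this.serializer = serializer; }
    }

    public static class WindowedSerde<T> {
        private final Serde<T> inner;

        // Mirrors the "default constructor needed for reflection":
        // the inner serde stays null.
        public WindowedSerde() { this.inner = null; }

        public WindowedSerde(Serde<T> inner) { this.inner = inner; }

        public String serialize(T value) {
            // NullPointerException here if built with the no-arg constructor.
            return "[window]" + inner.serializer.apply(value);
        }
    }
}
```

This is why the proposed fix threads the implicit inner `Serde[T]` into the constructor instead of calling the no-arg one.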





[jira] [Commented] (KAFKA-8911) Implicit TimeWindowedSerde creates Serde with null inner serializer

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941391#comment-16941391
 ] 

ASF GitHub Bot commented on KAFKA-8911:
---

bbejeck commented on pull request #7352: KAFKA-8911: Using proper WindowSerdes 
constructors in their implicit definitions
URL: https://github.com/apache/kafka/pull/7352
 
 
   
 



> Implicit TimeWindowedSerde creates Serde with null inner serializer
> ---
>
> Key: KAFKA-8911
> URL: https://issues.apache.org/jira/browse/KAFKA-8911
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Michał
>Assignee: Michał
>Priority: Major





[jira] [Commented] (KAFKA-8264) Flaky Test PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition

2019-09-30 Thread Gwen Shapira (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941352#comment-16941352
 ] 

Gwen Shapira commented on KAFKA-8264:
-

 https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25318/

> Flaky Test PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition
> --
>
> Key: KAFKA-8264
> URL: https://issues.apache.org/jira/browse/KAFKA-8264
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.0.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/252/tests]
> {quote}org.apache.kafka.common.errors.TopicExistsException: Topic 'topic3' 
> already exists.{quote}
> STDOUT
>  
> {quote}[2019-04-19 03:54:20,080] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,080] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,312] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,313] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,994] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:21,727] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:28,696] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:28,699] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,246] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,247] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,287] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:33,408] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:33,408] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 

[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941302#comment-16941302
 ] 

ASF GitHub Bot commented on KAFKA-8649:
---

ableegoldman commented on pull request #7423: KAFKA-8649: send latest commonly 
supported version in assignment
URL: https://github.com/apache/kafka/pull/7423
 
 
   ...instead of sending the leader's version and having older members try to 
blindly upgrade.
   
   The only other real change here is that we will also set the 
`VERSION_PROBING` error code and return early from `onAssignment` when we are 
_upgrading_ our used subscription version (not just downgrading it), since this 
implies the whole group has finished the rolling upgrade and all members should 
rejoin with the new subscription version.
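
The "latest commonly supported version" idea from the PR description above can
be sketched as follows. This is an illustrative stand-alone sketch, not Kafka's
actual assignor code; the class and method names are hypothetical.

```java
import java.util.Arrays;

public class VersionProbingSketch {
    // The leader's own latest supported subscription version (assumed value).
    static final int LATEST_SUPPORTED_VERSION = 4;

    // The leader picks the highest version every member can decode: the
    // minimum of its own latest and the smallest version observed in the group.
    static int commonlySupportedVersion(int[] memberVersions) {
        int min = Arrays.stream(memberVersions).min().orElse(LATEST_SUPPORTED_VERSION);
        return Math.min(LATEST_SUPPORTED_VERSION, min);
    }

    public static void main(String[] args) {
        // Mid-rolling-upgrade: one member still on version 2, so stay at 2.
        System.out.println(commonlySupportedVersion(new int[]{4, 2, 4})); // 2
        // Upgrade complete: everyone on 4, so the group can move up.
        System.out.println(commonlySupportedVersion(new int[]{4, 4, 4})); // 4
    }
}
```

Once the group-wide minimum rises (all members rejoined with the new version),
the leader's computed common version rises with it, which is when members can
safely switch their used subscription version.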
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Assignee: Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> While doing a rolling update of a cluster of nodes running Kafka Streams 
> application, the stream threads in the nodes running the old version of the 
> library (2.0.0), fail with the following error: 
> {code:java}
> [ERROR] [application-existing-StreamThread-336] 
> [o.a.k.s.p.internals.StreamThread] - stream-thread 
> [application-existing-StreamThread-336] Encountered the following error 
> during processing:
> java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
> #011at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.&lt;init&gt;(SubscriptionInfo.java:67)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8785) Flakey test LeaderElectionCommandTest#testPathToJsonFile

2019-09-30 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941288#comment-16941288
 ] 

Bruno Cadonna commented on KAFKA-8785:
--

https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/8088/

> Flakey test LeaderElectionCommandTest#testPathToJsonFile
> 
>
> Key: KAFKA-8785
> URL: https://issues.apache.org/jira/browse/KAFKA-8785
> Project: Kafka
>  Issue Type: Bug
>Reporter: Boyang Chen
>Priority: Major
>
> *https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/895/console*
> *2:35* kafka.admin.LeaderElectionCommandTest > testPathToJsonFile 
> STARTED*13:23:16* kafka.admin.LeaderElectionCommandTest.testPathToJsonFile 
> failed, log available in 
> /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk11-scala2.13/core/build/reports/testOutput/kafka.admin.LeaderElectionCommandTest.testPathToJsonFile.test.stdout*13:23:16*
>  *13:23:16* kafka.admin.LeaderElectionCommandTest > testPathToJsonFile 
> FAILED*13:23:16* kafka.common.AdminCommandFailedException: Timeout 
> waiting for election results*13:23:16* at 
> kafka.admin.LeaderElectionCommand$.electLeaders(LeaderElectionCommand.scala:133)*13:23:16*
>  at 
> kafka.admin.LeaderElectionCommand$.run(LeaderElectionCommand.scala:88)*13:23:16*
>  at 
> kafka.admin.LeaderElectionCommand$.main(LeaderElectionCommand.scala:41)*13:23:16*
>  at 
> kafka.admin.LeaderElectionCommandTest.$anonfun$testPathToJsonFile$1(LeaderElectionCommandTest.scala:160)*13:23:16*
>  at 
> kafka.admin.LeaderElectionCommandTest.$anonfun$testPathToJsonFile$1$adapted(LeaderElectionCommandTest.scala:137)*13:23:16*
>  at kafka.utils.TestUtils$.resource(TestUtils.scala:1591)*13:23:16*   
>   at 
> kafka.admin.LeaderElectionCommandTest.testPathToJsonFile(LeaderElectionCommandTest.scala:137)*13:23:16*
>  *13:23:16* Caused by:*13:23:16* 
> org.apache.kafka.common.errors.TimeoutException: Aborted due to 
> timeout.*13:23:16*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8800) Flaky Test SaslScramSslEndToEndAuthorizationTest#testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl

2019-09-30 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941286#comment-16941286
 ] 

Bruno Cadonna commented on KAFKA-8800:
--

https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/8096/

> Flaky Test 
> SaslScramSslEndToEndAuthorizationTest#testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl
> --
>
> Key: KAFKA-8800
> URL: https://issues.apache.org/jira/browse/KAFKA-8800
> Project: Kafka
>  Issue Type: Bug
>  Components: core, security, unit tests
>Affects Versions: 2.4.0
>Reporter: Matthias J. Sax
>Assignee: Anastasia Vela
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/6956/testReport/junit/kafka.api/SaslScramSslEndToEndAuthorizationTest/testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl/]
> {quote}org.scalatest.exceptions.TestFailedException: Consumed 0 records 
> before timeout instead of the expected 1 records at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530) at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529) 
> at 
> org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389) 
> at org.scalatest.Assertions.fail(Assertions.scala:1091) at 
> org.scalatest.Assertions.fail$(Assertions.scala:1087) at 
> org.scalatest.Assertions$.fail(Assertions.scala:1389) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:822) at 
> kafka.utils.TestUtils$.pollRecordsUntilTrue(TestUtils.scala:781) at 
> kafka.utils.TestUtils$.pollUntilAtLeastNumRecords(TestUtils.scala:1312) at 
> kafka.utils.TestUtils$.consumeRecords(TestUtils.scala:1320) at 
> kafka.api.EndToEndAuthorizationTest.consumeRecords(EndToEndAuthorizationTest.scala:522)
>  at 
> kafka.api.EndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl(EndToEndAuthorizationTest.scala:361){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8963) Benchmark and optimize incremental fetch session handler

2019-09-30 Thread Lucas Bradstreet (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucas Bradstreet updated KAFKA-8963:

Description: The FetchSessionHandler is a cause of high CPU usage in the 
replica fetcher for brokers with high partition counts. A jmh benchmark should 
be added and the incremental fetch session handling should be measured and 
optimized.  (was: The FetchSessionHandler is a cause of high CPU usage in the 
replica fetcher for brokers with high partition counts. We should add a jmh 
benchmark and optimize the incremental fetch session building.)

> Benchmark and optimize incremental fetch session handler
> 
>
> Key: KAFKA-8963
> URL: https://issues.apache.org/jira/browse/KAFKA-8963
> Project: Kafka
>  Issue Type: Task
>Reporter: Lucas Bradstreet
>Priority: Major
>
> The FetchSessionHandler is a cause of high CPU usage in the replica fetcher 
> for brokers with high partition counts. A jmh benchmark should be added and 
> the incremental fetch session handling should be measured and optimized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8963) Benchmark and optimize incremental fetch session handler

2019-09-30 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-8963:
---

 Summary: Benchmark and optimize incremental fetch session handler
 Key: KAFKA-8963
 URL: https://issues.apache.org/jira/browse/KAFKA-8963
 Project: Kafka
  Issue Type: Task
Reporter: Lucas Bradstreet


The FetchSessionHandler is a cause of high CPU usage in the replica fetcher for 
brokers with high partition counts. We should add a jmh benchmark and optimize 
the incremental fetch session building.
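
As a rough illustration of the hot path the ticket wants measured, the sketch
below times building a full per-partition session map for a high partition
count. It is a minimal standalone timing harness under assumed names
(`PartitionState`, `buildSession`), not a jmh benchmark and not Kafka's actual
`FetchSessionHandler`; a real measurement should use jmh as the ticket proposes.

```java
import java.util.HashMap;
import java.util.Map;

public class FetchSessionTimingSketch {
    // Stand-in for the per-partition state an incremental fetch session tracks.
    record PartitionState(long fetchOffset, int maxBytes) {}

    // Build a session map covering `partitions` partitions; this kind of
    // per-partition bookkeeping is what dominates CPU at high partition counts.
    static Map<Integer, PartitionState> buildSession(int partitions) {
        Map<Integer, PartitionState> session = new HashMap<>(partitions * 2);
        for (int p = 0; p < partitions; p++) {
            session.put(p, new PartitionState(0L, 1 << 20));
        }
        return session;
    }

    public static void main(String[] args) {
        int partitions = 50_000;
        long start = System.nanoTime();
        Map<Integer, PartitionState> session = buildSession(partitions);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(session.size() + " partitions built in " + elapsedMs + " ms");
    }
}
```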



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8962) KafkaAdminClient#describeTopics always goes through the controller

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941192#comment-16941192
 ] 

ASF GitHub Bot commented on KAFKA-8962:
---

dhruvilshah3 commented on pull request #7421: KAFKA-8962: Use least loaded node 
for AdminClient#describeTopics
URL: https://github.com/apache/kafka/pull/7421
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KafkaAdminClient#describeTopics always goes through the controller
> --
>
> Key: KAFKA-8962
> URL: https://issues.apache.org/jira/browse/KAFKA-8962
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dhruvil Shah
>Priority: Major
>
> KafkaAdminClient#describeTopics makes a MetadataRequest against the 
> controller. We should consider routing the request to any broker in the 
> cluster using `LeastLoadedNodeProvider` instead, so that we don't overwhelm 
> the controller with these requests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8962) KafkaAdminClient#describeTopics always goes through the controller

2019-09-30 Thread Dhruvil Shah (Jira)
Dhruvil Shah created KAFKA-8962:
---

 Summary: KafkaAdminClient#describeTopics always goes through the 
controller
 Key: KAFKA-8962
 URL: https://issues.apache.org/jira/browse/KAFKA-8962
 Project: Kafka
  Issue Type: Bug
Reporter: Dhruvil Shah


KafkaAdminClient#describeTopics makes a MetadataRequest against the controller. 
We should consider routing the request to any broker in the cluster using 
`LeastLoadedNodeProvider` instead, so that we don't overwhelm the controller 
with these requests.
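
The least-loaded-node idea can be sketched in isolation: route the metadata
lookup to whichever broker has the fewest in-flight requests instead of always
hitting the controller. This is a minimal sketch with assumed names
(`leastLoadedNode`, a plain load map), not the actual
`LeastLoadedNodeProvider` from the admin client internals.

```java
import java.util.Map;

public class LeastLoadedNodeSketch {
    // Pick the broker id with the fewest in-flight requests, so metadata
    // lookups spread across the cluster instead of pinning the controller.
    static int leastLoadedNode(Map<Integer, Integer> inFlightByBroker) {
        return inFlightByBroker.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    public static void main(String[] args) {
        // Broker 1 is the controller and busiest; broker 3 is idle.
        Map<Integer, Integer> load = Map.of(1, 40, 2, 7, 3, 0);
        System.out.println(leastLoadedNode(load)); // 3
    }
}
```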



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8960) Move Task determineCommitId in gradle.build to Project Level

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941138#comment-16941138
 ] 

ASF GitHub Bot commented on KAFKA-8960:
---

ravowlga123 commented on pull request #7420: KAFKA-8960 Move Task 
determineCommitId in gradle.build to Project Level
URL: https://github.com/apache/kafka/pull/7420
 
 
   Currently, the gradle task determineCommitId is present twice in 
gradle.build, once in subproject clients and once in subproject streams. Task 
determineCommitId shall be moved to project-level such that both subprojects 
(and also other subprojects) can call it without being dependent on an 
implementation detail of another subproject. Ran the build after updating.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Move Task determineCommitId in gradle.build to Project Level
> 
>
> Key: KAFKA-8960
> URL: https://issues.apache.org/jira/browse/KAFKA-8960
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Bruno Cadonna
>Assignee: Rabi Kumar K C
>Priority: Minor
>  Labels: newbie
>
> Currently, the gradle task {{determineCommitId}} is present twice in 
> {{gradle.build}}, once in subproject {{clients}} and once in subproject 
> {{streams}}. Task {{determineCommitId}}  shall be moved to project-level such 
> that both subprojects (and also other subprojects) can call it without being 
> dependent on an implementation detail of another subproject.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8960) Move Task determineCommitId in gradle.build to Project Level

2019-09-30 Thread Rabi Kumar K C (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rabi Kumar K C reassigned KAFKA-8960:
-

Assignee: Rabi Kumar K C

> Move Task determineCommitId in gradle.build to Project Level
> 
>
> Key: KAFKA-8960
> URL: https://issues.apache.org/jira/browse/KAFKA-8960
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Bruno Cadonna
>Assignee: Rabi Kumar K C
>Priority: Minor
>  Labels: newbie
>
> Currently, the gradle task {{determineCommitId}} is present twice in 
> {{gradle.build}}, once in subproject {{clients}} and once in subproject 
> {{streams}}. Task {{determineCommitId}}  shall be moved to project-level such 
> that both subprojects (and also other subprojects) can call it without being 
> dependent on an implementation detail of another subproject.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8960) Move Task determineCommitId in gradle.build to Project Level

2019-09-30 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941095#comment-16941095
 ] 

Bruno Cadonna commented on KAFKA-8960:
--

[~rabikumar.kc], yes please go ahead. Assign this ticket to yourself and open a 
PR. 

> Move Task determineCommitId in gradle.build to Project Level
> 
>
> Key: KAFKA-8960
> URL: https://issues.apache.org/jira/browse/KAFKA-8960
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Bruno Cadonna
>Priority: Minor
>  Labels: newbie
>
> Currently, the gradle task {{determineCommitId}} is present twice in 
> {{gradle.build}}, once in subproject {{clients}} and once in subproject 
> {{streams}}. Task {{determineCommitId}}  shall be moved to project-level such 
> that both subprojects (and also other subprojects) can call it without being 
> dependent on an implementation detail of another subproject.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8960) Move Task determineCommitId in gradle.build to Project Level

2019-09-30 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941035#comment-16941035
 ] 

Rabi Kumar K C commented on KAFKA-8960:
---

Hi [~cadonna] I see that currently task determineCommitId is present in 
project(':clients') and project(':streams'). I am planning to move task 
determineCommitId to the subprojects section. Please let me know if this is 
what you want. If yes, I will create a PR for the same.
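
A hedged sketch of that move in the root build.gradle: define the task once
under subprojects so every subproject inherits it. The task name comes from
the ticket; the git lookup in the body is illustrative, not Kafka's actual
implementation.

```groovy
// build.gradle (root project) — illustrative sketch only.
subprojects {
    task determineCommitId {
        doLast {
            def takeFromHash = 16
            def commitId = "unknown"
            def head = new File(rootDir, ".git/HEAD")
            if (head.exists()) {
                def ref = head.text.trim()
                // HEAD is either a symbolic ref ("ref: refs/heads/...") or a bare hash.
                commitId = ref.startsWith("ref: ")
                        ? new File(rootDir, ".git/" + ref.substring(5)).text.trim().take(takeFromHash)
                        : ref.take(takeFromHash)
            }
            ext.commitId = commitId
        }
    }
}
```

With the task at project level, `clients`, `streams`, and any other subproject
can depend on it without reaching into another subproject's build script.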

> Move Task determineCommitId in gradle.build to Project Level
> 
>
> Key: KAFKA-8960
> URL: https://issues.apache.org/jira/browse/KAFKA-8960
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Bruno Cadonna
>Priority: Minor
>  Labels: newbie
>
> Currently, the gradle task {{determineCommitId}} is present twice in 
> {{gradle.build}}, once in subproject {{clients}} and once in subproject 
> {{streams}}. Task {{determineCommitId}}  shall be moved to project-level such 
> that both subprojects (and also other subprojects) can call it without being 
> dependent on an implementation detail of another subproject.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8807) Flaky Test GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940988#comment-16940988
 ] 

ASF GitHub Bot commented on KAFKA-8807:
---

bbejeck commented on pull request #7418: KAFKA-8807: Flaky GlobalStreamThread 
test
URL: https://github.com/apache/kafka/pull/7418
 
 
   A minor refactor to explicitly verify that `Processor#close` is only called 
once.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test 
> GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown
> ---
>
> Key: KAFKA-8807
> URL: https://issues.apache.org/jira/browse/KAFKA-8807
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Bill Bejeck
>Priority: Major
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/24229/testReport/junit/org.apache.kafka.streams.integration/GlobalThreadShutDownOrderTest/shouldFinishGlobalStoreOperationOnShutDown/]
>  
> h3. Error Message
> java.lang.AssertionError: expected:<[1, 2, 3, 4]> but was:<[1, 2, 3, 4, 1, 2, 
> 3, 4]>
> h3. Stacktrace
> java.lang.AssertionError: expected:<[1, 2, 3, 4]> but was:<[1, 2, 3, 4, 1, 2, 
> 3, 4]> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:120) at 
> org.junit.Assert.assertEquals(Assert.java:146) at 
> org.apache.kafka.streams.integration.GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown(GlobalThreadShutDownOrderTest.java:138)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:65) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:412) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> 

[jira] [Commented] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-09-30 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940805#comment-16940805
 ] 

Rabi Kumar K C commented on KAFKA-8953:
---

Hi [~mjsax] Thank you for adding me to the list. I was going through the KIP 
link that you shared, but it seems I don't have permission to create the KIP 
page, so I sent an email to 
[d...@kafka.apache.org|mailto:to%c2%a0...@kafka.apache.org] as mentioned in 
the wiki, along with my id. Hopefully I will get the permission soon; then I 
will create the KIP page first and add a PR for the ticket.

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Rabi Kumar K C
>Priority: Trivial
>  Labels: beginner, needs-kip, newbie
>
> Kafka Streams ships a couple of different timestamp extractors, one named 
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems 
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`, as 
> we now have a fixed definition of partition time, and also pass partition 
> time into the `#extract(...)` method, instead of some non-well-defined 
> "previous timestamp".
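
The renamed semantics can be sketched with minimal stand-ins for Kafka's
`TimestampExtractor` contract (the real interface takes a `ConsumerRecord`
plus the current partition time; the names below are illustrative):

```java
public class PartitionTimeFallbackSketch {
    // Minimal stand-in for the TimestampExtractor contract.
    interface Extractor {
        long extract(long recordTimestamp, long partitionTime);
    }

    // On an invalid (negative) record timestamp, fall back to the
    // well-defined partition time rather than a vague "previous timestamp".
    static final Extractor USE_PARTITION_TIME_ON_INVALID = (recordTs, partitionTime) ->
            recordTs >= 0 ? recordTs : partitionTime;

    public static void main(String[] args) {
        System.out.println(USE_PARTITION_TIME_ON_INVALID.extract(1_000L, 900L)); // 1000
        System.out.println(USE_PARTITION_TIME_ON_INVALID.extract(-1L, 900L));    // 900
    }
}
```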



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8770) Either switch to or add an option for emit-on-change

2019-09-30 Thread Di Campo (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940802#comment-16940802
 ] 

Di Campo commented on KAFKA-8770:
-

This comes from the following use case:

Relations between pageviews and sessions are sent down to a separate topic 
(i.e. which session a pageview corresponds to, as a {{(pageviewId, 
sessionId)}} pair). As more pageviews are added to the session, they are sent 
downstream (in a {{sessionStore.toStream.flatMap}} fashion, where pageview ids 
are kept in the aggregator).
The relation of a pageview to its session may change, whether because of a 
session merge, a session cut by custom logic, etc.
But most of the time it is the same sessionId value, so I want to have a first 
value right away for near-realtime associations, while lowering the traffic 
sent by unnecessary updates.

> Either switch to or add an option for emit-on-change
> 
>
> Key: KAFKA-8770
> URL: https://issues.apache.org/jira/browse/KAFKA-8770
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: needs-kip
>
> Currently, Streams offers two emission models:
> * emit-on-window-close: (using Suppression)
> * emit-on-update: (i.e., emit a new result whenever a new record is 
> processed, regardless of whether the result has changed)
> There is also an option to drop some intermediate results, either using 
> caching or suppression.
> However, there is no support for emit-on-change, in which results would be 
> forwarded only if the result has changed. This has been reported to be 
> extremely valuable as a performance optimization for some high-traffic 
> applications, and it reduces the computational burden both internally for 
> downstream Streams operations, as well as for external systems that consume 
> the results, and currently have to deal with a lot of "no-op" changes.
> It would be pretty straightforward to implement this, by loading the prior 
> results before a stateful operation and comparing with the new result before 
> persisting or forwarding. In many cases, we load the prior result anyway, so 
> it may not be a significant performance impact either.
> One design challenge is what to do with timestamps. If we get one record at 
> time 1 that produces a result, and then another at time 2 that produces a 
> no-op, what should be the timestamp of the result, 1 or 2? emit-on-change 
> would require us to say 1.
> Clearly, we'd need to do some serious benchmarks to evaluate any potential 
> implementation of emit-on-change.
> Another design challenge is to decide if we should just automatically provide 
> emit-on-change for stateful operators, or if it should be configurable. 
> Configuration increases complexity, so unless the performance impact is high, 
> we may just want to change the emission model without a configuration.
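
The load-compare-forward idea from the description can be sketched as below.
This is a minimal sketch under assumed names (a map-backed store, a
`forwarded` list standing in for the downstream), not Streams' actual
processor API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class EmitOnChangeSketch {
    private final Map<String, String> store = new HashMap<>(); // prior results by key
    final List<String> forwarded = new ArrayList<>();          // what went downstream

    // Load the prior result, compare, and only persist + forward on change.
    void process(String key, String newResult) {
        String prior = store.get(key);
        if (Objects.equals(prior, newResult)) {
            return; // no-op update: suppress downstream traffic
        }
        store.put(key, newResult);
        forwarded.add(key + "=" + newResult);
    }

    public static void main(String[] args) {
        EmitOnChangeSketch sketch = new EmitOnChangeSketch();
        sketch.process("s1", "open");
        sketch.process("s1", "open");   // dropped: result unchanged
        sketch.process("s1", "closed");
        System.out.println(sketch.forwarded); // [s1=open, s1=closed]
    }
}
```

The timestamp question in the description still applies: in this sketch the
suppressed second update contributes nothing, so the surviving result keeps
the first record's timestamp.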



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8934) Introduce Instance-level Metrics

2019-09-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940764#comment-16940764
 ] 

ASF GitHub Bot commented on KAFKA-8934:
---

cadonna commented on pull request #7416: KAFKA-8934: Introduce instance-level 
metrics for streams applications
URL: https://github.com/apache/kafka/pull/7416
 
 
   - Moves `StreamsMetricsImpl` from `StreamThread` to `KafkaStreams`
   - Adds instance-level metrics as specified in KIP-444, i.e.:
   -- version
   -- commit-id
   -- application-id
   -- topology-description
   -- state
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Introduce Instance-level Metrics
> 
>
> Key: KAFKA-8934
> URL: https://issues.apache.org/jira/browse/KAFKA-8934
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> Introduce instance-level metrics as proposed in KIP-444.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8961) Unable to create secure JDBC connection through Kafka Connect

2019-09-30 Thread Monika Bainsala (Jira)
Monika Bainsala created KAFKA-8961:
--

 Summary: Unable to create secure JDBC connection through Kafka 
Connect
 Key: KAFKA-8961
 URL: https://issues.apache.org/jira/browse/KAFKA-8961
 Project: Kafka
  Issue Type: Bug
  Components: build, clients, KafkaConnect, network
Affects Versions: 2.2.1
Reporter: Monika Bainsala


As per the article below, to enable a secure JDBC connection we can use an 
updated URL parameter while calling the create-connector REST API.

Example:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=X)(PORT=1520)))(CONNECT_DATA=(SERVICE_NAME=XXAP)));EncryptionLevel=requested;EncryptionTypes=RC4_256;DataIntegrityLevel=requested;DataIntegrityTypes=MD5"

 

But this approach is not working currently; kindly help in resolving this issue.

 

Reference :

[https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/source_config_options.html]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-09-30 Thread Rabi Kumar K C (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rabi Kumar K C reassigned KAFKA-8953:
-

Assignee: Rabi Kumar K C

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Rabi Kumar K C
>Priority: Trivial
>  Labels: beginner, needs-kip, newbie
>
> Kafka Streams ships a couple of different timestamp extractors, one named 
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems 
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`, as 
> we now have a fixed definition of partition time, and also pass partition 
> time into the `#extract(...)` method, instead of some non-well-defined 
> "previous timestamp".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)