[jira] [Created] (KAFKA-8678) LeaveGroup request versioning on throttle time is incorrect

2019-07-17 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8678:
--

 Summary: LeaveGroup request versioning on throttle time is 
incorrect
 Key: KAFKA-8678
 URL: https://issues.apache.org/jira/browse/KAFKA-8678
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.3.0, 2.2.0
Reporter: Boyang Chen
Assignee: Boyang Chen


[https://github.com/apache/kafka/pull/6188] accidentally changed the version check for 
setting throttle time from v1 to v2. We should fix this change.
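
For illustration only, a minimal sketch of the kind of version gate involved, with 
hypothetical names (this is not the actual generated protocol code): throttle time 
exists from LeaveGroup v1 onward, so the check should be against v1, not v2.

{code:java}
// Hypothetical sketch, not Kafka's generated code: the throttle time field
// was added in LeaveGroup v1, so it should be set for version >= 1.
class LeaveGroupResponseSketch {
    int throttleTimeMs;

    void maybeSetThrottleTimeMs(short version, int throttleTimeMs) {
        if (version >= 1) { // the PR accidentally turned this into version >= 2
            this.throttleTimeMs = throttleTimeMs;
        }
    }
}
{code}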



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (KAFKA-8053) kafka-topics.sh gives confusing error message when the topic doesn't exist

2019-07-17 Thread Tirtha Chatterjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tirtha Chatterjee reassigned KAFKA-8053:


Assignee: Tirtha Chatterjee

> kafka-topics.sh gives confusing error message when the topic doesn't exist
> --
>
> Key: KAFKA-8053
> URL: https://issues.apache.org/jira/browse/KAFKA-8053
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jakub Scholz
>Assignee: Tirtha Chatterjee
>Priority: Minor
>
> The kafka-topics.sh utility gives a confusing message when the topic it is 
> called with doesn't exist or when no topics exist at all:
> {code}
> bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic xxx
> Error while executing topic command : Topics in [] does not exist
> [2019-03-06 13:26:33,982] ERROR java.lang.IllegalArgumentException: Topics in 
> [] does not exist
> at 
> kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:416)
> at 
> kafka.admin.TopicCommand$ZookeeperTopicService.describeTopic(TopicCommand.scala:332)
> at kafka.admin.TopicCommand$.main(TopicCommand.scala:66)
> at kafka.admin.TopicCommand.main(TopicCommand.scala)
> (kafka.admin.TopicCommand$)
> {code}
> It tries to list the topics, but because the list of topics is always empty, it 
> always prints just `[]`. The error message should be more useful and instead 
> name the topic passed by the user as the parameter, or not try to list 
> anything at all.
>  
>  
>  
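
A minimal sketch of the suggested behavior, with hypothetical names (the real logic 
lives in kafka.admin.TopicCommand, which is Scala; Java is used here only for 
illustration):

{code:java}
import java.util.List;

// Hypothetical sketch: report the topic the user asked for instead of
// formatting the always-empty list of matching topics.
class TopicExistenceCheck {
    static void ensureTopicExists(List<String> matchingTopics, String requestedTopic) {
        if (matchingTopics.isEmpty()) {
            throw new IllegalArgumentException(
                "Topic '" + requestedTopic + "' does not exist");
        }
    }
}
{code}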



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (KAFKA-7500) MirrorMaker 2.0 (KIP-382)

2019-07-17 Thread Andre Price (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886251#comment-16886251
 ] 

Andre Price edited comment on KAFKA-7500 at 7/17/19 5:45 PM:
-

Hi, sorry for what is probably a stupid question. I'm interested in trying this 
out. Are there any pre-releases anywhere? Or is the proper way to test this to 
pull and build the referenced PR branch?

If pulling and building is the path, where is the best place to find 
instructions for generating a "release" from it?

Thanks!

 

edit - Thanks [~ryannedolan] for the reply! (I'm assuming the last line was meant 
to be a gradle command.) Looking forward to trying it out!


was (Author: apriceq):
Hi, Sorry for what is probably a stupid question. I'm interested in trying this 
out. Are there any pre-releases anywhere? Or is the proper way to test this to 
pull and build the referenced PR branch? 

If the pull and build is the path where is the best place to find instructions 
for generating a "release" from it?

Thanks!

> MirrorMaker 2.0 (KIP-382)
> -
>
> Key: KAFKA-7500
> URL: https://issues.apache.org/jira/browse/KAFKA-7500
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect, mirrormaker
>Reporter: Ryanne Dolan
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: Active-Active XDCR setup.png
>
>
> Implement a drop-in replacement for MirrorMaker leveraging the Connect 
> framework.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]
> [https://github.com/apache/kafka/pull/6295]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8621) KIP-486: Support custom way to load KeyStore and TrustStore

2019-07-17 Thread Maulin Vasavada (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887277#comment-16887277
 ] 

Maulin Vasavada commented on KAFKA-8621:


[~gshapira_impala_35cc] Can you please review this?

> KIP-486: Support custom way to load KeyStore and TrustStore
> ---
>
> Key: KAFKA-8621
> URL: https://issues.apache.org/jira/browse/KAFKA-8621
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Maulin Vasavada
>Assignee: Thomas Zhou
>Priority: Minor
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-486%3A+Support+custom+way+to+load+KeyStore+and+TrustStore



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8677) Flakey test GroupEndToEndAuthorizationTest#testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl

2019-07-17 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-8677:
---
Description: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/6325/console]

 
*18:43:39* kafka.api.GroupEndToEndAuthorizationTest > testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED
*18:44:00* kafka.api.GroupEndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl failed, log available in /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk11-scala2.12/core/build/reports/testOutput/kafka.api.GroupEndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl.test.stdout
*18:44:00* kafka.api.GroupEndToEndAuthorizationTest > testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl FAILED
*18:44:00* org.scalatest.exceptions.TestFailedException: Consumed 0 records before timeout instead of the expected 1 records

  was:[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/6325/console]


> Flakey test 
> GroupEndToEndAuthorizationTest#testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl
> 
>
> Key: KAFKA-8677
> URL: https://issues.apache.org/jira/browse/KAFKA-8677
> Project: Kafka
>  Issue Type: Bug
>Reporter: Boyang Chen
>Priority: Major
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/6325/console]
>  
> *18:43:39* kafka.api.GroupEndToEndAuthorizationTest > testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED
> *18:44:00* kafka.api.GroupEndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl failed, log available in /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk11-scala2.12/core/build/reports/testOutput/kafka.api.GroupEndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl.test.stdout
> *18:44:00* kafka.api.GroupEndToEndAuthorizationTest > testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl FAILED
> *18:44:00* org.scalatest.exceptions.TestFailedException: Consumed 0 records before timeout instead of the expected 1 records



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8677) Flakey test GroupEndToEndAuthorizationTest#testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl

2019-07-17 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8677:
--

 Summary: Flakey test 
GroupEndToEndAuthorizationTest#testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl
 Key: KAFKA-8677
 URL: https://issues.apache.org/jira/browse/KAFKA-8677
 Project: Kafka
  Issue Type: Bug
Reporter: Boyang Chen


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/6325/console]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8672) RebalanceSourceConnectorsIntegrationTest#testReconfigConnector

2019-07-17 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887234#comment-16887234
 ] 

Boyang Chen commented on KAFKA-8672:


failed again: 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/367/console]

 

> RebalanceSourceConnectorsIntegrationTest#testReconfigConnector
> --
>
> Key: KAFKA-8672
> URL: https://issues.apache.org/jira/browse/KAFKA-8672
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/6281/testReport/junit/org.apache.kafka.connect.integration/RebalanceSourceConnectorsIntegrationTest/testReconfigConnector/]
> {quote}java.lang.RuntimeException: Could not find enough records. found 33, 
> expected 100 at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.consume(EmbeddedKafkaCluster.java:306)
>  at 
> org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testReconfigConnector(RebalanceSourceConnectorsIntegrationTest.java:180){quote}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-8218) IllegalStateException while accessing context in Transformer

2019-07-17 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8218.
--
Resolution: Not A Problem

> IllegalStateException while accessing context in Transformer
> 
>
> Key: KAFKA-8218
> URL: https://issues.apache.org/jira/browse/KAFKA-8218
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Bartłomiej Kępa
>Priority: Major
>
> Custom Kotlin implementation of Transformer throws 
> {code}
> java.lang.IllegalStateException: This should not happen as headers() should 
> only be called while a record is processed
> {code}
> while being plugged into a stream topology that otherwise works. The invocation 
> of the transform() method has valid arguments (key and GenericRecord).
> The exception is thrown because our implementation of transform needs to 
> access headers from the context:
> {code:java}
> override fun transform(key: String?, value: GenericRecord): KeyValue<String, GenericRecord> {
>     val headers = context.headers()
>     ...
> }
> {code}
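
For context, a plausible reading of the "Not A Problem" resolution, based on the 
Transformer contract: context.headers() is only valid while a record is being 
processed, so the context obtained in init() must only be used inside transform(), 
never outside record processing. A minimal Java sketch of the supported pattern 
(class name hypothetical):

{code:java}
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

class HeaderReadingTransformer implements Transformer<String, GenericRecord, KeyValue<String, GenericRecord>> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context; // keep the context, but do not call headers() yet
    }

    @Override
    public KeyValue<String, GenericRecord> transform(String key, GenericRecord value) {
        Headers headers = context.headers(); // valid here: a record is in flight
        return KeyValue.pair(key, value);
    }

    @Override
    public void close() {}
}
{code}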



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-07-17 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887223#comment-16887223
 ] 

Guozhang Wang commented on KAFKA-8649:
--

Hello [~ferbncode], did you follow the upgrade path for 2.1.0 
(https://kafka.apache.org/21/documentation/streams/upgrade-guide) with the config 
`upgrade.from` set?
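
For reference, a minimal sketch of the first rolling bounce the upgrade guide 
describes, assuming the app is coming from 2.0.x (application id and bootstrap 
servers are placeholders):

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

class UpgradeFromExample {
    static Properties streamsConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");         // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        // First rolling bounce on 2.1.0: keep sending the old rebalance
        // metadata version. Remove this config for the second rolling bounce.
        props.put(StreamsConfig.UPGRADE_FROM_CONFIG, "2.0");
        return props;
    }
}
{code}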

> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Priority: Major
> Fix For: 2.0.0
>
>
> While doing a rolling update of a cluster of nodes running Kafka Streams 
> application, the stream threads in the nodes running the old version of the 
> library (2.0.0), fail with the following error: 
> {code:java}
> [ERROR] [application-existing-StreamThread-336] 
> [o.a.k.s.p.internals.StreamThread] - stream-thread 
> [application-existing-StreamThread-336] Encountered the following error 
> during processing:
> java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
> #011at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.<init>(SubscriptionInfo.java:67)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-7176) State store metrics for migrated tasks are not removed

2019-07-17 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7176.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> State store metrics for migrated tasks are not removed
> --
>
> Key: KAFKA-7176
> URL: https://issues.apache.org/jira/browse/KAFKA-7176
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Sam Lendle
>Priority: Major
> Fix For: 2.3.0
>
>
> I observed that state store metrics for tasks that have been migrated to 
> other instances are not removed and are still being updated with phantom 
> values, (when viewed for example via jmx mbeans). 
> For all tasks/threads on the same instance (including for migrated tasks), 
> the values of state store metrics are all (nearly) the same. For the rate 
> metrics at least, the value reported for each task is the rate I expect for 
> all active tasks on that instance, so things are apparently being counted 
> multiple times. Presumably, this is how migrated task metrics are being 
> updated.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-7850) Remove deprecated KStreamTestDriver

2019-07-17 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7850.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

KStreamTestDriver has been removed as of the 2.3.0 release.

> Remove deprecated KStreamTestDriver
> ---
>
> Key: KAFKA-7850
> URL: https://issues.apache.org/jira/browse/KAFKA-7850
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Reporter: Richard Yu
>Assignee: Richard Yu
>Priority: Major
> Fix For: 2.3.0
>
>
> Ever since a series of test improvements were made to the Kafka Streams test 
> suite, KStreamTestDriver has been deprecated in favor of TopologyTestDriver. 
> However, a couple of existing unit tests continue to use KStreamTestDriver. We 
> wish to migrate all remaining classes to TopologyTestDriver so we can remove 
> the deprecated class.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8345) Create an Administrative API for Replica Reassignment

2019-07-17 Thread Andrew Olson (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887206#comment-16887206
 ] 

Andrew Olson commented on KAFKA-8345:
-

[~cmccabe] [~enether] Can you verify that this could be used to just change the 
replica order, so that a different broker becomes the preferred leader?

We have an admin script (shown below) that demotes a selected broker from being 
the leader for any of its partitions. This is a use case that could possibly be 
served by this new API, if it's supported.

{noformat}
# usage:
# 1. On a Kafka broker node, find all partitions that have the given broker id as
#    the first replica, making it the preferred leader:
# export BROKER_ID=
# export KAFKA_ZOOKEEPER=$(awk -F= '/zookeeper.connect/{print $2}' /opt/kafka/config/server.properties)
# /opt/kafka/bin/kafka-topics.sh --zookeeper ${KAFKA_ZOOKEEPER} --describe | grep "Replicas: ${BROKER_ID}," | awk '{print $2,$4,$8}' > kafka_topics_output.txt
# 2. Download and run this script to move the first replica to the end of the
#    replica list, making it a follower by default:
# ruby demote_kafka_broker.rb > reorder_replicas.json
# 3. Execute the replica order reassignment:
# /opt/kafka/bin/kafka-reassign-partitions.sh --execute --zookeeper ${KAFKA_ZOOKEEPER} --manual-assignment-json-file reorder_replicas.json
# 4. Verify the change was executed as expected:
# /opt/kafka/bin/kafka-topics.sh --zookeeper ${KAFKA_ZOOKEEPER} --describe

require 'json'

topics = []
File.open("kafka_topics_output.txt", "r") do |f|
  f.each_line do |line|
    parts = line.split(' ')
    t = {}
    t['topic'] = parts[0]
    t['partition'] = parts[1].to_i
    t['replicas'] = parts[2].split(',').map { |r| r.to_i }
    # Rotate the first replica (the preferred leader) to the end of the list.
    t['replicas'] << t['replicas'][0]
    t['replicas'].delete_at(0)
    topics << t
  end
end

p = {}
p['partitions'] = topics
p['version'] = 1

puts JSON.pretty_generate(p)
{noformat}

> Create an Administrative API for Replica Reassignment
> -
>
> Key: KAFKA-8345
> URL: https://issues.apache.org/jira/browse/KAFKA-8345
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
>
> Create an Administrative API for Replica Reassignment, as discussed in KIP-455



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8477) Cannot consume and request metadata for __consumer_offsets topic in Kafka 2.2

2019-07-17 Thread David Judd (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887202#comment-16887202
 ] 

David Judd commented on KAFKA-8477:
---

We're still trying to reproduce on the broker side, but it does look like the 
client would have the behavior we've seen in the scenario [~mumrah] suggests 
above, where we consistently get a stale metadata response (i.e., one with an 
earlier leader epoch than we're seeing in our offset response). Since, IIUC, we 
hit the (possibly stale, but that's irrelevant here) leader for the partition to 
list offsets, but the DefaultMetadataUpdater hits our estimate of the 
least-loaded broker to get the metadata update, all we need to consistently get 
a stale metadata response is for the least-loaded broker to be distinct from 
the partition leader and to have persistently stale metadata. It seems like 
there has to be some kind of broker bug or bad state for that to occur, but we 
could also try to be more robust to it on the client.

One idea is to make sure that when we're trying to get a metadata update 
specifically because we've seen a higher leader epoch than our current cache in 
some response, we hit the broker (or one of the brokers) that gave us that 
higher leader epoch, instead of hitting an arbitrary least-loaded broker. 
[~mumrah], does that sound like a good idea? Happy to take a stab at it in a PR 
if so.
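
A rough sketch of that idea, with entirely hypothetical names (this is not the 
NetworkClient/DefaultMetadataUpdater API):

{code:java}
// Hypothetical sketch: when a metadata refresh was triggered by observing a
// newer leader epoch in a response, target the broker that reported that
// epoch instead of the least-loaded node.
class MetadataNodeChoiceSketch {
    interface Node {}

    interface RefreshReason {
        boolean triggeredByNewerLeaderEpoch();
        Node reportingBroker(); // may be null if unknown
    }

    Node chooseNodeForMetadataRequest(RefreshReason reason, Node leastLoaded) {
        if (reason.triggeredByNewerLeaderEpoch() && reason.reportingBroker() != null) {
            return reason.reportingBroker();
        }
        return leastLoaded;
    }
}
{code}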

> Cannot consume and request metadata for __consumer_offsets topic in Kafka 2.2
> -
>
> Key: KAFKA-8477
> URL: https://issues.apache.org/jira/browse/KAFKA-8477
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.2.0
>Reporter: Mike Mintz
>Assignee: David Arthur
>Priority: Major
> Attachments: kafka-2.2.0-consumer-offset-metadata-bug-master.zip, 
> logs.txt
>
>
> We have an application that consumes from the __consumer_offsets topic to 
> report lag metrics. When we upgraded its kafka-clients dependency from 2.0.1 
> to 2.2.0, it crashed with:
> {noformat}
> Exception in thread "main" org.apache.kafka.common.errors.TimeoutException: 
> Failed to get offsets by times in 30001ms
> {noformat}
> I created a minimal reproduction at 
> [https://github.com/mikemintz/kafka-2.2.0-consumer-offset-metadata-bug] and 
> I'm uploading a zip of this code for posterity.
> In particular, the behavior happens when I call KafkaConsumer.assign(), then 
> poll(), then endOffsets(). This behavior only happens for the 
> __consumer_offsets topic. It also only happens on the Kafka cluster that we 
> run in production, which runs Kafka 2.2.0. The error does not occur on a 
> freshly created Kafka cluster, and I can't get it to reproduce with 
> EmbeddedKafka.
> It works fine with both Kafka 2.0.1 and Kafka 2.1.1, so something broke 
> between 2.1.1 and 2.2.0. Based on the 2.2.0 changelog and the client log 
> messages (attached), it looks like it may have been introduced in KAFKA-7738 
> (cc [~mumrah]). It gets stuck in a loop, repeating the following block of log 
> messages:
> {noformat}
> 2019-06-03 23:24:15 DEBUG NetworkClient:1073 - [Consumer 
> clientId=test.mikemintz.lag-tracker-reproduce, 
> groupId=test.mikemintz.lag-tracker-reproduce] Sending metadata request 
> (type=MetadataRequest, topics=__consumer_offsets) to node REDACTED:9094 (id: 
> 2134 rack: us-west-2b)
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5862 
> to 5862 for partition __consumer_offsets-0
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6040 
> to 6040 for partition __consumer_offsets-10
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6008 
> to 6008 for partition __consumer_offsets-20
> 2019-06-03 23:24:15 DEBUG Metadata:208 - Not replacing existing epoch 6153 
> with new epoch 6152
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5652 
> to 5652 for partition __consumer_offsets-30
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6081 
> to 6081 for partition __consumer_offsets-39
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5629 
> to 5629 for partition __consumer_offsets-9
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5983 
> to 5983 for partition __consumer_offsets-11
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5896 
> to 5896 for partition __consumer_offsets-31
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5278 
> to 5278 for partition __consumer_offsets-13
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 6026 
> to 6026 for partition __consumer_offsets-18
> 2019-06-03 23:24:15 DEBUG Metadata:201 - Updating last seen epoch from 5608 
> to 

[jira] [Commented] (KAFKA-8599) Replace ExpireDelegationToken request/response with automated protocol

2019-07-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886917#comment-16886917
 ] 

ASF GitHub Bot commented on KAFKA-8599:
---

mimaison commented on pull request #7098: KAFKA-8599: Use automatic RPC 
generation in ExpireDelegationToken
URL: https://github.com/apache/kafka/pull/7098
 
 
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Replace ExpireDelegationToken request/response with automated protocol
> --
>
> Key: KAFKA-8599
> URL: https://issues.apache.org/jira/browse/KAFKA-8599
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-7849) Warning when adding GlobalKTable

2019-07-17 Thread Omar Al-Safi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886897#comment-16886897
 ] 

Omar Al-Safi commented on KAFKA-7849:
-

Hi [~mjsax], I would like to give myself a headstart with this issue and work 
on it. Is it possible if you can add me to the contribution list in Jira so I 
can work on it? Thanks 

> Warning when adding GlobalKTable
> 
>
> Key: KAFKA-7849
> URL: https://issues.apache.org/jira/browse/KAFKA-7849
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Dmitry Minkovsky
>Priority: Minor
>  Labels: newbie
>
> Per 
> https://lists.apache.org/thread.html/59c119be8a2723c501e0653fa3ed571e8c09be40d5b5170c151528b5@%3Cusers.kafka.apache.org%3E
>  
> When I add a GlobalKTable for topic "message-write-service-user-ids-by-email" 
> to my topology, I get this warning:
>  
> [2019-01-19 12:18:14,008] WARN 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:421) 
> [Consumer 
> clientId=message-write-service-55f2ca4d-0efc-4344-90d3-955f9f5a65fd-StreamThread-2-consumer,
>  groupId=message-write-service] The following subscribed topics are not 
> assigned to any members: [message-write-service-user-ids-by-email] 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8676) Avoid Stopping Unnecessary Connectors and Tasks

2019-07-17 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886885#comment-16886885
 ] 

ASF GitHub Bot commented on KAFKA-8676:
---

LuyingLiu commented on pull request #7097: KAFKA-8676: Avoid Unnecessary stops 
and starts of tasks
URL: https://github.com/apache/kafka/pull/7097
 
 
   As this tests initialization behavior, I do not think unit and 
   integration testing is a good fit; system tests are needed. The INFO 
   log output and the Kafka Connect REST API for checking the status of 
   connectors and tasks are useful tools for verifying which 
   connectors and tasks start and stop.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Avoid Stopping Unnecessary Connectors and Tasks 
> 
>
> Key: KAFKA-8676
> URL: https://issues.apache.org/jira/browse/KAFKA-8676
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.3.0
> Environment: centOS
>Reporter: Luying Liu
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 2.3.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When adding a new connector or changing a connector configuration, Kafka 
> Connect 2.3.0 will stop all existing tasks and then start all tasks, including 
> both the new tasks and the existing ones. However, this is not necessary at 
> all: only the new connector and its tasks need to be started, as rebalancing 
> can be applied to both running and suspended tasks. The following patch fixes 
> this problem and starts only the new connectors and tasks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8676) Avoid Stopping Unnecessary Connectors and Tasks

2019-07-17 Thread Luying Liu (JIRA)
Luying Liu created KAFKA-8676:
-

 Summary: Avoid Stopping Unnecessary Connectors and Tasks 
 Key: KAFKA-8676
 URL: https://issues.apache.org/jira/browse/KAFKA-8676
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.3.0
 Environment: centOS
Reporter: Luying Liu
 Fix For: 2.3.0


When adding a new connector or changing a connector configuration, Kafka 
Connect 2.3.0 will stop all existing tasks and then start all tasks, including 
both the new tasks and the existing ones. However, this is not necessary at 
all: only the new connector and its tasks need to be started, as rebalancing 
can be applied to both running and suspended tasks. The following patch fixes 
this problem and starts only the new connectors and tasks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8584) Allow "bytes" type to generate a ByteBuffer rather than byte arrays

2019-07-17 Thread SuryaTeja Duggi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886787#comment-16886787
 ] 

SuryaTeja Duggi commented on KAFKA-8584:


[~guozhang] Could you point me to the relevant code paths?

> Allow "bytes" type to generated a ByteBuffer rather than byte arrays
> 
>
> Key: KAFKA-8584
> URL: https://issues.apache.org/jira/browse/KAFKA-8584
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: SuryaTeja Duggi
>Priority: Major
>  Labels: newbie
>
> Right now in the RPC definition, type {{bytes}} is translated into 
> {{byte[]}} in the generated Java code. However, for some requests like 
> ProduceRequest#partitionData, the underlying type would better be a 
> ByteBuffer rather than a byte array.
> One proposal is to add an additional boolean tag {{useByteBuffer}} for the 
> {{bytes}} type, which defaults to false; when set to {{true}}, the 
> corresponding field would generate {{ByteBuffer}} instead of {{byte[]}}.
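
For illustration, a field in one of the message definition JSON files might opt in 
with the proposed tag like this (the {{useByteBuffer}} flag is the proposal, not an 
existing option, and the field shown is made up):

{code}
{ "name": "Records", "type": "bytes", "versions": "0+",
  "useByteBuffer": true,
  "about": "Record data, generated as a ByteBuffer rather than byte[]." }
{code}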



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8675) "Main" consumers are not unsubscribed on KafkaStreams.close()

2019-07-17 Thread Modestas Vainius (JIRA)
Modestas Vainius created KAFKA-8675:
---

 Summary: "Main" consumers are not unsubsribed on 
KafkaStreams.close()
 Key: KAFKA-8675
 URL: https://issues.apache.org/jira/browse/KAFKA-8675
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.1
Reporter: Modestas Vainius


Hi!

It seems that {{KafkaStreams.close()}} never unsubscribes the "main" Kafka 
consumers. As far as I can tell, 
{{org.apache.kafka.streams.processor.internals.TaskManager#shutdown}} 
unsubscribes only the {{restoreConsumer}}. This results in the Kafka group 
coordinator having to throw the consumer out of the consumer group in a 
non-clean way. {{KafkaStreams.close()}} does {{close()}} those consumers, but 
it seems that is not enough for a clean exit.
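
To illustrate what I mean, a minimal sketch of the missing step, with hypothetical 
field names (this is not the actual TaskManager code):

{code:java}
import org.apache.kafka.clients.consumer.Consumer;

class ShutdownSketch {
    Consumer<byte[], byte[]> mainConsumer;
    Consumer<byte[], byte[]> restoreConsumer;

    void shutdown() {
        restoreConsumer.unsubscribe(); // already happens today, per this report
        mainConsumer.unsubscribe();    // the step this issue suggests is missing
        mainConsumer.close();
        restoreConsumer.close();
    }
}
{code}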

Kafka Streams connects to Kafka:
{code:java}
kafka| [2019-07-17 08:02:35,707] INFO [GroupCoordinator 1]: Preparing 
to rebalance group 1-streams-test in state PreparingRebalance with old 
generation 0 (__consumer_offsets-44) (reason: Adding new member 
1-streams-test-70da298b-6c8e-4ef0-8c2a-e7cba079ec9d-StreamThread-1-consumer-9af23416-d14e-49b8-b2ae-70837f2df0db)
 (kafka.coordinator.group.GroupCoordinator)
kafka| [2019-07-17 08:02:35,717] INFO [GroupCoordinator 1]: Stabilized 
group 1-streams-test generation 1 (__consumer_offsets-44) 
(kafka.coordinator.group.GroupCoordinator)
kafka| [2019-07-17 08:02:35,730] INFO [GroupCoordinator 1]: Assignment 
received from leader for group 1-streams-test for generation 1 
(kafka.coordinator.group.GroupCoordinator)
{code}
Processing finishes in 2 seconds, but after 10 seconds I see this in the Kafka logs:
{code:java}
kafka| [2019-07-17 08:02:45,749] INFO [GroupCoordinator 1]: Member 
1-streams-test-70da298b-6c8e-4ef0-8c2a-e7cba079ec9d-StreamThread-1-consumer-9af23416-d14e-49b8-b2ae-70837f2df0db
 in group 1-streams-test has failed, removing it from the group 
(kafka.coordinator.group.GroupCoordinator)
kafka| [2019-07-17 08:02:45,749] INFO [GroupCoordinator 1]: Preparing 
to rebalance group 1-streams-test in state PreparingRebalance with old 
generation 1 (__consumer_offsets-44) (reason: removing member 
1-streams-test-70da298b-6c8e-4ef0-8c2a-e7cba079ec9d-StreamThread-1-consumer-9af23416-d14e-49b8-b2ae-70837f2df0db
 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
kafka| [2019-07-17 08:02:45,749] INFO [GroupCoordinator 1]: Group 
1-streams-test with generation 2 is now empty (__consumer_offsets-44) 
(kafka.coordinator.group.GroupCoordinator)
{code}
The topology is similar to the [kafka testing 
example|https://kafka.apache.org/22/documentation/streams/developer-guide/testing.html], 
but I ran it against a real Kafka instance (one node):
{code:java}
new Topology().with {
    it.addSource("sourceProcessor", "input-topic")
    it.addProcessor("aggregator", new CustomMaxAggregatorSupplier(), "sourceProcessor")
    it.addStateStore(
        Stores.keyValueStoreBuilder(
            Stores.inMemoryKeyValueStore("aggStore"),
            Serdes.String(),
            Serdes.Long()).withLoggingDisabled(), // need to disable logging to allow aggregatorStore pre-populating
        "aggregator")
    it.addSink(
        "sinkProcessor",
        "result-topic",
        "aggregator"
    )
    it
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8664) non-JSON format messages when streaming data from Kafka to Mongo

2019-07-17 Thread Vu Le (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vu Le updated KAFKA-8664:
-
Affects Version/s: (was: 2.1.1)
   2.2.0

> non-JSON format messages when streaming data from Kafka to Mongo
> 
>
> Key: KAFKA-8664
> URL: https://issues.apache.org/jira/browse/KAFKA-8664
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.0
>Reporter: Vu Le
>Priority: Major
> Attachments: MongoSinkConnector.properties, 
> log_error_when_stream_data_not_a_json_format.txt
>
>
> Hi team,
> I can stream data from Kafka to MongoDB with JSON messages. I use MongoDB 
> Kafka Connector 
> ([https://github.com/mongodb/mongo-kafka/blob/master/docs/install.md])
> However, if I send a non-JSON format message the Connector died. Please see 
> the log file for details.
> My config file:
> {code:java}
> name=mongo-sink
> topics=test
> connector.class=com.mongodb.kafka.connect.MongoSinkConnector
> tasks.max=1
> key.ignore=true
> # Specific global MongoDB Sink Connector configuration
> connection.uri=mongodb://localhost:27017
> database=test_kafka
> collection=transaction
> max.num.retries=3
> retries.defer.timeout=5000
> type.name=kafka-connect
> key.converter=org.apache.kafka.connect.json.JsonConverter
> key.converter.schemas.enable=false
> value.converter=org.apache.kafka.connect.json.JsonConverter
> value.converter.schemas.enable=false
> {code}
> I have 2 separate questions:
>  # How do I ignore messages which are not in JSON format?
>  # How do I define a default key for this kind of message (for example: abc -> 
> \{ "non-json": "abc" })?
> Thanks



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8664) non-JSON format messages when streaming data from Kafka to Mongo

2019-07-17 Thread Vu Le (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886761#comment-16886761
 ] 

Vu Le commented on KAFKA-8664:
--

Hi Team,

Could you please take a look at this issue?

Thanks,

Vu Le

> non-JSON format messages when streaming data from Kafka to Mongo
> 
>
> Key: KAFKA-8664
> URL: https://issues.apache.org/jira/browse/KAFKA-8664
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.1.1
>Reporter: Vu Le
>Priority: Major
> Attachments: MongoSinkConnector.properties, 
> log_error_when_stream_data_not_a_json_format.txt
>
>
> Hi team,
> I can stream data from Kafka to MongoDB with JSON messages. I use MongoDB 
> Kafka Connector 
> ([https://github.com/mongodb/mongo-kafka/blob/master/docs/install.md])
> However, if I send a non-JSON format message the Connector died. Please see 
> the log file for details.
> My config file:
> {code:java}
> name=mongo-sink
> topics=test
> connector.class=com.mongodb.kafka.connect.MongoSinkConnector
> tasks.max=1
> key.ignore=true
> # Specific global MongoDB Sink Connector configuration
> connection.uri=mongodb://localhost:27017
> database=test_kafka
> collection=transaction
> max.num.retries=3
> retries.defer.timeout=5000
> type.name=kafka-connect
> key.converter=org.apache.kafka.connect.json.JsonConverter
> key.converter.schemas.enable=false
> value.converter=org.apache.kafka.connect.json.JsonConverter
> value.converter.schemas.enable=false
> {code}
> I have 2 separate questions:
>  # How do I ignore messages which are not in JSON format?
>  # How do I define a default key for this kind of message (for example: abc -> 
> \{ "non-json": "abc" })?
> Thanks



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8024) UtilsTest.testFormatBytes fails with german locale

2019-07-17 Thread Patrik Kleindl (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrik Kleindl updated KAFKA-8024:
--
Fix Version/s: 2.4.0

> UtilsTest.testFormatBytes fails with german locale
> --
>
> Key: KAFKA-8024
> URL: https://issues.apache.org/jira/browse/KAFKA-8024
> Project: Kafka
>  Issue Type: Bug
>Reporter: Patrik Kleindl
>Assignee: Patrik Kleindl
>Priority: Trivial
> Fix For: 2.4.0
>
>
> The unit test fails when the default locale is not English (in my case, de_AT):
> assertEquals("1.1 MB", formatBytes((long) (1.1 * 1024 * 1024)));
>  
> org.apache.kafka.common.utils.UtilsTest > testFormatBytes FAILED
>     org.junit.ComparisonFailure: expected:<1[.]1 MB> but was:<1[,]1 MB>
>         at org.junit.Assert.assertEquals(Assert.java:115)
>         at org.junit.Assert.assertEquals(Assert.java:144)
>         at 
> org.apache.kafka.common.utils.UtilsTest.testFormatBytes(UtilsTest.java:106)
>  
> The easiest fix in this case should be adding
> {code:java}
> jvmArgs '-Duser.language=en -Duser.country=US'{code}
> to the test configuration 
> [https://github.com/apache/kafka/blob/b03e8c234a8aeecd10c2c96b683cfb39b24b548a/build.gradle#L270]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)