[jira] [Commented] (KAFKA-10875) offsetsForTimes returns null for some partitions when it shouldn't?

2021-01-20 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17268468#comment-17268468
 ] 

huxihx commented on KAFKA-10875:


[~gongyifei] What's the broker version? Besides, could you check the exact log 
segment file containing the earlier timestamp to see whether its index files 
function well?
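
For reference, here is a minimal sketch (not taken from this ticket; the helper 
name and the end-offset fallback are my own) of how {{offsetsForTimes}} is 
typically called from a rebalance listener and how a null entry can be handled:

{code:java}
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.stream.Collectors;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class SeekByTimestamp {
    // Seek each assigned partition to the first offset whose timestamp is >= ts.
    // When offsetsForTimes returns null for a partition, fall back to its end offset.
    static void seekToTimestamp(Consumer<?, ?> consumer,
                                Collection<TopicPartition> partitions, long ts) {
        Map<TopicPartition, OffsetAndTimestamp> found = consumer.offsetsForTimes(
                partitions.stream().collect(Collectors.toMap(tp -> tp, tp -> ts)));
        for (Map.Entry<TopicPartition, OffsetAndTimestamp> e : found.entrySet()) {
            if (e.getValue() != null) {
                consumer.seek(e.getKey(), e.getValue().offset());
            } else {
                consumer.seekToEnd(Collections.singleton(e.getKey()));
            }
        }
    }
}
{code}

Per the consumer javadoc, a null entry is only expected when the partition has 
no message with a timestamp greater than or equal to the requested one, which 
is why getting null for the older timestamp looks suspicious here.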

> offsetsForTimes returns null for some partitions when it shouldn't?
> ---
>
> Key: KAFKA-10875
> URL: https://issues.apache.org/jira/browse/KAFKA-10875
> Project: Kafka
>  Issue Type: Bug
>Reporter: Yifei Gong
>Priority: Minor
>
> I use spring-boot 2.2.11, spring-kafka 2.4.11 and apache kafka-clients 2.4.1
> I have my consumer {{implements ConsumerAwareRebalanceListener}}, and I am 
> trying to seek to offsets after a certain timestamp inside the 
> {{onPartitionsAssigned}} method by calling {{offsetsForTimes}}.
> I found this strange behavior of the {{offsetsForTimes}} method:
> When I seek to an earlier timestamp {{1607922415534L}} (GMT December 14, 2020 
> 5:06:55.534 AM) like below:
> {code:java}
> @Override
> public void onPartitionsAssigned(Consumer<?, ?> consumer, 
> Collection<TopicPartition> partitions) {
> // calling assignment just to ensure my consumer is actually assigned the 
> // partitions
> Set<TopicPartition> tps = consumer.assignment();
> Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes = new HashMap<>();
> offsetsForTimes.putAll(consumer.offsetsForTimes(partitions.stream()
> .collect(Collectors.toMap(tp -> tp, epoch -> 1607922415534L))));
> }
> {code}
> By setting a breakpoint, I can see I got the map below:
> {noformat}
> {TopicPartition@5492} "My.Data.Topic-1" -> {OffsetAndTimestamp@5493} 
> "(timestamp=1607922521082, leaderEpoch=282, offset=22475886)"
> {TopicPartition@5495} "My.Data.Topic-0" -> {OffsetAndTimestamp@5496} 
> "(timestamp=1607922523035, leaderEpoch=328, offset=25587551)"
> {TopicPartition@5498} "My.Data.Topic-5" -> null
> {TopicPartition@5500} "My.Data.Topic-4" -> {OffsetAndTimestamp@5501} 
> "(timestamp=1607924819752, leaderEpoch=323, offset=24578937)"
> {TopicPartition@5503} "My.Data.Topic-3" -> {OffsetAndTimestamp@5504} 
> "(timestamp=1607922522143, leaderEpoch=299, offset=23439914)" 
> {TopicPartition@5506} "My.Data.Topic-2" -> {OffsetAndTimestamp@5507} 
> "(timestamp=1607938218461, leaderEpoch=318, offset=23415078)" 
> {TopicPartition@5509} "My.Data.Topic-9" -> {OffsetAndTimestamp@5510} 
> "(timestamp=1607922521019, leaderEpoch=298, offset=22002124)" 
> {TopicPartition@5512} "My.Data.Topic-8" -> {OffsetAndTimestamp@5513} 
> "(timestamp=1607922520780, leaderEpoch=332, offset=23406692)" 
> {TopicPartition@5515} "My.Data.Topic-7" -> {OffsetAndTimestamp@5516} 
> "(timestamp=1607922522800, leaderEpoch=285, offset=22215781)" 
> {TopicPartition@5518} "My.Data.Topic-6" -> null
> {noformat}
> As you can see, for some of the partitions (5 and 6) it returned null.
> However, if I seek to a more recent timestamp like {{1607941818423L}} (GMT 
> December 14, 2020 10:30:18.423 AM), I got offsets for all partitions:
> {noformat}
> {TopicPartition@5492} "My.Data.Topic-1" -> {OffsetAndTimestamp@5493} 
> "(timestamp=1607942934371, leaderEpoch=282, offset=22568732)"
> {TopicPartition@5495} "My.Data.Topic-0" -> {OffsetAndTimestamp@5496} 
> "(timestamp=1607941818435, leaderEpoch=328, offset=25685999)"
> {TopicPartition@5498} "My.Data.Topic-5" -> {OffsetAndTimestamp@5499} 
> "(timestamp=1607941818424, leaderEpoch=309, offset=24333860)"
> {TopicPartition@5501} "My.Data.Topic-4" -> {OffsetAndTimestamp@5502} 
> "(timestamp=1607941818424, leaderEpoch=323, offset=24666385)"
> {TopicPartition@5504} "My.Data.Topic-3" -> {OffsetAndTimestamp@5505} 
> "(timestamp=1607941818433, leaderEpoch=299, offset=23529597)"
> {TopicPartition@5507} "My.Data.Topic-2" -> {OffsetAndTimestamp@5508} 
> "(timestamp=1607941818423, leaderEpoch=318, offset=23431817)"
> {TopicPartition@5510} "My.Data.Topic-9" -> {OffsetAndTimestamp@5511} 
> "(timestamp=1607941818517, leaderEpoch=298, offset=22082849)"
> {TopicPartition@5513} "My.Data.Topic-8" -> {OffsetAndTimestamp@5514} 
> "(timestamp=1607941818423, leaderEpoch=332, offset=23491462)"
> {TopicPartition@5516} "My.Data.Topic-7" -> {OffsetAndTimestamp@5517} 
> "(timestamp=1607942934371, leaderEpoch=285, offset=22306422)"
> {TopicPartition@5519} "My.Data.Topic-6" -> {OffsetAndTimestamp@5520} 
> "(timestamp=1607941818424, leaderEpoch=317, offset=24677423)"
> {noformat}
> So I am confused why seeking to an older timestamp gave me nulls when there 
> are indeed messages with later timestamps, as my second attempt shows.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-10606) Auto create non-existent topics when fetching metadata for all topics

2020-10-14 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213813#comment-17213813
 ] 

huxihx edited comment on KAFKA-10606 at 10/14/20, 10:45 AM:


I am wondering if we could change the default value for 
_AllowAutoTopicCreation_ to _false_ in MetadataRequest.json. In doing so, 
ALL_TOPICS_REQUEST_DATA would actually disable auto-creation.


was (Author: huxi_2b):
I am wondering if we could change the default value for 
_AllowAutoTopicCreation_ to _false_ in MetadataRequest.json.

> Auto create non-existent topics when fetching metadata for all topics
> -
>
> Key: KAFKA-10606
> URL: https://issues.apache.org/jira/browse/KAFKA-10606
> Project: Kafka
>  Issue Type: Bug
>Reporter: Lincong Li
>Priority: Major
>
> The "allow auto topic creation" flag is hardcoded to be true for the 
> fetch-all-topic metadata request:
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java#L37
> In the code below, the comment claims that "*This never causes 
> auto-creation*". That is NOT true: auto topic creation still gets triggered 
> under some circumstances. So, this is a bug that needs to be fixed.
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java#L68
> For example, the bug could be manifested in the below situation:
> A topic T is being deleted and a request to fetch metadata for all topics 
> gets sent to one broker. The broker reads names of all topics from its 
> metadata cache (shown below).
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L1196
> Then the broker authorizes all topics and makes sure that they are allowed to 
> be described. Then the broker tries to get metadata for every authorized 
> topic by reading the metadata cache again, once for every topic (shown below).
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L1240
> However, the metadata cache could have been updated while the broker was 
> authorizing all topics and topic T and its metadata no longer exist in the 
> cache since the topic got deleted and metadata update requests eventually got 
> propagated from the controller to all brokers. So, at this point, when the 
> broker tries to get metadata for topic T from its cache, it realizes that it 
> does not exist and the broker tries to "auto create" topic T since the 
> allow-auto-topic-creation flag was set to true in all the fetch-all-topic 
> metadata requests.
> I think this bug exists since "*metadataRequest.allowAutoTopicCreation*" was 
> introduced.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10606) Auto create non-existent topics when fetching metadata for all topics

2020-10-14 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213813#comment-17213813
 ] 

huxihx commented on KAFKA-10606:


I am wondering if we could change the default value for 
_AllowAutoTopicCreation_ to _false_ in MetadataRequest.json.

> Auto create non-existent topics when fetching metadata for all topics
> -
>
> Key: KAFKA-10606
> URL: https://issues.apache.org/jira/browse/KAFKA-10606
> Project: Kafka
>  Issue Type: Bug
>Reporter: Lincong Li
>Priority: Major
>
> The "allow auto topic creation" flag is hardcoded to be true for the 
> fetch-all-topic metadata request:
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java#L37
> In the code below, the comment claims that "*This never causes 
> auto-creation*". That is NOT true: auto topic creation still gets triggered 
> under some circumstances. So, this is a bug that needs to be fixed.
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java#L68
> For example, the bug could be manifested in the below situation:
> A topic T is being deleted and a request to fetch metadata for all topics 
> gets sent to one broker. The broker reads names of all topics from its 
> metadata cache (shown below).
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L1196
> Then the broker authorizes all topics and makes sure that they are allowed to 
> be described. Then the broker tries to get metadata for every authorized 
> topic by reading the metadata cache again, once for every topic (shown below).
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L1240
> However, the metadata cache could have been updated while the broker was 
> authorizing all topics and topic T and its metadata no longer exist in the 
> cache since the topic got deleted and metadata update requests eventually got 
> propagated from the controller to all brokers. So, at this point, when the 
> broker tries to get metadata for topic T from its cache, it realizes that it 
> does not exist and the broker tries to "auto create" topic T since the 
> allow-auto-topic-creation flag was set to true in all the fetch-all-topic 
> metadata requests.
> I think this bug exists since "*metadataRequest.allowAutoTopicCreation*" was 
> introduced.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10576) Different behavior of commitSync and commitAsync

2020-10-10 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211577#comment-17211577
 ] 

huxihx commented on KAFKA-10576:


`commitAsync` does need subsequent `poll` calls to take effect, so I prefer 
option #2 here. 
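
To illustrate the point, a minimal sketch (my own illustration, not code from 
the ticket; it assumes the consumer already has a subscription or assignment) 
of how the callback gets delivered once the consumer keeps polling:

{code:java}
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class AsyncCommit {
    // commitAsync callbacks are only invoked while the consumer thread keeps driving
    // the network client, so poll until the result arrives instead of blocking on the
    // future alone (poll() throws if the consumer has no subscription or assignment).
    static void commitAndWait(Consumer<String, String> consumer,
                              Map<TopicPartition, OffsetAndMetadata> offsets) {
        CompletableFuture<Boolean> result = new CompletableFuture<>();
        consumer.commitAsync(offsets, (committed, exception) -> {
            if (exception != null) {
                result.completeExceptionally(exception);
            } else {
                result.complete(true);
            }
        });
        while (!result.isDone()) {
            consumer.poll(Duration.ofMillis(100));
        }
    }
}
{code}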

> Different behavior of commitSync and commitAsync
> 
>
> Key: KAFKA-10576
> URL: https://issues.apache.org/jira/browse/KAFKA-10576
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Yuriy Badalyantc
>Priority: Major
>
> It looks like the consumer's {{commitSync}} and {{commitAsync}} methods have 
> different semantics.
> {code:java}
> public class TestKafka {
> public static void main(String[] args) {
> String id = "dev_test";
> Map<String, Object> settings = new HashMap<>();
> settings.put("bootstrap.servers", "localhost:9094");
> settings.put("key.deserializer", StringDeserializer.class);
> settings.put("value.deserializer", StringDeserializer.class);
> settings.put("client.id", id);
> settings.put("group.id", id);
> String topic = "test";
> Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
> offsets.put(new TopicPartition(topic, 0), new OffsetAndMetadata(1));
> try (KafkaConsumer<String, String> consumer = new 
> KafkaConsumer<>(settings)) {
> consumer.commitSync(offsets);
> }
> }
> }
> {code}
> In the example above I created a consumer and use {{commitSync}} to commit 
> offsets. This code works as expected — all offsets are committed to kafka.
> But in the case of {{commitAsync}} it will not work:
> {code:java}
> try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(settings)) {
> CompletableFuture<Boolean> result = new CompletableFuture<>();
> consumer.commitAsync(offsets, new OffsetCommitCallback() {
> @Override
> public void onComplete(Map<TopicPartition, OffsetAndMetadata> 
> offsets, Exception exception) {
> if (exception != null) {
> result.completeExceptionally(exception);
> } else {
> result.complete(true);
> }
> }
> });
> result.get(15L, TimeUnit.SECONDS);
> }
> {code}
> The {{result}} future failed with a timeout.
> This behavior is pretty surprising. From the naming and documentation, it 
> looks like {{commitSync}} and {{commitAsync}} should behave identically, 
> aside from the blocking/non-blocking aspect. But in reality, there are some 
> differences.
> I can assume that the {{commitAsync}} method somehow depends on the {{poll}} 
> calls. But I didn't find any explicit information about it in 
> {{KafkaConsumer}}'s javadoc or kafka documentation page.
> So, I believe the options are the following:
>  # It's a bug and not expected behavior. {{commitSync}} and {{commitAsync}} 
> should have identical semantics.
>  # It's expected, but not well-documented behavior. In that case, this 
> behavior should be explicitly documented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10584) IndexSearchType should use sealed trait instead of Enumeration

2020-10-09 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10584.

Fix Version/s: 2.7.0
   Resolution: Fixed

> IndexSearchType should use sealed trait instead of Enumeration
> --
>
> Key: KAFKA-10584
> URL: https://issues.apache.org/jira/browse/KAFKA-10584
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: huxihx
>Priority: Major
>  Labels: newbie
> Fix For: 2.7.0
>
>
> In Scala, we prefer sealed traits over Enumeration since the former gives you 
> exhaustiveness checking. With Scala Enumeration, you don't get a warning if 
> you add a new value that is not handled in a given pattern match.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-10584) IndexSearchType should use sealed trait instead of Enumeration

2020-10-08 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-10584:
--

Assignee: huxihx

> IndexSearchType should use sealed trait instead of Enumeration
> --
>
> Key: KAFKA-10584
> URL: https://issues.apache.org/jira/browse/KAFKA-10584
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: huxihx
>Priority: Major
>  Labels: newbie
>
> In Scala, we prefer sealed traits over Enumeration since the former gives you 
> exhaustiveness checking. With Scala Enumeration, you don't get a warning if 
> you add a new value that is not handled in a given pattern match.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10467) kafka-topic --describe fails for topic created by "produce"

2020-09-08 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192585#comment-17192585
 ] 

huxihx commented on KAFKA-10467:


It seems the problem is caused by another issue, since it should have thrown 
`Topic 'does-not-exists' does not exist` as expected. Could you manually create 
a topic using TopicCommand and then describe it to see if everything works?
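
For completeness, the same check can also be made through the Java AdminClient; 
a minimal sketch (topic name and bootstrap address are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;

public class CreateAndDescribe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Explicitly create the topic, then describe it.
            admin.createTopics(Collections.singleton(new NewTopic("manually-created", 1, (short) 1)))
                 .all().get();
            TopicDescription description = admin.describeTopics(Collections.singleton("manually-created"))
                 .all().get().get("manually-created");
            System.out.println(description);
        }
    }
}
{code}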

> kafka-topic --describe fails for topic created by "produce"
> ---
>
> Key: KAFKA-10467
> URL: https://issues.apache.org/jira/browse/KAFKA-10467
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.3.1
> Environment: MacOS 
>Reporter: Swayam Raina
>Priority: Minor
>
> {code:java}
> > kafka-topics --version
> 2.3.1 (Commit:18a913733fb71c01){code}
>  
> While producing to a topic that does not already exist
> {code:java}
> producer.send("does-not-exists", "msg-1")
> {code}
>  
> broker creates the topic
> {code:java}
> // partition file
> > ls /tmp/kafka-logs/
> does-not-exists-0{code}
>  
> If I try to list the topics, it also shows this new topic
> {code:java}
> > kafka-topics --bootstrap-server localhost:9092 --list
> does-not-exists-0
> {code}
> Now while trying to describe the topic that was auto-created the following 
> error is thrown
>  
> {code:java}
> > kafka-topics --bootstrap-server localhost:9092 --topic does-not-exists 
> >--describe
> Error while executing topic command : 
> org.apache.kafka.common.errors.UnknownServerException: The server experienced 
> an unexpected error when processing the request.Error while executing topic 
> command : org.apache.kafka.common.errors.UnknownServerException: The server 
> experienced an unexpected error when processing the request.[2020-09-08 
> 00:21:30,890] ERROR java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownServerException: The server experienced 
> an unexpected error when processing the request. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.TopicCommand$AdminClientTopicService.$anonfun$describeTopic$3(TopicCommand.scala:228)
>  at 
> kafka.admin.TopicCommand$AdminClientTopicService.$anonfun$describeTopic$3$adapted(TopicCommand.scala:225)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:74) at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:73) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:56) at 
> kafka.admin.TopicCommand$AdminClientTopicService.describeTopic(TopicCommand.scala:225)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:66) at 
> kafka.admin.TopicCommand.main(TopicCommand.scala)Caused by: 
> org.apache.kafka.common.errors.UnknownServerException: The server experienced 
> an unexpected error when processing the request. (kafka.admin.TopicCommand$)
>  
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10456) wrong description in kafka-console-producer.sh help

2020-09-02 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10456.

Fix Version/s: 2.7.0
   Resolution: Fixed

> wrong description in kafka-console-producer.sh help
> ---
>
> Key: KAFKA-10456
> URL: https://issues.apache.org/jira/browse/KAFKA-10456
> Project: Kafka
>  Issue Type: Task
>  Components: producer 
>Affects Versions: 2.6.0
> Environment: linux
>Reporter: danilo batista queiroz
>Assignee: huxihx
>Priority: Trivial
>  Labels: documentation
> Fix For: 2.7.0
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> file: core/src/main/scala/kafka/tools/ConsoleProducer.scala
> In line 151, the description of "message-send-max-retries" contains the text 
> 'retires'; the correct spelling is 'retries'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-10456) wrong description in kafka-console-producer.sh help

2020-09-01 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-10456:
--

Assignee: huxihx

> wrong description in kafka-console-producer.sh help
> ---
>
> Key: KAFKA-10456
> URL: https://issues.apache.org/jira/browse/KAFKA-10456
> Project: Kafka
>  Issue Type: Task
>  Components: producer 
>Affects Versions: 2.6.0
> Environment: linux
>Reporter: danilo batista queiroz
>Assignee: huxihx
>Priority: Trivial
>  Labels: documentation
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> file: core/src/main/scala/kafka/tools/ConsoleProducer.scala
> In line 151, the description of "message-send-max-retries" contains the text 
> 'retires'; the correct spelling is 'retries'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10450) console-producer throws Uncaught error in kafka producer I/O thread:  (org.apache.kafka.clients.producer.internals.Sender) java.lang.IllegalStateException: There are n

2020-08-31 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188082#comment-17188082
 ] 

huxihx commented on KAFKA-10450:


Same version for both clients and brokers?

> console-producer throws Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender) 
> java.lang.IllegalStateException: There are no in-flight requests for node -1
> ---
>
> Key: KAFKA-10450
> URL: https://issues.apache.org/jira/browse/KAFKA-10450
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.6.0
> Environment: Kafka Version 2.6.0
> MacOS Version - macOS Catalina 10.15.6 (19G2021)
> java version "11.0.8" 2020-07-14 LTS
> Java(TM) SE Runtime Environment 18.9 (build 11.0.8+10-LTS)
> Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.8+10-LTS, mixed mode)
>Reporter: Jigar Naik
>Priority: Blocker
>
> Kafka-console-producer.sh gives below error on Mac 
> ERROR [Producer clientId=console-producer] Uncaught error in kafka producer 
> I/O thread:  (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.IllegalStateException: There are no in-flight requests for node -1
> *Steps to reproduce the issue:* 
> Download Kafka from 
> [kafka_2.13-2.6.0.tgz|https://www.apache.org/dyn/closer.cgi?path=/kafka/2.6.0/kafka_2.13-2.6.0.tgz]
>  
> Change data and log directory (Optional)
> Create Topic Using below command 
>  
> {code:java}
> ./kafka-topics.sh \
>  --create \
>  --zookeeper localhost:2181 \
>  --replication-factor 1 \
>  --partitions 1 \
>  --topic my-topic{code}
>  
> Start Kafka console producer using below command
>  
> {code:java}
> ./kafka-console-consumer.sh \
>  --topic my-topic \
>  --from-beginning \
>  --bootstrap-server localhost:9092{code}
>  
> Gives below output
>  
> {code:java}
> ./kafka-console-producer.sh \
>  --topic my-topic \
>      --bootstrap-server 127.0.0.1:9092
> >[2020-09-01 00:24:18,177] ERROR [Producer clientId=console-producer] 
> >Uncaught error in kafka producer I/O thread:  
> >(org.apache.kafka.clients.producer.internals.Sender)
> java.nio.BufferUnderflowException
> at java.base/java.nio.Buffer.nextGetIndex(Buffer.java:650)
> at java.base/java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:391)
> at 
> org.apache.kafka.common.protocol.ByteBufferAccessor.readInt(ByteBufferAccessor.java:43)
> at 
> org.apache.kafka.common.message.ResponseHeaderData.read(ResponseHeaderData.java:102)
> at 
> org.apache.kafka.common.message.ResponseHeaderData.(ResponseHeaderData.java:70)
> at 
> org.apache.kafka.common.requests.ResponseHeader.parse(ResponseHeader.java:66)
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:717)
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:834)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)
> at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
> at java.base/java.lang.Thread.run(Thread.java:834)
> [2020-09-01 00:24:18,179] ERROR [Producer clientId=console-producer] Uncaught 
> error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.IllegalStateException: There are no in-flight requests for node -1
> at 
> org.apache.kafka.clients.InFlightRequests.requestQueue(InFlightRequests.java:62)
> at 
> org.apache.kafka.clients.InFlightRequests.completeNext(InFlightRequests.java:70)
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:833)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)
> at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
> at java.base/java.lang.Thread.run(Thread.java:834)
> [2020-09-01 00:24:18,682] WARN [Producer clientId=console-producer] Bootstrap 
> broker 127.0.0.1:9092 (id: -1 rack: null) disconnected 
> (org.apache.kafka.clients.NetworkClient)
> {code}
>  
>  
> The same steps work fine with Kafka version 2.0.0 on Mac. 
> The same steps work fine with Kafka version 2.6.0 on Windows. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10431) ProducerPerformance with payloadFile arg: add support for sequential or random outputs

2020-08-26 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185531#comment-17185531
 ] 

huxihx commented on KAFKA-10431:


It seems adding a new option to ProducerPerformance requires a KIP. I am 
thinking of replacing the random selection with sequential consumption. Would 
that be a better alternative?
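
A minimal sketch of what sequential consumption would look like (my own 
illustration, not the actual ProducerPerformance code):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class PayloadSelector {
    private final List<byte[]> payloads;
    private long counter = 0;

    PayloadSelector(List<byte[]> payloads) {
        this.payloads = payloads;
    }

    // Sequential selection: walk the payload file in order and wrap around,
    // instead of picking a random index for every record.
    byte[] next() {
        return payloads.get((int) (counter++ % payloads.size()));
    }

    public static void main(String[] args) {
        PayloadSelector selector = new PayloadSelector(Arrays.asList(
                "payload-1".getBytes(StandardCharsets.UTF_8),
                "payload-2".getBytes(StandardCharsets.UTF_8)));
        System.out.println(new String(selector.next(), StandardCharsets.UTF_8)); // payload-1
        System.out.println(new String(selector.next(), StandardCharsets.UTF_8)); // payload-2
        System.out.println(new String(selector.next(), StandardCharsets.UTF_8)); // payload-1 again
    }
}
{code}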

> ProducerPerformance with payloadFile arg: add support for sequential or 
> random outputs
> --
>
> Key: KAFKA-10431
> URL: https://issues.apache.org/jira/browse/KAFKA-10431
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.5.1
>Reporter: Zaahir Laher
>Priority: Minor
>
> When using ProducerPerformance with the --payloadFile argument and a file 
> containing multiple payloads (by default, one payload per line), 
> ProducerPerformance chooses payloads from the file at random. 
> This could result in the same payload being sent repeatedly, which may not be 
> the desired result in some cases. 
> It would be useful to have another argument that allows for sequential 
> payload submission if required. If left blank, this argument would default to 
> false (i.e. the current random selection).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9344) Logged consumer config does not always match actual config values

2020-08-25 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-9344.
---
Resolution: Fixed

> Logged consumer config does not always match actual config values
> -
>
> Key: KAFKA-9344
> URL: https://issues.apache.org/jira/browse/KAFKA-9344
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> Similar to KAFKA-8928, during consumer construction, some configs might be 
> overridden (client.id for instance), but the actual values will not be 
> reflected in the info log. It'd better display the overridden values for 
> those configs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10407) add linger.ms parameter support to KafkaLog4jAppender

2020-08-19 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10407.

Fix Version/s: 2.7.0
   Resolution: Fixed

> add linger.ms parameter support to KafkaLog4jAppender
> -
>
> Key: KAFKA-10407
> URL: https://issues.apache.org/jira/browse/KAFKA-10407
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Yu Yang
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.7.0
>
>
> Currently KafkaLog4jAppender does not accept the `linger.ms` setting. When a 
> service has an outage that causes excessive error logging, the service can 
> send too many producer requests to the Kafka brokers and overload them. 
> Setting a non-zero 'linger.ms' allows the Kafka producer to batch records and 
> reduce the number of producer requests.
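
For context, this is what the setting does at the plain producer level; a 
minimal sketch (broker address and topic are placeholders) of the batching 
knobs the appender would forward:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchedLoggingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // A non-zero linger.ms lets the producer wait briefly and batch records into
        // fewer, larger requests instead of sending every record immediately.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 500);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("app-logs", "a log line"));
        }
    }
}
{code}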



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10407) add linger.ms parameter support to KafkaLog4jAppender

2020-08-19 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180901#comment-17180901
 ] 

huxihx commented on KAFKA-10407:


Merged this to trunk and 2.7 branch.

> add linger.ms parameter support to KafkaLog4jAppender
> -
>
> Key: KAFKA-10407
> URL: https://issues.apache.org/jira/browse/KAFKA-10407
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Yu Yang
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.7.0
>
>
> Currently KafkaLog4jAppender does not accept the `linger.ms` setting. When a 
> service has an outage that causes excessive error logging, the service can 
> send too many producer requests to the Kafka brokers and overload them. 
> Setting a non-zero 'linger.ms' allows the Kafka producer to batch records and 
> reduce the number of producer requests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-10407) add linger.ms parameter support to KafkaLog4jAppender

2020-08-16 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-10407:
--

Assignee: huxihx

> add linger.ms parameter support to KafkaLog4jAppender
> -
>
> Key: KAFKA-10407
> URL: https://issues.apache.org/jira/browse/KAFKA-10407
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Yu Yang
>Assignee: huxihx
>Priority: Minor
>
> Currently KafkaLog4jAppender does not accept the `linger.ms` setting. When a 
> service has an outage that causes excessive error logging, the service can 
> send too many producer requests to the Kafka brokers and overload them. 
> Setting a non-zero 'linger.ms' allows the Kafka producer to batch records and 
> reduce the number of producer requests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10305) Print usage when parsing fails for ConsumerPerformance

2020-07-25 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10305.

Fix Version/s: 2.7.0
   Resolution: Fixed

> Print usage when parsing fails for ConsumerPerformance
> --
>
> Key: KAFKA-10305
> URL: https://issues.apache.org/jira/browse/KAFKA-10305
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.6.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.7.0
>
>
> When `kafka-consumer-perf-test.sh` is executed without required options, or 
> with no options at all, only the error message is displayed. It would be 
> better to also show the usage, as we do for kafka-console-producer.sh.
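
A minimal sketch of the requested behaviour, written in Java against the 
jopt-simple parser that the Kafka tools use (the option names here are 
hypothetical):

{code:java}
import joptsimple.OptionException;
import joptsimple.OptionParser;

public class PerfArgs {
    public static void main(String[] args) throws Exception {
        OptionParser parser = new OptionParser();
        parser.accepts("topic", "The topic to consume from.")
              .withRequiredArg().required().ofType(String.class);
        parser.accepts("bootstrap-server", "The server(s) to connect to.")
              .withRequiredArg().required().ofType(String.class);
        try {
            parser.parse(args);
        } catch (OptionException e) {
            // Print the parsing error *and* the full usage, not just the error message.
            System.err.println(e.getMessage());
            parser.printHelpOn(System.err);
            System.exit(1);
        }
    }
}
{code}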



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10305) Print usage when parsing fails for ConsumerPerformance

2020-07-23 Thread huxihx (Jira)
huxihx created KAFKA-10305:
--

 Summary: Print usage when parsing fails for ConsumerPerformance
 Key: KAFKA-10305
 URL: https://issues.apache.org/jira/browse/KAFKA-10305
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.6.0
Reporter: huxihx
Assignee: huxihx


When `kafka-consumer-perf-test.sh` is executed without required options, or 
with no options at all, only the error message is displayed. It would be better 
to also show the usage, as we do for kafka-console-producer.sh.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10268) dynamic config like "--delete-config log.retention.ms" doesn't work

2020-07-23 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10268.

Fix Version/s: 2.7.0
   Resolution: Fixed

> dynamic config like "--delete-config log.retention.ms" doesn't work
> ---
>
> Key: KAFKA-10268
> URL: https://issues.apache.org/jira/browse/KAFKA-10268
> Project: Kafka
>  Issue Type: Bug
>  Components: log, log cleaner
>Affects Versions: 2.1.1
>Reporter: zhifeng.peng
>Assignee: huxihx
>Priority: Major
> Fix For: 2.7.0
>
> Attachments: server.log.2020-07-13-14
>
>
> After I set "log.retention.ms=301000" to clean the data, I used the command
> "bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --delete-config log.retention.ms" to reset 
> it to the default.
> The static broker configuration log.retention.hours is 168h, and there is no 
> topic-level configuration like retention.ms.
> The change did not actually take effect, although server.log prints the 
> broker configuration like this:
> log.retention.check.interval.ms = 30
>  log.retention.hours = 168
>  log.retention.minutes = null
>  {color:#ff}log.retention.ms = null{color}
>  log.roll.hours = 168
>  log.roll.jitter.hours = 0
>  log.roll.jitter.ms = null
>  log.roll.ms = null
>  log.segment.bytes = 1073741824
>  log.segment.delete.delay.ms = 6
>  
> Then we can see from server.log that the retention time is still 301000ms and 
> segments have been deleted:
> [2020-07-13 14:30:00,958] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Found deletable segments with base offsets 
> [5005329,6040360] due to retention time 301000ms breach (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 5005329, size 
> 1073741222] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 6040360, size 
> 1073728116] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Incrementing log start offset to 7075648 
> (kafka.log.Log)
>  [2020-07-13 14:30:00,960] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Found deletable segments with base offsets 
> [5005330,6040410] {color:#FF}due to retention time 301000ms{color} breach 
> (kafka.log.Log)
>  [2020-07-13 14:30:00,960] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 5005330, size 
> 1073732368] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 6040410, size 
> 1073735366] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Incrementing log start offset to 7075685 
> (kafka.log.Log)
>  [2020-07-13 14:31:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Deleting segment 5005329 (kafka.log.Log)
>  [2020-07-13 14:31:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Deleting segment 6040360 (kafka.log.Log)
>  [2020-07-13 14:31:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Deleting segment 5005330 (kafka.log.Log)
>  [2020-07-13 14:31:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Deleting segment 6040410 (kafka.log.Log)
>  [2020-07-13 14:31:01,144] INFO Deleted log 
> /data/kafka_logs-test/test_retention-2/06040360.log.deleted. 
> (kafka.log.LogSegment)
>  [2020-07-13 14:31:01,144] INFO Deleted offset index 
> /data/kafka_logs-test/test_retention-2/06040360.index.deleted. 
> (kafka.log.LogSegment)
>  [2020-07-13 14:31:01,144] INFO Deleted time index 
> /data/kafka_logs-test/test_retention-2/06040360.timeindex.deleted.
>  (kafka.log.LogSegment)
>  
> Here are a few steps to reproduce it.
> 1、set log.retention.ms=301000:
> bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --add-config log.retention.ms=301000
> 2、produce messages to the topic:
> bin/kafka-producer-perf-test.sh --topic test_retention --num-records 1000 
> --throughput -1 --producer-props bootstrap.servers=10.129.104.15:9092 
> --record-size 1024
> 3、reset log.retention.ms to the default:
> bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --delete-config log.retention.ms
>  
> I have attached server.log. You can see the log from row 238 to row 731. 
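
As an aside, the same per-cluster delete can be issued through the Java 
AdminClient; a sketch (the bootstrap address is taken from the steps above, and 
the empty resource name corresponds to --entity-default):

{code:java}
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DeleteRetentionOverride {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.129.104.15:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // An empty broker name addresses the cluster-wide default (--entity-default).
            ConfigResource allBrokers = new ConfigResource(ConfigResource.Type.BROKER, "");
            AlterConfigOp delete = new AlterConfigOp(
                    new ConfigEntry("log.retention.ms", ""), AlterConfigOp.OpType.DELETE);
            Collection<AlterConfigOp> ops = Collections.singletonList(delete);
            admin.incrementalAlterConfigs(Collections.singletonMap(allBrokers, ops))
                 .all().get();
        }
    }
}
{code}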



--
This message was sent by Atlassian Jira

[jira] [Commented] (KAFKA-10269) AdminClient ListOffsetsResultInfo/timestamp is always -1

2020-07-21 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162444#comment-17162444
 ] 

huxihx commented on KAFKA-10269:


[~d-t-w] Thanks for reporting and feel free to take this ticket.

> AdminClient ListOffsetsResultInfo/timestamp is always -1
> 
>
> Key: KAFKA-10269
> URL: https://issues.apache.org/jira/browse/KAFKA-10269
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.5.0
>Reporter: Derek Troy-West
>Priority: Minor
>
> When using AdminClient/listOffsets the resulting ListOffsetResultInfos appear 
> to always have a timestamp of -1.
> I've run listOffsets against live clusters with multiple Kafka versions (from 
> 1.0 to 2.5) with both CreateTime and LogAppendTime for 
> message.timestamp.type; every result has a -1 timestamp.
> e.g. 
> {{org.apache.kafka.clients.admin.ListOffsetsResult$ListOffsetsResultInfo#0x5c3a771}}
> {{ListOffsetsResultInfo(offset=23016, timestamp=-1, leaderEpoch=Optional[0])}}
>  
>  
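
For reference, a minimal sketch of the call pattern being described (topic, 
timestamp, and bootstrap address are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class ListOffsetsTimestamp {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            ListOffsetsResultInfo info = admin
                    .listOffsets(Collections.singletonMap(tp, OffsetSpec.forTimestamp(1594000000000L)))
                    .partitionResult(tp)
                    .get();
            // Per this ticket, timestamp() comes back as -1 even when an offset is resolved.
            System.out.println("offset=" + info.offset() + " timestamp=" + info.timestamp());
        }
    }
}
{code}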



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-10268) dynamic config like "--delete-config log.retention.ms" is not work

2020-07-14 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-10268:
--

Assignee: huxihx

> dynamic config like "--delete-config log.retention.ms" is not work
> --
>
> Key: KAFKA-10268
> URL: https://issues.apache.org/jira/browse/KAFKA-10268
> Project: Kafka
>  Issue Type: Bug
>  Components: log, log cleaner
>Affects Versions: 2.1.1
>Reporter: zhifeng.peng
>Assignee: huxihx
>Priority: Major
> Attachments: server.log.2020-07-13-14
>
>
> After I set "log.retention.ms=301000" to clean the data, I used the command
> "bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --delete-config log.retention.ms" to reset 
> it to the default.
> The static broker configuration log.retention.hours is 168h, and there is no 
> topic-level configuration like retention.ms.
> The change did not actually take effect, although server.log prints the 
> broker configuration like this:
> log.retention.check.interval.ms = 30
>  log.retention.hours = 168
>  log.retention.minutes = null
>  {color:#ff}log.retention.ms = null{color}
>  log.roll.hours = 168
>  log.roll.jitter.hours = 0
>  log.roll.jitter.ms = null
>  log.roll.ms = null
>  log.segment.bytes = 1073741824
>  log.segment.delete.delay.ms = 6
>  
> Then we can see from server.log that the retention time is still 301000ms and 
> segments have been deleted:
> [2020-07-13 14:30:00,958] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Found deletable segments with base offsets 
> [5005329,6040360] due to retention time 301000ms breach (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 5005329, size 
> 1073741222] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 6040360, size 
> 1073728116] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Incrementing log start offset to 7075648 
> (kafka.log.Log)
>  [2020-07-13 14:30:00,960] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Found deletable segments with base offsets 
> [5005330,6040410] {color:#FF}due to retention time 301000ms{color} breach 
> (kafka.log.Log)
>  [2020-07-13 14:30:00,960] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 5005330, size 
> 1073732368] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 6040410, size 
> 1073735366] for deletion. (kafka.log.Log)
>  [2020-07-13 14:30:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Incrementing log start offset to 7075685 
> (kafka.log.Log)
>  [2020-07-13 14:31:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Deleting segment 5005329 (kafka.log.Log)
>  [2020-07-13 14:31:00,959] INFO [Log partition=test_retention-2, 
> dir=/data/kafka_logs-test] Deleting segment 6040360 (kafka.log.Log)
>  [2020-07-13 14:31:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Deleting segment 5005330 (kafka.log.Log)
>  [2020-07-13 14:31:00,961] INFO [Log partition=test_retention-0, 
> dir=/data/kafka_logs-test] Deleting segment 6040410 (kafka.log.Log)
>  [2020-07-13 14:31:01,144] INFO Deleted log 
> /data/kafka_logs-test/test_retention-2/06040360.log.deleted. 
> (kafka.log.LogSegment)
>  [2020-07-13 14:31:01,144] INFO Deleted offset index 
> /data/kafka_logs-test/test_retention-2/06040360.index.deleted. 
> (kafka.log.LogSegment)
>  [2020-07-13 14:31:01,144] INFO Deleted time index 
> /data/kafka_logs-test/test_retention-2/06040360.timeindex.deleted.
>  (kafka.log.LogSegment)
>  
> Here are a few steps to reproduce it.
> 1、set log.retention.ms=301000:
> bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --add-config log.retention.ms=301000
> 2、produce messages to the topic:
> bin/kafka-producer-perf-test.sh --topic test_retention --num-records 1000 
> --throughput -1 --producer-props bootstrap.servers=10.129.104.15:9092 
> --record-size 1024
> 3、reset log.retention.ms to the default:
> bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
> brokers --entity-default --alter --delete-config log.retention.ms
>  
> I have attached server.log. You can see the log from row 238 to row 731. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-10267) [Documentation] | Correction in kafka-console-producer command

2020-07-11 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17155978#comment-17155978
 ] 

huxihx edited comment on KAFKA-10267 at 7/11/20, 12:47 PM:
---

--broker-list is deprecated. Please stick to using --bootstrap-server.


was (Author: huxi_2b):
`--broker-list` is deprecated. Please stick to using `--bootstrap-server` .

> [Documentation] | Correction in kafka-console-producer command
> --
>
> Key: KAFKA-10267
> URL: https://issues.apache.org/jira/browse/KAFKA-10267
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hemant Girase
>Priority: Minor
>
> Hi Team,
> [https://kafka.apache.org/documentation/]
> In the below command, it should be "--broker-list" instead of 
> "--bootstrap-server".
> > bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic 
> > my-replicated-topic
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10267) [Documentation] | Correction in kafka-console-producer command

2020-07-11 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17155978#comment-17155978
 ] 

huxihx commented on KAFKA-10267:


`--broker-list` is deprecated. Please stick to using `--bootstrap-server` .

> [Documentation] | Correction in kafka-console-producer command
> --
>
> Key: KAFKA-10267
> URL: https://issues.apache.org/jira/browse/KAFKA-10267
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Hemant Girase
>Priority: Minor
>
> Hi Team,
> [https://kafka.apache.org/documentation/]
> In the below command, it should be "--broker-list" instead of 
> "--bootstrap-server".
> > bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic 
> > my-replicated-topic
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-10017) Flaky Test EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta

2020-07-06 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151960#comment-17151960
 ] 

huxihx edited comment on KAFKA-10017 at 7/6/20, 11:19 AM:
--

[https://builds.apache.org/job/kafka-pr-jdk14-scala2.13/1419/testReport/junit/org.apache.kafka.streams.integration/EosBetaUpgradeIntegrationTest/shouldUpgradeFromEosAlphaToEosBeta_false_/]

 
{code:java}
Error Message
java.lang.AssertionError: Did not receive all 10 records from topic 
multiPartitionOutputTopic within 6 ms, currently accumulated data is [] 
Expected: is a value equal to or greater than <10> but: <0> was less than <10>
Stacktrace
java.lang.AssertionError: Did not receive all 10 records from topic 
multiPartitionOutputTopic within 6 ms, currently accumulated data is [] 
Expected: is a value equal to or greater than <10> but: <0> was less than <10> 
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:599)
 at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:449) 
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417) 
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:595)
 at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:568)
 at 
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.readResult(EosBetaUpgradeIntegrationTest.java:973)
 at 
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.verifyCommitted(EosBetaUpgradeIntegrationTest.java:961)
 at 
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta(EosBetaUpgradeIntegrationTest.java:427)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:564) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.junit.runners.Suite.runChild(Suite.java:128) at 
org.junit.runners.Suite.runChild(Suite.java:27) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
 at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
 at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
 at jdk.internal.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at 

[jira] [Commented] (KAFKA-10017) Flaky Test EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta

2020-07-06 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151960#comment-17151960
 ] 

huxihx commented on KAFKA-10017:


[https://builds.apache.org/job/kafka-pr-jdk14-scala2.13/1419/testReport/junit/org.apache.kafka.streams.integration/EosBetaUpgradeIntegrationTest/shouldUpgradeFromEosAlphaToEosBeta_false_/]

 
{code:java}
Error Messagejava.lang.AssertionError: Did not receive all 10 records from 
topic multiPartitionOutputTopic within 6 ms, currently accumulated data is 
[] Expected: is a value equal to or greater than <10> but: <0> was less than 
<10>Stacktracejava.lang.AssertionError: Did not receive all 10 records from 
topic multiPartitionOutputTopic within 6 ms, currently accumulated data is 
[] Expected: is a value equal to or greater than <10> but: <0> was less than 
<10> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:599)
 at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:449) 
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417) 
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:595)
 at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:568)
 at 
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.readResult(EosBetaUpgradeIntegrationTest.java:973)
 at 
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.verifyCommitted(EosBetaUpgradeIntegrationTest.java:961)
 at 
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta(EosBetaUpgradeIntegrationTest.java:427)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:564) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.junit.runners.Suite.runChild(Suite.java:128) at 
org.junit.runners.Suite.runChild(Suite.java:27) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
 at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
 at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
 at jdk.internal.reflect.GeneratedMethodAccessor2.invoke(Unknown Source) at 

[jira] [Assigned] (KAFKA-10227) Enforce cleanup policy to only contain compact or delete once

2020-07-03 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-10227:
--

Assignee: huxihx

> Enforce cleanup policy to only contain compact or delete once
> -
>
> Key: KAFKA-10227
> URL: https://issues.apache.org/jira/browse/KAFKA-10227
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Assignee: huxihx
>Priority: Minor
>
> When creating or altering a topic, it's possible to set cleanup.policy to 
> values like "compact,compact,delete".
> For example:
>  {{./bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic 
> test --partitions 1 --replication-factor 1 --config 
> cleanup.policy=compact,compact,delete}}
> {{./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe}}
>  {{Topic: test PartitionCount: 1 ReplicationFactor: 1 Configs: 
> cleanup.policy=compact,compact,delete,segment.bytes=1073741824}}
>  
>  We should prevent this and enforce that the cleanup policy contains each 
> value only once.
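
A minimal sketch of the kind of validation being requested (my own illustration 
in Java; the actual check would live in the server-side config validation):

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

import org.apache.kafka.common.config.ConfigException;

public class CleanupPolicyCheck {
    // Reject a cleanup.policy value that repeats "compact" or "delete".
    static void validate(String cleanupPolicy) {
        List<String> values = Arrays.asList(cleanupPolicy.split(","));
        if (new HashSet<>(values).size() != values.size()) {
            throw new ConfigException("cleanup.policy must not repeat a value: " + cleanupPolicy);
        }
    }

    public static void main(String[] args) {
        validate("compact,delete");          // fine
        validate("compact,compact,delete");  // throws ConfigException
    }
}
{code}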



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10222) Incorrect methods show up in 0.10 Kafka Streams docs

2020-07-01 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-10222:
---
Description: 
In 0.10 Kafka Streams 
doc([http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html]),
 two wrong methods show up, as shown below:

_builder.from("my-input-topic").mapValue(value -> 
value.length().toString()).to("my-output-topic");_

 

There is no method named `from` or `mapValue`. They should be `stream` and 
`mapValues`, respectively.

 

 

  was:
In 0.10 Kafka Streams 
[doc|http://[http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html]],
 two wrong methods show up, as show below:

 _builder.from("my-input-topic").mapValue(value -> 
value.length().toString()).to("my-output-topic");_

 

There is no method named `from` or `mapValues`. They should be `stream` and 
`mapValues` respectively.

 

 


> Incorrect methods show up in 0.10 Kafka Streams docs
> 
>
> Key: KAFKA-10222
> URL: https://issues.apache.org/jira/browse/KAFKA-10222
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.10.0.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> In 0.10 Kafka Streams 
> doc([http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html]),
>  two wrong methods show up, as shown below:
> _builder.from("my-input-topic").mapValue(value -> 
> value.length().toString()).to("my-output-topic");_
>  
> There is no method named `from` or `mapValue`. They should be `stream` and 
> `mapValues`, respectively.
>  
>  
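
To make the correction concrete, here is a minimal sketch of the intended call with the real method names, written against the 0.10-era KStreamBuilder API that the doc covers (topic names are the placeholders from the report):

{code:java}
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class StreamsDocExample {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> source = builder.stream("my-input-topic"); // stream(), not from()
        source.mapValues(value -> Integer.toString(value.length()))        // mapValues(), not mapValue()
              .to("my-output-topic");
        // A complete application would still build a KafkaStreams instance
        // from this builder plus a StreamsConfig and call start().
    }
}
{code}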



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10222) Incorrect methods show up in 0.10 Kafka Streams docs

2020-07-01 Thread huxihx (Jira)
huxihx created KAFKA-10222:
--

 Summary: Incorrect methods show up in 0.10 Kafka Streams docs
 Key: KAFKA-10222
 URL: https://issues.apache.org/jira/browse/KAFKA-10222
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.10.0.0
Reporter: huxihx
Assignee: huxihx


In 0.10 Kafka Streams 
[doc|http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html],
 two wrong methods show up, as shown below:

 _builder.from("my-input-topic").mapValue(value -> 
value.length().toString()).to("my-output-topic");_

 

There is no method named `from` or `mapValue`. They should be `stream` and 
`mapValues`, respectively.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10220) NPE when describing resources

2020-06-30 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148774#comment-17148774
 ] 

huxihx commented on KAFKA-10220:


Well, from the broker perspective, you are right: only trunk is affected. What I 
mean is that we will also hit the NPE when 2.6 clients talk to a trunk broker.

> NPE when describing resources
> -
>
> Key: KAFKA-10220
> URL: https://issues.apache.org/jira/browse/KAFKA-10220
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Edoardo Comar
>Assignee: Luke Chen
>Priority: Major
>
> In current trunk code 
>  Describing a topic from the CLI can fail with an NPE in the broker
> on the line 
> {{          
> resource.configurationKeys.asScala.forall(_.contains(configName))}}
>  
> (configurationKeys is null)
> {{[2020-06-30 11:10:39,464] ERROR [Admin Manager on Broker 0]: Error 
> processing describe configs request for resource 
> DescribeConfigsResource(resourceType=2, resourceName='topic1', 
> configurationKeys=null) 
> (kafka.server.AdminManager)}}{{java.lang.NullPointerException}}{{at 
> kafka.server.AdminManager.$anonfun$describeConfigs$3(AdminManager.scala:395)}}{{at
>  
> kafka.server.AdminManager.$anonfun$describeConfigs$3$adapted(AdminManager.scala:393)}}{{at
>  
> scala.collection.TraversableLike.$anonfun$filterImpl$1(TraversableLike.scala:248)}}{{at
>  scala.collection.Iterator.foreach(Iterator.scala:929)}}{{at 
> scala.collection.Iterator.foreach$(Iterator.scala:929)}}{{at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)}}{{at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)}}{{at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)}}{{at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)}}{{at 
> scala.collection.TraversableLike.filterImpl(TraversableLike.scala:247)}}{{at 
> scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:245)}}{{at 
> scala.collection.AbstractTraversable.filterImpl(Traversable.scala:104)}}{{at 
> scala.collection.TraversableLike.filter(TraversableLike.scala:259)}}{{at 
> scala.collection.TraversableLike.filter$(TraversableLike.scala:259)}}{{at 
> scala.collection.AbstractTraversable.filter(Traversable.scala:104)}}{{at 
> kafka.server.AdminManager.createResponseConfig$1(AdminManager.scala:393)}}{{at
>  
> kafka.server.AdminManager.$anonfun$describeConfigs$1(AdminManager.scala:412)}}{{at
>  scala.collection.immutable.List.map(List.scala:283)}}{{at 
> kafka.server.AdminManager.describeConfigs(AdminManager.scala:386)}}{{at 
> kafka.server.KafkaApis.handleDescribeConfigsRequest(KafkaApis.scala:2595)}}{{at
>  kafka.server.KafkaApis.handle(KafkaApis.scala:165)}}{{at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)}}{{at 
> java.lang.Thread.run(Thread.java:748)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10220) NPE when describing resources

2020-06-30 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148738#comment-17148738
 ] 

huxihx commented on KAFKA-10220:


[~ijuma] This should affect 2.6 as well, since `configurationKeys` only starts to 
be initialized in 2.7 due to the refinement introduced by 
[KAFKA-9432|https://issues.apache.org/jira/browse/KAFKA-9432]. In any case, since 
`configurationKeys` is nullable, a null check should be added when processing 
the resources in AdminManager.
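
For reference, the code path can be exercised with a plain Admin describe-configs call like the sketch below (bootstrap server and topic name are placeholders); the NPE itself is broker-side, the client call is unremarkable:

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "topic1");
            // Clients that do not populate configurationKeys produce the
            // DescribeConfigsResource(..., configurationKeys=null) seen in the broker
            // log quoted in the issue description.
            Config config = admin.describeConfigs(Collections.singleton(topic))
                    .all().get().get(topic);
            config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
        }
    }
}
{code}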

> NPE when describing resources
> -
>
> Key: KAFKA-10220
> URL: https://issues.apache.org/jira/browse/KAFKA-10220
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Edoardo Comar
>Assignee: Luke Chen
>Priority: Major
>
> In current trunk code 
>  Describing a topic from the CLI can fail with an NPE in the broker
> on the line 
> {{          
> resource.configurationKeys.asScala.forall(_.contains(configName))}}
>  
> (configurationKeys is null)
> {{[2020-06-30 11:10:39,464] ERROR [Admin Manager on Broker 0]: Error 
> processing describe configs request for resource 
> DescribeConfigsResource(resourceType=2, resourceName='topic1', 
> configurationKeys=null) 
> (kafka.server.AdminManager)}}{{java.lang.NullPointerException}}{{at 
> kafka.server.AdminManager.$anonfun$describeConfigs$3(AdminManager.scala:395)}}{{at
>  
> kafka.server.AdminManager.$anonfun$describeConfigs$3$adapted(AdminManager.scala:393)}}{{at
>  
> scala.collection.TraversableLike.$anonfun$filterImpl$1(TraversableLike.scala:248)}}{{at
>  scala.collection.Iterator.foreach(Iterator.scala:929)}}{{at 
> scala.collection.Iterator.foreach$(Iterator.scala:929)}}{{at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)}}{{at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)}}{{at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)}}{{at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)}}{{at 
> scala.collection.TraversableLike.filterImpl(TraversableLike.scala:247)}}{{at 
> scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:245)}}{{at 
> scala.collection.AbstractTraversable.filterImpl(Traversable.scala:104)}}{{at 
> scala.collection.TraversableLike.filter(TraversableLike.scala:259)}}{{at 
> scala.collection.TraversableLike.filter$(TraversableLike.scala:259)}}{{at 
> scala.collection.AbstractTraversable.filter(Traversable.scala:104)}}{{at 
> kafka.server.AdminManager.createResponseConfig$1(AdminManager.scala:393)}}{{at
>  
> kafka.server.AdminManager.$anonfun$describeConfigs$1(AdminManager.scala:412)}}{{at
>  scala.collection.immutable.List.map(List.scala:283)}}{{at 
> kafka.server.AdminManager.describeConfigs(AdminManager.scala:386)}}{{at 
> kafka.server.KafkaApis.handleDescribeConfigsRequest(KafkaApis.scala:2595)}}{{at
>  kafka.server.KafkaApis.handle(KafkaApis.scala:165)}}{{at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)}}{{at 
> java.lang.Thread.run(Thread.java:748)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10220) NPE when describing resources

2020-06-30 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148550#comment-17148550
 ] 

huxihx commented on KAFKA-10220:


[~ecomar]  Thanks for reporting. I could not reproduce this issue with trunk. 
Are you using the latest code for clients?

> NPE when describing resources
> -
>
> Key: KAFKA-10220
> URL: https://issues.apache.org/jira/browse/KAFKA-10220
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Edoardo Comar
>Priority: Major
>
> In current trunk code 
>  Describing a topic from the CLI can fail with an NPE in the broker
> on the line 
> {{          
> resource.configurationKeys.asScala.forall(_.contains(configName))}}
>  
> (configurationKeys is null?)
> {{[2020-06-30 11:10:39,464] ERROR [Admin Manager on Broker 0]: Error 
> processing describe configs request for resource 
> DescribeConfigsResource(resourceType=2, resourceName='topic1', 
> configurationKeys=null) 
> (kafka.server.AdminManager)}}{{java.lang.NullPointerException}}{{at 
> kafka.server.AdminManager.$anonfun$describeConfigs$3(AdminManager.scala:395)}}{{at
>  
> kafka.server.AdminManager.$anonfun$describeConfigs$3$adapted(AdminManager.scala:393)}}{{at
>  
> scala.collection.TraversableLike.$anonfun$filterImpl$1(TraversableLike.scala:248)}}{{at
>  scala.collection.Iterator.foreach(Iterator.scala:929)}}{{at 
> scala.collection.Iterator.foreach$(Iterator.scala:929)}}{{at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)}}{{at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)}}{{at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)}}{{at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)}}{{at 
> scala.collection.TraversableLike.filterImpl(TraversableLike.scala:247)}}{{at 
> scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:245)}}{{at 
> scala.collection.AbstractTraversable.filterImpl(Traversable.scala:104)}}{{at 
> scala.collection.TraversableLike.filter(TraversableLike.scala:259)}}{{at 
> scala.collection.TraversableLike.filter$(TraversableLike.scala:259)}}{{at 
> scala.collection.AbstractTraversable.filter(Traversable.scala:104)}}{{at 
> kafka.server.AdminManager.createResponseConfig$1(AdminManager.scala:393)}}{{at
>  
> kafka.server.AdminManager.$anonfun$describeConfigs$1(AdminManager.scala:412)}}{{at
>  scala.collection.immutable.List.map(List.scala:283)}}{{at 
> kafka.server.AdminManager.describeConfigs(AdminManager.scala:386)}}{{at 
> kafka.server.KafkaApis.handleDescribeConfigsRequest(KafkaApis.scala:2595)}}{{at
>  kafka.server.KafkaApis.handle(KafkaApis.scala:165)}}{{at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)}}{{at 
> java.lang.Thread.run(Thread.java:748)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10182) Change number of partitions of __consumer_offsets

2020-06-24 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143807#comment-17143807
 ] 

huxihx commented on KAFKA-10182:


[~simpleBread] Did you have a chance to check whether the createPartitions API works?

> Change number of partitions of __consumer_offsets 
> --
>
> Key: KAFKA-10182
> URL: https://issues.apache.org/jira/browse/KAFKA-10182
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xu Zhang
>Priority: Major
>
> Currently the number of partitions of {{__consumer_offsets}} cannot be changed for 
> the lifetime of the cluster, and it's generally a really bad idea to change it 
> after the topic is initially created, because the hashing of consumer group 
> names to partitions would change, which means the group coordinator would lose 
> its history.
>  
> Is there a way to change the number of partitions for __consumer_offsets? 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10182) Change number of partitions of __consumer_offsets

2020-06-18 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139155#comment-17139155
 ] 

huxihx commented on KAFKA-10182:


You could use the `Admin.createPartitions` API to increase the partition count for 
this internal topic.
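
A minimal sketch of that call (bootstrap server and target count are placeholders; note the caveat from the description that changing the partition count changes which partition each group hashes to):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class GrowConsumerOffsets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Grow __consumer_offsets to 100 partitions; partition counts can only be increased.
            admin.createPartitions(Collections.singletonMap(
                    "__consumer_offsets", NewPartitions.increaseTo(100)))
                 .all().get();
        }
    }
}
{code}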

> Change number of partitions of __consumer_offsets 
> --
>
> Key: KAFKA-10182
> URL: https://issues.apache.org/jira/browse/KAFKA-10182
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xu Zhang
>Priority: Major
>
> Currently the number of partitions of {{__consumer_offsets}} cannot be changed for 
> the lifetime of the cluster, and it's generally a really bad idea to change it 
> after the topic is initially created, because the hashing of consumer group 
> names to partitions would change, which means the group coordinator would lose 
> its history.
>  
> Is there a way to change the number of partitions for __consumer_offsets? 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9541) Flaky Test DescribeConsumerGroupTest#testDescribeGroupMembersWithShortInitializationTimeout

2020-02-24 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9541:
-

Assignee: (was: huxihx)

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeGroupMembersWithShortInitializationTimeout
> ---
>
> Key: KAFKA-9541
> URL: https://issues.apache.org/jira/browse/KAFKA-9541
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.4.0
>Reporter: huxihx
>Priority: Major
>
> h3. Error Message
> java.lang.AssertionError: assertion failed
> h3. Stacktrace
> java.lang.AssertionError: assertion failed at 
> scala.Predef$.assert(Predef.scala:267) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeGroupMembersWithShortInitializationTimeout(DescribeConsumerGroupTest.scala:630)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>  at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
>  at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> 

[jira] [Comment Edited] (KAFKA-9541) Flaky Test DescribeConsumerGroupTest#testDescribeGroupMembersWithShortInitializationTimeout

2020-02-12 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035028#comment-17035028
 ] 

huxihx edited comment on KAFKA-9541 at 2/13/20 2:14 AM:


Occasionally the captured exception is DisconnectException instead of 
TimeoutException. That might be due to an unexpected long pause that caused the 
node disconnection.


was (Author: huxi_2b):
Occasionally the captured exception is DisconnectedException instead of 
TimeoutException. That might be due to an unexpected long pause that caused the 
node disconnection.

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeGroupMembersWithShortInitializationTimeout
> ---
>
> Key: KAFKA-9541
> URL: https://issues.apache.org/jira/browse/KAFKA-9541
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> h3. Error Message
> java.lang.AssertionError: assertion failed
> h3. Stacktrace
> java.lang.AssertionError: assertion failed at 
> scala.Predef$.assert(Predef.scala:267) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeGroupMembersWithShortInitializationTimeout(DescribeConsumerGroupTest.scala:630)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>  at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
>  at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at 
> 

[jira] [Updated] (KAFKA-9541) Flaky Test DescribeConsumerGroupTest#testDescribeGroupMembersWithShortInitializationTimeout

2020-02-11 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-9541:
--
Summary: Flaky Test 
DescribeConsumerGroupTest#testDescribeGroupMembersWithShortInitializationTimeout
  (was: Flaky Test 
DescribeConsumerGroupTest#testDescribeGroupWithShortInitializationTimeout)

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeGroupMembersWithShortInitializationTimeout
> ---
>
> Key: KAFKA-9541
> URL: https://issues.apache.org/jira/browse/KAFKA-9541
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> h3. Error Message
> java.lang.AssertionError: assertion failed
> h3. Stacktrace
> java.lang.AssertionError: assertion failed at 
> scala.Predef$.assert(Predef.scala:267) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeGroupMembersWithShortInitializationTimeout(DescribeConsumerGroupTest.scala:630)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>  at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
>  at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> 

[jira] [Commented] (KAFKA-9541) Flaky Test DescribeConsumerGroupTest#testDescribeGroupWithShortInitializationTimeout

2020-02-11 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035028#comment-17035028
 ] 

huxihx commented on KAFKA-9541:
---

Occasionally the captured exception is DisconnectedException instead of 
TimeoutException. That might be due to an unexpected long pause that caused the 
node disconnection.

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeGroupWithShortInitializationTimeout
> 
>
> Key: KAFKA-9541
> URL: https://issues.apache.org/jira/browse/KAFKA-9541
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> h3. Error Message
> java.lang.AssertionError: assertion failed
> h3. Stacktrace
> java.lang.AssertionError: assertion failed at 
> scala.Predef$.assert(Predef.scala:267) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeGroupMembersWithShortInitializationTimeout(DescribeConsumerGroupTest.scala:630)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>  at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
>  at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> 

[jira] [Created] (KAFKA-9541) Flaky Test DescribeConsumerGroupTest#testDescribeGroupWithShortInitializationTimeout

2020-02-11 Thread huxihx (Jira)
huxihx created KAFKA-9541:
-

 Summary: Flaky Test 
DescribeConsumerGroupTest#testDescribeGroupWithShortInitializationTimeout
 Key: KAFKA-9541
 URL: https://issues.apache.org/jira/browse/KAFKA-9541
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.4.0
Reporter: huxihx
Assignee: huxihx


h3. Error Message

java.lang.AssertionError: assertion failed
h3. Stacktrace

java.lang.AssertionError: assertion failed at 
scala.Predef$.assert(Predef.scala:267) at 
kafka.admin.DescribeConsumerGroupTest.testDescribeGroupMembersWithShortInitializationTimeout(DescribeConsumerGroupTest.scala:630)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
 at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
 at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
 at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
 at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
 at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
 at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
 at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
 at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
 at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
 at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
 at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
 at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:412)
 at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
 at 

[jira] [Updated] (KAFKA-9322) Add `tail -n` feature for ConsoleConsumer

2020-01-19 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-9322:
--
Labels: needs-kip  (was: )

> Add `tail -n` feature for ConsoleConsumer
> -
>
> Key: KAFKA-9322
> URL: https://issues.apache.org/jira/browse/KAFKA-9322
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>  Labels: needs-kip
>
> When debugging, it would be convenient to quickly check the last N messages 
> for a partition using ConsoleConsumer. Currently `offset` cannot be negative 
> except for -1 and -2. However, we could simply relax this rule to support a 
> `tail -n` feature. A tricky thing is that we currently treat -1 and -2 as 
> special values, so I am not sure whether a new KIP is needed.
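
As an illustration of what such a `tail -n` would do under the hood, a minimal sketch with the plain consumer API (topic, partition, and N are placeholders):

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TailN {
    public static void main(String[] args) {
        long n = 10L;
        TopicPartition tp = new TopicPartition("my-topic", 0);
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp));
            // Find the log end offset and rewind at most n messages from it.
            long end = consumer.endOffsets(Collections.singleton(tp)).get(tp);
            consumer.seek(tp, Math.max(0, end - n));
            for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("%d: %s%n", r.offset(), r.value());
            }
        }
    }
}
{code}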



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8881) Measure thread running time precisely

2020-01-12 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-8881:
--
Description: Currently, the code uses `System.currentTimeMillis()` to 
measure timeout extensively. However, many situations trigger the thread 
suspend such as gc and context switch. In such cases, the timeout value we 
specify is not strictly honored. I believe many of flaky tests failed with 
timed-out are a result of this. Maybe we should use 
ThreadMXBean#getCurrentThreadCpuTime to precisely measure the thread running 
time.  (was: Currently, the code uses `System.currentTimeMillis()` to measure 
timeout extensively. However, many situations trigger the thread suspend such 
as gc and context switch. In such cases, the timeout value we specify is not 
strictly honored. I believe many of flaky tests failed with timed-out are a 
result of this. Maybe we should use ThreadMXBean#getCurrentThreadUserTime to 
precisely measure the thread running time.)

> Measure thread running time precisely
> -
>
> Key: KAFKA-8881
> URL: https://issues.apache.org/jira/browse/KAFKA-8881
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: huxihx
>Priority: Major
>  Labels: needs-discussion
>
> Currently, the code uses `System.currentTimeMillis()` extensively to measure 
> timeouts. However, many situations, such as GC pauses and context switches, 
> suspend the thread. In such cases, the timeout value we specify is not 
> strictly honored. I believe many of the flaky tests that fail with timeouts 
> are a result of this. Maybe we should use ThreadMXBean#getCurrentThreadCpuTime 
> to precisely measure the thread running time.
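
A minimal sketch of the difference between the two clocks (the sleep simply stands in for the code being timed):

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadTiming {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long wallStart = System.currentTimeMillis();    // wall clock: includes GC pauses and context switches
        long cpuStart = bean.getCurrentThreadCpuTime(); // CPU time consumed by this thread, in nanoseconds

        Thread.sleep(100); // placeholder for the work under test

        long wallMs = System.currentTimeMillis() - wallStart;
        long cpuMs = (bean.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        // cpuMs stays near zero here because sleeping consumes no CPU time,
        // while wallMs reflects the full elapsed time.
        System.out.printf("wall=%d ms, cpu=%d ms%n", wallMs, cpuMs);
    }
}
{code}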



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9344) Logged consumer config does not always match actual config values

2019-12-29 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-9344:
--
Description: Similar to KAFKA-8928, during consumer construction, some 
configs might be overridden (client.id for instance), but the actual values 
will not be reflected in the info log. It'd better display the overridden 
values for those configs.  (was: Similar to 
[KAFKA-8928|https://issues.apache.org/jira/browse/KAFKA-8928]During consumer 
construction, some configs might be overridden (client.id for instance), but 
the actual values will not be reflected in the info log. It'd better display 
the overridden values for those configs.)

> Logged consumer config does not always match actual config values
> -
>
> Key: KAFKA-9344
> URL: https://issues.apache.org/jira/browse/KAFKA-9344
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> Similar to KAFKA-8928, during consumer construction, some configs might be 
> overridden (client.id for instance), but the actual values will not be 
> reflected in the info log. It would be better to display the overridden values 
> for those configs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9344) Logged consumer config does not always match actual config values

2019-12-29 Thread huxihx (Jira)
huxihx created KAFKA-9344:
-

 Summary: Logged consumer config does not always match actual 
config values
 Key: KAFKA-9344
 URL: https://issues.apache.org/jira/browse/KAFKA-9344
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.4.0
Reporter: huxihx
Assignee: huxihx


Similar to [KAFKA-8928|https://issues.apache.org/jira/browse/KAFKA-8928], during 
consumer construction some configs might be overridden (client.id for 
instance), but the actual values will not be reflected in the info log. It would 
be better to display the overridden values for those configs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9236) Confused log after using CLI scripts to produce messages

2019-12-25 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003415#comment-17003415
 ] 

huxihx commented on KAFKA-9236:
---

[~iamabug] Can we close this Jira now?

> Confused log after using CLI scripts to produce messages
> 
>
> Key: KAFKA-9236
> URL: https://issues.apache.org/jira/browse/KAFKA-9236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xiang Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9277) move all group state transition rules into their states

2019-12-25 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9277:
-

Assignee: dengziming  (was: huxihx)

> move all group state transition rules into their states
> ---
>
> Key: KAFKA-9277
> URL: https://issues.apache.org/jira/browse/KAFKA-9277
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dengziming
>Assignee: dengziming
>Priority: Minor
> Fix For: 2.5.0
>
>
> Today the `GroupMetadata` maintain a validPreviousStates map of all 
> GroupState:
> ```
> private val validPreviousStates: Map[GroupState, Set[GroupState]] =
>  Map(Dead -> Set(Stable, PreparingRebalance, CompletingRebalance, Empty, 
> Dead),
>  CompletingRebalance -> Set(PreparingRebalance),
>  Stable -> Set(CompletingRebalance),
>  PreparingRebalance -> Set(Stable, CompletingRebalance, Empty),
>  Empty -> Set(PreparingRebalance))
> ```
> It would be cleaner to move all state transition rules into their states :
> ```
> private[group] sealed trait GroupState {
>  val validPreviousStates: Set[GroupState]
> }
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9277) move all group state transition rules into their states

2019-12-25 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9277:
-

Assignee: huxihx  (was: dengziming)

> move all group state transition rules into their states
> ---
>
> Key: KAFKA-9277
> URL: https://issues.apache.org/jira/browse/KAFKA-9277
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dengziming
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.5.0
>
>
> Today the `GroupMetadata` maintain a validPreviousStates map of all 
> GroupState:
> ```
> private val validPreviousStates: Map[GroupState, Set[GroupState]] =
>  Map(Dead -> Set(Stable, PreparingRebalance, CompletingRebalance, Empty, 
> Dead),
>  CompletingRebalance -> Set(PreparingRebalance),
>  Stable -> Set(CompletingRebalance),
>  PreparingRebalance -> Set(Stable, CompletingRebalance, Empty),
>  Empty -> Set(PreparingRebalance))
> ```
> It would be cleaner to move all state transition rules into their states :
> ```
> private[group] sealed trait GroupState {
>  val validPreviousStates: Set[GroupState]
> }
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9254) Topic level configuration failed

2019-12-24 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9254:
-

Assignee: huxihx

> Topic level configuration failed
> 
>
> Key: KAFKA-9254
> URL: https://issues.apache.org/jira/browse/KAFKA-9254
> Project: Kafka
>  Issue Type: Bug
>  Components: config, log, replication
>Affects Versions: 2.0.1
>Reporter: fenghong
>Assignee: huxihx
>Priority: Critical
>
> We are engineers at Huobi and have run into a Kafka bug: 
> modifying DynamicBrokerConfig more than 2 times invalidates unrelated 
> topic-level configuration.
> The reproduction steps are as follows:
>  # Set the broker config min.insync.replicas=3 in server.properties
>  # Create topic test-1 and set its topic-level config min.insync.replicas=2
>  # Dynamically modify the configuration twice as shown below
> {code:java}
> bin/kafka-configs.sh --bootstrap-server xxx:9092 --entity-type brokers 
> --entity-default --alter --add-config log.message.timestamp.type=LogAppendTime
> bin/kafka-configs.sh --bootstrap-server xxx:9092 --entity-type brokers 
> --entity-default --alter --add-config log.retention.ms=60480
> {code}
>  # Stop a Kafka server and observe the exception shown below
>  org.apache.kafka.common.errors.NotEnoughReplicasException: Number of insync 
> replicas for partition test-1-0 is [2], below required minimum [3]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9322) Add `tail -n` feature for ConsoleConsumer

2019-12-19 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-9322:
--
Description: When debugging, it will be convenient to quickly check the 
last N messages for a partition using ConsoleConsumer. Currently `offset` could 
not be negative except -1 and -2. However, we could simply break this rule to 
support `tail -n` feature. A tricky thing is currently we treat -1 and -2 as 
special values, so I am not sure if a new KIP is needed.  (was: When debugging, 
it will be convenient to quickly check the last N messages for a partition 
using ConsoleConsumer. Currently `offset` could not be negative. However, we 
could simply break this rule to support `tail -n` feature. A tricky thing is 
currently we treat -1 and -2 as special values, so I am not sure if a new KIP 
is needed.)

> Add `tail -n` feature for ConsoleConsumer
> -
>
> Key: KAFKA-9322
> URL: https://issues.apache.org/jira/browse/KAFKA-9322
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> When debugging, it will be convenient to quickly check the last N messages 
> for a partition using ConsoleConsumer. Currently `offset` could not be 
> negative except -1 and -2. However, we could simply break this rule to 
> support `tail -n` feature. A tricky thing is currently we treat -1 and -2 as 
> special values, so I am not sure if a new KIP is needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9322) Add `tail -n` feature for ConsoleConsumer

2019-12-19 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-9322:
--
Description: When debugging, it will be convenient to quickly check the 
last N messages for a partition using ConsoleConsumer. Currently `offset` could 
not be negative. However, we could simply break this rule to support `tail -n` 
feature. A tricky thing is currently we treat -1 and -2 as special values, so I 
am not sure if a new KIP is needed.  (was: When debugging, it will be 
convenient to quickly check the last N messages for a partition using 
ConsoleConsumer. Currently `offset` could not be negative. However, we could 
simply break this rule to support `tail -n` feature.)

> Add `tail -n` feature for ConsoleConsumer
> -
>
> Key: KAFKA-9322
> URL: https://issues.apache.org/jira/browse/KAFKA-9322
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Major
>
> When debugging, it will be convenient to quickly check the last N messages 
> for a partition using ConsoleConsumer. Currently `offset` could not be 
> negative. However, we could simply break this rule to support `tail -n` 
> feature. A tricky thing is currently we treat -1 and -2 as special values, so 
> I am not sure if a new KIP is needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9322) Add `tail -n` feature for ConsoleConsumer

2019-12-19 Thread huxihx (Jira)
huxihx created KAFKA-9322:
-

 Summary: Add `tail -n` feature for ConsoleConsumer
 Key: KAFKA-9322
 URL: https://issues.apache.org/jira/browse/KAFKA-9322
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.4.0
Reporter: huxihx
Assignee: huxihx


When debugging, it will be convenient to quickly check the last N messages for 
a partition using ConsoleConsumer. Currently `offset` could not be negative. 
However, we could simply break this rule to support `tail -n` feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9317) Provide a group admin command to get group information or do some admin operation

2019-12-19 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000562#comment-17000562
 ] 

huxihx commented on KAFKA-9317:
---

Does `kafka-consumer-groups.sh` satisfy your requirement?

> Provide a group admin command to get group information or do some admin 
> operation
> -
>
> Key: KAFKA-9317
> URL: https://issues.apache.org/jira/browse/KAFKA-9317
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.0.0
>Reporter: maobaolong
>Priority: Minor
>
> How to run this command:
> If you have the Kafka shell scripts: bin/kafka-run-class.sh kafka.admin.GroupCommand
> If you only have a fat jar: java -cp xxx.jar kafka.admin.GroupCommand
> The following sub-commands are supported:
> list - list groups. Example: --bootstrap-server localhost:9099 --list
> describe - describe a group and show its members. Example: --bootstrap-server 
> localhost:9092 --describe --group producer-xx-127.0.0.1
> leaveGroup - remove the specified member from a given group (star means all). 
> Example: --bootstrap-server localhost:9092 --leaveGroup --group 
> producer-x-127.0.0.1 --memberId \*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9300) Create a topic based on the specified brokers

2019-12-19 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999835#comment-16999835
 ] 

huxihx commented on KAFKA-9300:
---

I mean that a new config deserves a KIP so that people can discuss whether it 
should be added :)

> Create a topic based on the specified brokers
> -
>
> Key: KAFKA-9300
> URL: https://issues.apache.org/jira/browse/KAFKA-9300
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Affects Versions: 2.3.0
>Reporter: weiwei
>Assignee: weiwei
>Priority: Major
> Fix For: 2.4.0
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> Generally, a Kafka cluster serves multiple businesses. To reduce the impact 
> between businesses, many companies physically isolate them by dedicating 
> brokers to them; that is, the topics of certain businesses are created on 
> specified brokers. The current topic creation script only supports creating a 
> topic according to an explicit replica-assignment, which is not convenient 
> when a service just wants to specify the brokers. Therefore, this function 
> should be added: create a topic based on the specified brokers. A 
> replica-assignment-brokers parameter is added to indicate the broker range for 
> the topic distribution. If this parameter is not set, all broker nodes in the 
> cluster are used. For example, kafka-topics.sh --create --topic test06 
> --partitions 2 --replication-factor 1 --zookeeper zkurl -- 
> --replica-assignment-brokers=1,2.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9316) ConsoleProducer help info not expose default properties

2019-12-18 Thread huxihx (Jira)
huxihx created KAFKA-9316:
-

 Summary: ConsoleProducer help info not expose default properties
 Key: KAFKA-9316
 URL: https://issues.apache.org/jira/browse/KAFKA-9316
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.4.0
Reporter: huxihx
Assignee: huxihx


Unlike ConsoleConsumer, ConsoleProducer help info does not expose default 
properties. Users cannot know what default properties are supported by checking 
the help info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9300) Create a topic based on the specified brokers

2019-12-16 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997081#comment-16997081
 ] 

huxihx commented on KAFKA-9300:
---

This might need a KIP.

> Create a topic based on the specified brokers
> -
>
> Key: KAFKA-9300
> URL: https://issues.apache.org/jira/browse/KAFKA-9300
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Affects Versions: 2.2.2
>Reporter: weiwei
>Assignee: weiwei
>Priority: Major
> Fix For: 2.4.0
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> Generally, a Kafka cluster serves multiple businesses. To reduce the impact 
> between businesses, many companies physically isolate them by dedicating 
> brokers to them; that is, the topics of certain businesses are created on 
> specified brokers. The current topic creation script only supports creating a 
> topic according to an explicit replica-assignment, which is not convenient 
> when a service just wants to specify the brokers. Therefore, this function 
> should be added: create a topic based on the specified brokers. A 
> replica-assignment-brokers parameter is added to indicate the broker range for 
> the topic distribution. If this parameter is not set, all broker nodes in the 
> cluster are used. For example, kafka-topics.sh --create --topic test06 
> --partitions 2 --replication-factor 1 --zookeeper zkurl -- 
> --replica-assignment-brokers=1,2.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9236) Confused log after using CLI scripts to produce messages

2019-12-01 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16985548#comment-16985548
 ] 

huxihx commented on KAFKA-9236:
---

Sometimes it makes sense to specify a timeout when closing the producer if you 
don't want to wait indefinitely for all in-flight requests to complete. In such 
a case, logging the timeout value sounds reasonable.
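For illustration, KafkaProducer offers both close variants; a minimal sketch with placeholder broker address and topic name:
{code:java}
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CloseTimeoutDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test-topic", "key", "value"));

        // producer.close() waits indefinitely for in-flight requests, which is
        // what the CLI logs as a close with timeout Long.MAX_VALUE ms.
        // The bounded variant gives up after the given duration:
        producer.close(Duration.ofSeconds(5));
    }
}
{code}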

> Confused log after using CLI scripts to produce messages
> 
>
> Key: KAFKA-9236
> URL: https://issues.apache.org/jira/browse/KAFKA-9236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xiang Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9254) Topic level configuration failed

2019-12-01 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16985542#comment-16985542
 ] 

huxihx commented on KAFKA-9254:
---

It seems I cannot reproduce this with 2.3.0. Could you retry with the newest version?

> Topic level configuration failed
> 
>
> Key: KAFKA-9254
> URL: https://issues.apache.org/jira/browse/KAFKA-9254
> Project: Kafka
>  Issue Type: Bug
>  Components: config, log, replication
>Affects Versions: 2.0.1
>Reporter: fenghong
>Priority: Critical
>
> We are engineers at Huobi and have encountered a Kafka bug: 
> modifying DynamicBrokerConfig more than two times invalidates unrelated 
> topic-level configuration.
> The bug can be reproduced as follows:
>  # Set the broker config min.insync.replicas=3 in server.properties
>  # Create topic test-1 and set the topic-level config min.insync.replicas=2
>  # Dynamically modify the default broker configuration twice, as shown below
> {code:java}
> bin/kafka-configs.sh --bootstrap-server xxx:9092 --entity-type brokers 
> --entity-default --alter --add-config log.message.timestamp.type=LogAppendTime
> bin/kafka-configs.sh --bootstrap-server xxx:9092 --entity-type brokers 
> --entity-default --alter --add-config log.retention.ms=60480
> {code}
>  # Stop a Kafka server; the following exception is observed
>  org.apache.kafka.common.errors.NotEnoughReplicasException: Number of insync 
> replicas for partition test-1-0 is [2], below required minimum [3]
>  
>  
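As a side note, one way to check whether the topic-level override is still in effect after the dynamic broker changes is to describe the topic configuration; a sketch using the ZooKeeper-based form since the report is against 2.0.1 (the connection string is a placeholder):
{code:java}
# Lists topic-level overrides such as min.insync.replicas=2 if they are still in effect
bin/kafka-configs.sh --zookeeper zk:2181 --entity-type topics --entity-name test-1 --describe
{code}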



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9236) Confused log after using CLI scripts to produce messages

2019-11-26 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983211#comment-16983211
 ] 

huxihx commented on KAFKA-9236:
---

[~iamabug] It is saying the producer is closing with the specified timeout 
value `Long.MAX_VALUE`, which means the close has no timeout at all.

> Confused log after using CLI scripts to produce messages
> 
>
> Key: KAFKA-9236
> URL: https://issues.apache.org/jira/browse/KAFKA-9236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xiang Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9236) Confused log after using CLI scripts to produce messages

2019-11-26 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983195#comment-16983195
 ] 

huxihx commented on KAFKA-9236:
---

Could you briefly describe the problem you ran into?

> Confused log after using CLI scripts to produce messages
> 
>
> Key: KAFKA-9236
> URL: https://issues.apache.org/jira/browse/KAFKA-9236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xiang Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9202) serde in ConsoleConsumer with access to headers

2019-11-21 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9202:
-

Assignee: huxihx

> serde in ConsoleConsumer with access to headers
> ---
>
> Key: KAFKA-9202
> URL: https://issues.apache.org/jira/browse/KAFKA-9202
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 2.3.0
>Reporter: Jorg Heymans
>Assignee: huxihx
>Priority: Major
>
> ML thread here : 
> [https://lists.apache.org/thread.html/ab8c3094945cb9f9312fd3614a5b4454f24756cfa1a702ef5c739c8f@%3Cusers.kafka.apache.org%3E]
>  
> The Deserializer interface has two methods, one that gives access to the 
> headers and one that does not. ConsoleConsumer.scala only calls the latter 
> method. It would be nice if it were to call the default method that provides 
> header access, so that custom serde that depends on headers becomes possible. 
> Currently it does this:
>  
> {code:java}
> deserializer.map(_.deserialize(topic, nonNullBytes).toString.
> getBytes(StandardCharsets.UTF_8)).getOrElse(nonNullBytes)
> {code}
>  
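For reference, the header-aware variant the issue refers to is the three-argument default method on Deserializer. A minimal sketch of a custom deserializer that consults a header; the header name and the prefixing behaviour are invented for illustration:
{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Deserializer;

public class HeaderAwareStringDeserializer implements Deserializer<String> {

    // Two-argument variant: no access to record headers.
    @Override
    public String deserialize(String topic, byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }

    // Three-argument variant: this is the method ConsoleConsumer would need to
    // call for header-dependent deserialization to work.
    @Override
    public String deserialize(String topic, Headers headers, byte[] data) {
        Header encoding = headers.lastHeader("content-encoding"); // hypothetical header
        String prefix = encoding == null
                ? ""
                : new String(encoding.value(), StandardCharsets.UTF_8) + ": ";
        return prefix + deserialize(topic, data);
    }
}
{code}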



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9217) Partial partition's log-end-offset is zero

2019-11-21 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979764#comment-16979764
 ] 

huxihx commented on KAFKA-9217:
---

Is it possible that partition 5 has no records at all?
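One way to check that is the GetOffsetShell tool, which prints the log-end offset per partition (use --time -2 for the earliest offset instead); the broker address and topic name below are placeholders:
{code:java}
# A log-end offset of 0 for partition 5 would mean it never received any records
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic my-topic --time -1
{code}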

> Partial partition's log-end-offset is zero
> --
>
> Key: KAFKA-9217
> URL: https://issues.apache.org/jira/browse/KAFKA-9217
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.1
> Environment: kafka0.10.0.1
>Reporter: lisen
>Priority: Major
> Fix For: 0.10.0.1
>
> Attachments: Snipaste_2019-11-21_14-53-06.png, 
> Snipaste_2019-11-21_15-00-09.png
>
>
> The amount of data my consumers consumed is 400222, but the command used to 
> view the consumption results shows only 279789. The command output is as 
> follows:
> !Snipaste_2019-11-21_15-00-09.png!
> The data result of partition 5 is
> !Snipaste_2019-11-21_14-53-06.png!
> I want to know if this is a Kafka bug. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9220) TimeoutException when using kafka-preferred-replica-election

2019-11-21 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979761#comment-16979761
 ] 

huxihx commented on KAFKA-9220:
---

That might need a small KIP:)

> TimeoutException when using kafka-preferred-replica-election
> 
>
> Key: KAFKA-9220
> URL: https://issues.apache.org/jira/browse/KAFKA-9220
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.3.0
>Reporter: Or Shemesh
>Priority: Major
>
> When running kafka-preferred-replica-election --bootstrap-server xxx:9092
> I'm getting this error:
> Timeout waiting for election results
> Timeout waiting for election results
> Exception in thread "main" kafka.common.AdminCommandFailedException
>  at kafka.admin.PreferredReplicaLeaderElectionCommand$AdminClientCommand.electPreferredLeaders(PreferredReplicaLeaderElectionCommand.scala:246)
>  at kafka.admin.PreferredReplicaLeaderElectionCommand$.run(PreferredReplicaLeaderElectionCommand.scala:78)
>  at kafka.admin.PreferredReplicaLeaderElectionCommand$.main(PreferredReplicaLeaderElectionCommand.scala:42)
>  at kafka.admin.PreferredReplicaLeaderElectionCommand.main(PreferredReplicaLeaderElectionCommand.scala)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.
>  
> Because we have a big cluster, getting all the data from ZooKeeper takes more 
> than 30 seconds.
>  
> After searching the code I saw that the 30-second timeout is hard-coded. Could 
> you make the timeout configurable as a parameter?
> [https://github.com/confluentinc/kafka/blob/master/core/src/main/scala/kafka/admin/PreferredReplicaLeaderElectionCommand.scala]
>  
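Until the CLI exposes such an option, the per-request timeout can be set on the underlying AdminClient call; a hedged sketch assuming a 2.4+ client, with broker address and topic-partition as placeholders:
{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ElectLeadersOptions;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class PreferredElectionWithTimeout {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "xxx:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            Set<TopicPartition> partitions =
                    Collections.singleton(new TopicPartition("my-topic", 0));
            // Allow up to 5 minutes instead of the CLI's hard-coded 30 seconds.
            ElectLeadersOptions options = new ElectLeadersOptions().timeoutMs(300_000);
            admin.electLeaders(ElectionType.PREFERRED, partitions, options)
                 .partitions()
                 .get();
        }
    }
}
{code}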



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9208) Flaky Test SslAdminClientIntegrationTest.testCreatePartitions

2019-11-19 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-9208.
---
Resolution: Duplicate

> Flaky Test SslAdminClientIntegrationTest.testCreatePartitions
> -
>
> Key: KAFKA-9208
> URL: https://issues.apache.org/jira/browse/KAFKA-9208
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.4.0
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: flaky-test
>
> Java 8 build failed on 2.4-targeted PR
> h3. Stacktrace
> java.lang.AssertionError: validateOnly expected:<3> but was:<1> at 
> org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testCreatePartitions$6.apply(AdminClientIntegrationTest.scala:625)
>  at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testCreatePartitions$6.apply(AdminClientIntegrationTest.scala:599)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> kafka.api.AdminClientIntegrationTest.testCreatePartitions(AdminClientIntegrationTest.scala:599)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9157) logcleaner could generate empty segment files after cleaning

2019-11-18 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9157:
-

Assignee: huxihx

> logcleaner could generate empty segment files after cleaning
> 
>
> Key: KAFKA-9157
> URL: https://issues.apache.org/jira/browse/KAFKA-9157
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.3.0
>Reporter: Jun Rao
>Assignee: huxihx
>Priority: Major
>
> Currently, the log cleaner could only combine segments within a 2-billion 
> offset range. If all records in that range are deleted, an empty segment 
> could be generated. It would be useful to avoid generating such empty 
> segments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9157) logcleaner could generate empty segment files after cleaning

2019-11-17 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976330#comment-16976330
 ] 

huxihx commented on KAFKA-9157:
---

[~sbellapu] Do you still work on this jira? If not, could I take this one over? I 
have already reproduced the issue and worked out a fix. What do you think?

> logcleaner could generate empty segment files after cleaning
> 
>
> Key: KAFKA-9157
> URL: https://issues.apache.org/jira/browse/KAFKA-9157
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.3.0
>Reporter: Jun Rao
>Priority: Major
>
> Currently, the log cleaner could only combine segments within a 2-billion 
> offset range. If all records in that range are deleted, an empty segment 
> could be generated. It would be useful to avoid generating such empty 
> segments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9150) DescribeGroup uses member assignment as metadata

2019-11-06 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16968211#comment-16968211
 ] 

huxihx commented on KAFKA-9150:
---

I was not aware that David Jacot had already taken this jira. Closed the pull 
request.

> DescribeGroup uses member assignment as metadata
> 
>
> Key: KAFKA-9150
> URL: https://issues.apache.org/jira/browse/KAFKA-9150
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: David Jacot
>Priority: Blocker
> Fix For: 2.4.0
>
>
> When we converted the DescribeGroup internally to rely on the generated 
> protocol in KAFKA-7922, we introduced a regression in the response handling. 
> Basically we serialize the member assignment as both the assignment and 
> metadata in the response: 
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L1326].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9093) NullPointerException in KafkaConsumer with group.instance.id

2019-10-31 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-9093.
---
Resolution: Fixed

> NullPointerException in KafkaConsumer with group.instance.id
> 
>
> Key: KAFKA-9093
> URL: https://issues.apache.org/jira/browse/KAFKA-9093
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0
>Reporter: Rolef Heinrich
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.3.2
>
>
> When using *group.instance.id=myUniqId[0]*, the KafkaConsumer's constructor 
> throws a NullpointerException in close():
>  
> {code:java}
> Caused by: java.lang.NullPointerException
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:2204)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:825)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:644)
> {code}
> {{It turns out that the exception is thrown because the *log* member is not 
> yet initialized (still null) in the constructor when the original exception 
> is handled. The original exception is thrown before *log* is initialized.}}
> {{The side effect of this error is that close does not clean up resources as 
> it is supposed to do.}}
> *{{The used consumer properties for reference:}}*
>  
> {code:java}
> key.deserializer=com.ibm.streamsx.kafka.serialization
> request.timeout.ms=25000
> value.deserializer=com.ibm.streamsx.kafka.serialization
> client.dns.lookup=use_all_dns_ips
> metadata.max.age.ms=2000
> enable.auto.commit=false
> group.instance.id=myUniqId[0]
> max.poll.interval.ms=30
> group.id=consumer-0
> metric.reporters=com.ibm.streamsx.kafka.clients.consum...
> reconnect.backoff.max.ms=1
> bootstrap.servers=localhost:9092
> max.poll.records=50
> session.timeout.ms=2
> client.id=C-J37-ReceivedMessages[0]
> allow.auto.create.topics=false
> metrics.sample.window.ms=1
> retry.backoff.ms=500
> reconnect.backoff.ms=250{code}
> *Expected behaviour:* throw exception indicating that something is wrong with 
> the chosen group.instance.id.
> The documentation does not tell anything about valid values for 
> group.instance.id.
> *Reproduce:*
>  
>  
> {code:java}
>  
> import java.util.Properties;
> import org.apache.kafka.clients.consumer.ConsumerConfig;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> public class Main {
> public static void main(String[] args) {
> Properties props = new Properties();
> props.put (ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> props.put (ConsumerConfig.GROUP_ID_CONFIG, "group-Id1");
> props.put (ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "myUniqId[0]");
> props.put (ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.StringDeserializer");
> props.put (ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.StringDeserializer");
> KafkaConsumer<String, String> c = new KafkaConsumer<>(props);
> }
> }
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:2204)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:825)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:644)
>   at Main.main(Main.java:15)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9069) Flaky Test AdminClientIntegrationTest.testCreatePartitions

2019-10-30 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9069:
-

Assignee: huxihx

> Flaky Test AdminClientIntegrationTest.testCreatePartitions
> --
>
> Key: KAFKA-9069
> URL: https://issues.apache.org/jira/browse/KAFKA-9069
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, core, unit tests
>Reporter: Matthias J. Sax
>Assignee: huxihx
>Priority: Major
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/2792/testReport/junit/kafka.api/AdminClientIntegrationTest/testCreatePartitions/]
> {quote}java.lang.AssertionError: validateOnly expected:<3> but was:<1> at 
> org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> kafka.api.AdminClientIntegrationTest.$anonfun$testCreatePartitions$5(AdminClientIntegrationTest.scala:651)
>  at 
> kafka.api.AdminClientIntegrationTest.$anonfun$testCreatePartitions$5$adapted(AdminClientIntegrationTest.scala:601)
>  at scala.collection.immutable.List.foreach(List.scala:305) at 
> kafka.api.AdminClientIntegrationTest.testCreatePartitions(AdminClientIntegrationTest.scala:601){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9025) ZkSecurityMigrator not working with zookeeper chroot

2019-10-30 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9025:
-

Assignee: huxihx

> ZkSecurityMigrator not working with zookeeper chroot
> 
>
> Key: KAFKA-9025
> URL: https://issues.apache.org/jira/browse/KAFKA-9025
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.3.0
> Environment: Reproduced at least on rhel and macos
>Reporter: Laurent Millet
>Assignee: huxihx
>Priority: Major
>
> The ZkSecurityMigrator tool fails to handle installations where kafka is 
> configured with a zookeeper chroot (as opposed to using /, the default):
>  * ACLs on existing nodes are not modified (they are left world-modifiable)
>  * New nodes created by the tool are created directly under the zookeeper 
> root instead of under the chroot
> The tool does not emit any message, thus the unsuspecting user can only 
> assume everything went well, when in fact it did not and znodes are still not 
> secure:
> kafka_2.12-2.3.0 $ bin/zookeeper-security-migration.sh --zookeeper.acl=secure 
> --zookeeper.connect=localhost:2181
> kafka_2.12-2.3.0 $
> For example, with kafka configured to use /kafka as chroot 
> (zookeeper.connect=localhost:2181/kafka), the following is observed:
>  * Before running the tool
>  ** Zookeeper top-level nodes (all kafka nodes are under /kafka):
> [zk: localhost:2181(CONNECTED) 1] ls /
> [kafka, zookeeper]
>  ** Example node ACL:
> [zk: localhost:2181(CONNECTED) 2] getAcl /kafka/brokers
> 'world,'anyone
> : cdrwa
>  * After running the tool:
>  ** Zookeeper top-level nodes (kafka nodes created by the tool appeared here):
> [zk: localhost:2181(CONNECTED) 3] ls /
> [admin, brokers, cluster, config, controller, controller_epoch, 
> delegation_token, isr_change_notification, kafka, kafka-acl, 
> kafka-acl-changes, kafka-acl-extended, kafka-acl-extended-changes, 
> latest_producer_id_block, log_dir_event_notification, zookeeper]
>  ** Example node ACL:
> [zk: localhost:2181(CONNECTED) 4] getAcl /kafka/brokers
> 'world,'anyone
> : cdrwa
>  ** New node ACL:
> [zk: localhost:2181(CONNECTED) 5] getAcl /brokers
> 'sasl,'kafka
> : cdrwa
> 'world,'anyone
> : r
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9069) Flaky Test AdminClientIntegrationTest.testCreatePartitions

2019-10-30 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16962816#comment-16962816
 ] 

huxihx commented on KAFKA-9069:
---

[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/3063/testReport/junit/kafka.api/AdminClientIntegrationTest/testCreatePartitions/]

> Flaky Test AdminClientIntegrationTest.testCreatePartitions
> --
>
> Key: KAFKA-9069
> URL: https://issues.apache.org/jira/browse/KAFKA-9069
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, core, unit tests
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/2792/testReport/junit/kafka.api/AdminClientIntegrationTest/testCreatePartitions/]
> {quote}java.lang.AssertionError: validateOnly expected:<3> but was:<1> at 
> org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> kafka.api.AdminClientIntegrationTest.$anonfun$testCreatePartitions$5(AdminClientIntegrationTest.scala:651)
>  at 
> kafka.api.AdminClientIntegrationTest.$anonfun$testCreatePartitions$5$adapted(AdminClientIntegrationTest.scala:601)
>  at scala.collection.immutable.List.foreach(List.scala:305) at 
> kafka.api.AdminClientIntegrationTest.testCreatePartitions(AdminClientIntegrationTest.scala:601){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9025) ZkSecurityMigrator not working with zookeeper chroot

2019-10-29 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16962659#comment-16962659
 ] 

huxihx commented on KAFKA-9025:
---

[~lmairbus] You should explicitly specify the chroot when running 
zookeeper-security-migration.sh if a chroot is configured, as shown below:
{code:java}
bin/zookeeper-security-migration.sh --zookeeper.acl=secure 
--zookeeper.connect=localhost:2181/kafka{code}
Could you retry your scenario with this command above?

> ZkSecurityMigrator not working with zookeeper chroot
> 
>
> Key: KAFKA-9025
> URL: https://issues.apache.org/jira/browse/KAFKA-9025
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.3.0
> Environment: Reproduced at least on rhel and macos
>Reporter: Laurent Millet
>Priority: Major
>
> The ZkSecurityMigrator tool fails to handle installations where kafka is 
> configured with a zookeeper chroot (as opposed to using /, the default):
>  * ACLs on existing nodes are not modified (they are left world-modifiable)
>  * New nodes created by the tool are created directly under the zookeeper 
> root instead of under the chroot
> The tool does not emit any message, thus the unsuspecting user can only 
> assume everything went well, when in fact it did not and znodes are still not 
> secure:
> kafka_2.12-2.3.0 $ bin/zookeeper-security-migration.sh --zookeeper.acl=secure 
> --zookeeper.connect=localhost:2181
> kafka_2.12-2.3.0 $
> For example, with kafka configured to use /kafka as chroot 
> (zookeeper.connect=localhost:2181/kafka), the following is observed:
>  * Before running the tool
>  ** Zookeeper top-level nodes (all kafka nodes are under /kafka):
> [zk: localhost:2181(CONNECTED) 1] ls /
> [kafka, zookeeper]
>  ** Example node ACL:
> [zk: localhost:2181(CONNECTED) 2] getAcl /kafka/brokers
> 'world,'anyone
> : cdrwa
>  * After running the tool:
>  ** Zookeeper top-level nodes (kafka nodes created by the tool appeared here):
> [zk: localhost:2181(CONNECTED) 3] ls /
> [admin, brokers, cluster, config, controller, controller_epoch, 
> delegation_token, isr_change_notification, kafka, kafka-acl, 
> kafka-acl-changes, kafka-acl-extended, kafka-acl-extended-changes, 
> latest_producer_id_block, log_dir_event_notification, zookeeper]
>  ** Example node ACL:
> [zk: localhost:2181(CONNECTED) 4] getAcl /kafka/brokers
> 'world,'anyone
> : cdrwa
>  ** New node ACL:
> [zk: localhost:2181(CONNECTED) 5] getAcl /brokers
> 'sasl,'kafka
> : cdrwa
> 'world,'anyone
> : r
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9093) NullPointerException in KafkaConsumer with group.instance.id

2019-10-24 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9093:
-

Assignee: huxihx

> NullPointerException in KafkaConsumer with group.instance.id
> 
>
> Key: KAFKA-9093
> URL: https://issues.apache.org/jira/browse/KAFKA-9093
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0
>Reporter: Rolef Heinrich
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.3.1
>
>
> When using *group.instance.id=myUniqId[0]*, the KafkaConsumer's constructor 
> throws a NullpointerException in close():
>  
> {code:java}
> Caused by: java.lang.NullPointerException
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:2204)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:825)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:644)
> {code}
> {{It turns out that the exception is thrown because the *log* member is not 
> yet initialized (still null) in the constructor when the original exception 
> is handled. The original exception is thrown before *log* is initialized.}}
> {{The side effect of this error is that close does not clean up resources as 
> it is supposed to do.}}
> *{{The used consumer properties for reference:}}*
>  
> {code:java}
> key.deserializer=com.ibm.streamsx.kafka.serialization
> request.timeout.ms=25000
> value.deserializer=com.ibm.streamsx.kafka.serialization
> client.dns.lookup=use_all_dns_ips
> metadata.max.age.ms=2000
> enable.auto.commit=false
> group.instance.id=myUniqId[0]
> max.poll.interval.ms=30
> group.id=consumer-0
> metric.reporters=com.ibm.streamsx.kafka.clients.consum...
> reconnect.backoff.max.ms=1
> bootstrap.servers=localhost:9092
> max.poll.records=50
> session.timeout.ms=2
> client.id=C-J37-ReceivedMessages[0]
> allow.auto.create.topics=false
> metrics.sample.window.ms=1
> retry.backoff.ms=500
> reconnect.backoff.ms=250{code}
> *Expected behaviour:* throw exception indicating that something is wrong with 
> the chosen group.instance.id.
> The documentation does not tell anything about valid values for 
> group.instance.id.
> *Reproduce:*
>  
>  
> {code:java}
>  
> import java.util.Properties;
> import org.apache.kafka.clients.consumer.ConsumerConfig;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> public class Main {
> public static void main(String[] args) {
> Properties props = new Properties();
> props.put (ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> props.put (ConsumerConfig.GROUP_ID_CONFIG, "group-Id1");
> props.put (ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "myUniqId[0]");
> props.put (ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.StringDeserializer");
> props.put (ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.StringDeserializer");
> KafkaConsumer<String, String> c = new KafkaConsumer<>(props);
> }
> }
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:2204)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:825)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:644)
>   at Main.main(Main.java:15)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8946) Single byte header issues WARN logging

2019-10-08 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-8946:
--
Component/s: KafkaConnect

> Single byte header issues WARN logging
> --
>
> Key: KAFKA-8946
> URL: https://issues.apache.org/jira/browse/KAFKA-8946
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Henning Treu
>Priority: Minor
>
> Setting a single byte header like
> {code:java}
> headers.add("MY_CUSTOM_HEADER", new byte[] { 1 });
> {code}
> will cause a WARN message with full stack trace:
> {code:java}
> [2019-08-29 06:27:40,599] WARN Failed to deserialize value for header 
> 'MY_CUSTOM_HEADER' on topic '', so using byte array 
> (org.apache.kafka.connect.storage.SimpleHeaderConverter)
> java.lang.StringIndexOutOfBoundsException: String index out of range: 0
> at java.lang.String.charAt(String.java:658)
> at org.apache.kafka.connect.data.Values.parse(Values.java:816)
> at org.apache.kafka.connect.data.Values.parseString(Values.java:373)
> at 
> org.apache.kafka.connect.storage.SimpleHeaderConverter.toConnectHeader(SimpleHeaderConverter.java:64)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertHeadersFor(WorkerSinkTask.java:501)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:469)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:301)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Since Kafka will continue with the headers as 
> {code:java}
> Schema.BYTES_SCHEMA
> {code}
>  the warning seems a little harsh.
> There are two options:
> # Handle non-String header values explicitly
> # Drop the stack trace logging and log it at DEBUG level instead
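A possible workaround on the producing side, offered only as an assumption that has not been verified against SimpleHeaderConverter, is to encode the header value as a UTF-8 string rather than a raw byte, so the converter has something it can parse as a string:
{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HeaderExample {
    public static ProducerRecord<String, String> recordWithHeader() {
        ProducerRecord<String, String> record =
                new ProducerRecord<>("my-topic", "key", "value"); // placeholder topic
        // Instead of: record.headers().add("MY_CUSTOM_HEADER", new byte[] { 1 });
        record.headers().add("MY_CUSTOM_HEADER", "1".getBytes(StandardCharsets.UTF_8));
        return record;
    }
}
{code}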



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8946) Single byte header issues WARN logging

2019-10-08 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-8946:
--
Component/s: (was: KafkaConnect)

> Single byte header issues WARN logging
> --
>
> Key: KAFKA-8946
> URL: https://issues.apache.org/jira/browse/KAFKA-8946
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Henning Treu
>Priority: Minor
>
> Setting a single byte header like
> {code:java}
> headers.add("MY_CUSTOM_HEADER", new byte[] { 1 });
> {code}
> will cause a WARN message with full stack trace:
> {code:java}
> [2019-08-29 06:27:40,599] WARN Failed to deserialize value for header 
> 'MY_CUSTOM_HEADER' on topic '', so using byte array 
> (org.apache.kafka.connect.storage.SimpleHeaderConverter)
> java.lang.StringIndexOutOfBoundsException: String index out of range: 0
> at java.lang.String.charAt(String.java:658)
> at org.apache.kafka.connect.data.Values.parse(Values.java:816)
> at org.apache.kafka.connect.data.Values.parseString(Values.java:373)
> at 
> org.apache.kafka.connect.storage.SimpleHeaderConverter.toConnectHeader(SimpleHeaderConverter.java:64)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertHeadersFor(WorkerSinkTask.java:501)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:469)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:301)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Since Kafka will continue with the headers as 
> {code:java}
> Schema.BYTES_SCHEMA
> {code}
>  the warning seems a little harsh.
> There are two options:
> # Handle non-String header values explicitly
> # Drop the stack trace logging and log it at DEBUG level instead



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8928) Logged producer config does not always match actual config values

2019-10-08 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8928:
-

Assignee: huxihx

> Logged producer config does not always match actual config values
> -
>
> Key: KAFKA-8928
> URL: https://issues.apache.org/jira/browse/KAFKA-8928
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Chris Pettitt
>Assignee: huxihx
>Priority: Major
>
> I'm working with EOS, and expected to see that switching to EOS sets 
> `acks=all` among other things. Instead, in the logs I see:
>  
> {code:java}
> ProducerConfig values: 
>   acks = 1
>   ...
>   enable.idempotence = true
> ...
>  (org.apache.kafka.clients.producer.ProducerConfig:279)
> {code}
>  
> Clearly this is incorrect. The value is changed in KafkaProducer and the 
> override is logged, but it appears to be filtered at the default setting. It 
> would be best to log all of the correct values in one place. Logging the 
> incorrect values is misleading and at best wastes time when trying to 
> understand Kafka behavior.
>  
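For context, the override in question comes from the idempotent/EOS path: when idempotence is enabled and acks is not explicitly set, KafkaProducer internally runs with acks=all even though the logged ProducerConfig may still show the pre-override value. A minimal sketch of such a configuration; broker address and transactional id are placeholders:
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class EosProducerConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Enabling idempotence (directly, or implicitly via a transactional.id)
        // makes the producer run with acks=all regardless of the logged default.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "example-tx-id"); // placeholder
        return new KafkaProducer<>(props);
    }
}
{code}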



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8944) Compiler Warning

2019-09-25 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8944:
-

Assignee: huxihx

> Compiler Warning
> 
>
> Key: KAFKA-8944
> URL: https://issues.apache.org/jira/browse/KAFKA-8944
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: huxihx
>Priority: Minor
>  Labels: scala
>
> When building Kafka Streams, we get the following compiler warning that we 
> should fix:
> {code:java}
> scala/src/main/scala/org/apache/kafka/streams/scala/kstream/KTable.scala:24: 
> imported `Suppressed' is permanently hidden by definition of object 
> Suppressed in package kstream import 
> org.apache.kafka.streams.kstream.{Suppressed, 
> ValueTransformerWithKeySupplier, KTable => KTableJ}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8915) Unable to modify partition

2019-09-19 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-8915.
---
Resolution: Not A Problem

> Unable to modify partition
> --
>
> Key: KAFKA-8915
> URL: https://issues.apache.org/jira/browse/KAFKA-8915
> Project: Kafka
>  Issue Type: Bug
>Reporter: lingyi.zhong
>Priority: Major
>
> [root@work1 kafka]# bin/kafka-topics.sh --create --zookeeper 10.20.30.77:2181 
> --replication-factor 1 --partitions 1 --topic test_topic3
> WARNING: Due to limitations in metric names, topics with a period ('.') or 
> underscore ('_') could collide. To avoid issues it is best to use either, but 
> not both.
> Created topic "test_topic3".
> [root@work1 kafka]# bin/kafka-topics.sh --alter --zookeeper 
> 10.20.30.78:2181/chroot --partition 2 --topic test_topic3
> Exception in thread "main" joptsimple.UnrecognizedOptionException: partition 
> is not a recognized option
>  at joptsimple.OptionException.unrecognizedOption(OptionException.java:108)
>  at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:510)
>  at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
>  at joptsimple.OptionParser.parse(OptionParser.java:396)
>  at kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:358)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:44)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-8915) Unable to modify partition

2019-09-19 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933961#comment-16933961
 ] 

huxihx edited comment on KAFKA-8915 at 9/20/19 2:32 AM:


Seems it's a typo. You have to specify `partitions` not `partition` in the 
command.


was (Author: huxi_2b):
Seems it's a typo. You have to specify `--partitions` instead of `--partition`.

> Unable to modify partition
> --
>
> Key: KAFKA-8915
> URL: https://issues.apache.org/jira/browse/KAFKA-8915
> Project: Kafka
>  Issue Type: Bug
>Reporter: lingyi.zhong
>Priority: Major
>
> [root@work1 kafka]# bin/kafka-topics.sh --create --zookeeper 10.20.30.77:2181 
> --replication-factor 1 --partitions 1 --topic test_topic3
> WARNING: Due to limitations in metric names, topics with a period ('.') or 
> underscore ('_') could collide. To avoid issues it is best to use either, but 
> not both.
> Created topic "test_topic3".
> [root@work1 kafka]# bin/kafka-topics.sh --alter --zookeeper 
> 10.20.30.78:2181/chroot --partition 2 --topic test_topic3
> Exception in thread "main" joptsimple.UnrecognizedOptionException: partition 
> is not a recognized option
>  at joptsimple.OptionException.unrecognizedOption(OptionException.java:108)
>  at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:510)
>  at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
>  at joptsimple.OptionParser.parse(OptionParser.java:396)
>  at kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:358)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:44)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8915) Unable to modify partition

2019-09-19 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933961#comment-16933961
 ] 

huxihx commented on KAFKA-8915:
---

Seems it's a typo. You have to specify `--partitions` instead of `--partition`.
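For completeness, the corrected command from the report would be (reusing the ZooKeeper string from the original output):
{code:java}
bin/kafka-topics.sh --alter --zookeeper 10.20.30.78:2181/chroot --partitions 2 --topic test_topic3
{code}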

> Unable to modify partition
> --
>
> Key: KAFKA-8915
> URL: https://issues.apache.org/jira/browse/KAFKA-8915
> Project: Kafka
>  Issue Type: Bug
>Reporter: lingyi.zhong
>Priority: Major
>
> [root@work1 kafka]# bin/kafka-topics.sh --create --zookeeper 10.20.30.77:2181 
> --replication-factor 1 --partitions 1 --topic test_topic3
> WARNING: Due to limitations in metric names, topics with a period ('.') or 
> underscore ('_') could collide. To avoid issues it is best to use either, but 
> not both.
> Created topic "test_topic3".
> [root@work1 kafka]# bin/kafka-topics.sh --alter --zookeeper 
> 10.20.30.78:2181/chroot --partition 2 --topic test_topic3
> Exception in thread "main" joptsimple.UnrecognizedOptionException: partition 
> is not a recognized option
>  at joptsimple.OptionException.unrecognizedOption(OptionException.java:108)
>  at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:510)
>  at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
>  at joptsimple.OptionParser.parse(OptionParser.java:396)
>  at kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:358)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:44)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8107) Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete

2019-09-16 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8107:
-

Assignee: huxihx

> Flaky Test kafka.api.ClientIdQuotaTest.testQuotaOverrideDelete
> --
>
> Key: KAFKA-8107
> URL: https://issues.apache.org/jira/browse/KAFKA-8107
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Assignee: huxihx
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testQuotaOverrideDelete/



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8881) Measure thread running time precisely

2019-09-06 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-8881:
--
Description: Currently, the code uses `System.currentTimeMillis()` to 
measure timeout extensively. However, many situations trigger the thread 
suspend such as gc and context switch. In such cases, the timeout value we 
specify is not strictly honored. I believe many of flaky tests failed with 
timed-out are a result of this. Maybe we should use 
ThreadMXBean#getCurrentThreadUserTime to precisely measure the thread running 
time.  (was: Currently, the code uses `System.currentTimeMillis()` to measure 
timeout extensively. However, many situations trigger the thread suspend such 
as gc and context switch. In such cases, the timeout value we specify is not 
strictly honored. Maybe we could use ThreadMXBean#getCurrentThreadUserTime to 
precisely measure the thread running time.)

> Measure thread running time precisely
> -
>
> Key: KAFKA-8881
> URL: https://issues.apache.org/jira/browse/KAFKA-8881
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: huxihx
>Priority: Major
>  Labels: needs-discussion
>
> Currently, the code uses `System.currentTimeMillis()` extensively to measure 
> timeouts. However, many situations, such as GC pauses and context switches, can 
> suspend the thread. In such cases, the timeout value we specify is not strictly 
> honored. I believe many of the flaky tests that fail with timeouts are a result 
> of this. Maybe we should use ThreadMXBean#getCurrentThreadUserTime to precisely 
> measure the thread running time.
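A minimal sketch of the proposed measurement using the standard ThreadMXBean API; whether it is precise and cheap enough for the test framework is exactly the open question:
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadTimeDemo {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isCurrentThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU timing is not supported on this JVM");
            return;
        }
        long wallStart = System.currentTimeMillis();
        long userStart = bean.getCurrentThreadUserTime(); // nanoseconds of user-mode CPU time

        busyWork();

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        long userElapsedMs = (bean.getCurrentThreadUserTime() - userStart) / 1_000_000;
        // Wall-clock time includes GC pauses and context switches; user time does not.
        System.out.printf("wall=%d ms, user=%d ms%n", wallElapsedMs, userElapsedMs);
    }

    private static void busyWork() {
        long x = 0;
        for (int i = 0; i < 50_000_000; i++) {
            x += i;
        }
        if (x == 42) {
            System.out.println("unlikely");
        }
    }
}
{code}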



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8881) Measure thread running time precisely

2019-09-06 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-8881:
--
Description: Currently, the code uses `System.currentTimeMillis()` to 
measure timeout extensively. However, many situations trigger the thread 
suspend such as gc and context switch. In such cases, the timeout value we 
specify is not strictly honored. Maybe we could use 
ThreadMXBean#getCurrentThreadUserTime to precisely measure the thread running 
time.  (was: Currently, the code uses `System.currentTimeMillis()` to measure 
timeout. However, many situations trigger the thread suspend such as gc and 
context switch. In such cases, the timeout value we specify is not strictly 
honored. Maybe we could use ThreadMXBean#getCurrentThreadUserTime to precisely 
measure the thread running time.)

> Measure thread running time precisely
> -
>
> Key: KAFKA-8881
> URL: https://issues.apache.org/jira/browse/KAFKA-8881
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: huxihx
>Priority: Major
>  Labels: needs-discussion
>
> Currently, the code uses `System.currentTimeMillis()` extensively to measure 
> timeouts. However, many situations, such as GC pauses and context switches, can 
> suspend the thread. In such cases, the timeout value we specify is not strictly 
> honored. Maybe we could use ThreadMXBean#getCurrentThreadUserTime to precisely 
> measure the thread running time.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (KAFKA-8881) Measure thread running time precisely

2019-09-06 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-8881:
--
Labels: needs-discussion  (was: )

> Measure thread running time precisely
> -
>
> Key: KAFKA-8881
> URL: https://issues.apache.org/jira/browse/KAFKA-8881
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: huxihx
>Priority: Major
>  Labels: needs-discussion
>
> Currently, the code uses `System.currentTimeMillis()` to measure timeouts. 
> However, many situations, such as GC pauses and context switches, can suspend 
> the thread. In such cases, the timeout value we specify is not strictly honored. 
> Maybe we could use ThreadMXBean#getCurrentThreadUserTime to precisely measure 
> the thread running time.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8881) Measure thread running time precisely

2019-09-06 Thread huxihx (Jira)
huxihx created KAFKA-8881:
-

 Summary: Measure thread running time precisely
 Key: KAFKA-8881
 URL: https://issues.apache.org/jira/browse/KAFKA-8881
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: huxihx


Currently, the code uses `System.currentTimeMillis()` to measure timeouts. 
However, many situations, such as GC pauses and context switches, can suspend 
the thread. In such cases, the timeout value we specify is not strictly honored. 
Maybe we could use ThreadMXBean#getCurrentThreadUserTime to precisely measure 
the thread running time.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8838) Allow consumer group tool to work with non-existing consumer groups

2019-09-05 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923917#comment-16923917
 ] 

huxihx commented on KAFKA-8838:
---

Do you use the latest version? I cannot reproduce this problem using 2.3. 

> Allow consumer group tool to work with non-existing consumer groups
> ---
>
> Key: KAFKA-8838
> URL: https://issues.apache.org/jira/browse/KAFKA-8838
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Patrik Kleindl
>Priority: Minor
>
> The streams application reset tool works for non-existing consumer groups and 
> allows offsets to be "pre-set" before a new deployment.
> The consumer group tool does not allow the same, which would be a nice 
> enhancement.
> If this is already supposed to work and the NullPointerException is not 
> expected, this can be converted to a bug.
>  
> {code:java}
> ./kafka-consumer-groups --bootstrap-server broker:9092 --group applicationId 
> --reset-offsets --by-duration P60D --topic topic1 --execute
> Error: Executing consumer group command failed due to null
> java.lang.NullPointerException at 
> scala.collection.convert.Wrappers$JListWrapper.iterator(Wrappers.scala:88) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$$anonfun$parseTopicPartitionsToReset$1.apply(ConsumerGroupCommand.scala:477)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$$anonfun$parseTopicPartitionsToReset$1.apply(ConsumerGroupCommand.scala:471)
>  at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>  at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241) at 
> scala.collection.AbstractTraversable.flatMap(Traversable.scala:104) at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.parseTopicPartitionsToReset(ConsumerGroupCommand.scala:471)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.getPartitionsToReset(ConsumerGroupCommand.scala:486)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:310)
>  at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:64) at 
> kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala){code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8818) CreatePartitions Request protocol documentation

2019-09-05 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923907#comment-16923907
 ] 

huxihx commented on KAFKA-8818:
---

Why do you think the array here is invalid? It stores the replica assignment 
for a new partition. If the replication factor is larger than 1, then 
assignment must be an array containing all the brokers on which replicas are 
running.
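The same shape shows up in the AdminClient API, where each new partition's assignment is a list of broker ids; a short sketch assuming a recent (2.4+) client, with topic name and broker ids as placeholders:
{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewPartitions;

public class CreatePartitionsAssignment {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Grow "my-topic" to 3 partitions; the single new partition gets replicas
            // on brokers 1 and 2, i.e. its assignment really is an array of broker ids,
            // matching ARRAY(INT32) in the protocol documentation.
            List<List<Integer>> newAssignments = Collections.singletonList(Arrays.asList(1, 2));
            NewPartitions request = NewPartitions.increaseTo(3, newAssignments);
            admin.createPartitions(Collections.singletonMap("my-topic", request)).all().get();
        }
    }
}
{code}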

> CreatePartitions Request protocol documentation
> ---
>
> Key: KAFKA-8818
> URL: https://issues.apache.org/jira/browse/KAFKA-8818
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Fábio Silva
>Priority: Major
>  Labels: documentation, protocol-documentation
>
> The CreatePartitions request protocol documentation contains an invalid type, 
> ARRAY(INT32), for the assignment field; it must be INT32.
> Wrong: 
> {code:java}
> assignment => ARRAY(INT32){code}
> Correct:
> {code:java}
> assignment => INT32
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8877) Race condition on partition counter

2019-09-05 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923313#comment-16923313
 ] 

huxihx commented on KAFKA-8877:
---

Seems `nextValue` is already removed by 
[KAFKA-8601|https://issues.apache.org/jira/browse/KAFKA-8601]. 

> Race condition on partition counter
> ---
>
> Key: KAFKA-8877
> URL: https://issues.apache.org/jira/browse/KAFKA-8877
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.2.1
>Reporter: Oleg Kuznetsov
>Priority: Major
>
> In the method:
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner#nextValue*
> {code:java}
> private int nextValue(String topic) {
> AtomicInteger counter = topicCounterMap.get(topic);
> if (null == counter) {
> counter = new AtomicInteger(ThreadLocalRandom.current().nextInt());
> AtomicInteger currentCounter = topicCounterMap.putIfAbsent(topic, 
> counter);
> if (currentCounter != null) {
> counter = currentCounter;
> }
> }
> return counter.getAndIncrement();
> }
> {code}
> the counter might be created multiple times instead of once.
> I propose to replace it with something like *topicCounterMap.compute(topic, _ 
> -> ...)* (init the counter once per topic).
>  
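For what it's worth, a self-contained sketch of the change the description proposes; the class and field names here are invented, and, as the comment above notes, nextValue has since been removed by KAFKA-8601:
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinCounter {
    private final ConcurrentMap<String, AtomicInteger> topicCounterMap = new ConcurrentHashMap<>();

    // computeIfAbsent creates exactly one AtomicInteger per topic, even when several
    // threads hit the first send for that topic concurrently, so the random starting
    // point is chosen at most once.
    public int nextValue(String topic) {
        AtomicInteger counter = topicCounterMap.computeIfAbsent(
                topic, t -> new AtomicInteger(ThreadLocalRandom.current().nextInt()));
        return counter.getAndIncrement();
    }
}
{code}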



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (KAFKA-8876) KafkaBasedLog does not throw exception when some partitions of the topic is offline

2019-09-05 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8876:
-

Assignee: huxihx

> KafkaBasedLog does not throw exception when some partitions of the topic is 
> offline
> ---
>
> Key: KAFKA-8876
> URL: https://issues.apache.org/jira/browse/KAFKA-8876
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Boquan Tang
>Assignee: huxihx
>Priority: Major
>
> Currently KafkaBasedLog does not check whether *all* partitions of the topic 
> are online. As a result it may ignore partitions that are still recovering and 
> in turn report a null offset to KafkaOffsetBackingStore for the affected 
> partition, while in fact it should either wait or fail the connector thread to 
> prompt a retry, so that the offset can be correctly loaded by the connector.
> Specifically, we are using the Debezium MySQL connector to replicate the MySQL 
> binlog to Kafka.
> While restarting after a cluster outage, we observed the following:
> {code}
> 2019-08-29T19:27:32Z INFO 
> [org.apache.kafka.connect.storage.KafkaOffsetBackingStore] [main] Starting 
> KafkaOffsetBackingStore
> 2019-08-29T19:27:32Z INFO [org.apache.kafka.connect.util.KafkaBasedLog] 
> [main] Starting KafkaBasedLog with topic bobqueue-binlog-shovel-v1-offsets
> ...skipped client config logs...
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-12 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-10 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-21 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-5 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-20 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-18 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-2 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-13 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-11 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-8 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-23 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-7 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-22 to offset 0.
> 2019-08-29T19:27:33Z INFO 
> [org.apache.kafka.clients.consumer.internals.Fetcher] [main] [Consumer 
> clientId=consumer-1, groupId=bobqueue-binlog-shovel-v1] Resetting offset for 
> partition bobqueue-binlog-shovel-v1-offsets-6 to 
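As referenced above, a rough sketch of the kind of start-up availability check the report asks for; it is illustrative only (the class and method names are made up, and it assumes a plain Kafka Consumer handle rather than the actual KafkaBasedLog internals):

{code:java}
import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;

// Illustrative sketch: fail fast (or let the caller retry) when any partition
// of the backing topic currently has no leader, instead of silently reading
// a null offset for it.
public final class PartitionAvailabilityCheck {

    public static void ensureAllPartitionsOnline(Consumer<?, ?> consumer, String topic) {
        List<PartitionInfo> partitions = consumer.partitionsFor(topic);
        if (partitions == null || partitions.isEmpty()) {
            throw new IllegalStateException("No metadata available for topic " + topic);
        }
        for (PartitionInfo info : partitions) {
            // Depending on the client version, a leaderless partition is reported
            // either as a null leader or as Node.noNode().
            if (info.leader() == null || info.leader().isEmpty()) {
                throw new IllegalStateException(
                    "Partition " + info.partition() + " of topic " + topic
                        + " has no leader; offsets cannot be read reliably yet");
            }
        }
    }
}
{code}

KafkaBasedLog (or its caller) could run such a check before reading the topic to the end, and either wait or fail the connector start-up rather than treating the affected partition as empty.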

[jira] [Assigned] (KAFKA-8875) CreateTopic API should check topic existence before replication factor

2019-09-04 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8875:
-

Assignee: huxihx

> CreateTopic API should check topic existence before replication factor
> --
>
> Key: KAFKA-8875
> URL: https://issues.apache.org/jira/browse/KAFKA-8875
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> If you try to create a topic and the replication factor cannot be satisfied, 
> Kafka will return `INVALID_REPLICATION_FACTOR`. If the topic already exists, 
> we should probably return `TOPIC_ALREADY_EXISTS` instead. You won't see this 
> problem if using TopicCommand because we check existence prior to creating 
> the topic.
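For context, a minimal AdminClient sketch of where a caller would observe the two error codes being compared; the topic name, bootstrap address, and replication factor below are placeholders rather than values from the report:

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.InvalidReplicationFactorException;
import org.apache.kafka.common.errors.TopicExistsException;

public class CreateTopicErrorDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Re-creating an existing topic with a replication factor larger than the
            // cluster: with the current check ordering the broker reports
            // INVALID_REPLICATION_FACTOR instead of TOPIC_ALREADY_EXISTS.
            NewTopic topic = new NewTopic("existing-topic", 1, (short) 3);
            try {
                admin.createTopics(Collections.singleton(topic)).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TopicExistsException) {
                    System.out.println("Topic already exists");
                } else if (e.getCause() instanceof InvalidReplicationFactorException) {
                    System.out.println("Replication factor cannot be satisfied");
                } else {
                    throw e;
                }
            }
        }
    }
}
{code}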



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (KAFKA-8719) kafka-console-consumer bypassing sentry evaluations while specifying --partition option

2019-09-02 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-8719.
---
Resolution: Cannot Reproduce

> kafka-console-consumer bypassing sentry evaluations while specifying 
> --partition option
> ---
>
> Key: KAFKA-8719
> URL: https://issues.apache.org/jira/browse/KAFKA-8719
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, tools
>Reporter: Sathish
>Priority: Major
>  Labels: kafka-console-cons
>
> While specifying the --partition option on kafka-console-consumer, it 
> bypasses the Sentry evaluations and lets users consume messages freely. Even 
> though a consumer group does not have access to consume messages from a 
> topic, the --partition option bypasses the evaluation.
> Example command used:
> #kafka-console-consumer  --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111 --partition 0
> This succeeds even if spark-kafka-111 does not have any access on 
> topic booktopic1
> whereas 
> #kafka-console-consumer  --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111
> Fails with topic authorisation issues



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8719) kafka-console-consumer bypassing sentry evaluations while specifying --partition option

2019-08-29 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919137#comment-16919137
 ] 

huxihx commented on KAFKA-8719:
---

What version did you use? The group and partition options should not be specified 
together. Besides, I could not reproduce this issue using the latest version (2.3).

> kafka-console-consumer bypassing sentry evaluations while specifying 
> --partition option
> ---
>
> Key: KAFKA-8719
> URL: https://issues.apache.org/jira/browse/KAFKA-8719
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, tools
>Reporter: Sathish
>Priority: Major
>  Labels: kafka-console-cons
>
> While specifying the --partition option on kafka-console-consumer, it 
> bypasses the Sentry evaluations and lets users consume messages freely. Even 
> though a consumer group does not have access to consume messages from a 
> topic, the --partition option bypasses the evaluation.
> Example command used:
> #kafka-console-consumer  --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111 --partition 0
> This succeeds even if spark-kafka-111 does not have any access on 
> topic booktopic1
> whereas 
> #kafka-console-consumer  --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111
> Fails with topic authorisation issues



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8732) specifying a non-existent broker to ./bin/kafka-reassign-partitions.sh leads to reassignment never getting completed.

2019-08-29 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919121#comment-16919121
 ] 

huxihx commented on KAFKA-8732:
---

The issue was already fixed in newer versions, where ReassignPartitionsCommand 
checks that the to-be-reassigned brokers exist before executing the reassignment.

> specifying a non-existent broker to ./bin/kafka-reassign-partitions.sh leads 
> to reassignment never getting completed.
> -
>
> Key: KAFKA-8732
> URL: https://issues.apache.org/jira/browse/KAFKA-8732
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, tools
>Affects Versions: 0.10.1.1
> Environment: Ubuntu-VERSION="14.04.5 LTS"
>Reporter: Ron1994
>Priority: Critical
>  Labels: bin, tools
>
> Specifying a non-existent broker to ./bin/kafka-reassign-partitions.sh leads 
> to reassignment never getting completed. 
> My reassignment gets stuck if I provide a non-existent broker ID. My 
> Kafka version is 0.10.1.1.
>  
>  
> {code:java}
> ./kafka-reassign-partitions.sh --zookeeper zk:2181 --reassignment-json-file 
> le.json --execute
> Current partition replica assignment
> {"version":1,"partitions":[{"topic":"cv-topic","partition":0,"replicas":[1011131,101067,98,101240]}]}
> Save this to use as the --reassignment-json-file option during rollback
> Successfully started reassignment of partitions.
> {code}
> In this case, 98 is the non-existent broker. Deleting the reassign_partitions 
> znode does not help either: when I describe the same topic, broker 98 is shown 
> as out of sync.
>  
>  
> {code:java}
> Topic:cv-topic PartitionCount:1 ReplicationFactor:4 Configs:
> Topic: cv-topic Partition: 0 Leader: 1011131 Replicas: 
> 1011131,101067,98,101240 Isr: 1011131,101067,101240
> {code}
> Now 98 will always be out of sync.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Issue Comment Deleted] (KAFKA-8719) kafka-console-consumer bypassing sentry evaluations while specifying --partition option

2019-08-29 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-8719:
--
Comment: was deleted

(was: The issue was already fixed in newer versions where 
ReassignPartitionsCommand checks existence for to-be-reassigned brokers before 
the execution.)

> kafka-console-consumer bypassing sentry evaluations while specifying 
> --partition option
> ---
>
> Key: KAFKA-8719
> URL: https://issues.apache.org/jira/browse/KAFKA-8719
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, tools
>Reporter: Sathish
>Priority: Major
>  Labels: kafka-console-cons
>
> While specifying the --partition option on kafka-console-consumer, it 
> bypasses the Sentry evaluations and lets users consume messages freely. Even 
> though a consumer group does not have access to consume messages from a 
> topic, the --partition option bypasses the evaluation.
> Example command used:
> #kafka-console-consumer  --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111 --partition 0
> This succeeds even if spark-kafka-111 does not have any access on 
> topic booktopic1
> whereas 
> #kafka-console-consumer  --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111
> Fails with topic authorisation issues



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8719) kafka-console-consumer bypassing sentry evaluations while specifying --partition option

2019-08-29 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919120#comment-16919120
 ] 

huxihx commented on KAFKA-8719:
---

The issue was already fixed in newer versions where ReassignPartitionsCommand 
checks existence for to-be-reassigned brokers before the execution.

> kafka-console-consumer bypassing sentry evaluations while specifying 
> --partition option
> ---
>
> Key: KAFKA-8719
> URL: https://issues.apache.org/jira/browse/KAFKA-8719
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, tools
>Reporter: Sathish
>Priority: Major
>  Labels: kafka-console-cons
>
> While specifying the --partition option on kafka-console-consumer, it 
> bypasses the Sentry evaluations and lets users consume messages freely. Even 
> though a consumer group does not have access to consume messages from a 
> topic, the --partition option bypasses the evaluation.
> Example command used:
> #kafka-console-consumer  --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111 --partition 0
> This succeeds even if spark-kafka-111 does not have any access on 
> topic booktopic1
> whereas 
> #kafka-console-consumer  --topic booktopic1 --consumer.config 
> consumer.properties --bootstrap-server :9092 --from-beginning 
> --consumer-property group.id=spark-kafka-111
> Fails with topic authorisation issues



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8718) Not authorized to access topics: [__consumer_offsets] with Apache Kafka 2.3.0

2019-08-29 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919109#comment-16919109
 ] 

huxihx commented on KAFKA-8718:
---

Option `blacklist` was removed by 
[KAFKA-2983|https://issues.apache.org/jira/browse/KAFKA-2983]. Did you enable 
security for the source cluster?

> Not authorized to access topics: [__consumer_offsets] with Apache Kafka 2.3.0
> -
>
> Key: KAFKA-8718
> URL: https://issues.apache.org/jira/browse/KAFKA-8718
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, mirrormaker, producer 
>Affects Versions: 2.3.0
>Reporter: Bala Bharath Reddy Resapu
>Priority: Critical
>
> Hi Team,
> I am trying to replicate all topics from one instance to another using 
> Kafka MirrorMaker. When I specify copying all topics with the whitelist 
> option, it fails with the error below. A few blogs suggested putting the 
> offsets topic in the blacklist, but when I tried that it failed saying the 
> option is not recognized. Please advise whether this is a bug or whether 
> there is a fix for it.
> /usr/src/mirror-maker/kafka_2.12-2.3.0/bin/kafka-mirror-maker.sh 
> --consumer.config sourceClusterConsumer.properties --producer.config 
> targetClusterProducer.properties --num.streams 4 --whitelist=".*"
> ERROR Error when sending message to topic __consumer_offsets with key: 62 
> bytes, value: 28 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
>  
> --blacklist "__consumer_offsets
>  ERROR Exception when starting mirror maker. (kafka.tools.MirrorMaker$)
> joptsimple.UnrecognizedOptionException: blacklist is not a recognized option



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-8801) electLeaderForPartitions redundancy for some success elect partitions

2019-08-16 Thread huxihx (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908794#comment-16908794
 ] 

huxihx commented on KAFKA-8801:
---

It seems this was already fixed by 
[KAFKA-8286|https://issues.apache.org/jira/browse/KAFKA-8286]?

> electLeaderForPartitions redundancy for some success elect partitions
> -
>
> Key: KAFKA-8801
> URL: https://issues.apache.org/jira/browse/KAFKA-8801
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: shilin Lu
>Priority: Major
> Attachments: code.png
>
>
> !code.png!
> This is the code of electLeaderForPartitions. The logic is: if updating 
> leaderAndIsr in ZooKeeper fails, the partitions are added to the remaining 
> retry sequence, so I think the parameter in the red box should be changed to 
> remainings.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (KAFKA-8503) AdminClient should ignore retries config if a custom timeout is provided

2019-06-11 Thread huxihx (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8503:
-

Assignee: huxihx

> AdminClient should ignore retries config if a custom timeout is provided
> 
>
> Key: KAFKA-8503
> URL: https://issues.apache.org/jira/browse/KAFKA-8503
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> The admin client takes a `retries` config similar to the producer. The 
> default value is 5. Individual APIs also accept an optional timeout, which is 
> defaulted to `request.timeout.ms`. The call will fail if either `retries` or 
> the API timeout is exceeded. This is not very intuitive. I think a user would 
> expect to wait if they provided a timeout and the operation cannot be 
> completed. In general, timeouts are much easier for users to work with and 
> reason about.
> A couple options are either to ignore `retries` in this case or to increase 
> the default value of `retries` to something large and not likely to be 
> exceeded. I propose to do the first. Longer term, we could consider 
> deprecating `retries` and avoiding the overloading of `request.timeout.ms` by 
> providing a `default.api.timeout.ms` similar to the consumer.
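As a hedged illustration of the two knobs being discussed (the bootstrap address and timeout values below are placeholders), a caller today combines the client-wide `retries` config with a per-call timeout passed through the options object, and the call can fail on whichever limit is hit first:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListTopicsOptions;

public class AdminTimeoutDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(AdminClientConfig.RETRIES_CONFIG, 5);                 // client-wide retry budget
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);  // per-request timeout
        try (AdminClient admin = AdminClient.create(props)) {
            // The caller expresses intent with a 60s API timeout, yet under the
            // behaviour described above the call can still fail earlier once the
            // retry budget is exhausted.
            ListTopicsOptions options = new ListTopicsOptions().timeoutMs(60000);
            admin.listTopics(options).names().get().forEach(System.out::println);
        }
    }
}
{code}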



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8442) Inconsistent ISR output in topic command when using --bootstrap-server

2019-05-28 Thread huxihx (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8442:
-

Assignee: huxihx

> Inconsistent ISR output in topic command when using --bootstrap-server
> --
>
> Key: KAFKA-8442
> URL: https://issues.apache.org/jira/browse/KAFKA-8442
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> If there is no leader for a partition, the Metadata API returns an empty ISR. 
> When using the `--bootstrap-server` option with `kafka-topics.sh`, this leads 
> to the following output:
> {code}
> Topic:foo   PartitionCount:1ReplicationFactor:2 
> Configs:segment.bytes=1073741824
> Topic: foo  Partition: 0Leader: noneReplicas: 1,3   Isr: 
> {code}
> When using `--zookeeper`, we display the current ISR correctly:
> {code}
> Topic:foo   PartitionCount:1ReplicationFactor:2 Configs:
> Topic: foo  Partition: 0Leader: -1  Replicas: 1,3   Isr: 1
> {code}
> To avoid confusion, we should make this output consistent or at least not 
> misleading. We should either change the Metadata API to print the ISR when we 
> have it or we can change the output of the topic command to `N/A` or 
> something like that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8402) bin/kafka-preferred-replica-election.sh fails if generated json is bigger than 1MB

2019-05-22 Thread huxihx (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846315#comment-16846315
 ] 

huxihx commented on KAFKA-8402:
---

 A possible solution is to throw an exception when the length of the encoded 
byte array hits the 1MB threshold.
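A minimal sketch of that check (the class name, threshold constant, and error message are assumptions, not the actual patch):

{code:java}
import java.nio.charset.StandardCharsets;

public final class ZkPayloadSizeCheck {

    // ZooKeeper's default jute.maxbuffer allows roughly 1 MB of data per znode.
    private static final int MAX_ZNODE_BYTES = 1024 * 1024;

    // Fail fast with a descriptive error instead of letting the ZooKeeper write
    // fail opaquely when the generated election JSON is too large.
    public static void validate(String electionJson) {
        int size = electionJson.getBytes(StandardCharsets.UTF_8).length;
        if (size >= MAX_ZNODE_BYTES) {
            throw new IllegalArgumentException(
                "Preferred replica election JSON is " + size + " bytes, which exceeds the "
                    + MAX_ZNODE_BYTES + "-byte znode limit; rerun with an explicit subset of partitions");
        }
    }
}
{code}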

> bin/kafka-preferred-replica-election.sh fails if generated json is bigger 
> than 1MB
> --
>
> Key: KAFKA-8402
> URL: https://issues.apache.org/jira/browse/KAFKA-8402
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.1.1
>Reporter: Vyacheslav Stepanov
>Priority: Major
>
> If I run the script {{bin/kafka-preferred-replica-election.sh}} without 
> specifying the list of topics/partitions, it will get all topics/partitions 
> from ZooKeeper, transform them to JSON, and then create the ZooKeeper node 
> {{/admin/preferred_replica_election}} with that JSON as the node's data. If 
> the generated JSON is bigger than 1MB (the default maximum data size of a 
> ZooKeeper node), the script fails without giving a good description of the 
> failure. The 1MB limit can be reached if the number of topics/partitions is 
> high enough.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8338) Improve consumer offset expiration logic to take subscription into account

2019-05-14 Thread huxihx (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8338:
-

Assignee: huxihx

> Improve consumer offset expiration logic to take subscription into account
> --
>
> Key: KAFKA-8338
> URL: https://issues.apache.org/jira/browse/KAFKA-8338
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>Assignee: huxihx
>Priority: Major
>
> Currently, we expire consumer offsets for a group after the group is 
> considered gone.
> There is a case where the consumer group still exists, but is now subscribed 
> to different topics. In that case, the offsets of the old topics will never 
> expire and if lag is monitored, the monitors will show ever-growing lag on 
> those topics. 
> We need to improve the logic to expire the consumer offsets if the consumer 
> group didn't subscribe to specific topics/partitions for enough time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8350) Splitting batches should consider topic-level message size

2019-05-10 Thread huxihx (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx updated KAFKA-8350:
--
Description: 
Currently, producers split batches based on the batch size. However, the 
split will never succeed when the batch size is much larger than the 
topic-level max message size.

For instance, if the batch size is set to 8MB but we keep the default value 
for the broker-side `message.max.bytes` (1000012, about 1MB), the producer will 
endlessly try to split a large batch but never succeed, as shown below:
{code:java}
[2019-05-10 16:25:09,233] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 61 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:10,021] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 62 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:10,758] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 63 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:12,071] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 64 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617){code}
A better solution is to have the producer split based on the minimum of 
these two configs. However, it is tricky for the client to get the topic-level 
or broker-level config values. There seem to be three ways to do this:
 # When the broker throws `RecordTooLargeException`, do not swallow its real 
message, since it already contains the max message size. If the message is not 
swallowed, the client can easily get it from the response.
 # Add code to issue a `DescribeConfigsRequest` to retrieve the value.
 # If splitting fails, decrease the batch size gradually until the split 
succeeds. For example,

{code:java}
// In RecordAccumulator.java
private int steps = 1;
..
public int splitAndReenqueue(ProducerBatch bigBatch) {
    ..
    Deque<ProducerBatch> dq = bigBatch.split(this.batchSize / steps);
    if (dq.size() == 1) // split failed
        steps++;
    ..
}{code}
Do all of these make sense?

 

 

 

  was:
Currently, producers do the batch splitting based on the batch size. However, 
the split will never succeed when batch size is greatly larger than the 
topic-level max message size.

For instance, if the batch size is set to 8MB but we maintain the default value 
for broker-side `message.max.bytes` (1000012, about 1MB), producer will 
endlessly try to split a large batch but never succeeded, as shown below:
{code:java}
[2019-05-10 16:25:09,233] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 61 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:10,021] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 62 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:10,758] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 63 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:12,071] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 64 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617){code}
A better solution is to have producer do splitting based on the minimum of 
these two configs. However, it is tricky for the client to get the topic-level 
or broker-level config values. Seems  there could be three ways to do this:
 # When broker throws `RecordTooLargeException`, do not swallow its real 
message since it contains the max message size already. If the message is not 
swallowed, the client easily gets it from the response.
 # Add code to issue  `DescribeConfigsRequest` to retrieve the value.
 # If splitting failed, lower down the batch size gradually until the split is 
successful. For  example, 

{code:java}
// In RecordAccumulator.java
private int steps = 1;
..
public int splitAndReenqueue(ProducerBatch bigBatch) {
..
Deque<ProducerBatch> dq = bigBatch.split(this.batchSize / steps);
if (dq.size() == 1) // 

[jira] [Created] (KAFKA-8350) Splitting batches should consider topic-level message size

2019-05-10 Thread huxihx (JIRA)
huxihx created KAFKA-8350:
-

 Summary: Splitting batches should consider topic-level message size
 Key: KAFKA-8350
 URL: https://issues.apache.org/jira/browse/KAFKA-8350
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 2.3.0
Reporter: huxihx


Currently, producers split batches based on the batch size. However, the 
split will never succeed when the batch size is much larger than the 
topic-level max message size.

For instance, if the batch size is set to 8MB but we keep the default value 
for the broker-side `message.max.bytes` (1000012, about 1MB), the producer will 
endlessly try to split a large batch but never succeed, as shown below:
{code:java}
[2019-05-10 16:25:09,233] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 61 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:10,021] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 62 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:10,758] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 63 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617)
[2019-05-10 16:25:12,071] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 64 on topic-partition test-0, splitting and retrying 
(2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender:617){code}
A better solution is to have the producer split based on the minimum of 
these two configs. However, it is tricky for the client to get the topic-level 
or broker-level config values. There seem to be three ways to do this:
 # When the broker throws `RecordTooLargeException`, do not swallow its real 
message, since it already contains the max message size. If the message is not 
swallowed, the client can easily get it from the response.
 # Add code to issue a `DescribeConfigsRequest` to retrieve the value (see the 
sketch below).
 # If splitting fails, lower the batch size gradually until the split 
succeeds. For example,

{code:java}
// In RecordAccumulator.java
private int steps = 1;
..
public int splitAndReenqueue(ProducerBatch bigBatch) {
    ..
    Deque<ProducerBatch> dq = bigBatch.split(this.batchSize / steps);
    if (dq.size() == 1) // split failed
        steps++;
    ..
}{code}
Do all of these make sense?
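For the second option listed above, a hedged sketch of how a client could look up the topic-level max message size via AdminClient#describeConfigs (the topic name and bootstrap address are placeholders):

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public class TopicMaxMessageBytesLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Read the effective topic-level max.message.bytes for topic "test".
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "test");
            Map<ConfigResource, Config> configs =
                admin.describeConfigs(Collections.singleton(topic)).all().get();
            String maxMessageBytes =
                configs.get(topic).get(TopicConfig.MAX_MESSAGE_BYTES_CONFIG).value();
            System.out.println("max.message.bytes = " + maxMessageBytes);
        }
    }
}
{code}

The producer could then split against min(batch.size, max.message.bytes) instead of the batch size alone.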

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8211) Flaky Test: ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan

2019-04-10 Thread huxihx (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8211:
-

Assignee: huxihx

> Flaky Test: ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan
> -
>
> Key: KAFKA-8211
> URL: https://issues.apache.org/jira/browse/KAFKA-8211
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.3.0
>Reporter: Bill Bejeck
>Assignee: huxihx
>Priority: Major
> Fix For: 2.3.0
>
>
> Failed in build [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20778/]
>  
> {noformat}
> Error Message
> java.lang.AssertionError: Expected that consumer group has consumed all 
> messages from topic/partition.
> Stacktrace
> java.lang.AssertionError: Expected that consumer group has consumed all 
> messages from topic/partition.
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
>   at 
> kafka.admin.ResetConsumerGroupOffsetTest.awaitConsumerProgress(ResetConsumerGroupOffsetTest.scala:364)
>   at 
> kafka.admin.ResetConsumerGroupOffsetTest.produceConsumeAndShutdown(ResetConsumerGroupOffsetTest.scala:359)
>   at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan(ResetConsumerGroupOffsetTest.scala:323)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
> 
