[GitHub] [kafka] tang7526 commented on pull request #10588: KAFKA-12662: add unit test for ProducerPerformance

2021-05-28 Thread GitBox


tang7526 commented on pull request #10588:
URL: https://github.com/apache/kafka/pull/10588#issuecomment-850770445


   > > BTW, could you add tests for the code used to generate data? 
(https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java#L127)
   > 
   > Ah, right! Thanks for the reminder, @chia7712!
   > @tang7526, this is the PR (#10469) that fixes the random data generation. I agree we should add some tests to protect this change. FYI
   
   Thank you.
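   A minimal sketch of what such a test could look like, slotted into the ProducerPerformanceTest class discussed in this thread. The `generateRandomPayload` name and signature here are assumptions based on the linked trunk code, not the actual test:
   
   ```java
   // Sketch only: generateRandomPayload and its signature are assumed from the
   // linked ProducerPerformance.java; java.util.Random and java.util.ArrayList
   // imports are also assumed.
   @Test
   public void testGenerateRandomPayload() {
       Random random = new Random(0);        // fixed seed keeps the test deterministic
       byte[] payload = new byte[100];       // record-size mode, i.e. no payload file
       List<byte[]> payloadByteList = new ArrayList<>();

       byte[] result = ProducerPerformance.generateRandomPayload(100, payloadByteList, payload, random);

       assertEquals(100, result.length);     // payload honors the requested record size
   }
   ```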


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] tang7526 commented on a change in pull request #10588: KAFKA-12662: add unit test for ProducerPerformance

2021-05-28 Thread GitBox


tang7526 commented on a change in pull request #10588:
URL: https://github.com/apache/kafka/pull/10588#discussion_r641890279



##
File path: 
tools/src/test/java/org/apache/kafka/tools/ProducerPerformanceTest.java
##
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.tools;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.Callback;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.mockito.ArgumentMatchers;
+import org.mockito.Mock;
+import org.mockito.Spy;
+import org.mockito.junit.jupiter.MockitoExtension;
+
+import net.sourceforge.argparse4j.inf.ArgumentParser;
+import net.sourceforge.argparse4j.inf.ArgumentParserException;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.util.List;
+import java.util.Properties;
+
+import static java.util.Arrays.asList;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+
+@ExtendWith(MockitoExtension.class)
+public class ProducerPerformanceTest {
+
+@Mock
+KafkaProducer<byte[], byte[]> producerMock;
+
+@Spy
+ProducerPerformance producerPerformaceSpy;
+
+private File createTempFile(String contents) throws IOException {
+File file = File.createTempFile("ProducerPerformanceTest", ".tmp");
+file.deleteOnExit();
+final FileWriter writer = new FileWriter(file);
+writer.write(contents);
+writer.close();
+return file;
+}
+
+@Test
+public void testReadPayloadFile() throws Exception {
+File payloadFile = createTempFile("Hello\nKafka");
+String payloadFilePath = payloadFile.getAbsolutePath();
+String payloadDelimiter = "\n";
+
+List<byte[]> payloadByteList = ProducerPerformance.readPayloadFile(payloadFilePath, payloadDelimiter);
+
+assertEquals(2, payloadByteList.size());
+assertEquals("Hello", new String(payloadByteList.get(0)));
+assertEquals("Kafka", new String(payloadByteList.get(1)));
+}
+
+@Test
+public void testReadProps() throws Exception {
+
+List<String> producerProps = asList("bootstrap.servers=localhost:9000");

Review comment:
   > `asList` -> `Collections.singletonList`
   
   Done.
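   For reference, the suggested form looks like this (with `java.util.Collections` imported):
   
   ```java
   // Collections.singletonList avoids the varargs Arrays.asList call for a single element.
   List<String> producerProps = Collections.singletonList("bootstrap.servers=localhost:9000");
   ```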

##
File path: tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java
##
@@ -46,6 +47,11 @@
 public class ProducerPerformance {
 
 public static void main(String[] args) throws Exception {
+ProducerPerformance perf = new ProducerPerformance();
+perf.start(args);
+}
+
+public void start(String[] args) throws IOException {

Review comment:
   > Could you change it from public to package-private?
   
   Done

##
File path: tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java
##
@@ -190,8 +166,53 @@ public static void main(String[] args) throws Exception {
 
 }
 
+public KafkaProducer<byte[], byte[]> createKafkaProducer(Properties props) {

Review comment:
   > ditto. package-private
   
   Done

##
File path: tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java
##
@@ -190,8 +166,53 @@ public static void main(String[] args) throws Exception {
 
 }
 
+public KafkaProducer<byte[], byte[]> createKafkaProducer(Properties props) {
+return new KafkaProducer<>(props);
+}
+
+public static Properties readProps(List<String> producerProps, String producerConfig, String transactionalId,

Review comment:
   > ditto. package-private
   
   Done

##
File path: tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java
##
@@ -190,8 +166,53 @@ public static void main(String[] args) throws Exception {
 
 }
 
+public KafkaProducer<byte[], byte[]> createKafkaProducer(Properties props) {
+return new KafkaProducer<>(props);

[GitHub] [kafka] tang7526 commented on a change in pull request #10588: KAFKA-12662: add unit test for ProducerPerformance

2021-05-28 Thread GitBox


tang7526 commented on a change in pull request #10588:
URL: https://github.com/apache/kafka/pull/10588#discussion_r641890244



##
File path: 
tools/src/test/java/org/apache/kafka/tools/ProducerPerformanceTest.java
##
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.tools;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.Callback;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.mockito.ArgumentMatchers;
+import org.mockito.Mock;
+import org.mockito.Spy;
+import org.mockito.junit.jupiter.MockitoExtension;
+
+import net.sourceforge.argparse4j.inf.ArgumentParser;
+import net.sourceforge.argparse4j.inf.ArgumentParserException;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.util.List;
+import java.util.Properties;
+
+import static java.util.Arrays.asList;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+
+@ExtendWith(MockitoExtension.class)
+public class ProducerPerformanceTest {
+
+@Mock
+KafkaProducer<byte[], byte[]> producerMock;
+
+@Spy
+ProducerPerformance producerPerformaceSpy;
+
+private File createTempFile(String contents) throws IOException {
+File file = File.createTempFile("ProducerPerformanceTest", ".tmp");
+file.deleteOnExit();
+final FileWriter writer = new FileWriter(file);
+writer.write(contents);
+writer.close();
+return file;
+}
+
+@Test
+public void testReadPayloadFile() throws Exception {
+File payloadFile = createTempFile("Hello\nKafka");
+String payloadFilePath = payloadFile.getAbsolutePath();
+String payloadDelimiter = "\n";
+
+List<byte[]> payloadByteList = ProducerPerformance.readPayloadFile(payloadFilePath, payloadDelimiter);
+
+assertEquals(2, payloadByteList.size());
+assertEquals("Hello", new String(payloadByteList.get(0)));
+assertEquals("Kafka", new String(payloadByteList.get(1)));
+}
+
+@Test
+public void testReadProps() throws Exception {
+
+List<String> producerProps = asList("bootstrap.servers=localhost:9000");
+String producerConfig = createTempFile("acks=1").getAbsolutePath();
+String transactionalId = "1234";
+boolean transactionsEnabled = true;
+
+Properties prop = ProducerPerformance.readProps(producerProps, 
producerConfig, transactionalId, transactionsEnabled);
+
+assertNotNull(prop);
+assertEquals(5, prop.size());
+}
+
+@Test
+public void testNumberOfCallsForSendAndClose() throws IOException {
+
+doReturn(null).when(producerMock).send(ArgumentMatchers.<ProducerRecord<byte[], byte[]>>any(), ArgumentMatchers.any());

Review comment:
   > redundant type arguments
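   For reference, a sketch of the stubbing once the redundant explicit type arguments are dropped; Mockito infers them from the mocked producer's generic signature (using the statically imported `any`):
   
   ```java
   // The compiler infers the type arguments from producerMock's signature,
   // so the explicit type witness can be removed:
   doReturn(null).when(producerMock).send(any(), any());
   ```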

[GitHub] [kafka] tang7526 commented on a change in pull request #10588: KAFKA-12662: add unit test for ProducerPerformance

2021-05-28 Thread GitBox


tang7526 commented on a change in pull request #10588:
URL: https://github.com/apache/kafka/pull/10588#discussion_r641890227



##
File path: 
tools/src/test/java/org/apache/kafka/tools/ProducerPerformanceTest.java
##
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.tools;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.Callback;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.mockito.ArgumentMatchers;
+import org.mockito.Mock;
+import org.mockito.Spy;
+import org.mockito.junit.jupiter.MockitoExtension;
+
+import net.sourceforge.argparse4j.inf.ArgumentParser;
+import net.sourceforge.argparse4j.inf.ArgumentParserException;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.util.List;
+import java.util.Properties;
+
+import static java.util.Arrays.asList;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+
+@ExtendWith(MockitoExtension.class)
+public class ProducerPerformanceTest {
+
+@Mock
+KafkaProducer<byte[], byte[]> producerMock;
+
+@Spy
+ProducerPerformance producerPerformaceSpy;

Review comment:
   > typo: `producerPerformaceSpy` -> `producerPerformanceSpy`
   
   Done.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] tang7526 commented on a change in pull request #10588: KAFKA-12662: add unit test for ProducerPerformance

2021-05-28 Thread GitBox


tang7526 commented on a change in pull request #10588:
URL: https://github.com/apache/kafka/pull/10588#discussion_r641890191



##
File path: 
tools/src/test/java/org/apache/kafka/tools/ProducerPerformanceTest.java
##
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.tools;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.Callback;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.mockito.ArgumentMatchers;
+import org.mockito.Mock;
+import org.mockito.Spy;
+import org.mockito.junit.jupiter.MockitoExtension;
+
+import net.sourceforge.argparse4j.inf.ArgumentParser;
+import net.sourceforge.argparse4j.inf.ArgumentParserException;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.util.List;
+import java.util.Properties;
+
+import static java.util.Arrays.asList;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+
+@ExtendWith(MockitoExtension.class)
+public class ProducerPerformanceTest {
+
+@Mock
+KafkaProducer<byte[], byte[]> producerMock;
+
+@Spy
+ProducerPerformance producerPerformaceSpy;

Review comment:
   > BTW, could you add tests for the code used to generate data? 
(https://github.com/apache/kafka/blob/trunk/tools/src/main/java/org/apache/kafka/tools/ProducerPerformance.java#L127)
   
   Done.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] dengziming commented on pull request #10789: KAFKA-6084: Propagate JSON parsing errors in ReassignPartitionsCommand

2021-05-28 Thread GitBox


dengziming commented on pull request #10789:
URL: https://github.com/apache/kafka/pull/10789#issuecomment-850759832


   Hi @viktorsomogyi @ijuma, PTAL.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] dengziming opened a new pull request #10789: KAFKA-6084: Propagate JSON parsing errors in ReassignPartitionsCommand

2021-05-28 Thread GitBox


dengziming opened a new pull request #10789:
URL: https://github.com/apache/kafka/pull/10789


   *More detailed description of your change*
   I'm verifying partition reassignment in KRaft mode, and I accidentally found 
that #4090 only propagates errors when parsing the reassignment JSON; we'd better 
propagate errors when parsing other JSON as well.
   
   *Summary of testing strategy (including rationale)*
   unit test
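   
   A minimal illustration of the kind of error propagation described above, using Jackson (which Kafka uses for JSON); the helper name and wrapping exception are assumptions for the sketch, not the PR's actual code:
   
   ```java
   import com.fasterxml.jackson.core.JsonProcessingException;
   import com.fasterxml.jackson.databind.JsonNode;
   import com.fasterxml.jackson.databind.ObjectMapper;
   
   // Hypothetical helper: surface the parser's message to the caller
   // instead of swallowing it.
   public class JsonParsing {
       private static final ObjectMapper MAPPER = new ObjectMapper();
   
       static JsonNode parseOrThrow(String json, String what) {
           try {
               return MAPPER.readTree(json);
           } catch (JsonProcessingException e) {
               // Propagate the underlying parse error with context.
               throw new IllegalArgumentException("Failed to parse " + what + ": " + e.getMessage(), e);
           }
       }
   }
   ```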
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ableegoldman opened a new pull request #10788: KAFKA-12648: Pt. 3 - addNamedTopology API

2021-05-28 Thread GitBox


ableegoldman opened a new pull request #10788:
URL: https://github.com/apache/kafka/pull/10788


   Pt. 1: [#10609](https://github.com/apache/kafka/pull/10609)
   Pt. 2: [10683](https://github.com/apache/kafka/pull/10683)
   
   In Pt. 3 we implement the `addNamedTopology` API. This can be used to update 
the processing topology of a running Kafka Streams application without 
resetting the app, or even pausing/restarting the process. It's up to the user 
to call this API on every instance of an application so that all clients are 
able to run the newly added NamedTopology. This should not be too much of a 
burden, as it only requires that each client eventually be updated by the user, 
not that they do so in a synchronous or even ordered fashion -- under the 
covers, Streams will take care of keeping the internal state consistent while 
the various clients converge on the latest view of the full topology.
   
   Internally, Streams will be leveraging rebalances and subscriptions that 
report each client's currently known NamedTopologies. When a new NamedTopology 
is added, a rebalance will be triggered to distribute the tasks that correspond 
to it. To minimize disruption and wasted work, the assignor just computes the 
desired eventual assignment of these new tasks to clients regardless of whether 
the target client has been issued the `addNamedTopology` request yet. If a 
client receives tasks for a NamedTopology it does not yet recognize, it simply 
files them away and continues to process its other topologies. Once it receives 
this new NamedTopology, it checks whether the name matches those of the unknown 
tasks, and if it does then these tasks will begin to be processed without 
triggering a new rebalance. If the new NamedTopology does not match any unknown 
tasks it has received, then the client must trigger a fresh rebalance for this 
new NamedTopology.
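   
   A hypothetical usage sketch of the rolling update flow described above; the wrapper type and method names are illustrative assumptions, not the final public API:
   
   ```java
   // Hypothetical stand-ins for the new API; addNamedTopology is the point here.
   interface NamedTopology {
       String name();
   }
   
   interface NamedTopologyStreams {
       // Adds a topology to this instance; parked tasks for a previously unknown
       // topology of the same name start processing without a fresh rebalance.
       void addNamedTopology(NamedTopology topology);
   }
   
   class AddTopologyRollout {
       // Each instance of the application eventually makes the same call; no
       // synchronization or ordering across instances is required.
       static void rollOut(NamedTopologyStreams streams, NamedTopology newTopology) {
           streams.addNamedTopology(newTopology);
       }
   }
   ```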


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma edited a comment on pull request #9229: MINOR: Reduce allocations in requests via buffer caching

2021-05-28 Thread GitBox


ijuma edited a comment on pull request #9229:
URL: https://github.com/apache/kafka/pull/9229#issuecomment-850747419


   This PR:
   
   ```
   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_consumer_throughput.security_protocol=PLAINTEXT.compression_type=lz4
   status: PASS
   run time:   1 minute 16.243 seconds
   {"records_per_sec": 2044571.6622, "mb_per_sec": 194.9855}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_consumer_throughput.security_protocol=PLAINTEXT.compression_type=zstd
   status: PASS
   run time:   1 minute 19.227 seconds
   {"records_per_sec": 1779992.88, "mb_per_sec": 169.7533}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_and_consumer.security_protocol=PLAINTEXT.compression_type=lz4
   status: PASS
   run time:   1 minute 13.064 seconds
   {"producer": {"records_per_sec": 402868.423173, "mb_per_sec": 38.42}, 
"consumer": {"records_per_sec": 408363.28, "mb_per_sec": 38.9446}}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_and_consumer.security_protocol=PLAINTEXT.compression_type=zstd
   status: PASS
   run time:   1 minute 12.112 seconds
   {"producer": {"records_per_sec": 347886.588972, "mb_per_sec": 33.18}, 
"consumer": {"records_per_sec": 352534.7247, "mb_per_sec": 33.6203}}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_end_to_end_latency.security_protocol=PLAINTEXT.compression_type=lz4
   status: PASS
   run time:   51.120 seconds
   {"latency_50th_ms": 0.0, "latency_99th_ms": 3.0, "latency_999th_ms": 8.0}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_end_to_end_latency.security_protocol=PLAINTEXT.compression_type=zstd
   status: PASS
   run time:   45.992 seconds
   {"latency_50th_ms": 0.0, "latency_99th_ms": 3.0, "latency_999th_ms": 9.0}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_long_term_producer_throughput.security_protocol=PLAINTEXT.compression_type=lz4
   status: PASS
   run time:   1 minute 11.957 seconds
   {"0": {"records_per_sec": 400994.466276, "mb_per_sec": 38.24}}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_long_term_producer_throughput.security_protocol=PLAINTEXT.compression_type=zstd
   status: PASS
   run time:   1 minute 12.859 seconds
   {"0": {"records_per_sec": 366716.784627, "mb_per_sec": 34.97}}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=10
   status: PASS
   run time:   55.828 seconds
   {"records_per_sec": 1101318.782309, "mb_per_sec": 10.5}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=100
   status: PASS
   run time:   44.917 seconds
   {"records_per_sec": 373345.479833, "mb_per_sec": 35.6}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=1000
   status: PASS
   run time:   45.609 seconds
   {"records_per_sec": 63912.857143, "mb_per_sec": 60.95}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=1
   status: PASS
   run time:   45.665 seconds
   {"records_per_sec": 8099.57755, "mb_per_sec": 77.24}
   

   test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=10
   status: PASS
   run time:   42.768 seconds
   {"records_per_sec": 1127.731092, 

[GitHub] [kafka] ijuma edited a comment on pull request #9229: MINOR: Reduce allocations in requests via buffer caching

2021-05-28 Thread GitBox


ijuma edited a comment on pull request #9229:
URL: https://github.com/apache/kafka/pull/9229#issuecomment-850747419


   This PR:
   
   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_consumer_throughput.security_protocol=PLAINTEXT.compression_type=lz4
   > status: PASS
   > run time:   1 minute 16.243 seconds
   > {"records_per_sec": 2044571.6622, "mb_per_sec": 194.9855}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_consumer_throughput.security_protocol=PLAINTEXT.compression_type=zstd
   > status: PASS
   > run time:   1 minute 19.227 seconds
   > {"records_per_sec": 1779992.88, "mb_per_sec": 169.7533}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_and_consumer.security_protocol=PLAINTEXT.compression_type=lz4
   > status: PASS
   > run time:   1 minute 13.064 seconds
   > {"producer": {"records_per_sec": 402868.423173, "mb_per_sec": 38.42}, 
"consumer": {"records_per_sec": 408363.28, "mb_per_sec": 38.9446}}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_and_consumer.security_protocol=PLAINTEXT.compression_type=zstd
   > status: PASS
   > run time:   1 minute 12.112 seconds
   > {"producer": {"records_per_sec": 347886.588972, "mb_per_sec": 33.18}, 
"consumer": {"records_per_sec": 352534.7247, "mb_per_sec": 33.6203}}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_end_to_end_latency.security_protocol=PLAINTEXT.compression_type=lz4
   > status: PASS
   > run time:   51.120 seconds
   > {"latency_50th_ms": 0.0, "latency_99th_ms": 3.0, "latency_999th_ms": 8.0}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_end_to_end_latency.security_protocol=PLAINTEXT.compression_type=zstd
   > status: PASS
   > run time:   45.992 seconds
   > {"latency_50th_ms": 0.0, "latency_99th_ms": 3.0, "latency_999th_ms": 9.0}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_long_term_producer_throughput.security_protocol=PLAINTEXT.compression_type=lz4
   > status: PASS
   > run time:   1 minute 11.957 seconds
   > {"0": {"records_per_sec": 400994.466276, "mb_per_sec": 38.24}}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_long_term_producer_throughput.security_protocol=PLAINTEXT.compression_type=zstd
   > status: PASS
   > run time:   1 minute 12.859 seconds
   > {"0": {"records_per_sec": 366716.784627, "mb_per_sec": 34.97}}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=10
   > status: PASS
   > run time:   55.828 seconds
   > {"records_per_sec": 1101318.782309, "mb_per_sec": 10.5}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=100
   > status: PASS
   > run time:   44.917 seconds
   > {"records_per_sec": 373345.479833, "mb_per_sec": 35.6}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=1000
   > status: PASS
   > run time:   45.609 seconds
   > {"records_per_sec": 63912.857143, "mb_per_sec": 60.95}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=1
   > status: PASS
   > run time:   45.665 seconds
   > {"records_per_sec": 8099.57755, "mb_per_sec": 77.24}
   > 

   > test_id:

[GitHub] [kafka] ijuma commented on pull request #9229: MINOR: Reduce allocations in requests via buffer caching

2021-05-28 Thread GitBox


ijuma commented on pull request #9229:
URL: https://github.com/apache/kafka/pull/9229#issuecomment-850747419


   This PR:
   
   > 

   > SESSION REPORT (ALL TESTS)
   > ducktape version: 0.8.1
   > session_id:   2021-05-28--005
   > run time: 16 minutes 56.024 seconds
   > tests run:18
   > passed:   18
   > failed:   0
   > ignored:  0
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_consumer_throughput.security_protocol=PLAINTEXT.compression_type=lz4
   > status: PASS
   > run time:   1 minute 16.243 seconds
   > {"records_per_sec": 2044571.6622, "mb_per_sec": 194.9855}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_consumer_throughput.security_protocol=PLAINTEXT.compression_type=zstd
   > status: PASS
   > run time:   1 minute 19.227 seconds
   > {"records_per_sec": 1779992.88, "mb_per_sec": 169.7533}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_and_consumer.security_protocol=PLAINTEXT.compression_type=lz4
   > status: PASS
   > run time:   1 minute 13.064 seconds
   > {"producer": {"records_per_sec": 402868.423173, "mb_per_sec": 38.42}, 
"consumer": {"records_per_sec": 408363.28, "mb_per_sec": 38.9446}}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_and_consumer.security_protocol=PLAINTEXT.compression_type=zstd
   > status: PASS
   > run time:   1 minute 12.112 seconds
   > {"producer": {"records_per_sec": 347886.588972, "mb_per_sec": 33.18}, 
"consumer": {"records_per_sec": 352534.7247, "mb_per_sec": 33.6203}}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_end_to_end_latency.security_protocol=PLAINTEXT.compression_type=lz4
   > status: PASS
   > run time:   51.120 seconds
   > {"latency_50th_ms": 0.0, "latency_99th_ms": 3.0, "latency_999th_ms": 8.0}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_end_to_end_latency.security_protocol=PLAINTEXT.compression_type=zstd
   > status: PASS
   > run time:   45.992 seconds
   > {"latency_50th_ms": 0.0, "latency_99th_ms": 3.0, "latency_999th_ms": 9.0}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_long_term_producer_throughput.security_protocol=PLAINTEXT.compression_type=lz4
   > status: PASS
   > run time:   1 minute 11.957 seconds
   > {"0": {"records_per_sec": 400994.466276, "mb_per_sec": 38.24}}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_long_term_producer_throughput.security_protocol=PLAINTEXT.compression_type=zstd
   > status: PASS
   > run time:   1 minute 12.859 seconds
   > {"0": {"records_per_sec": 366716.784627, "mb_per_sec": 34.97}}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=10
   > status: PASS
   > run time:   55.828 seconds
   > {"records_per_sec": 1101318.782309, "mb_per_sec": 10.5}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=100
   > status: PASS
   > run time:   44.917 seconds
   > {"records_per_sec": 373345.479833, "mb_per_sec": 35.6}
   > 

   > test_id:
kafkatest.benchmarks.core.benchmark_test.Benchmark.test_producer_throughput.acks=1.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.compression_type=lz4.message_size=1000
   > status: PASS
   > run time:   45.609 seconds
   > {"records_per_sec": 63912.857143, "mb_per_sec": 60.95}
   > 

   > test_id:

[GitHub] [kafka] feyman2016 commented on pull request #10593: KAFKA-10800 Validate the snapshot id when the state machine creates a snapshot

2021-05-28 Thread GitBox


feyman2016 commented on pull request #10593:
URL: https://github.com/apache/kafka/pull/10593#issuecomment-850743673


   @jsancio Thank you for the kind review and help~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (KAFKA-12545) Integrate snapshot in the shell tool

2021-05-28 Thread loboxu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

loboxu reassigned KAFKA-12545:
--

Assignee: loboxu

> Integrate snapshot in the shell tool
> 
>
> Key: KAFKA-12545
> URL: https://issues.apache.org/jira/browse/KAFKA-12545
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jose Armando Garcia Sancio
>Assignee: loboxu
>Priority: Major
>  Labels: kip-500
>
> The MetadataNodeManager in the shell tool doesn't fully support snapshots. It 
> needs to handle:
>  # reloading snapshots
>  # SnapshotReader::snapshotId representing the end offset and epoch for the 
> records in the snapshot



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] cmccabe opened a new pull request #10787: KAFKA-12864: Move queue and timeline into server-common

2021-05-28 Thread GitBox


cmccabe opened a new pull request #10787:
URL: https://github.com/apache/kafka/pull/10787


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12864) Move KafkaEventQueue and timeline data structures into server-common

2021-05-28 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-12864:


 Summary: Move KafkaEventQueue and timeline data structures into 
server-common
 Key: KAFKA-12864
 URL: https://issues.apache.org/jira/browse/KAFKA-12864
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe
Assignee: Colin McCabe


Move KafkaEventQueue and timeline data structures into server-common



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] vvcephei commented on pull request #10507: KAFKA-8410: Migrating stateful operators to new Processor API

2021-05-28 Thread GitBox


vvcephei commented on pull request #10507:
URL: https://github.com/apache/kafka/pull/10507#issuecomment-850642648


   Hey @jeqo , now that https://github.com/apache/kafka/pull/10744 is merged, 
what do you think about just closing this PR and breaking it up into a series 
of PRs that migrate one or two processors at a time?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] vvcephei merged pull request #10744: KAFKA-8410: KTableProcessor migration groundwork

2021-05-28 Thread GitBox


vvcephei merged pull request #10744:
URL: https://github.com/apache/kafka/pull/10744


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12863) Configure controller snapshot generation

2021-05-28 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12863:
--

 Summary: Configure controller snapshot generation
 Key: KAFKA-12863
 URL: https://issues.apache.org/jira/browse/KAFKA-12863
 Project: Kafka
  Issue Type: Sub-task
  Components: controller
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12787) Integrate controller snapshot with the RaftClient

2021-05-28 Thread Jose Armando Garcia Sancio (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio updated KAFKA-12787:
---
Summary: Integrate controller snapshot with the RaftClient  (was: Configure 
and integrate controller snapshot with the RaftClient)

> Integrate controller snapshot with the RaftClient
> -
>
> Key: KAFKA-12787
> URL: https://issues.apache.org/jira/browse/KAFKA-12787
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
>  Labels: kip-500
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] jsancio opened a new pull request #10786: KAFKA-12787: Integrate controller snapshoting with raft client

2021-05-28 Thread GitBox


jsancio opened a new pull request #10786:
URL: https://github.com/apache/kafka/pull/10786


   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on pull request #10780: KAFKA-12782: Fix Javadocs generation by upgrading JDK

2021-05-28 Thread GitBox


ijuma commented on pull request #10780:
URL: https://github.com/apache/kafka/pull/10780#issuecomment-850609061


   @jlprat See 
https://github.com/apache/kafka/pull/10702#issuecomment-841663450 for 
additional things that need to be fixed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on a change in pull request #10585: MINOR: cleanTest ought to remove output of unitTest task and integrat…

2021-05-28 Thread GitBox


ijuma commented on a change in pull request #10585:
URL: https://github.com/apache/kafka/pull/10585#discussion_r641757745



##
File path: build.gradle
##
@@ -380,6 +380,15 @@ subprojects {
 }
   }
 
+  cleanTest {
+subprojects.each {
+  delete "${it.buildDir}/test-results/unitTest"
+  delete "${it.buildDir}/reports/tests/unitTest"
+  delete "${it.buildDir}/test-results/integrationTest"
+  delete "${it.buildDir}/reports/tests/integrationTest"
+}
+  }
+

Review comment:
   How do we know that the hardcoded directories here are all correct? Is 
there a way to get that data from gradle instead of hardcoding?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat commented on pull request #10780: KAFKA-12782: Fix Javadocs generation by upgrading JDK

2021-05-28 Thread GitBox


jlprat commented on pull request #10780:
URL: https://github.com/apache/kafka/pull/10780#issuecomment-850605601


   Thanks @ijuma for the review! The upgrade to JDK 16 is this one: 
https://github.com/apache/kafka/pull/10415, right?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma merged pull request #10780: KAFKA-12782: Fix Javadocs generation by upgrading JDK

2021-05-28 Thread GitBox


ijuma merged pull request #10780:
URL: https://github.com/apache/kafka/pull/10780


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on pull request #10780: KAFKA-12782: Fix Javadocs generation by upgrading JDK

2021-05-28 Thread GitBox


ijuma commented on pull request #10780:
URL: https://github.com/apache/kafka/pull/10780#issuecomment-850604343


   We'll hopefully switch to JDK 16 before the 3.0 release.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] lkokhreidze opened a new pull request #10785: KIP-708 / A Rack awareness for Kafka Streams

2021-05-28 Thread GitBox


lkokhreidze opened a new pull request #10785:
URL: https://github.com/apache/kafka/pull/10785


   
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-3539) KafkaProducer.send() may block even though it returns the Future

2021-05-28 Thread Moses Nakamura (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353465#comment-17353465
 ] 

Moses Nakamura commented on KAFKA-3539:
---

I've gotten lots of good feedback on the discussion thread so far.  [~cmccabe] 
mentioned that there is a "fix" for the metadata case that means we don't need 
to block, queue, or fail fast.  That sounds better, so I'd like to know more 
about that, but I think we have enough information to start winnowing down the 
KIP to a more specific recommendation once I understand that idea better.

> KafkaProducer.send() may block even though it returns the Future
> 
>
> Key: KAFKA-3539
> URL: https://issues.apache.org/jira/browse/KAFKA-3539
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Oleg Zhurakousky
>Priority: Critical
>  Labels: needs-discussion, needs-kip
>
> You can get more details from the us...@kafka.apache.org by searching on the 
> thread with the subject "KafkaProducer block on send".
> The bottom line is that method that returns Future must never block, since it 
> essentially violates the Future contract as it was specifically designed to 
> return immediately passing control back to the user to check for completion, 
> cancel etc.
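
A common mitigation while this is discussed, sketched under the assumption that callers can tolerate a dedicated dispatch thread: hand send() to an executor so the caller gets a future back immediately even if the metadata fetch inside send() blocks.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Dispatch send() on a dedicated thread so the caller is handed a future
// immediately, even if send() blocks internally on a metadata fetch.
public class NonBlockingSend {
    private final ExecutorService sendExecutor = Executors.newSingleThreadExecutor();

    public <K, V> CompletableFuture<RecordMetadata> sendAsync(
            KafkaProducer<K, V> producer, ProducerRecord<K, V> record) {
        CompletableFuture<RecordMetadata> result = new CompletableFuture<>();
        sendExecutor.execute(() -> producer.send(record, (metadata, exception) -> {
            if (exception != null) {
                result.completeExceptionally(exception);
            } else {
                result.complete(metadata);
            }
        }));
        return result;
    }
}
{code}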



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-3539) KafkaProducer.send() may block even though it returns the Future

2021-05-28 Thread Moses Nakamura (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353465#comment-17353465
 ] 

Moses Nakamura edited comment on KAFKA-3539 at 5/28/21, 5:15 PM:
-

I've gotten lots of good feedback on the discussion thread so far.  [~cmccabe] 
mentioned that there is a "fix" for the metadata case that means we don't need 
to block, queue, or fail fast.  That sounds better (and might avoid a breaking 
change?), so I'd like to know more about that, but I think we have enough 
information to start winnowing down the KIP to a more specific recommendation 
once I understand that idea better.


was (Author: moses.nakamura):
I've gotten lots of good feedback on the discussion thread so far.  [~cmccabe] 
mentioned that there is a "fix" for the metadata case that means we don't need 
to block, queue, or fail fast.  That sounds better, so I'd like to know more 
about that, but I think we have enough information to start winnowing down the 
KIP to a more specific recommendation once I understand that idea better.

> KafkaProducer.send() may block even though it returns the Future
> 
>
> Key: KAFKA-3539
> URL: https://issues.apache.org/jira/browse/KAFKA-3539
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Reporter: Oleg Zhurakousky
>Priority: Critical
>  Labels: needs-discussion, needs-kip
>
> You can get more details from the us...@kafka.apache.org by searching on the 
> thread with the subject "KafkaProducer block on send".
> The bottom line is that method that returns Future must never block, since it 
> essentially violates the Future contract as it was specifically designed to 
> return immediately passing control back to the user to check for completion, 
> cancel etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] jlprat commented on pull request #10784: KAFKA-12862: Update Scala fmt library and apply fixes

2021-05-28 Thread GitBox


jlprat commented on pull request #10784:
URL: https://github.com/apache/kafka/pull/10784#issuecomment-850556227


   Noted, I will ping them when touching that area.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on pull request #10784: KAFKA-12862: Update Scala fmt library and apply fixes

2021-05-28 Thread GitBox


ijuma commented on pull request #10784:
URL: https://github.com/apache/kafka/pull/10784#issuecomment-850554801


   Looks like the formatting changes are on the streams scala module. cc @mjsax 
@vvcephei @guozhangwang 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat commented on pull request #10784: KAFKA-12862: Update Scala fmt library and apply fixes

2021-05-28 Thread GitBox


jlprat commented on pull request #10784:
URL: https://github.com/apache/kafka/pull/10784#issuecomment-850550830


   Sorry to almost always point at you, @ijuma. If you have time, could you take 
a look at this one? Maybe it's worth changing the scala fmt config instead of 
leaving these defaults.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat commented on pull request #10783: MINOR: Dependency updates around Scala libraries

2021-05-28 Thread GitBox


jlprat commented on pull request #10783:
URL: https://github.com/apache/kafka/pull/10783#issuecomment-850547775


   Sure thing, let me fetch them and link them here:
   * Spotless: Unfortunately there are no release notes to be found
   * Scala 2.12.14: https://github.com/scala/scala/releases/tag/v2.12.14
   * Scala Logging: 
https://github.com/lightbend/scala-logging/releases/tag/v3.9.3
   * Scala Collection Compat:
 *  https://github.com/scala/scala-collection-compat/releases/tag/v2.3.1
 * https://github.com/scala/scala-collection-compat/releases/tag/v2.3.2
 * https://github.com/scala/scala-collection-compat/releases/tag/v2.4.0
 * https://github.com/scala/scala-collection-compat/releases/tag/v2.4.1
 * https://github.com/scala/scala-collection-compat/releases/tag/v2.4.2
 * https://github.com/scala/scala-collection-compat/releases/tag/v2.4.3
 * https://github.com/scala/scala-collection-compat/releases/tag/v2.4.4
   * Scala Java8 Compat:
 * https://github.com/scala/scala-java8-compat/releases/tag/v1.0.0-RC1
 * https://github.com/scala/scala-java8-compat/releases/tag/v1.0.0
   * Snappy: 
 * https://github.com/xerial/snappy-java/releases/tag/1.1.8.2
 * https://github.com/xerial/snappy-java/releases/tag/1.1.8.3
 * https://github.com/xerial/snappy-java/releases/tag/1.1.8.4


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on pull request #10783: MINOR: Dependency updates around Scala libraries

2021-05-28 Thread GitBox


ijuma commented on pull request #10783:
URL: https://github.com/apache/kafka/pull/10783#issuecomment-850542590


   Thanks for the PR. Can you please include links to the various release 
notes? We typically look through them to understand if there's anything to be 
worried about.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat commented on pull request #10783: MINOR: Dependency updates around Scala libraries

2021-05-28 Thread GitBox


jlprat commented on pull request #10783:
URL: https://github.com/apache/kafka/pull/10783#issuecomment-850534872


   cc @ijuma Could you please review this PR, as you were the one reviewing 
this for the last time?
   Thanks in advance


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-12862) Update ScalaFMT version to latest

2021-05-28 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353444#comment-17353444
 ] 

Josep Prat commented on KAFKA-12862:


For an example of how the code formatted with the new version looks, go to 
[https://github.com/apache/kafka/pull/10784] and look at the diff.

> Update ScalaFMT version to latest
> -
>
> Key: KAFKA-12862
> URL: https://issues.apache.org/jira/browse/KAFKA-12862
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Josep Prat
>Assignee: Josep Prat
>Priority: Minor
>
> When upgrading to the latest stable scala fmt version (2.7.5), lots of classes 
> need to be reformatted because of the dangling parentheses setting.
> I thought it was worth creating an issue, so there is also a place to discuss 
> or document possible Scala fmt config changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] jlprat opened a new pull request #10784: KAFKA-12862: Update Scala fmt library and apply fixes

2021-05-28 Thread GitBox


jlprat opened a new pull request #10784:
URL: https://github.com/apache/kafka/pull/10784


   Updates scala fmt to the latest stable version.
   Applies all the style fixes (all source code changes were made by scala fmt).
   Removes the dangling-parentheses setting, as `true` is already the default.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-10581) Ability to filter events at Kafka broker based on Kafka header value

2021-05-28 Thread Bhukailas Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhukailas Reddy updated KAFKA-10581:

Description: 
Provide the ability to filter Kafka message events at the broker based on the 
consumer's interest in header values.

 
 # Saves bandwidth for the consumer
 # Better throughput for consumers

  was:Provide an ability to filter kafka message events at Kafka broker based 
on consumer's interest on headers value


> Ability to filter events at Kafka broker based on Kafka header value
> 
>
> Key: KAFKA-10581
> URL: https://issues.apache.org/jira/browse/KAFKA-10581
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, consumer
>Reporter: Bhukailas Reddy
>Priority: Critical
>
> Provide the ability to filter Kafka message events at the broker based on the 
> consumer's interest in header values.
>  
>  # Saves bandwidth for the consumer
>  # Better throughput for consumers



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10581) Ability to filter events at Kafka broker based on Kafka header value

2021-05-28 Thread Bhukailas Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhukailas Reddy updated KAFKA-10581:

Description: Provide an ability to filter kafka message events at Kafka 
broker based on consumer's interest on headers value  (was: Provide an ability 
to filter kafka message events at Kafka broker based on consumer's interest)

> Ability to filter events at Kafka broker based on Kafka header value
> 
>
> Key: KAFKA-10581
> URL: https://issues.apache.org/jira/browse/KAFKA-10581
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, consumer
>Reporter: Bhukailas Reddy
>Priority: Critical
>
> Provide an ability to filter kafka message events at Kafka broker based on 
> consumer's interest on headers value



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10581) Ability to filter events at Kafka broker based on Kafka header value

2021-05-28 Thread Bhukailas Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhukailas Reddy updated KAFKA-10581:

Priority: Critical  (was: Major)

> Ability to filter events at Kafka broker based on Kafka header value
> 
>
> Key: KAFKA-10581
> URL: https://issues.apache.org/jira/browse/KAFKA-10581
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, consumer
>Reporter: Bhukailas Reddy
>Priority: Critical
>
> Provide an ability to filter kafka message events at Kafka broker based on 
> consumer's interest



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12862) Update ScalaFMT version to latest

2021-05-28 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12862:
--

 Summary: Update ScalaFMT version to latest
 Key: KAFKA-12862
 URL: https://issues.apache.org/jira/browse/KAFKA-12862
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Josep Prat
Assignee: Josep Prat


When upgrading to the latest stable scala fmt version (2.7.5), lots of classes 
need to be reformatted because of the dangling parentheses setting.

I thought it was worth creating an issue, so there is also a place to discuss 
or document possible Scala fmt config changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] jlprat opened a new pull request #10783: MINOR: Dependency updates around Scala libraries

2021-05-28 Thread GitBox


jlprat opened a new pull request #10783:
URL: https://github.com/apache/kafka/pull/10783


   Updates Scala version for 2.12 to the latest available
   Updates Scala libs to the latest ones available
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-4107) Support offset reset capability in Kafka Connect

2021-05-28 Thread Denis Savenko (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-4107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353419#comment-17353419
 ] 

Denis Savenko commented on KAFKA-4107:
--

Yes, this should indeed be implemented as a Kafka Connect REST API extension, 
rather than a separate CLI tool (or at least in addition to one).

Otherwise it would look really odd that the REST API is used for Kafka Connect 
management in general, but connector offsets alone are manipulated with some 
CLI utility.
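
A hypothetical shape for such an endpoint (not part of the Connect REST API 
today; the connector name is a placeholder):
{noformat}
# reset (delete) the stored offsets for one connector
curl -X DELETE http://localhost:8083/connectors/my-connector/offsets
{noformat}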

> Support offset reset capability in Kafka Connect
> 
>
> Key: KAFKA-4107
> URL: https://issues.apache.org/jira/browse/KAFKA-4107
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Priority: Major
>  Labels: needs-kip
>
> It would be useful in some cases to be able to reset connector offsets. For 
> example, if a topic in Kafka corresponding to a source database is 
> accidentally deleted (or deleted because of corrupt data), an administrator 
> may want to reset offsets and reproduce the log from the beginning. It may 
> also be useful to have support for overriding offsets, but that seems like a 
> less likely use case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12861) MockProducer raises NPE when no Serializer

2021-05-28 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gérald Quintana updated KAFKA-12861:

Description: 
Since KAFKA-10503, the MockProducer may raise NullPointerException when 
key/value serializers are not set:
{noformat}
java.lang.NullPointerException: null
  at 
org.apache.kafka.clients.producer.MockProducer.send(MockProducer.java:307){noformat}
This occurs when using MockProducer default constructor:
{code:java}
public MockProducer() {
 this(Cluster.empty(), false, null, null, null);
}{code}
The problem didn't occur on Kafka Client 2.6.

I understand this constructor is only for metadata, as described in the 
JavaDoc. However, defaulting to a no-op serializer (MockSerializer) would be a 
safer default. Removing the default constructor to force declaring a serializer 
could also be a solution.

  was:
Since KAFKA-10503, the MockProducer may raise NullPointerException when 
key/value serializers are not set:
*15:58:16*  java.lang.NullPointerException: null*15:58:16*  at 
org.apache.kafka.clients.producer.MockProducer.send(MockProducer.java:307)
This occurs when using MockProducer default constructor:
{code:java}
public MockProducer() {
 this(Cluster.empty(), false, null, null, null);
}{code}
The problem didn't occur on Kafka Client 2.6.

I understand this constructor is only for metadata, as described in the 
JavaDoc. However, defaulting to a no-op serializer (MockSerializer) would be a 
safer default. Removing the default constructor to force declaring a serializer 
could also be a solution.


> MockProducer raises NPE when no Serializer
> --
>
> Key: KAFKA-12861
> URL: https://issues.apache.org/jira/browse/KAFKA-12861
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.7.0, 2.8.0, 2.7.1
>Reporter: Gérald Quintana
>Priority: Minor
>
> Since KAFKA-10503, the MockProducer may raise NullPointerException when 
> key/value serializers are not set:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.kafka.clients.producer.MockProducer.send(MockProducer.java:307){noformat}
> This occurs when using MockProducer default constructor:
> {code:java}
> public MockProducer() {
>  this(Cluster.empty(), false, null, null, null);
> }{code}
> The problem didn't occur on Kafka Client 2.6.
> I understand this constructor is only for metadata, as described in the 
> JavaDoc. However, defaulting to a no-op serializer (MockSerializer) would be 
> a safer default. Removing the default constructor to force declaring a 
> serializer could also be a solution.
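
In the meantime, a sketch of the usual workaround is to pass serializers 
explicitly via the long-standing constructor taking an autoComplete flag and 
the two serializers:
{code:java}
import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

MockProducer<String, String> producer =
    new MockProducer<>(true, new StringSerializer(), new StringSerializer());

// send() can now serialize the key and value instead of throwing an NPE
producer.send(new ProducerRecord<>("topic", "key", "value"));
{code}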



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12861) MockProducer raises NPE when no Serializer

2021-05-28 Thread Jira
Gérald Quintana created KAFKA-12861:
---

 Summary: MockProducer raises NPE when no Serializer
 Key: KAFKA-12861
 URL: https://issues.apache.org/jira/browse/KAFKA-12861
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.7.1, 2.8.0, 2.7.0
Reporter: Gérald Quintana


Since KAFKA-10503, the MockProducer may raise NullPointerException when 
key/value serializers are not set:
{noformat}
java.lang.NullPointerException: null
  at 
org.apache.kafka.clients.producer.MockProducer.send(MockProducer.java:307)
{noformat}
This occurs when using MockProducer default constructor:
{code:java}
public MockProducer() {
 this(Cluster.empty(), false, null, null, null);
}{code}
The problem didn't occur on Kafka Client 2.6.

I understand this constructor is only for metadata, as described in the 
JavaDoc. However, defaulting to a no-op serializer (MockSerializer) would be a 
safer default. Removing the default constructor to force declaring a serializer 
could also be a solution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] showuon edited a comment on pull request #10736: KAFKA-9295: increase session timeout to 30 seconds

2021-05-28 Thread GitBox


showuon edited a comment on pull request #10736:
URL: https://github.com/apache/kafka/pull/10736#issuecomment-850396020


   OK, on second thought, I'm starting to think the root cause of this flaky 
test is the bug mentioned above. Increasing the session timeout is just a 
workaround for the data loss caused by rebalancing. I'll look into it further, 
and file a bug then. Any suggestions are welcome. 
   Before the root cause gets fixed, I think we can work around it by 
increasing the session timeout further. What do you think?
   Thank you.
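   A minimal sketch of the workaround on the consumer side (a test-local 
setting, not a proposed project-wide default):
   ```java
   Properties props = new Properties();
   // 30s instead of the 10s default gives slow CI machines more headroom
   // before the coordinator evicts the consumer and triggers a rebalance
   props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
   ```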


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] kosiakk commented on pull request #7539: KAFKA-6968: Adds calls to listener on rebalance of MockConsumer

2021-05-28 Thread GitBox


kosiakk commented on pull request #7539:
URL: https://github.com/apache/kafka/pull/7539#issuecomment-850399911


   Depending on the assignment strategy, not all currently assigned partitions 
will be revoked. For example 
`org.apache.kafka.clients.consumer.CooperativeStickyAssignor` tries to preserve 
currently assigned partitions.
   
   My Mock subclass replicates that behaviour using a simple set-difference 
calculation. I've just extended the MockConsumer to override the `rebalance` 
implementation, actually in Kotlin (here just to illustrate the intention)
   
   ```Kotlin
   @Synchronized
   override fun rebalance(newAssignment: Collection<TopicPartition>) {
       val listener = lastRebalanceListener ?: error("Please call `subscribe` before `rebalance`")

       val revoked = assignment() - newAssignment
       val assigned = newAssignment - assignment()

       listener.onPartitionsRevoked(revoked)
       super.rebalance(newAssignment)
       listener.onPartitionsAssigned(assigned)
   }
   ```
   
   or a patch proposal for the main class in Java:
   ```Java
   public synchronized void rebalance(Collection<TopicPartition> newAssignment) {
       this.records.clear();
       // todo check this.subscriptions.rebalanceListener() for null

       // copy before mutating so the subscription state itself is not modified
       final Set<TopicPartition> revoked = new HashSet<>(this.subscriptions.assignedPartitions());
       revoked.removeAll(newAssignment);

       final Set<TopicPartition> assigned = new HashSet<>(newAssignment);
       assigned.removeAll(this.subscriptions.assignedPartitions());

       this.subscriptions.rebalanceListener().onPartitionsRevoked(revoked);
       this.subscriptions.assignFromSubscribed(newAssignment);
       this.subscriptions.rebalanceListener().onPartitionsAssigned(assigned);
   }
   ```
   
   You can actually see this in the log of a real implementation when a second 
node joins the group:
   ```
   [consumerThread] INFO 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=consumer-test1-1, groupId=test1] Updating assignment with
Assigned partitions:   [INPUT-2, INPUT-3]
Current owned partitions:  [INPUT-2, INPUT-3, INPUT-0, 
INPUT-1]
Added partitions (assigned - owned):   []
Revoked partitions (owned - assigned): [INPUT-0, INPUT-1]
   ```
   
   and then the debugger confirms that `onPartitionsRevoked` is called only 
with a **2-element set** holding the difference, and that `onPartitionsAssigned` 
is called with an **empty set**.
   
   tl;dr: please don't revoke and then re-add partitions unnecessarily; it 
might be expensive in the app


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] showuon commented on pull request #10736: KAFKA-9295: increase session timeout to 30 seconds

2021-05-28 Thread GitBox


showuon commented on pull request #10736:
URL: https://github.com/apache/kafka/pull/10736#issuecomment-850396020


   OK, on second thought, I'm starting to think the root cause of this flaky 
test is the bug mentioned above. Increasing the session timeout is just a 
workaround for the data loss caused by rebalancing. I'll look into it further, 
and file a bug then. Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353301#comment-17353301
 ] 

KahnCheny commented on KAFKA-12860:
---

Hi [~junrao] is this still an issue?

> Partition offset due to non-monotonically incrementing offsets in logs
> --
>
> Key: KAFKA-12860
> URL: https://issues.apache.org/jira/browse/KAFKA-12860
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1.1
>Reporter: KahnCheny
>Priority: Major
>
> We encountered an issue with Kafka after running out of heap space. Several 
> brokers halted on startup with the error:
> {code:java}
> // Some comments here
> 2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal 
> error during KafkaServer startup. Prepare to shutdown
> kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
> to position 6553 no larger than the last offset appended (1125422119) to 
> /dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
> at 
> kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
> at 
> kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
> at 
> kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
> at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
> at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
> at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.log.LogSegment.recover(LogSegment.scala:278)
> at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
> at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
> at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at kafka.log.Log.loadSegmentFiles(Log.scala:322)
> at kafka.log.Log.loadSegments(Log.scala:405)
> at kafka.log.Log.(Log.scala:218)
> at kafka.log.Log$.apply(Log.scala:1776)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.
> {code}
> We dumped the log segment file and found that the partition offset 
> (1125422119) is duplicated due to non-monotonically incrementing offsets in 
> the log:
> {code:java}
> // Some comments here
> baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: 
> -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false position: 532548379 CreateTime: 1622078435585 isvalid: true size: 
> 110260 magic: 2 compresscodec: GZIP crc: 4024531289
> baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: 
> -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 
> magic: 2 compresscodec: GZIP crc: 1867381940
> baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: 
> -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 
> magic: 2 compresscodec: GZIP crc: 3993802638
> baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: 
> -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false position: 532782466 CreateTime: 

[jira] [Assigned] (KAFKA-12847) Dockerfile needed for kafka system tests needs changes

2021-05-28 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison reassigned KAFKA-12847:
--

Assignee: Abhijit Mane

> Dockerfile needed for kafka system tests needs changes
> --
>
> Key: KAFKA-12847
> URL: https://issues.apache.org/jira/browse/KAFKA-12847
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.8.0, 2.7.1
> Environment: Issue tested in environments below but is independent of 
> h/w arch. or Linux flavor: -
> 1.) RHEL-8.3 on x86_64 
> 2.) RHEL-8.3 on IBM Power (ppc64le)
> 3.) apache/kafka branch tested: trunk (master)
>Reporter: Abhijit Mane
>Assignee: Abhijit Mane
>Priority: Major
>  Labels: easyfix
> Attachments: Dockerfile.upstream
>
>
> Hello,
> I tried the apache/kafka system tests as per the documentation:
> [https://github.com/apache/kafka/tree/trunk/tests#readme]
> =
>  PROBLEM
>  ~~
> 1.) As root user, clone kafka github repo and start "kafka system tests"
>  # git clone [https://github.com/apache/kafka.git]
>  # cd kafka
>  # ./gradlew clean systemTestLibs
>  # bash tests/docker/run_tests.sh
> 2.) Dockerfile issue - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
> This file has an *UID* entry as shown below: -
>  ---
>  ARG *UID*="1000"
>  RUN useradd -u $*UID* ducker
> // {color:#de350b}*Error during docker build*{color} => useradd: UID 0 is not 
> unique, root user id is 0
>  ---
>  I ran everything as root which means the built-in bash environment variable 
> 'UID' always
> resolves to 0 and can't be changed. Hence, the docker build fails. The issue 
> should be seen even if run as non-root.
> 3.) Next, as root, as per README, I ran: -
> server:/kafka> *bash tests/docker/run_tests.sh*
> The ducker tool builds the container images & switches to user '*ducker*' 
> inside the container
> & maps kafka root dir ('kafka') from host to '/opt/kafka-dev' in the 
> container.
> Ref: 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak|https://github.com/apache/kafka/blob/trunk/tests/docker/ducker-ak]
> Ex:  docker run -d *-v "${kafka_dir}:/opt/kafka-dev"* 
> This fails as the 'ducker' user has *no write permissions* to create files 
> under 'kafka' root dir. Hence, it needs to be made writeable.
> // *chmod -R a+w kafka* 
>  – needed as container is run as 'ducker' and needs write access since kafka 
> root volume from host is mapped to container as "/opt/kafka-dev" where the 
> 'ducker' user writes logs
>  =
> =
>  *FIXES needed*
>  ~
>  1.) Dockerfile - 
> [https://github.com/apache/kafka/blob/trunk/tests/docker/Dockerfile]
>  Change 'UID' to '*UID_DUCKER*'.
> This won't conflict with built in bash env. var UID and the docker image 
> build should succeed.
>  ---
>  ARG *UID_DUCKER*="1000"
>  RUN useradd -u $*UID_DUCKER* ducker
> // *{color:#57d9a3}No Error{color}* => No conflict with built-in UID
>  ---
> 2.) README needs an update where we must ensure the kafka root dir from where 
> the tests 
>  are launched is writeable to allow the 'ducker' user to create results/logs.
>  # chmod -R a+w kafka
> With this, I was able to get the docker images built and system tests started 
> successfully.
>  =
> Also, I wonder whether or not upstream Dockerfile & System tests are part of 
> CI/CD and get tested for every PR. If so, this issue should have been caught.
>  
> *Question to kafka SME*
>  -
>  Do you believe this is a valid problem with the Dockerfile and the fix is 
> acceptable? 
>  Please let me know and I am happy to submit a PR with this fix.
> Thanks,
>  Abhijit
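
Putting the two fixes together, roughly (a sketch based on the description 
above, not the exact patch):
{noformat}
# tests/docker/Dockerfile: rename the build arg so it cannot clash with bash's $UID
ARG UID_DUCKER="1000"
RUN useradd -u $UID_DUCKER ducker

# on the host, before launching the tests, make the checkout writeable for 'ducker'
chmod -R a+w kafka
bash tests/docker/run_tests.sh
{noformat}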



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12850) Use JDK16 for builds instead of JDK15

2021-05-28 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353226#comment-17353226
 ] 

Josep Prat commented on KAFKA-12850:


Probably this task can be closed as it's a duplicate of KAFKA-12790, AFAIU.

> Use JDK16 for builds instead of JDK15
> -
>
> Key: KAFKA-12850
> URL: https://issues.apache.org/jira/browse/KAFKA-12850
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Josep Prat
>Priority: Major
>
> Given that JDK15 reached EOL in March 2021, it is probably worth migrating 
> the Jenkins build pipelines to use JDK16 instead.
> Unless there is a compelling reason to stay with JDK15.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] Abhijit-Mane opened a new pull request #10782: Fix issue with 'UID' string in Dockerfile used for System Test

2021-05-28 Thread GitBox


Abhijit-Mane opened a new pull request #10782:
URL: https://github.com/apache/kafka/pull/10782


   Jira issue: https://issues.apache.org/jira/browse/KAFKA-12847
   Fixes #12847
   
   Signed-off-by: Abhijit Mane 
   
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mimaison commented on pull request #10743: KIP-699: Work in progress

2021-05-28 Thread GitBox


mimaison commented on pull request #10743:
URL: https://github.com/apache/kafka/pull/10743#issuecomment-850300974


   @skaundinya15 Yeah, we may have to rebase, but that shouldn't be an issue. I 
did not touch ListOffsets in this PR; I'm only updating methods that interact 
with coordinators


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KahnCheny updated KAFKA-12860:
--
Description: 
We encountered an issue with Kafka after running out of heap space. Several 
brokers halted on startup with the error:

{code:java}
// Some comments here
2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error 
during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
to position 6553 no larger than the last offset appended (1125422119) to 
/dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
at 
kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:278)
at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:322)
at kafka.log.Log.loadSegments(Log.scala:405)
at kafka.log.Log.(Log.scala:218)
at kafka.log.Log$.apply(Log.scala:1776)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.
{code}




We dumped the log segment file and found that the partition offset (1125422119) 
is duplicated due to non-monotonically incrementing offsets in the log:

{code:java}
// Some comments here
baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 
2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 
2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 
2 compresscodec: GZIP crc: 3993802638
{color:#DE350B}baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 
lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 
isTransactional: false position: 532782466 CreateTime: 1622078471656 isvalid: 
true size: 107229 magic: 2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 
2 compresscodec: GZIP crc: 2377977722{color}
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532905251 CreateTime: 1622078466094 isvalid: true size: 29834 magic: 
2 

[jira] [Updated] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KahnCheny updated KAFKA-12860:
--
Description: 
We encountered an issue with Kafka after running out of heap space. Several 
brokers halted on startup with the error:

{code:java}
// Some comments here
2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error 
during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
to position 6553 no larger than the last offset appended (1125422119) to 
/dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
at 
kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:278)
at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:322)
at kafka.log.Log.loadSegments(Log.scala:405)
at kafka.log.Log.(Log.scala:218)
at kafka.log.Log$.apply(Log.scala:1776)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.
{code}




We dumped the log segment file and found that the partition offset (1125422119) 
is duplicated due to non-monotonically incrementing offsets in the log:

{code:java}
// Some comments here
baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 
2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 
2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 
2 compresscodec: GZIP crc: 3993802638
baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532782466 CreateTime: 1622078471656 isvalid: true size: 107229 magic: 
2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 
2 compresscodec: GZIP crc: 2377977722
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532905251 CreateTime: 1622078466094 isvalid: true size: 29834 magic: 
2 compresscodec: GZIP crc: 322023138

[GitHub] [kafka] feyman2016 commented on pull request #10593: KAFKA-10800 Validate the snapshot id when the state machine creates a snapshot

2021-05-28 Thread GitBox


feyman2016 commented on pull request #10593:
URL: https://github.com/apache/kafka/pull/10593#issuecomment-850279646


   Thanks for your quick reply, calling for review again @jsancio :)
   The failed test 
`org.apache.kafka.connect.integration.BlockingConnectorTest.testBlockInSinkTaskStop`
 is unrelated


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KahnCheny updated KAFKA-12860:
--
Description: 
We encountered an issue with Kafka after running out of heap space. Several 
brokers halted on startup with the error:

{code:java}
// Some comments here
2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error 
during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
to position 6553 no larger than the last offset appended (1125422119) to 
/dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
at 
kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:278)
at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:322)
at kafka.log.Log.loadSegments(Log.scala:405)
at kafka.log.Log.(Log.scala:218)
at kafka.log.Log$.apply(Log.scala:1776)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.
{code}




We dumped the log segment file and found that the partition offset (1125422119) 
is duplicated due to non-monotonically incrementing offsets in the log:

{code:java}
// Some comments here
baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 
2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 
2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 
2 compresscodec: GZIP crc: 3993802638
baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532782466 CreateTime: 1622078471656 isvalid: true size: 107229 magic: 
2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 
2 compresscodec: GZIP crc: 2377977722
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532905251 CreateTime: 1622078466094 isvalid: true size: 29834 magic: 
2 compresscodec: GZIP crc: 322023138

[jira] [Updated] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KahnCheny updated KAFKA-12860:
--
Description: 
We encountered an issue with Kafka after running out of heap space. Several 
agents stopped running due to errors during startup:

{code:java}
// Some comments here
2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error 
during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
to position 6553 no larger than the last offset appended (1125422119) to 
/dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
at 
kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:278)
at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:322)
at kafka.log.Log.loadSegments(Log.scala:405)
at kafka.log.Log.(Log.scala:218)
at kafka.log.Log$.apply(Log.scala:1776)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.
{code}




We dumped the log segment file and found that the partition offset (1125422119) 
is duplicated due to non-monotonically incrementing offsets in the log:

{code:java}
// Some comments here
baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 
2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 
2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 
2 compresscodec: GZIP crc: 3993802638
baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532782466 CreateTime: 1622078471656 isvalid: true size: 107229 magic: 
2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 
2 compresscodec: GZIP crc: 2377977722
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532905251 CreateTime: 1622078466094 isvalid: true size: 29834 magic: 
2 compresscodec: GZIP crc: 

[jira] [Updated] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KahnCheny updated KAFKA-12860:
--
Description: 
We encountered an issue with Kafka after running out of heap space. Several 
agents stopped running due to errors during startup:

{code:java}
// Some comments here
2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error 
during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
to position 6553 no larger than the last offset appended (1125422119) to 
/dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
at 
kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:278)
at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:322)
at kafka.log.Log.loadSegments(Log.scala:405)
at kafka.log.Log.(Log.scala:218)
at kafka.log.Log$.apply(Log.scala:1776)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.
{code}




We dumped the log segment file and found that the partition offset (1125422119) 
is duplicated due to non-monotonically incrementing offsets in the log:

{code:java}
// Some comments here
baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 
2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 
2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 
2 compresscodec: GZIP crc: 3993802638
baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532782466 CreateTime: 1622078471656 isvalid: true size: 107229 magic: 
2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 
2 compresscodec: GZIP crc: 2377977722
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532905251 CreateTime: 1622078466094 isvalid: true size: 29834 magic: 
2 compresscodec: GZIP crc: 

[jira] [Updated] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KahnCheny updated KAFKA-12860:
--
Description: 
We encountered an issue with Kafka after running out of heap space. Several 
agents stopped running due to errors during startup:

{code:java}
// Some comments here
2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error 
during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
to position 6553 no larger than the last offset appended (1125422119) to 
/dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
at 
kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:278)
at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:322)
at kafka.log.Log.loadSegments(Log.scala:405)
at kafka.log.Log.(Log.scala:218)
at kafka.log.Log$.apply(Log.scala:1776)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.
{code}




We dumped the log segment file and found that the partition offset is 
duplicated due to non-monotonically incrementing offsets in the log:

{code:java}
// Some comments here
baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 
2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 
2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 
2 compresscodec: GZIP crc: 3993802638
baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532782466 CreateTime: 1622078471656 isvalid: true size: 107229 magic: 
2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 
2 compresscodec: GZIP crc: 2377977722
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532905251 CreateTime: 1622078466094 isvalid: true size: 29834 magic: 
2 compresscodec: GZIP crc: 322023138
{code}
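
For reference, a dump like the above can be produced with Kafka's 
DumpLogSegments tool (<segment> below is a placeholder for the affected file):
{noformat}
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files <segment>.log --print-data-log
{noformat}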

[jira] [Created] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)
KahnCheny created KAFKA-12860:
-

 Summary: Partition offset due to non-monotonically incrementing 
offsets in logs
 Key: KAFKA-12860
 URL: https://issues.apache.org/jira/browse/KAFKA-12860
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.1
Reporter: KahnCheny


We encountered an issue with Kafka after running out of heap space. Several 
agents stopped running due to errors during startup:

2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error 
during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
to position 6553 no larger than the last offset appended (1125422119) to 
/dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
at 
kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.log.LogSegment.recover(LogSegment.scala:278)
at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.Log.loadSegmentFiles(Log.scala:322)
at kafka.log.Log.loadSegments(Log.scala:405)
at kafka.log.Log.(Log.scala:218)
at kafka.log.Log$.apply(Log.scala:1776)
at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.


We dumped the log segment file and found that the partition offset is 
duplicated due to non-monotonically incrementing offsets in the log:

baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 
2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 
2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 
2 compresscodec: GZIP crc: 3993802638
baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532782466 CreateTime: 1622078471656 isvalid: true size: 107229 magic: 
2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 
2 compresscodec: GZIP crc: 2377977722
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 

[GitHub] [kafka] saddays opened a new pull request #10781: MINOR: Reduce duplicate authentication check

2021-05-28 Thread GitBox


saddays opened a new pull request #10781:
URL: https://github.com/apache/kafka/pull/10781


   *More detailed description of your change,
   
   Reduce repeated authentication checks for the same broker.
   
   *Summary of testing strategy (including rationale)
   
   The requests belong to the same broker, so it's unnecessary to check every 
request, although the check is cheap.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lea updated KAFKA-12859:

Attachment: (was: 截图1.PNG)

> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
> Attachments: 截图3.PNG, 截图4.PNG, 截图5.PNG
>
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
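
For context on the exception quoted above: the client's network layer matches each response against the oldest in-flight request by correlation id, so a single duplicated response permanently desynchronizes the connection and every later response mismatches as well. A minimal conceptual sketch of that FIFO matching (illustrative names, not the actual NetworkClient internals):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch of FIFO correlation-id matching; illustrative only.
public class CorrelationCheck {
    private final Deque<Integer> inFlight = new ArrayDeque<>();

    public void onSend(int correlationId) {
        inFlight.addLast(correlationId); // responses must arrive in send order
    }

    public void onResponse(int responseCorrelationId) {
        Integer expected = inFlight.pollFirst(); // oldest unanswered request
        if (expected == null || expected != responseCorrelationId) {
            // A duplicated response shifts the queue by one, so every
            // subsequent response also mismatches -- the symptom reported here.
            throw new IllegalStateException("Correlation id for response ("
                    + responseCorrelationId + ") does not match request ("
                    + expected + ")");
        }
    }
}
{code}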


[jira] [Comment Edited] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353004#comment-17353004
 ] 

Lea edited comment on KAFKA-12859 at 5/28/21, 6:57 AM:
---

!截图3.PNG!

 

The screenshot above is from the Kafka trace log; we can see that the 30211 
request received the 30205 response. Packet capture showed that the server sent 
the response for request 30205 twice, as shown in the figures below.

The captured frames 105722 and 105726 contain the same correlationId (0x75fd, 
i.e. 30205). This causes the Kafka client to report the error.

 

!截图4.PNG|width=553,height=459!

!截图5.PNG|width=618,height=652!

 

 


was (Author: leaye):
!截图3.PNG!

 

The screenshot above is from the Kafka trace log; you can see that the 30211 
request received the 30205 response. Packet capture showed that the server sent 
the response for request 30205 twice, as shown in the figures below.

The captured frames 105722 and 105726 contain the same correlationId (0x75fd, 
i.e. 30205). This causes the Kafka client to report the error.

 

!截图4.PNG|width=553,height=459!

!截图5.PNG|width=618,height=652!

 

 

> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
> Attachments: 截图3.PNG, 截图4.PNG, 截图5.PNG
>
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353004#comment-17353004
 ] 

Lea commented on KAFKA-12859:
-

!截图3.PNG!

 

The screenshot above is from the Kafka trace log; you can see that the 30211 
request received the 30205 response. Packet capture showed that the server sent 
the response for request 30205 twice, as shown in the figures below.

The captured frames 105722 and 105726 contain the same correlationId (0x75fd, 
i.e. 30205). This causes the Kafka client to report the error.

 

!截图4.PNG|width=553,height=459!

!截图5.PNG|width=618,height=652!

 

 

> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
> Attachments: 截图1.PNG, 截图3.PNG, 截图4.PNG, 截图5.PNG
>
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lea updated KAFKA-12859:

Description: 
When the client produces data, the Kafka server repeatedly responds with the 
same correlationId. As a result, all subsequent requests on the socket fail to 
be sent.
{code:java}
java.lang.IllegalStateException: Correlation id for response (30205) does not match request (30211), request header: RequestHeader(apiKey=PRODUCE, apiVersion=5, clientId=producer-1, correlationId=30211)
at org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) ~[kafka-clients-2.4.0.jar!/:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
{code}
 

 

  was:
When the client produces data, the Kafka server repeatedly responds with the 
same correlationId. As a result, all subsequent requests on the socket fail to 
be sent.

The exception is as follows:
{code:java}
java.lang.IllegalStateException: Correlation id for response (30205) does not match request (30211), request header: RequestHeader(apiKey=PRODUCE, apiVersion=5, clientId=producer-1, correlationId=30211)
at org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) ~[kafka-clients-2.4.0.jar!/:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
{code}
 

 


> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
> Attachments: 截图1.PNG, 截图3.PNG, 截图4.PNG, 截图5.PNG
>
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lea updated KAFKA-12859:

Attachment: 截图5.PNG
截图4.PNG

> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
> Attachments: 截图1.PNG, 截图3.PNG, 截图4.PNG, 截图5.PNG
>
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> The exception is as follows:
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lea updated KAFKA-12859:

Attachment: 截图3.PNG

> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
> Attachments: 截图1.PNG, 截图3.PNG
>
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> The exception is as follows:
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lea updated KAFKA-12859:

Attachment: 截图1.PNG

> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
> Attachments: 截图1.PNG, 截图3.PNG
>
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> The exception is as follows:
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lea updated KAFKA-12859:

Comment: was deleted

(was: !image-2021-05-28-14-43-12-300.png!)

> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> The exception is as follows:
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352992#comment-17352992
 ] 

Lea commented on KAFKA-12859:
-

!image-2021-05-28-14-43-12-300.png!

> Kafka Server Repeated Responses
> ---
>
> Key: KAFKA-12859
> URL: https://issues.apache.org/jira/browse/KAFKA-12859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.0
>Reporter: Lea
>Priority: Critical
>
> When the client produces data, the Kafka server repeatedly responds with the 
> same correlationId. As a result, all subsequent requests on the socket fail 
> to be sent.
> The exception is as follows:
> {code:java}
> //
> java.lang.IllegalStateException: Correlation id for response (30205) does not 
> match request (30211), request header: RequestHeader(apiKey=PRODUCE, 
> apiVersion=5, clientId=producer-1, correlationId=30211)
> at 
> org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836)
>  ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) 
> ~[kafka-clients-2.4.0.jar!/:na]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)
Lea created KAFKA-12859:
---

 Summary: Kafka Server Repeated Responses
 Key: KAFKA-12859
 URL: https://issues.apache.org/jira/browse/KAFKA-12859
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.1.0
Reporter: Lea


When the client produces data, the Kafka server repeatedly responds with the 
same correlationId. As a result, all subsequent requests on the socket fail to 
be sent.

The exception is as follows:
{code:java}
java.lang.IllegalStateException: Correlation id for response (30205) does not match request (30211), request header: RequestHeader(apiKey=PRODUCE, apiVersion=5, clientId=producer-1, correlationId=30211)
at org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) ~[kafka-clients-2.4.0.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) ~[kafka-clients-2.4.0.jar!/:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
{code}
 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12858) dynamically update the ssl certificates of kafka connect worker without restarting connect process.

2021-05-28 Thread kaushik srinivas (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352986#comment-17352986
 ] 

kaushik srinivas commented on KAFKA-12858:
--

[~ijuma]

Could you provide your input on this requirement?

> dynamically update the ssl certificates of kafka connect worker without 
> restarting connect process.
> ---
>
> Key: KAFKA-12858
> URL: https://issues.apache.org/jira/browse/KAFKA-12858
> Project: Kafka
>  Issue Type: Improvement
>Reporter: kaushik srinivas
>Priority: Major
>
> Hi,
>  
> We are trying to update the SSL certificates of a Kafka Connect worker that 
> are due to expire. Is there any way to dynamically update the SSL certificate 
> of a Connect worker, as is possible in Kafka using the kafka-configs.sh script?
> If not, what is the recommended way to update the SSL certificates of a Kafka 
> Connect worker without disrupting existing traffic?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12858) dynamically update the ssl certificates of kafka connect worker without restarting connect process.

2021-05-28 Thread kaushik srinivas (Jira)
kaushik srinivas created KAFKA-12858:


 Summary: dynamically update the ssl certificates of kafka connect 
worker without restarting connect process.
 Key: KAFKA-12858
 URL: https://issues.apache.org/jira/browse/KAFKA-12858
 Project: Kafka
  Issue Type: Improvement
Reporter: kaushik srinivas


Hi,

 

We are trying to update the SSL certificates of a Kafka Connect worker that are 
due to expire. Is there any way to dynamically update the SSL certificate of a 
Connect worker, as is possible in Kafka using the kafka-configs.sh script?

If not, what is the recommended way to update the SSL certificates of a Kafka 
Connect worker without disrupting existing traffic?
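
For comparison, brokers do support dynamic keystore reconfiguration (KIP-226), either through kafka-configs.sh or the Admin API; Kafka Connect workers currently have no equivalent, which appears to be the gap this ticket raises. A minimal sketch of the broker-side update, assuming broker id "0", a listener named EXTERNAL, and the bootstrap address and keystore path shown (all assumptions for illustration):

{code:java}
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class BrokerKeystoreUpdate {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption
        try (Admin admin = Admin.create(props)) {
            // Per-broker dynamic config (KIP-226): point the listener at a
            // new keystore file; the broker reloads it without a restart.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("listener.name.external.ssl.keystore.location",
                            "/etc/kafka/ssl/new-keystore.jks"), // assumption
                    AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                    Collections.singletonMap(broker, Collections.singletonList(op));
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}
{code}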



--
This message was sent by Atlassian Jira
(v8.3.4#803005)