[GitHub] [kafka] zhaohaidao commented on a change in pull request #9311: KAFKA-9910: Implement new transaction timed out error

2020-10-09 Thread GitBox


zhaohaidao commented on a change in pull request #9311:
URL: https://github.com/apache/kafka/pull/9311#discussion_r502751077



##
File path: 
clients/src/main/java/org/apache/kafka/clients/producer/internals/TransactionManager.java
##
@@ -1589,7 +1607,8 @@ public void handleResponse(AbstractResponse response) {
 fatalError(error.exception());
 } else if (error == Errors.INVALID_TXN_STATE) {
 fatalError(error.exception());
-} else if (error == Errors.UNKNOWN_PRODUCER_ID || error == Errors.INVALID_PRODUCER_ID_MAPPING) {
+} else if (error == Errors.UNKNOWN_PRODUCER_ID || error == Errors.INVALID_PRODUCER_ID_MAPPING
+|| error == Errors.TRANSACTION_TIMED_OUT) {

Review comment:
   Thanks for your advice. I will update the doc and the KIP later.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] zhaohaidao commented on a change in pull request #9311: KAFKA-9910: Implement new transaction timed out error

2020-10-09 Thread GitBox


zhaohaidao commented on a change in pull request #9311:
URL: https://github.com/apache/kafka/pull/9311#discussion_r502744351



##
File path: 
core/src/main/scala/kafka/coordinator/transaction/TransactionCoordinator.scala
##
@@ -381,9 +386,16 @@ class TransactionCoordinator(brokerId: Int,
 if (txnMetadata.producerId != producerId)
   Left(Errors.INVALID_PRODUCER_ID_MAPPING)
 // Strict equality is enforced on the client side requests, as they shouldn't bump the producer epoch.
-else if ((isFromClient && producerEpoch != txnMetadata.producerEpoch) || producerEpoch < txnMetadata.producerEpoch)
+else if (isFromClient && producerEpoch != txnMetadata.producerEpoch) {
+  if (producerEpoch == txnMetadata.lastProducerEpoch) {

Review comment:
   Thanks for your advice. I have created an issue to track this follow-up: 
https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-10596?filter=allissues#









[jira] [Updated] (KAFKA-10596) Adding some state to TransactionMetadata which explicitly indicates that the transaction was timed out

2020-10-09 Thread HaiyuanZhao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HaiyuanZhao updated KAFKA-10596:


{quote}
The code assumes that if we receive an EndTxn request and lastProducerEpoch has been set, then it must be because the coordinator timed out the transaction. That is definitely true in the common case, but I'm wondering if it is worth adding some state to TransactionMetadata which explicitly indicates that the transaction was timed out. Not super important and could be done in a follow-up.
{quote}
A follow-up for the comment quoted above: 
https://github.com/apache/kafka/pull/9311#discussion_r501196960

> Adding some state to TransactionMetadata which explicitly indicates that the 
> transaction was timed out
> --
>
> Key: KAFKA-10596
> URL: https://issues.apache.org/jira/browse/KAFKA-10596
> Project: Kafka
>  Issue Type: Improvement
>Reporter: HaiyuanZhao
>Assignee: HaiyuanZhao
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10596) Adding some state to TransactionMetadata which explicitly indicates that the transaction was timed out

2020-10-09 Thread HaiyuanZhao (Jira)
HaiyuanZhao created KAFKA-10596:
---

 Summary: Adding some state to TransactionMetadata which explicitly 
indicates that the transaction was timed out
 Key: KAFKA-10596
 URL: https://issues.apache.org/jira/browse/KAFKA-10596
 Project: Kafka
  Issue Type: Improvement
Reporter: HaiyuanZhao
Assignee: HaiyuanZhao








[jira] [Resolved] (KAFKA-10584) IndexSearchType should use sealed trait instead of Enumeration

2020-10-09 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10584.

Fix Version/s: 2.7.0
   Resolution: Fixed

> IndexSearchType should use sealed trait instead of Enumeration
> --
>
> Key: KAFKA-10584
> URL: https://issues.apache.org/jira/browse/KAFKA-10584
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: huxihx
>Priority: Major
>  Labels: newbie
> Fix For: 2.7.0
>
>
> In Scala, we prefer sealed traits over Enumeration since the former gives you 
> exhaustiveness checking. With Scala Enumeration, you don't get a warning if 
> you add a new value that is not handled in a given pattern match.
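The exhaustiveness point above can be sketched as follows (illustrative names only, not the actual `IndexSearchType` definitions from the PR):

```scala
// With Enumeration, a missing case in a match compiles without any warning:
object SearchEnum extends Enumeration {
  val KeyOnly, Entry = Value
}

// With a sealed trait, the compiler emits a "match may not be exhaustive"
// warning if a subtype is not handled:
sealed trait IndexSearch
case object KeySearch extends IndexSearch
case object EntrySearch extends IndexSearch

def describe(search: IndexSearch): String = search match {
  case KeySearch   => "search by key"
  case EntrySearch => "search by entry"
  // removing either case above triggers the exhaustiveness warning
}
```

Because the trait is sealed, all subtypes are known at compile time, which is what enables the warning that Enumeration cannot provide.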





[GitHub] [kafka] huxihx commented on pull request #9399: KAFKA-10584:IndexSearchType should use sealed trait instead of Enumeration

2020-10-09 Thread GitBox


huxihx commented on pull request #9399:
URL: https://github.com/apache/kafka/pull/9399#issuecomment-706479905


   @junrao Thanks for the review, merging to trunk.







[GitHub] [kafka] huxihx merged pull request #9399: KAFKA-10584:IndexSearchType should use sealed trait instead of Enumeration

2020-10-09 Thread GitBox


huxihx merged pull request #9399:
URL: https://github.com/apache/kafka/pull/9399


   







[GitHub] [kafka] zhaohaidao commented on a change in pull request #9311: KAFKA-9910: Implement new transaction timed out error

2020-10-09 Thread GitBox


zhaohaidao commented on a change in pull request #9311:
URL: https://github.com/apache/kafka/pull/9311#discussion_r502740233



##
File path: 
clients/src/main/java/org/apache/kafka/clients/producer/internals/TransactionManager.java
##
@@ -369,7 +372,9 @@ private TransactionalRequestResult beginCompletingTransaction(TransactionResult
 // If the error is an INVALID_PRODUCER_ID_MAPPING error, the server will not accept an EndTxnRequest, so skip
 // directly to InitProducerId. Otherwise, we must first abort the transaction, because the producer will be
 // fenced if we directly call InitProducerId.
-if (!(lastError instanceof InvalidPidMappingException)) {
+boolean needEndTxn = !(abortableError instanceof InvalidPidMappingException)

Review comment:
   I am not sure this helper is worthwhile because it is only used in 
beginCompletingTransaction.









[jira] [Commented] (KAFKA-10215) MockProcessorContext doesn't work with SessionStores

2020-10-09 Thread John Roesler (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211525#comment-17211525
 ] 

John Roesler commented on KAFKA-10215:
--

Note, this will be fixed for the new PAPI MockProcessorContext, at least, by 
[https://github.com/apache/kafka/pull/9396] .

> MockProcessorContext doesn't work with SessionStores
> 
>
> Key: KAFKA-10215
> URL: https://issues.apache.org/jira/browse/KAFKA-10215
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.7.0
>
>
> The recommended pattern for testing custom Processor implementations is to 
> use the test-utils MockProcessorContext. If a Processor implementation needs 
> a store, the store also has to be initialized with the same context. However, 
> the existing (in-memory and persistent) Session store implementations perform 
> internal casts that result in class cast exceptions if you attempt to 
> initialize them with the MockProcessorContext.
> A workaround is to instead embed the processor in an application and use the 
> TopologyTestDriver instead.
> The fix is the same as for KAFKA-10200





[jira] [Updated] (KAFKA-10215) MockProcessorContext doesn't work with SessionStores

2020-10-09 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler updated KAFKA-10215:
-
Fix Version/s: 2.7.0

> MockProcessorContext doesn't work with SessionStores
> 
>
> Key: KAFKA-10215
> URL: https://issues.apache.org/jira/browse/KAFKA-10215
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.7.0
>
>
> The recommended pattern for testing custom Processor implementations is to 
> use the test-utils MockProcessorContext. If a Processor implementation needs 
> a store, the store also has to be initialized with the same context. However, 
> the existing (in-memory and persistent) Session store implementations perform 
> internal casts that result in class cast exceptions if you attempt to 
> initialize them with the MockProcessorContext.
> A workaround is to instead embed the processor in an application and use the 
> TopologyTestDriver instead.
> The fix is the same as for KAFKA-10200





[jira] [Assigned] (KAFKA-10215) MockProcessorContext doesn't work with SessionStores

2020-10-09 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler reassigned KAFKA-10215:


Assignee: John Roesler

> MockProcessorContext doesn't work with SessionStores
> 
>
> Key: KAFKA-10215
> URL: https://issues.apache.org/jira/browse/KAFKA-10215
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> The recommended pattern for testing custom Processor implementations is to 
> use the test-utils MockProcessorContext. If a Processor implementation needs 
> a store, the store also has to be initialized with the same context. However, 
> the existing (in-memory and persistent) Session store implementations perform 
> internal casts that result in class cast exceptions if you attempt to 
> initialize them with the MockProcessorContext.
> A workaround is to instead embed the processor in an application and use the 
> TopologyTestDriver instead.
> The fix is the same as for KAFKA-10200





[jira] [Updated] (KAFKA-10200) MockProcessorContext doesn't work with WindowStores

2020-10-09 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler updated KAFKA-10200:
-
Fix Version/s: 2.7.0

> MockProcessorContext doesn't work with WindowStores
> ---
>
> Key: KAFKA-10200
> URL: https://issues.apache.org/jira/browse/KAFKA-10200
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, streams-test-utils
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.7.0
>
>
> The recommended pattern for testing custom Processor implementations is to 
> use the test-utils MockProcessorContext. If a Processor implementation needs 
> a store, the store also has to be initialized with the same context. However, 
> the existing (in-memory and persistent) Windowed store implementations 
> perform internal casts that result in class cast exceptions if you attempt to 
> initialize them with the MockProcessorContext.
> A workaround is to instead embed the processor in an application and use the 
> TopologyTestDriver instead.





[GitHub] [kafka] vvcephei commented on a change in pull request #9396: KAFKA-10437: Implement new PAPI support for test-utils

2020-10-09 Thread GitBox


vvcephei commented on a change in pull request #9396:
URL: https://github.com/apache/kafka/pull/9396#discussion_r502737231



##
File path: 
streams/test-utils/src/test/java/org/apache/kafka/streams/test/wordcount/WindowedWordCountProcessorTest.java
##
@@ -136,54 +133,18 @@ public void shouldWorkWithPersistentStore() throws IOException {
 
 context.scheduledPunctuators().get(0).getPunctuator().punctuate(1_000L);
 
 // finally, we can verify the output.
-final Iterator<CapturedForward> capturedForwards = context.forwarded().iterator();
-assertThat(capturedForwards.next().keyValue(), is(new KeyValue<>("[alpha@100/200]", "2")));
-assertThat(capturedForwards.next().keyValue(), is(new KeyValue<>("[beta@100/200]", "1")));
-assertThat(capturedForwards.next().keyValue(), is(new KeyValue<>("[delta@200/300]", "1")));
-assertThat(capturedForwards.next().keyValue(), is(new KeyValue<>("[gamma@100/200]", "1")));
-assertThat(capturedForwards.next().keyValue(), is(new KeyValue<>("[gamma@200/300]", "1")));
-assertThat(capturedForwards.hasNext(), is(false));
+final List<CapturedForward<String, String>> capturedForwards = context.forwarded();
+final List<CapturedForward<String, String>> expected = asList(
+new CapturedForward<>(new Record<>("[alpha@100/200]", "2", 1_000L)),
+new CapturedForward<>(new Record<>("[beta@100/200]", "1", 1_000L)),
+new CapturedForward<>(new Record<>("[delta@200/300]", "1", 1_000L)),
+new CapturedForward<>(new Record<>("[gamma@100/200]", "1", 1_000L)),
+new CapturedForward<>(new Record<>("[gamma@200/300]", "1", 1_000L))
+);
+
+assertThat(capturedForwards, is(expected));
 } finally {
 Utils.delete(stateDir);
 }
 }
 } finally {
 Utils.delete(stateDir);
 }
 }
-
-@SuppressWarnings("deprecation") // TODO will be fixed in KAFKA-10437
-@Test
-public void shouldFailWithLogging() {

Review comment:
   Yes, they are superseded by the new `MockProcessorContextStateStoreTest`, which verifies that any state store configured with logging or caching is rejected.









[GitHub] [kafka] vvcephei commented on a change in pull request #9396: KAFKA-10437: Implement new PAPI support for test-utils

2020-10-09 Thread GitBox


vvcephei commented on a change in pull request #9396:
URL: https://github.com/apache/kafka/pull/9396#discussion_r502737067



##
File path: 
streams/test-utils/src/main/java/org/apache/kafka/streams/processor/api/MockProcessorContext.java
##
@@ -0,0 +1,494 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.processor.api;
+
+import org.apache.kafka.common.metrics.MetricConfig;
+import org.apache.kafka.common.metrics.Metrics;
+import org.apache.kafka.common.metrics.Sensor;
+import org.apache.kafka.common.serialization.Serde;
+import org.apache.kafka.common.utils.Time;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.StreamsMetrics;
+import org.apache.kafka.streams.Topology;
+import org.apache.kafka.streams.TopologyTestDriver;
+import org.apache.kafka.streams.kstream.Transformer;
+import org.apache.kafka.streams.kstream.ValueTransformer;
+import org.apache.kafka.streams.processor.Cancellable;
+import org.apache.kafka.streams.processor.PunctuationType;
+import org.apache.kafka.streams.processor.Punctuator;
+import org.apache.kafka.streams.processor.StateRestoreCallback;
+import org.apache.kafka.streams.processor.StateStore;
+import org.apache.kafka.streams.processor.StateStoreContext;
+import org.apache.kafka.streams.processor.TaskId;
+import org.apache.kafka.streams.processor.internals.ClientUtils;
+import org.apache.kafka.streams.processor.internals.RecordCollector;
+import org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl;
+import org.apache.kafka.streams.processor.internals.metrics.TaskMetrics;
+import org.apache.kafka.streams.state.internals.InMemoryKeyValueStore;
+
+import java.io.File;
+import java.time.Duration;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Properties;
+
+import static org.apache.kafka.common.utils.Utils.mkEntry;
+import static org.apache.kafka.common.utils.Utils.mkMap;
+import static org.apache.kafka.common.utils.Utils.mkProperties;
+
+/**
+ * {@link MockProcessorContext} is a mock of {@link ProcessorContext} for 
users to test their {@link Processor},
+ * {@link Transformer}, and {@link ValueTransformer} implementations.
+ * 
+ * The tests for this class 
(org.apache.kafka.streams.MockProcessorContextTest) include several behavioral
+ * tests that serve as example usage.
+ * 
+ * Note that this class does not take any automated actions (such as firing 
scheduled punctuators).
+ * It simply captures any data it witnesses.
+ * If you require more automated tests, we recommend wrapping your {@link 
Processor} in a minimal source-processor-sink
+ * {@link Topology} and using the {@link TopologyTestDriver}.
+ */
+public class MockProcessorContext<KForward, VForward> implements ProcessorContext<KForward, VForward>, RecordCollector.Supplier {

Review comment:
   Unfortunately, that would actually expose `InternalProcessorContext` 
itself as a public interface (any supertype of a public type is a public type).









[GitHub] [kafka] vvcephei commented on pull request #9338: Fixed KAFKA-10515: Serdes used within metered state stores will now be initialized with the default serdes if not already set.

2020-10-09 Thread GitBox


vvcephei commented on pull request #9338:
URL: https://github.com/apache/kafka/pull/9338#issuecomment-706474125


   Thanks, @thake .
   
   Ah, sorry about that: it was my change, and now I feel extra guilty for not 
reviewing and merging your fix in a timely fashion.
   
   It was intentional not to link those two interfaces with a supertype. I 
assume that the duplication you're concerned about is in WrappingNullables? I 
wonder if you can avoid passing in the context to those utility methods and 
only pass in the `contextKeySerializer`, etc. You could simply get these at 
the call site by adding some more extractors to ProcessorContextUtils.
   
   Since this was my fault to begin with, I'm also happy to take a crack at it 
next week if it proves to be a pain.
   
   For what it's worth, I quite like your approach in this PR.
   
   Thanks again,
   -John







[jira] [Commented] (KAFKA-10200) MockProcessorContext doesn't work with WindowStores

2020-10-09 Thread John Roesler (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211515#comment-17211515
 ] 

John Roesler commented on KAFKA-10200:
--

Thanks, Sophie, yes, this is done. 

> MockProcessorContext doesn't work with WindowStores
> ---
>
> Key: KAFKA-10200
> URL: https://issues.apache.org/jira/browse/KAFKA-10200
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, streams-test-utils
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> The recommended pattern for testing custom Processor implementations is to 
> use the test-utils MockProcessorContext. If a Processor implementation needs 
> a store, the store also has to be initialized with the same context. However, 
> the existing (in-memory and persistent) Windowed store implementations 
> perform internal casts that result in class cast exceptions if you attempt to 
> initialize them with the MockProcessorContext.
> A workaround is to instead embed the processor in an application and use the 
> TopologyTestDriver instead.





[GitHub] [kafka] ijuma commented on a change in pull request #8891: KAFKA-10143: Improve test coverage for throttle changes during reassignment

2020-10-09 Thread GitBox


ijuma commented on a change in pull request #8891:
URL: https://github.com/apache/kafka/pull/8891#discussion_r502724870



##
File path: 
core/src/test/scala/integration/kafka/admin/ReassignPartitionsIntegrationTest.scala
##
@@ -162,6 +162,39 @@ class ReassignPartitionsIntegrationTest extends ZooKeeperTestHarness {
   localLogOrException(part).highWatermark)
   }
 
+  @Test
+  def testAlterReassignmentThrottle(): Unit = {
+cluster = new ReassignPartitionsTestCluster(zkConnect)
+cluster.setup()
+cluster.produceMessages("foo", 0, 50)
+cluster.produceMessages("baz", 2, 60)
+val assignment = """{"version":1,"partitions":""" +

Review comment:
   Nit: if you're using triple quotes, you don't need to end them at every 
line.
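
To illustrate the nit (a generic sketch, not the PR's actual `assignment` JSON):

```scala
// What the nit discourages: closing and re-opening triple quotes on every line.
val assignmentA = """{"version":1,""" +
  """"partitions":[]}"""

// A triple-quoted literal can simply span multiple lines;
// stripMargin removes the leading `|` used for alignment.
val assignmentB =
  """{"version":1,
    |"partitions":[]}""".stripMargin
```

The second form keeps the literal as one string (with embedded newlines) instead of concatenating fragments.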

##
File path: core/src/main/scala/kafka/admin/ReassignPartitionsCommand.scala
##
@@ -1213,22 +1211,38 @@ object ReassignPartitionsCommand extends Logging {
* @return  A map from partition objects to error strings.
*/
   def cancelPartitionReassignments(adminClient: Admin,
-  reassignments: Set[TopicPartition])
+   reassignments: Set[TopicPartition])
   : Map[TopicPartition, Throwable] = {
 val results: Map[TopicPartition, KafkaFuture[Void]] =
   adminClient.alterPartitionReassignments(reassignments.map {
   (_, (None: Option[NewPartitionReassignment]).asJava)
 }.toMap.asJava).values().asScala
 results.flatMap {
-  case (part, future) => {
+  case (part, future) =>

Review comment:
   Nit: this can go on the line above?

##
File path: 
core/src/test/scala/integration/kafka/admin/ReassignPartitionsIntegrationTest.scala
##
@@ -407,17 +423,90 @@ class ReassignPartitionsIntegrationTest extends ZooKeeperTestHarness {
 waitForBrokerLevelThrottles(unthrottledBrokerConfigs)
 
 // Wait for the directory movement to complete.
-waitForVerifyAssignment(cluster.adminClient, assignment, true,
+waitForVerifyAssignment(cluster.adminClient, reassignment.json, true,
 VerifyAssignmentResult(Map(
-  new TopicPartition("foo", 0) -> PartitionReassignmentState(Seq(0, 1, 2), Seq(0, 1, 2), true)
+  topicPartition -> PartitionReassignmentState(Seq(0, 1, 2), Seq(0, 1, 2), true)
 ), false, Map(
-  new TopicPartitionReplica("foo", 0, 0) -> CompletedMoveState(newFoo1Dir)
+  new TopicPartitionReplica(topicPartition.topic, topicPartition.partition, 0) ->
+CompletedMoveState(reassignment.targetDir)
 ), false))
 
 val info1 = new BrokerDirs(cluster.adminClient.describeLogDirs(0.to(4).
 map(_.asInstanceOf[Integer]).asJavaCollection), 0)
-assertEquals(newFoo1Dir,
-  info1.curLogDirs.getOrElse(new TopicPartition("foo", 0), ""))
+assertEquals(reassignment.targetDir,
+  info1.curLogDirs.getOrElse(topicPartition, ""))
+  }
+
+  @Test
+  def testAlterLogDirReassignmentThrottle(): Unit = {
+val topicPartition = new TopicPartition("foo", 0)
+
+cluster = new ReassignPartitionsTestCluster(zkConnect)
+cluster.setup()
+cluster.produceMessages(topicPartition.topic, topicPartition.partition, 700)
+
+val targetBrokerId = 0
+val replicas = Seq(0, 1, 2)
+val reassignment = buildLogDirReassignment(topicPartition, targetBrokerId, replicas)
+
+// Start the replica move with a low throttle so it does not complete
+val initialLogDirThrottle = 1L
+runExecuteAssignment(cluster.adminClient, false, reassignment.json,
+  interBrokerThrottle = -1L, initialLogDirThrottle)
+waitForLogDirThrottle(Set(0), initialLogDirThrottle)
+
+// Now increase the throttle and verify that the log dir movement completes
+val updatedLogDirThrottle = 300L
+runExecuteAssignment(cluster.adminClient, additional = true, reassignment.json,
+  interBrokerThrottle = -1L, replicaAlterLogDirsThrottle = updatedLogDirThrottle)
+waitForLogDirThrottle(Set(0), updatedLogDirThrottle)
+
+waitForVerifyAssignment(cluster.adminClient, reassignment.json, true,
+  VerifyAssignmentResult(Map(
+topicPartition -> PartitionReassignmentState(Seq(0, 1, 2), Seq(0, 1, 2), true)
+  ), false, Map(
+new TopicPartitionReplica(topicPartition.topic, topicPartition.partition, targetBrokerId) ->
+  CompletedMoveState(reassignment.targetDir)
+  ), false))
+  }
+
+  case class LogDirReassignment(json: String, currentDir: String, targetDir: String)
+
+  private def buildLogDirReassignment(topicPartition: TopicPartition,
+  brokerId: Int,
+  replicas: Seq[Int]): LogDirReassignment = {
+
+val describeLogDirsResult = cluster.adminClient.describeLogDirs(
+  0.to(4).map(_.asInstanceOf[Integer]).asJavaCollection)
+
+val logDirInfo = new BrokerDirs(describeLogDirsResult, brokerId)
+

[jira] [Created] (KAFKA-10595) Explain idempotent producer in max.in.flight.requests.per.connection

2020-10-09 Thread Yeva Byzek (Jira)
Yeva Byzek created KAFKA-10595:
--

 Summary: Explain idempotent producer in 
max.in.flight.requests.per.connection
 Key: KAFKA-10595
 URL: https://issues.apache.org/jira/browse/KAFKA-10595
 Project: Kafka
  Issue Type: Improvement
  Components: docs
Reporter: Yeva Byzek


A user asked:

 
{quote}Is the idempotent producer also a total order producer? meaning, despite 
having max.inflight > 1, it will keep message production ordering? My 
understanding of this has always been no, but I'd like to confirm...
{quote}
 

I believe a contributing factor to this question is that 
[https://kafka.apache.org/documentation/#max.in.flight.requests.per.connection] 
reads

 
{quote}Note that if this setting is set to be greater than 1 and there are 
failed sends, there is a risk of message re-ordering due to retries (i.e., if 
retries are enabled).
{quote}
 

Suggestion: it may be clearer if we augmented this description to say that 
message re-ordering would not happen if {{enable.idempotence=true}} 
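
For reference, the combination under discussion looks like this in producer configuration (a sketch; the idempotent producer preserves ordering even with retries, provided in-flight requests per connection do not exceed 5):

```properties
# Idempotent producer: the broker de-duplicates and keeps batches in order,
# so retries do not reorder messages as long as the in-flight limit is <= 5.
enable.idempotence=true
max.in.flight.requests.per.connection=5
acks=all
```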

 





[jira] [Commented] (KAFKA-10509) Add metric to track throttle time due to hitting connection rate quota

2020-10-09 Thread James Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211484#comment-17211484
 ] 

James Cheng commented on KAFKA-10509:
-

Can we also update the documentation at 
[http://kafka.apache.org/documentation/#monitoring] to describe the new metric?

 

> Add metric to track throttle time due to hitting connection rate quota
> --
>
> Key: KAFKA-10509
> URL: https://issues.apache.org/jira/browse/KAFKA-10509
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Major
> Fix For: 2.7.0
>
>
> See KIP-612.
>  
> kafka.network:type=socket-server-metrics,name=connection-accept-throttle-time,listener=\{listenerName}
>  * Type: SampledStat.Avg
>  * Description: Average throttle time due to violating per-listener or 
> broker-wide connection acceptance rate quota on a given listener.





[GitHub] [kafka] wushujames commented on pull request #9276: KAFKA-10473: Add docs on partition size-on-disk, and other log-related metrics

2020-10-09 Thread GitBox


wushujames commented on pull request #9276:
URL: https://github.com/apache/kafka/pull/9276#issuecomment-706452504


   @dongjinleekr All builds failed, but none of the failures seem related to my 
change. 







[GitHub] [kafka] vvcephei opened a new pull request #9408: MINOR: Improve IQ name and type checks

2020-10-09 Thread GitBox


vvcephei opened a new pull request #9408:
URL: https://github.com/apache/kafka/pull/9408


   Previously, we would throw a confusing error, "the store has migrated,"
   when users ask for a store that is not in the topology at all, or when the
   type of the store doesn't match the QueryableStoreType parameter.
   
   Adds an up-front check that the requested store is registered and also
   a better error message when the QueryableStoreType parameter
   doesn't match the store's type.
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   







[GitHub] [kafka] mjsax commented on a change in pull request #9384: MINOR: remove explicit passing of AdminClient into StreamsPartitionAssignor

2020-10-09 Thread GitBox


mjsax commented on a change in pull request #9384:
URL: https://github.com/apache/kafka/pull/9384#discussion_r502683418



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
##
@@ -1091,8 +1096,4 @@ ConsumerRebalanceListener rebalanceListener() {
 Admin adminClient() {
 return adminClient;
 }
-
-InternalTopologyBuilder internalTopologyBuilder() {

Review comment:
   Remove unused method









[GitHub] [kafka] mjsax commented on a change in pull request #9384: MINOR: remove explicit passing of AdminClient into StreamsPartitionAssignor

2020-10-09 Thread GitBox


mjsax commented on a change in pull request #9384:
URL: https://github.com/apache/kafka/pull/9384#discussion_r502683749



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsPartitionAssignor.java
##
@@ -206,7 +204,6 @@ public void configure(final Map configs) {
 assignmentConfigs = assignorConfiguration.assignmentConfigs();
 partitionGrouper = assignorConfiguration.partitionGrouper();
 userEndPoint = assignorConfiguration.userEndPoint();
-adminClient = assignorConfiguration.adminClient();

Review comment:
   Only used at a single place, just removing the variable.

##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/TaskManager.java
##
@@ -120,10 +120,6 @@ void setMainConsumer(final Consumer<byte[], byte[]> 
mainConsumer) {
 this.mainConsumer = mainConsumer;
 }
 
-Consumer<byte[], byte[]> mainConsumer() {

Review comment:
   Decouple the TaskManager from StreamsPartitionAssignor now and no longer 
"misuse" it to get the consumer.









[GitHub] [kafka] mjsax commented on a change in pull request #9384: MINOR: remove explicit passing of AdminClient into StreamsPartitionAssignor

2020-10-09 Thread GitBox


mjsax commented on a change in pull request #9384:
URL: https://github.com/apache/kafka/pull/9384#discussion_r502683615



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsPartitionAssignor.java
##
@@ -192,12 +191,11 @@ public String toString() {
  */
 @Override
 public void configure(final Map<String, ?> configs) {
-final AssignorConfiguration assignorConfiguration = new 
AssignorConfiguration(configs);
+assignorConfiguration = new AssignorConfiguration(configs);

Review comment:
   we need to keep a reference as we cannot get the `mainConsumer` at this 
point, as it will be set after this method finished

##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsPartitionAssignor.java
##
@@ -206,7 +204,6 @@ public void configure(final Map configs) {
 assignmentConfigs = assignorConfiguration.assignmentConfigs();
 partitionGrouper = assignorConfiguration.partitionGrouper();
 userEndPoint = assignorConfiguration.userEndPoint();
-adminClient = assignorConfiguration.adminClient();

Review comment:
   Only used at a single place, just removing the variable.









[jira] [Commented] (KAFKA-9333) Shim `core` module that targets default scala version (KIP-531)

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211434#comment-17211434
 ] 

Bill Bejeck commented on KAFKA-9333:


Not a blocker so I'm clearing the fix version as part of the 2.7.0 release.

> Shim `core` module that targets default scala version (KIP-531)
> ---
>
> Key: KAFKA-9333
> URL: https://issues.apache.org/jira/browse/KAFKA-9333
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
>  Labels: kip
> Fix For: 2.7.0
>
>
> Introduce a shim `core` module that targets the default scala version. This is 
> useful for applications that do not require a specific Scala version. Java 
> applications that shade Scala dependencies or Java applications that have a 
> single Scala dependency would fall under this category. We will target Scala 
> 2.13 in the initial version of this module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9333) Shim `core` module that targets default scala version (KIP-531)

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9333:
---
Fix Version/s: (was: 2.7.0)

> Shim `core` module that targets default scala version (KIP-531)
> ---
>
> Key: KAFKA-9333
> URL: https://issues.apache.org/jira/browse/KAFKA-9333
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
>  Labels: kip
>
> Introduce a shim `core` module that targets the default scala version. This is 
> useful for applications that do not require a specific Scala version. Java 
> applications that shade Scala dependencies or Java applications that have a 
> single Scala dependency would fall under this category. We will target Scala 
> 2.13 in the initial version of this module.





[jira] [Updated] (KAFKA-8744) Add Support to Scala API for KIP-307 and KIP-479

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8744:
---
Fix Version/s: (was: 2.7.0)

> Add Support to Scala API for KIP-307 and KIP-479
> 
>
> Key: KAFKA-8744
> URL: https://issues.apache.org/jira/browse/KAFKA-8744
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Bill Bejeck
>Assignee: Matthias J. Sax
>Priority: Major
>
> With the ability to provide names for all operators in a Kafka Streams 
> topology 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL])
>  coming in the 2.4 release, we also need to add this new feature to the 
> Streams Scala API.
> KIP-307 was refined via KIP-479 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoined+config+object+to+Join])
>  and this ticket should cover both cases.





[jira] [Commented] (KAFKA-8744) Add Support to Scala API for KIP-307 and KIP-479

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211431#comment-17211431
 ] 

Bill Bejeck commented on KAFKA-8744:


Not a blocker issue so as part of the 2.7.0 release I'm clearing the fix 
version.  When we have a PR for this ticket we can set the appropriate fix 
version then.

> Add Support to Scala API for KIP-307 and KIP-479
> 
>
> Key: KAFKA-8744
> URL: https://issues.apache.org/jira/browse/KAFKA-8744
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Bill Bejeck
>Assignee: Matthias J. Sax
>Priority: Major
> Fix For: 2.7.0
>
>
> With the ability to provide names for all operators in a Kafka Streams 
> topology 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL])
>  coming in the 2.4 release, we also need to add this new feature to the 
> Streams Scala API.
> KIP-307 was refined via KIP-479 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoined+config+object+to+Join])
>  and this ticket should cover both cases.





[jira] [Commented] (KAFKA-10233) KafkaConsumer polls in a tight loop if group is not authorized

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211422#comment-17211422
 ] 

Bill Bejeck commented on KAFKA-10233:
-

[~rsivaram] do you think this will be completed for the 2.7.0 release?

> KafkaConsumer polls in a tight loop if group is not authorized
> --
>
> Key: KAFKA-10233
> URL: https://issues.apache.org/jira/browse/KAFKA-10233
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.7.0
>
>
> Consumer propagates GroupAuthorizationException from poll immediately when 
> trying to find coordinator even though it is a retriable exception. If the 
> application polls in a loop, ignoring retriable exceptions, the consumer 
> tries to find coordinator in a tight loop without any backoff. We should 
> apply retry backoff in this case to avoid overloading brokers.
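The fix described above amounts to waiting for a backoff interval before the next coordinator lookup instead of retrying immediately. A minimal sketch of one such policy, a capped exponential backoff, is below; the constants and method names are illustrative assumptions, not the actual KafkaConsumer internals:

```java
// Hedged sketch: a capped exponential backoff that a retry loop could apply
// between coordinator lookups instead of retrying in a tight loop.
// Constants and names are illustrative, not KafkaConsumer internals.
public class RetryBackoffSketch {
    static final long RETRY_BACKOFF_MS = 100;  // base delay before first retry
    static final long MAX_BACKOFF_MS = 1000;   // cap keeps delays bounded

    // Delay before the given retry attempt (0-based), doubling up to the cap.
    static long backoffMs(int attempt) {
        long delay = RETRY_BACKOFF_MS << Math.min(attempt, 20);
        return Math.min(delay, MAX_BACKOFF_MS);
    }

    public static void main(String[] args) {
        // Print the delays for the first few attempts.
        for (int attempt = 0; attempt < 5; attempt++) {
            System.out.println(backoffMs(attempt));
        }
    }
}
```

Sleeping for `backoffMs(attempt)` between lookups bounds the request rate to the broker even when the application polls in a loop and swallows retriable exceptions.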





[jira] [Commented] (KAFKA-10143) Add integration testing ensuring that the reassignment tool can change replication quota with rebalance in progress

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211421#comment-17211421
 ] 

Bill Bejeck commented on KAFKA-10143:
-

[~hachikuji] do you think this will go with the 2.7.0 release?  

> Add integration testing ensuring that the reassignment tool can change 
> replication quota with rebalance in progress
> ---
>
> Key: KAFKA-10143
> URL: https://issues.apache.org/jira/browse/KAFKA-10143
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.6.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.7.0, 2.6.1
>
>
> Previously we could use --execute with the --throttle option in order to 
> change the quota of an active reassignment. We seem to have lost this with 
> KIP-455. The code has the following comment:
> {code}
> val reassignPartitionsInProgress = zkClient.reassignPartitionsInProgress()
> if (reassignPartitionsInProgress) {
>   // Note: older versions of this tool would modify the broker quotas 
> here (but not
>   // topic quotas, for some reason).  This behavior wasn't documented in 
> the --execute
>   // command line help.  Since it might interfere with other ongoing 
> reassignments,
>   // this behavior was dropped as part of the KIP-455 changes.
>   throw new 
> TerseReassignmentFailureException(cannotExecuteBecauseOfExistingMessage)
> }
> {code}
> Seems like it was a mistake to change this because it breaks compatibility. 
> We probably have to revert. At the same time, we can make the intent clearer 
> both in the code and in the command help output.
> (Edit: in KIP-455, we changed the behavior so that quota 
> changes require the --additional flag. So this issue is mostly about ensuring 
> we have the integration testing to cover that behavior.)





[jira] [Commented] (KAFKA-9837) New RPC for notifying controller of failed replica

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211415#comment-17211415
 ] 

Bill Bejeck commented on KAFKA-9837:


I'm changing the fix version for this ticket as it's not a blocker.

> New RPC for notifying controller of failed replica
> --
>
> Key: KAFKA-9837
> URL: https://issues.apache.org/jira/browse/KAFKA-9837
> Project: Kafka
>  Issue Type: Sub-task
>  Components: controller, core
>Reporter: David Arthur
>Priority: Major
> Fix For: 2.7.0
>
>
> This is the tracking ticket for 
> [KIP-589|https://cwiki.apache.org/confluence/display/KAFKA/KIP-589+Add+API+to+update+Replica+state+in+Controller].
>  For the bridge release, brokers should no longer use ZooKeeper to notify the 
> controller that a log dir has failed. It should instead use an RPC mechanism.





[jira] [Updated] (KAFKA-9837) New RPC for notifying controller of failed replica

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9837:
---
Fix Version/s: (was: 2.7.0)
   2.8.0

> New RPC for notifying controller of failed replica
> --
>
> Key: KAFKA-9837
> URL: https://issues.apache.org/jira/browse/KAFKA-9837
> Project: Kafka
>  Issue Type: Sub-task
>  Components: controller, core
>Reporter: David Arthur
>Priority: Major
> Fix For: 2.8.0
>
>
> This is the tracking ticket for 
> [KIP-589|https://cwiki.apache.org/confluence/display/KAFKA/KIP-589+Add+API+to+update+Replica+state+in+Controller].
>  For the bridge release, brokers should no longer use ZooKeeper to notify the 
> controller that a log dir has failed. It should instead use an RPC mechanism.





[jira] [Resolved] (KAFKA-9393) DeleteRecords may cause extreme lock contention for large partition directories

2020-10-09 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-9393.

Fix Version/s: 2.7.0
 Assignee: Gardner Vickers
   Resolution: Fixed

merged to trunk.

> DeleteRecords may cause extreme lock contention for large partition 
> directories
> ---
>
> Key: KAFKA-9393
> URL: https://issues.apache.org/jira/browse/KAFKA-9393
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Lucas Bradstreet
>Assignee: Gardner Vickers
>Priority: Major
> Fix For: 2.7.0
>
>
> DeleteRecords, frequently used by KStreams triggers a 
> Log.maybeIncrementLogStartOffset call, calling 
> kafka.log.ProducerStateManager.listSnapshotFiles which calls 
> java.io.File.listFiles on the partition dir. The time taken to list this 
> directory can be extreme for partitions with many small segments (e.g 2) 
> taking multiple seconds to finish. This causes lock contention for the log, 
> and if produce requests are also occurring for the same log can cause a 
> majority of request handler threads to become blocked waiting for the 
> DeleteRecords call to finish.
> I believe this is a problem going back to the initial implementation of the 
> transactional producer, but I need to confirm how far back it goes.
> One possible solution is to maintain a producer state snapshot aligned to the 
> log segment, and simply delete it whenever we delete a segment. This would 
> ensure that we never have to perform a directory scan.
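The proposed fix, addressing each snapshot by its segment's base offset so that deletion is a constant number of file operations, could be sketched roughly as follows; the file-naming scheme and helper names here are assumptions for illustration, not Kafka's actual on-disk layout:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hedged sketch: one producer-state snapshot per log segment, addressed by
// the segment's base offset, so deleting a segment is O(1) file operations
// and never calls File.listFiles on the partition directory.
public class SnapshotPerSegmentSketch {
    // Zero-padded, offset-based file names (illustrative naming scheme).
    static String fileName(long baseOffset, String suffix) {
        return String.format("%020d.%s", baseOffset, suffix);
    }

    static Path snapshotFile(Path partitionDir, long baseOffset) {
        return partitionDir.resolve(fileName(baseOffset, "snapshot"));
    }

    // Delete the segment and its aligned snapshot together; no directory scan.
    static void deleteSegment(Path partitionDir, long baseOffset) throws IOException {
        Files.deleteIfExists(partitionDir.resolve(fileName(baseOffset, "log")));
        Files.deleteIfExists(snapshotFile(partitionDir, baseOffset));
    }

    public static void main(String[] args) {
        System.out.println(snapshotFile(Paths.get("/tmp/topic-0"), 42).getFileName());
    }
}
```

Because the snapshot path is derived from the base offset rather than discovered by listing the directory, the lock-holding path no longer scales with the number of segments in the partition.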





[GitHub] [kafka] junrao merged pull request #7929: KAFKA-9393: DeleteRecords may cause extreme lock contention for large partition directories

2020-10-09 Thread GitBox


junrao merged pull request #7929:
URL: https://github.com/apache/kafka/pull/7929


   







[GitHub] [kafka] ryannedolan commented on a change in pull request #9395: KAFKA-9726: Add LegacyReplicationPolicy for MM2

2020-10-09 Thread GitBox


ryannedolan commented on a change in pull request #9395:
URL: https://github.com/apache/kafka/pull/9395#discussion_r502672494



##
File path: 
connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/ReplicationPolicy.java
##
@@ -57,4 +57,9 @@ default boolean isInternalTopic(String topic) {
 return topic.endsWith(".internal") || topic.endsWith("-internal") || 
topic.startsWith("__")
 || topic.startsWith(".");
 }
+
+/** Checks if the policy can track back to the source of the topic. */
+default boolean canTrackSource(String topic) {

Review comment:
   If a public API change like this is required, you will need to propose a 
small KIP. I'm unclear why it's required, though, and ideally we would not alter 
the existing API if possible.
   
   If a new method is required, I think "track" is too ambiguous and should not 
be used here.

##
File path: 
connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/ReplicationPolicy.java
##
@@ -57,4 +57,9 @@ default boolean isInternalTopic(String topic) {
 return topic.endsWith(".internal") || topic.endsWith("-internal") || 
topic.startsWith("__")
 || topic.startsWith(".");
 }
+
+/** Checks if the policy can track back to the source of the topic. */

Review comment:
   I'm not sure what you mean by "track back to the source of the topic". 
The word "track" might mean a few things here, and it's not obvious what you 
mean. Can you clarify?

##
File path: 
connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/LegacyReplicationPolicy.java
##
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.connect.mirror;
+
+import org.apache.kafka.common.Configurable;
+
+import java.util.Map;
+
+import static 
org.apache.kafka.connect.mirror.MirrorClientConfig.HEARTBEATS_TOPIC;
+
+/**
+ * The replication policy that imitates the behavior of MirrorMaker 1.
+ *
+ * The policy doesn't rename topics: {@code topic1} remains {@code topic1} 
after replication.
+ * There is one exception to this: for {@code heartbeats}, it behaves 
identically to {@link DefaultReplicationPolicy}.
+ *
+ * The policy has some notable limitations. The most important one is that 
the policy is unable to detect
+ * cycles for any topic apart from {@code heartbeats}. This makes 
cross-replication effectively impossible.
+ *
+ * Another limitation is that {@link MirrorClient#remoteTopics()} will be 
able to list only
+ * {@code heartbeats} topics.
+ *
+ * {@link MirrorClient#countHopsForTopic(String, String)} will return 
{@code -1} for any topic
+ * apart from {@code heartbeats}.
+ *
+ * The policy supports {@link DefaultReplicationPolicy}'s configurations
+ * for the behavior related to {@code heartbeats}.
+ */
+public class LegacyReplicationPolicy implements ReplicationPolicy, 
Configurable {
+// Replication sub-policy for heartbeats topics
+private final DefaultReplicationPolicy heartbeatTopicReplicationPolicy = 
new DefaultReplicationPolicy();
+
+@Override
+public void configure(final Map<String, ?> props) {
+heartbeatTopicReplicationPolicy.configure(props);
+}
+
+@Override
+public String formatRemoteTopic(final String sourceClusterAlias, final 
String topic) {
+if (isOriginalTopicHeartbeats(topic)) {
+return 
heartbeatTopicReplicationPolicy.formatRemoteTopic(sourceClusterAlias, topic);
+} else {
+return topic;
+}
+}
+
+@Override
+public String topicSource(final String topic) {
+if (isOriginalTopicHeartbeats(topic)) {
+return heartbeatTopicReplicationPolicy.topicSource(topic);
+} else {
+return null;

Review comment:
   I've seen alternative solutions floating around that use a configurable 
source here. Basically, the configuration passed to configure() is consulted to 
find the "source cluster", rather than looking at the topic name. That approach 
lets you return an actual source here, which obviates the new canTrackSource() 
method etc.
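That alternative could look roughly like the sketch below. The config key `replication.policy.source.cluster` is hypothetical, chosen only to illustrate reading the source alias from configure() rather than parsing it out of the topic name:

```java
import java.util.Map;

// Hedged sketch of the configurable-source alternative: the source cluster
// alias comes from configure(), not from parsing the topic name, so
// topicSource() can return a real value even for unrenamed topics.
// The key "replication.policy.source.cluster" is hypothetical.
public class ConfiguredSourcePolicySketch {
    private String sourceClusterAlias;

    public void configure(Map<String, ?> props) {
        Object alias = props.get("replication.policy.source.cluster");
        sourceClusterAlias = alias == null ? null : alias.toString();
    }

    // Topics keep their original names under a legacy-style policy...
    public String formatRemoteTopic(String sourceAlias, String topic) {
        return topic;
    }

    // ...yet the source is still recoverable, from config instead of the name,
    // which would make an extra canTrackSource() method unnecessary.
    public String topicSource(String topic) {
        return sourceClusterAlias;
    }

    public static void main(String[] args) {
        ConfiguredSourcePolicySketch policy = new ConfiguredSourcePolicySketch();
        policy.configure(Map.of("replication.policy.source.cluster", "primary"));
        System.out.println(policy.topicSource("topic1"));
    }
}
```

With the source alias supplied at configuration time, topicSource() returns the actual source for every replicated topic, not just `heartbeats`.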






[GitHub] [kafka] splett2 commented on pull request #9386: KAFKA-10024: Add dynamic configuration and enforce quota for per-IP connection rate limits

2020-10-09 Thread GitBox


splett2 commented on pull request #9386:
URL: https://github.com/apache/kafka/pull/9386#issuecomment-706405628


   @apovzner 
   Updated PR to address comments.







[jira] [Commented] (KAFKA-9705) Zookeeper mutation protocols should be redirected to Controller only

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211403#comment-17211403
 ] 

Bill Bejeck commented on KAFKA-9705:


[~bchen225242] will this make it for the 2.7.0 release?

> Zookeeper mutation protocols should be redirected to Controller only
> 
>
> Key: KAFKA-9705
> URL: https://issues.apache.org/jira/browse/KAFKA-9705
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.7.0
>
>
> In the bridge release, we need to restrict the direct access of ZK to 
> controller only. This means the existing AlterConfig path should be migrated.





[jira] [Updated] (KAFKA-10343) Remove 2.7 IBP for redirection enablement

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10343:

Fix Version/s: (was: 2.7.0)
   2.8.0

> Remove 2.7 IBP for redirection enablement
> -
>
> Key: KAFKA-10343
> URL: https://issues.apache.org/jira/browse/KAFKA-10343
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Priority: Major
> Fix For: 2.8.0
>
>
> The shipment of redirection could not be completed in 2.7. With that being 
> said, we need to patch a PR to disable it once the release branch is cut, by 
> removing the new IBP flag entirely.





[jira] [Commented] (KAFKA-10343) Remove 2.7 IBP for redirection enablement

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211400#comment-17211400
 ] 

Bill Bejeck commented on KAFKA-10343:
-

[~bchen225242] From the comments above, I'm going to move the "fix version" to 
2.8.0.  If the next release is a different version we can update it at that 
time.

> Remove 2.7 IBP for redirection enablement
> -
>
> Key: KAFKA-10343
> URL: https://issues.apache.org/jira/browse/KAFKA-10343
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Priority: Major
> Fix For: 2.8.0
>
>
> The shipment of redirection could not be completed in 2.7. With that being 
> said, we need to patch a PR to disable it once the release branch is cut, by 
> removing the new IBP flag entirely.





[jira] [Resolved] (KAFKA-10509) Add metric to track throttle time due to hitting connection rate quota

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-10509.
-
Resolution: Fixed

Resolved via [https://github.com/apache/kafka/pull/9317] merged on 9/28.

> Add metric to track throttle time due to hitting connection rate quota
> --
>
> Key: KAFKA-10509
> URL: https://issues.apache.org/jira/browse/KAFKA-10509
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Major
> Fix For: 2.7.0
>
>
> See KIP-612.
>  
> kafka.network:type=socket-server-metrics,name=connection-accept-throttle-time,listener=\{listenerName}
>  * Type: SampledStat.Avg
>  * Description: Average throttle time due to violating per-listener or 
> broker-wide connection acceptance rate quota on a given listener.





[jira] [Commented] (KAFKA-5453) Controller may miss requests sent to the broker when zk session timeout happens.

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211395#comment-17211395
 ] 

Bill Bejeck commented on KAFKA-5453:


No progress since the last release, I'm clearing the fix version field as part 
of the 2.7.0 release process.

> Controller may miss requests sent to the broker when zk session timeout 
> happens.
> 
>
> Key: KAFKA-5453
> URL: https://issues.apache.org/jira/browse/KAFKA-5453
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0
>Reporter: Jiangjie Qin
>Assignee: Viktor Somogyi-Vass
>Priority: Major
> Fix For: 2.7.0
>
>
> The issue I encountered was the following:
> 1. Partition reassignment was in progress, one replica of a partition is 
> being reassigned from broker 1 to broker 2.
> 2. Controller received an ISR change notification which indicates broker 2 
> has caught up.
> 3. Controller was sending StopReplicaRequest to broker 1.
> 4. Broker 1 zk session timeout occurs. Controller removed broker 1 from the 
> cluster and cleaned up the queue. i.e. the StopReplicaRequest was removed 
> from the ControllerChannelManager.
> 5. Broker 1 reconnected to zk and acted as if it were still a follower replica of 
> the partition. 
> 6. Broker 1 will always receive an exception from the leader because it is not 
> in the replica list.
> Not sure what is the correct fix here. It seems that broker 1 in this case 
> should ask the controller for the latest replica assignment.
> There are two related bugs:
> 1. when a {{NotAssignedReplicaException}} is thrown from 
> {{Partition.updateReplicaLogReadResult()}}, the other partitions in the same 
> request will fail to update the fetch timestamp and offset and thus also 
> drop out of the ISR.
> 2. The {{NotAssignedReplicaException}} was not properly returned to the 
> replicas; instead, an UnknownServerException is returned.





[jira] [Updated] (KAFKA-5453) Controller may miss requests sent to the broker when zk session timeout happens.

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-5453:
---
Fix Version/s: (was: 2.7.0)

> Controller may miss requests sent to the broker when zk session timeout 
> happens.
> 
>
> Key: KAFKA-5453
> URL: https://issues.apache.org/jira/browse/KAFKA-5453
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0
>Reporter: Jiangjie Qin
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>
> The issue I encountered was the following:
> 1. Partition reassignment was in progress, one replica of a partition is 
> being reassigned from broker 1 to broker 2.
> 2. Controller received an ISR change notification which indicates broker 2 
> has caught up.
> 3. Controller was sending StopReplicaRequest to broker 1.
> 4. Broker 1 zk session timeout occurs. Controller removed broker 1 from the 
> cluster and cleaned up the queue. i.e. the StopReplicaRequest was removed 
> from the ControllerChannelManager.
> 5. Broker 1 reconnected to zk and acted as if it were still a follower replica of 
> the partition. 
> 6. Broker 1 will always receive an exception from the leader because it is not 
> in the replica list.
> Not sure what is the correct fix here. It seems that broker 1 in this case 
> should ask the controller for the latest replica assignment.
> There are two related bugs:
> 1. when a {{NotAssignedReplicaException}} is thrown from 
> {{Partition.updateReplicaLogReadResult()}}, the other partitions in the same 
> request will fail to update the fetch timestamp and offset and thus also 
> drop out of the ISR.
> 2. The {{NotAssignedReplicaException}} was not properly returned to the 
> replicas; instead, an UnknownServerException is returned.





[jira] [Resolved] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-6078.

Resolution: Fixed

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.7.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Resolved] (KAFKA-8940) Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8940.

Resolution: Fixed

> Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance
> -
>
> Key: KAFKA-8940
> URL: https://issues.apache.org/jira/browse/KAFKA-8940
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
>
> I lost the screen shot unfortunately... it reports the set of expected 
> records does not match the received records.





[jira] [Commented] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211394#comment-17211394
 ] 

Bill Bejeck commented on KAFKA-6078:


No failures reported for this in 10 months. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.7.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Commented] (KAFKA-8940) Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211391#comment-17211391
 ] 

Bill Bejeck commented on KAFKA-8940:


As part of the 2.7.0 release process, I'm optimistically closing this ticket. 
If another failure occurs, we can re-open.

> Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance
> -
>
> Key: KAFKA-8940
> URL: https://issues.apache.org/jira/browse/KAFKA-8940
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.7.0, 2.5.2, 2.6.1
>
>
> I lost the screen shot unfortunately... it reports the set of expected 
> records does not match the received records.





[jira] [Updated] (KAFKA-8940) Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8940:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.5.2)
   (was: 2.7.0)

> Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance
> -
>
> Key: KAFKA-8940
> URL: https://issues.apache.org/jira/browse/KAFKA-8940
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
>
> I lost the screen shot unfortunately... it reports the set of expected 
> records does not match the received records.





[jira] [Updated] (KAFKA-8398) NPE when unmapping files after moving log directories using AlterReplicaLogDirs

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8398:
---
Fix Version/s: (was: 2.7.0)

> NPE when unmapping files after moving log directories using 
> AlterReplicaLogDirs
> ---
>
> Key: KAFKA-8398
> URL: https://issues.apache.org/jira/browse/KAFKA-8398
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Vikas Singh
>Assignee: Jaikiran Pai
>Priority: Major
> Attachments: AlterReplicaLogDirs.txt
>
>
> The NPE occurs after the AlterReplicaLogDirs command completes successfully 
> and when unmapping older regions. The relevant part of log is in attached log 
> file. Here is the stacktrace (which is repeated for both index files):
>  
> {code:java}
> [2019-05-20 14:08:13,999] ERROR Error unmapping index 
> /tmp/kafka-logs/test-0.567a0d8ff88b45ab95794020d0b2e66f-delete/.index
>  (kafka.log.OffsetIndex)
> java.lang.NullPointerException
> at 
> org.apache.kafka.common.utils.MappedByteBuffers.unmap(MappedByteBuffers.java:73)
> at kafka.log.AbstractIndex.forceUnmap(AbstractIndex.scala:318)
> at kafka.log.AbstractIndex.safeForceUnmap(AbstractIndex.scala:308)
> at kafka.log.AbstractIndex.$anonfun$closeHandler$1(AbstractIndex.scala:257)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.closeHandler(AbstractIndex.scala:257)
> at kafka.log.AbstractIndex.deleteIfExists(AbstractIndex.scala:226)
> at kafka.log.LogSegment.$anonfun$deleteIfExists$6(LogSegment.scala:597)
> at kafka.log.LogSegment.delete$1(LogSegment.scala:585)
> at kafka.log.LogSegment.$anonfun$deleteIfExists$5(LogSegment.scala:597)
> at kafka.utils.CoreUtils$.$anonfun$tryAll$1(CoreUtils.scala:115)
> at kafka.utils.CoreUtils$.$anonfun$tryAll$1$adapted(CoreUtils.scala:114)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at kafka.utils.CoreUtils$.tryAll(CoreUtils.scala:114)
> at kafka.log.LogSegment.deleteIfExists(LogSegment.scala:599)
> at kafka.log.Log.$anonfun$delete$3(Log.scala:1762)
> at kafka.log.Log.$anonfun$delete$3$adapted(Log.scala:1762)
> at scala.collection.Iterator.foreach(Iterator.scala:941)
> at scala.collection.Iterator.foreach$(Iterator.scala:941)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at kafka.log.Log.$anonfun$delete$2(Log.scala:1762)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.log.Log.maybeHandleIOException(Log.scala:2013)
> at kafka.log.Log.delete(Log.scala:1759)
> at kafka.log.LogManager.deleteLogs(LogManager.scala:761)
> at kafka.log.LogManager.$anonfun$deleteLogs$6(LogManager.scala:775)
> at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> [{code}





[jira] [Commented] (KAFKA-8398) NPE when unmapping files after moving log directories using AlterReplicaLogDirs

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17211387#comment-17211387
 ] 

Bill Bejeck commented on KAFKA-8398:


Since this is not a blocker, and the PR is closed, I'm clearing the "fix 
version" field as part of the 2.7.0 release process.

> NPE when unmapping files after moving log directories using 
> AlterReplicaLogDirs
> ---
>
> Key: KAFKA-8398
> URL: https://issues.apache.org/jira/browse/KAFKA-8398
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Vikas Singh
>Assignee: Jaikiran Pai
>Priority: Major
> Fix For: 2.7.0
>
> Attachments: AlterReplicaLogDirs.txt
>
>
> The NPE occurs after the AlterReplicaLogDirs command completes successfully 
> and when unmapping older regions. The relevant part of log is in attached log 
> file. Here is the stacktrace (which is repeated for both index files):
>  
> {code:java}
> [2019-05-20 14:08:13,999] ERROR Error unmapping index 
> /tmp/kafka-logs/test-0.567a0d8ff88b45ab95794020d0b2e66f-delete/.index
>  (kafka.log.OffsetIndex)
> java.lang.NullPointerException
> at 
> org.apache.kafka.common.utils.MappedByteBuffers.unmap(MappedByteBuffers.java:73)
> at kafka.log.AbstractIndex.forceUnmap(AbstractIndex.scala:318)
> at kafka.log.AbstractIndex.safeForceUnmap(AbstractIndex.scala:308)
> at kafka.log.AbstractIndex.$anonfun$closeHandler$1(AbstractIndex.scala:257)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.closeHandler(AbstractIndex.scala:257)
> at kafka.log.AbstractIndex.deleteIfExists(AbstractIndex.scala:226)
> at kafka.log.LogSegment.$anonfun$deleteIfExists$6(LogSegment.scala:597)
> at kafka.log.LogSegment.delete$1(LogSegment.scala:585)
> at kafka.log.LogSegment.$anonfun$deleteIfExists$5(LogSegment.scala:597)
> at kafka.utils.CoreUtils$.$anonfun$tryAll$1(CoreUtils.scala:115)
> at kafka.utils.CoreUtils$.$anonfun$tryAll$1$adapted(CoreUtils.scala:114)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at kafka.utils.CoreUtils$.tryAll(CoreUtils.scala:114)
> at kafka.log.LogSegment.deleteIfExists(LogSegment.scala:599)
> at kafka.log.Log.$anonfun$delete$3(Log.scala:1762)
> at kafka.log.Log.$anonfun$delete$3$adapted(Log.scala:1762)
> at scala.collection.Iterator.foreach(Iterator.scala:941)
> at scala.collection.Iterator.foreach$(Iterator.scala:941)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at kafka.log.Log.$anonfun$delete$2(Log.scala:1762)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.log.Log.maybeHandleIOException(Log.scala:2013)
> at kafka.log.Log.delete(Log.scala:1759)
> at kafka.log.LogManager.deleteLogs(LogManager.scala:761)
> at kafka.log.LogManager.$anonfun$deleteLogs$6(LogManager.scala:775)
> at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> [{code}





[jira] [Resolved] (KAFKA-10140) Incremental config api excludes plugin config changes

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-10140.
-
Resolution: Fixed

> Incremental config api excludes plugin config changes
> -
>
> Key: KAFKA-10140
> URL: https://issues.apache.org/jira/browse/KAFKA-10140
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Critical
> Fix For: 2.7.0
>
>
> I was trying to alter the jmx metric filters using the incremental alter 
> config api and hit this error:
> ```
> java.util.NoSuchElementException: key not found: metrics.jmx.blacklist
>   at scala.collection.MapLike.default(MapLike.scala:235)
>   at scala.collection.MapLike.default$(MapLike.scala:234)
>   at scala.collection.AbstractMap.default(Map.scala:65)
>   at scala.collection.MapLike.apply(MapLike.scala:144)
>   at scala.collection.MapLike.apply$(MapLike.scala:143)
>   at scala.collection.AbstractMap.apply(Map.scala:65)
>   at kafka.server.AdminManager.listType$1(AdminManager.scala:681)
>   at 
> kafka.server.AdminManager.$anonfun$prepareIncrementalConfigs$1(AdminManager.scala:693)
>   at 
> kafka.server.AdminManager.prepareIncrementalConfigs(AdminManager.scala:687)
>   at 
> kafka.server.AdminManager.$anonfun$incrementalAlterConfigs$1(AdminManager.scala:618)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:273)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:154)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:273)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:266)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>   at 
> kafka.server.AdminManager.incrementalAlterConfigs(AdminManager.scala:589)
>   at 
> kafka.server.KafkaApis.handleIncrementalAlterConfigsRequest(KafkaApis.scala:2698)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:188)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:78)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> ```
> It looks like we are only allowing changes to the keys defined in 
> `KafkaConfig` through this API. This excludes config changes to any plugin 
> components such as `JmxReporter`. 
> Note that I was able to use the regular `alterConfig` API to change this 
> config.





[jira] [Resolved] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-6824.

Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.





[jira] [Commented] (KAFKA-10140) Incremental config api excludes plugin config changes

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17211386#comment-17211386
 ] 

Bill Bejeck commented on KAFKA-10140:
-

Since this is not a blocker and there is no active PR, I'm clearing the "fix 
version" field as part of the 2.7.0 release process.

> Incremental config api excludes plugin config changes
> -
>
> Key: KAFKA-10140
> URL: https://issues.apache.org/jira/browse/KAFKA-10140
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Critical
> Fix For: 2.7.0
>
>
> I was trying to alter the jmx metric filters using the incremental alter 
> config api and hit this error:
> ```
> java.util.NoSuchElementException: key not found: metrics.jmx.blacklist
>   at scala.collection.MapLike.default(MapLike.scala:235)
>   at scala.collection.MapLike.default$(MapLike.scala:234)
>   at scala.collection.AbstractMap.default(Map.scala:65)
>   at scala.collection.MapLike.apply(MapLike.scala:144)
>   at scala.collection.MapLike.apply$(MapLike.scala:143)
>   at scala.collection.AbstractMap.apply(Map.scala:65)
>   at kafka.server.AdminManager.listType$1(AdminManager.scala:681)
>   at 
> kafka.server.AdminManager.$anonfun$prepareIncrementalConfigs$1(AdminManager.scala:693)
>   at 
> kafka.server.AdminManager.prepareIncrementalConfigs(AdminManager.scala:687)
>   at 
> kafka.server.AdminManager.$anonfun$incrementalAlterConfigs$1(AdminManager.scala:618)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:273)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:154)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:273)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:266)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>   at 
> kafka.server.AdminManager.incrementalAlterConfigs(AdminManager.scala:589)
>   at 
> kafka.server.KafkaApis.handleIncrementalAlterConfigsRequest(KafkaApis.scala:2698)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:188)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:78)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> ```
> It looks like we are only allowing changes to the keys defined in 
> `KafkaConfig` through this API. This excludes config changes to any plugin 
> components such as `JmxReporter`. 
> Note that I was able to use the regular `alterConfig` API to change this 
> config.





[jira] [Updated] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-6824:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.





[jira] [Commented] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17211385#comment-17211385
 ] 

Bill Bejeck commented on KAFKA-6824:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.





[jira] [Resolved] (KAFKA-8257) Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8257.

Resolution: Fixed

> Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota
> 
>
> Key: KAFKA-8257
> URL: https://issues.apache.org/jira/browse/KAFKA-8257
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3566/tests]
> {quote}java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at kafka.server.BaseRequestTest.receiveResponse(BaseRequestTest.scala:87)
> at kafka.server.BaseRequestTest.sendAndReceive(BaseRequestTest.scala:148)
> at 
> kafka.network.DynamicConnectionQuotaTest.verifyConnection(DynamicConnectionQuotaTest.scala:229)
> at 
> kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4(DynamicConnectionQuotaTest.scala:133)
> at 
> kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4$adapted(DynamicConnectionQuotaTest.scala:133)
> at scala.collection.Iterator.foreach(Iterator.scala:941)
> at scala.collection.Iterator.foreach$(Iterator.scala:941)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicListenerConnectionQuota(DynamicConnectionQuotaTest.scala:133){quote}





[jira] [Updated] (KAFKA-8257) Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8257:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota
> 
>
> Key: KAFKA-8257
> URL: https://issues.apache.org/jira/browse/KAFKA-8257
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3566/tests]
> {quote}java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at kafka.server.BaseRequestTest.receiveResponse(BaseRequestTest.scala:87)
> at kafka.server.BaseRequestTest.sendAndReceive(BaseRequestTest.scala:148)
> at 
> kafka.network.DynamicConnectionQuotaTest.verifyConnection(DynamicConnectionQuotaTest.scala:229)
> at 
> kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4(DynamicConnectionQuotaTest.scala:133)
> at 
> kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4$adapted(DynamicConnectionQuotaTest.scala:133)
> at scala.collection.Iterator.foreach(Iterator.scala:941)
> at scala.collection.Iterator.foreach$(Iterator.scala:941)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicListenerConnectionQuota(DynamicConnectionQuotaTest.scala:133){quote}





[jira] [Commented] (KAFKA-8257) Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17211381#comment-17211381
 ] 

Bill Bejeck commented on KAFKA-8257:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota
> 
>
> Key: KAFKA-8257
> URL: https://issues.apache.org/jira/browse/KAFKA-8257
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3566/tests]
> {quote}java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at kafka.server.BaseRequestTest.receiveResponse(BaseRequestTest.scala:87)
> at kafka.server.BaseRequestTest.sendAndReceive(BaseRequestTest.scala:148)
> at 
> kafka.network.DynamicConnectionQuotaTest.verifyConnection(DynamicConnectionQuotaTest.scala:229)
> at 
> kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4(DynamicConnectionQuotaTest.scala:133)
> at 
> kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4$adapted(DynamicConnectionQuotaTest.scala:133)
> at scala.collection.Iterator.foreach(Iterator.scala:941)
> at scala.collection.Iterator.foreach$(Iterator.scala:941)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicListenerConnectionQuota(DynamicConnectionQuotaTest.scala:133){quote}





[jira] [Commented] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17211380#comment-17211380
 ] 

Bill Bejeck commented on KAFKA-8139:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
> 
>
> Key: KAFKA-8139
> URL: https://issues.apache.org/jira/browse/KAFKA-8139
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 120000 milliseconds at java.lang.Object.wait(Native Method) at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334) at 
> java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391) at 
> java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379) at 
> scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
>  at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423) 
> at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
>  at 
> scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
>  at 
> scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465) 
> at 
> scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464) 
> at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58) at 
> kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201) at 
> kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
>  at 
> kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134) 
> at 
> kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}
> STDOUT
> {quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159) 
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at 
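The IllegalArgumentException repeated in these reports means the JVM never saw a JAAS login file before `SocketServer` startup. As a hedged sketch (the file path and credentials below are illustrative, not from the ticket), the fix is to point `java.security.auth.login.config` at a file containing a `KafkaServer` section before any broker code runs:

```java
public class JaasSetup {
    public static void main(String[] args) {
        // A minimal kafka_server_jaas.conf would contain a section such as:
        //
        //   KafkaServer {
        //       org.apache.kafka.common.security.plain.PlainLoginModule required
        //       username="admin"
        //       password="admin-secret";
        //   };
        //
        // The property must be set before broker/SocketServer startup,
        // otherwise JaasContext.defaultContext() fails as in the stack trace above.
        System.setProperty("java.security.auth.login.config",
                "/etc/kafka/kafka_server_jaas.conf");
        System.out.println(System.getProperty("java.security.auth.login.config"));
    }
}
```

In the test harness, the equivalent setup happens in the integration test's `setUp`; the flaky failure appears when a broker restarts after that configuration has been cleared.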

[jira] [Resolved] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8139.

Resolution: Fixed

> Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
> 
>
> Key: KAFKA-8139
> URL: https://issues.apache.org/jira/browse/KAFKA-8139
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 120000 milliseconds at java.lang.Object.wait(Native Method) at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334) at 
> java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391) at 
> java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379) at 
> scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
>  at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423) 
> at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
>  at 
> scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
>  at 
> scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465) 
> at 
> scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464) 
> at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58) at 
> kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201) at 
> kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
>  at 
> kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134) 
> at 
> kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}
> STDOUT
> {quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159) 
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:74) at 

[jira] [Resolved] (KAFKA-8092) Flaky Test GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8092.

Resolution: Fixed

> Flaky Test 
> GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess
> --
>
> Key: KAFKA-8092
> URL: https://issues.apache.org/jira/browse/KAFKA-8092
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/64/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testSendOffsetsWithNoConsumerGroupDescribeAccess/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-11 16:08:29,319] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38324,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38324-127.0.0.1:59458-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:29,933] ERROR [Consumer 
> clientId=consumer-99, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-11 16:08:29,933] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-11 16:08:31,370] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=33310,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:33310-127.0.0.1:49676-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:34,437] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=35999,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:35999-127.0.0.1:48268-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:40,978] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38267,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38267-127.0.0.1:53148-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> 

[jira] [Updated] (KAFKA-8092) Flaky Test GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8092:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test 
> GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess
> --
>
> Key: KAFKA-8092
> URL: https://issues.apache.org/jira/browse/KAFKA-8092
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/64/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testSendOffsetsWithNoConsumerGroupDescribeAccess/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-11 16:08:29,319] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38324,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38324-127.0.0.1:59458-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:29,933] ERROR [Consumer 
> clientId=consumer-99, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-11 16:08:29,933] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-11 16:08:31,370] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=33310,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:33310-127.0.0.1:49676-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:34,437] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=35999,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:35999-127.0.0.1:48268-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:40,978] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38267,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38267-127.0.0.1:53148-0, 
> 

[jira] [Updated] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8139:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
> 
>
> Key: KAFKA-8139
> URL: https://issues.apache.org/jira/browse/KAFKA-8139
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 120000 milliseconds at java.lang.Object.wait(Native Method) at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334) at 
> java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391) at 
> java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379) at 
> scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
>  at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423) 
> at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
>  at 
> scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
>  at 
> scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465) 
> at 
> scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464) 
> at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58) at 
> kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201) at 
> kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
>  at 
> kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134) 
> at 
> kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}
> STDOUT
> {quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159) 
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> 

[jira] [Commented] (KAFKA-8092) Flaky Test GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211379#comment-17211379
 ] 

Bill Bejeck commented on KAFKA-8092:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test 
> GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess
> --
>
> Key: KAFKA-8092
> URL: https://issues.apache.org/jira/browse/KAFKA-8092
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/64/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testSendOffsetsWithNoConsumerGroupDescribeAccess/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-11 16:08:29,319] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38324,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38324-127.0.0.1:59458-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:29,933] ERROR [Consumer 
> clientId=consumer-99, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-11 16:08:29,933] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-11 16:08:31,370] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=33310,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:33310-127.0.0.1:49676-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:34,437] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=35999,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:35999-127.0.0.1:48268-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:40,978] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38267,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  

[jira] [Resolved] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7647.

Resolution: Fixed

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.1.1, 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}
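The assertion diff above shows a stale entry (key 99) surviving a combined compact-and-delete pass. Log compaction's contract is that, after cleaning, at most the latest value per key remains; a minimal model of that keep-latest semantics (a sketch, not the actual LogCleaner code):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionSketch {
    record Record(int key, String value) {}

    // Keep-latest semantics: replaying the log in order, later writes to
    // a key overwrite earlier ones, which is what a compacted log retains.
    static Map<Integer, String> compact(List<Record> log) {
        Map<Integer, String> latest = new LinkedHashMap<>();
        for (Record r : log) {
            latest.put(r.key(), r.value());
        }
        return latest;
    }

    public static void main(String[] args) {
        List<Record> log = List.of(
                new Record(99, "(299,299)"), // old value, as in the failing diff
                new Record(0, "(340,340)"),
                new Record(99, "(399,399)")); // a newer write should win
        System.out.println(compact(log).get(99)); // prints "(399,399)"
    }
}
```

The flaky failure is the cleaner not yet having removed the old `99 -> (299,299)` entry when the test read the map back.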





[jira] [Commented] (KAFKA-8076) Flaky Test ProduceRequestTest#testSimpleProduceRequest

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211378#comment-17211378
 ] 

Bill Bejeck commented on KAFKA-8076:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test ProduceRequestTest#testSimpleProduceRequest
> --
>
> Key: KAFKA-8076
> URL: https://issues.apache.org/jira/browse/KAFKA-8076
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.server/ProduceRequestTest/testSimpleProduceRequest/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.server.ProduceRequestTest.createTopicAndFindPartitionWithLeader(ProduceRequestTest.scala:91)
>  at 
> kafka.server.ProduceRequestTest.testSimpleProduceRequest(ProduceRequestTest.scala:42)
> {quote}
> STDOUT
> {quote}[2019-03-08 01:42:24,797] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-08 01:42:38,287] WARN Unable to 
> read additional data from client sessionid 0x100712b09280002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
> {quote}
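The failure quoted above is a bounded poll loop (`kafka.utils.TestUtils.waitUntilTrue`) giving up after 15000 ms while waiting for topic metadata to propagate. As a rough illustration only, not Kafka's actual API, the general shape of such a wait-until helper in Java looks like this:

```java
import java.util.function.BooleanSupplier;

public class WaitUntil {
    // Polls the condition until it holds or the timeout elapses; this mirrors
    // the idea behind TestUtils.waitUntilTrue, with illustrative names only.
    public static boolean waitUntilTrue(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (condition.getAsBoolean())
                return true;
            if (System.currentTimeMillis() >= deadline)
                return false; // the test fails here with "not propagated after <timeout> ms"
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms, well inside a 2 s budget.
        System.out.println(waitUntilTrue(() -> System.currentTimeMillis() - start >= 200, 2000, 50));
    }
}
```

When the condition never becomes true within the budget (as in the flaky run above), the loop returns false and the test asserts a failure.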





[jira] [Resolved] (KAFKA-8076) Flaky Test ProduceRequestTest#testSimpleProduceRequest

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8076.

Resolution: Fixed

> Flaky Test ProduceRequestTest#testSimpleProduceRequest
> --
>
> Key: KAFKA-8076
> URL: https://issues.apache.org/jira/browse/KAFKA-8076
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.server/ProduceRequestTest/testSimpleProduceRequest/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.server.ProduceRequestTest.createTopicAndFindPartitionWithLeader(ProduceRequestTest.scala:91)
>  at 
> kafka.server.ProduceRequestTest.testSimpleProduceRequest(ProduceRequestTest.scala:42)
> {quote}
> STDOUT
> {quote}[2019-03-08 01:42:24,797] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-08 01:42:38,287] WARN Unable to 
> read additional data from client sessionid 0x100712b09280002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
> {quote}





[jira] [Updated] (KAFKA-8076) Flaky Test ProduceRequestTest#testSimpleProduceRequest

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8076:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test ProduceRequestTest#testSimpleProduceRequest
> --
>
> Key: KAFKA-8076
> URL: https://issues.apache.org/jira/browse/KAFKA-8076
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.server/ProduceRequestTest/testSimpleProduceRequest/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.server.ProduceRequestTest.createTopicAndFindPartitionWithLeader(ProduceRequestTest.scala:91)
>  at 
> kafka.server.ProduceRequestTest.testSimpleProduceRequest(ProduceRequestTest.scala:42)
> {quote}
> STDOUT
> {quote}[2019-03-08 01:42:24,797] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-08 01:42:38,287] WARN Unable to 
> read additional data from client sessionid 0x100712b09280002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
> {quote}





[jira] [Commented] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211375#comment-17211375
 ] 

Bill Bejeck commented on KAFKA-8137:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
> --
>
> Key: KAFKA-8137
> URL: https://issues.apache.org/jira/browse/KAFKA-8137
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
> STDOUT
> {quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,093] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,493] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,724] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,388] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,394] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,224] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,249] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 

[jira] [Updated] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8137:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
> --
>
> Key: KAFKA-8137
> URL: https://issues.apache.org/jira/browse/KAFKA-8137
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
> STDOUT
> {quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,093] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,493] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,724] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,388] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,394] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,224] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,249] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:49,255] ERROR 
> 

[jira] [Commented] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211376#comment-17211376
 ] 

Bill Bejeck commented on KAFKA-7647:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.1.1, 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}
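The assertion in this test compares the full key-to-value map before and after log cleaning, so a single leftover record (key 99 in the trace above) fails strict `assertEquals`. A small self-contained sketch, using plain `java.util.Map` equality rather than the test's actual Scala collections, shows why one extra entry breaks the comparison:

```java
import java.util.HashMap;
import java.util.Map;

public class MapInvariant {
    public static void main(String[] args) {
        Map<Integer, String> expected = new HashMap<>();
        Map<Integer, String> actual = new HashMap<>();
        // Keys 0..19 map to identical values in both maps, as in the trace.
        for (int k = 0; k < 20; k++) {
            String v = "(" + (340 + k) + "," + (340 + k) + ")";
            expected.put(k, v);
            actual.put(k, v);
        }
        // A record the cleaner should have removed is still present:
        actual.put(99, "(299,299)");
        // Map.equals requires identical key sets, so the stray key fails it.
        System.out.println(expected.equals(actual));
    }
}
```

Here the comparison prints `false` solely because of the extra key, which is exactly how the flaky run's "Contents of the map shouldn't change" assertion surfaces.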





[jira] [Resolved] (KAFKA-8108) Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8108.

Resolution: Fixed

> Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer
> 
>
> Key: KAFKA-8108
> URL: https://issues.apache.org/jira/browse/KAFKA-8108
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testThrottledProducerConsumer(BaseQuotaTest.scala:82)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testThrottledProducerConsumer/





[jira] [Updated] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-7647:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.1.1, 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}





[jira] [Updated] (KAFKA-8108) Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8108:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer
> 
>
> Key: KAFKA-8108
> URL: https://issues.apache.org/jira/browse/KAFKA-8108
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testThrottledProducerConsumer(BaseQuotaTest.scala:82)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testThrottledProducerConsumer/





[jira] [Resolved] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8137.

Resolution: Fixed

> Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
> --
>
> Key: KAFKA-8137
> URL: https://issues.apache.org/jira/browse/KAFKA-8137
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
> STDOUT
> {quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,093] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,493] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,724] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,388] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,394] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,224] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,249] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:49,255] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, 

[jira] [Resolved] (KAFKA-8303) Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8303.

Resolution: Fixed

> Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint
> -
>
> Key: KAFKA-8303
> URL: https://issues.apache.org/jira/browse/KAFKA-8303
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, security, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/21274/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testLogStartOffsetCheckpoint/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testLogStartOffsetCheckpoint$2.apply$mcZ$sp(AdminClientIntegrationTest.scala:820)
>  at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:789) at 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint(AdminClientIntegrationTest.scala:813){quote}
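The stack trace above shows a broker-side timeout surfacing through `KafkaFutureImpl.get` as an `ExecutionException` wrapping a `TimeoutException`. A minimal sketch of that wrapping behavior, using the JDK's `CompletableFuture` and `java.util.concurrent.TimeoutException` as stand-ins for Kafka's future and error types:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeoutException;

public class FutureTimeout {
    public static void main(String[] args) throws InterruptedException {
        // A future the "broker" fails with a timeout, as in the trace above.
        CompletableFuture<Long> offset = new CompletableFuture<>();
        offset.completeExceptionally(new TimeoutException("Aborted due to timeout."));
        try {
            offset.get(); // get() blocks; here it throws immediately
        } catch (ExecutionException e) {
            // The original TimeoutException is preserved as the cause,
            // mirroring the wrapAndThrow frame in the stack trace.
            System.out.println(e.getCause() instanceof TimeoutException);
        }
    }
}
```

A caller that wants to distinguish a timed-out admin operation from other failures inspects `getCause()` on the `ExecutionException`, which is why the quoted trace shows both exception types.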





[jira] [Commented] (KAFKA-8108) Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211374#comment-17211374
 ] 

Bill Bejeck commented on KAFKA-8108:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer
> 
>
> Key: KAFKA-8108
> URL: https://issues.apache.org/jira/browse/KAFKA-8108
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testThrottledProducerConsumer(BaseQuotaTest.scala:82)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testThrottledProducerConsumer/





[jira] [Commented] (KAFKA-8303) Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211372#comment-17211372
 ] 

Bill Bejeck commented on KAFKA-8303:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint
> -
>
> Key: KAFKA-8303
> URL: https://issues.apache.org/jira/browse/KAFKA-8303
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, security, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/21274/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testLogStartOffsetCheckpoint/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testLogStartOffsetCheckpoint$2.apply$mcZ$sp(AdminClientIntegrationTest.scala:820)
>  at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:789) at 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint(AdminClientIntegrationTest.scala:813){quote}





[jira] [Updated] (KAFKA-8303) Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8303:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint
> -
>
> Key: KAFKA-8303
> URL: https://issues.apache.org/jira/browse/KAFKA-8303
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, security, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/21274/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testLogStartOffsetCheckpoint/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testLogStartOffsetCheckpoint$2.apply$mcZ$sp(AdminClientIntegrationTest.scala:820)
>  at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:789) at 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint(AdminClientIntegrationTest.scala:813){quote}





[jira] [Updated] (KAFKA-8079) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8079:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange
> -
>
> Key: KAFKA-8079
> URL: https://issues.apache.org/jira/browse/KAFKA-8079
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.$anonfun$shouldSurviveFastLeaderChange$2(EpochDrivenReplicationProtocolAcceptanceTest.scala:294)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldSurviveFastLeaderChange(EpochDrivenReplicationProtocolAcceptanceTest.scala:273){quote}
> STDOUT
> {quote}[2019-03-08 01:16:02,452] ERROR [ReplicaFetcher replicaId=101, 
> leaderId=100, fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:23,677] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
> fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:35,779] ERROR [Controller id=100] Error completing 
> preferred replica leader election for partition topic1-0 
> (kafka.controller.KafkaController:76)
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> topic1-0 under strategy PreferredReplicaPartitionLeaderElectionStrategy
> at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
> at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
> at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
> at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$onPreferredReplicaElection(KafkaController.scala:649)
> at 
> kafka.controller.KafkaController.$anonfun$checkAndTriggerAutoLeaderRebalance$6(KafkaController.scala:1008)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:128)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerAutoLeaderRebalance(KafkaController.scala:989)
> at 
> kafka.controller.KafkaController$AutoPreferredReplicaLeaderElection$.process(KafkaController.scala:1020)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> Dumping /tmp/kafka-2158669830092629415/topic1-0/.log
> Starting offset: 0
> baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 0 CreateTime: 1552007783877 size: 141 magic: 
> 2 compresscodec: SNAPPY crc: 2264724941 isvalid: true
> baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 141 CreateTime: 1552007784731 size: 141 
> magic: 2 compresscodec: SNAPPY crc: 14988968 isvalid: true
> baseOffset: 2 lastOffset: 2 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: 

[jira] [Resolved] (KAFKA-8079) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8079.

Resolution: Fixed

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange
> -
>
> Key: KAFKA-8079
> URL: https://issues.apache.org/jira/browse/KAFKA-8079
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.$anonfun$shouldSurviveFastLeaderChange$2(EpochDrivenReplicationProtocolAcceptanceTest.scala:294)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldSurviveFastLeaderChange(EpochDrivenReplicationProtocolAcceptanceTest.scala:273){quote}
> STDOUT
> {quote}[2019-03-08 01:16:02,452] ERROR [ReplicaFetcher replicaId=101, 
> leaderId=100, fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:23,677] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
> fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:35,779] ERROR [Controller id=100] Error completing 
> preferred replica leader election for partition topic1-0 
> (kafka.controller.KafkaController:76)
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> topic1-0 under strategy PreferredReplicaPartitionLeaderElectionStrategy
> at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
> at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
> at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
> at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$onPreferredReplicaElection(KafkaController.scala:649)
> at 
> kafka.controller.KafkaController.$anonfun$checkAndTriggerAutoLeaderRebalance$6(KafkaController.scala:1008)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:128)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerAutoLeaderRebalance(KafkaController.scala:989)
> at 
> kafka.controller.KafkaController$AutoPreferredReplicaLeaderElection$.process(KafkaController.scala:1020)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> Dumping /tmp/kafka-2158669830092629415/topic1-0/.log
> Starting offset: 0
> baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 0 CreateTime: 1552007783877 size: 141 magic: 
> 2 compresscodec: SNAPPY crc: 2264724941 isvalid: true
> baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 141 CreateTime: 1552007784731 size: 141 
> magic: 2 compresscodec: SNAPPY crc: 14988968 isvalid: true
> baseOffset: 2 lastOffset: 2 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 282 CreateTime: 1552007784734 

[jira] [Commented] (KAFKA-7988) Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211370#comment-17211370
 ] 

Bill Bejeck commented on KAFKA-7988:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize
> 
>
> Key: KAFKA-7988
> URL: https://issues.apache.org/jira/browse/KAFKA-7988
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/30/]
> {quote}kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize 
> FAILED java.lang.AssertionError: Invalid threads: expected 6, got 5: 
> List(ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-1, 
> ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-2, ReplicaFetcherThread-0-1) 
> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreads(DynamicBrokerReconfigurationTest.scala:1260)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.maybeVerifyThreadPoolSize$1(DynamicBrokerReconfigurationTest.scala:531)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.resizeThreadPool$1(DynamicBrokerReconfigurationTest.scala:550)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.reducePoolSize$1(DynamicBrokerReconfigurationTest.scala:536)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$testThreadPoolResize$3(DynamicBrokerReconfigurationTest.scala:559)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreadPoolResize$1(DynamicBrokerReconfigurationTest.scala:558)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:572){quote}





[jira] [Updated] (KAFKA-7988) Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-7988:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize
> 
>
> Key: KAFKA-7988
> URL: https://issues.apache.org/jira/browse/KAFKA-7988
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/30/]
> {quote}kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize 
> FAILED java.lang.AssertionError: Invalid threads: expected 6, got 5: 
> List(ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-1, 
> ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-2, ReplicaFetcherThread-0-1) 
> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreads(DynamicBrokerReconfigurationTest.scala:1260)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.maybeVerifyThreadPoolSize$1(DynamicBrokerReconfigurationTest.scala:531)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.resizeThreadPool$1(DynamicBrokerReconfigurationTest.scala:550)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.reducePoolSize$1(DynamicBrokerReconfigurationTest.scala:536)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$testThreadPoolResize$3(DynamicBrokerReconfigurationTest.scala:559)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreadPoolResize$1(DynamicBrokerReconfigurationTest.scala:558)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:572){quote}





[jira] [Resolved] (KAFKA-7988) Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7988.

Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize
> 
>
> Key: KAFKA-7988
> URL: https://issues.apache.org/jira/browse/KAFKA-7988
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/30/]
> {quote}kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize 
> FAILED java.lang.AssertionError: Invalid threads: expected 6, got 5: 
> List(ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-1, 
> ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-2, ReplicaFetcherThread-0-1) 
> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreads(DynamicBrokerReconfigurationTest.scala:1260)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.maybeVerifyThreadPoolSize$1(DynamicBrokerReconfigurationTest.scala:531)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.resizeThreadPool$1(DynamicBrokerReconfigurationTest.scala:550)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.reducePoolSize$1(DynamicBrokerReconfigurationTest.scala:536)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$testThreadPoolResize$3(DynamicBrokerReconfigurationTest.scala:559)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreadPoolResize$1(DynamicBrokerReconfigurationTest.scala:558)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:572){quote}





[jira] [Commented] (KAFKA-8079) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211371#comment-17211371
 ] 

Bill Bejeck commented on KAFKA-8079:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange
> -
>
> Key: KAFKA-8079
> URL: https://issues.apache.org/jira/browse/KAFKA-8079
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.$anonfun$shouldSurviveFastLeaderChange$2(EpochDrivenReplicationProtocolAcceptanceTest.scala:294)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldSurviveFastLeaderChange(EpochDrivenReplicationProtocolAcceptanceTest.scala:273){quote}
> STDOUT
> {quote}[2019-03-08 01:16:02,452] ERROR [ReplicaFetcher replicaId=101, 
> leaderId=100, fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:23,677] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
> fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:35,779] ERROR [Controller id=100] Error completing 
> preferred replica leader election for partition topic1-0 
> (kafka.controller.KafkaController:76)
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> topic1-0 under strategy PreferredReplicaPartitionLeaderElectionStrategy
> at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
> at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
> at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
> at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$onPreferredReplicaElection(KafkaController.scala:649)
> at 
> kafka.controller.KafkaController.$anonfun$checkAndTriggerAutoLeaderRebalance$6(KafkaController.scala:1008)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:128)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerAutoLeaderRebalance(KafkaController.scala:989)
> at 
> kafka.controller.KafkaController$AutoPreferredReplicaLeaderElection$.process(KafkaController.scala:1020)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> Dumping /tmp/kafka-2158669830092629415/topic1-0/.log
> Starting offset: 0
> baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 0 CreateTime: 1552007783877 size: 141 magic: 
> 2 compresscodec: SNAPPY crc: 2264724941 isvalid: true
> baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 141 CreateTime: 1552007784731 size: 141 
> magic: 2 compresscodec: SNAPPY crc: 14988968 

[jira] [Commented] (KAFKA-8113) Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211369#comment-17211369
 ] 

Bill Bejeck commented on KAFKA-8113:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch
> -
>
> Key: KAFKA-8113
> URL: https://issues.apache.org/jira/browse/KAFKA-8113
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3468/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.ListOffsetsRequestTest.fetchOffsetAndEpoch$1(ListOffsetsRequestTest.scala:136)
> at 
> kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch(ListOffsetsRequestTest.scala:151){quote}
> STDOUT
> {quote}[2019-03-15 17:16:13,029] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-15 17:16:13,231] ERROR [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ReplicaNotAvailableException: Partition 
> topic-0 is not available{quote}





[jira] [Resolved] (KAFKA-8113) Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8113.

Resolution: Fixed

> Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch
> -
>
> Key: KAFKA-8113
> URL: https://issues.apache.org/jira/browse/KAFKA-8113
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3468/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.ListOffsetsRequestTest.fetchOffsetAndEpoch$1(ListOffsetsRequestTest.scala:136)
> at 
> kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch(ListOffsetsRequestTest.scala:151){quote}
> STDOUT
> {quote}[2019-03-15 17:16:13,029] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-15 17:16:13,231] ERROR [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ReplicaNotAvailableException: Partition 
> topic-0 is not available{quote}





[jira] [Updated] (KAFKA-8113) Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8113:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch
> -
>
> Key: KAFKA-8113
> URL: https://issues.apache.org/jira/browse/KAFKA-8113
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3468/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.ListOffsetsRequestTest.fetchOffsetAndEpoch$1(ListOffsetsRequestTest.scala:136)
> at 
> kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch(ListOffsetsRequestTest.scala:151){quote}
> STDOUT
> {quote}[2019-03-15 17:16:13,029] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-15 17:16:13,231] ERROR [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ReplicaNotAvailableException: Partition 
> topic-0 is not available{quote}





[jira] [Resolved] (KAFKA-8087) Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8087.

Resolution: Fixed

> Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId
> -
>
> Key: KAFKA-8087
> URL: https://issues.apache.org/jira/browse/KAFKA-8087
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/PlaintextConsumerTest/testConsumingWithNullGroupId/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT
> {quote}[2019-03-09 08:39:02,022] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,022] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,202] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,204] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,511] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:06,568] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,582] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,787] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 

[jira] [Resolved] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8077.

Resolution: Fixed

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8087) Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8087:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId
> -
>
> Key: KAFKA-8087
> URL: https://issues.apache.org/jira/browse/KAFKA-8087
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/PlaintextConsumerTest/testConsumingWithNullGroupId/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT
> {quote}[2019-03-09 08:39:02,022] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,022] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,202] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,204] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,511] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:06,568] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,582] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,787] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, 

[jira] [Updated] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8077:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}





[jira] [Commented] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17211367#comment-17211367
 ] 

Bill Bejeck commented on KAFKA-8077:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}





[jira] [Commented] (KAFKA-8087) Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17211366#comment-17211366
 ] 

Bill Bejeck commented on KAFKA-8087:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId
> -
>
> Key: KAFKA-8087
> URL: https://issues.apache.org/jira/browse/KAFKA-8087
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/PlaintextConsumerTest/testConsumingWithNullGroupId/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT
> {quote}[2019-03-09 08:39:02,022] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,022] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,202] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,204] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,511] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:06,568] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,582] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> 

[jira] [Resolved] (KAFKA-8075) Flaky Test GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8075.

Resolution: Fixed

> Flaky Test 
> GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit
> --
>
> Key: KAFKA-8075
> URL: https://issues.apache.org/jira/browse/KAFKA-8075
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testTransactionalProducerTopicAuthorizationExceptionInCommit/]
> {quote}org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 3000ms.{quote}
> STDOUT
> {quote}[2019-03-08 01:48:45,226] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Offset commit failed on partition topic-0 at offset 5: Not 
> authorized to access topics: [Topic authorization failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-08 01:48:45,227] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-08 01:48:57,870] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=43610,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:43610-127.0.0.1:44870-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:14,858] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44107,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44107-127.0.0.1:38156-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:21,984] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=39025,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:39025-127.0.0.1:41474-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:39,438] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44798,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44798-127.0.0.1:58496-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. Error: Consumer group 'my-group' does not 
> exist. [2019-03-08 01:49:55,502] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748) [2019-03-08 01:50:02,720] WARN 
> Unable to read additional data from client sessionid 0x1007131d81c0001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-08 01:50:03,855] 
> ERROR [KafkaApi-0] Error when handling request: 

[jira] [Updated] (KAFKA-8075) Flaky Test GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8075:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test 
> GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit
> --
>
> Key: KAFKA-8075
> URL: https://issues.apache.org/jira/browse/KAFKA-8075
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testTransactionalProducerTopicAuthorizationExceptionInCommit/]
> {quote}org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 3000ms.{quote}
> STDOUT
> {quote}[2019-03-08 01:48:45,226] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Offset commit failed on partition topic-0 at offset 5: Not 
> authorized to access topics: [Topic authorization failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-08 01:48:45,227] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-08 01:48:57,870] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=43610,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:43610-127.0.0.1:44870-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:14,858] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44107,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44107-127.0.0.1:38156-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:21,984] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=39025,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:39025-127.0.0.1:41474-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:39,438] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44798,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44798-127.0.0.1:58496-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. Error: Consumer group 'my-group' does not 
> exist. [2019-03-08 01:49:55,502] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748) [2019-03-08 01:50:02,720] WARN 
> Unable to read additional data from client sessionid 0x1007131d81c0001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-08 01:50:03,855] 
> ERROR 

[jira] [Commented] (KAFKA-8075) Flaky Test GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211365#comment-17211365
 ] 

Bill Bejeck commented on KAFKA-8075:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test 
> GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit
> --
>
> Key: KAFKA-8075
> URL: https://issues.apache.org/jira/browse/KAFKA-8075
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1

[jira] [Updated] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8138:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
> ---
>
> Key: KAFKA-8138
> URL: https://issues.apache.org/jira/browse/KAFKA-8138
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchRecordLargerThanFetchMaxBytes/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT (truncated)
> {quote}[2019-03-20 16:10:19,759] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,760] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,963] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,964] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,975] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8084) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8084:
---
Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-8084
> URL: https://issues.apache.org/jira/browse/KAFKA-8084
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroupWithNoMembers/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:283){quote}
> STDOUT
> {quote}TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> COORDINATOR (ID)    ASSIGNMENT-STRATEGY STATE #MEMBERS
> localhost:45812 (0)                     Empty 0{quote}





[jira] [Resolved] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8138.

Resolution: Fixed

> Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
> ---
>
> Key: KAFKA-8138
> URL: https://issues.apache.org/jira/browse/KAFKA-8138
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test





[jira] [Commented] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2020-10-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211363#comment-17211363
 ] 

Bill Bejeck commented on KAFKA-8138:


No failures reported for this in over a year. As part of the 2.7.0 release 
process, I'm optimistically closing this ticket. If another failure occurs, we 
can re-open.

> Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
> ---
>
> Key: KAFKA-8138
> URL: https://issues.apache.org/jira/browse/KAFKA-8138
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1





  1   2   3   4   >