[jira] [Created] (KAFKA-12563) Something wrong with MM2 metrics

2021-03-25 Thread Bui Thanh MInh (Jira)
Bui Thanh MInh created KAFKA-12563:
--

 Summary: Something wrong with MM2 metrics
 Key: KAFKA-12563
 URL: https://issues.apache.org/jira/browse/KAFKA-12563
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.7.0
Reporter: Bui Thanh MInh
 Attachments: Screen Shot 2021-03-26 at 12.10.12.png

The metric 
_*`adt_2dc_c1_kafka_connect_mirror_source_connector_replication_latency_ms_avg`*_
 shows a very large latency value, even though the number of messages in the 
two DCs is the same.

View details in the attachment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #638

2021-03-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12452: Remove deprecated overloads of ProcessorContext#forward 
(#10300)

[github] MINOR: Use Java 11 for generating aggregated javadoc in release.py 
(#10399)


--
[...truncated 3.68 MB...]
PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime() STARTED

PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime() PASSED

PlaintextConsumerTest > testPerPartitionLagMetricsWhenReadCommitted() STARTED

PlaintextConsumerTest > testPerPartitionLagMetricsWhenReadCommitted() PASSED

PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup() STARTED

PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup() PASSED

PlaintextConsumerTest > testMaxPollRecords() STARTED

ConsumerBounceTest > testSeekAndCommitWithBrokerFailures() PASSED

ConsumerBounceTest > testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize() 
STARTED

PlaintextConsumerTest > testMaxPollRecords() PASSED

PlaintextConsumerTest > testAutoOffsetReset() STARTED

ConsumerBounceTest > testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize() 
PASSED

ConsumerBounceTest > testSubscribeWhenTopicUnavailable() STARTED

PlaintextConsumerTest > testAutoOffsetReset() PASSED

PlaintextConsumerTest > testPerPartitionLagWithMaxPollRecords() STARTED

PlaintextConsumerTest > testPerPartitionLagWithMaxPollRecords() PASSED

PlaintextConsumerTest > testFetchInvalidOffset() STARTED

PlaintextConsumerTest > testFetchInvalidOffset() PASSED

PlaintextConsumerTest > testAutoCommitIntercept() STARTED

PlaintextConsumerTest > testAutoCommitIntercept() PASSED

PlaintextConsumerTest > 
testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst() STARTED

ConsumerBounceTest > testSubscribeWhenTopicUnavailable() PASSED

ConsumerBounceTest > 
testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup() STARTED

PlaintextConsumerTest > 
testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst() PASSED

PlaintextConsumerTest > testCommitSpecifiedOffsets() STARTED

PlaintextConsumerTest > testCommitSpecifiedOffsets() PASSED

PlaintextConsumerTest > testPerPartitionLeadMetricsCleanUpWithSubscribe() 
STARTED

ConsumerBounceTest > 
testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup() PASSED

ConsumerBounceTest > testConsumptionWithBrokerFailures() STARTED

ConsumerBounceTest > testConsumptionWithBrokerFailures() SKIPPED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 STARTED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
PASSED

ControllerContextTest > testPreferredReplicaImbalanceMetric() STARTED

ControllerContextTest > testPreferredReplicaImbalanceMetric() PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() PASSED

ControllerContextTest > testReassignTo() STARTED

ControllerContextTest > testReassignTo() PASSED

ControllerContextTest > testPartitionReplicaAssignment() STARTED

ControllerContextTest > testPartitionReplicaAssignment() PASSED

ControllerContextTest > 
testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() STARTED

ControllerContextTest > 
testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() PASSED

ControllerContextTest > testReassignToIdempotence() STARTED

ControllerContextTest > testReassignToIdempotence() PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() 
PASSED

PlaintextConsumerTest > testPerPartitionLeadMetricsCleanUpWithSubscribe() PASSED

PlaintextConsumerTest > testCommitMetadata() STARTED

PlaintextConsumerTest > testCommitMetadata() PASSED

PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer() STARTED

PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer() PASSED

PlaintextConsumerTest > testRoundRobinAssignment() STARTED

PlaintextConsumerTest > testRoundRobinAssignment() PASSED

PlaintextConsumerTest > testPatternSubscription() STARTED

PlaintextConsumerTest > testPatternSubscription() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = 

[jira] [Created] (KAFKA-12562) Remove deprecated-overloaded "KafkaStreams#metadataForKey" and "KafkaStreams#store"

2021-03-25 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-12562:
-

 Summary: Remove deprecated-overloaded 
"KafkaStreams#metadataForKey" and "KafkaStreams#store"
 Key: KAFKA-12562
 URL: https://issues.apache.org/jira/browse/KAFKA-12562
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Guozhang Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12527) Remove deprecated "PartitionGrouper" interface

2021-03-25 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-12527.
---
Resolution: Duplicate

> Remove deprecated "PartitionGrouper" interface
> --
>
> Key: KAFKA-12527
> URL: https://issues.apache.org/jira/browse/KAFKA-12527
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #669

2021-03-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12452: Remove deprecated overloads of ProcessorContext#forward 
(#10300)


--
[...truncated 3.68 MB...]
PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled() 
PASSED

PartitionStateMachineTest > testNonexistentPartitionToNewPartitionTransition() 
STARTED

PartitionStateMachineTest > testNonexistentPartitionToNewPartitionTransition() 
PASSED

PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates() STARTED

PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates() PASSED

PartitionStateMachineTest > 
testOfflinePartitionToUncleanOnlinePartitionTransition() STARTED

PartitionStateMachineTest > 
testOfflinePartitionToUncleanOnlinePartitionTransition() PASSED

PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition() STARTED

PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition() PASSED

PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZooKeeperClientExceptionFromStateLookup()
 STARTED

PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZooKeeperClientExceptionFromStateLookup()
 PASSED

PartitionStateMachineTest > testOnlinePartitionToOfflineTransition() STARTED

PartitionStateMachineTest > testOnlinePartitionToOfflineTransition() PASSED

PartitionStateMachineTest > testNewPartitionToOfflinePartitionTransition() 
STARTED

PartitionStateMachineTest > testNewPartitionToOfflinePartitionTransition() 
PASSED

PartitionStateMachineTest > testUpdatingOfflinePartitionsCount() STARTED

PartitionStateMachineTest > testUpdatingOfflinePartitionsCount() PASSED

PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition() STARTED

PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition() PASSED

PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition() STARTED

PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition() PASSED

PartitionStateMachineTest > testOnlinePartitionToOnlineTransition() STARTED

PartitionStateMachineTest > testOnlinePartitionToOnlineTransition() PASSED

PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition() STARTED

PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition() PASSED

PartitionStateMachineTest > testNewPartitionToOnlinePartitionTransition() 
STARTED

PartitionStateMachineTest > testNewPartitionToOnlinePartitionTransition() PASSED

PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition() STARTED

PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition() PASSED

PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion() STARTED

PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion() PASSED

PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup() 
STARTED

PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup() PASSED

PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown() STARTED

PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown() PASSED

PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZooKeeperClientExceptionFromCreateStates()
 STARTED

PartitionStateMachineTest > 

[jira] [Resolved] (KAFKA-12526) Remove deprecated long ms overloads

2021-03-25 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-12526.
---
Resolution: Duplicate

> Remove deprecated long ms overloads
> ---
>
> Key: KAFKA-12526
> URL: https://issues.apache.org/jira/browse/KAFKA-12526
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12561) Fix flaky kafka.server.RaftClusterTest.testCreateClusterAndCreateListDeleteTopic()

2021-03-25 Thread Luke Chen (Jira)
Luke Chen created KAFKA-12561:
-

 Summary: Fix flaky 
kafka.server.RaftClusterTest.testCreateClusterAndCreateListDeleteTopic()
 Key: KAFKA-12561
 URL: https://issues.apache.org/jira/browse/KAFKA-12561
 Project: Kafka
  Issue Type: Test
Reporter: Luke Chen
Assignee: Luke Chen


{code:java}
java.util.concurrent.TimeoutException
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at 
kafka.server.RaftClusterTest.testCreateClusterAndCreateListDeleteTopic(RaftClusterTest.scala:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)



{code}
 

The same error also occurs in testCreateClusterAndCreateAndManyTopics.

 

https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk11/634/testReport/junit/kafka.server/RaftClusterTest/testCreateClusterAndCreateListDeleteTopic__/
https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk8/602/testReport/junit/kafka.server/RaftClusterTest/testCreateClusterAndCreateListDeleteTopic__/
https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk15/667/testReport/junit/kafka.server/RaftClusterTest/testCreateClusterAndCreateAndManyTopics__/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #139

2021-03-25 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-12508: Disable KIP-557 (#10397)


--
[...truncated 6.90 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 

Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #108

2021-03-25 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-12508: Disable KIP-557 (#10397)


--
[...truncated 3.18 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task 

[jira] [Created] (KAFKA-12560) Accidental delete of some log files kafka-authorizer.log and kafka-request.log can break topics in cluster

2021-03-25 Thread Rufus Skyfii (Jira)
Rufus Skyfii created KAFKA-12560:


 Summary: Accidental delete of some log files kafka-authorizer.log 
and kafka-request.log can break topics in cluster
 Key: KAFKA-12560
 URL: https://issues.apache.org/jira/browse/KAFKA-12560
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 2.2.1
 Environment: AWS EC2 i3.xlarge
Reporter: Rufus Skyfii


These two log files are zero bytes in size, were last created/modified a long 
time ago, and appear to be created on startup.

When disks fill up, admins sometimes go through old log files and delete them. 
In this case, these two .log files were also picked up. I acknowledge this was 
a mistake, since the cleanup should have filtered by extension, but nevertheless 
deleting a file under /var/log should not cause the process to have issues; if 
the file doesn't exist, the process should simply recreate it.

The observed effect on an HA Kafka cluster with 3 nodes was that the replicas 
went out of sync: one member was leader but not in sync with the rest of the 
nodes, some topics had two in-sync replicas instead of 3, and the leader showed 
a -1 status. Restarting Kafka did not resolve it; to recover, Kafka's data was 
flushed on the node that had many partitions with a -1 leader status and then 
the rest of the nodes were restarted. However, this should not happen even if 
someone deletes kafka-authorizer.log and kafka-request.log; Kafka should be more 
resistant to breaking down due to such file system changes.

 
{code:java}
root@ip-172-17-1-63:/var/log/kafka# find /var/log/kafka -mtime +30 -print | 
xargs ls -ll |grep '.log$'
-rw-r--r-- 1 root root 0 Nov 26 05:38 /var/log/kafka/kafka-authorizer.log
-rw-r--r-- 1 root root 0 Nov 26 05:38 /var/log/kafka/kafka-request.log
{code}
{code:java}
root@ip-172-17-1-63:/var/log/kafka# stat kafka-request.log
  File: kafka-request.log
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: 34h/52d     Inode: 768131      Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2020-11-26 05:38:04.724561118 +
Modify: 2020-11-26 05:38:04.724561118 +
Change: 2020-11-26 05:38:04.724561118 +
 Birth: -
root@ip-172-17-1-63:/var/log/kafka# stat kafka-authorizer.log
  File: kafka-authorizer.log
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: 34h/52d     Inode: 768132      Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2020-11-26 05:38:04.724561118 +

[jira] [Created] (KAFKA-12559) Add a top-level Streams config for bounding off-heap memory

2021-03-25 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12559:
--

 Summary: Add a top-level Streams config for bounding off-heap 
memory
 Key: KAFKA-12559
 URL: https://issues.apache.org/jira/browse/KAFKA-12559
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: A. Sophie Blee-Goldman


At the moment we provide an example of how to bound the memory usage of RocksDB 
in the [Memory 
Management|https://kafka.apache.org/27/documentation/streams/developer-guide/memory-mgmt.html#rocksdb]
 section of the docs. This requires implementing a custom RocksDBConfigSetter 
class and setting a number of RocksDB options for relatively advanced concepts 
and configurations. It seems a fair number of users either fail to find this or 
consider it to be only for more advanced use cases/users. But RocksDB can eat up 
a lot of off-heap memory, and it's not uncommon for users to come across a 
{{RocksDBException: Cannot allocate memory}}.
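
For context, here is a minimal sketch of the kind of custom config setter the 
current docs ask users to write. It loosely follows the memory-management docs; 
the class name and byte sizes below are placeholders, not recommendations:
{code:java}
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

// Illustrative sketch only; sizes are arbitrary placeholder values.
public class BoundedMemoryConfigSetter implements RocksDBConfigSetter {

    // Static so the bound is shared across all RocksDB stores in the instance.
    private static final Cache CACHE = new LRUCache(256 * 1024 * 1024L);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
        new WriteBufferManager(64 * 1024 * 1024L, CACHE);

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig =
            (BlockBasedTableConfig) options.tableFormatConfig();
        tableConfig.setBlockCache(CACHE);                     // data + index/filter blocks
        tableConfig.setCacheIndexAndFilterBlocks(true);
        options.setWriteBufferManager(WRITE_BUFFER_MANAGER);  // memtables charged to CACHE
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // CACHE and WRITE_BUFFER_MANAGER are shared statics, so nothing to close per store.
    }
}
{code}
The setter is registered via the existing {{rocksdb.config.setter}} Streams 
config; the point of this ticket is that most users should not have to write 
any of this just to get a memory bound.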

It would probably be a much better user experience if we implemented this 
memory bound out-of-the-box and just gave users a top-level StreamsConfig to 
tune the off-heap memory given to RocksDB, like we have for on-heap cache 
memory with cache.max.bytes.buffering. More advanced users can continue to 
fine-tune their memory bounding and apply other configs with a custom config 
setter, while new or more casual users can cap the off-heap memory without 
getting their hands dirty with RocksDB.

I would propose to add the following top-level config:

rocksdb.max.bytes.off.heap: medium priority, default to -1 (unbounded), valid 
values are [0, inf]

I'd also want to consider adding a second, lower priority top-level config to 
give users a knob for adjusting how much of that total off-heap memory goes to 
the block cache + index/filter blocks, and how much of it is afforded to the 
write buffers. I'm struggling to come up with a good name for this config, but 
it would be something like

rocksdb.memtable.to.block.cache.off.heap.memory.ratio: low priority, default to 
0.5, valid values are [0, 1]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12508) Emit-on-change tables may lose updates on error or restart in at_least_once

2021-03-25 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-12508.
--
Resolution: Fixed

Disabled KIP-557 entirely.

> Emit-on-change tables may lose updates on error or restart in at_least_once
> ---
>
> Key: KAFKA-12508
> URL: https://issues.apache.org/jira/browse/KAFKA-12508
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0, 2.7.0, 2.6.1
>Reporter: Nico Habermann
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.8.0, 2.7.1, 2.6.2
>
>
> [KIP-557|https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams]
>  added emit-on-change semantics to KTables that suppress updates for 
> duplicate values.
> However, this may cause data loss in at_least_once topologies when records 
> are retried from the last commit due to an error / restart / etc.
>  
> Consider the following example:
> {code:java}
> streams.table(source, materialized)
> .toStream()
> .map(mayThrow())
> .to(output){code}
>  
>  # Record A gets read
>  # Record A is stored in the table
>  # The update for record A is forwarded through the topology
>  # Map() throws (or alternatively, any restart while the forwarded update was 
> still being processed and not yet produced to the output topic)
>  # The stream is restarted and "retries" from the last commit
>  # Record A gets read again
>  # The table will discard the update for record A because
>  ## The value is the same
>  ## The timestamp is the same
>  # Eventually the stream will commit
>  # There is absolutely no output for Record A even though we're running in 
> at_least_once
>  
> This behaviour does not seem intentional. [The emit-on-change logic 
> explicitly forwards records that have the same value and an older 
> timestamp.|https://github.com/apache/kafka/blob/367eca083b44261d4e5fa8aa61b7990a8b35f8b0/streams/src/main/java/org/apache/kafka/streams/state/internals/ValueAndTimestampSerializer.java#L50]
> This logic should probably be changed to also forward updates that have an 
> older *or equal* timestamp.
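
For illustration only (this is not the actual ValueAndTimestampSerializer code), 
the proposed change boils down to a forwarding check along these lines:
{code:java}
// Illustrative sketch of the emit-on-change decision, not Kafka's implementation.
static boolean shouldForward(final byte[] oldValue, final long oldTimestamp,
                             final byte[] newValue, final long newTimestamp) {
    // A genuinely changed value is always forwarded.
    if (!java.util.Arrays.equals(oldValue, newValue)) {
        return true;
    }
    // Current behaviour forwards a duplicate only if its timestamp is strictly older.
    // The proposed fix forwards on "older OR equal", so a retried record with the
    // same value and same timestamp is not silently dropped under at_least_once.
    return newTimestamp <= oldTimestamp;
}
{code}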



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.7.1 release

2021-03-25 Thread John Roesler
Hi Mickael,

I hope all is well with you.

FYI, I have just pushed the fix to this blocker:
https://issues.apache.org/jira/browse/KAFKA-12508
to the 2.7 branch. I'll resolve the ticket after I finish
pushing to 2.6.

Thanks,
John

On Fri, 2021-02-19 at 05:29 -0800, Ismael Juma wrote:
> Sounds good, I was just curious if you had seen problems in the wild. The
> following two are important too:
> 
> 1. https://issues.apache.org/jira/browse/KAFKA-10793
> 2. https://issues.apache.org/jira/browse/KAFKA-12193
> 
> Ismael
> 
> 
> On Fri, Feb 19, 2021 at 3:24 AM Mickael Maison 
> wrote:
> 
> > Hi Ismael,
> > 
> > The main motivation was to carry on releasing a bugfix 2-3 months
> > after a release. The project has followed that process for a while now
> > and, as far as I can tell, it works well.
> > 
> > That said, looking at the issues, there are a few interesting fixes:
> > - KAFKA-12323 is a regression in Streams 2.7.0
> > - KAFKA-12152 is an important idempotent producer fix
> > - KAFKA-12310 updates ZooKeeper to 3.5.9
> > 
> > Thanks
> > 
> > On Thu, Feb 18, 2021 at 5:56 PM Ismael Juma  wrote:
> > > 
> > > Thanks for volunteering Mickael. Are there any critical bugs in 2.7.0 so
> > > far that motivated the release? There doesn't have to be, but I'd be
> > > interested to know if there are.
> > > 
> > > Ismael
> > > 
> > > On Thu, Feb 18, 2021 at 7:58 AM Mickael Maison 
> > wrote:
> > > 
> > > > Hi,
> > > > 
> > > > I'd like to volunteer to be the release manager for the next bugfix
> > > > release, 2.7.1.
> > > > 
> > > > I created the release plan on the wiki:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.7.1
> > > > 
> > > > Thanks
> > > > 
> > 




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #609

2021-03-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12508: Disable KIP-557 (#10397)


--
[...truncated 3.68 MB...]
LogTest > testDeleteSnapshotsOnIncrementLogStartOffset() STARTED

LogTest > testDeleteSnapshotsOnIncrementLogStartOffset() PASSED

LogTest > testCorruptIndexRebuild() STARTED

LogTest > testCorruptIndexRebuild() PASSED

LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted() STARTED

LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted() PASSED

LogTest > testReadWithTooSmallMaxLength() STARTED

LogTest > testReadWithTooSmallMaxLength() PASSED

LogTest > testOverCompactedLogRecovery() STARTED

LogTest > testOverCompactedLogRecovery() PASSED

LogTest > testBogusIndexSegmentsAreRemoved() STARTED

LogTest > testBogusIndexSegmentsAreRemoved() PASSED

LogTest > testLeaderEpochCacheClearedAfterStaticMessageFormatDowngrade() STARTED

LogTest > testLeaderEpochCacheClearedAfterStaticMessageFormatDowngrade() PASSED

LogTest > testCompressedMessages() STARTED

LogTest > testCompressedMessages() PASSED

LogTest > testAppendMessageWithNullPayload() STARTED

LogTest > testAppendMessageWithNullPayload() PASSED

LogTest > testCorruptLog() STARTED

LogTest > testCorruptLog() PASSED

LogTest > testLogRecoversToCorrectOffset() STARTED

LogTest > testLogRecoversToCorrectOffset() PASSED

LogTest > testReopenThenTruncate() STARTED

LogTest > testReopenThenTruncate() PASSED

LogTest > testZombieCoordinatorFenced() STARTED

LogTest > testZombieCoordinatorFenced() PASSED

LogTest > testOldProducerEpoch() STARTED

LogTest > testOldProducerEpoch() PASSED

LogTest > testProducerSnapshotsRecoveryAfterUncleanShutdownV1() STARTED

LogTest > testProducerSnapshotsRecoveryAfterUncleanShutdownV1() PASSED

LogTest > testDegenerateSegmentSplit() STARTED

LogTest > testDegenerateSegmentSplit() PASSED

LogTest > testParseTopicPartitionNameForMissingPartition() STARTED

LogTest > testParseTopicPartitionNameForMissingPartition() PASSED

LogTest > testParseTopicPartitionNameForEmptyName() STARTED

LogTest > testParseTopicPartitionNameForEmptyName() PASSED

LogTest > testOffsetSnapshot() STARTED

LogTest > testOffsetSnapshot() PASSED

LogTest > testOpenDeletesObsoleteFiles() STARTED

LogTest > testOpenDeletesObsoleteFiles() PASSED

LogTest > shouldUpdateOffsetForLeaderEpochsWhenDeletingSegments() STARTED

LogTest > shouldUpdateOffsetForLeaderEpochsWhenDeletingSegments() PASSED

LogTest > testLogDeleteDirName() STARTED

LogTest > testLogDeleteDirName() PASSED

LogTest > testDeleteOldSegments() STARTED

LogTest > testDeleteOldSegments() PASSED

LogTest > testRebuildTimeIndexForOldMessages() STARTED

LogTest > testRebuildTimeIndexForOldMessages() PASSED

LogTest > testProducerIdMapTruncateTo() STARTED

LogTest > testProducerIdMapTruncateTo() PASSED

LogTest > testTakeSnapshotOnRollAndDeleteSnapshotOnRecoveryPointCheckpoint() 
STARTED

LogTest > testTakeSnapshotOnRollAndDeleteSnapshotOnRecoveryPointCheckpoint() 
PASSED

LogTest > testLogEndLessThanStartAfterReopen() STARTED

LogTest > testLogEndLessThanStartAfterReopen() PASSED

LogTest > testLogRecoversForLeaderEpoch() STARTED

LogTest > testLogRecoversForLeaderEpoch() PASSED

LogTest > testRetentionDeletesProducerStateSnapshots() STARTED

LogTest > testRetentionDeletesProducerStateSnapshots() PASSED

LogTest > testLoadingLogDeletesProducerStateSnapshotsPastLogEndOffset() STARTED

LogTest > testLoadingLogDeletesProducerStateSnapshotsPastLogEndOffset() PASSED

LogTest > testRetentionIdempotency() STARTED

LogTest > testRetentionIdempotency() PASSED

LogTest > testWriteLeaderEpochCheckpointAfterDirectoryRename() STARTED

LogTest > testWriteLeaderEpochCheckpointAfterDirectoryRename() PASSED

LogTest > testOverCompactedLogRecoveryMultiRecord() STARTED

LogTest > testOverCompactedLogRecoveryMultiRecord() PASSED

LogTest > testSizeBasedLogRoll() STARTED

LogTest > testSizeBasedLogRoll() PASSED

LogTest > testRebuildProducerIdMapWithCompactedData() STARTED

LogTest > testRebuildProducerIdMapWithCompactedData() PASSED

LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize() STARTED

LogTest > shouldNotDeleteSizeBasedSegmentsWhenUnderRetentionSize() PASSED

LogTest > testTransactionIndexUpdatedThroughReplication() STARTED

LogTest > testTransactionIndexUpdatedThroughReplication() PASSED

LogTest > testTimeBasedLogRollJitter() STARTED

LogTest > testTimeBasedLogRollJitter() PASSED

LogTest > testParseTopicPartitionName() STARTED

LogTest > testParseTopicPartitionName() PASSED

LogTest > testEndTxnWithFencedProducerEpoch() STARTED

LogTest > testEndTxnWithFencedProducerEpoch() PASSED

LogTest > testRecoveryOfSegmentWithOffsetOverflow() STARTED

LogTest > testRecoveryOfSegmentWithOffsetOverflow() PASSED

LogTest > testRecoverAfterNonMonotonicCoordinatorEpochWrite() STARTED

LogTest > testRecoverAfterNonMonotonicCoordinatorEpochWrite() PASSED

LogTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #637

2021-03-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12508: Disable KIP-557 (#10397)


--
[...truncated 3.68 MB...]
ControllerEventManagerTest > testSuccessfulEvent() PASSED

ControllerEventManagerTest > testMetricsCleanedOnClose() STARTED

ControllerEventManagerTest > testMetricsCleanedOnClose() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition() PASSED

ControllerChannelManagerTest > testUpdateMetadataInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion() PASSED

ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testStopReplicaInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() STARTED

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() PASSED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() PASSED

FeatureZNodeTest > testEncodeDecode() STARTED

FeatureZNodeTest > testEncodeDecode() PASSED

FeatureZNodeTest > testDecodeSuccess() STARTED

FeatureZNodeTest > testDecodeSuccess() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPaths() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPaths() PASSED

ExtendedAclStoreTest > shouldRoundTripChangeNode() STARTED

ExtendedAclStoreTest > shouldRoundTripChangeNode() PASSED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() STARTED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() PASSED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() STARTED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() PASSED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #668

2021-03-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12508: Disable KIP-557 (#10397)


--
[...truncated 3.70 MB...]

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
STARTED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
PASSED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() STARTED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
PASSED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() STARTED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() PASSED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
STARTED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
PASSED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
STARTED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
PASSED

AuthorizerIntegrationTest > testListTransactionsAuthorization() STARTED

AuthorizerIntegrationTest > testListTransactionsAuthorization() PASSED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() STARTED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() PASSED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() STARTED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() PASSED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
STARTED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
PASSED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testClose() STARTED

SslProducerSendTest > testClose() PASSED

SslProducerSendTest > testFlush() STARTED

SslProducerSendTest > testFlush() PASSED

SslProducerSendTest > testSendToPartition() STARTED

SslProducerSendTest > testSendToPartition() PASSED

SslProducerSendTest > testSendOffset() STARTED

SslProducerSendTest > testSendOffset() PASSED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() PASSED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() STARTED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() PASSED

ProducerCompressionTest > [1] compression=none STARTED

ProducerCompressionTest > [1] compression=none PASSED

ProducerCompressionTest > [2] compression=gzip STARTED

ProducerCompressionTest > [2] compression=gzip PASSED

ProducerCompressionTest > [3] compression=snappy STARTED

ProducerCompressionTest > [3] compression=snappy PASSED

ProducerCompressionTest > [4] compression=lz4 STARTED

ProducerCompressionTest > [4] compression=lz4 PASSED

ProducerCompressionTest > [5] compression=zstd STARTED

ProducerCompressionTest > [5] compression=zstd PASSED

MetricsTest > testMetrics() STARTED

MetricsTest > testMetrics() PASSED

ProducerFailureHandlingTest > testCannotSendToInternalTopic() STARTED

ProducerFailureHandlingTest > testCannotSendToInternalTopic() PASSED

ProducerFailureHandlingTest > testTooLargeRecordWithAckOne() STARTED

ProducerFailureHandlingTest > testTooLargeRecordWithAckOne() PASSED

ProducerFailureHandlingTest > testWrongBrokerList() STARTED

ProducerFailureHandlingTest > testWrongBrokerList() PASSED

ProducerFailureHandlingTest > testNotEnoughReplicas() STARTED

ProducerFailureHandlingTest > testNotEnoughReplicas() PASSED

ProducerFailureHandlingTest > testResponseTooLargeForReplicationWithAckAll() 
STARTED

ProducerFailureHandlingTest > testResponseTooLargeForReplicationWithAckAll() 

Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #80

2021-03-25 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-12508: Disable KIP-557 (#10397)


--
[...truncated 3.62 MB...]

BrokerEndPointTest > testFromJsonV1() STARTED

BrokerEndPointTest > testFromJsonV1() PASSED

BrokerEndPointTest > testFromJsonV2() STARTED

BrokerEndPointTest > testFromJsonV2() PASSED

BrokerEndPointTest > testFromJsonV3() STARTED

BrokerEndPointTest > testFromJsonV3() PASSED

BrokerEndPointTest > testFromJsonV5() STARTED

BrokerEndPointTest > testFromJsonV5() PASSED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() STARTED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() PASSED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() STARTED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() PASSED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
STARTED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
PASSED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() STARTED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() PASSED

JsonValueTest > testJsonObjectIterator() STARTED

JsonValueTest > testJsonObjectIterator() PASSED

JsonValueTest > testDecodeLong() STARTED

JsonValueTest > testDecodeLong() PASSED

JsonValueTest > testAsJsonObject() STARTED

JsonValueTest > testAsJsonObject() PASSED

JsonValueTest > testDecodeDouble() STARTED

JsonValueTest > testDecodeDouble() PASSED

JsonValueTest > testDecodeOption() STARTED

JsonValueTest > testDecodeOption() PASSED

JsonValueTest > testDecodeString() STARTED

JsonValueTest > testDecodeString() PASSED

JsonValueTest > testJsonValueToString() STARTED

JsonValueTest > testJsonValueToString() PASSED

JsonValueTest > testAsJsonObjectOption() STARTED

JsonValueTest > testAsJsonObjectOption() PASSED

JsonValueTest > testAsJsonArrayOption() STARTED

JsonValueTest > testAsJsonArrayOption() PASSED

JsonValueTest > testAsJsonArray() STARTED

JsonValueTest > testAsJsonArray() PASSED

JsonValueTest > testJsonValueHashCode() STARTED

JsonValueTest > testJsonValueHashCode() PASSED

JsonValueTest > testDecodeInt() STARTED

JsonValueTest > testDecodeInt() PASSED

JsonValueTest > testDecodeMap() STARTED

JsonValueTest > testDecodeMap() PASSED

JsonValueTest > testDecodeSeq() STARTED

JsonValueTest > testDecodeSeq() PASSED

JsonValueTest > testJsonObjectGet() STARTED

JsonValueTest > testJsonObjectGet() PASSED

JsonValueTest > testJsonValueEquals() STARTED

JsonValueTest > testJsonValueEquals() PASSED

JsonValueTest > testJsonArrayIterator() STARTED

JsonValueTest > testJsonArrayIterator() PASSED

JsonValueTest > testJsonObjectApply() STARTED

JsonValueTest > testJsonObjectApply() PASSED

JsonValueTest > testDecodeBoolean() STARTED

JsonValueTest > testDecodeBoolean() PASSED

PasswordEncoderTest > testEncoderConfigChange() STARTED

PasswordEncoderTest > testEncoderConfigChange() PASSED

PasswordEncoderTest > testEncodeDecodeAlgorithms() STARTED

PasswordEncoderTest > testEncodeDecodeAlgorithms() PASSED

PasswordEncoderTest > testEncodeDecode() STARTED

PasswordEncoderTest > testEncodeDecode() PASSED

ThrottlerTest > testThrottleDesiredRate() STARTED

ThrottlerTest > testThrottleDesiredRate() PASSED

LoggingTest > testLoggerLevelIsResolved() STARTED

LoggingTest > testLoggerLevelIsResolved() PASSED

LoggingTest > testLog4jControllerIsRegistered() STARTED

LoggingTest > testLog4jControllerIsRegistered() PASSED

LoggingTest > testTypeOfGetLoggers() STARTED

LoggingTest > testTypeOfGetLoggers() PASSED

LoggingTest > testLogName() STARTED

LoggingTest > testLogName() PASSED

LoggingTest > testLogNameOverride() STARTED

LoggingTest > testLogNameOverride() PASSED

TimerTest > testAlreadyExpiredTask() STARTED

TimerTest > testAlreadyExpiredTask() PASSED

TimerTest > testTaskExpiration() STARTED

TimerTest > testTaskExpiration() PASSED

ReplicationUtilsTest > testUpdateLeaderAndIsr() STARTED

ReplicationUtilsTest > testUpdateLeaderAndIsr() PASSED

TopicFilterTest > testIncludeLists() STARTED

TopicFilterTest > testIncludeLists() PASSED

RaftManagerTest > testShutdownIoThread() STARTED

RaftManagerTest > testShutdownIoThread() PASSED

RaftManagerTest > testUncaughtExceptionInIoThread() STARTED

RaftManagerTest > testUncaughtExceptionInIoThread() PASSED

RequestChannelTest > testNonAlterRequestsNotTransformed() STARTED

RequestChannelTest > testNonAlterRequestsNotTransformed() PASSED

RequestChannelTest > testAlterRequests() STARTED

RequestChannelTest > testAlterRequests() PASSED

RequestChannelTest > testJsonRequests() STARTED

RequestChannelTest > testJsonRequests() PASSED

RequestChannelTest > testIncrementalAlterRequests() STARTED

RequestChannelTest > testIncrementalAlterRequests() PASSED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 STARTED

ControllerContextTest > 

Re: [VOTE] KIP-712: Shallow Mirroring

2021-03-25 Thread Henry Cai
I am actually fine with that.  I think only the change in MM1 will be
deprecated which is not a big part of the PR.  Other people can see what
was implemented in this PR and can port it to other use cases.

On Thu, Mar 25, 2021 at 12:11 PM Ryanne Dolan  wrote:

> +1 from me. Super looking forward to this.
>
> But N.B. it looks like KIP-720 will pass, which deprecates MM1. I don't
> think there is any reason both KIPs can't pass, but it looks like any
> classes introduced in KIP-712 would get immediately deprecated.
>
> Ryanne
>
> On Thu, Mar 25, 2021, 12:34 PM Henry Cai 
> wrote:
>
> > Hi,
> >
> >
> > I'd like to start a vote on KIP-712: Shallow Mirroring.
> >
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-712%3A+Shallow+Mirroring
> >
> >
> > Thanks
> >
>


[jira] [Created] (KAFKA-12558) MM2 may not sync partition offsets correctly

2021-03-25 Thread Alan (Jira)
Alan created KAFKA-12558:


 Summary: MM2 may not sync partition offsets correctly
 Key: KAFKA-12558
 URL: https://issues.apache.org/jira/browse/KAFKA-12558
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.6.1, 2.7.0
Reporter: Alan


There is a race condition in {{MirrorSourceTask}} where certain partition 
offsets may never be sent. The bug occurs when the [outstandingOffsetSync 
semaphore is 
full|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L207].
 In this case, the sendOffsetSync [will silently 
fail|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L207].

This failure is normally acceptable since offset syncs are retried frequently. 
However, {{maybeSyncOffsets}} has a bug where it will [mutate the partition 
state|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L199]
 prior to confirming the result of {{sendOffsetSync}}. The end result is that 
the partition state is mutated prematurely, which prevents future offset syncs 
from recovering.

Since {{MAX_OUTSTANDING_OFFSET_SYNCS}} is 10, this bug happens when you assign 
more than 10 partitions to each task.

In my test cases where I had over 100 partitions per task, the majority of the 
offsets were wrong. Here's an example of such a failure. 
https://issues.apache.org/jira/browse/KAFKA-12468?focusedCommentId=17308308&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17308308

During my troubleshooting, I customized {{MirrorSourceTask}} to confirm that all 
partitions with the wrong offset were failing the initial semaphore acquisition. 
The condition [can be trapped 
here|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L208].

*Possible Fix:*

A possible fix is to create a {{shouldUpdate}} method in {{PartitionState}}. 
This method should be read-only and return true if {{sendOffsetSync}} is 
needed. Only once {{sendOffsetSync}} succeeds should {{update}} be called.

Here's some pseudocode:
{code:java}
private void maybeSyncOffsets(TopicPartition topicPartition, long upstreamOffset,
                              long downstreamOffset) {
    PartitionState partitionState =
        partitionStates.computeIfAbsent(topicPartition, x -> new PartitionState(maxOffsetLag));
    // Check first (read-only), and only mutate the state once the sync was actually sent.
    if (partitionState.shouldUpdate(upstreamOffset, downstreamOffset)) {
        if (sendOffsetSync(topicPartition, upstreamOffset, downstreamOffset)) {
            partitionState.update(upstreamOffset, downstreamOffset);
        }
    }
}
{code}
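
For completeness, a sketch of what the read-only check might look like is below. 
The field names and lag heuristic here are hypothetical and do not reflect the 
actual MirrorSourceTask internals:
{code:java}
// Illustrative only: hypothetical PartitionState with a side-effect-free check.
static class PartitionState {
    private final long maxOffsetLag;
    private long lastSyncUpstreamOffset = -1L;
    private long lastSyncDownstreamOffset = -1L;

    PartitionState(final long maxOffsetLag) {
        this.maxOffsetLag = maxOffsetLag;
    }

    // Read-only: decide whether an offset sync is needed without mutating state.
    boolean shouldUpdate(final long upstreamOffset, final long downstreamOffset) {
        return lastSyncUpstreamOffset < 0                                // never synced yet
            || upstreamOffset - lastSyncUpstreamOffset >= maxOffsetLag   // fell too far behind
            || upstreamOffset - lastSyncUpstreamOffset
                != downstreamOffset - lastSyncDownstreamOffset;          // offset delta changed
    }

    // Called only after sendOffsetSync() has actually succeeded.
    void update(final long upstreamOffset, final long downstreamOffset) {
        lastSyncUpstreamOffset = upstreamOffset;
        lastSyncDownstreamOffset = downstreamOffset;
    }
}
{code}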
 

*Workaround:*

For those who are experiencing this issue, the workaround is to make sure each 
task is assigned no more than 10 partitions; set your `tasks.max` value 
accordingly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-712: Shallow Mirroring

2021-03-25 Thread Ryanne Dolan
+1 from me. Super looking forward to this.

But N.B. it looks like KIP-720 will pass, which deprecates MM1. I don't
think there is any reason both KIPs can't pass, but it looks like any
classes introduced in KIP-712 would get immediately deprecated.

Ryanne

On Thu, Mar 25, 2021, 12:34 PM Henry Cai  wrote:

> Hi,
>
>
> I'd like to start a vote on KIP-712: Shallow Mirroring.
>
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-712%3A+Shallow+Mirroring
>
>
> Thanks
>


[jira] [Created] (KAFKA-12557) org.apache.kafka.clients.admin.KafkaAdminClientTest#testClientSideTimeoutAfterFailureToReceiveResponse intermittently hangs indefinitely

2021-03-25 Thread John Roesler (Jira)
John Roesler created KAFKA-12557:


 Summary: 
org.apache.kafka.clients.admin.KafkaAdminClientTest#testClientSideTimeoutAfterFailureToReceiveResponse
 intermittently hangs indefinitely
 Key: KAFKA-12557
 URL: https://issues.apache.org/jira/browse/KAFKA-12557
 Project: Kafka
  Issue Type: Bug
  Components: clients, core
Reporter: John Roesler
Assignee: John Roesler
 Fix For: 3.0.0, 2.8.0


While running tests for [https://github.com/apache/kafka/pull/10397], I got a 
test timeout under Java 8.

I ran it locally via `./gradlew clean -PscalaVersion=2.12 :clients:unitTest 
--profile --no-daemon --continue 
-PtestLoggingEvents=started,passed,skipped,failed -PignoreFailures=true 
-PmaxTestRetries=1 -PmaxTestRetryFailures=5` (copied from the Jenkins log) and 
was able to determine that the hanging test is:

org.apache.kafka.clients.admin.KafkaAdminClientTest#testClientSideTimeoutAfterFailureToReceiveResponse

It's odd, but it hangs most times on my branch, and I haven't seen it hang on 
trunk, despite the fact that my PR doesn't touch the client or core code at all.

Some debugging reveals that when the client is hanging, it's because the 
listTopics request is still sitting in its pendingRequests queue, and if I 
understand the test setup correctly, it would never be completed, since we will 
never advance time or queue up a metadata response for it.

I figure a reasonable blanket response to this is just to make sure that the 
test harness will close the admin client eagerly instead of lazily.
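
For illustration, "closing eagerly" here just means closing the admin client 
deterministically at the end of each test instead of leaving it to later cleanup. A minimal 
sketch (the bootstrap address is a placeholder and this is not the actual KafkaAdminClientTest 
harness):
{code:java}
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

// Sketch: close the admin client with a bounded timeout in a finally block, so a request
// stuck in pendingRequests cannot keep the test JVM waiting indefinitely.
public class EagerCloseSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        Admin admin = Admin.create(props);
        try {
            admin.listTopics(); // the result future may never complete in a mocked environment
        } finally {
            admin.close(Duration.ofSeconds(10)); // bounded close instead of waiting forever
        }
    }
}
{code}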



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Is it safe to delete old log segments manually?

2021-03-25 Thread Peter Bukowinski
In this case, yes, for any given topic-partition on the broker, you should be 
able to delete the oldest log segment, its associated index and timeindex 
files, and the snapshot file (which will be recreated on startup) in order to 
gain some free space.
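
Purely as an illustration of the file set involved (the log directory, topic, and base offset 
below are hypothetical, and the broker must be stopped first), removing the oldest segment of 
one partition amounts to deleting its .log/.index/.timeindex files plus the snapshot files:
{code:java}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical example only: broker stopped, log dir /var/kafka-logs,
// oldest segment of my-topic-0 has base offset 0.
public class DeleteOldestSegment {
    public static void main(String[] args) throws IOException {
        Path partitionDir = Paths.get("/var/kafka-logs/my-topic-0");
        String baseOffset = "00000000000000000000";
        for (String suffix : new String[] {".log", ".index", ".timeindex"}) {
            Files.deleteIfExists(partitionDir.resolve(baseOffset + suffix));
        }
        // Producer-state snapshot files are rebuilt on startup, so they can be removed too.
        try (DirectoryStream<Path> snapshots = Files.newDirectoryStream(partitionDir, "*.snapshot")) {
            for (Path snapshot : snapshots) {
                Files.delete(snapshot);
            }
        }
    }
}
{code}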

—
Peter Bukowinski

> On Mar 25, 2021, at 11:08 AM, Sankalp Bhatia  
> wrote:
> 
> Thank you for the response Peter. However, for us all the brokers are 
> currently offline. So if I delete the entire topic-partition directory in one 
> of the brokers, the first broker would start with no means to replicate the 
> data which we just deleted. What are your thoughts on this? Do you think this 
> approach will work safe in our case? 
> 
> Thanks,
> Sankalp
> 
> On Thu, 25 Mar 2021 at 21:09, Peter Bukowinski  > wrote:
> Hi Sankalp,
> 
> As long as you have replication, I’ve found it is safer to delete entire 
> topic-partition directories than to delete individual log segments from 
> them. For one, you get back more space. Second, you don’t have to worry about 
> metadata corruption.
> 
> When I’ve run out of disk space in the past, the first thing I did was reduce 
> topic retention where I could, waited for the log cleanup routines to run, 
> then I looked for and deleted associated topic partition directories on the 
> brokers with filled disks before starting kafka on them. When the brokers 
> rejoined the cluster, they started catching up on the deleted topic-partition 
> directories.
> 
> --
> Peter Bukowinski
> 
> > On Mar 25, 2021, at 8:00 AM, Sankalp Bhatia  > > wrote:
> > 
> > Hi All,
> > 
> > Brokers in one of our Apache Kafka clusters are continuously crashing as
> > they have run out of disk space. As per my understanding, reducing the
> > value of retention.ms  and retention.bytes properties 
> > will not work because
> > the broker is crashing before the log-retention thread can be scheduled (
> > link
> > <https://github.com/apache/kafka/blob/3eaf44ba8ea26a7a820894390e8877d404ddd5a2/core/src/main/scala/kafka/log/LogManager.scala#L394-L398>
> > ).
> > One option we are exploring is if we can manually delete some of the old
> > segment files to make some space in our data disk for the broker to startup
> > while reducing the retention.ms  config at the same 
> > time. There is an old
> > email thread (link
> > <https://mail-archives.apache.org/mod_mbox/kafka-users/201403.mbox/%3CCAOG_4Qbwx44T-=vrpkvqgrum8lpmdzl2bxxrgz5c9h1_noh...@mail.gmail.com%3E>)
> > which suggests it is safe to do so, but we want to understand if there have
> > been recent changes to topic-partition metadata which we might end up
> > corrupting if we try this? If so, are there any tips to get around this
> > issue?
> > 
> > Thanks,
> > Sankalp



Re: Is it safe to delete old log segments manually?

2021-03-25 Thread Sankalp Bhatia
Thank you for the response Peter. However, for us all the brokers are
currently offline. So if I delete the entire topic-partition directory in
one of the brokers, the first broker would start with no means to replicate
the data which we just deleted. What are your thoughts on this? Do you
think this approach will work safely in our case?

Thanks,
Sankalp

On Thu, 25 Mar 2021 at 21:09, Peter Bukowinski  wrote:

> Hi Sankalp,
>
> As long as you have replication, I’ve found it is safer to delete entire
> topic-partition directories than to delete individual log segments
> from them. For one, you get back more space. Second, you don’t have to
> worry about metadata corruption.
>
> When I’ve run out of disk space in the past, the first thing I did was
> reduce topic retention where I could, waited for the log cleanup routines
> to run, then I looked for and deleted associated topic partition
> directories on the brokers with filled disks before starting kafka on them.
> When the brokers rejoined the cluster, they started catching up on the
> deleted topic-partition directories.
>
> --
> Peter Bukowinski
>
> > On Mar 25, 2021, at 8:00 AM, Sankalp Bhatia 
> wrote:
> >
> > Hi All,
> >
> > Brokers in one of our Apache Kafka clusters are continuously crashing as
> > they have run out of disk space. As per my understanding, reducing the
> > value of retention.ms and retention.bytes properties will not work
> because
> > the broker is crashing before the log-retention thread can be scheduled (
> > link
> > <
> https://github.com/apache/kafka/blob/3eaf44ba8ea26a7a820894390e8877d404ddd5a2/core/src/main/scala/kafka/log/LogManager.scala#L394-L398
> >
> > ).
> > One option we are exploring is if we can manually delete some of the old
> > segment files to make some space in our data disk for the broker to
> startup
> > while reducing the retention.ms config at the same time. There is an old
> > email thread (link
> > <
> https://mail-archives.apache.org/mod_mbox/kafka-users/201403.mbox/%3CCAOG_4Qbwx44T-=vrpkvqgrum8lpmdzl2bxxrgz5c9h1_noh...@mail.gmail.com%3E
> >)
> > which suggests it is safe to do so, but we want to understand if there
> have
> > been recent changes to topic-partition metadata which we might end up
> > corrupting if we try this? If so, are there any tips to get around this
> > issue?
> >
> > Thanks,
> > Sankalp
>


[VOTE] KIP-712: Shallow Mirroring

2021-03-25 Thread Henry Cai
Hi,


I'd like to start a vote on KIP-712: Shallow Mirroring.


https://cwiki.apache.org/confluence/display/KAFKA/KIP-712%3A+Shallow+Mirroring


Thanks


[jira] [Created] (KAFKA-12556) Add --under-preferred-replica-partitions option to describe topics command

2021-03-25 Thread Wenbing Shen (Jira)
Wenbing Shen created KAFKA-12556:


 Summary: Add --under-preferred-replica-partitions option to 
describe topics command
 Key: KAFKA-12556
 URL: https://issues.apache.org/jira/browse/KAFKA-12556
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Wenbing Shen


Whether the preferred replica is the partition leader directly affects the 
external output traffic of the broker. When the preferred replica of all 
partitions becomes the leader, the external output traffic of the broker will 
be in a balanced state. When there are a large number of partition leaders that 
are not preferred replicas, this state of balance is destroyed.

Currently, the controller will periodically check the unbalanced ratio of the 
partition preferred replicas (if enabled) to trigger the preferred replica 
election, or manually trigger the election through the kafka-leader-election 
tool. However, if we want to know which partitions currently have a non-preferred 
leader, we need to look it up in the controller log or work it out ourselves from 
the topic details list.

We can add the --under-preferred-replica-partitions option to the TopicCommand 
describe topics command to query the list of partitions in the current cluster 
whose leader is not the preferred replica.
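
Until such an option exists, the same information can already be derived from the Admin API; 
here is a rough sketch (the bootstrap address is a placeholder, and this is not part of the 
proposed TopicCommand change):
{code:java}
import java.util.*;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.TopicPartitionInfo;

// Sketch: list partitions whose current leader is not the preferred (first-listed) replica,
// which is what --under-preferred-replica-partitions would report.
public class UnderPreferredReplica {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            Set<String> topics = admin.listTopics().names().get();
            Map<String, TopicDescription> descriptions = admin.describeTopics(topics).all().get();
            for (TopicDescription td : descriptions.values()) {
                for (TopicPartitionInfo p : td.partitions()) {
                    boolean underPreferred = p.leader() == null
                            || p.replicas().isEmpty()
                            || p.leader().id() != p.replicas().get(0).id();
                    if (underPreferred) {
                        System.out.printf("%s-%d leader=%s preferred=%s%n",
                                td.name(), p.partition(),
                                p.leader() == null ? "none" : p.leader().id(),
                                p.replicas().isEmpty() ? "none" : p.replicas().get(0).id());
                    }
                }
            }
        }
    }
}
{code}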



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-707: The future of KafkaFuture

2021-03-25 Thread Ismael Juma
Thanks Tom, looks reasonable to me. I'll take a closer look soon, but
please go ahead with the vote whenever you're ready.

Ismael

On Thu, Mar 25, 2021, 8:53 AM Tom Bentley  wrote:

> Hi,
>
> I've updated the KIP along the lines of my previous reply to Ismael. If no
> one has further comments I will likely start a vote thread next week.
>
> Kind regards,
>
> Tom
>
> On Tue, Mar 23, 2021 at 10:01 AM Tom Bentley  wrote:
>
> > Hi Ryanne,
> >
> > Thanks for the reply. If there was a consensus for either of the more
> > ambitious changes described in my 2nd email then I would agree with you,
> > since it's much easier for users to understand and deal with and avoids
> > having a situation of a class with half of its API deprecated, the need
> to
> > find synonyms etc.
> >
> > TBH though, that 2nd email was an attempt to get people thinking and
> > engaging in the discussion. My personal preference is much closer to the
> > incremental option described in the KIP, because overall I don't think
> > there are nearly enough API mistakes/tech debt in the admin client
> overall
> > to warrant such a big change. Sure, there are a few deprecated methods
> and
> > classes, and using CompletionStage or CompletableFuture rather than
> > KafkaFuture would require major changes where those could be cleaned up
> at
> > the same time, but it just doesn't seem worth it when the vast majority
> of
> > the API is fine. It would force everyone using the Admin client to
> > eventually switch, whereas adding an accessor to KafkaFuture achieves the
> > prime objective of enabling people who need to interoperate with 3rd
> > party APIs requiring CompletionStage, while requiring nothing of all the
> > other users of the API.
> >
> > So unless there's a chorus of people who disagree and think that such a
> > big refactoring really is worthwhile, then I'm inclined to stick with
> > something closer to KIP-707.
> >
> > Many thanks,
> >
> > Tom
> >
> > On Fri, Mar 19, 2021 at 4:52 PM Ryanne Dolan 
> > wrote:
> >
> >> My two cents: keep the same method and class names and just use a
> >> different
> >> package. Strongly dislike coming up with slightly different names for
> >> everything.
> >>
> >> Ryanne
> >>
> >> On Tue, Feb 2, 2021, 4:41 AM Tom Bentley  wrote:
> >>
> >> > I've previously discounted the possibility of an "Admin2" client, but
> >> > seeing the recent discussions on the thread for KIP-706, I wonder
> >> whether
> >> > this current proposal in KIP-707 would benefit from a bit more
> >> > discussion... I think there are broadly two approaches to evolving the
> >> > Admin client API to use CompletionStage directly (rather than what's
> >> > currently proposed in KIP-707):
> >> >
> >> > The simpler option, from a development point of view, would be to
> >> introduce
> >> > an alternative/parallel set of classes for each of the existing result
> >> > classes. E.g. ListTopicsOutcome which was the same as
> ListTopicsResult,
> >> but
> >> > using CompletionStage rather than KafkaFuture. Adding methods to the
> >> > existing Admin interface would require coming up with synonym method
> >> names
> >> > for every API call, and probably half of the API being deprecated (if
> >> not
> >> > immediately then in the long run). It would be cleaner to have a whole
> >> new
> >> > interface, let's call it Manager, using the same method names. The
> >> existing
> >> > Admin client implementation would then wrap a Manager instance, and
> the
> >> > existing Result classes could have a constructor parameter of their
> >> > corresponding Outcome instance which wrapped the CompletionStages with
> >> > KafkaFutures. The Options classes would be unchanged. From a users
> >> point of
> >> > view migrating to the new Manager client would mostly be a matter of
> >> > changing class names and adding a `.toCompletionStage()` to those
> places
> >> > where they were calling KafkaFuture.get()/getNow() (and even this
> could
> >> be
> >> > avoided if we used CompletableFuture rather than CompletionStage in
> the
> >> > Outcome class APIs). In the long run Admin would be removed and we'd
> be
> >> > left with the minor annoyance of having a client called Manager in a
> >> > package called admin.
> >> >
> >> > The more involved version would do a similar refactoring, but within a
> >> > different package. If we stuck with the Admin and Result class names
> the
> >> > users experience of migrating their codebase would be limited to
> >> changing
> >> > import statements and the same additions of `.toCompletionStage()`. On
> >> the
> >> > implementation side it would force us to duplicate all the Options
> >> classes
> >> > and also have a way of converting old Options instances to their new
> >> > equivalents so that the old Admin implementation could delegate to the
> >> new
> >> > one. The main benefit of this approach seems to be the slightly easier
> >> > experience for people porting their code to the new client.
> >> >
> >> > In doing either of 
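
For context, the `.toCompletionStage()` accessor mentioned above is the crux of the KIP; the 
kind of interop it enables looks roughly like this (a sketch against the proposed accessor, 
which did not exist in released clients at the time of this thread; the rest is the existing 
Admin API):
{code:java}
import java.util.Set;
import java.util.concurrent.CompletionStage;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.KafkaFuture;

// Sketch of KIP-707-style interop: expose an admin result as a CompletionStage so it can be
// handed to third-party APIs. toCompletionStage() is the proposed accessor.
public class Kip707InteropSketch {
    static CompletionStage<Set<String>> topicNames(Admin admin) {
        KafkaFuture<Set<String>> names = admin.listTopics().names();
        return names.toCompletionStage();
    }
}
{code}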

Re: Kafka contribution

2021-03-25 Thread Bill Bejeck
Hi Tamas,

I've added you as a contributor to Jira.  You should be able to self-assign
tickets now.

For more information, you can take a look at the "How To Contribute"
 page.

Thanks for your interest in Apache Kafka.

-Bill

On Thu, Mar 25, 2021 at 10:29 AM Tamas Barnabas Egyed
 wrote:

> Dear Apache Kafka team,
>
> I wrote some wrong details about myself in my previous email.
> So here are my details:
>
> email: *egye...@cloudera.com *
> github username: *egyedt*
> jira username: *egyed.t*
>
> Kind regards,
> *Tamás Egyed* | Software engineer, Hungary
> cloudera.com 
> 
> --
>


Re: [DISCUSS] KIP-707: The future of KafkaFuture

2021-03-25 Thread Tom Bentley
Hi,

I've updated the KIP along the lines of my previous reply to Ismael. If no
one has further comments I will likely start a vote thread next week.

Kind regards,

Tom

On Tue, Mar 23, 2021 at 10:01 AM Tom Bentley  wrote:

> Hi Ryanne,
>
> Thanks for the reply. If there was a consensus for either of the more
> ambitious changes described in my 2nd email then I would agree with you,
> since it's much easier for users to understand and deal with and avoids
> having a situation of a class with half of its API deprecated, the need to
> find synonyms etc.
>
> TBH though, that 2nd email was an attempt to get people thinking and
> engaging in the discussion. My personal preference is much closer to the
> incremental option described in the KIP, because overall I don't think
> there are nearly enough API mistakes/tech debt in the admin client overall
> to warrant such a big change. Sure, there are a few deprecated methods and
> classes, and using CompletionStage or CompletableFuture rather than
> KafkaFuture would require major changes where those could be cleaned up at
> the same time, but it just doesn't seem worth it when the vast majority of
> the API is fine. It would force everyone using the Admin client to
> eventually switch, whereas adding an accessor to KafkaFuture achieves the
> prime objective of enabling people who need to interoperate with 3rd
> party APIs requiring CompletionStage, while requiring nothing of all the
> other users of the API.
>
> So unless there's a chorus of people who disagree and think that such a
> big refactoring really is worthwhile, then I'm inclined to stick with
> something closer to KIP-707.
>
> Many thanks,
>
> Tom
>
> On Fri, Mar 19, 2021 at 4:52 PM Ryanne Dolan 
> wrote:
>
>> My two cents: keep the same method and class names and just use a
>> different
>> package. Strongly dislike coming up with slightly different names for
>> everything.
>>
>> Ryanne
>>
>> On Tue, Feb 2, 2021, 4:41 AM Tom Bentley  wrote:
>>
>> > I've previously discounted the possibility of an "Admin2" client, but
>> > seeing the recent discussions on the thread for KIP-706, I wonder
>> whether
>> > this current proposal in KIP-707 would benefit from a bit more
>> > discussion... I think there are broadly two approaches to evolving the
>> > Admin client API to use CompletionStage directly (rather than what's
>> > currently proposed in KIP-707):
>> >
>> > The simpler option, from a development point of view, would be to
>> introduce
>> > an alternative/parallel set of classes for each of the existing result
>> > classes. E.g. ListTopicsOutcome which was the same as ListTopicsResult,
>> but
>> > using CompletionStage rather than KafkaFuture. Adding methods to the
>> > existing Admin interface would require coming up with synonym method
>> names
>> > for every API call, and probably half of the API being deprecated (if
>> not
>> > immediately then in the long run). It would be cleaner to have a whole
>> new
>> > interface, let's call it Manager, using the same method names. The
>> existing
>> > Admin client implementation would then wrap a Manager instance, and the
>> > existing Result classes could have a constructor parameter of their
>> > corresponding Outcome instance which wrapped the CompletionStages with
>> > KafkaFutures. The Options classes would be unchanged. From a users
>> point of
>> > view migrating to the new Manager client would mostly be a matter of
>> > changing class names and adding a `.toCompletionStage()` to those places
>> > where they were calling KafkaFuture.get()/getNow() (and even this could
>> be
>> > avoided if we used CompletableFuture rather than CompletionStage in the
>> > Outcome class APIs). In the long run Admin would be removed and we'd be
>> > left with the minor annoyance of having a client called Manager in a
>> > package called admin.
>> >
>> > The more involved version would do a similar refactoring, but within a
>> > different package. If we stuck with the Admin and Result class names the
>> > users experience of migrating their codebase would be limited to
>> changing
>> > import statements and the same additions of `.toCompletionStage()`. On
>> the
>> > implementation side it would force us to duplicate all the Options
>> classes
>> > and also have a way of converting old Options instances to their new
>> > equivalents so that the old Admin implementation could delegate to the
>> new
>> > one. The main benefit of this approach seems to be the slightly easier
>> > experience for people porting their code to the new client.
>> >
>> > In doing either of these much more significant refactorings there would
>> > also be the opportunity to omit the current Admin API's deprecated
>> methods
>> > and classes from the new API.
>> >
>> > Do we think this is worth biting off in order to have more long term
>> > consistency between the Admin, Producer and consumer APIs?
>> >
>> > Kind regards,
>> >
>> > Tom
>> >
>> > On Fri, Jan 22, 2021 at 3:02 PM Tom Bentley 
>> 

Re: Is it safe to delete old log segments manually?

2021-03-25 Thread Peter Bukowinski
Hi Sankalp,

As long as you have replication, I’ve found it is safer to delete entire 
topic-partition directories than to delete individual log segments from 
them. For one, you get back more space. Second, you don’t have to worry about 
metadata corruption.

When I’ve run out of disk space in the past, the first thing I did was reduce 
topic retention where I could, waited for the log cleanup routines to run, then 
I looked for and deleted associated topic partition directories on the brokers 
with filled disks before starting kafka on them. When the brokers rejoined the 
cluster, they started catching up on the deleted topic-partition directories.

--
Peter Bukowinski

> On Mar 25, 2021, at 8:00 AM, Sankalp Bhatia  wrote:
> 
> Hi All,
> 
> Brokers in one of our Apache Kafka clusters are continuously crashing as
> they have run out of disk space. As per my understanding, reducing the
> value of retention.ms and retention.bytes properties will not work because
> the broker is crashing before the log-retention thread can be scheduled (
> link
> 
> ).
> One option we are exploring is if we can manually delete some of the old
> segment files to make some space in our data disk for the broker to startup
> while reducing the retention.ms config at the same time. There is an old
> email thread (link
> )
> which suggests it is safe to do so, but we want to understand if there have
> been recent changes to topic-partition metadata which we might end up
> corrupting if we try this? If so, are there any tips to get around this
> issue?
> 
> Thanks,
> Sankalp


Is it safe to delete old log segments manually?

2021-03-25 Thread Sankalp Bhatia
Hi All,

Brokers in one of our Apache Kafka clusters are continuously crashing as
they have run out of disk space. As per my understanding, reducing the
value of retention.ms and retention.bytes properties will not work because
the broker is crashing before the log-retention thread can be scheduled (
link

).
One option we are exploring is if we can manually delete some of the old
segment files to make some space in our data disk for the broker to startup
while reducing the retention.ms config at the same time. There is an old
email thread (link
)
which suggests it is safe to do so, but we want to understand if there have
been recent changes to topic-partition metadata which we might end up
corrupting if we try this? If so, are there any tips to get around this
issue?

Thanks,
Sankalp


Re: Kafka contribution

2021-03-25 Thread Tamas Barnabas Egyed
Dear Apache Kafka team,

I wrote some wrong details about myself in my previous email.
So here are my details:

email: *egye...@cloudera.com *
github username: *egyedt*
jira username: *egyed.t*

Kind regards,
*Tamás Egyed* | Software engineer, Hungary
cloudera.com 

--


Kafka contribution

2021-03-25 Thread Tamas Barnabas Egyed
Dear Apache Kafka team,

I would like to contribute to the Kafka project.
Here are some details about me:

email: *egye...@cloudera.com *
github username: *egytom* (domain: github.infra.cloudera.com)
jira username: *egyed.t *(https://jira.cloudera.com/)

Kind regards,
*Tamás Egyed* | Software engineer, Hungary
cloudera.com 

--


[jira] [Resolved] (KAFKA-12513) Kafka zookeeper client can't connect when the first zookeeper server is offline

2021-03-25 Thread Krzysztof Piecuch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krzysztof Piecuch resolved KAFKA-12513.
---
Resolution: Invalid

I've just read the docs, looks like everything is fine on kafka & zookeeper 
side.

 

sorry for the confusion.
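
For readers hitting the same symptom: the connection-string format described in the docs 
appends the chroot once, after the last host, rather than after every host. A small sketch 
(host names reused from the report purely for illustration):
{code:java}
public class ZkConnectStringExample {
    public static void main(String[] args) {
        // Chroot appended once, after the final host.
        String correct = "zk0.gambit:2181,zk1.gambit:2181,zk2.gambit:2181/hex8c";
        // Repeating the chroot per host is problematic: the client treats everything after
        // the first '/' as the chroot, so only the first host remains in the server list.
        String problematic = "zk0.gambit:2181/hex8c,zk1.gambit:2181/hex8c,zk2.gambit:2181/hex8c";
        System.out.println("use: " + correct + "  not: " + problematic);
    }
}
{code}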

> Kafka zookeeper client can't connect when the first zookeeper server is 
> offline
> ---
>
> Key: KAFKA-12513
> URL: https://issues.apache.org/jira/browse/KAFKA-12513
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 2.3.1, 2.4.1, 2.7.0
> Environment: kafka_2.13-2.7.0, kernel 5.4.0-52-generic (Ubuntu), 
> Scala 2.13.3-400
>Reporter: Krzysztof Piecuch
>Priority: Critical
>
> Kafka zookeeper client library will not connect to any zookeepers in the 
> "zookeeper string" when the first zookeeper is offline. This causes the 
> cluster to crash hard, and in order to get the cluster back into a healthy state 
> the first zookeeper node must be resurrected.
> The crash does not always happen immediately after zk0 goes offline, because 
> kafka might have connections established to different zookeeper instances. 
> When the connection gets dropped and kafka needs to reconnect everything 
> crashes hard.
>  
> Demo:
> This works:
> {code:java}
>  root@kafka0:/opt/kafka/current/bin# ./kafka-topics.sh --zookeeper 
> zk0.gambit:2181/hex8c,zk1.gambit:2181/hex8c,zk2.gambit:2181/hex8c --describe  
> --topic duma
> Topic: duma   PartitionCount: 6   ReplicationFactor: 3Configs: 
> compression.type=uncompressed,retention.bytes=322122547200
>   Topic: duma Partition: 0Leader: 1   Replicas: 1,0,2 Isr: 
> 1,0,2
>   Topic: duma Partition: 1Leader: 2   Replicas: 2,1,0 Isr: 
> 0,1,2
>   Topic: duma Partition: 2Leader: 0   Replicas: 0,2,1 Isr: 
> 0,1,2
>   Topic: duma Partition: 3Leader: 1   Replicas: 1,2,0 Isr: 
> 1,0,2
>   Topic: duma Partition: 4Leader: 2   Replicas: 2,0,1 Isr: 
> 1,0,2
>   Topic: duma Partition: 5Leader: 0   Replicas: 0,1,2 Isr: 
> 0,1,2
> {code}
> Now let's mess with the zookeeper string and see how zookeeper client reacts:
> Changing the last server in the zookeeper string works as expected, 
> {{kafka-topics.sh}} connected to zookeeper but couldn't find the topic 
> (because of bogus zookeeper string):
> {code:java}
> root@kafka0:/opt/kafka/current/bin# ./kafka-topics.sh --zookeeper 
> zk0.gambit:2181/hex8c,zk1.gambit:2181/hex8c,1.1.1.1:2181/hex8c --describe 
> --topic duma
> Error while executing topic command : Topic 'duma' does not exist as expected
> [2021-03-20 23:01:45,535] ERROR java.lang.IllegalArgumentException: Topic 
> 'duma' does not exist as expected
>   at 
> kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:484)
>   at 
> kafka.admin.TopicCommand$ZookeeperTopicService.describeTopic(TopicCommand.scala:390)
>   at kafka.admin.TopicCommand$.main(TopicCommand.scala:67)
>   at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$) {code}
> However, in case the first server in the zookeeper cluster is unavailable 
> zookeeper client won't connect to any of the zookeepers:
> {code:java}
> root@kafka0:/opt/kafka/current/bin# ./kafka-topics.sh --zookeeper 
> 1.1.1.1:2181/hex8c,zk1.gambit:2181/hex8c,zk2.gambit:2181/hex8c --describe 
> --topic duma
> [2021-03-20 23:02:43,888] WARN Client session timed out, have not heard from 
> server in 30012ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
> Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: 
> Timed out waiting for connection while in state: CONNECTING
>   at 
> kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:259)
>   at 
> kafka.zookeeper.ZooKeeperClient$$Lambda$31.5D399170.apply$mcV$sp(Unknown
>  Source)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
>   at 
> kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:255)
>   at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:113)
>   at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1858)
>   at 
> kafka.admin.TopicCommand$ZookeeperTopicService$.apply(TopicCommand.scala:321)
>   at kafka.admin.TopicCommand$.main(TopicCommand.scala:54)
>   at kafka.admin.TopicCommand.main(TopicCommand.scala) {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12555) Log reason for rolling a segment

2021-03-25 Thread Stanislav Kozlovski (Jira)
Stanislav Kozlovski created KAFKA-12555:
---

 Summary: Log reason for rolling a segment
 Key: KAFKA-12555
 URL: https://issues.apache.org/jira/browse/KAFKA-12555
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski


It would be useful for issue-diagnostic purposes to log the reason why a 
log segment was rolled 
(https://github.com/apache/kafka/blob/e840b03a026ddb9a67a15a164d877545130d6e17/core/src/main/scala/kafka/log/Log.scala#L2069)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12551) Refactor Kafka Log layer

2021-03-25 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12551:


 Summary: Refactor Kafka Log layer
 Key: KAFKA-12551
 URL: https://issues.apache.org/jira/browse/KAFKA-12551
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


This is an umbrella Jira that tracks the work items for the Log layer refactor 
as described here: 
[https://docs.google.com/document/d/1dQJL4MCwqQJSPmZkVmVzshFZKuFy_bCPtubav4wBfHQ/edit?usp=sharing]
 .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12552) Extract segments map out of Log class into separate class

2021-03-25 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12552:


 Summary: Extract segments map out of Log class into separate class
 Key: KAFKA-12552
 URL: https://issues.apache.org/jira/browse/KAFKA-12552
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


Extract segments map out of Log class into separate class. This will be 
particularly useful to refactor the recovery logic in Log class.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12554) Split Log layer into UnifiedLog and LocalLog

2021-03-25 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12554:


 Summary: Split Log layer into UnifiedLog and LocalLog
 Key: KAFKA-12554
 URL: https://issues.apache.org/jira/browse/KAFKA-12554
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


Split Log layer into UnifiedLog and LocalLog based on the proposal described in 
this document: 
https://docs.google.com/document/d/1dQJL4MCwqQJSPmZkVmVzshFZKuFy_bCPtubav4wBfHQ/edit#.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12553) Refactor Log layer recovery logic

2021-03-25 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-12553:


 Summary: Refactor Log layer recovery logic
 Key: KAFKA-12553
 URL: https://issues.apache.org/jira/browse/KAFKA-12553
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam
Assignee: Kowshik Prakasam


Refactor Log layer recovery logic by extracting it out of the kafka.log.Log 
class into separate modules.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Fw: Regarding email notification un-subscription

2021-03-25 Thread Tom Bentley
Hi Madan,

You need to send an email (any subject or contents will do) to
dev-unsubscr...@kafka.apache.org from the email address which is receiving
the mailing list messages. You will then receive a message containing a
link which you can use to unsubscribe.

Kind regards,

Tom


On Thu, Mar 25, 2021 at 5:56 AM madan mohan mohanty
 wrote:

> Please unsubscribe me
>
> Sent from Yahoo Mail on Android
>
>- Forwarded message -
> From: "Jyotinder Singh" <jyotindrsi...@gmail.com>
> To: "dev@kafka.apache.org"
> Sent: Thu, 25 Mar 2021 at 12:38 am
> Subject: Re: Regarding email notification un-subscription
>
> Thanks, I wasn't aware of the unsubscribe confirmation mail that was
> ending up in my spam folder.
>
> Thanks a lot!
>
> On Wed, 24 Mar 2021, 17:55 Tom Bentley,  wrote:
>
> > Avinash,
> >
> > You don't say whether you have already tried, but you need to send an
> email
> > to dev-unsubscr...@kafka.apache.org.
> >
> > Jyotinder,
> >
> > Assuming you definitely used the right email address, have you checked
> your
> > spam folder, since IIRC you get sent a confirmation email with a link to
> > visit to confirm that you want to unsubscribe.
> >
> > Kind regards,
> >
> > Tom
> >
> > On Wed, Mar 24, 2021 at 12:12 PM Jyotinder Singh <
> jyotindrsi...@gmail.com>
> > wrote:
> >
> > > I’m facing the same issue. Mailing to the unsubscribe address doesn’t
> > seem
> > > to work.
> > > Email: jyotindrsi...@gmail.com
> > >
> > > On Wed, 24 Mar 2021 at 5:29 PM, Avinash Srivastava <
> > > asrivast...@flyanra.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I am continuously getting multiple emails from various email accounts
> > > > related to kafka. I would request you to please unsubscribe my email
> -
> > > > asrivast...@flyanra.com from subscription list.
> > > >
> > > >
> > > > Thanks
> > > >
> > > > Avinash
> > > >
> > > > --
> > > Sent from my iPad
> > >
> >
>
>