Build failed in Jenkins: kafka-trunk-jdk11 #83

2018-11-07 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7431: Clean up connect unit tests

--
[...truncated 2.33 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3186

2018-11-07 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7431: Clean up connect unit tests

--
[...truncated 2.73 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

[jira] [Created] (KAFKA-7605) Flaky Test `SaslMultiMechanismConsumerTest.testCoordinatorFailover`

2018-11-07 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7605:
--

 Summary: Flaky Test 
`SaslMultiMechanismConsumerTest.testCoordinatorFailover`
 Key: KAFKA-7605
 URL: https://issues.apache.org/jira/browse/KAFKA-7605
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Jason Gustafson
Assignee: Jason Gustafson


{code}
java.lang.AssertionError: Failed to observe commit callback before timeout
at kafka.utils.TestUtils$.fail(TestUtils.scala:351)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:761)
at kafka.utils.TestUtils$.pollUntilTrue(TestUtils.scala:727)
at kafka.api.BaseConsumerTest.awaitCommitCallback(BaseConsumerTest.scala:198)
at kafka.api.BaseConsumerTest.ensureNoRebalance(BaseConsumerTest.scala:214)
at kafka.api.BaseConsumerTest.testCoordinatorFailover(BaseConsumerTest.scala:117)
{code}

Probably just need to increase the timeout a little.
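For reference, the wait-until-true pattern that the failure points at can be sketched as follows. This is illustrative Java, not Kafka's actual Scala TestUtils; the class and parameter names here are assumptions. The point is that the timeout is just a deadline on a polling loop, so a flaky environment can be accommodated by raising it:

```java
import java.util.function.BooleanSupplier;

// Minimal sketch of a poll-until-true helper in the spirit of
// kafka.utils.TestUtils.waitUntilTrue (names/signatures illustrative).
public class WaitUntil {

    // Polls `condition` until it returns true or `timeoutMs` elapses.
    public static boolean waitUntilTrue(BooleanSupplier condition,
                                        long timeoutMs,
                                        long pollIntervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean())
                return true;
            try {
                Thread.sleep(pollIntervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }
}
```

A condition that simply takes longer than the deadline on a loaded CI machine fails the assertion, which is why bumping the timeout is the usual fix for this class of flake.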



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7604) Flaky Test `ConsumerCoordinatorTest.testRebalanceAfterTopicUnavailableWithPatternSubscribe`

2018-11-07 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7604:
--

 Summary: Flaky Test 
`ConsumerCoordinatorTest.testRebalanceAfterTopicUnavailableWithPatternSubscribe`
 Key: KAFKA-7604
 URL: https://issues.apache.org/jira/browse/KAFKA-7604
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Jason Gustafson
Assignee: Jason Gustafson


{code}
java.lang.AssertionError: Metadata refresh requested unnecessarily
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest.unavailableTopicTest(ConsumerCoordinatorTest.java:1034)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinatorTest.testRebalanceAfterTopicUnavailableWithPatternSubscribe(ConsumerCoordinatorTest.java:984)
{code}

The problem seems to be a race condition in the test case: the heartbeat 
thread and the foreground thread unsafely attempt to update metadata at the 
same time.
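The general shape of the race, and of the usual fix (funnelling concurrent updates through one lock so a refresh is requested exactly once), can be sketched like this. The class and method names are illustrative, not Kafka's actual ConsumerCoordinator internals:

```java
// Sketch of guarding a shared metadata version against concurrent
// updaters (a "heartbeat" thread and a "foreground" thread). Without
// the lock, both threads could observe the same version and each
// request a refresh, tripping an assertion like the one above.
public class MetadataHolder {
    private final Object lock = new Object();
    private int version = 0;

    // Both update paths must take the lock; only the first caller to
    // see a given version triggers a refresh, the other backs off.
    public boolean maybeRequestUpdate(int observedVersion) {
        synchronized (lock) {
            if (observedVersion < version)
                return false;           // someone already refreshed
            version = observedVersion + 1;
            return true;                // this caller triggers the refresh
        }
    }

    public int version() {
        synchronized (lock) { return version; }
    }
}
```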



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #82

2018-11-07 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7560; PushHttpMetricsReporter should not convert metric value to

--
[...truncated 1.47 MB...]

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning STARTED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning PASSED

kafka.tools.ConsoleConsumerTest > shouldResetUnConsumedOffsetsBeforeExit STARTED

kafka.tools.ConsoleConsumerTest > shouldResetUnConsumedOffsetsBeforeExit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetLatest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetLatest PASSED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
STARTED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetAndMatchingFromBeginning 
STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetAndMatchingFromBeginning PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnGroupIdAndPartitionGivenTogether 
STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnGroupIdAndPartitionGivenTogether 
PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnUnrecognizedNewConsumerOption 
STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnUnrecognizedNewConsumerOption 
PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetEarliest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetEarliest PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > 
testCustomPropertyShouldBePassedToConfigureMethod STARTED

kafka.tools.ConsoleConsumerTest > 
testCustomPropertyShouldBePassedToConfigureMethod PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithNoOffsetReset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithNoOffsetReset PASSED

kafka.tools.DumpLogSegmentsTest > testPrintDataLog STARTED

kafka.tools.DumpLogSegmentsTest > testPrintDataLog PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #3185

2018-11-07 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7560; PushHttpMetricsReporter should not convert metric value to

--
[...truncated 800.34 KB...]

kafka.server.MetadataRequestTest > testControllerId STARTED

kafka.server.MetadataRequestTest > testControllerId PASSED

kafka.server.MetadataRequestTest > testAliveBrokersWithNoTopics STARTED

kafka.server.MetadataRequestTest > testAliveBrokersWithNoTopics PASSED

kafka.server.MetadataRequestTest > testAllTopicsRequest STARTED

kafka.server.MetadataRequestTest > testAllTopicsRequest PASSED

kafka.server.MetadataRequestTest > testClusterIdIsValid STARTED

kafka.server.MetadataRequestTest > testClusterIdIsValid PASSED

kafka.server.MetadataRequestTest > testNoTopicsRequest STARTED

kafka.server.MetadataRequestTest > testNoTopicsRequest PASSED

kafka.server.MetadataRequestTest > 
testAutoCreateTopicWithInvalidReplicationFactor STARTED

kafka.server.MetadataRequestTest > 
testAutoCreateTopicWithInvalidReplicationFactor PASSED

kafka.server.MetadataRequestTest > testPreferredReplica STARTED

kafka.server.MetadataRequestTest > testPreferredReplica PASSED

kafka.server.MetadataRequestTest > testClusterIdWithRequestVersion1 STARTED

kafka.server.MetadataRequestTest > testClusterIdWithRequestVersion1 PASSED

kafka.server.MetadataRequestTest > testAutoTopicCreation STARTED

kafka.server.MetadataRequestTest > testAutoTopicCreation PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
STARTED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
STARTED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment 
STARTED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
STARTED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.server.ProduceRequestTest > testSimpleProduceRequest STARTED

kafka.server.ProduceRequestTest > testSimpleProduceRequest PASSED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest STARTED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest PASSED

kafka.server.ProduceRequestTest > testProduceToNonReplica STARTED

kafka.server.ProduceRequestTest > testProduceToNonReplica PASSED

kafka.server.ProduceRequestTest > testZSTDProduceRequest STARTED

kafka.server.ProduceRequestTest > testZSTDProduceRequest PASSED

kafka.server.AbstractFetcherThreadTest > testSimpleFetch STARTED

kafka.server.AbstractFetcherThreadTest > testSimpleFetch PASSED

kafka.server.AbstractFetcherThreadTest > testFollowerFetchOutOfRangeHigh STARTED

kafka.server.AbstractFetcherThreadTest > testFollowerFetchOutOfRangeHigh PASSED

kafka.server.AbstractFetcherThreadTest > testFencedTruncation STARTED

kafka.server.AbstractFetcherThreadTest > testFencedTruncation PASSED

kafka.server.AbstractFetcherThreadTest > 
testRetryAfterUnknownLeaderEpochInLatestOffsetFetch STARTED

kafka.server.AbstractFetcherThreadTest > 
testRetryAfterUnknownLeaderEpochInLatestOffsetFetch PASSED

kafka.server.AbstractFetcherThreadTest > testTruncationSkippedIfNoEpochChange 
STARTED

kafka.server.AbstractFetcherThreadTest > testTruncationSkippedIfNoEpochChange 
PASSED

kafka.server.AbstractFetcherThreadTest > testUnknownLeaderEpochInTruncation 
STARTED

kafka.server.AbstractFetcherThreadTest > testUnknownLeaderEpochInTruncation 
PASSED

kafka.server.AbstractFetcherThreadTest > testConsumerLagRemovedWithPartition 
STARTED

kafka.server.AbstractFetcherThreadTest > testConsumerLagRemovedWithPartition 
PASSED

kafka.server.AbstractFetcherThreadTest > testFollowerFetchOutOfRangeLow STARTED

kafka.server.AbstractFetcherThreadTest > testFollowerFetchOutOfRangeLow PASSED

kafka.server.AbstractFetcherThreadTest > testFencedOffsetResetAfterOutOfRange 
STARTED

kafka.server.AbstractFetcherThreadTest > testFencedOffsetResetAfterOutOfRange 
PASSED

kafka.server.AbstractFetcherThreadTest > testUnknownLeaderEpochWhileFetching 
STARTED

kafka.server.AbstractFetcherThreadTest > testUnknownLeaderEpochWhileFetching 
PASSED

kafka.server.AbstractFetcherThreadTest > testFencedFetch STARTED

kafka.server.AbstractFetcherThreadTest > testFencedFetch PASSED

kafka.server.AbstractFetcherThreadTest > testMetricsRemovedOnShutdown STARTED

kafka.server.AbstractFetcherThreadTest > testMetricsRemovedOnShutdown PASSED

kafka.server.AbstractFetcherThreadTest > testCorruptMessage STARTED

kafka.server.AbstractFetcherThreadTest > testCorruptMessage PASSED

kafka.server.AbstractFetcherThreadTest > testTruncation STARTED

kafka.server.AbstractFetcherThreadTest > testTruncation PASSED

kafka.server.ReplicationQuotasTest > 

Re: [VOTE] - KIP-213 Support non-key joining in KTable

2018-11-07 Thread Adam Bellemare
Bumping this thread, as per convention - 1

On Fri, Nov 2, 2018 at 8:22 AM Adam Bellemare 
wrote:

> As expected :) But still, thanks nonetheless!
>
> On Fri, Nov 2, 2018 at 3:36 AM Jan Filipiak 
> wrote:
>
>> reminder
>>
>> On 30.10.2018 15:47, Adam Bellemare wrote:
>> > Hi All
>> >
>> > I would like to call a vote on
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable
>> .
>> > This allows a Kafka Streams DSL user to perform KTable to KTable
>> > foreign-key joins on their data. I have been using this in production
>> for
>> > some time and I have composed a PR that enables this. It is a fairly
>> > extensive PR, but I believe it will add considerable value to the Kafka
>> > Streams DSL.
>> >
>> > The PR can be found here:
>> > https://github.com/apache/kafka/pull/5527
>> >
>> > See
>> http://mail-archives.apache.org/mod_mbox/kafka-dev/201810.mbox/browser
>> > for previous discussion thread.
>> >
>> > I would also like to give a shout-out to Jan Filipiak who helped me out
>> > greatly in this project, and who led the initial work into this problem.
>> > Without Jan's help and insight I do not think this would have been
>> possible
>> > to get to this point.
>> >
>> > Adam
>> >
>>
>


[jira] [Resolved] (KAFKA-6262) KIP-232: Detect outdated metadata by adding ControllerMetadataEpoch field

2018-11-07 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-6262.
-
Resolution: Duplicate

The design in this Jira has been moved to KIP-320.

> KIP-232: Detect outdated metadata by adding ControllerMetadataEpoch field
> -
>
> Key: KAFKA-6262
> URL: https://issues.apache.org/jira/browse/KAFKA-6262
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>Priority: Major
>
> Currently the following sequence of events may occur and cause a consumer to 
> rewind back to the earliest offset even though there is no log truncation in 
> Kafka. This can be a problem for MirrorMaker, forcing it to lag behind 
> significantly and duplicate a large amount of data.
> - Say there are three brokers 1, 2, 3 for a given partition P. Broker 1 is 
> the leader. Initially they are all in the ISR. HW and LEO are both 10.
> - An SRE does a controlled shutdown of broker 1. The controller sends a 
> LeaderAndIsrRequest to all three brokers so that leader = broker 2 and 
> isr_set = [broker 2, broker 3].
> - Brokers 2 and 3 receive and process the LeaderAndIsrRequest almost 
> instantaneously. Now brokers 2 and 3 can accept ProduceRequest and 
> FetchRequest for partition P. 
> However, broker 1 has not processed this LeaderAndIsrRequest due to a 
> backlog in its request queue, so broker 1 still thinks it is the leader for 
> partition P.
> - Because of the leadership movement, a consumer receives 
> NotLeaderForPartitionException, which triggers the consumer to send a 
> MetadataRequest to a randomly selected broker, say broker 2. Broker 2 tells 
> the consumer that it is the leader for partition P. The consumer fetches 
> data for partition P from broker 2. The latest data has offset 20.
> - Later this consumer receives NotLeaderForPartitionException for another 
> partition. It sends a MetadataRequest to a randomly selected broker again. 
> This time it sends the MetadataRequest to broker 1, which tells the consumer 
> that it is the leader for partition P.
> - The consumer issues a FetchRequest for partition P at offset 21. Broker 1 
> returns OffsetOutOfRangeException because it thinks the LogEndOffset for 
> this partition is 10.
> There are two possible solutions to this problem. The long-term solution is 
> probably to include a version in the MetadataResponse so that the consumer 
> knows whether the metadata is outdated. This requires a KIP.
> The short-term solution, which should solve the problem in most cases, is to 
> let the consumer keep fetching metadata from the same (initially randomly 
> picked) broker until the connection to that broker is disconnected. The 
> metadata version will not go back in time if the consumer keeps fetching 
> metadata from the same broker.
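The long-term "version in the MetadataResponse" idea (what this ticket was eventually folded into via KIP-320's metadata epoch) can be sketched as a simple monotonic check on the client. Names here are illustrative, not the actual client code:

```java
// Hedged sketch: a client-side metadata cache that rejects any
// response carrying an older epoch than what it has already seen.
// In the scenario above, the stale answer from broker 1 would carry
// an older epoch and be ignored instead of rewinding the consumer.
public class MetadataCache {
    private long epoch = -1;
    private String leader = null;

    // Accept a response only if its epoch is at least as new as the
    // cached one; an older epoch means the responding broker holds
    // outdated state.
    public boolean maybeUpdate(long responseEpoch, String responseLeader) {
        if (responseEpoch < epoch)
            return false; // stale response, ignore it
        epoch = responseEpoch;
        leader = responseLeader;
        return true;
    }

    public String leader() { return leader; }
}
```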



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7603) Producer should negotiate message format version with broker

2018-11-07 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-7603:
---

 Summary: Producer should negotiate message format version with 
broker
 Key: KAFKA-7603
 URL: https://issues.apache.org/jira/browse/KAFKA-7603
 Project: Kafka
  Issue Type: Improvement
Reporter: Dong Lin


Currently the producer always sends records with the highest magic format 
version supported by both the producer and broker libraries, regardless of 
the log.message.format.version config on the broker.

This causes unnecessary message down-conversion overhead when 
log.message.format.version has not been upgraded but the producer/broker 
libraries have been. It would be preferable for the producer to produce 
messages with a format version no higher than the log.message.format.version 
configured on the broker.
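The proposed negotiation amounts to taking the minimum of the client's highest supported magic and the broker's configured format. A minimal sketch, with illustrative names and constants (not the actual producer code):

```java
// Sketch of the version negotiation described above: pick the lower
// of the producer's max supported magic and the broker's configured
// log.message.format.version magic, so the broker never has to
// down-convert what it stores.
public class MagicNegotiation {
    public static final byte PRODUCER_MAX_MAGIC = 2; // e.g. a current client

    public static byte negotiate(byte brokerLogMessageFormatMagic) {
        // Never produce a format newer than the broker will store.
        return (byte) Math.min(PRODUCER_MAX_MAGIC, brokerLogMessageFormatMagic);
    }
}
```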



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Observer

2018-11-07 Thread abeceda4
Addition to my previous e-mail ...
 
I think there should be an Observer pattern here. I think the Consumer 
interface is the Observer interface, with an update method (I do not know 
what this method is called in Consumer). I have two ideas about which classes 
could be the concrete observers. The first idea is that the concrete 
observers are the classes KafkaConsumer and MockConsumer (both implement the 
Consumer interface). The second idea: I read that a KafkaConsumer object is 
created whenever an application wants to be a consumer, so the concrete 
observers could be all of those objects. But I lean more toward KafkaConsumer 
and MockConsumer being the concrete observers.
 
Observers must observe some subject, and I think that subject is a Topic. 
There could be a Topic interface with methods attachObserver, detachObserver 
and notifyObservers, plus a concrete subject class that implements it. But I 
cannot find any Topic classes. I read that topics are managed by ZooKeeper, 
and I worry that the subject half of the pattern lives in ZooKeeper, or that 
the subject is created from the command line, which would explain why I 
cannot find the subject part of the Observer pattern.
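The textbook Observer structure being described would look like the sketch below. Note this is only an analogy, and the interface/class names are invented for illustration: Kafka consumers actually pull records from brokers over the network via poll(), so there is no in-process subject pushing updates to them.

```java
import java.util.ArrayList;
import java.util.List;

// Classic Observer pattern with the naming from the question
// (attach/detach/notify). Illustrative only -- not how Kafka topics
// or consumers are actually implemented.
interface ConsumerObserver {
    void update(String record);
}

interface TopicSubject {
    void attachObserver(ConsumerObserver o);
    void detachObserver(ConsumerObserver o);
    void notifyObservers(String record);
}

class InMemoryTopic implements TopicSubject {
    private final List<ConsumerObserver> observers = new ArrayList<>();

    public void attachObserver(ConsumerObserver o) { observers.add(o); }
    public void detachObserver(ConsumerObserver o) { observers.remove(o); }

    public void notifyObservers(String record) {
        for (ConsumerObserver o : observers)
            o.update(record);
    }
}
```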
 
Can you please help me with that?
 
Thank you very much


Re: [VOTE] 2.0.1 RC0

2018-11-07 Thread Rajini Sivaram
+1 (binding)

Checked source build and unit tests. Ran quickstart with source and binary.

Thank you for managing the release, Manikumar!

Regards,

Rajini

On Wed, Nov 7, 2018 at 6:18 PM Gwen Shapira  wrote:

> +1 (binding)
>
> Checked signatures, build and quickstart.
>
> Thank you for managing the release, Mani!
>
>
> On Thu, Oct 25, 2018 at 7:29 PM Manikumar 
> wrote:
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 2.0.1.
> >
> > This is a bug fix release closing 49 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
> >
> > Release notes for the 2.0.1 release:
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by  Tuesday, October 30, end of day
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
> >
> > * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.0.1-rc0
> >
> > * Documentation:
> > http://kafka.apache.org/20/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/20/protocol.html
> >
> > * Successful Jenkins builds for the 2.0 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.0-jdk8/177/
> >
> > /**
> >
> > Thanks,
> > Manikumar
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [VOTE] 2.0.1 RC0

2018-11-07 Thread Gwen Shapira
+1 (binding)

Checked signatures, build and quickstart.

Thank you for managing the release, Mani!


On Thu, Oct 25, 2018 at 7:29 PM Manikumar  wrote:
>
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 2.0.1.
>
> This is a bug fix release closing 49 tickets:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
>
> Release notes for the 2.0.1 release:
> http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by  Tuesday, October 30, end of day
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
>
> * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> https://github.com/apache/kafka/releases/tag/2.0.1-rc0
>
> * Documentation:
> http://kafka.apache.org/20/documentation.html
>
> * Protocol:
> http://kafka.apache.org/20/protocol.html
>
> * Successful Jenkins builds for the 2.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/177/
>
> /**
>
> Thanks,
> Manikumar



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Build failed in Jenkins: kafka-2.1-jdk8 #52

2018-11-07 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7560; PushHttpMetricsReporter should not convert metric value to

--
[...truncated 427.34 KB...]

kafka.server.KafkaConfigTest > testCaseInsensitiveListenerProtocol STARTED

kafka.server.KafkaConfigTest > testCaseInsensitiveListenerProtocol PASSED

kafka.server.KafkaConfigTest > testListenerAndAdvertisedListenerNames STARTED

kafka.server.KafkaConfigTest > testListenerAndAdvertisedListenerNames PASSED

kafka.server.KafkaConfigTest > testNonroutableAdvertisedListeners STARTED

kafka.server.KafkaConfigTest > testNonroutableAdvertisedListeners PASSED

kafka.server.KafkaConfigTest > 
testInterBrokerListenerNameAndSecurityProtocolSet STARTED

kafka.server.KafkaConfigTest > 
testInterBrokerListenerNameAndSecurityProtocolSet PASSED

kafka.server.KafkaConfigTest > testFromPropsInvalid STARTED

kafka.server.KafkaConfigTest > testFromPropsInvalid PASSED

kafka.server.KafkaConfigTest > testInvalidCompressionType STARTED

kafka.server.KafkaConfigTest > testInvalidCompressionType PASSED

kafka.server.KafkaConfigTest > testAdvertiseHostNameDefault STARTED

kafka.server.KafkaConfigTest > testAdvertiseHostNameDefault PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided STARTED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testValidCompressionType STARTED

kafka.server.KafkaConfigTest > testValidCompressionType PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid STARTED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.KafkaConfigTest > testListenerNamesWithAdvertisedListenerUnset 
STARTED

kafka.server.KafkaConfigTest > testListenerNamesWithAdvertisedListenerUnset 
PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
STARTED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided STARTED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault STARTED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testInvalidAdvertisedListenersProtocol STARTED

kafka.server.KafkaConfigTest > testInvalidAdvertisedListenersProtocol PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled STARTED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testInterBrokerVersionMessageFormatCompatibility 
STARTED

kafka.server.KafkaConfigTest > testInterBrokerVersionMessageFormatCompatibility 
PASSED

kafka.server.KafkaConfigTest > testAdvertisePortDefault STARTED

kafka.server.KafkaConfigTest > testAdvertisePortDefault PASSED

kafka.server.KafkaConfigTest > testVersionConfiguration STARTED

kafka.server.KafkaConfigTest > testVersionConfiguration PASSED

kafka.server.KafkaConfigTest > testEqualAdvertisedListenersProtocol STARTED

kafka.server.KafkaConfigTest > testEqualAdvertisedListenersProtocol PASSED

kafka.server.ListOffsetsRequestTest > testListOffsetsErrorCodes STARTED

kafka.server.ListOffsetsRequestTest > testListOffsetsErrorCodes PASSED

kafka.server.ListOffsetsRequestTest > testCurrentEpochValidation STARTED

kafka.server.ListOffsetsRequestTest > testCurrentEpochValidation PASSED

kafka.server.CreateTopicsRequestTest > testValidCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testValidCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testErrorCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testErrorCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testInvalidCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testInvalidCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testNotController STARTED

kafka.server.CreateTopicsRequestTest > testNotController PASSED

kafka.server.CreateTopicsRequestWithPolicyTest > testValidCreateTopicsRequests 
STARTED

kafka.server.CreateTopicsRequestWithPolicyTest > testValidCreateTopicsRequests 
PASSED

kafka.server.CreateTopicsRequestWithPolicyTest > testErrorCreateTopicsRequests 
STARTED

kafka.server.CreateTopicsRequestWithPolicyTest > testErrorCreateTopicsRequests 
PASSED

kafka.server.FetchRequestDownConversionConfigTest > 
testV1FetchWithDownConversionDisabled STARTED

kafka.server.FetchRequestDownConversionConfigTest > 
testV1FetchWithDownConversionDisabled PASSED

kafka.server.FetchRequestDownConversionConfigTest > testV1FetchFromReplica 
STARTED

kafka.server.FetchRequestDownConversionConfigTest > testV1FetchFromReplica 
PASSED

kafka.server.FetchRequestDownConversionConfigTest > 
testLatestFetchWithDownConversionDisabled STARTED

kafka.server.FetchRequestDownConversionConfigTest > 
testLatestFetchWithDownConversionDisabled 

design patterns

2018-11-07 Thread abeceda4

Hi, can you help me identify some design patterns in the apache/kafka codebase?

Thanks
 
Mrkvica



Re: [VOTE] 2.0.1 RC0

2018-11-07 Thread Ismael Juma
Since that was just a system-test fix, it doesn't seem to require another
RC. We just need a couple more votes; I'll ping some PMC members.

Ismael

On Wed, Nov 7, 2018 at 5:01 AM Manikumar  wrote:

> KAFKA-7581, KAFKA-7579 are not blockers for 2.0.1 release. KAFKA-7579 got
> fixed on 2.0 branch.
> This can be part of 2.0.1, if we are going with another RC.
>
> We need couple of more PMC votes to pass this vote thread.
>
> On Wed, Nov 7, 2018 at 4:43 PM Eno Thereska 
> wrote:
>
> > Two JIRAs are still marked as blockers, although it's not clear to me if
> > they really are. Any update?
> > Thanks
> > Eno
> >
> > On Thu, Nov 1, 2018 at 5:10 PM Manikumar 
> > wrote:
> >
> > > We were waiting for the system test results. There were few failures:
> > > KAFKA-7579,  KAFKA-7559, KAFKA-7561
> > > they are not blockers for 2.0.1 release. We need more votes from
> > > PMC/committers :)
> > >
> > > Thanks Stanislav! for the system test results.
> > >
> > > Thanks,
> > > Manikumar
> > >
> > > On Thu, Nov 1, 2018 at 10:20 PM Eno Thereska 
> > > wrote:
> > >
> > > > Anything else holding this up?
> > > >
> > > > Thanks
> > > > Eno
> > > >
> > > > On Thu, Nov 1, 2018 at 10:27 AM Jakub Scholz 
> wrote:
> > > >
> > > > > +1 (non-binding) ... I used the staged binaries and run tests with
> > > > > different clients.
> > > > >
> > > > > On Fri, Oct 26, 2018 at 4:29 AM Manikumar <
> manikumar.re...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Hello Kafka users, developers and client-developers,
> > > > > >
> > > > > > This is the first candidate for release of Apache Kafka 2.0.1.
> > > > > >
> > > > > > This is a bug fix release closing 49 tickets:
> > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
> > > > > >
> > > > > > Release notes for the 2.0.1 release:
> > > > > >
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
> > > > > >
> > > > > > *** Please download, test and vote by  Tuesday, October 30, end
> of
> > > day
> > > > > >
> > > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > > http://kafka.apache.org/KEYS
> > > > > >
> > > > > > * Release artifacts to be voted upon (source and binary):
> > > > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
> > > > > >
> > > > > > * Maven artifacts to be voted upon:
> > > > > > https://repository.apache.org/content/groups/staging/
> > > > > >
> > > > > > * Javadoc:
> > > > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
> > > > > >
> > > > > > * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> > > > > > https://github.com/apache/kafka/releases/tag/2.0.1-rc0
> > > > > >
> > > > > > * Documentation:
> > > > > > http://kafka.apache.org/20/documentation.html
> > > > > >
> > > > > > * Protocol:
> > > > > > http://kafka.apache.org/20/protocol.html
> > > > > >
> > > > > > * Successful Jenkins builds for the 2.0 branch:
> > > > > > Unit/integration tests:
> > > > > https://builds.apache.org/job/kafka-2.0-jdk8/177/
> > > > > >
> > > > > > /**
> > > > > >
> > > > > > Thanks,
> > > > > > Manikumar
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] 2.0.1 RC0

2018-11-07 Thread Eno Thereska
Thanks Manikumar, I think you need to make a call on KAFKA-7579 as release
manager and push this through.

Eno

On Wed, Nov 7, 2018 at 1:01 PM Manikumar  wrote:

> KAFKA-7581, KAFKA-7579 are not blockers for 2.0.1 release. KAFKA-7579 got
> fixed on 2.0 branch.
> This can be part of 2.0.1, if we are going with another RC.
>
> We need couple of more PMC votes to pass this vote thread.
>
> On Wed, Nov 7, 2018 at 4:43 PM Eno Thereska 
> wrote:
>
> > Two JIRAs are still marked as blockers, although it's not clear to me if
> > they really are. Any update?
> > Thanks
> > Eno
> >
> > On Thu, Nov 1, 2018 at 5:10 PM Manikumar 
> > wrote:
> >
> > > We were waiting for the system test results. There were few failures:
> > > KAFKA-7579,  KAFKA-7559, KAFKA-7561
> > > they are not blockers for 2.0.1 release. We need more votes from
> > > PMC/committers :)
> > >
> > > Thanks Stanislav! for the system test results.
> > >
> > > Thanks,
> > > Manikumar
> > >
> > > On Thu, Nov 1, 2018 at 10:20 PM Eno Thereska 
> > > wrote:
> > >
> > > > Anything else holding this up?
> > > >
> > > > Thanks
> > > > Eno
> > > >
> > > > On Thu, Nov 1, 2018 at 10:27 AM Jakub Scholz 
> wrote:
> > > >
> > > > > +1 (non-binding) ... I used the staged binaries and run tests with
> > > > > different clients.
> > > > >
> > > > > On Fri, Oct 26, 2018 at 4:29 AM Manikumar <
> manikumar.re...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Hello Kafka users, developers and client-developers,
> > > > > >
> > > > > > This is the first candidate for release of Apache Kafka 2.0.1.
> > > > > >
> > > > > > This is a bug fix release closing 49 tickets:
> > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
> > > > > >
> > > > > > Release notes for the 2.0.1 release:
> > > > > >
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
> > > > > >
> > > > > > *** Please download, test and vote by  Tuesday, October 30, end
> of
> > > day
> > > > > >
> > > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > > http://kafka.apache.org/KEYS
> > > > > >
> > > > > > * Release artifacts to be voted upon (source and binary):
> > > > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
> > > > > >
> > > > > > * Maven artifacts to be voted upon:
> > > > > > https://repository.apache.org/content/groups/staging/
> > > > > >
> > > > > > * Javadoc:
> > > > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
> > > > > >
> > > > > > * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> > > > > > https://github.com/apache/kafka/releases/tag/2.0.1-rc0
> > > > > >
> > > > > > * Documentation:
> > > > > > http://kafka.apache.org/20/documentation.html
> > > > > >
> > > > > > * Protocol:
> > > > > > http://kafka.apache.org/20/protocol.html
> > > > > >
> > > > > > * Successful Jenkins builds for the 2.0 branch:
> > > > > > Unit/integration tests:
> > > > > https://builds.apache.org/job/kafka-2.0-jdk8/177/
> > > > > >
> > > > > > /**
> > > > > >
> > > > > > Thanks,
> > > > > > Manikumar
> > > > > >
> > > > >
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-7560) PushHttpMetricsReporter should not convert metric value to double

2018-11-07 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7560.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   2.2.0

Issue resolved by pull request 5886
[https://github.com/apache/kafka/pull/5886]

> PushHttpMetricsReporter should not convert metric value to double
> -
>
> Key: KAFKA-7560
> URL: https://issues.apache.org/jira/browse/KAFKA-7560
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Stanislav Kozlovski
>Assignee: Dong Lin
>Priority: Blocker
> Fix For: 2.2.0, 2.1.0
>
>
> Currently PushHttpMetricsReporter converts the value from 
> KafkaMetric.metricValue() to double. This does not work for non-numerical 
> metrics such as the version in AppInfoParser, whose value can be a string. This 
> caused an issue for PushHttpMetricsReporter, which in turn caused the system test 
> kafkatest.tests.client.quota_test.QuotaTest.test_quota to fail with the 
> following exception:  
> {code:java}
>  File "/opt/kafka-dev/tests/kafkatest/tests/client/quota_test.py", line 196, 
> in validate     metric.value for k, metrics in 
> producer.metrics(group='producer-metrics', name='outgoing-byte-rate', 
> client_id=producer.client_id) for metric in metrics ValueError: max() arg is 
> an empty sequence
> {code}
> Since we allow the metric value to be an object, PushHttpMetricsReporter should 
> also read the metric value as an object and pass it to the HTTP server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
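The gist of that fix can be illustrated in a few lines of plain Java. Everything below is a hypothetical sketch — the class and method names are invented for illustration and are not the actual PushHttpMetricsReporter code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MetricValueDemo {
    // Old behavior: forcing every metric value to double throws for
    // non-numeric metrics such as a version string like "2.1.0".
    static double asDouble(Object metricValue) {
        return (Double) metricValue; // ClassCastException for a String value
    }

    // Fixed behavior: keep the value as an Object and let the HTTP
    // payload serializer decide how to render it.
    static Object asObject(Object metricValue) {
        return metricValue;
    }

    public static void main(String[] args) {
        Map<String, Object> metrics = new LinkedHashMap<>();
        metrics.put("outgoing-byte-rate", 1024.5); // numeric metric
        metrics.put("version", "2.1.0");           // non-numeric metric

        // Passing values through as Object handles both kinds.
        for (Map.Entry<String, Object> e : metrics.entrySet()) {
            System.out.println(e.getKey() + " = " + asObject(e.getValue()));
        }
    }
}
```

The numeric metrics are unaffected; only the forced double cast on string-valued metrics goes away.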


Re: [VOTE] 2.0.1 RC0

2018-11-07 Thread Manikumar
KAFKA-7581 and KAFKA-7579 are not blockers for the 2.0.1 release. KAFKA-7579 is
fixed on the 2.0 branch.
It can be part of 2.0.1 if we go with another RC.

We need a couple more PMC votes to pass this vote thread.

On Wed, Nov 7, 2018 at 4:43 PM Eno Thereska  wrote:

> Two JIRAs are still marked as blockers, although it's not clear to me if
> they really are. Any update?
> Thanks
> Eno
>
> On Thu, Nov 1, 2018 at 5:10 PM Manikumar 
> wrote:
>
> > We were waiting for the system test results. There were few failures:
> > KAFKA-7579,  KAFKA-7559, KAFKA-7561
> > they are not blockers for 2.0.1 release. We need more votes from
> > PMC/committers :)
> >
> > Thanks Stanislav! for the system test results.
> >
> > Thanks,
> > Manikumar
> >
> > On Thu, Nov 1, 2018 at 10:20 PM Eno Thereska 
> > wrote:
> >
> > > Anything else holding this up?
> > >
> > > Thanks
> > > Eno
> > >
> > > On Thu, Nov 1, 2018 at 10:27 AM Jakub Scholz  wrote:
> > >
> > > > +1 (non-binding) ... I used the staged binaries and run tests with
> > > > different clients.
> > > >
> > > > On Fri, Oct 26, 2018 at 4:29 AM Manikumar  >
> > > > wrote:
> > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > > This is the first candidate for release of Apache Kafka 2.0.1.
> > > > >
> > > > > This is a bug fix release closing 49 tickets:
> > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
> > > > >
> > > > > Release notes for the 2.0.1 release:
> > > > >
> http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
> > > > >
> > > > > *** Please download, test and vote by  Tuesday, October 30, end of
> > day
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > http://kafka.apache.org/KEYS
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
> > > > >
> > > > > * Maven artifacts to be voted upon:
> > > > > https://repository.apache.org/content/groups/staging/
> > > > >
> > > > > * Javadoc:
> > > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
> > > > >
> > > > > * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> > > > > https://github.com/apache/kafka/releases/tag/2.0.1-rc0
> > > > >
> > > > > * Documentation:
> > > > > http://kafka.apache.org/20/documentation.html
> > > > >
> > > > > * Protocol:
> > > > > http://kafka.apache.org/20/protocol.html
> > > > >
> > > > > * Successful Jenkins builds for the 2.0 branch:
> > > > > Unit/integration tests:
> > > > https://builds.apache.org/job/kafka-2.0-jdk8/177/
> > > > >
> > > > > /**
> > > > >
> > > > > Thanks,
> > > > > Manikumar
> > > > >
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-7579) System Test Failure - security_test.SecurityTest.test_client_ssl_endpoint_validation_failure

2018-11-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7579.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

This is fixed via KAFKA-7561.

> System Test Failure - 
> security_test.SecurityTest.test_client_ssl_endpoint_validation_failure
> 
>
> Key: KAFKA-7579
> URL: https://issues.apache.org/jira/browse/KAFKA-7579
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Priority: Blocker
> Fix For: 2.1.0, 2.0.2
>
>
> The security_test.SecurityTest.test_client_ssl_endpoint_validation_failure 
> test with security_protocol=SSL fails to pass
> {code:java}
> SESSION REPORT (ALL TESTS) ducktape version: 0.7.1 session_id: 
> 2018-10-31--002 run time: 2 minutes 12.452 seconds tests run: 2 passed: 1 
> failed: 1 ignored: 0 test_id: 
> kafkatest.tests.core.security_test.SecurityTest.test_client_ssl_endpoint_validation_failure.interbroker_security_protocol=PLAINTEXT.security_protocol=SSL
>  status: FAIL run time: 1 minute 2.149 seconds Node ducker@ducker05: did not 
> stop within the specified timeout of 15 seconds Traceback (most recent call 
> last): File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", 
> line 132, in run data = self.run_test() File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", 
> line 185, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/mark/_mark.py", line 324, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File "/opt/kafka-dev/tests/kafkatest/tests/core/security_test.py", line 114, 
> in test_client_ssl_endpoint_validation_failure self.consumer.stop() File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py",
>  line 80, in stop super(BackgroundThreadService, self).stop() File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/services/service.py", line 
> 278, in stop self.stop_node(node) File 
> "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 254, in 
> stop_node (str(node.account), str(self.stop_timeout_sec)) AssertionError: 
> Node ducker@ducker05: did not stop within the specified timeout of 15 seconds 
> test_id: 
> kafkatest.tests.core.security_test.SecurityTest.test_client_ssl_endpoint_validation_failure.interbroker_security_protocol=SSL.security_protocol=PLAINTEXT
>  status: PASS run time: 1 minute 10.144 seconds ducker-ak test failed
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] 2.0.1 RC0

2018-11-07 Thread Eno Thereska
Two JIRAs are still marked as blockers, although it's not clear to me if
they really are. Any update?
Thanks
Eno

On Thu, Nov 1, 2018 at 5:10 PM Manikumar  wrote:

> We were waiting for the system test results. There were few failures:
> KAFKA-7579,  KAFKA-7559, KAFKA-7561
> they are not blockers for 2.0.1 release. We need more votes from
> PMC/committers :)
>
> Thanks Stanislav! for the system test results.
>
> Thanks,
> Manikumar
>
> On Thu, Nov 1, 2018 at 10:20 PM Eno Thereska 
> wrote:
>
> > Anything else holding this up?
> >
> > Thanks
> > Eno
> >
> > On Thu, Nov 1, 2018 at 10:27 AM Jakub Scholz  wrote:
> >
> > > +1 (non-binding) ... I used the staged binaries and run tests with
> > > different clients.
> > >
> > > On Fri, Oct 26, 2018 at 4:29 AM Manikumar 
> > > wrote:
> > >
> > > > Hello Kafka users, developers and client-developers,
> > > >
> > > > This is the first candidate for release of Apache Kafka 2.0.1.
> > > >
> > > > This is a bug fix release closing 49 tickets:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
> > > >
> > > > Release notes for the 2.0.1 release:
> > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
> > > >
> > > > *** Please download, test and vote by  Tuesday, October 30, end of
> day
> > > >
> > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > http://kafka.apache.org/KEYS
> > > >
> > > > * Release artifacts to be voted upon (source and binary):
> > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
> > > >
> > > > * Maven artifacts to be voted upon:
> > > > https://repository.apache.org/content/groups/staging/
> > > >
> > > > * Javadoc:
> > > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
> > > >
> > > > * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> > > > https://github.com/apache/kafka/releases/tag/2.0.1-rc0
> > > >
> > > > * Documentation:
> > > > http://kafka.apache.org/20/documentation.html
> > > >
> > > > * Protocol:
> > > > http://kafka.apache.org/20/protocol.html
> > > >
> > > > * Successful Jenkins builds for the 2.0 branch:
> > > > Unit/integration tests:
> > > https://builds.apache.org/job/kafka-2.0-jdk8/177/
> > > >
> > > > /**
> > > >
> > > > Thanks,
> > > > Manikumar
> > > >
> > >
> >
>


Re: [VOTE] KIP-374: Add '--help' option to all available Kafka CLI commands

2018-11-07 Thread Viktor Somogyi-Vass
+1 (non-binding)

Thanks for this KIP.

Viktor

On Wed, Oct 31, 2018 at 2:43 PM Srinivas Reddy 
wrote:

> Hi All,
>
> I would like to call for a vote on KIP-374:
> https://cwiki.apache.org/confluence/x/FgSQBQ
>
> Summary:
> Currently, the '--help' option is recognized by some Kafka commands
> but not all. To provide a consistent user experience, it would be nice to
> add a '--help' option to all Kafka commands.
>
> I'd appreciate any votes or feedback.
>
> --
> Srinivas Reddy
>
> http://mrsrinivas.com/
>
>
> (Sent via gmail web)
>
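Conceptually, the KIP boils down to recognizing a help flag uniformly across all command-line tools. A dependency-free, purely illustrative sketch of the pattern — the tool name and output text are invented, and this is not the actual Kafka code:

```java
public class HelpOptionDemo {
    // Check the arguments for a help flag before doing any real work,
    // so every tool responds to --help the same way.
    static String run(String[] args) {
        for (String arg : args) {
            if (arg.equals("--help") || arg.equals("-h")) {
                return "Usage: kafka-example-tool [options]\n  --help  print this message";
            }
        }
        return "running tool with " + args.length + " option(s)";
    }

    public static void main(String[] args) {
        System.out.println(run(new String[]{"--help"}));
    }
}
```

In practice Kafka's tools parse options with a library (jopt-simple) that can register such a flag declaratively; the point of the KIP is simply that every command registers it.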


Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by specifying member id

2018-11-07 Thread Boyang Chen
I took a quick pass over the proposal. First, it's a brilliant initiative from 
Konstantine and the Confluent folks; drafting a proposal like this requires a 
deep understanding of the rebalance protocol! I've summarized some thoughts 
here.


Overall, the motivations of the two proposals align on the following:

  1.  Both hold that resources belonging to the same process should be 
preserved across rebalances.
  2.  Transient failures (e.g., a thread dying on Kubernetes) shouldn't trigger 
resource redistribution. I say "redistribution" rather than "rebalance" here 
because part one of the cooperative proposal could introduce more rebalances, 
but only for must-move resources.
  3.  Scale up/down and rolling bounces cause unnecessary resource shuffling 
that needs to be mitigated.


At the motivation level, I think both approaches could solve or mitigate the 
above issues. They differ mainly in design philosophy, or I would say in 
perspective: that of a framework user versus that of an algorithm designer.


The two proposals have different focuses. KIP-345 places more fine-grained 
control on the broker side to reduce unnecessary rebalances while keeping the 
client logic intact. This cause-and-effect is intuitive for developers who are 
not deeply familiar with the rebalance protocol. As a developer working with 
Kafka Streams daily, I'd be happy to see a simplified rebalance protocol and to 
just focus on maintaining the stream/consumer jobs; too many rebalances raise 
my concern about job health. In short, static membership has the advantage of 
reducing that mental burden.


The cooperative proposal takes a thoughtful approach on the client side: 
fine-grained control over group join/exit behaviors that improves the current 
dynamic membership to address the above issues. Our ideas do overlap on 
delaying the rebalance during scale up/down, which could reduce state shuffling 
and decouple the behavior from the session timeout, which is already 
overloaded. In this sense, I believe both approaches would serve well in making 
a "reasonable rebalance" happen at the "right time".


However, based on my understanding, neither KIP-345 nor cooperative rebalancing 
solves the problem Mike has raised: could we do a better job of scaling up/down 
at the ideal time? My initial response was to introduce an admin API, which I 
now feel is sub-optimal, because the goal of a smooth transition is to make 
sure the newly started hosts are actually "ready". For example:


Suppose we have 4 instances reading from 8 topic partitions (= 8 tasks) and at 
some point want to scale up to 8 hosts. With the current improvements we could 
reduce 4 potential rebalances to a single one, but the new hosts are not yet 
known to be "ready" if they need to reconstruct local state. To be actually 
ready, we need 4 standby tasks running on those empty hosts, and the leader 
needs to wait for a "replay/reconstruction complete" signal before involving 
them in the main consumer group. Otherwise, the rebalance just kills our 
performance, since we would wait indefinitely for task migration.
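The gating idea in that example can be sketched abstractly. Nothing below exists in Kafka — `Instance`, `restoreLag`, and `shouldTriggerRebalance` are invented names that simply illustrate admitting new hosts into the group only once their state replay has finished:

```java
import java.util.ArrayList;
import java.util.List;

public class ScaleUpGate {
    static class Instance {
        final String id;
        long restoreLag; // records left to replay into local state
        Instance(String id, long restoreLag) { this.id = id; this.restoreLag = restoreLag; }
        boolean ready() { return restoreLag == 0; }
    }

    // Trigger the single scale-up rebalance only when every joining
    // instance has fully reconstructed its local state as a standby.
    static boolean shouldTriggerRebalance(List<Instance> joining) {
        return joining.stream().allMatch(Instance::ready);
    }

    public static void main(String[] args) {
        List<Instance> joining = new ArrayList<>();
        joining.add(new Instance("new-host-1", 5_000)); // still replaying
        joining.add(new Instance("new-host-2", 0));     // caught up

        System.out.println(shouldTriggerRebalance(joining)); // false: host-1 not ready
        joining.get(0).restoreLag = 0;
        System.out.println(shouldTriggerRebalance(joining)); // true: safe to migrate tasks
    }
}
```

The open design question in the thread is who observes this readiness signal — the broker-side coordinator (KIP-345's territory) or the client-side assignor (the cooperative proposal's territory).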


Scale down is also tricky, in that we are unable to define a "true" departure 
of a member. Rebalancing immediately after a true departure is optimal compared 
with human intervention. Does this make sense?


My intuition is that the cooperative approach, implemented on the client side, 
could handle scaling cases better than KIP-345, since defining a "replaying" 
stage involves many algorithmic changes that I feel would over-complicate 
broker logic if implemented on the coordinator. If we let KIP-345 focus on 
reducing unnecessary rebalances, and let the cooperative approach focus on 
judging the best timing for scale up/down, the two efforts could be aligned. In 
the long term, I feel the more complex consumer-protocol improvements should 
happen on the client side rather than the server side, since the client is 
easier to test and has less global impact on the entire Kafka production 
cluster.


Thanks again to Konstantine, Matthias, and the other folks for coming up with 
this great client proposal. It is a great complement to KIP-345. At a high 
level, our paths don't collide and both proposals make sense here; we just need 
better sync to avoid duplicated effort :)


Best,

Boyang



From: Boyang Chen 
Sent: Wednesday, November 7, 2018 1:57 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by 
specifying member id

Thanks, Matthias, for bringing up this awesome proposal! I shall take a deeper 
look and compare the two proposals.


Meanwhile, for scaling stateful streaming applications specifically, we could 
introduce a new status called "learner", in which newly started hosts first 
catch up with the assigned tasks' progress before triggering the rebalance, so 
that we don't see a sudden dip in progress. However, it is built on top