Build failed in Jenkins: kafka-2.1-jdk8 #246

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Increase the timeout in one of Connect's distributed system 
tests


--
[...truncated 922.39 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.UncleanLeaderElectionTest > 
testTopicUncleanLeaderElectionEnable STARTED

kafka.integration.UncleanLeaderElectionTest > 
testTopicUncleanLeaderElectionEnable PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testHashAndEquals STARTED

kafka.cluster.BrokerEndPointTest > testHashAndEquals PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack PASSED


Build failed in Jenkins: kafka-2.2-jdk8-old #194

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[john] MINOR: clarify node grouping of input topics using pattern subscription


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H41 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision eca17e51cb7a7240f47850f72341b3eecc345676 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f eca17e51cb7a7240f47850f72341b3eecc345676
Commit message: "MINOR: clarify node grouping of input topics using pattern 
subscription (#7793)"
 > git rev-list --no-walk 77eecb2c874e8db56214065e63fbe84391b74bce # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins5683372788329180078.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins5683372788329180078.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=eca17e51cb7a7240f47850f72341b3eecc345676, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user b...@confluent.io


[jira] [Created] (KAFKA-9285) Implement failed message topic to account for processing lag during failure

2019-12-06 Thread Richard Yu (Jira)
Richard Yu created KAFKA-9285:
-

 Summary: Implement failed message topic to account for processing 
lag during failure
 Key: KAFKA-9285
 URL: https://issues.apache.org/jira/browse/KAFKA-9285
 Project: Kafka
  Issue Type: New Feature
  Components: consumer
Reporter: Richard Yu


Presently, under Kafka's failure semantics, when a consumer crashes, the user 
is typically responsible for both detecting and restarting the failed 
consumer. During the period when the consumer is dead, no records are 
consumed, so lag accumulates. There have been previous attempts to resolve 
this problem: when a failure is detected by the broker, a substitute consumer 
is started (the so-called [Rebalance 
Consumer|https://cwiki.apache.org/confluence/display/KAFKA/KIP-333%3A+Add+faster+mode+of+rebalancing])
 which continues processing records in the failed consumer's stead. 

However, this has complications: records are only stored locally, and if this 
substitute consumer fails as well, that data is lost. Instead, we need to 
consider how we can still process these records and at the same time 
effectively _persist_ them. It is here that I propose the concept of a _failed 
message topic._ At a high level, it works like this: when we find that a 
consumer has failed, messages that were originally meant for that consumer 
are redirected to the failed message topic. The user can choose to assign 
consumers to this topic, which would consume messages from failed consumers 
while the other consumer threads are down. 

Naturally, records from different source topics cannot all go into the same 
failed message topic, since we would not be able to tell which records belong 
to which consumer.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9281) Consider more flexible node grouping for Pattern subscription

2019-12-06 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-9281.

Resolution: Duplicate

> Consider more flexible node grouping for Pattern subscription
> -
>
> Key: KAFKA-9281
> URL: https://issues.apache.org/jira/browse/KAFKA-9281
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>
> Input topics subscribed to using pattern subscription will currently all be 
> grouped into the same node group, meaning the number of tasks is determined 
> by the maximum partition count of any matching topic. This means less 
> overhead per partition and is suitable for some scenarios, but it limits the 
> ability to scale out by preventing further parallelization that is possible 
> with independent partitions. We should consider making it possible for 
> pattern subscription to create a task for every partition summed across all 
> matching topics.
> We don't necessarily want to change the default (current) behavior, but we 
> could make this more flexible either by autoscaling based on some heuristic, 
> or making it customizable by the user. One possibility would be to augment 
> the Pattern based source KStream method with an optional parameter that to 
> tell Streams how to generate tasks for that pattern, for example
> {code:java}
> public synchronized <K, V> KStream<K, V> stream(final Pattern topicPattern, final int numTasks);
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.1-jdk8 #245

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7489: Fix ConnectDistributedTest system test to use KafkaVersion


--
[...truncated 889.07 KB...]

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLagMetricsCleanUpWithAssign 
STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLagMetricsCleanUpWithAssign 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerStickyAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerStickyAssignment PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance STARTED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst STARTED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst PASSED

kafka.api.PlaintextConsumerTest > testSeek STARTED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit STARTED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes STARTED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic STARTED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes STARTED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testListTopics STARTED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors STARTED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption STARTED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor STARTED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue STARTED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testHeaders STARTED

kafka.api.PlaintextConsumerTest > testHeaders PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment PASSED

kafka.api.PlaintextConsumerTest > testHeadersSerializerDeserializer STARTED

kafka.api.PlaintextConsumerTest > testHeadersSerializerDeserializer PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume STARTED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > 
testQuotaMetricsNotCreatedIfNoQuotasConfigured STARTED

kafka.api.PlaintextConsumerTest > 
testQuotaMetricsNotCreatedIfNoQuotasConfigured PASSED

kafka.api.PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithSubscribe STARTED

kafka.api.PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithSubscribe PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime STARTED

kafka.api.PlaintextConsumerTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1009

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Increase the timeout in one of Connect's distributed system 
tests

[github] MINOR: Bump system test version from 2.2.1 to 2.2.2 (#7765)

[vvcephei] MINOR: clarify node grouping of input topics using pattern 
subscription


--
[...truncated 2.76 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED


Jenkins build is back to normal : kafka-2.2-jdk8 #11

2019-12-06 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9284) Add documentation and system tests for TLS-encrypted Zookeeper connections

2019-12-06 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-9284:


 Summary: Add documentation and system tests for TLS-encrypted 
Zookeeper connections
 Key: KAFKA-9284
 URL: https://issues.apache.org/jira/browse/KAFKA-9284
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, system tests
Affects Versions: 2.4.0
Reporter: Ron Dagostino
Assignee: Ron Dagostino


TLS connectivity to Zookeeper became available in the 3.5.x versions. Now that 
Kafka includes these Zookeeper versions, it should supply documentation that 
distills the steps required to take advantage of TLS, and include system 
tests to validate such setups.
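
As a rough sketch of what such documentation might cover: the ZooKeeper 3.5 
client is typically pointed at TLS via JVM system properties along the 
following lines. The exact property set and the file paths here are 
assumptions, to be confirmed by the proposed documentation and system tests.
{noformat}
# Assumed ZooKeeper 3.5 client-side TLS settings (paths are placeholders):
export KAFKA_OPTS="-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty \
  -Dzookeeper.client.secure=true \
  -Dzookeeper.ssl.keyStore.location=/path/to/kafka.keystore.jks \
  -Dzookeeper.ssl.keyStore.password=changeit \
  -Dzookeeper.ssl.trustStore.location=/path/to/kafka.truststore.jks \
  -Dzookeeper.ssl.trustStore.password=changeit"
{noformat}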



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [kafka-clients] [VOTE] 2.4.0 RC3

2019-12-06 Thread Israel Ekpo
This particular test keeps failing for me

./gradlew :core:integrationTest --tests
kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker

I have created an issue to document my observations

https://issues.apache.org/jira/browse/KAFKA-9283

It fails sometimes when the environment is Scala 2.12.10, but fails
consistently in Scala 2.13.0 and 2.13.1.

All the other tests pass without any issues; only this one fails.

My environment details are in the issue





On Fri, Dec 6, 2019 at 4:49 PM Israel Ekpo  wrote:

>
> Ran the tests in the following environments using Eric's script available
> here:
>
> https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
>
> OS: Ubuntu 18.04.3 LTS
> Java Version: OpenJDK 11.0.4
> Scala Versions: 2.12.10, 2.13.0, 2.13.1
> Gradle Version: 5.6.2
>
> I have made one observation in the release artifacts here:
> https://home.apache.org/~manikumar/kafka-2.4.0-rc3/kafka-2.4.0-src.tgz
>
> The latest available release of Scala is 2.13.1
>
> It looks like the artifacts were built in an environment where the Scala
> version in the CLI is set to 2.12.10, so this is the value in
> gradle.properties for the source artifact:
>
> scalaVersion=2.12.10
>
> Not sure if that needs to change, but it seems that in previous releases
> this was usually set to the highest available Scala version at the time of
> release.
>
> Also, the test
> kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> failed the first time I ran all the tests but it passed the second and
> third time I ran the group kafka.admin.ReassignPartitionsClusterTest and
> just the single method
> (kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker).
>
>
> I will re-run the tests in Scala versions 2.13.0 and 2.13.1 and share
> my observations later.
>
> So far, it looks good.
>
> isekpo@ossvalidator:~$ lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:Ubuntu 18.04.3 LTS
> Release:18.04
> Codename:   bionic
>
> isekpo@ossvalidator:~$ java -version
> openjdk version "11.0.4" 2019-07-16
> OpenJDK Runtime Environment (build 11.0.4+11-post-Ubuntu-1ubuntu218.04.3)
> OpenJDK 64-Bit Server VM (build 11.0.4+11-post-Ubuntu-1ubuntu218.04.3,
> mixed mode, sharing)
>
> isekpo@ossvalidator:~$ scala -version
> Scala code runner version 2.12.10 -- Copyright 2002-2019, LAMP/EPFL and
> Lightbend, Inc.
> isekpo@ossvalidator:~$
>
> isekpo@ossvalidator:~$ gradle -version
>
> 
> Gradle 5.6.2
> 
>
> Build time:   2019-09-05 16:13:54 UTC
> Revision: 55a5e53d855db8fc7b0e494412fc624051a8e781
>
> Kotlin:   1.3.41
> Groovy:   2.5.4
> Ant:  Apache Ant(TM) version 1.9.14 compiled on March 12 2019
> JVM:  11.0.4 (Ubuntu 11.0.4+11-post-Ubuntu-1ubuntu218.04.3)
> OS:   Linux 5.0.0-1027-azure amd64
>
> 1309 tests completed, 1 failed, 17 skipped
>
> > Task :core:integrationTest FAILED
>
> FAILURE: Build failed with an exception.
>
> * What went wrong:
> Execution failed for task ':core:integrationTest'.
> > There were failing tests. See the report at:
> file:///home/isekpo/scratchpad/14891.out/kafka-2.4.0-src/core/build/reports/tests/integrationTest/index.html
>
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or
> --debug option to get more log output. Run with --scan to get full insights.
>
> * Get more help at https://help.gradle.org
>
> Deprecated Gradle features were used in this build, making it incompatible
> with Gradle 6.0.
> Use '--warning-mode all' to show the individual deprecation warnings.
> See
> https://docs.gradle.org/5.6.2/userguide/command_line_interface.html#sec:command_line_warnings
>
> BUILD FAILED in 53m 52s
> 14 actionable tasks: 3 executed, 11 up-to-date
>
>
> Details for failed test:
>
> shouldMoveSinglePartitionWithinBroker
> org.scalatest.exceptions.TestFailedException: Partition should have been
> moved to the expected log directory
> at
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> at
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> at
> org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
> at org.scalatest.Assertions.fail(Assertions.scala:1091)
> at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> at org.scalatest.Assertions$.fail(Assertions.scala:1389)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:842)
> at
> kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker(ReassignPartitionsClusterTest.scala:177)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> at
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> 

[jira] [Created] (KAFKA-9283) Flaky Test - kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker

2019-12-06 Thread Israel Ekpo (Jira)
Israel Ekpo created KAFKA-9283:
--

 Summary: Flaky Test - 
kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
 Key: KAFKA-9283
 URL: https://issues.apache.org/jira/browse/KAFKA-9283
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.4.0
 Environment: OS: Ubuntu 18.04.3 LTS
Java Version: OpenJDK 11.0.4
Scala Versions: 2.13.0, 2.13.1
Gradle Version: 5.6.2
Reporter: Israel Ekpo
 Fix For: 2.4.0


This same test fails occasionally when run in Scala 2.12.10, but has been 
failing consistently in Scala versions 2.13.0 and 2.13.1. Needs review.

OS: Ubuntu 18.04.3 LTS
Java Version: OpenJDK 11.0.4
Scala Versions: 2.13.0, 2.13.1
Gradle Version: 5.6.2

./gradlew :core:integrationTest --tests 
kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker

> Configure project :
Building project 'core' with Scala version 2.13.1
Building project 'streams-scala' with Scala version 2.13.1

> Task :core:integrationTest FAILED
kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker 
failed, log available in 
/home/isekpo/scratchpad/111744.out/kafka-2.4.0-src/core/build/reports/testOutput/kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker.test.stdout

kafka.admin.ReassignPartitionsClusterTest > 
shouldMoveSinglePartitionWithinBroker FAILED
 org.scalatest.exceptions.TestFailedException: Partition should have been moved 
to the expected log directory
 at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
 at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
 at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
 at org.scalatest.Assertions.fail(Assertions.scala:1091)
 at org.scalatest.Assertions.fail$(Assertions.scala:1087)
 at org.scalatest.Assertions$.fail(Assertions.scala:1389)
 at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:842)
 at 
kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker(ReassignPartitionsClusterTest.scala:177)

1 test completed, 1 failed

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:integrationTest'.
> There were failing tests. See the report at: 
> file:///home/isekpo/scratchpad/111744.out/kafka-2.4.0-src/core/build/reports/tests/integrationTest/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See 
https://docs.gradle.org/5.6.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 1m 56s
13 actionable tasks: 4 executed, 9 up-to-date



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.0-jdk8 #306

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Increase the timeout in one of Connect's distributed system 
tests


--
[...truncated 435.76 KB...]

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls STARTED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED


Build failed in Jenkins: kafka-trunk-jdk11 #1008

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9091: Add a metric tracking the number of open connections with a


--
[...truncated 2.75 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [kafka-clients] [VOTE] 2.4.0 RC3

2019-12-06 Thread Israel Ekpo
Ran the tests in the following environments using Eric's script available
here:

https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh

OS: Ubuntu 18.04.3 LTS
Java Version: OpenJDK 11.0.4
Scala Versions: 2.12.10, 2.13.0, 2.13.1
Gradle Version: 5.6.2

I have made one observation in the release artifacts here:
https://home.apache.org/~manikumar/kafka-2.4.0-rc3/kafka-2.4.0-src.tgz

The latest available release of Scala is 2.13.1

It looks like the artifacts were built in an environment where the Scala
version in the CLI is set to 2.12.10, so this is the value in
gradle.properties for the source artifact:

scalaVersion=2.12.10

Not sure if that needs to change, but it seems that in previous releases this
was usually set to the highest available Scala version at the time of
release.

Also, the test
kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
failed the first time I ran all the tests but it passed the second and
third time I ran the group kafka.admin.ReassignPartitionsClusterTest and
just the single method
(kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker).


I will re-run the tests in Scala versions 2.13.0 and 2.13.1 and share
my observations later.

So far, it looks good.

isekpo@ossvalidator:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 18.04.3 LTS
Release:18.04
Codename:   bionic

isekpo@ossvalidator:~$ java -version
openjdk version "11.0.4" 2019-07-16
OpenJDK Runtime Environment (build 11.0.4+11-post-Ubuntu-1ubuntu218.04.3)
OpenJDK 64-Bit Server VM (build 11.0.4+11-post-Ubuntu-1ubuntu218.04.3,
mixed mode, sharing)

isekpo@ossvalidator:~$ scala -version
Scala code runner version 2.12.10 -- Copyright 2002-2019, LAMP/EPFL and
Lightbend, Inc.
isekpo@ossvalidator:~$

isekpo@ossvalidator:~$ gradle -version


Gradle 5.6.2


Build time:   2019-09-05 16:13:54 UTC
Revision: 55a5e53d855db8fc7b0e494412fc624051a8e781

Kotlin:   1.3.41
Groovy:   2.5.4
Ant:  Apache Ant(TM) version 1.9.14 compiled on March 12 2019
JVM:  11.0.4 (Ubuntu 11.0.4+11-post-Ubuntu-1ubuntu218.04.3)
OS:   Linux 5.0.0-1027-azure amd64

1309 tests completed, 1 failed, 17 skipped

> Task :core:integrationTest FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:integrationTest'.
> There were failing tests. See the report at:
file:///home/isekpo/scratchpad/14891.out/kafka-2.4.0-src/core/build/reports/tests/integrationTest/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or
--debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible
with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See
https://docs.gradle.org/5.6.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 53m 52s
14 actionable tasks: 3 executed, 11 up-to-date


Details for failed test:

shouldMoveSinglePartitionWithinBroker
org.scalatest.exceptions.TestFailedException: Partition should have been
moved to the expected log directory
at
org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
at
org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
at
org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
at org.scalatest.Assertions.fail(Assertions.scala:1091)
at org.scalatest.Assertions.fail$(Assertions.scala:1087)
at org.scalatest.Assertions$.fail(Assertions.scala:1389)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:842)
at
kafka.admin.ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker(ReassignPartitionsClusterTest.scala:177)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)

Build failed in Jenkins: kafka-2.2-jdk8-old #193

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Increase the timeout in one of Connect's distributed system 
tests


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 77eecb2c874e8db56214065e63fbe84391b74bce 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 77eecb2c874e8db56214065e63fbe84391b74bce
Commit message: "MINOR: Increase the timeout in one of Connect's distributed 
system tests (#7790)"
 > git rev-list --no-walk fc2cccaff62392b6b6aec80d7b200a61503be94a # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins847452588570498107.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins847452588570498107.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=77eecb2c874e8db56214065e63fbe84391b74bce, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user b...@confluent.io


Build failed in Jenkins: kafka-2.2-jdk8-old #192

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7489: Fix ConnectDistributedTest system test to use KafkaVersion


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision fc2cccaff62392b6b6aec80d7b200a61503be94a 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fc2cccaff62392b6b6aec80d7b200a61503be94a
Commit message: "KAFKA-7489: Fix ConnectDistributedTest system test to use 
KafkaVersion (backport) (#7791)"
 > git rev-list --no-walk 5fc6f0d076d739772d014db30f0155c72417cead # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins8957704555053716620.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins8957704555053716620.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=fc2cccaff62392b6b6aec80d7b200a61503be94a, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user b...@confluent.io


[jira] [Resolved] (KAFKA-7489) ConnectDistributedTest is always running broker with dev version

2019-12-06 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7489.
--
  Reviewer: John Roesler
Resolution: Fixed

Merged this to the `2.3`, `2.2`, and `2.1` branches. As mentioned above, it's 
already fixed on `2.4` and `trunk`.

> ConnectDistributedTest is always running broker with dev version
> 
>
> Key: KAFKA-7489
> URL: https://issues.apache.org/jira/browse/KAFKA-7489
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect, system tests
>Reporter: Andras Katona
>Assignee: Randall Hauch
>Priority: Major
> Fix For: 2.1.2, 2.2.3, 2.3.2
>
>
> h2. Test class
> kafkatest.tests.connect.connect_distributed_test.ConnectDistributedTest
> h2. Details
> _test_broker_compatibility_ is +parametrized+ with different types of 
> brokers, yet the version is passed as a string to _setup_services_, so 
> KafkaService ends up initialised with the DEV version.
> This is easy to fix: just wrap _broker_version_ with KafkaVersion
> {panel}
> self.setup_services(broker_version=KafkaVersion(broker_version),
>  auto_create_topics=auto_create_topics, security_protocol=security_protocol)
> {panel}
> But the test fails with the parameter LATEST_0_9 with the following error 
> message:
> {noformat}
> Kafka Connect failed to start on node: ducker@ducker02 in condition mode: 
> LISTEN
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Broker Interceptors

2019-12-06 Thread radai
To me this is an "API vs SPI" discussion.
Pluggable broker bits should fall on the "SPI" side, where tighter
coupling is the price you pay for power and performance, the target
audience is small (and supposedly smarter), and compatibility breaks
are more common and accepted.

On Fri, Dec 6, 2019 at 8:39 AM Ismael Juma  wrote:
>
> Public API classes can be found here:
>
> https://kafka.apache.org/24/javadoc/overview-summary.html
>
> Everything else is internal.
>
> Ismael
>
> On Fri, Dec 6, 2019 at 8:20 AM Tom Bentley  wrote:
>
> > Hi Ismael,
> >
> > How come? They're public in the clients jar. I'm not doubting you, all I'm
> > really asking is "how should I have known this?"
> >
> > Tom
> >
> > On Fri, Dec 6, 2019 at 4:12 PM Ismael Juma  wrote:
> >
> > > AbstractRequest and AbstractResponse are currently internal classes.
> > >
> > > Ismael
> > >
> > > On Fri, Dec 6, 2019 at 1:56 AM Tom Bentley  wrote:
> > >
> > > > Hi,
> > > >
> > > > Couldn't this be done, without exposing broker internals, at the slightly
> > > > higher level of AbstractRequest and AbstractResponse? Those classes are
> > > > public. If the observer interface used Java default methods, then adding a
> > > > new request type would not break existing implementations. I'm thinking
> > > > something like this:
> > > >
> > > > ```
> > > > public interface RequestObserver {
> > > >     default void observeAny(RequestContext context, AbstractRequest request) {}
> > > >     default void observe(RequestContext context, MetadataRequest request) {
> > > >         observeAny(context, request);
> > > >     }
> > > >     default void observe(RequestContext context, ProduceRequest request) {
> > > >         observeAny(context, request);
> > > >     }
> > > >     default void observe(RequestContext context, FetchRequest request) {
> > > >         observeAny(context, request);
> > > >     }
> > > >     ...
> > > > }
> > > > ```
> > > >
> > > > And similarly for a `ResponseObserver`. Request classes would implement this
> > > > method:
> > > >
> > > > ```
> > > > public abstract void observeForAudit(RequestContext context,
> > > > RequestObserver requestObserver);
> > > > ```
> > > >
> > > > where the implementation would look like this:
> > > >
> > > > ```
> > > > @Override
> > > > public void observeForAudit(RequestContext context,
> > > >         RequestObserver requestObserver) {
> > > >     requestObserver.observe(context, this);
> > > > }
> > > > ```
> > > >
> > > > I think this is sufficiently abstracted to allow KafkaApis.handle() and
> > > > sendResponse() to call observeForAudit() generically.
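> > > >
> > > > As a rough sketch (everything here is proposed, not existing API), an audit
> > > > implementation of such an interface could then be as simple as:
> > > >
> > > > ```
> > > > public class LoggingRequestObserver implements RequestObserver {
> > > >     @Override
> > > >     public void observeAny(RequestContext context, AbstractRequest request) {
> > > >         // All per-type observe() defaults delegate here, so this logs every request.
> > > >         System.out.println("observed " + request.getClass().getSimpleName());
> > > >     }
> > > > }
> > > > ```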
> > > >
> > > > Kind regards,
> > > >
> > > > Tom
> > > >
> > > > On Wed, Dec 4, 2019 at 6:59 PM Lincong Li 
> > > wrote:
> > > >
> > > > > Hi Thomas,
> > > > >
> > > > > Thanks for your interest in KIP-388. As Ignacio and Radai have mentioned,
> > > > > this
> > > > > <https://github.com/linkedin/kafka/commit/a378c8980af16e3c6d3f6550868ac0fd5a58682e>
> > > > > is our (LinkedIn's) implementation of KIP-388. The implementation and
> > > > > deployment of this broker-side observer has been working very well for us
> > > > > so far. On the other hand, I totally agree with the longer-term concerns
> > > > > raised by other committers. However, we still decided to implement the KIP
> > > > > idea as a hot fix in order to solve our immediate problem and meet our
> > > > > business requirements.
> > > > >
> > > > > The "Rejected Alternatives for Kafka Audit" section at the end of
> > > KIP-388
> > > > > sheds some lights on the client-side auditor/interceptor/observer
> > > (sorry
> > > > > about the potential confusion caused by these words being used
> > > > > interchangeably).
> > > > >
> > > > > Best regards,
> > > > > Lincong Li
> > > > >
> > > > > On Wed, Dec 4, 2019 at 8:15 AM Thomas Aley 
> > > wrote:
> > > > >
> > > > > > Thanks for the responses. I did worry about the challenge of exposing a
> > > > > > vast number of internal classes with a general interceptor framework. A
> > > > > > less general solution, more along the lines of the producer/consumer
> > > > > > interceptors on the client, would satisfy the majority of use cases. If
> > > > > > we are smart, we should be able to come up with a pattern that could be
> > > > > > extended further in future if the community sees the demand.
> > > > > >
> > > > > > Looking through the discussion thread for KIP-388, I see a lot of
> > > good
> > > > > > points to consider and I intend to dive further into this.
> > > > > >
> > > > > >
> > > > > > Tom Aley
> > > > > > thomas.a...@ibm.com
> > > > > >
> > > > > >
> > > > > >
> > > > > > From:   Ismael Juma 
> > > > > > To: Kafka Users 
> > > > > > Cc: dev 
> > > > > > Date:   03/12/2019 16:12
> > > > > > Subject:[EXTERNAL] Re: Broker Interceptors
> > > > > >
> > > > > >
> > > > > >
> > > > > > The main challenge is doing this 

[jira] [Created] (KAFKA-9282) Consider more flexible node grouping for Pattern subscription

2019-12-06 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9282:
--

 Summary: Consider more flexible node grouping for Pattern 
subscription
 Key: KAFKA-9282
 URL: https://issues.apache.org/jira/browse/KAFKA-9282
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Sophie Blee-Goldman


The current grouping for input topics using pattern subscription creates a 
single node group for all matching topics, meaning the number of tasks scales 
with the maximum partition count across all topics. This reduces overhead and 
is suitable for some scenarios, but limits the ability to scale out and 
prevents easily parallelized processing of completely independent partitions. 
We should consider making it possible for the number of tasks to instead scale 
with the total number of partitions summed across all matching input topics.

Ideally Streams could just autoscale based on some heuristic and the currently 
available resources, but we would have to be careful if those things change. 
Alternatively we could just leave this up to the user to decide, potentially by 
augmenting the Pattern-based source KStream method with a new overload allowing 
this grouping to be specified. For example
{code:java}
StreamsBuilder {
  public <K, V> KStream<K, V> stream(final Pattern topicPattern, final int numTasks);
}
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Proposal for EmbeddedKafkaCluster: Support JUnit 5 Extension in addition to JUnit 4 Rule

2019-12-06 Thread John Roesler
Hi Tommy,

There are some practically ancient discussions around doing this exact thing, 
but they haven't generally come to fruition because EmbeddedKafkaCluster itself 
isn't really a slam dunk, usability-wise. You'll notice that copying it (i.e., 
using it _at all_ ) is the option of last resort in my prior message.

Clearly, we use it (heavily) in the Apache Kafka project's tests, but I can 
tell you that it's kind of a source of "constant sorrow". It's not that it 
doesn't "work" (it does), but it's more that it doesn't provide the "right" 
support for higher level testing of a Streams application (as opposed to 
testing the Streams framework itself). This is why we provided 
TopologyTestDriver: to give you the ability to test your application 
code in an efficient and sane manner.
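
For example, a complete round trip through the test driver looks roughly like 
this (the upper-casing topology and topic names are invented for illustration, 
against the 2.x test-utils API):

```
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.test.ConsumerRecordFactory;
import org.apache.kafka.streams.test.OutputVerifier;

public class UppercaseTopologyTest {
    public static void main(String[] args) {
        // Application topology under test: upper-case every value.
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input")
               .mapValues(v -> v.toUpperCase())
               .to("output");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ttd-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            ConsumerRecordFactory<String, String> factory =
                new ConsumerRecordFactory<>("input", new StringSerializer(), new StringSerializer());
            driver.pipeInput(factory.create("input", "k", "hello"));
            ProducerRecord<String, String> out =
                driver.readOutput("output", new StringDeserializer(), new StringDeserializer());
            OutputVerifier.compareKeyValue(out, "k", "HELLO"); // synchronous, no broker or network involved
        }
    }
}
```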

As it stands, supposing you really can't use TopologyTestDriver, 
EmbeddedKafkaCluster offers no practical benefits over just having a broker 
running locally for your integration tests. My thinking is, why not just do 
that? You can use the maven-exec-plugin, for example, to start and stop the 
broker automatically for your tests.

By the way, I'm not saying that we should not offer a lower-level testing 
support utility, it's just that I don't think we should just move 
EmbeddedKafkaCluster into the public API. Particularly, for testing, we need 
something more efficient and that can be more synchronous, but still presents 
the same API as a broker. This would be a ton of work to design and build, 
though, which I assume is why no one has done it.

Thanks,
-John

On Fri, Dec 6, 2019, at 12:21, Thomas Becker wrote:
>  
> Personally, I would love to see EmbeddedKafkaCluster moved to a public 
> test artifact, similarly to kafka-streams-test-utils. Having to 
> copy/paste internal classes into your project is...unfortunate.
> 
> 
> On Fri, 2019-12-06 at 10:42 -0600, John Roesler wrote:
> > [EXTERNAL EMAIL] Attention: This email was sent from outside TiVo. DO NOT 
> > CLICK any links or attachments unless you expected them. 
> >  
> > 
> > Hi Matthias! 
> > Thanks for the note, and the kind sentiment. 
> > We're always open to improvements like this, so your contribution would 
> > certainly be welcome. 
> > Just FYI, "public interface" changes need to go through the KIP process 
> > (see 
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> >  ). You could of course, open an exploratory PR and ask here for comments 
> > before deciding if you want to make a KIP, if you prefer. Personally, I 
> > often find it clarifying to put together a PR concurrently with the KIP, 
> > because it helps to flush out any "devils in the details", and also can 
> > help the KIP discussion. 
> > Just wanted to make you aware of the process. I'm happy to help you 
> > navigate this process, by the way. 
> > Regarding the specific proposal, the number one question in my mind is 
> > compatibility... I.e., do we add new dependencies, or change dependencies, 
> > that may conflict with what's already on users' classpaths? Would the 
> > change result in any source-code or binary incompatibility with previous 
> > versions? Etc... you get the idea. 
> > We can make such changes, but it's a _lot_ easier to support doing it in a 
> > major release. Right now, that would be 3.0, which is not currently 
> > planned. We're currently working toward 2.5, and then 2.6, and so on, 
> > essentially waiting for a good reason to bump up to 3.0. 
> > All that said, EmbeddedKafkaCluster is an interesting case, because it's 
> > not actually a public API! When Bill wrote the book, he included it (I 
> > assume) because there was no other practical approach to testing available. 
> > However, since the publication, we have added an official integration 
> > testing framework called TopologyTestDriver. When people ask me for testing 
> > advice, I tell them: 1. Use TopologyTestDriver (for Streams testing) 2. If 
> > you need a "real" broker for your test, then set up a pre/post integration 
> > test hook to run Kafka independently (e.g., with Maven). 3. If that's not 
> > practical, then _copy_ EmbeddedKafkaCluster into your own project, don't 
> > depend on an internal test artifact. 
> > To me, this means that we essentially have free rein to make changes to 
> > EmbeddedKafkaCluster, since it _should_ only be used for internal testing. 
> > In that case, I would just evaluate the merits of bumping all the tests in 
> > Apache Kafka up to JUnit 5. Although that's not a public API, it might be a 
> > big enough change to the process to justify a design document and 
> > project-wide discussion. 
> > Well, I guess it turns 

Jenkins build is back to normal : kafka-2.4-jdk8 #100

2019-12-06 Thread Apache Jenkins Server
See 



[jira] [Created] (KAFKA-9281) Consider more flexible node grouping for Pattern subscription

2019-12-06 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9281:
--

 Summary: Consider more flexible node grouping for Pattern 
subscription
 Key: KAFKA-9281
 URL: https://issues.apache.org/jira/browse/KAFKA-9281
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Sophie Blee-Goldman


Input topics subscribed to using pattern subscription will currently all be 
grouped into the same node group, meaning the number of tasks is determined by 
the maximum partition count of any matching topic. This means less overhead per 
partition and is suitable for some scenarios, but it limits the ability to 
scale out by preventing further parallelization that is possible with 
independent partitions. We should consider making it possible for pattern 
subscription to create a task for every partition summed across all matching 
topics.

We don't necessarily want to change the default (current) behavior, but we 
could make this more flexible either by autoscaling based on some heuristic, or 
making it customizable by the user. One possibility would be to augment the 
Pattern-based source KStream method with an optional parameter that tells 
Streams how to generate tasks for that pattern, for example
{code:java}
public synchronized <K, V> KStream<K, V> stream(final Pattern pattern, final int numTasks);
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Proposal for EmbeddedKafkaCluster: Support JUnit 5 Extension in addition to JUnit 4 Rule

2019-12-06 Thread Thomas Becker
Personally, I would love to see EmbeddedKafkaCluster moved to a public test 
artifact, similarly to kafka-streams-test-utils. Having to copy/paste internal 
classes into your project is...unfortunate.


On Fri, 2019-12-06 at 10:42 -0600, John Roesler wrote:

[EXTERNAL EMAIL] Attention: This email was sent from outside TiVo. DO NOT CLICK 
any links or attachments unless you expected them.





Hi Matthias!


Thanks for the note, and the kind sentiment.


We're always open to improvements like this, so your contribution would 
certainly be welcome.


Just FYI, "public interface" changes need to go through the KIP process (see 
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcwiki.apache.org%2Fconfluence%2Fdisplay%2FKAFKA%2FKafka%2BImprovement%2BProposalsdata=02%7C01%7CThomas.Becker%40tivo.com%7C8c3fd9ab40da4062868008d77a6b615c%7Cd05b7c6912014c0db45d7f1dcc227e4d%7C1%7C0%7C637112473947272691sdata=dsPBokqCBrydL8w%2FjLlj4pRfWCwmk%2B%2BQO2tTiKyoQAw%3Dreserved=0
 ). You could of course, open an exploratory PR and ask here for comments 
before deciding if you want to make a KIP, if you prefer. Personally, I often 
find it clarifying to put together a PR concurrently with the KIP, because it 
helps to flush out any "devils in the details", and also can help the KIP 
discussion.


Just wanted to make you aware of the process. I'm happy to help you navigate 
this process, by the way.


Regarding the specific proposal, the number one question in my mind is 
compatibility... I.e., do we add new dependencies, or change dependencies, that 
may conflict with what's already on users' classpaths? Would the change result 
in any source-code or binary incompatibility with previous versions? Etc... you 
get the idea.


We can make such changes, but it's a _lot_ easier to support doing it in a 
major release. Right now, that would be 3.0, which is not currently planned. 
We're currently working toward 2.5, and then 2.6, and so on, essentially 
waiting for a good reason to bump up to 3.0.


All that said, EmbeddedKafkaCluster is an interesting case, because it's not 
actually a public API!

When Bill wrote the book, he included it (I assume) because there was no other 
practical approach to testing available.

However, since the publication, we have added an official integration testing 
framework called TopologyTestDriver. When people ask me for testing advice, I 
tell them:

1. Use TopologyTestDriver (for Streams testing)

2. If you need a "real" broker for your test, then set up a pre/post 
integration test hook to run Kafka independently (e.g., with Maven).

3. If that's not practical, then _copy_ EmbeddedKafkaCluster into your own 
project, don't depend on an internal test artifact.


To me, this means that we essentially have free rein to make changes to 
EmbeddedKafkaCluster, since it _should_ only

be used for internal testing. In that case, I would just evaluate the merits of 
bumping all the tests in Apache Kafka up to JUnit 5. Although that's not a 
public API, it might be a big enough change to the process to justify a design 
document and project-wide discussion.


Well, I guess it turns out I had more to say than I initially thought... Sorry 
for rambling a bit.


What are your thoughts?

-John


On Fri, Dec 6, 2019, at 07:05, Matthias Merdes wrote:

>  Hi all,

>

> when reading ‘Kafka Streams in Action’ I especially enjoyed the

> thoughtful treatment of

> mocking, unit, and integration tests.

> Integration testing in the book (and in the Kafka codebase) is done

> using the @ClassRule-annotated EmbeddedKafkaCluster.

> JUnit 5 Jupiter replaced Rules with a different extension model.

> So my proposal would be to support a JUnit 5 Extension in addition to

> the JUnit 4 Rule for EmbeddedKafkaCluster

> to enable ’native’ integration in JUnit 5-based projects.

> Being a committer with the JUnit 5 team I would be happy to contribute

> a PR for such an extension.

> Please let me know if you are interested.

> Cheers,

> Matthias

>

>

>

>

>

>

>

>

>

>

> Matthias Merdes

> Senior Software Architect

>

>

>

> heidelpay GmbH

> Vangerowstraße 18

> 69115 Heidelberg

> -

> +49 6221 6471-692

> matthias.mer...@heidelpay.com

> --

>

> Geschäftsführer: Dipl. Vw. Mirko Hüllemann,

> Tamara Huber, André Munk, Georg Schardt

> Registergericht: AG Mannheim, HRB 337681

>


--
Tommy Becker
Principal Engineer
Personalized Content Discovery
O +1 919.460.4747
tivo.com




Build failed in Jenkins: kafka-trunk-jdk11 #1007

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-9267: ZkSecurityMigrator should not create /controller node


--
[...truncated 5.57 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [VOTE] 2.4.0 RC3

2019-12-06 Thread Jason Gustafson
Hi Manikumar,

We've gotten to the bottom of
https://issues.apache.org/jira/browse/KAFKA-9212 and I think it should be a
blocker. The impact of this bug is that consumers cannot make progress
while a reassignment is in progress. The problem is missing logic in the
controller to update its metadata cache following some of the leader epoch
bumps. The fix is straightforward and should be ready shortly.

Thanks,
Jason

On Wed, Dec 4, 2019 at 9:10 PM Manikumar  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the fourth candidate for release of Apache Kafka 2.4.0.
>
> This release includes many new features, including:
> - Allow consumers to fetch from closest replica
> - Support for incremental cooperative rebalancing to the consumer rebalance
> protocol
> - MirrorMaker 2.0 (MM2), a new multi-cluster, cross-datacenter replication
> engine
> - New Java authorizer Interface
> - Support for  non-key joining in KTable
> - Administrative API for replica reassignment
> - Sticky partitioner
> - Return topic metadata and configs in CreateTopics response
> - Securing Internal connect REST endpoints
> - API to delete consumer offsets and expose it via the AdminClient.
>
> Release notes for the 2.4.0 release:
> https://home.apache.org/~manikumar/kafka-2.4.0-rc3/RELEASE_NOTES.html
>
> *** Please download, test and vote by Monday, December 9th, 9 pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~manikumar/kafka-2.4.0-rc3/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~manikumar/kafka-2.4.0-rc3/javadoc/
>
> * Tag to be voted upon (off 2.4 branch) is the 2.4.0 tag:
> https://github.com/apache/kafka/releases/tag/2.4.0-rc3
>
> * Documentation:
> https://kafka.apache.org/24/documentation.html
>
> * Protocol:
> https://kafka.apache.org/24/protocol.html
>
> Thanks,
> Manikumar
>


Re: [DISCUSS] KIP-435: Internal Partition Reassignment Batching

2019-12-06 Thread Viktor Somogyi-Vass
Hi Colin,

Thanks for the honest answer. As a bottom line, I think better
reassignment logic should be included with Kafka somehow instead of relying
on external tools. It makes the system more mature on its own and
standardizes what others implemented many times.
Actually I also agree that decoupling this complexity would come with
benefits but I haven't looked into this direction yet. And frankly with
KIP-500 there will be a lot of changes in the controller itself.

My overall vision with this is that there could be an API of some sort at
some point where you'd be able to specify how you want to do your
rebalance, what plugins you'd use, what would be the default, etc. I
really don't mind if this functionality is on the client or on the broker
as long as it gives a consistent experience for the users. For a better
overall user experience, though, I think this is a basic building block, and
it made sense to me to put this functionality on the server side, as there
is not much to configure for batching beyond partition and replica batch
sizes and maybe leader movements. Also, users who don't care about the order
of reassignments or any other special requirement can limit the blast radius
of a reassignment by applying throttling too; all they need is that the
reassignment is sustainable in the cluster and finishes without causing
trouble. With small batches and throttling, individual
partition reassignments finish earlier because you divide the given
bandwidth among fewer participants at a time, so it's more effective. By
just tweaking these knobs (batch size and throttling) we could achieve a
significantly better and more effective rebalance.
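
To illustrate with made-up numbers: moving ten 100 GB replicas under a shared
100 MB/s throttle all at once gives each replica ~10 MB/s, so the first one
completes only after roughly 2.8 hours; in batches of two, each replica gets
~50 MB/s and the first pair completes in about 33 minutes, releasing those
partitions' resources much sooner.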

Policies I think can be somewhat parallel to this as they may require some
information from metrics, and you're right that the controller could gather
these more easily, but it's harder to make plugins, although I don't think it's
impossible to give a good framework for this. Also reassignment can be
extracted from the controller (Stan has already looked into separating the
reassignment logic from the KafkaController class in #7339) so I think we
could come up with a construct that is decoupled from the controller but
still resides on the broker side. Furthermore policies could also be
dynamically configured if needed.

Any change that we make should be backward compatible and I'd default them
to the current behavior which means external tools wouldn't have to take
extra steps, but those who don't use any external tools would benefit from
these changes.

I think it would be useful to discuss the overall vision on reassignments
as it would help us to decide whether this thing goes on the client or the
broker side so allow me to pose some questions to you and other PMCs as I'd
like to get some clarity and the overall stance on this so I can align my
work better.

What would be the overall goal that we want to achieve? Do we want to give
a solution for automatic partition load balancing or just a framework that
standardizes what many others implemented? Or do we want to adopt a
solution or recommend a specific one?
- For the first it might be better to extend the existing tools but for the
other it might be better to provide an API.
- Giving a tool for this makes ramping up easier for newbies and overall
gives a better user experience.
- Adopting one could be easy as we'd get a bunch of functionality that is
relatively battle proven, but I haven't seen developers from any of the
tools volunteer yet. Or perhaps the Kafka PMC group would need to decide
on a tool (which is way out of my league :) ).

An alternative solution is that we don't do anything and just suggest users
use external tools. The question then is which tools should they use? Do we
leave that decision to them?
- The variety of tools means deciding between them may not be easy, and there
might be a number of factors based on what functionality you need, etc.
Also contributing to any of the projects might be more cumbersome than
contributing to Kafka itself. Therefore there might be benefits for the
users in consolidating the tools in Kafka.

Would we want to achieve the state where Kafka automatically heals itself
through rebalances?
- As far as I know, currently for most of these an administrator has to 
initiate these tasks and approve proposals but it might be desirable if
Kafka worked on its own where possible. Less maintenance burden.

To sum up: I could also lean towards having this on the client side so I'll
rework the KIP and look into how we can solve the reassignment batching
problem best on the client side and will come back to you. However I
think there are important questions above that we need to clear up to set
the directions. Also if you think answering the above questions would
exceed the boundaries of email communication I can write up a KIP about
automatic partition balancing (of course if you can provide the directions
I should go) and we can have a separate discussion.

Thanks,

Re: Proposal for EmbeddedKafkaCluster: Support JUnit 5 Extension in addition to JUnit 4 Rule

2019-12-06 Thread John Roesler
Hi Matthias!

Thanks for the note, and the kind sentiment.

We're always open to improvements like this, so your contribution would 
certainly be welcome.

Just FYI, "public interface" changes need to go through the KIP process (see 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals 
). You could of course, open an exploratory PR and ask here for comments before 
deciding if you want to make a KIP, if you prefer. Personally, I often find it 
clarifying to put together a PR concurrently with the KIP, because it helps to 
flush out any "devils in the details", and also can help the KIP discussion.

Just wanted to make you aware of the process. I'm happy to help you navigate 
this process, by the way.

Regarding the specific proposal, the number one question in my mind is 
compatibility... I.e., do we add new dependencies, or change dependencies, that 
may conflict with what's already on users' classpaths? Would the change result 
in any source-code or binary incompatibility with previous versions? Etc... you 
get the idea.

We can make such changes, but it's a _lot_ easier to support doing it in a 
major release. Right now, that would be 3.0, which is not currently planned. 
We're currently working toward 2.5, and then 2.6, and so on, essentially 
waiting for a good reason to bump up to 3.0.

All that said, EmbeddedKafkaCluster is an interesting case, because it's not 
actually a public API!
When Bill wrote the book, he included it (I assume) because there was no other 
practical approach to testing available.
However, since the publication, we have added an official integration testing 
framework called TopologyTestDriver. When people ask me for testing advice, I 
tell them:
1. Use TopologyTestDriver (for Streams testing)
2. If you need a "real" broker for your test, then set up a pre/post 
integration test hook to run Kafka independently (e.g., with Maven).
3. If that's not practical, then _copy_ EmbeddedKafkaCluster into your own 
project, don't depend on an internal test artifact.

To me, this means that we essentially have free rein to make changes to 
EmbeddedKafkaCluster, since it _should_ only
be used for internal testing. In that case, I would just evaluate the merits of 
bumping all the tests in Apache Kafka up to JUnit 5. Although that's not a 
public API, it might be a big enough change to the process to justify a design 
document and project-wide discussion.

Well, I guess it turns out I had more to say than I initially thought... Sorry 
for rambling a bit.

What are your thoughts?
-John

On Fri, Dec 6, 2019, at 07:05, Matthias Merdes wrote:
>  Hi all, 
> 
> when reading ‘Kafka Streams in Action’ I especially enjoyed the 
> thoughtful treatment of 
> mocking, unit, and integration tests.
> Integration testing in the book (and in the Kafka codebase) is done 
> using the @ClassRule-annotated EmbeddedKafkaCluster. 
> JUnit 5 Jupiter replaced Rules with a different extension model.
> So my proposal would be to support a JUnit 5 Extension in addition to 
> the JUnit 4 Rule for EmbeddedKafkaCluster
> to enable ’native’ integration in JUnit 5-based projects.
> Being a committer with the JUnit 5 team I would be happy to contribute 
> a PR for such an extension.
> Please let me know if you are interested.
> Cheers,
> Matthias
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Matthias Merdes
> Senior Software Architect
> 
> 
> 
> heidelpay GmbH
> Vangerowstraße 18
> 69115 Heidelberg
> -
> +49 6221 6471-692
> matthias.mer...@heidelpay.com
> --
> 
> Geschäftsführer: Dipl. Vw. Mirko Hüllemann,
> Tamara Huber, André Munk, Georg Schardt
> Registergericht: AG Mannheim, HRB 337681 
>


Re: Broker Interceptors

2019-12-06 Thread Ismael Juma
Public API classes can be found here:

https://kafka.apache.org/24/javadoc/overview-summary.html

Everything else is internal.

Ismael

On Fri, Dec 6, 2019 at 8:20 AM Tom Bentley  wrote:

> Hi Ismael,
>
> How come? They're public in the clients jar. I'm not doubting you, all I'm
> really asking is "how should I have known this?"
>
> Tom
>
> On Fri, Dec 6, 2019 at 4:12 PM Ismael Juma  wrote:
>
> > AbstractRequest and AbstractResponse are currently internal classes.
> >
> > Ismael
> >
> > On Fri, Dec 6, 2019 at 1:56 AM Tom Bentley  wrote:
> >
> > > Hi,
> > >
> > > Couldn't this be done without exposing broker internals at the slightly
> > > higher level of AbstractRequest and AbstractResponse? Those classes are
> > > public. If the observer interface used Java default methods then
> adding a
> > > new request type would not break existing implementations. I'm thinking
> > > something like this:
> > >
> > > ```
> > > public interface RequestObserver {
> > > default void observeAny(RequestContext context, AbstractRequest
> > > request) {}
> > > default void observe(RequestContext context, MetadataRequest
> > request) {
> > > observeAny(context, request);
> > > }
> > > default void observe(RequestContext context, ProduceRequest
> request)
> > {
> > > observeAny(context, request);
> > > }
> > > default void observe(RequestContext context, FetchRequest request)
> {
> > > observeAny(context, request);
> > > }
> > >...
> > > ```
> > >
> > > And similar for a `ResponseObserver`. Request classes would implement
> > this
> > > method
> > >
> > > ```
> > > public abstract void observeForAudit(RequestContext context,
> > > RequestObserver requestObserver);
> > > ```
> > >
> > > where the implementation would look like this:
> > >
> > > ```
> > > @Override
> > > public void observeForAudit(RequestContext context, RequestObserver
> > > requestObserver) {
> > > requestObserver.observe(context, this);
> > > }
> > > ```
> > >
> > > I think this is sufficiently abstracted to allow KafkaApis.handle() and
> > > sendResponse() to call observeForAudit() generically.
> > >
> > > Kind regards,
> > >
> > > Tom
> > >
> > > On Wed, Dec 4, 2019 at 6:59 PM Lincong Li 
> > wrote:
> > >
> > > > Hi Thomas,
> > > >
> > > > Thanks for your interest in KIP-388. As Ignacio and Radai have
> > mentioned,
> > > > this
> > > > <
> > > >
> > >
> >
> https://github.com/linkedin/kafka/commit/a378c8980af16e3c6d3f6550868ac0fd5a58682e
> > > > >
> > > > is our (LinkedIn's) implementation of KIP-388. The implementation and
> > > > deployment of this broker-side observer has been working very well
> for
> > us
> > > > so far. On the other hand, I totally agree with the longer-term
> > concerns
> > > > raised by other committers. However we still decided to implement the
> > KIP
> > > > idea as a hot fix in order to solve our immediate problem and meet
> our
> > > > business requirements.
> > > >
> > > > The "Rejected Alternatives for Kafka Audit" section at the end of
> > KIP-388
> > > > sheds some light on the client-side auditor/interceptor/observer
> > (sorry
> > > > about the potential confusion caused by these words being used
> > > > interchangeably).
> > > >
> > > > Best regards,
> > > > Lincong Li
> > > >
> > > > On Wed, Dec 4, 2019 at 8:15 AM Thomas Aley 
> > wrote:
> > > >
> > > > > Thanks for the responses. I did worry about the challenge of
> > exposing a
> > > > > vast number of internal classes with general interceptor
> framework. A
> > > > less
> > > > > general solution more along the lines of the producer/consumer
> > > > > interceptors on the client would satisfy the majority of use cases.
> > If
> > > we
> > > > > are smart, we should be able to come up with a pattern that could
> be
> > > > > extended further in future if the community sees the demand.
> > > > >
> > > > > Looking through the discussion thread for KIP-388, I see a lot of
> > good
> > > > > points to consider and I intend to dive further into this.
> > > > >
> > > > >
> > > > > Tom Aley
> > > > > thomas.a...@ibm.com
> > > > >
> > > > >
> > > > >
> > > > > From:   Ismael Juma 
> > > > > To: Kafka Users 
> > > > > Cc: dev 
> > > > > Date:   03/12/2019 16:12
> > > > > Subject:[EXTERNAL] Re: Broker Interceptors
> > > > >
> > > > >
> > > > >
> > > > > The main challenge is doing this without exposing a bunch of
> internal
> > > > > classes. I haven't seen a proposal that handles that aspect well so
> > > far.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Tue, Dec 3, 2019 at 7:21 AM Sönke Liebau
> > > > >  wrote:
> > > > >
> > > > > > Hi Thomas,
> > > > > >
> > > > > > I think that idea is worth looking at. As you say, if no
> > interceptor
> > > is
> > > > > > configured then the performance overhead should be negligible.
> > > > Basically
> > > > > it
> > > > > > is then up to the user to decide if he wants to take the
> > performance
> > > > > hit.
> > > > > > We should make sure to think 

Re: Broker Interceptors

2019-12-06 Thread Tom Bentley
Hi Ismael,

How come? They're public in the clients jar. I'm not doubting you, all I'm
really asking is "how should I have known this?"

Tom

On Fri, Dec 6, 2019 at 4:12 PM Ismael Juma  wrote:

> AbstractRequest and AbstractResponse are currently internal classes.
>
> Ismael
>
> On Fri, Dec 6, 2019 at 1:56 AM Tom Bentley  wrote:
>
> > Hi,
> >
> > Couldn't this be done without exposing broker internals at the slightly
> > higher level of AbstractRequest and AbstractResponse? Those classes are
> > public. If the observer interface used Java default methods then adding a
> > new request type would not break existing implementations. I'm thinking
> > something like this:
> >
> > ```
> > public interface RequestObserver {
> > default void observeAny(RequestContext context, AbstractRequest
> > request) {}
> > default void observe(RequestContext context, MetadataRequest
> request) {
> > observeAny(context, request);
> > }
> > default void observe(RequestContext context, ProduceRequest request)
> {
> > observeAny(context, request);
> > }
> > default void observe(RequestContext context, FetchRequest request) {
> > observeAny(context, request);
> > }
> >...
> > ```
> >
> > And similar for a `ResponseObserver`. Request classes would implement
> this
> > method
> >
> > ```
> > public abstract void observeForAudit(RequestContext context,
> > RequestObserver requestObserver);
> > ```
> >
> > where the implementation would look like this:
> >
> > ```
> > @Override
> > public void observeForAudit(RequestContext context, RequestObserver
> > requestObserver) {
> > requestObserver.observe(context, this);
> > }
> > ```
> >
> > I think this is sufficiently abstracted to allow KafkaApis.handle() and
> > sendResponse() to call observeForAudit() generically.
> >
> > Kind regards,
> >
> > Tom
> >
> > On Wed, Dec 4, 2019 at 6:59 PM Lincong Li 
> wrote:
> >
> > > Hi Thomas,
> > >
> > > Thanks for your interest in KIP-388. As Ignacio and Radai have
> mentioned,
> > > this
> > > <
> > >
> >
> https://github.com/linkedin/kafka/commit/a378c8980af16e3c6d3f6550868ac0fd5a58682e
> > > >
> > > is our (LinkedIn's) implementation of KIP-388. The implementation and
> > > deployment of this broker-side observer has been working very well for
> us
> > > so far. On the other hand, I totally agree with the longer-term
> concerns
> > > raised by other committers. However we still decided to implement the
> KIP
> > > idea as a hot fix in order to solve our immediate problem and meet our
> > > business requirements.
> > >
> > > The "Rejected Alternatives for Kafka Audit" section at the end of
> KIP-388
> > > sheds some light on the client-side auditor/interceptor/observer
> (sorry
> > > about the potential confusion caused by these words being used
> > > interchangeably).
> > >
> > > Best regards,
> > > Lincong Li
> > >
> > > On Wed, Dec 4, 2019 at 8:15 AM Thomas Aley 
> wrote:
> > >
> > > > Thanks for the responses. I did worry about the challenge of
> exposing a
> > > > vast number of internal classes with general interceptor framework. A
> > > less
> > > > general solution more along the lines of the producer/consumer
> > > > interceptors on the client would satisfy the majority of use cases.
> If
> > we
> > > > are smart, we should be able to come up with a pattern that could be
> > > > extended further in future if the community sees the demand.
> > > >
> > > > Looking through the discussion thread for KIP-388, I see a lot of
> good
> > > > points to consider and I intend to dive further into this.
> > > >
> > > >
> > > > Tom Aley
> > > > thomas.a...@ibm.com
> > > >
> > > >
> > > >
> > > > From:   Ismael Juma 
> > > > To: Kafka Users 
> > > > Cc: dev 
> > > > Date:   03/12/2019 16:12
> > > > Subject:[EXTERNAL] Re: Broker Interceptors
> > > >
> > > >
> > > >
> > > > The main challenge is doing this without exposing a bunch of internal
> > > > classes. I haven't seen a proposal that handles that aspect well so
> > far.
> > > >
> > > > Ismael
> > > >
> > > > On Tue, Dec 3, 2019 at 7:21 AM Sönke Liebau
> > > >  wrote:
> > > >
> > > > > Hi Thomas,
> > > > >
> > > > > I think that idea is worth looking at. As you say, if no
> interceptor
> > is
> > > > > configured then the performance overhead should be negligible.
> > > Basically
> > > > it
> > > > > is then up to the user to decide if he wants to take the
> performance
> > > > hit.
> > > > > We should make sure to think about monitoring capabilities like
> time
> > > > spent
> > > > > in the interceptor for records etc.
> > > > >
> > > > > The most obvious use case I think is server side schema validation,
> > > > which
> > > > > Confluent are also offering as part of their commercial product,
> but
> > > > other
> > > > > ideas come to mind as well.
> > > > >
> > > > > Best regards,
> > > > > Sönke
> > > > >
> > > > > Thomas Aley  wrote on Tue, 3 Dec 2019,
> > 10:45:
> > > > >
> > > > > > Hi M. Manna,
> > > > > >
> > 

Re: [DISCUSS] KIP-542: Partition Reassignment Throttling

2019-12-06 Thread Ismael Juma
My concern is that we're very focused on reassignment where I think users
enable throttling to avoid overwhelming brokers with replica catch up
traffic (typically disk and/or bandwidth). The current approach achieves
that by not throttling ISR replication.
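
(For concreteness: the current KIP-73 throttle is applied via dynamic broker
configs. A minimal AdminClient sketch, where the broker id and the 10 MB/s rate
are arbitrary placeholder values:)

```
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetReplicationThrottle {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Rate limits are set per broker; broker id "0" is just an example.
            ConfigResource broker0 = new ConfigResource(ConfigResource.Type.BROKER, "0");
            admin.incrementalAlterConfigs(Collections.singletonMap(broker0, Arrays.asList(
                new AlterConfigOp(new ConfigEntry("leader.replication.throttled.rate", "10485760"),
                                  AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("follower.replication.throttled.rate", "10485760"),
                                  AlterConfigOp.OpType.SET)
            ))).all().get();
        }
    }
}
```

The corresponding `leader.replication.throttled.replicas` and
`follower.replication.throttled.replicas` topic configs then select which
replicas the rate applies to.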

The downside is that when a broker falls out of the ISR, it may suddenly
get throttled and never catch up. However, if the throttle can cause this
kind of issue, then it's broken for replicas being reassigned too, so one
could say that it's a configuration error.

Do we have specific scenarios that would be solved by the proposed change?

Ismael

On Fri, Dec 6, 2019 at 2:26 AM Viktor Somogyi-Vass 
wrote:

> Thanks for the question. I think it depends on how the user will try to fix
> it.
> - If they just replace the disk then I think it shouldn't count as a
> reassignment and should be allocated under the normal replication quotas.
> In this case there is no reassignment going on as far as I can tell, the
> broker shuts down serving replicas from that dir/disk, notifies the
> controller which changes the leadership. When the disk is fixed the broker
> will be restarted to pick up the changes and it starts the replication from
> the current leader.
> - If the user reassigns the partitions to other brokers then it will fall
> under the reassignment traffic.
> Also if the user moves a partition to a different disk it would also count
> as normal replication as it technically not a reassignment but an
> alter-replica-dir event but it's still done with the reassignment tool, so
> I'd keep the current functionality of the
> --replica-alter-log-dirs-throttle.
> Is this aligned with your thinking?
>
> Viktor
>
> On Wed, Dec 4, 2019 at 2:47 PM Ismael Juma  wrote:
>
> > Thanks Viktor. How do we intend to handle the case where a broker loses
> its
> > disk and has to catch up from the beginning?
> >
> > Ismael
> >
> > On Wed, Dec 4, 2019, 4:31 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> > wrote:
> >
> > > Thanks for the notice Ismael, KAFKA-4313 fixed this issue indeed. I've
> > > updated the KIP.
> > >
> > > Viktor
> > >
> > > On Tue, Dec 3, 2019 at 3:28 PM Ismael Juma  wrote:
> > >
> > > > Hi Viktor,
> > > >
> > > > The KIP states:
> > > >
> > > > "KIP-73
> > > > <
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas
> > > > >
> > > > added
> > > > quotas for replication but it doesn't separate normal replication
> > traffic
> > > > from reassignment. So a user is able to specify the partition and the
> > > > throttle rate but it will be applied to both ISR and non-ISR traffic"
> > > >
> > > > This is not true, ISR traffic is not throttled.
> > > >
> > > > Ismael
> > > >
> > > > On Thu, Oct 24, 2019 at 5:38 AM Viktor Somogyi-Vass <
> > > > viktorsomo...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi People,
> > > > >
> > > > > I've created a KIP to improve replication quotas by handling
> > > reassignment
> > > > > related throttling as a separate case with its own configurable
> > limits
> > > > and
> > > > > change the kafka-reassign-partitions tool to use these new configs
> > > going
> > > > > forward.
> > > > > Please have a look, I'd be happy to receive any feedback and answer
> > > > > all your questions.
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-542%3A+Partition+Reassignment+Throttling
> > > > >
> > > > > Thanks,
> > > > > Viktor
> > > > >
> > > >
> > >
> >
>


Re: Broker Interceptors

2019-12-06 Thread Ismael Juma
AbstractRequest and AbstractResponse are currently internal classes.

Ismael

On Fri, Dec 6, 2019 at 1:56 AM Tom Bentley  wrote:

> Hi,
>
> Couldn't this be done without exposing broker internals at the slightly
> higher level of AbstractRequest and AbstractResponse? Those classes are
> public. If the observer interface used Java default methods then adding a
> new request type would not break existing implementations. I'm thinking
> something like this:
>
> ```
> public interface RequestObserver {
> default void observeAny(RequestContext context, AbstractRequest
> request) {}
> default void observe(RequestContext context, MetadataRequest request) {
> observeAny(context, request);
> }
> default void observe(RequestContext context, ProduceRequest request) {
> observeAny(context, request);
> }
> default void observe(RequestContext context, FetchRequest request) {
> observeAny(context, request);
> }
>...
> ```
>
> And similar for a `ResponseObserver`. Request classes would implement this
> method
>
> ```
> public abstract void observeForAudit(RequestContext context,
> RequestObserver requestObserver);
> ```
>
> where the implementation would look like this:
>
> ```
> @Override
> public void observeForAudit(RequestContext context, RequestObserver
> requestObserver) {
> requestObserver.observe(context, this);
> }
> ```
>
> I think this is sufficiently abstracted to allow KafkaApis.handle() and
> sendResponse() to call observeForAudit() generically.
>
> Kind regards,
>
> Tom
>
> On Wed, Dec 4, 2019 at 6:59 PM Lincong Li  wrote:
>
> > Hi Thomas,
> >
> > Thanks for your interest in KIP-388. As Ignacio and Radai have mentioned,
> > this
> > <
> >
> https://github.com/linkedin/kafka/commit/a378c8980af16e3c6d3f6550868ac0fd5a58682e
> > >
> > is our (LinkedIn's) implementation of KIP-388. The implementation and
> > deployment of this broker-side observer has been working very well for us
> > so far. On the other hand, I totally agree with the longer-term concerns
> > raised by other committers. However we still decided to implement the KIP
> > idea as a hot fix in order to solve our immediate problem and meet our
> > business requirements.
> >
> > The "Rejected Alternatives for Kafka Audit" section at the end of KIP-388
> > sheds some light on the client-side auditor/interceptor/observer (sorry
> > about the potential confusion caused by these words being used
> > interchangeably).
> >
> > Best regards,
> > Lincong Li
> >
> > On Wed, Dec 4, 2019 at 8:15 AM Thomas Aley  wrote:
> >
> > > Thanks for the responses. I did worry about the challenge of exposing a
> > > vast number of internal classes with general interceptor framework. A
> > less
> > > general solution more along the lines of the producer/consumer
> > > interceptors on the client would satisfy the majority of use cases. If
> we
> > > are smart, we should be able to come up with a pattern that could be
> > > extended further in future if the community sees the demand.
> > >
> > > Looking through the discussion thread for KIP-388, I see a lot of good
> > > points to consider and I intend to dive further into this.
> > >
> > >
> > > Tom Aley
> > > thomas.a...@ibm.com
> > >
> > >
> > >
> > > From:   Ismael Juma 
> > > To: Kafka Users 
> > > Cc: dev 
> > > Date:   03/12/2019 16:12
> > > Subject:[EXTERNAL] Re: Broker Interceptors
> > >
> > >
> > >
> > > The main challenge is doing this without exposing a bunch of internal
> > > classes. I haven't seen a proposal that handles that aspect well so
> far.
> > >
> > > Ismael
> > >
> > > On Tue, Dec 3, 2019 at 7:21 AM Sönke Liebau
> > >  wrote:
> > >
> > > > Hi Thomas,
> > > >
> > > > I think that idea is worth looking at. As you say, if no interceptor
> is
> > > > configured then the performance overhead should be negligible.
> > Basically
> > > it
> > > > is then up to the user to decide if he wants to take the performance
> > > hit.
> > > > We should make sure to think about monitoring capabilities like time
> > > spent
> > > > in the interceptor for records etc.
> > > >
> > > > The most obvious use case I think is server side schema validation,
> > > which
> > > > Confluent are also offering as part of their commercial product, but
> > > other
> > > > ideas come to mind as well.
> > > >
> > > > Best regards,
> > > > Sönke
> > > >
> > > > Thomas Aley  wrote on Tue, 3 Dec 2019,
> 10:45:
> > > >
> > > > > Hi M. Manna,
> > > > >
> > > > > Thank you for your feedback, any and all thoughts on this are
> > > appreciated
> > > > > from the community.
> > > > >
> > > > > I think it is important to distinguish that there are two parts to
> > > this.
> > > > > One would be a server side interceptor framework and the other
> would
> > > be
> > > > > the interceptor implementations themselves.
> > > > >
> > > > > The idea would be that the Interceptor framework manifests as a
> plug
> > > > point
> > > > > in the request/response paths that by 

[jira] [Created] (KAFKA-9280) Duplicate messages are observed in ACK mode ALL

2019-12-06 Thread VIkram (Jira)
VIkram created KAFKA-9280:
-

 Summary: Duplicate messages are observed in ACK mode ALL
 Key: KAFKA-9280
 URL: https://issues.apache.org/jira/browse/KAFKA-9280
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.2.1
Reporter: VIkram


In ack mode ALL, leader is sending the message to consumer even before 
receiving the acknowledgements from other replicas. This can lead to 
*+duplicate messages+*.

 

Setup details:
 * 1 zookeeper, 5 brokers
 * Producer: Synchronous
 * Topic: 1 partition, replication factor - 3, min isr - 2
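
Given this setup, the producer logic is essentially the following sketch (a 
minimal reconstruction; the bootstrap address and serializers are illustrative, 
not the actual test code):
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SyncAllAcksProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas
        try (KafkaProducer<String, String> producer =
                 new KafkaProducer<>(props, new StringSerializer(), new StringSerializer())) {
            // Blocking on the future makes the send synchronous, as in this report.
            producer.send(new ProducerRecord<>("t229", "p229-4")).get();
        }
    }
}
{code}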

 

Say First replica (Leader), Second replica and Third replica are the three 
replicas of the topic.

 

*Sequence of events:*

a) All brokers are up and running.

b) Clients started running.

c) Kill second replica of the topic.

d) Kill the third replica. Now min isr will not be satisfied.

e) Bring up third replica. Min isr will be satisfied.

 

*Breakdown of step 'd':*
 # Just before producer sends next message, killed third replica with kill -9 
(Leader takes time ~5sec to detect that the broker is down).
 # Producer sent a message to leader.
 # Before the leader knows that third replica is down, it accepts the message 
from producer.
 # Leader forwards the message to third replica.
 # Before receiving ACK from third replica, leader sent the message to consumer.
 # Leader doesn't get an ACK from third replica.
 # Now leader detects that third replica is down and throws 
NOT_ENOUGH_REPLICAS_EXCEPTION.
 # Now leader stops accepting messages from producer.
 # Since producer didn't get ACK from leader, it will throw TIMEOUT_EXCEPTION 
after timeout (2min in our case) .
 # So far, producer believes that the message was not received by leader 
whereas the consumer actually received it.
 # Now producer retries sending the same message. (In our application it is the 
next integer we send).
 # Now when second/third replica is up, leader accepts the message and sends 
the same message to consumer. *Thus sending duplicates.*

 

 

*Logs:*
 # 2-3 seconds before producer sends next message, killed third replica with 
kill -9 (Leader takes time ~5sec to detect that the broker is down).
{code}
> kill -9 49596
{code}
 # Producer sent a message to leader.
{code}
[20190327 11:02:48.231 EDT (main-Terminating-1) Will send message: 
ProducerRecord(topic=t229, partition=null, headers=RecordHeaders(headers = [], 
isReadOnly = false), key=null, value=p229-4, timestamp=null)
{code}
 # Before the leader knows that third replica is down, it accepts the message 
from producer.
 # Leader forwards the message to third replica.
 # Before receiving ACK from third replica, leader sent the message to consumer.
{code}
Received MessageRecord: ConsumerRecord(topic = t229, partition = 0, 
leaderEpoch = 1, offset = 3, CreateTime = 1553698968232, serialized key size = 
-1, serialized value size = 6, headers = RecordHeaders(headers = [], isReadOnly 
= false), key = null, value = p229-4)
{code}
 # Leader doesn't get an ACK from third replica.
 # Now leader detects that third replica is down and throws 
NOT_ENOUGH_REPLICAS_EXCEPTION.
{code}
[2019-03-27 11:02:53,541] ERROR [ReplicaManager broker=0] Error processing 
append operation on partition t229-0 (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the 
current ISR Set(0) is insufficient to satisfy the min.isr requirement of 2 for 
partition t229-0
{code}
 # Now leader stops accepting messages from producer.
 # Since producer didn't get ACK from leader, it will throw TIMEOUT_EXCEPTION 
after timeout (2 min in our case).
{code}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
t229-0:12 ms has passed since batch creation
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:98)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:67)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 1 
record(s) for t229-0:12 ms has passed since batch creation
{code}
 # So far, producer believes that the message was not received by leader 
whereas the consumer actually received it.
 # Now producer retries sending the same message. (In our application it is the 
next integer we send).
 # Now when second/third replica is up, leader accepts the message and sends 
the same to consumer. Thus sending duplicates.

 

Ideally, in ACK mode ALL it is expected that the leader sends a message to the 
consumer only after it receives acks from all other replicas. But this is not 
happening.

 

+*Question*+

1) In the acks=all case, does the leader send the message to the consumer only after all 

Proposal for EmbeddedKafkaCluster: Support JUnit 5 Extension in addition to JUnit 4 Rule

2019-12-06 Thread Matthias Merdes
Hi all,

when reading ‘Kafka Streams in Action’ I especially enjoyed the thoughtful 
treatment of
mocking, unit, and integration tests.
Integration testing in the book (and in the Kafka codebase) is done using the 
@ClassRule-annotated EmbeddedKafkaCluster.
JUnit 5 Jupiter replaced Rules with a different extension model.
So my proposal would be to support a JUnit 5 Extension in addition to the JUnit 
4 Rule for EmbeddedKafkaCluster
to enable ’native’ integration in JUnit 5-based projects.
Being a committer with the JUnit 5 team I would be happy to contribute a PR for 
such an extension.
Please let me know if you are interested.
Cheers,
Matthias
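
A rough sketch of what such an extension could look like (EmbeddedKafkaCluster's
constructor and start()/stop() lifecycle methods are assumed here and may not
match the internal class exactly):

```
import org.junit.jupiter.api.extension.AfterAllCallback;
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class EmbeddedKafkaClusterExtension implements BeforeAllCallback, AfterAllCallback {

    // Assumed single-broker cluster; the JUnit 4 Rule does this work in before()/after().
    private final EmbeddedKafkaCluster cluster = new EmbeddedKafkaCluster(1);

    @Override
    public void beforeAll(ExtensionContext context) throws Exception {
        cluster.start();
    }

    @Override
    public void afterAll(ExtensionContext context) {
        cluster.stop();
    }

    public EmbeddedKafkaCluster cluster() {
        return cluster;
    }
}
```

A test class would then opt in with
`@RegisterExtension static EmbeddedKafkaClusterExtension kafka = new EmbeddedKafkaClusterExtension();`
instead of the JUnit 4 `@ClassRule`.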










Matthias Merdes
Senior Software Architect


heidelpay GmbH
Vangerowstraße 18
69115 Heidelberg
-
+49 6221 6471-692
matthias.mer...@heidelpay.com
--
Geschäftsführer: Dipl. Vw. Mirko Hüllemann,
Tamara Huber, André Munk, Georg Schardt
Registergericht: AG Mannheim, HRB 337681



[jira] [Created] (KAFKA-9279) Silent data loss in Kafka producer

2019-12-06 Thread Andrew Klopper (Jira)
Andrew Klopper created KAFKA-9279:
-

 Summary: Silent data loss in Kafka producer
 Key: KAFKA-9279
 URL: https://issues.apache.org/jira/browse/KAFKA-9279
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 2.3.0
Reporter: Andrew Klopper


It appears that it is possible for a producer.commitTransaction() call to 
succeed even if an individual producer.send() call has failed. The following 
code demonstrates the issue:
{code:java}
package org.example.dataloss;

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.Random;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class Main {

public static void main(final String[] args) {
final Properties producerProps = new Properties();

if (args.length != 2) {
System.err.println("Invalid command-line arguments");
System.exit(1);
}
final String bootstrapServer = args[0];
final String topic = args[1];

producerProps.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
producerProps.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, "50");
producerProps.setProperty(ProducerConfig.LINGER_MS_CONFIG, "1000");
producerProps.setProperty(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "100");
producerProps.setProperty(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
producerProps.setProperty(ProducerConfig.CLIENT_ID_CONFIG, "dataloss_01");
producerProps.setProperty(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "dataloss_01");

try (final KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps, new ByteArraySerializer(), new ByteArraySerializer())) {
producer.initTransactions();
producer.beginTransaction();

final Random random = new Random();
final byte[] largePayload = new byte[200];
random.nextBytes(largePayload);
producer.send(
new ProducerRecord<>(
topic,
"large".getBytes(StandardCharsets.UTF_8),
largePayload
),
(metadata, e) -> {
if (e == null) {
System.out.println("INFO: Large payload succeeded");
} else {
System.err.printf("ERROR: Large payload failed: %s\n", 
e.getMessage());
}
}
);

producer.commitTransaction();
System.out.println("Commit succeeded");

} catch (final Exception e) {
System.err.printf("FATAL ERROR: %s", e.getMessage());
}
}
}
{code}
The code prints the following output:
{code:java}
ERROR: Large payload failed: The message is 293 bytes when serialized which 
is larger than the maximum request size you have configured with the 
max.request.size configuration.
Commit succeeded{code}
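
One defensive pattern, offered here only as a hedged sketch rather than a fix: 
keep the Future returned by producer.send() and block on it before committing, 
so a failed send surfaces as an ExecutionException instead of being silently 
dropped:
{code:java}
// Sketch: requires java.util.concurrent.Future/ExecutionException and
// org.apache.kafka.clients.producer.RecordMetadata; "producer" and "record"
// stand for the producer and ProducerRecord from the demo above.
final Future<RecordMetadata> result = producer.send(record);
try {
    result.get(); // throws ExecutionException if the send failed
    producer.commitTransaction();
} catch (final ExecutionException e) {
    producer.abortTransaction();
    System.err.printf("Send failed, transaction aborted: %s%n", e.getCause().getMessage());
}
{code}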
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9278) Avoid using static instance of DisconnectException

2019-12-06 Thread Qinghui Xu (Jira)
Qinghui Xu created KAFKA-9278:
-

 Summary: Avoid using static instance of DisconnectException
 Key: KAFKA-9278
 URL: https://issues.apache.org/jira/browse/KAFKA-9278
 Project: Kafka
  Issue Type: Bug
Reporter: Qinghui Xu


For every `java.lang.Throwable` instance, a stack trace is captured at 
instantiation. A static instance of DisconnectException therefore carries the 
stack trace of the thread that created it, which is irrelevant (and misleading) 
for all the calls (in other threads) that reuse it.
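
A minimal illustration of the problem (plain Java, not Kafka code):
{code:java}
// A shared Throwable records the stack trace of whichever thread constructed
// it, so every later throw reports the creator's (irrelevant) call path.
public class StaticExceptionDemo {
    static final RuntimeException SHARED = new RuntimeException("disconnected");

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(SHARED::printStackTrace, "worker");
        worker.start();
        worker.join();
        // The printed trace points at this class's static initializer, not at
        // anything the "worker" thread actually did. Creating a fresh exception
        // per use, or overriding fillInStackTrace(), avoids the confusion.
    }
}
{code}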



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1006

2019-12-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9179; Fix flaky test due to race condition when fetching


--
[...truncated 4.55 MB...]
kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer 
STARTED

kafka.api.PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer 
PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testSaslReauthenticationFailure STARTED

kafka.network.SocketServerTest > testSaslReauthenticationFailure PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testControlPlaneRequest STARTED

kafka.network.SocketServerTest > testControlPlaneRequest PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > testStagedListenerStartup STARTED

kafka.network.SocketServerTest > testStagedListenerStartup PASSED

kafka.network.SocketServerTest > testConnectionRateLimit STARTED

kafka.network.SocketServerTest > testConnectionRateLimit PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED


Re: [DISCUSS] KIP-542: Partition Reassignment Throttling

2019-12-06 Thread Viktor Somogyi-Vass
Thanks for the question. I think it depends on how the user tries to fix it.
- If they just replace the disk, then I think it shouldn't count as a
reassignment and should be allocated under the normal replication quotas.
In this case there is no reassignment going on as far as I can tell: the
broker stops serving replicas from that dir/disk and notifies the
controller, which changes the leadership. When the disk is fixed, the
broker is restarted to pick up the changes and starts replicating from the
current leader.
- If the user reassigns the partitions to other brokers, then it will fall
under the reassignment traffic.
Also, if the user moves a partition to a different disk, that would count
as normal replication as well, since it is technically not a reassignment
but an alter-replica-dir event; it is still done with the reassignment
tool, though, so I'd keep the current functionality of
--replica-alter-log-dirs-throttle.
Is this aligned with your thinking?

Viktor
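
For context, the KIP-73 throttles discussed above are the dynamic broker
configs leader.replication.throttled.rate and
follower.replication.throttled.rate. A minimal sketch of setting them through
the Admin API (values illustrative; KIP-542 would introduce separate
reassignment-specific limits):

```
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ReplicationThrottleSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Throttle replication on broker 1 to ~10 MB/s in each direction.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            Collection<AlterConfigOp> ops = Arrays.asList(
                new AlterConfigOp(new ConfigEntry("leader.replication.throttled.rate", "10485760"),
                                  AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("follower.replication.throttled.rate", "10485760"),
                                  AlterConfigOp.OpType.SET));
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                Collections.singletonMap(broker, ops);
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}
```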

On Wed, Dec 4, 2019 at 2:47 PM Ismael Juma  wrote:

> Thanks Viktor. How do we intend to handle the case where a broker loses its
> disk and has to catch up from the beginning?
>
> Ismael
>
> On Wed, Dec 4, 2019, 4:31 AM Viktor Somogyi-Vass 
> wrote:
>
> > Thanks for the notice Ismael, KAFKA-4313 fixed this issue indeed. I've
> > updated the KIP.
> >
> > Viktor
> >
> > On Tue, Dec 3, 2019 at 3:28 PM Ismael Juma  wrote:
> >
> > > Hi Viktor,
> > >
> > > The KIP states:
> > >
> > > "KIP-73
> > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas
> > > >
> > > added
> > > quotas for replication but it doesn't separate normal replication
> traffic
> > > from reassignment. So a user is able to specify the partition and the
> > > throttle rate but it will be applied to both ISR and non-ISR traffic"
> > >
> > > This is not true, ISR traffic is not throttled.
> > >
> > > Ismael
> > >
> > > On Thu, Oct 24, 2019 at 5:38 AM Viktor Somogyi-Vass <
> > > viktorsomo...@gmail.com>
> > > wrote:
> > >
> > > > Hi People,
> > > >
> > > > I've created a KIP to improve replication quotas by handling
> > > > reassignment-related throttling as a separate case with its own
> > > > configurable limits, and to change the kafka-reassign-partitions tool
> > > > to use these new configs going forward.
> > > > Please have a look, I'd be happy to receive any feedback and answer
> > > > all your questions.
> > > >
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-542%3A+Partition+Reassignment+Throttling
> > > >
> > > > Thanks,
> > > > Viktor
> > > >
> > >
> >
>


Re: Broker Interceptors

2019-12-06 Thread Tom Bentley
Hi,

Couldn't this be done without exposing broker internals at the slightly
higher level of AbstractRequest and AbstractResponse? Those classes are
public. If the observer interface used Java default methods then adding a
new request type would not break existing implementations. I'm thinking
something like this:

```
public interface RequestObserver {
    default void observeAny(RequestContext context, AbstractRequest request) {}
    default void observe(RequestContext context, MetadataRequest request) {
        observeAny(context, request);
    }
    default void observe(RequestContext context, ProduceRequest request) {
        observeAny(context, request);
    }
    default void observe(RequestContext context, FetchRequest request) {
        observeAny(context, request);
    }
    // ... one overload per request type
}
```

And similar for a `ResponseObserver`. Request classes would implement this
method

```
public abstract void observe(RequestContext context, RequestObserver requestObserver);
```

where the implementation would look like this:

```
@Override
public void observe(RequestContext context, RequestObserver requestObserver) {
    requestObserver.observe(context, this);
}
```

I think this is sufficiently abstracted to allow KafkaApis.handle() and
sendResponse() to call observe() generically.
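
For illustration, a toy observer under the interface sketched above (all type
names are from the hypothetical interface, not existing broker API):

```
// Counts produce requests and logs every other request type it observes.
public class AuditingRequestObserver implements RequestObserver {
    private final java.util.concurrent.atomic.AtomicLong produceCount =
            new java.util.concurrent.atomic.AtomicLong();

    @Override
    public void observeAny(RequestContext context, AbstractRequest request) {
        System.out.println("observed " + request.getClass().getSimpleName());
    }

    @Override
    public void observe(RequestContext context, ProduceRequest request) {
        produceCount.incrementAndGet();
        observeAny(context, request);
    }
}
```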

Kind regards,

Tom

On Wed, Dec 4, 2019 at 6:59 PM Lincong Li  wrote:

> Hi Thomas,
>
> Thanks for your interest in KIP-388. As Ignacio and Radai have mentioned,
> this
> <https://github.com/linkedin/kafka/commit/a378c8980af16e3c6d3f6550868ac0fd5a58682e>
> is our (LinkedIn's) implementation of KIP-388. The implementation and
> deployment of this broker-side observer have been working very well for us
> so far. On the other hand, I totally agree with the longer-term concerns
> raised by other committers. However, we still decided to implement the KIP
> idea as a hot fix in order to solve our immediate problem and meet our
> business requirements.
>
> The "Rejected Alternatives for Kafka Audit" section at the end of KIP-388
> sheds some lights on the client-side auditor/interceptor/observer (sorry
> about the potential confusion caused by these words being used
> interchangeably).
>
> Best regards,
> Lincong Li
>
> On Wed, Dec 4, 2019 at 8:15 AM Thomas Aley  wrote:
>
> > Thanks for the responses. I did worry about the challenge of exposing a
> > vast number of internal classes with a general interceptor framework. A
> > less general solution, more along the lines of the producer/consumer
> > interceptors on the client, would satisfy the majority of use cases. If we
> > are smart, we should be able to come up with a pattern that could be
> > extended further in future if the community sees the demand.
> >
> > Looking through the discussion thread for KIP-388, I see a lot of good
> > points to consider and I intend to dive further into this.
> >
> >
> > Tom Aley
> > thomas.a...@ibm.com
> >
> >
> >
> > From:   Ismael Juma 
> > To: Kafka Users 
> > Cc: dev 
> > Date:   03/12/2019 16:12
> > Subject:[EXTERNAL] Re: Broker Interceptors
> >
> >
> >
> > The main challenge is doing this without exposing a bunch of internal
> > classes. I haven't seen a proposal that handles that aspect well so far.
> >
> > Ismael
> >
> > On Tue, Dec 3, 2019 at 7:21 AM Sönke Liebau
> >  wrote:
> >
> > > Hi Thomas,
> > >
> > > I think that idea is worth looking at. As you say, if no interceptor is
> > > configured then the performance overhead should be negligible. Basically
> > > it is then up to the user to decide if he wants to take the performance
> > > hit. We should make sure to think about monitoring capabilities like time
> > > spent in the interceptor for records etc.
> > >
> > > The most obvious use case, I think, is server-side schema validation,
> > > which Confluent are also offering as part of their commercial product,
> > > but other ideas come to mind as well.
> > >
> > > Best regards,
> > > Sönke
> > >
> > > Thomas Aley  schrieb am Di., 3. Dez. 2019, 10:45:
> > >
> > > > Hi M. Manna,
> > > >
> > > > Thank you for your feedback; any and all thoughts on this are
> > > > appreciated from the community.
> > > >
> > > > I think it is important to distinguish that there are two parts to
> > > > this. One would be a server-side interceptor framework and the other
> > > > would be the interceptor implementations themselves.
> > > >
> > > > The idea would be that the interceptor framework manifests as a plug
> > > > point in the request/response paths that by itself has negligible
> > > > performance impact, as without an interceptor registered in the
> > > > framework it is essentially a no-op. This way the out-of-the-box
> > > > behavior of the Kafka broker remains essentially unchanged; it is only
> > > > if the cluster administrator registers an interceptor into the
> > > > framework that the path of a record is intercepted. This is much like
> > > > the already accepted 

[jira] [Resolved] (KAFKA-9267) ZkSecurityMigrator should not create /controller node

2019-12-06 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-9267.
--
Fix Version/s: 2.5.0
   Resolution: Fixed

Issue resolved by pull request 7778
[https://github.com/apache/kafka/pull/7778]

> ZkSecurityMigrator should not create /controller node
> -
>
> Key: KAFKA-9267
> URL: https://issues.apache.org/jira/browse/KAFKA-9267
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: NanerLee
>Priority: Major
> Fix For: 2.5.0
>
>
> As we can see in the source code – [ZkSecurityMigrator.scala#L226|#L226] – 
> _ZkSecurityMigrator_ checks and sets ACLs recursively for each path in 
> _SecureRootPaths_, and _/controller_ is also in _SecureRootPaths_.
> As expected, _zkClient.makeSurePersistentPathExists()_ will create the 
> _/controller_ node if _/controller_ does not exist.
> _/controller_ is an *EPHEMERAL* node used for controller election, but 
> _makeSurePersistentPathExists()_ will create a *PERSISTENT* node with *null* 
> data.
> If that happens, the null data will cause an *NPE*, the controller cannot be 
> elected, and the Kafka cluster will be unavailable.
> In addition, a *PERSISTENT* node doesn't disappear automatically; we have to 
> delete it manually to fix the problem.
>  
> *PERSISTENT* _/controller_ node with *null* data in zk:
> {code:java}
> [zk: localhost:2181(CONNECTED) 16] get /kafka/controller
> null
> cZxid = 0x112284
> ctime = Tue Dec 03 18:37:26 CST 2019
> mZxid = 0x112284
> mtime = Tue Dec 03 18:37:26 CST 2019
> pZxid = 0x112284
> cversion = 0
> dataVersion = 0
> aclVersion = 1
> ephemeralOwner = 0x0
> dataLength = 0
> numChildren = 0{code}
> *Normal* /controller node in zk:
> {code:java}
> [zk: localhost:2181(CONNECTED) 21] get /kafka/controller
> {"version":1,"brokerid":1001,"timestamp":"1575370170528"}
> cZxid = 0x1123e1
> ctime = Tue Dec 03 18:49:30 CST 2019
> mZxid = 0x1123e1
> mtime = Tue Dec 03 18:49:30 CST 2019
> pZxid = 0x1123e1
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x16ecb572df50021
> dataLength = 57
> numChildren = 0{code}
>  *NPE* in controller.log : 
> {code:java}
> [2019-11-21 15:02:41,276] INFO [ControllerEventThread controllerId=1002] 
> Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
> [2019-11-21 15:02:41,282] ERROR [ControllerEventThread controllerId=1002] 
> Error processing event Startup 
> (kafka.controller.ControllerEventManager$ControllerEventThread)
> java.lang.NullPointerException
>  at com.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:857)
>  at 
> com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2572)
>  at kafka.utils.Json$.parseBytes(Json.scala:62)
>  at kafka.zk.ControllerZNode$.decode(ZkData.scala:56)
>  at kafka.zk.KafkaZkClient.getControllerId(KafkaZkClient.scala:902)
>  at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$elect(KafkaController.scala:1199)
>  at 
> kafka.controller.KafkaController$Startup$.process(KafkaController.scala:1148)
>  at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:86)
>  at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:86)
>  at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:86)
>  at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
>  at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:85)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82){code}
>  
> So, I have submitted a PR so that _ZkSecurityMigrator_ will not touch the 
> _/controller_ node when _/controller_ does not exist.
> This bug seems to affect all versions; please review and merge the PR as soon 
> as possible.
> Thanks!
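>
> A minimal illustration of the intended behavior, sketched here against the 
> plain ZooKeeper client (illustrative only, not the actual PR):
> {code:java}
> import java.util.List;
> import org.apache.zookeeper.ZooDefs;
> import org.apache.zookeeper.ZooKeeper;
> import org.apache.zookeeper.data.ACL;
>
> public class ControllerAclSketch {
>     // Only secure /controller if it already exists; never create the
>     // ephemeral election node as a persistent one.
>     static void secureControllerPath(ZooKeeper zk) throws Exception {
>         String path = "/controller";
>         List<ACL> acls = ZooDefs.Ids.CREATOR_ALL_ACL;
>         if (zk.exists(path, false) != null) {
>             zk.setACL(path, acls, -1); // -1 matches any ACL version
>         }
>         // else: skip -- the elected controller creates the node itself,
>         // as EPHEMERAL, with proper data.
>     }
> }
> {code}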



--
This message was sent by Atlassian Jira
(v8.3.4#803005)