Build failed in Jenkins: kafka-2.3-jdk8 #4

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-8315: fix the JoinWindows retention deprecation doc (#6664)

[rajinisivaram] KAFKA-8052; Ensure fetch session epoch is updated before new 
request

[jason] MINOR: A few logging improvements in the broker (#6773)

--
[...truncated 2.54 MB...]

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.CreateRequest.<init>(ZooKeeperClient.scala:507)
at 
kafka.zookeeper.ZooKeeperClientTest.testGetChildrenExistingZNode(ZooKeeperClientTest.scala:204)

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED
kafka.zookeeper.ZooKeeperClientTest.testZNodeChangeHandlerForCreation failed, 
log available in 


kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.ExistsRequest.<init>(ZooKeeperClient.scala:515)
at 
kafka.zookeeper.ZooKeeperClientTest.testZNodeChangeHandlerForCreation(ZooKeeperClientTest.scala:276)

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED
kafka.zookeeper.ZooKeeperClientTest.testGetAclExistingZNode failed, log 
available in 


kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.CreateRequest.<init>(ZooKeeperClient.scala:507)
at 
kafka.zookeeper.ZooKeeperClientTest.testGetAclExistingZNode(ZooKeeperClientTest.scala:181)

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED
kafka.zookeeper.ZooKeeperClientTest.testSetAclNonExistentZNode failed, log 
available in 


kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.SetAclRequest.<init>(ZooKeeperClient.scala:531)
at 
kafka.zookeeper.ZooKeeperClientTest.testSetAclNonExistentZNode(ZooKeeperClientTest.scala:191)

Caused by:
java.lang.ClassNotFoundException: scala.Product$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 2 more

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED
kafka.zookeeper.ZooKeeperClientTest.testConnectionLossRequestTermination 
failed, log available in 


kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
FAILED
java.lang.NoClassDefFoundError: scala/Product$class
at kafka.zookeeper.GetDataRequest.<init>(ZooKeeperClient.scala:519)
at 
kafka.zookeeper.ZooKeeperClientTest$$anonfun$6.apply(ZooKeeperClientTest.scala:458)
at 
kafka.zookeeper.ZooKeeperClientTest$$anonfun$6.apply(ZooKeeperClientTest.scala:458)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at scala.collection.TraversableLike.map(TraversableLike.scala:237)
at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at 
kafka.zookeeper.ZooKeeperClientTest.testConnectionLossRequestTermination(ZooKeeperClientTest.scala:458)

Caused by:
java.lang.ClassNotFoundException: scala.Product$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 9 more

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

Build failed in Jenkins: kafka-trunk-jdk11 #558

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: A few logging improvements in the broker (#6773)

[jason] MINOR: Set `replicaId` for OffsetsForLeaderEpoch from followers (#6775)

--
[...truncated 2.47 MB...]
org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > identity STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > identity PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3664

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8052; Ensure fetch session epoch is updated before new request

[github] MINOR: A few logging improvements in the broker (#6773)

[jason] MINOR: Set `replicaId` for OffsetsForLeaderEpoch from followers (#6775)

--
[...truncated 2.46 MB...]

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldInitializeStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldInitializeStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipNullKeysWhenRestoring STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipNullKeysWhenRestoring PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReturnInitializedStoreNames STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReturnInitializedStoreNames PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointRestoredOffsetsToFile STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointRestoredOffsetsToFile PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldUnlockGlobalStateDirectoryOnClose STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldUnlockGlobalStateDirectoryOnClose PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldAttemptToCloseAllStoresEvenWhenSomeException STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldAttemptToCloseAllStoresEvenWhenSomeException PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfCallbackIsNull STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfCallbackIsNull PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipGlobalInMemoryStoreOffsetsToFile STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipGlobalInMemoryStoreOffsetsToFile PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReadCheckpointOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReadCheckpointOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCloseStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCloseStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentIfTryingToRegisterStoreThatIsNotGlobal STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentIfTryingToRegisterStoreThatIsNotGlobal PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldFlushStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldFlushStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldLockGlobalStateDirectory STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldLockGlobalStateDirectory PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRestoreRecordsFromCheckpointToHighwatermark STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRestoreRecordsFromCheckpointToHighwatermark PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldConvertValuesIfStoreImplementsTimestampedBytesStore STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldConvertValuesIfStoreImplementsTimestampedBytesStore PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowLockExceptionIfIOExceptionCaughtWhenTryingToLockStateDir STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowLockExceptionIfIOExceptionCaughtWhenTryingToLockStateDir PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfAttemptingToRegisterStoreTwice STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfAttemptingToRegisterStoreTwice PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 

[jira] [Created] (KAFKA-8405) Remove deprecated preferred leader RPC and Command

2019-05-21 Thread Jose Armando Garcia Sancio (JIRA)
Jose Armando Garcia Sancio created KAFKA-8405:
-

 Summary: Remove deprecated preferred leader RPC and Command
 Key: KAFKA-8405
 URL: https://issues.apache.org/jira/browse/KAFKA-8405
 Project: Kafka
  Issue Type: Task
  Components: admin
Affects Versions: 3.0.0
Reporter: Jose Armando Garcia Sancio
 Fix For: 3.0.0


For version 2.4.0, we deprecated:
# AdminClient.electPreferredLeaders
# ElectPreferredLeadersResult
# ElectPreferredLeadersOptions
# PreferredReplicaLeaderElectionCommand.

For version 3.0.0 we should remove all of these symbols and the references to 
them. For the command, that includes (an illustrative migration sketch follows 
below):
# bin/kafka-preferred-replica-election.sh
# bin/windows/kafka-preferred-replica-election.bat
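
For reference, a minimal sketch of what migrating a caller off the deprecated
method could look like, assuming the electLeaders API from KIP-460 is the
replacement; the class and method names below are taken from the 2.4 AdminClient
and may differ once the removal actually lands:

import java.util.Collections;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class PreferredLeaderElectionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Replacement for the deprecated AdminClient.electPreferredLeaders:
            // ask the controller to elect the preferred replica as leader for
            // the given partition.
            Set<TopicPartition> partitions =
                Collections.singleton(new TopicPartition("my-topic", 0));
            admin.electLeaders(ElectionType.PREFERRED, partitions).partitions().get();
        }
    }
}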



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-455: Create an Administrative API for Replica Reassignment

2019-05-21 Thread George Li
Hi Colin,

Great! Looking forward to these features. +1 (non-binding)

What is the estimated timeline to have this implemented? If any help is needed 
in the implementation of cancelling reassignments, I can help if there are 
spare cycles.


Thanks,
George



On Thursday, May 16, 2019, 9:48:56 AM PDT, Colin McCabe 
 wrote:  
 
 Hi George,

Yes, KIP-455 allows the reassignment of individual partitions to be cancelled.  
I think it's very important for these operations to be at the partition level.

best,
Colin
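
For readers skimming the thread, here is a minimal sketch of how partition-level
cancellation could look against the Admin API proposed in KIP-455. The names
below (alterPartitionReassignments, listPartitionReassignments,
NewPartitionReassignment) are assumptions based on the KIP draft, which was
still being revised in this thread; an empty Optional stands for "cancel this
partition's pending reassignment":

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.clients.admin.PartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class CancelReassignmentsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // List every reassignment that is currently in flight.
            Map<TopicPartition, PartitionReassignment> inFlight =
                admin.listPartitionReassignments().reassignments().get();

            // An empty Optional per partition cancels that pending reassignment.
            Map<TopicPartition, Optional<NewPartitionReassignment>> cancellations = new HashMap<>();
            for (TopicPartition tp : inFlight.keySet()) {
                cancellations.put(tp, Optional.empty());
            }
            admin.alterPartitionReassignments(cancellations).all().get();
        }
    }
}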

On Tue, May 14, 2019, at 16:34, George Li wrote:
>  Hi Colin,
> 
> Thanks for the updated KIP.  It has very good improvements to Kafka 
> reassignment operations. 
> 
> One question: it looks like the KIP includes the cancellation of 
> individual pending reassignments as well, when the 
> AlterPartitionReassignmentRequest has empty replicas for the 
> topic/partition. Will you also be implementing the partition 
> cancellation/rollback in the PR? If yes, it will make KIP-236 (it 
> already has a PR) trivial, since to cancel all pending reassignments, 
> one just needs to do a ListPartitionReassignmentsRequest, then submit 
> empty replicas for all those topic/partitions in 
> one AlterPartitionReassignmentRequest. 
> 
> 
> Thanks,
> George
> 
>    On Friday, May 10, 2019, 8:44:31 PM PDT, Colin McCabe 
>  wrote:  
>  
>  On Fri, May 10, 2019, at 17:34, Colin McCabe wrote:
> > On Fri, May 10, 2019, at 16:43, Jason Gustafson wrote:
> > > Hi Colin,
> > > 
> > > I think storing reassignment state at the partition level is the right 
> > > move
> > > and I also agree that replicas should understand that there is a
> > > reassignment in progress. This makes KIP-352 a trivial follow-up for
> > > example. The only doubt I have is whether the leader and isr znode is the
> > > right place to store the target reassignment. It is a bit odd to keep the
> > > target assignment in a separate place from the current assignment, right? 
> > > I
> > > assume the thinking is probably that although the current assignment 
> > > should
> > > probably be in the leader and isr znode as well, it is hard to move the
> > > state in a compatible way. Is that right? But if we have no plan to remove
> > > the assignment znode, do you see a downside to storing the target
> > > assignment there as well?
> > >
> > 
> > Hi Jason,
> > 
> > That's a good point -- it's probably better to keep the target 
> > assignment in the same znode as the current assignment, for 
> > consistency.  I'll change the KIP.
> 
> Hi Jason,
> 
> Thanks again for the review.
> 
> I took another look at this, and I think we should stick with the 
> initial proposal of putting the reassignment state into 
> /brokers/topics/[topic]/partitions/[partitionId]/state.  The reason is 
> that we'll want to bump the leader epoch for the partition when 
> changing the reassignment state, and the leader epoch resides in that 
> znode anyway.  I agree there is some inconsistency here, but so be it: 
> if we were to greenfield these zookeeper data structures, we might do 
> it differently, but the proposed scheme will work fine and be 
> extensible for the future.
> 
> > 
> > > A few additional questions:
> > > 
> > > 1. Should `alterPartitionReassignments` be `alterPartitionAssignments`?
> > > It's the current assignment we're altering, right?
> > 
> > That's fair.  AlterPartitionAssignments reads a little better, and I'll 
> > change it to that.
> 
> +1.  I've changed the RPC and API name in the wiki.
> 
> > 
> > > 2. Does this change affect the Metadata API? In other words, are clients
> > > aware of reassignments? If so, then we probably need a change to
> > > UpdateMetadata as well. The only alternative I can think of would be to
> > > represent the replica set in the Metadata request as the union of the
> > > current and target replicas, but I can't think of any benefit to hiding
> > > reassignments. Note that if we did this, we probably wouldn't need a
> > > separate API to list reassignments.
> > 
> > I thought about this a bit... and I think on balance, you're right.  We 
> > should keep this information together with the replica nodes, isr 
> > nodes, and offline replicas, and that information is available in the 
> > MetadataResponse. 
> >  However, I do think in order to do this, we'll need a flag in the 
> > MetadataRequest that specifies "only show me reassigning partitions".  
> > I'll add this.
> 
> I revisited this, and I think we should stick with the original 
> proposal of having a separate ListPartitionReassignments API.  There 
> really is no use case where the Producer or Consumer needs to know 
> about a reassignment.  They should just be notified when the set of 
> partitions changes, which doesn't require changes to 
> MetadataRequest/Response.  The Admin client only cares if someone is 
> managing the reassignment.  So adding this state to the 
> MetadataResponse adds overhead for no real benefit.  In the common case 
> where 

Build failed in Jenkins: kafka-trunk-jdk11 #557

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-8315: fix the JoinWindows retention deprecation doc (#6664)

[github] KAFKA-8052; Ensure fetch session epoch is updated before new request

--
[...truncated 2.38 MB...]
at 
org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.start(EmbeddedConnectCluster.java:132)
at 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.setup(RebalanceSourceConnectorsIntegrationTest.java:92)

java.lang.RuntimeException: Could not stop brokers
at 
org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.stop(EmbeddedConnectCluster.java:148)
at 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.close(RebalanceSourceConnectorsIntegrationTest.java:98)

Caused by:
java.lang.RuntimeException: Could not shutdown producer
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.stop(EmbeddedKafkaCluster.java:141)
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.after(EmbeddedKafkaCluster.java:102)
at 
org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.stop(EmbeddedConnectCluster.java:143)
... 1 more

Caused by:
java.lang.NullPointerException
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.stop(EmbeddedKafkaCluster.java:138)
... 3 more

org.apache.kafka.connect.integration.ErrorHandlingIntegrationTest > 
testSkipRetryAndDLQWithHeaders STARTED
org.apache.kafka.connect.integration.ErrorHandlingIntegrationTest.testSkipRetryAndDLQWithHeaders
 failed, log available in 


org.apache.kafka.connect.integration.ErrorHandlingIntegrationTest > 
testSkipRetryAndDLQWithHeaders FAILED
java.lang.IllegalStateException: Shutdown in progress
at 
java.base/java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.base/java.lang.Runtime.addShutdownHook(Runtime.java:215)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:261)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:231)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:241)
at kafka.utils.TestUtils$.tempDir(TestUtils.scala:95)
at kafka.zk.EmbeddedZookeeper.<init>(EmbeddedZookeeper.scala:39)
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.start(EmbeddedKafkaCluster.java:106)
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.before(EmbeddedKafkaCluster.java:97)
at 
org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.start(EmbeddedConnectCluster.java:132)
at 
org.apache.kafka.connect.integration.ErrorHandlingIntegrationTest.setup(ErrorHandlingIntegrationTest.java:90)

java.lang.RuntimeException: Could not stop brokers
at 
org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.stop(EmbeddedConnectCluster.java:148)
at 
org.apache.kafka.connect.integration.ErrorHandlingIntegrationTest.close(ErrorHandlingIntegrationTest.java:99)

Caused by:
java.lang.RuntimeException: Could not shutdown producer
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.stop(EmbeddedKafkaCluster.java:141)
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.after(EmbeddedKafkaCluster.java:102)
at 
org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.stop(EmbeddedConnectCluster.java:143)
... 1 more

Caused by:
java.lang.NullPointerException
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.stop(EmbeddedKafkaCluster.java:138)
... 3 more

org.apache.kafka.connect.integration.ConnectorCientPolicyIntegrationTest > 
testCreateWithNotAllowedOverridesForPrincipalPolicy STARTED
org.apache.kafka.connect.integration.ConnectorCientPolicyIntegrationTest.testCreateWithNotAllowedOverridesForPrincipalPolicy
 failed, log available in 


org.apache.kafka.connect.integration.ConnectorCientPolicyIntegrationTest > 
testCreateWithNotAllowedOverridesForPrincipalPolicy FAILED
java.lang.IllegalStateException: Shutdown in progress
at 
java.base/java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.base/java.lang.Runtime.addShutdownHook(Runtime.java:215)
at 

Build failed in Jenkins: kafka-trunk-jdk8 #3663

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8265: Fix override config name to match KIP-458. (#6776)

[wangguoz] KAFKA-8315: fix the JoinWindows retention deprecation doc (#6664)

--
[...truncated 2.54 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED


Build failed in Jenkins: kafka-2.3-jdk8 #3

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-3143: Controller should transition offline replicas on startup

--
[...truncated 2.91 MB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName 

[jira] [Resolved] (KAFKA-7565) NPE in KafkaConsumer

2019-05-21 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-7565.
---
   Resolution: Duplicate
Fix Version/s: 2.3.0

> NPE in KafkaConsumer
> 
>
> Key: KAFKA-7565
> URL: https://issues.apache.org/jira/browse/KAFKA-7565
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.1.1
>Reporter: Alexey Vakhrenev
>Priority: Critical
> Fix For: 2.3.0
>
>
> The stacktrace is
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$1.onSuccess(Fetcher.java:221)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$1.onSuccess(Fetcher.java:202)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:563)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:390)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:244)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1171)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> {noformat}
> Couldn't find minimal reproducer, but it happens quite often in our system. 
> We use {{pause()}} and {{wakeup()}} methods quite extensively, maybe it is 
> somehow related.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8052) Intermittent INVALID_FETCH_SESSION_EPOCH error on FETCH request

2019-05-21 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8052.
---
   Resolution: Fixed
 Reviewer: Jason Gustafson
Fix Version/s: 2.3.0

> Intermittent INVALID_FETCH_SESSION_EPOCH error on FETCH request 
> 
>
> Key: KAFKA-8052
> URL: https://issues.apache.org/jira/browse/KAFKA-8052
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.1.0
>Reporter: Bartek Jakub
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.3.0
>
>
> I noticed in my logs some weird behavior. I see in logs intermittent log: 
> {noformat}
> 2019-03-06 14:02:13.024 INFO 1 --- [container-1-C-1] 
> o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-4, 
> groupId=service-main] Node 2 was unable to process the fetch request with 
> (sessionId=1321134604, epoch=125730): INVALID_FETCH_SESSION_EPOCH.{noformat}
> which happens every ~1 hour. 
>  
> I was wondering if it's my Kafka provider fault so I decided to investigate 
> the problem and I tried to reproduce the issue on my local - with success. My 
> configuration is:
>  * Kafka Clients version - 2.0.1
>  * Kafka - 2.12_2.1.0
>  
> I enabled trace logs for 'org.apache.kafka.clients' and that's what I get:
> {noformat}
> 2019-03-05 21:04:16.161 DEBUG 3052 --- [container-0-C-1] 
> o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-3, 
> groupId=service-main] Built incremental fetch (sessionId=197970881, 
> epoch=525) for node 1001. Added (), altered (), removed () out of 
> (itunes-command-19, itunes-command-18, itunes-command-11, itunes-command-10, 
> itunes-command-13, itunes-command-12, itunes-command-15, itunes-command-14, 
> itunes-command-17, itunes-command-16)
> 2019-03-05 21:04:16.161 DEBUG 3052 --- [container-0-C-1] 
> o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-3, 
> groupId=service-main] Sending READ_UNCOMMITTED 
> IncrementalFetchRequest(toSend=(), toForget=(), implied=(itunes-command-19, 
> itunes-command-18, itunes-command-11, itunes-command-10, itunes-command-13, 
> itunes-command-12, itunes-command-15, itunes-command-14, itunes-command-17, 
> itunes-command-16)) to broker localhost:9092 (id: 1001 rack: null)
> 2019-03-05 21:04:16.161 TRACE 3052 --- [container-0-C-1] 
> org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-3, 
> groupId=service-main] Sending FETCH 
> {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=197970881,epoch=525,topics=[],forgotten_topics_data=[]}
>  with correlation id 629 to node 1001
> 2019-03-05 21:04:16.664 TRACE 3052 --- [container-0-C-1] 
> org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-3, 
> groupId=service-main] Completed receive from node 1001 for FETCH with 
> correlation id 629, received 
> {throttle_time_ms=0,error_code=0,session_id=197970881,responses=[]}
> 2019-03-05 21:04:16.664 DEBUG 3052 --- [container-0-C-1] 
> o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-3, 
> groupId=service-main] Node 1001 sent an incremental fetch response for 
> session 197970881 with response=(), implied=(itunes-command-19, 
> itunes-command-18, itunes-command-11, itunes-command-10, itunes-command-13, 
> itunes-command-12, itunes-command-15, itunes-command-14, itunes-command-17, 
> itunes-command-16)
> 2019-03-05 21:04:16.665 DEBUG 3052 --- [container-0-C-1] 
> o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-3, 
> groupId=service-main] Built incremental fetch (sessionId=197970881, 
> epoch=526) for node 1001. Added (), altered (), removed () out of 
> (itunes-command-19, itunes-command-18, itunes-command-11, itunes-command-10, 
> itunes-command-13, itunes-command-12, itunes-command-15, itunes-command-14, 
> itunes-command-17, itunes-command-16)
> 2019-03-05 21:04:16.665 DEBUG 3052 --- [container-0-C-1] 
> o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-3, 
> groupId=service-main] Sending READ_UNCOMMITTED 
> IncrementalFetchRequest(toSend=(), toForget=(), implied=(itunes-command-19, 
> itunes-command-18, itunes-command-11, itunes-command-10, itunes-command-13, 
> itunes-command-12, itunes-command-15, itunes-command-14, itunes-command-17, 
> itunes-command-16)) to broker localhost:9092 (id: 1001 rack: null)
> 2019-03-05 21:04:16.665 TRACE 3052 --- [container-0-C-1] 
> org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-3, 
> groupId=service-main - F630] Sending FETCH 
> {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=197970881,epoch=526,topics=[],forgotten_topics_data=[]}
>  with correlation id 630 to node 1001
> 2019-03-05 21:04:17.152 DEBUG 3052 --- [ 

Build failed in Jenkins: kafka-trunk-jdk11 #556

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8265: Fix override config name to match KIP-458. (#6776)

--
[...truncated 2.50 MB...]
org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testNullPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testNullPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator 
STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator 
PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGeneratorPaddingBytes STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGeneratorPaddingBytes PASSED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle STARTED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle PASSED

org.apache.kafka.trogdor.workload.TimeIntervalTransactionsGeneratorTest > 
testCommitsTransactionAfterIntervalPasses STARTED

org.apache.kafka.trogdor.workload.TimeIntervalTransactionsGeneratorTest > 
testCommitsTransactionAfterIntervalPasses PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithFailedExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithFailedExit PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessNotFound STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessNotFound PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessForceKillTimeout STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessForceKillTimeout PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop 
STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop 
PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithNormalExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithNormalExit PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionsSpec STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionsSpec PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramPercentiles 
STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramPercentiles 
PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramSamples STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramSamples PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramAverage STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramAverage PASSED

org.apache.kafka.trogdor.basic.BasicPlatformTest > testCreateBasicPlatform 
STARTED

org.apache.kafka.trogdor.basic.BasicPlatformTest > testCreateBasicPlatform 
PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentCreateWorkers STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentCreateWorkers PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetUptime STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetUptime PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentStartShutdown STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentStartShutdown PASSED

org.apache.kafka.trogdor.agent.AgentTest > 
testCreateExpiredWorkerIsNotScheduled STARTED

org.apache.kafka.trogdor.agent.AgentTest > 
testCreateExpiredWorkerIsNotScheduled PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentProgrammaticShutdown STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentProgrammaticShutdown PASSED

org.apache.kafka.trogdor.agent.AgentTest > testDestroyWorkers STARTED

org.apache.kafka.trogdor.agent.AgentTest > testDestroyWorkers PASSED

org.apache.kafka.trogdor.agent.AgentTest > testKiboshFaults STARTED

org.apache.kafka.trogdor.agent.AgentTest > testKiboshFaults PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithTimeout STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithTimeout PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithNormalExit STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithNormalExit 

[jira] [Created] (KAFKA-8404) Authorization header is not passed in Connect when forwarding REST requests

2019-05-21 Thread Robert Yokota (JIRA)
Robert Yokota created KAFKA-8404:


 Summary: Authorization header is not passed in Connect when 
forwarding REST requests
 Key: KAFKA-8404
 URL: https://issues.apache.org/jira/browse/KAFKA-8404
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Robert Yokota
 Fix For: 2.3.0


When Connect forwards a REST request from one worker to another, the 
Authorization header is not forwarded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3662

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-3143: Controller should transition offline replicas on startup

--
[...truncated 2.46 MB...]

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor should create a Consumed with Serdes and timestampExtractor 
STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor should create a Consumed with Serdes and timestampExtractor 
PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
resetPolicy should create a Consumed with Serdes and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
resetPolicy should create a Consumed with Serdes and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed 
KTable#suppress should correctly 

Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-05-21 Thread Sophie Blee-Goldman
I definitely agree with Guozhang's "meta" comment: if it's possible to
allow users to pick and choose individual RocksDB metrics that would be
ideal. One further question is whether these will be debug or info level
metrics, or a separate level altogether? If there is a nontrivial overhead
associated with attaching RocksDB metrics, it would probably be good to be
able to independently turn RocksDB metrics on and off.
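
For what it's worth, one existing knob for gating expensive metrics is the
Streams metrics recording level; the small sketch below shows how it is set,
under the assumption (not settled in this thread) that the new RocksDB metrics
would be recorded only at DEBUG level:

import java.util.Properties;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.streams.StreamsConfig;

public class MetricsLevelExample {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "rocksdb-metrics-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // INFO is the default; DEBUG additionally records the more expensive sensors.
        props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG,
                  Sensor.RecordingLevel.DEBUG.name());
        return props;
    }
}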

On Tue, May 21, 2019 at 9:00 AM Guozhang Wang  wrote:

> Hello Bruno,
>
> Thanks for the KIP, I have a few minor comments and a meta one which are
> relatively aligned with other folks:
>
> Minor:
>
> 1) Regarding the "rocksdb-state-id = [store ID]", to be consistent with
> other state store metrics (see
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams),
> this tag should be either "rocksdb-window-state-id",
> "rocksdb-session-state-id" or "rocksdb-state-id". For window / session
> store backed by rocksDB, the tags should not be "rocksdb-state-id".
>
> 2) Also for window / session stores, my take is that you plan to have
> multiple RocksDB instances behind the scenes reporting to the same set of
> metrics, is that right? My concern is that for such types of state stores,
> most of the access would be on the first segment's RocksDB instance, and
> hence coalescing all of the instances into a single set of metrics may
> dilute it.
>
> 3) I agree with @sop...@confluent.io  that we should
> have some documentation educating users on what to do when they see
> anomalies in the metrics; though I think this is a long-running effort that
> may go beyond this KIP's scope, so let's keep that as a separate umbrella
> JIRA to accumulate knowledge over time.
>
>
> Meta:
>
> 4) Instead of trying to enumerate all the ones that might be helpful, I'd
> recommend that we expose a user-friendly API in StreamsMetrics to allow
> users to add more metrics from RocksDB they'd like to have, while only
> keeping a small set of most-meaningful ones that are ubiquitously useful
> out-of-the-box. WDYT?
>
>
> Guozhang
>
>
>
> On Tue, May 21, 2019 at 8:04 AM Dongjin Lee  wrote:
>
>> Hi Bruno,
>>
>> I just read the KIP. I think this feature is great. As far as I know, most
>> Kafka users monitor the host resources, JVM resources, and Kafka metrics
>> only, not RocksDB, since configuring the statistics feature is a little bit
>> tiresome. Since RocksDB impacts the performance of Kafka Streams, I believe
>> providing these metrics with other metrics in one place is much better.
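
For comparison, a sketch of the manual route the KIP would make unnecessary:
enabling RocksDB's own statistics through a RocksDBConfigSetter and then
exporting the counters yourself. The Statistics wiring is an assumption about
the rocksdbjni API bundled with recent Streams versions:

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;
import org.rocksdb.Statistics;

public class StatisticsConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // Attach a Statistics object; its counters must then be read and
        // exported by the application itself.
        options.setStatistics(new Statistics());
    }
}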
>>
>> However, I am not entirely sure how much information should be provided to
>> the users in the documentation - how the user can tune RocksDB may be on
>> the boundary of the scope. What do you think?
>>
>> +1. I entirely agree with Bill's comments; I also think `rocksdb-store-id`
>> is better than `rocksdb-state-id`, and metrics on total compactions and an
>> average number of compactions are also needed.
>>
>> Regards,
>> Dongjin
>>
>> On Tue, May 21, 2019 at 2:48 AM John Roesler  wrote:
>>
>> > Hi Bruno,
>> >
>> > Looks really good overall. This is going to be an awesome addition.
>> >
>> > My only thought was that we have "bytes-flushed-(rate|total) and
>> > flush-time-(avg|min|max)" metrics, and the description states that
>> > these are specifically for Memtable flush operations. What do you
>> > think about calling it "memtable-bytes-flushed... (etc)"? On one hand,
>> > I could see this being redundant, since that's the only thing that
>> > gets "flushed" inside of Rocks. But on the other, we have an
>> > independent "flush" operation in streams, which might be confusing.
>> > Plus, it might help people who are looking at the full "menu" of
>> > hundreds of metrics. They can't read and remember every description
>> > while trying to understand the full list of metrics, so going for
>> > maximum self-documentation in the name seems nice.
>> >
>> > But that's a minor thought. Modulo the others' comments, this looks good
>> > to me.
>> >
>> > Thanks,
>> > -John
>> >
>> > On Mon, May 20, 2019 at 11:07 AM Bill Bejeck  wrote:
>> > >
>> > > Hi Bruno,
>> > >
>> > > Thanks for the KIP, this will be a useful addition.
>> > >
>> > > Overall the KIP looks good to me, and I have two minor comments.
>> > >
>> > > 1. For the tags, I'm wondering if rocksdb-state-id should be
>> > > rocksdb-store-id
>> > > instead?
>> > >
>> > > 2. With the compaction metrics, would it be possible to add total
>> > > compactions and an average number of compactions?  I've taken a look at
>> > > the available RocksDB metrics, and I'm not sure.  But users can control
>> > > how many L0 files it takes to trigger compaction, so if it is possible,
>> > > it may be useful.
>> > >
>> > > Thanks,
>> > > Bill
>> > >
>> > >
>> > > On Mon, May 20, 2019 at 9:15 AM Bruno Cadonna 
>> > wrote:
>> > >
>> > > > Hi Sophie,
>> > > >
>> > > > Thank you for your comments.
>> > > >
>> > > > It's a good idea to supplement the metrics 

[VOTE] KIP-429: Kafka Consumer Incremental Rebalance Protocol

2019-05-21 Thread Guozhang Wang
Hello folks,

I'd like to start the voting for KIP-429 now, details can be found here:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP-429:KafkaConsumerIncrementalRebalanceProtocol-RebalanceCallbackErrorHandling

And the on-going PRs available for review:

Part I: https://github.com/apache/kafka/pull/6528
Part II: https://github.com/apache/kafka/pull/6778


Thanks
-- Guozhang


To add in contributors list

2019-05-21 Thread omkar mestry
Hello,

My name is Omkar Mahadev Mestry and I am a Java developer, currently
working on Java and Kafka Streams. I have been working with Kafka and would
like to contribute to the Kafka community. Please add me to the contributors
list so that I can start working with JIRA. I have attached my resume as
well.
PFA

Thanks & Regards
Omkar Mestry


Build failed in Jenkins: kafka-trunk-jdk11 #555

2019-05-21 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-3143: Controller should transition offline replicas on startup

--
[...truncated 2.46 MB...]
kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.TopicDeletionManagerTest > 
testBrokerFailureAfterDeletionStarted STARTED

kafka.controller.TopicDeletionManagerTest > 
testBrokerFailureAfterDeletionStarted PASSED

kafka.controller.TopicDeletionManagerTest > testInitialization STARTED

kafka.controller.TopicDeletionManagerTest > testInitialization PASSED

kafka.controller.TopicDeletionManagerTest > testBasicDeletion STARTED

kafka.controller.TopicDeletionManagerTest > testBasicDeletion PASSED

kafka.controller.TopicDeletionManagerTest > testDeletionWithBrokerOffline 
STARTED

kafka.controller.TopicDeletionManagerTest > testDeletionWithBrokerOffline PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testControlPlaneRequest STARTED

kafka.network.SocketServerTest > testControlPlaneRequest PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > testConnectionRateLimit STARTED

kafka.network.SocketServerTest > testConnectionRateLimit PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED


Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-05-21 Thread Guozhang Wang
Hello Bruno,

Thanks for the KIP, I have a few minor comments and a meta one which are
relatively aligned with other folks:

Minor:

1) Regarding the "rocksdb-state-id = [store ID]" tag, to be consistent with
other state store metrics (see
https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams),
this tag should be either "rocksdb-window-state-id",
"rocksdb-session-state-id" or "rocksdb-state-id". For window / session
stores backed by RocksDB, the tag should not be "rocksdb-state-id".

2) Also for window / session stores, my take is that you plan to have
multiple RocksDB instances behind the scenes reporting to the same set of
metrics, is that right? My concern is that for such types of state stores,
most of the access would be on the first segment's RocksDB instance, and
hence coalescing all instances into a single set of metrics may dilute it.

3) I agree with @sop...@confluent.io that we should have some documentation
educating users on what to do when they see anomalies in the metrics; though
this is a long-running effort that may go beyond this KIP's scope, so let's
keep it as a separate umbrella JIRA to accumulate knowledge over time.


Meta:

4) Instead of trying to enumerate all the ones that might be helpful, I'd
recommend that we expose a user-friendly API in StreamsMetrics to allow
users to add more metrics from RocksDB they'd like to have, while only
keeping a small set of most-meaningful ones that are ubiquitously useful
out-of-the-box. WDYT?
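
(As an illustration of what user-side extensibility could look like today: an
application can already attach a RocksDB Statistics object per store through
the existing RocksDBConfigSetter hook and export whichever tickers it cares
about itself. The sketch below is only a hedged example, not part of the KIP;
in particular it assumes the bundled RocksDB version exposes
Options#setStatistics.)

import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;
import org.rocksdb.Statistics;

// Hedged sketch: attach a Statistics object to every RocksDB-backed store so
// the application can read ticker counters (compactions, cache hits, ...) and
// publish them through its own metrics system.
public class StatisticsConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName,
                          final Options options,
                          final Map<String, Object> configs) {
        final Statistics statistics = new Statistics();
        options.setStatistics(statistics);      // assumes the bundled RocksDB exposes this setter
        options.setStatsDumpPeriodSec(300);     // also dump stats to the RocksDB LOG every 5 minutes
        // The application must keep a reference to 'statistics' (e.g. in a
        // static registry keyed by storeName) to read ticker values later.
    }
}

(The class would be registered via the existing `rocksdb.config.setter`
Streams config.)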


Guozhang



On Tue, May 21, 2019 at 8:04 AM Dongjin Lee  wrote:

> Hi Bruno,
>
> I just read the KIP. I think this feature is great. As far as I know, most
> Kafka users monitor only the host resources, JVM resources, and Kafka
> metrics, not RocksDB, because configuring the statistics feature is a little
> bit tiresome. Since RocksDB impacts the performance of Kafka Streams, I
> believe providing these metrics alongside the other metrics in one place is
> much better.
>
> However, I am not sure how much information should be provided to users in
> the documentation - explaining how the user can control RocksDB may be on
> the boundary of the KIP's scope. What do you think?
>
> +1. I entirely agree with Bill's comments; I also think `rocksdb-store-id`
> is better than `rocksdb-state-id`, and metrics on total compactions and the
> average number of compactions are also needed.
>
> Regards,
> Dongjin
>
> On Tue, May 21, 2019 at 2:48 AM John Roesler  wrote:
>
> > Hi Bruno,
> >
> > Looks really good overall. This is going to be an awesome addition.
> >
> > My only thought was that we have "bytes-flushed-(rate|total) and
> > flush-time-(avg|min|max)" metrics, and the description states that
> > these are specifically for Memtable flush operations. What do you
> > think about calling it "memtable-bytes-flushed... (etc)"? On one hand,
> > I could see this being redundant, since that's the only thing that
> > gets "flushed" inside of Rocks. But on the other, we have an
> > independent "flush" operation in streams, which might be confusing.
> > Plus, it might help people who are looking at the full "menu" of
> > hundreds of metrics. They can't read and remember every description
> > while trying to understand the full list of metrics, so going for
> > maximum self-documentation in the name seems nice.
> >
> > But that's a minor thought. Modulo the others' comments, this looks good
> > to me.
> >
> > Thanks,
> > -John
> >
> > On Mon, May 20, 2019 at 11:07 AM Bill Bejeck  wrote:
> > >
> > > Hi Bruno,
> > >
> > > Thanks for the KIP, this will be a useful addition.
> > >
> > > Overall the KIP looks good to me, and I have two minor comments.
> > >
> > > 1. For the tags, I'm wondering if rocksdb-state-id should be
> > > rocksdb-store-id instead?
> > >
> > > 2. With the compaction metrics, would it be possible to add total
> > > compactions and an average number of compactions? I've taken a look at
> > > the available RocksDB metrics, and I'm not sure. But users can control
> > > how many L0 files it takes to trigger compaction, so if it is possible,
> > > it may be useful.
> > >
> > > Thanks,
> > > Bill
> > >
> > >
> > > On Mon, May 20, 2019 at 9:15 AM Bruno Cadonna 
> > wrote:
> > >
> > > > Hi Sophie,
> > > >
> > > > Thank you for your comments.
> > > >
> > > > It's a good idea to supplement the metrics with a configuration option
> > > > to change them. I also had some thoughts about it. However, I think I
> > > > need some experimentation to get this right.
> > > >
> > > > I added the block cache hit rates for index and filter blocks to the
> > > > KIP. As far as I understood, they should stay at zero if users do not
> > > > configure RocksDB to include index and filter blocks in the block
> > > > cache. Did you understand it the same way? I guess in this case, too,
> > > > some experimentation would be good to be sure.
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > >
> > > > On Sat, May 18, 2019 at 2:29 

[jira] [Created] (KAFKA-8403) Suppress needs a Materialized variant

2019-05-21 Thread John Roesler (JIRA)
John Roesler created KAFKA-8403:
---

 Summary: Suppress needs a Materialized variant
 Key: KAFKA-8403
 URL: https://issues.apache.org/jira/browse/KAFKA-8403
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


The newly added KTable Suppress operator lacks a Materialized variant, which 
would be useful if you wanted to query the results of the suppression.

Suppression results will eventually match the upstream results, but the 
intermediate distinction may be meaningful for some applications. For example, 
you could want to query only the final results of a windowed aggregation.
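
For context, a minimal sketch of today's usage and of where a Materialized
variant would slot in (topic and store names are purely illustrative; the
two-argument overload in the trailing comment is the hypothetical API this
ticket asks for, not an existing one):

{code:java}
import java.time.Duration;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class SuppressExample {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        // Emit only the final count per 5-minute window, once the window closes.
        final KTable<Windowed<String>, Long> finalCounts = builder
                .<String, String>stream("input-topic")
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)))
                .count()
                .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));

        // There is currently no way to materialize 'finalCounts' for interactive
        // queries. A hypothetical suppress(Suppressed, Materialized) overload
        // would allow something like:
        //   .suppress(Suppressed.untilWindowCloses(...), Materialized.as("final-counts"));
    }
}
{code}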





[jira] [Resolved] (KAFKA-8152) Offline partition state not propagated by controller

2019-05-21 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-8152.
--
Resolution: Duplicate

> Offline partition state not propagated by controller
> 
>
> Key: KAFKA-8152
> URL: https://issues.apache.org/jira/browse/KAFKA-8152
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> Currently when the controller starts up, only the state of online partitions 
> will be sent to other brokers. Any broker which is started or restarted after 
> the controller will see only a subset of the partitions of any topic which 
> has offline partitions. If all the partitions for a topic are offline, then 
> the broker will not know of the topic at all. As far as I can tell, the bug 
> is the fact that `ReplicaStateMachine.startup` only does an initial state 
> change for replicas which are online.
> This can be reproduced with the following steps:
>  # Startup two brokers
>  # Create a single partition topic with rf=1
>  # Shutdown the broker where the replica landed
>  # Shutdown the other broker
>  # Restart the broker without the replica
>  # Run `kafka-topics --describe --bootstrap-server {server ip}`
> Note that the metadata inconsistency will only be apparent when using 
> `bootstrap-server` in `kafka-topics.sh`. Using zookeeper, everything will 
> seem normal.





[jira] [Resolved] (KAFKA-3143) inconsistent state in ZK when all replicas are dead

2019-05-21 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3143.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 5041
[https://github.com/apache/kafka/pull/5041]

> inconsistent state in ZK when all replicas are dead
> ---
>
> Key: KAFKA-3143
> URL: https://issues.apache.org/jira/browse/KAFKA-3143
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Major
>  Labels: reliability
> Fix For: 2.3.0
>
>
> This issue can be recreated in the following steps.
> 1. Start 3 brokers, 1, 2 and 3.
> 2. Create a topic with a single partition and 2 replicas, say on broker 1 and 
> 2.
> If we stop both replicas 1 and 2, depending on where the controller is, the 
> leader and isr stored in ZK in the end are different.
> If the controller is on broker 3, what's stored in ZK will be -1 for leader 
> and an empty set for ISR.
> On the other hand, if the controller is on broker 2 and we stop broker 1 
> followed by broker 2, what's stored in ZK will be 2 for leader and 2 for ISR.
> The issue is that in the first case, the controller will call 
> ReplicaStateMachine to transition to OfflineReplica, which will change the 
> leader and isr. However, in the second case, the controller fails over, but 
> we don't transition ReplicaStateMachine to OfflineReplica during controller 
> initialization.





Re: [DISCUSS] KIP-471: Expose RocksDB Metrics in Kafka Streams

2019-05-21 Thread Dongjin Lee
Hi Bruno,

I just read the KIP. I think this feature is great. As far as I know, most
Kafka users monitor only the host resources, JVM resources, and Kafka
metrics, not RocksDB, because configuring the statistics feature is a little
bit tiresome. Since RocksDB impacts the performance of Kafka Streams, I
believe providing these metrics alongside the other metrics in one place is
much better.

However, I am not sure how much information should be provided to users in
the documentation - explaining how the user can control RocksDB may be on the
boundary of the KIP's scope. What do you think?

+1. I entirely agree with Bill's comments; I also think `rocksdb-store-id` is
better than `rocksdb-state-id`, and metrics on total compactions and the
average number of compactions are also needed.

Regards,
Dongjin

On Tue, May 21, 2019 at 2:48 AM John Roesler  wrote:

> Hi Bruno,
>
> Looks really good overall. This is going to be an awesome addition.
>
> My only thought was that we have "bytes-flushed-(rate|total) and
> flush-time-(avg|min|max)" metrics, and the description states that
> these are specifically for Memtable flush operations. What do you
> think about calling it "memtable-bytes-flushed... (etc)"? On one hand,
> I could see this being redundant, since that's the only thing that
> gets "flushed" inside of Rocks. But on the other, we have an
> independent "flush" operation in streams, which might be confusing.
> Plus, it might help people who are looking at the full "menu" of
> hundreds of metrics. They can't read and remember every description
> while trying to understand the full list of metrics, so going for
> maximum self-documentation in the name seems nice.
>
> But that's a minor thought. Modulo the others' comments, this looks good
> to me.
>
> Thanks,
> -John
>
> On Mon, May 20, 2019 at 11:07 AM Bill Bejeck  wrote:
> >
> > Hi Bruno,
> >
> > Thanks for the KIP, this will be a useful addition.
> >
> > Overall the KIP looks good to me, and I have two minor comments.
> >
> > 1. For the tags, I'm wondering if rocksdb-state-id should be
> > rocksdb-store-id instead?
> >
> > 2. With the compaction metrics, would it be possible to add total
> > compactions and an average number of compactions? I've taken a look at
> > the available RocksDB metrics, and I'm not sure. But users can control
> > how many L0 files it takes to trigger compaction, so if it is possible,
> > it may be useful.
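
(A side note on the L0 trigger mentioned above: it can already be tuned from a
Streams application via the existing RocksDBConfigSetter hook. A minimal
sketch, with the trigger value picked purely for illustration, follows; it
would be registered via the `rocksdb.config.setter` config.)

import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Minimal sketch: lower the number of L0 files needed before a compaction is
// triggered for every RocksDB-backed store. The value 4 is only an example.
public class CompactionTriggerConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName,
                          final Options options,
                          final Map<String, Object> configs) {
        options.setLevel0FileNumCompactionTrigger(4);
    }
}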
> >
> > Thanks,
> > Bill
> >
> >
> > On Mon, May 20, 2019 at 9:15 AM Bruno Cadonna 
> wrote:
> >
> > > Hi Sophie,
> > >
> > > Thank you for your comments.
> > >
> > > It's a good idea to supplement the metrics with a configuration option
> > > to change them. I also had some thoughts about it. However, I think I
> > > need some experimentation to get this right.
> > >
> > > I added the block cache hit rates for index and filter blocks to the
> > > KIP. As far as I understood, they should stay at zero if users do not
> > > configure RocksDB to include index and filter blocks in the block
> > > cache. Did you understand it the same way? I guess in this case, too,
> > > some experimentation would be good to be sure.
> > >
> > > Best,
> > > Bruno
> > >
> > >
> > > On Sat, May 18, 2019 at 2:29 AM Sophie Blee-Goldman <
> sop...@confluent.io>
> > > wrote:
> > > >
> > > > Actually I wonder if it might be useful to users to be able to break
> > > > up the cache hit stats by type? Some people may choose to store index
> > > > and filter blocks alongside data blocks, and it would probably be very
> > > > helpful for them to know who is making more effective use of the cache
> > > > in order to tune how much of it is allocated to each. I'm not sure how
> > > > common this really is but I think it would be invaluable to those who
> > > > do. RocksDB performance can be quite opaque..
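
(For reference, the configuration being discussed here is the block-based
table option that places index and filter blocks in the block cache; without
it, the proposed index/filter hit-rate metrics would stay at zero. A hedged
sketch of setting it through RocksDBConfigSetter, with an example cache size,
could look like the following.)

import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Sketch: include index and filter blocks in the block cache so that cache
// hit-rate metrics for those block types report meaningful values.
public class CacheIndexAndFilterConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName,
                          final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(64 * 1024 * 1024L);      // example: 64 MB block cache
        tableConfig.setCacheIndexAndFilterBlocks(true);        // cache index/filter blocks too
        tableConfig.setPinL0FilterAndIndexBlocksInCache(true); // optional: keep L0 index/filter hot
        options.setTableFormatConfig(tableConfig);
    }
}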
> > > >
> > > > Cheers,
> > > > Sophie
> > > >
> > > > On Fri, May 17, 2019 at 5:01 PM Sophie Blee-Goldman <
> sop...@confluent.io
> > > >
> > > > wrote:
> > > >
> > > > > Hey Bruno!
> > > > >
> > > > > This all looks pretty good to me, but one suggestion I have is to
> > > > > supplement each of the metrics with some info on how the user can
> > > > > control them. In other words, which options could/should they set in
> > > > > RocksDBConfigSetter should they discover a particular bottleneck?
> > > > >
> > > > > I don't think this necessarily needs to go into the KIP, but I do
> > > > > think it should be included in the docs somewhere (happy to help
> > > > > build up the list of associated options when the time comes)
> > > > >
> > > > > Thanks!
> > > > > Sophie
> > > > >
> > > > > On Fri, May 17, 2019 at 2:54 PM Bruno Cadonna 
> > > wrote:
> > > > >
> > > > >> Hi all,
> > > > >>
> > > > >> this KIP describes the extension of Kafka Streams' metrics to
> > > > >> include RocksDB's internal statistics.
> > > > >>
> > > > >> Please have a look at it and let me know what you 

[jira] [Created] (KAFKA-8402) bin/kafka-preferred-replica-election.sh fails if generated json is bigger than 1MB

2019-05-21 Thread Vyacheslav Stepanov (JIRA)
Vyacheslav Stepanov created KAFKA-8402:
--

 Summary: bin/kafka-preferred-replica-election.sh fails if 
generated json is bigger than 1MB
 Key: KAFKA-8402
 URL: https://issues.apache.org/jira/browse/KAFKA-8402
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 1.1.1
Reporter: Vyacheslav Stepanov


If I run the script {{bin/kafka-preferred-replica-election.sh}} without specifying 
the list of topics/partitions, it will get all topics/partitions from 
zookeeper and transform them to json, then it will create a zookeeper node at 
{{/admin/preferred_replica_election}} using this json as the data for that 
zookeeper node. If the generated json is bigger than 1MB (the default maximum 
data size of a zookeeper node), the script will fail without giving a good 
description of the failure. The 1MB size can be reached if the number of 
topics/partitions is high enough.





[jira] [Resolved] (KAFKA-8320) Connect Error handling is using the RetriableException from common package

2019-05-21 Thread Randall Hauch (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8320.
--
   Resolution: Fixed
Fix Version/s: 2.2.1
   2.1.2
   2.3.0
   2.0.2

Merged and backported thru the `2.0` branch.

> Connect Error handling is using the RetriableException from common package
> --
>
> Key: KAFKA-8320
> URL: https://issues.apache.org/jira/browse/KAFKA-8320
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Major
> Fix For: 2.0.2, 2.3.0, 2.1.2, 2.2.1
>
>
> When a SourceConnector throws 
> org.apache.kafka.connect.errors.RetriableException during poll, the Connect 
> runtime is supposed to ignore the error and retry per 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect].
> When the connectors throw the exception, it is not handled gracefully. 
> WorkerSourceTask is catching the exception from the wrong package, 
> `org.apache.kafka.common.errors`. It is not clear from the API standpoint 
> which package the Connect framework supports. The safest thing would be to 
> support both packages, even though that is less desirable.
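
For reference, the pattern this ticket is about looks roughly like the sketch
below: a source task signals a transient failure with the Connect-package
RetriableException and expects the KIP-298 error handling to retry the poll.
The connector-specific names are made up for illustration.

{code:java}
import java.util.List;

import org.apache.kafka.connect.errors.RetriableException;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Illustrative task: on a transient failure it throws the *connect* package's
// RetriableException (not org.apache.kafka.common.errors.RetriableException),
// which the framework's error handling is expected to retry.
public abstract class ExampleSourceTask extends SourceTask {

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        try {
            return fetchFromExternalSystem();
        } catch (final RuntimeException transientFailure) {
            throw new RetriableException("Transient failure, retrying poll", transientFailure);
        }
    }

    // Hypothetical helper that reads from the external system.
    protected abstract List<SourceRecord> fetchFromExternalSystem();
}
{code}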





Re: [DISCUSS] KIP-317: Transparent Data Encryption

2019-05-21 Thread Sönke Liebau
Hi everybody,

I'd like to rekindle the discussion around KIP-317.
I have reworked the KIP a little bit in order to design everything as a
pluggable implementation. During the course of that work I've also decided
to rename the KIP, as encryption will only be transparent in some cases. It
is now called "Add end to end data encryption functionality to Apache
Kafka" [1].

I'd very much appreciate it if you could give the KIP a quick read. This is
not at this point a fully fleshed out design, as I would like to agree on
the underlying structure that I came up with first, before spending time on
details.

TL/DR is:
Create three pluggable classes:
KeyManager runs on the broker and manages which keys to use, key rollover,
etc.
KeyProvider runs on the client and retrieves keys based on what the
KeyManager tells it.
EncryptionEngine runs on the client and handles the actual encryption.
A first idea of the control flow between these components can be seen at [2],
and a rough interface sketch follows below.
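
To make the structure a bit more concrete, a very rough sketch of what the
three pluggable interfaces could look like follows. The names come from the
KIP, but all signatures are illustrative assumptions; the actual API has not
been defined yet.

// Purely illustrative; signatures are assumptions, not part of the KIP.
public interface KeyManager {            // runs on the broker
    /** Which key id should currently be used for the given topic. */
    String activeKeyId(String topic);

    /** Trigger a key rollover for the given topic. */
    void rollKey(String topic);
}

public interface KeyProvider {           // runs on the client
    /** Retrieve key material for the key id announced by the KeyManager. */
    byte[] keyFor(String keyId);
}

public interface EncryptionEngine {      // runs on the client
    byte[] encrypt(byte[] plaintext, byte[] key);
    byte[] decrypt(byte[] ciphertext, byte[] key);
}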

Please let me know any thoughts or concerns that you may have!

Best regards,
Sönke

[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-317%3A+Add+end-to-end+data+encryption+functionality+to+Apache+Kafka
[2]
https://cwiki.apache.org/confluence/download/attachments/85479936/kafka_e2e-encryption_control-flow.png?version=1=1558439227551=v2



On Fri, 10 Aug 2018 at 14:05, Sönke Liebau 
wrote:

> Hi Viktor,
>
> thanks for your input! We could accommodate magic headers by removing any
> known fixed bytes pre-encryption, sticking them in a header field and
> prepending them after decryption. However, I am not sure whether this is
> actually necessary, as most modern (AES for sure) algorithms are considered
> to be resistant to known-plaintext types of attack. Even if the entire
> plaintext is known to the attacker he still needs to brute-force the key -
> which may take a while.
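
(A small aside on the known-plaintext point above: a minimal javax.crypto
sketch of per-record AES-GCM with a fresh random IV is shown below. With a
random IV per record, even identical or partially known plaintexts, such as
compression magic headers, do not produce identical ciphertexts. This is only
an illustration of the general approach, not the mechanism the KIP proposes.)

import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class AesGcmExample {

    private static final SecureRandom RANDOM = new SecureRandom();

    /** Encrypts one record value; returns the IV prepended to the ciphertext. */
    public static byte[] encrypt(byte[] plaintext, byte[] key) throws Exception {
        byte[] iv = new byte[12];                       // 96-bit IV, fresh per record
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(128, iv));         // 128-bit authentication tag
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}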
>
> Something different to consider in this context are compression
> sidechannel attacks like CRIME or BREACH, which may be relevant depending
> on what type of data is being sent through Kafka. Both these attacks depend
> on the encrypted record containing a combination of secret and user
> controlled data.
> For example if Kafka was used to forward data that the user entered on a
> website along with a secret API key that the website adds to a back-end
> server and the user can obtain the Kafka messages, these attacks would
> become relevant. Not much we can do about that except disallow encryption
> when compression is enabled (TLS chose this approach in version 1.3)
>
> I agree with you, that we definitely need to clearly document any risks
> and how much security can reasonably be expected in any given scenario. We
> might even consider logging a warning message when sending data that is
> compressed and encrypted.
>
> On a different note, I've started amending the KIP to make key management
> and distribution pluggable, should hopefully be able to publish sometime
> Monday.
>
> Best regards,
> Sönke
>
>
> On Thu, Jun 21, 2018 at 12:26 PM, Viktor Somogyi 
> wrote:
>
>> Hi Sönke,
>>
>> Compressing before encrypting has its dangers as well. Suppose you have a
>> known compression format which adds a magic header and you're using a block
>> cipher with a small enough block size; then it becomes much easier to
>> figure out the encryption key. For instance, you can look at Snappy's
>> stream identifier:
>> https://github.com/google/snappy/blob/master/framing_format.txt
>> Based on this, you should only use block ciphers whose block sizes are much
>> larger than 6 bytes. AES, for instance, should be good with its 128 bits =
>> 16 bytes, but even this isn't entirely secure, as the first 6 bytes have
>> already leaked some information - how much depends on the cipher.
>> Also, if we suppose that an adversary accesses a broker and takes all the
>> data, they'll have a much easier job decrypting it, as they'll have many
>> more examples.
>> So overall we should make sure to define and document the compatible
>> encryption algorithms for the supported compression methods and the level
>> of security they provide, so that users are fully aware of the security
>> implications.
>>
>> Cheers,
>> Viktor
>>
>> On Tue, Jun 19, 2018 at 11:55 AM Sönke Liebau
>>  wrote:
>>
>> > Hi Stephane,
>> >
>> > thanks for pointing out the broken pictures, I fixed those.
>> >
>> > Regarding encrypting before or after batching the messages, you are
>> > correct, I had not thought of compression and how this changes things.
>> > Encrypted data does not really compress well. My reasoning at the time
>> > of writing was that if we encrypt the entire batch we'd have to wait
>> > for the batch to be full before starting to encrypt. Whereas with per
>> > message encryption we can encrypt them as they come in and more or
>> > less have them ready for sending when the batch is complete.
>> > However I think the difference will probably not be that large (will
>> > do some testing) and offset by just encrypting 

[jira] [Created] (KAFKA-8401) consumer.poll(Duration.ofMillis(100)) blocking

2019-05-21 Thread leishuiyu (JIRA)
leishuiyu created KAFKA-8401:


 Summary: consumer.poll(Duration.ofMillis(100)) blocking 
 Key: KAFKA-8401
 URL: https://issues.apache.org/jira/browse/KAFKA-8401
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 1.1.0
 Environment: kafka 1.1.0
zk   3.4.11
Reporter: leishuiyu


# This is the code:
{code:java}
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Consumer extends Thread {

    KafkaConsumer<Integer, String> consumer;

    public Consumer() {
        Properties props = new Properties();
        // 47.105.201.137 is a public network IP (the broker connection address)
        props.put("bootstrap.servers", "47.105.201.137:9092");
        props.put("group.id", "lsy_test");
        // Note: the zookeeper.* settings below are ignored by the Java consumer.
        props.put("zookeeper.session.timeout.ms", "400");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.IntegerDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        this.consumer = new KafkaConsumer<>(props);
    }

    @Override
    public void run() {
        consumer.subscribe(Arrays.asList("flink_order"));
        while (true) {
            ConsumerRecords<Integer, String> poll =
                    consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<Integer, String> record : poll) {
                System.out.println(record.key() + "---" + record.value());
            }
        }
    }

    public static void main(String[] args) {
        Consumer sumer = new Consumer();
        sumer.start();
    }
}
{code}

 # Configured hosts entry for the remote machine:
{code:java}
xx.xx.xx.xx centos-7
{code}

 # When my code runs on my local machine with 
bootstrap.servers=47.105.201.137:9092, the consumer poll blocks. However, when 
I add "47.105.201.137 centos-7" to /etc/hosts on my Mac and use 
bootstrap.servers=centos-7:9092, the consumer can poll messages. The earlier 
call consumer.listTopics() succeeds; only polling messages blocks. I feel very 
confused.





Re: [VOTE] 2.2.1 RC1

2019-05-21 Thread Jakub Scholz
+1 (non-binding) ... I used the binaries and ran tests with different
clients. All seems to work fine.

On Tue, May 14, 2019 at 5:15 AM Vahid Hashemian 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 2.2.1.
>
> Compared to RC0, this release candidate also fixes the following issues:
>
>- [KAFKA-6789] - Add retry logic in AdminClient requests
>- [KAFKA-8348] - Document of kafkaStreams improvement
>- [KAFKA-7633] - Kafka Connect requires permission to create internal
>topics even if they exist
>- [KAFKA-8240] - Source.equals() can fail with NPE
>- [KAFKA-8335] - Log cleaner skips Transactional mark and batch record,
>causing unlimited growth of __consumer_offsets
>- [KAFKA-8352] - Connect System Tests are failing with 404
>
> Release notes for the 2.2.1 release:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
>
> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> https://github.com/apache/kafka/releases/tag/2.2.1-rc1
>
> * Documentation:
> https://kafka.apache.org/22/documentation.html
>
> * Protocol:
> https://kafka.apache.org/22/protocol.html
>
> * Successful Jenkins builds for the 2.2 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
>
> Thanks!
> --Vahid
>


Re: v2 ProduceResponse documentation wrong?

2019-05-21 Thread Graeme Jenkinson
Looking at the protocol doc again I think it may just be a misunderstanding on 
my part.

Graeme

> On 20 May 2019, at 19:58, Graeme Jenkinson  wrote:
> 
> Hi,
> 
> on parsing a v2 ProduceResponse from Kafka I am seeing a discrepancy from 
> the documentation. Specifically the base_offset and log_append_time fields 
> appear to be in the opposite positions to those documented in the protocol 
> guide: https://kafka.apache.org/protocol#The_Messages_Produce 
> 
> 
> It's possible I’ve messed something up in the tcpdump and trimming of the 
> Kafka responses, but I don’t think so. Is anyone familiar enough with the 
> protocol to confirm/deny?
> 
> Kind regards,
> 
> Graeme