[jira] [Created] (KAFKA-9114) Kafka broker fails to establish secure zookeeper connection via SSL.

2019-10-29 Thread Gangadhar Balikai (Jira)
Gangadhar Balikai created KAFKA-9114:


 Summary: Kafka broker fails to establish secure zookeeper 
connection via SSL.
 Key: KAFKA-9114
 URL: https://issues.apache.org/jira/browse/KAFKA-9114
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.3.0, 2.3.1
Reporter: Gangadhar Balikai


When I try to enable TLS/SSL between a Kafka broker (tried 2.3.0 and 2.3.1) and a 
3-node ZooKeeper cluster (3.5.5 and 3.5.6), the broker fails with the following 
stack trace. The stack trace and the Kafka & ZooKeeper configurations used are 
given below.

*JDK*: 1_8_0_161_64

[2019-10-30 03:52:10,036] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)

java.io.IOException: Couldn't instantiate org.apache.zookeeper.ClientCnxnSocketNetty
 at org.apache.zookeeper.ZooKeeper.getClientCnxnSocket(ZooKeeper.java:1851)
 at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:453)
 at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:384)
 at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:103)
 at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1826)
 at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:364)
 at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:387)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:207)
 at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
 at kafka.Kafka$.main(Kafka.scala:84)
 at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException: org.apache.zookeeper.ClientCnxnSocketNetty.<init>()
 at java.lang.Class.getConstructor0(Class.java:3082)
 at java.lang.Class.getDeclaredConstructor(Class.java:2178)
 at org.apache.zookeeper.ZooKeeper.getClientCnxnSocket(ZooKeeper.java:1848)
 ... 10 more
[2019-10-30 03:52:10,039] INFO shutting down (kafka.server.KafkaServer)
[2019-10-30 03:52:10,046] INFO shut down completed (kafka.server.KafkaServer)
[2019-10-30 03:52:10,046] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-10-30 03:52:10,048] INFO shutting down (kafka.server.KafkaServer)

STEPS.

1) I copied the following ZooKeeper dependencies into the Kafka bin directory.

a) Kafka 2.3.0 and ZooKeeper 3.5.5

"zookeeper-3.5.6.jar" "zookeeper-jute-3.5.6.jar" "netty*.jar" 
"commons-cli-1.2.jar"

b) Kafka 2.3.1 and ZooKeeper 3.5.6

"zookeeper-3.5.6.jar" "zookeeper-jute-3.5.6.jar" 
"netty-buffer-4.1.42.Final.jar" "netty-buffer-4.1.42.Final.LICENSE.txt" 
"netty-codec-4.1.42.Final.jar" "netty-codec-4.1.42.Final.LICENSE.txt" 
"netty-common-4.1.42.Final.jar" "netty-common-4.1.42.Final.LICENSE.txt" 
"netty-handler-4.1.42.Final.jar" "netty-handler-4.1.42.Final.LICENSE.txt" 
"netty-resolver-4.1.42.Final.jar" "netty-resolver-4.1.42.Final.LICENSE.txt" 
"netty-transport-4.1.42.Final.jar" "netty-transport-4.1.42.Final.LICENSE.txt" 
"netty-transport-native-epoll-4.1.42.Final.jar" 
"netty-transport-native-epoll-4.1.42.Final.LICENSE.txt" 
"netty-transport-native-unix-common-4.1.42.Final.jar" 
"netty-transport-native-unix-common-4.1.42.Final.LICENSE.txt" 
"commons-cli-1.2.jar"

*2) Configurations:* 

The *ZooKeeper* cluster comes up fine with

1) the following configuration, *zoo.conf*:

quorum.auth.server.loginContext=QuorumServer
quorum.auth.learner.loginContext=QuorumLearner
syncLimit=2
tickTime=2000
server.3=broker1\:2888\:3888
server.2=broker2\:2888\:3888
server.1=broker3\:2888\:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
initLimit=10
secureClientPort=2281
quorum.auth.learnerRequireSasl=true
quorum.auth.enableSasl=true
quorum.auth.kerberos.servicePrincipal=servicename/_HOST
quorum.cnxn.threads.size=20
zookeeper.client.secure=true
quorum.auth.serverRequireSasl=true
zookeeper.serverCnxnFactory=org.apache.zookeeper.ClientCnxnSocketNetty
dataDir=../data/zookeeper/data/

2) with *SERVER_JVMFLAGS* set to  

-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
 
-Dzookeeper.ssl.client.auth=none 
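
For completeness, the broker (the ZooKeeper client in this setup) typically needs 
its own JVM flags as well. A minimal sketch, assuming the standard ZooKeeper 3.5 
client system properties and hypothetical store paths (not taken from the 
reporter's setup):

-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty 
-Dzookeeper.client.secure=true 
-Dzookeeper.ssl.keyStore.location=/path/to/broker-keystore.jks 
-Dzookeeper.ssl.keyStore.password=changeit 
-Dzookeeper.ssl.trustStore.location=/path/to/broker-truststore.jks 
-Dzookeeper.ssl.trustStore.password=changeit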

[jira] [Resolved] (KAFKA-8972) KafkaConsumer.unsubscribe could leave inconsistent user rebalance callback state

2019-10-29 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-8972.

Resolution: Fixed

Nice work guys!

> KafkaConsumer.unsubscribe could leave inconsistent user rebalance callback 
> state
> 
>
> Key: KAFKA-8972
> URL: https://issues.apache.org/jira/browse/KAFKA-8972
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Blocker
> Fix For: 2.4.0
>
>
> Our current implementation ordering of {{KafkaConsumer.unsubscribe}} is the 
> following:
> {code}
> this.subscriptions.unsubscribe();
> this.coordinator.onLeavePrepare();
> this.coordinator.maybeLeaveGroup("the consumer unsubscribed from all topics");
> {code}
> And inside {{onLeavePrepare}} we would look at the assigned partitions, try to 
> revoke them and notify users via {{RebalanceListener#onPartitionsRevoked}}, 
> and then clear the assignment.
> However, the subscription's assignment is already cleared in 
> {{this.subscriptions.unsubscribe();}}, which means the user's rebalance listener 
> would never be triggered. In other words, from the consumer client's point of 
> view nothing is owned after unsubscribe, but from the user caller's point of 
> view the partitions are not revoked yet. For callers like Kafka Streams, which 
> rely on the rebalance listener to maintain their internal state, this leads to 
> inconsistent state management and failure cases.
> Before KIP-429 this issue was hidden, since every time the consumer 
> re-joined the group later it would still revoke everything anyway, regardless 
> of the passed-in parameters of the rebalance listener; with KIP-429 this is 
> easier to reproduce.
> I think we can summarize our fix as:
> • Inside `unsubscribe`, first do `onLeavePrepare / maybeLeaveGroup` and then 
> `subscription.unsubscribe`. This way we are guaranteed that the streams' tasks 
> are all closed as revoked by then.
> • [Optimization] If the generation is reset due to a fatal error from the join / 
> heartbeat response etc., then we know that all partitions are lost, and we should 
> not trigger `onPartitionsRevoked`, but instead just `onPartitionsLost` inside 
> `onLeavePrepare`.
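> A minimal sketch of the reordered {{unsubscribe}} implied by the first bullet 
> (illustrative, not the exact patch):
> {code}
> this.coordinator.onLeavePrepare(); // revoke partitions and fire onPartitionsRevoked first
> this.coordinator.maybeLeaveGroup("the consumer unsubscribed from all topics");
> this.subscriptions.unsubscribe();  // only now clear the assignment
> {code}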



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.4-jdk8 #43

2019-10-29 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-8972 (2.4 blocker): correctly release lost partitions during


--
[...truncated 3.18 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED


Build failed in Jenkins: kafka-trunk-jdk11 #920

2019-10-29 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-8972 (2.4 blocker): correctly release lost partitions during

[bbejeck] KAFKA-9077: Fix reading of metrics of Streams' SimpleBenchmark (#7610)


--
[...truncated 5.51 MB...]

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED


Re: [DISCUSS] KIP-544: Make metrics exposed via JMX configurable

2019-10-29 Thread Xavier Léauté
>
> What would the practical application look like if this was implemented?
>

One useful application is to hide partition-level metrics, some of which
may only be needed for debugging purposes.


> Would monitoring agents switch between the whitelist and blacklist
> periodically if they wanted to monitor every metric?
>

I'm not sure if switching periodically would be practical. However, I do
see cases where one might want to enable a subset of metrics temporarily
for debugging, without incurring the need to expose all metrics all the
time.

I can certainly add some example regular expressions to the KIP to
illustrate this.
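
For instance, a blacklist entry hiding all partition-level metrics could be a 
regular expression matched against the full MBean name. A sketch (the config 
name and exact matching semantics are still open questions in this thread):

{code:java}
import java.util.regex.Pattern;

public class JmxFilterSketch {
    public static void main(String[] args) {
        // Hypothetical blacklist pattern: hide all partition-level metrics.
        Pattern blacklist = Pattern.compile("kafka\\.cluster:type=Partition.*");

        String mbean = "kafka.cluster:type=Partition,name=UnderMinIsr,topic=foobar,partition=9";
        // A matching MBean would simply not be registered with JMX.
        System.out.println(blacklist.matcher(mbean).matches()); // prints: true
    }
}
{code}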


Build failed in Jenkins: kafka-2.4-jdk8 #42

2019-10-29 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: improve logging of tasks on shutdown (#7597)


--
[...truncated 5.10 MB...]

kafka.server.KafkaApisTest > 
testLeaderReplicaIfLocalRaisesNotLeaderForPartition PASSED

kafka.server.KafkaApisTest > testOffsetDeleteWithInvalidGroup STARTED

kafka.server.KafkaApisTest > testOffsetDeleteWithInvalidGroup PASSED

kafka.server.KafkaApisTest > testJoinGroupProtocolsOrder STARTED

kafka.server.KafkaApisTest > testJoinGroupProtocolsOrder PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
testMetadataRequestOnDistinctListenerWithInconsistentListenersAcrossBrokers 
STARTED

kafka.server.KafkaApisTest > 
testMetadataRequestOnDistinctListenerWithInconsistentListenersAcrossBrokers 
PASSED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion STARTED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testLeaderReplicaIfLocalRaisesFencedLeaderEpoch 
STARTED

kafka.server.KafkaApisTest > testLeaderReplicaIfLocalRaisesFencedLeaderEpoch 
PASSED

kafka.server.KafkaApisTest > testFetchRequestV9WithNoLogConfig STARTED

kafka.server.KafkaApisTest > testFetchRequestV9WithNoLogConfig PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted PASSED

kafka.server.KafkaApisTest > 
rejectSyncGroupRequestWhenStaticMembershipNotSupported STARTED

kafka.server.KafkaApisTest > 
rejectSyncGroupRequestWhenStaticMembershipNotSupported PASSED

kafka.server.KafkaApisTest > 
rejectHeartbeatRequestWhenStaticMembershipNotSupported STARTED

kafka.server.KafkaApisTest > 
rejectHeartbeatRequestWhenStaticMembershipNotSupported PASSED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
testMetadataRequestOnSharedListenerWithInconsistentListenersAcrossBrokers 
STARTED

kafka.server.KafkaApisTest > 
testMetadataRequestOnSharedListenerWithInconsistentListenersAcrossBrokers PASSED

kafka.server.KafkaApisTest > testAddPartitionsToTxnWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testAddPartitionsToTxnWithInvalidPartition PASSED

kafka.server.KafkaApisTest > testOffsetDeleteWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testOffsetDeleteWithInvalidPartition PASSED

kafka.server.KafkaApisTest > 
testLeaderReplicaIfLocalRaisesUnknownTopicOrPartition STARTED

kafka.server.KafkaApisTest > 
testLeaderReplicaIfLocalRaisesUnknownTopicOrPartition PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddOffsetToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddOffsetToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testLeaderReplicaIfLocalRaisesUnknownLeaderEpoch 
STARTED

kafka.server.KafkaApisTest > testLeaderReplicaIfLocalRaisesUnknownLeaderEpoch 
PASSED

kafka.server.KafkaApisTest > testTxnOffsetCommitWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testTxnOffsetCommitWithInvalidPartition PASSED

kafka.server.KafkaApisTest > testSingleLeaveGroup STARTED

kafka.server.KafkaApisTest > testSingleLeaveGroup PASSED

kafka.server.KafkaApisTest > 
rejectJoinGroupRequestWhenStaticMembershipNotSupported STARTED

kafka.server.KafkaApisTest > 
rejectJoinGroupRequestWhenStaticMembershipNotSupported PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedMessageFormatForBadPartitionAndNoErrorsForGoodPartition
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedMessageFormatForBadPartitionAndNoErrorsForGoodPartition
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleEndTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleEndTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 

[jira] [Created] (KAFKA-9113) Clean up task management

2019-10-29 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9113:
--

 Summary: Clean up task management
 Key: KAFKA-9113
 URL: https://issues.apache.org/jira/browse/KAFKA-9113
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Sophie Blee-Goldman


Alongside KIP-429 we did a lot of refactoring of the task management classes, 
including the TaskManager and AssignedTasks (and its children). While the result 
is hopefully easier to reason about, there is still significant opportunity for 
further cleanup, including safer state tracking. Some potential improvements:

1) Verify that no tasks are ever in more than one state at once. One 
possibility is to just check that the suspended, created, restoring, and 
running maps are all disjoint, but this begs the question of when and where to 
do those checks, and how often. Another idea might be to put all tasks into a 
single map and track their state on a per-task basis (see the sketch after 
this list). Whatever the approach, it should be aware that some methods are on 
the critical code path and should not be burdened with excessive safety checks 
(e.g. AssignedStreamTasks#process).

2) Cleanup of closing and/or shutdown logic – there are some potential 
improvements to be made here as well, for example AssignedTasks currently 
implements a closeZombieTask method despite the fact that standby tasks are 
never zombies. 

3)  The StoreChangelogReader also interacts with (only) the 
AssignedStreamsTasks class, through the TaskManager. It can be difficult to 
reason about these interactions and the state of the changelog reader.

4) All 4 classes and their state have very strict consistency requirements that 
currently are almost impossible to verify, which has already resulted in 
several bugs that we were lucky to catch in time. We should tighten up how 
these classes manage their own state, and how the overall state is managed 
between them, so that it is easy to make changes without introducing new bugs 
because one class updated its own state without knowing it needed to tell 
another class to also update its own state.
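
A minimal sketch of the single-map idea from point 1 (all names hypothetical):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical: one map holds every task with an explicit per-task state,
// so a task can never appear in two state-specific collections at once.
class TaskRegistry {
    enum TaskState { CREATED, RESTORING, RUNNING, SUSPENDED }

    private final Map<String, TaskState> taskStates = new ConcurrentHashMap<>();

    void transition(final String taskId, final TaskState newState) {
        taskStates.put(taskId, newState);
    }

    TaskState stateOf(final String taskId) {
        return taskStates.get(taskId);
    }
}
{code}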



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9112) Combine streams `onAssignment` with `partitionsAssigned` task creation

2019-10-29 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9112:
--

 Summary: Combine streams `onAssignment` with `partitionsAssigned` 
task creation
 Key: KAFKA-9112
 URL: https://issues.apache.org/jira/browse/KAFKA-9112
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen


The task manager needs to call `createTasks` inside the `partitionsAssigned` 
callback, which runs after the assignor's `onAssignment` callback. This means that 
during task creation we rely on status changes based on intermediate data structures 
populated by a different callback, which is hard to reason about. We should consider 
consolidating the logic into one of the callbacks, preferably `onAssignment` as it 
contains the full information needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4004

2019-10-29 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: improve logging of tasks on shutdown (#7597)

[wangguoz] KAFKA-8972 (2.4 blocker): correctly release lost partitions during


--
[...truncated 2.71 MB...]
org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp 

Build failed in Jenkins: kafka-trunk-jdk11 #919

2019-10-29 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: improve logging of tasks on shutdown (#7597)


--
[...truncated 2.72 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 

[jira] [Resolved] (KAFKA-9077) System Test Failure: StreamsSimpleBenchmarkTest

2019-10-29 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9077.

Resolution: Fixed

> System Test Failure: StreamsSimpleBenchmarkTest
> ---
>
> Key: KAFKA-9077
> URL: https://issues.apache.org/jira/browse/KAFKA-9077
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, system tests
>Affects Versions: 2.4.0
>Reporter: Manikumar
>Assignee: Bruno Cadonna
>Priority: Minor
> Fix For: 2.5.0
>
>
> StreamsSimpleBenchmarkTest tests are failing on 2.4 and trunk.
> http://confluent-kafka-2-4-system-test-results.s3-us-west-2.amazonaws.com/2019-10-21--001.1571716233--confluentinc--2.4--cb4944f/report.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [Jira contributor list] Adding request

2019-10-29 Thread Guozhang Wang
Hello Ahmed,

I've added you to the contributors list, thank you!

Guozhang

On Mon, Oct 28, 2019 at 11:37 AM Ahmed Oubalas 
wrote:

> Hi all,
>
> I'm new to the Apache Kafka project and would love to contribute to it by
> fixing bugs/documentation.
>
> As per https://kafka.apache.org/contributing.html, could you please add me
> to the contributor list so that I can start working on starter bugs.
>
> JIRA id: Hmed06
> Thank you,
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-150 - Kafka-Streams Cogroup

2019-10-29 Thread Walker Carlson
Hi Guozhang,

I am not sure what you mean by "Fields from different streams are never
aggregated together"; this certainly can be the case, but it is not the general
rule. If we want to take care of the special cases where the key-sets are
disjoint for each stream, then they can be given no-op operators. This would
have the same effect as a stitching join, as the function to update the
store would have to be defined either way, even just to place the value in.

Looking at it the other way: if we only specify the multiway join, then the user
will need to aggregate each stream. Then they must do the join, which will either
involve aggregators and value joiners or some questionable optimization relying on
each aggregator defined for a grouped stream meshing together. And this would all
have to happen inside KStream.

I do agree that there are optimizations that can be done on joining
multiple tables per your example, in both cases whether it be a "stitching
join" or not. But I do not think the place to do it is in Streams. This
could be relatively easy to accomplish. I think we save ourselves pain if
we consider the tables and streams as separate cases, as aggregating
multiple streams into one KTable can be done more efficiently than making
multiple KTables and then joining them together. We may be able to get
around this in the case of a stitching join but I am not sure how we could
do it safely otherwise.
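
For concreteness, the cogroup-style aggregation under discussion might look roughly
like the sketch below, assuming two input KStreams clickStream and viewStream (API
shape per the KIP draft; the Stats type and the aggregators are hypothetical):

{code:java}
KGroupedStream<String, Long> clicks = clickStream.groupByKey();
KGroupedStream<String, Long> views  = viewStream.groupByKey();

// One shared store holds the single aggregate; each stream contributes its own
// aggregator, instead of building per-stream KTables and joining them afterwards.
KTable<String, Stats> stats = clicks
    .cogroup((key, click, agg) -> agg.addClick(click))
    .cogroup(views, (key, view, agg) -> agg.addView(view))
    .aggregate(Stats::new);
{code}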

Walker





On Mon, Oct 28, 2019 at 6:26 PM Guozhang Wang  wrote:

> Hi Walker,
>
> This is a good point about compatibility breakage while overloading the
> existing classes; while reading John and your exchanges, I think I still
> need to clarify the motivations a bit more:
>
> 1) Multiple streams need to be aggregated together, inputs are always
> *KStreams* and end result is a *KTable*.
> 2) Fields from different streams are never aggregated together, i.e. at the
> higher level it is more like "stitching up" the fields and then doing a
> single aggregation.
>
> In this context, I agree with you that it is still a streams-aggregation
> operator that we are trying to optimize (though it's a multi-way one), not a
> multi-way table-table-join operator that we are trying to optimize here.
>
>
> -
>
> But now taking a step back looking at it, I'm wondering, because of 2) that
> all input streams do not have overlapping fields, we can generalize this to
> a broader scope. Consider this case for example:
>
> table1 = builder.table("topic1");
> table2 = builder.table("topic2");
> table3 = builder.table("topic3");
> table4 = table1.join(table2).join(table3);
>
> Suppose the join operations do not take out any fields or add any new
> fields, i.e. say table1 has fields A, table2 has fields B, and table3 has
> fields C besides the key K, so table4 has fields {A, B, C} --- the join is
> just "stitching up" the fields --- then the above topology can actually be
> optimized in a similar way:
>
> * we only keep one materialized store in the form of K -> {A, B, C} as the
> materialized store of the final join result of table4.
> * when a record comes in from table1/2/3, just query the store on K, and
> then update the corresponding A/B/C field and then writes back to the
> store.
>
>
> Then the above streams-aggregation operator can be treated as a special
> case of this: you first aggregate separately on stream1/2/3 and generate
> table1/2/3, and then do this "stitching join"; behind the scenes we can
> optimize the topology to do exactly the co-group logic by updating the
> second bullet point above as an aggregation operator:
>
> * when a record comes in from *stream1/2/3*, just query the store on K, and
> then update the corresponding A/B/C field *with an aggregator *and then
> writes back to the store.
>
> -
>
> Personally I think this is better because of 1) the larger applicable scope,
> and 2) not introducing new interfaces. But of course, on the other side,
> it requires us to do this optimization inside Streams with some syntax
> hint from users (for example, users need to specify that it is a "stitching
> join" such that all fields are still preserved in the join result). WDYT?
>
>
> Guozhang
>
>
> On Mon, Oct 28, 2019 at 4:20 PM Walker Carlson 
> wrote:
>
> > Hi John,
> >
> > Thank you for the background information. I think I understand your
> point.
> >
> > I believe that this could be fixed by making the motivation a little
> > clearer in the KIP. I think the motivation is that when you have multiple
> > streams that need to be aggregated together to form a single object, the
> > current, non-optimal, way to do this is through a multiway table join. This
> > is a little hacky. There is a slight but significant difference in these
> > cases, as in the null-value handling you pointed out.
> >
> > For the example in the motivation, these tables were grouped streams, so
> > they already dropped the null values. If we consider Cogroup sitting in the
> > same grey area that 

Re: contribution

2019-10-29 Thread Guozhang Wang
Hi Jianhai,

I've added you to the contributor list; you should be able to assign
tickets to yourself now.

For the newbie tasks, you can take a look at the unassigned tasks, or those
that are assigned but no longer actively worked on (you can contact the
current assignee by leaving a comment on the ticket itself to ask politely
if you can take it over).

Guozhang

On Tue, Oct 29, 2019 at 9:56 AM Xu Jianhai  wrote:

> One more question: how can I get a newbie or project JIRA task? Newbie tasks:
>
> https://issues.apache.org/jira/browse/KAFKA-9088?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20newbie%20AND%20status%20%3D%20Open
> Project tasks:
>
> https://issues.apache.org/jira/browse/KAFKA-658?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20project%20AND%20status%20%3D%20Open
>
> Who is the assigner? How can I contact them to get the task?
>
> On Tue, Oct 29, 2019 at 10:55 PM Xu Jianhai  wrote:
>
> > Hi, after reading the contribution page
> > https://kafka.apache.org/contributing.html , I would like to add my JIRA
> > name, Xu JianHai, to the contributor list. One question: is providing the
> > JIRA name enough?
> >
> >
>


-- 
-- Guozhang


Re: contribution

2019-10-29 Thread Xu Jianhai
One more question: how can I get a newbie or project JIRA task? Newbie tasks:
https://issues.apache.org/jira/browse/KAFKA-9088?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20newbie%20AND%20status%20%3D%20Open
Project tasks:
https://issues.apache.org/jira/browse/KAFKA-658?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20project%20AND%20status%20%3D%20Open

Who is the assigner? How can I contact them to get the task?

On Tue, Oct 29, 2019 at 10:55 PM Xu Jianhai  wrote:

> Hi, after reading the contribution page
> https://kafka.apache.org/contributing.html , I would like to add my JIRA
> name, Xu JianHai, to the contributor list. One question: is providing the
> JIRA name enough?
>
>


[VOTE] KIP-543: Expand ConfigCommand's non-ZK functionality

2019-10-29 Thread Brian Byrne
Hello all,

I'd like to call a vote on KIP-543: Expand ConfigCommand's non-ZK
functionality, linked here: https://cwiki.apache.org/confluence/x/ww-3Bw

Thanks,
Brian


[jira] [Created] (KAFKA-9111) Incorrect project category in DOAP file, breaking projects.apache.org category listing

2019-10-29 Thread Nick Burch (Jira)
Nick Burch created KAFKA-9111:
-

 Summary: Incorrect project category in DOAP file, breaking 
projects.apache.org category listing
 Key: KAFKA-9111
 URL: https://issues.apache.org/jira/browse/KAFKA-9111
 Project: Kafka
  Issue Type: Bug
  Components: website
Reporter: Nick Burch


The Kafka DOAP file in git has the project category entered incorrectly. This 
means that the projects.apache.org "by Category" listing doesn't show Kafka in 
the right place.

I would expect to see Kafka under "Big Data" at 
[https://projects.apache.org/projects.html?category#big-data] , but it's 
actually under a broken/nested-looking entry at 
[https://projects.apache.org/projects.html?category#https://projects.apache.org/projects.html?category#big-data]

As per [https://projects.apache.org/guidelines.html] the category at 
[https://github.com/apache/kafka/blob/trunk/doap_Kafka.rdf#L36] should be of 
the form <category rdf:resource="..." />, and so your big data 
category resource URI should be [http://projects.apache.org/category/big-data]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


contribution

2019-10-29 Thread Xu Jianhai
Hi, after reading the contribution page https://kafka.apache.org/contributing.html ,
I would like to add my JIRA name, Xu JianHai, to the contributor list. One question:
is providing the JIRA name enough?


[jira] [Created] (KAFKA-9110) Improve efficiency of disk reads when TLS is enabled

2019-10-29 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-9110:
--

 Summary: Improve efficiency of disk reads when TLS is enabled
 Key: KAFKA-9110
 URL: https://issues.apache.org/jira/browse/KAFKA-9110
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma


We currently do 8k reads and do unnecessary copies and allocations in every 
read. Increasing the read size is particularly helpful for magnetic disks and 
avoiding the copies and allocations improves CPU efficiency.

See the pull request for more details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-544: Make metrics exposed via JMX configurable

2019-10-29 Thread Stanislav Kozlovski
Hey Xavier,

Thank you for working on this. This KIP looks very good to me.

Since these configs will work with Kafka's own metrics library, will the
configs be part of the clients' configurations? It would be good to point
that out explicitly in the KIP.

I also second Viktor's question on what this would look like in practice,
i.e. if we had the following metric:

*kafka.cluster:type=Partition,name=UnderMinIsr,topic=foobar,partition=9*

Would the regex apply to the whole string? i.e would we be able to match
parts of the string like `type=`, `name=`, `topic=`, or would it only apply
to the values?

Best,
Stanislav

On Mon, Oct 28, 2019 at 9:06 AM Viktor Somogyi-Vass 
wrote:

> Hi Xavier,
>
> What would the practical application look like if this was implemented?
> Would monitoring agents switch between the whitelist and blacklist
> periodically if they wanted to monitor every metric?
> I think we should make some usage recommendations.
>
> Thanks,
> Viktor
>
> On Sun, Oct 27, 2019 at 3:34 PM Gwen Shapira  wrote:
>
> > Thanks Xavier.
> >
> > I really like this proposal. Collecting JMX metrics in clusters with
> > 100K partitions was nearly impossible due to the design of JMX and the
> > single lock mechanism. Yammer's limitations meant that any metric we
> > reported was exposed via JMX, so we couldn't have cheaper reporters
> > export one set of metrics, and JMX export another.
> >
> > Your proposal looks like a great way to lift this limitation and give
> > us more flexibility in reporting metrics.
> >
> > Gwen
> >
> > On Fri, Oct 25, 2019 at 5:17 PM Xavier Léauté 
> wrote:
> > >
> > > Hi All,
> > >
> > > I wrote a short KIP to make the set of metrics exposed via JMX
> > configurable.
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-544%3A+Make+metrics+exposed+via+JMX+configurable
> > >
> > > Let me know what you think.
> > >
> > > Thanks,
> > > Xavier
> >
>


-- 
Best,
Stanislav


[jira] [Created] (KAFKA-9109) Get Rid of Cast from ProcessorContext to InternalProcessorContext

2019-10-29 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-9109:


 Summary: Get Rid of Cast from ProcessorContext to 
InternalProcessorContext
 Key: KAFKA-9109
 URL: https://issues.apache.org/jira/browse/KAFKA-9109
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bruno Cadonna


The following cast is often used in Kafka Streams code.

{code:java}
public void init(final ProcessorContext context) {
internalProcessorContext = (InternalProcessorContext) context;
...
}
{code}

This code leads to a {{ClassCastException}} if the implementation of the 
{{ProcessorContext}} is not an {{InternalProcessorContext}}, which defeats the 
purpose of using interface {{ProcessorContext}} in the API.
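
For example, a context supplied in a unit test need not implement the internal 
interface, in which case the cast fails at runtime. A minimal sketch using the 
test-utils {{MockProcessorContext}}:

{code:java}
import org.apache.kafka.streams.processor.MockProcessorContext;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.internals.InternalProcessorContext;

public class CastFailureSketch {
    public static void main(String[] args) {
        // MockProcessorContext (kafka-streams-test-utils) implements only the
        // public ProcessorContext interface, not InternalProcessorContext.
        final ProcessorContext context = new MockProcessorContext();

        // The cast from the snippet above fails at runtime:
        final InternalProcessorContext internal =
            (InternalProcessorContext) context;  // throws ClassCastException
    }
}
{code}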



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


RE: [DISCUSS] KIP-280: Enhanced log compaction

2019-10-29 Thread Senthilnathan Muthusamy
Hi Tom,

Sorry for the delayed response.

Regarding the decision to fall back to the offset for both the timestamp & header 
value strategies: it is based on the previous author's discussion at 
https://lists.apache.org/thread.html/f44317eb6cd34f91966654c80509d4a457dbbccdd02b86645782be67@%3Cdev.kafka.apache.org%3E
 and, as per that discussion, it is really required to avoid duplicates.

The timestamp strategy is from the original KIP author and we are keeping 
it as is.

Finally, on the sequence-order guarantee by the producer: waiting for an ack is not 
feasible in async / multi-thread / multi-process scenarios, hence the header-sequence 
based compaction strategy, with the producer responsible for generating a unique 
sequence at the topic-partition-key level.
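
For illustration, a producer attaching such a unique per-key sequence as a record 
header might look like the sketch below (the header name and the counter are 
hypothetical; only the {{Headers#add}} call is the actual client API):

{code:java}
import java.nio.ByteBuffer;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SequenceHeaderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Hypothetical: a unique, monotonically increasing sequence per
        // topic-partition-key, simplified here to a single counter.
        AtomicLong sequence = new AtomicLong();

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("my-topic", "my-key", "v2");
            record.headers().add("sequence",
                ByteBuffer.allocate(Long.BYTES).putLong(sequence.incrementAndGet()).array());
            producer.send(record);  // compaction keeps the highest-sequence record per key
        }
    }
}
{code}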

Hoping this clarifies all your questions. Please let us know if you have any 
further questions.

@Guozhang Wang / @Matthias J. Sax, I see you both had a detail discussion on 
the original KIP with previous author and it would great to hear your inputs as 
well.

Thanks,
Senthil

-Original Message-
From: Tom Bentley  
Sent: Tuesday, October 22, 2019 2:32 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-280: Enhanced log compaction

Hi Senthilnathan,

In the motivation isn't it a little misleading to say "On the producer side, we 
clearly preserve an order for the two messages, <K1, V1> <K1, V2>"? IMHO, the 
semantics of the producer are clear that having an observed order of sending 
records from different producers is not sufficient to guarantee ordering on the 
broker. You really need to send the 2nd record only after the 1st record is 
acked. It's the difficulty of achieving that in practice that's the true 
motivation for your KIP.

I can see the attraction of using timestamps, but it would be helpful to 
explain how that really solves the problem. When the producers are in different 
processes on different machines you're relying on their clocks being 
synchronized, which is a whole subject in itself. Even if they're synchronized 
the resolution of System.currentTimeMillis() is typically many milliseconds. If 
your producers are in different threads of the same process that could be a 
real problem because it makes ties quite likely.
And you don't explain why it's OK to resolve ties using the offset. The basis 
of your argument is that the offset is giving you the wrong answer.
So it seems to me that using it as a tiebreaker is just narrowing the chances 
of getting the wrong answer. Maybe none of this matters for your use case, but 
I think it should be spelled out in the KIP, because it surely would matter for 
similar use cases.

Using a sequence at least removes the problem of ties, but the interesting bit 
is now in how you deal with races between threads/processes in getting a 
sequence number allocated (which is out of scope of the KIP, I guess).
How is resolving that race any simpler that resolving the motivating race by 
waiting for the ack of the first record sent?

Kind regards,

Tom

On Mon, Oct 21, 2019 at 9:06 PM Senthilnathan Muthusamy 
 wrote:

> Hi All,
>
> We are bringing KIP-280 back to life with a small correction for the 
> discussion & voting. Thanks to the previous author Luis Cabral for initiating 
> KIP-280; we are taking over to complete it and get it into 2.4...
>
> Below is the correction that we made to the existing KIP-280:
>
>   *   Allowing the compaction strategy to be configured at the topic level, as
> log compaction is at the topic level and a broker can have 
> multiple topics. This allows the flexibility to set the strategy at 
> both the broker level (i.e. for all topics within the broker) and the topic 
> level (i.e. for a subset of topics within a broker)...
>
> KIP-280:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction
> PULL REQUEST: https://github.com/apache/kafka/pull/7528 (unit test coverage in 
> progress)
>
> Previous Thread DISCUSS:
> https://lists.apache.org/thread.html/79aa6e50d7c737ddf83455dd8063692a535a1afa558620fe1a1496d3@%3Cdev.kafka.apache.org%3E
> Previous Thread VOTE:
> https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Flist
>