[VOTE] KIP-276 Add StreamsConfig prefix for different consumers

2018-04-16 Thread Boyang Chen
Hey friends.


I would like to start a vote on KIP-276: Add StreamsConfig prefix for different 
consumers.

KIP: here

Pull request: here

Jira: here


Let me know if you have questions.


Thank you!
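For context, a minimal sketch of what prefixed consumer configuration could look like under this KIP. The per-consumer prefix string "restore.consumer." is an illustrative assumption based on the KIP title, not the final API; only StreamsConfig.consumerPrefix() is the existing mechanism.

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class PrefixedConsumerConfigExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "prefix-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            // Existing prefix: applies to every consumer that Streams creates.
            props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 1000);

            // Per-consumer prefix as proposed by the KIP; the literal string
            // "restore.consumer." is an assumption for illustration only.
            props.put("restore.consumer." + ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);

            System.out.println(props);
        }
    }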






Build failed in Jenkins: kafka-trunk-jdk8 #2557

2018-04-16 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-6650: Allowing transition to OfflineReplica state for replicas

--
[...truncated 417.63 KB...]
kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED


Build failed in Jenkins: kafka-trunk-jdk10 #24

2018-04-16 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-6650: Allowing transition to OfflineReplica state for replicas

--
[...truncated 1.48 MB...]

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #2556

2018-04-16 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-283: Efficient Memory Usage for Down-Conversion

2018-04-16 Thread Jun Rao
Hi, Dhruvil,

Thanks for the KIP. Looks good to me overall. Just one comment below.

"To prevent this from happening, we will not delay down-conversion of the
first partition in the response. We will down-convert all messages of the
first partition in the I/O thread (like we do today), and only delay
down-conversion for subsequent partitions." It seems that we can further
optimize this by only down-converting the first message set in the first
partition in the request handling threads?

Jun


On Fri, Apr 6, 2018 at 2:56 PM, Dhruvil Shah  wrote:

> Hi,
>
> I created a KIP to help mitigate out of memory issues during
> down-conversion. The KIP proposes introducing a configuration that can
> prevent down-conversions altogether, and also describes a design for
> efficient memory usage for down-conversion.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 283%3A+Efficient+Memory+Usage+for+Down-Conversion
>
> Suggestions and feedback are welcome!
>
> Thanks,
> Dhruvil
>
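A hedged sketch of the delayed down-conversion idea discussed above (illustrative only, not the broker code): convert the first partition eagerly on the request handling thread, and defer the remaining partitions until their bytes are actually needed, which bounds the memory held at any one time.

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Supplier;

    public class LazyDownConversionSketch {

        interface PartitionData {
            ByteBuffer downConvert();         // converts records to the requested format
        }

        // Either already-converted bytes or a deferred conversion.
        static final class ResponseEntry {
            final ByteBuffer eager;           // non-null if converted up front
            final Supplier<ByteBuffer> lazy;  // non-null if converted on send
            ResponseEntry(ByteBuffer eager, Supplier<ByteBuffer> lazy) {
                this.eager = eager;
                this.lazy = lazy;
            }
            ByteBuffer bytes() {
                return eager != null ? eager : lazy.get();
            }
        }

        static List<ResponseEntry> buildResponse(List<PartitionData> partitions) {
            List<ResponseEntry> entries = new ArrayList<>();
            for (int i = 0; i < partitions.size(); i++) {
                PartitionData p = partitions.get(i);
                if (i == 0) {
                    // First partition: convert now in the I/O thread, as described above.
                    entries.add(new ResponseEntry(p.downConvert(), null));
                } else {
                    // Subsequent partitions: convert lazily, one at a time, when written out.
                    entries.add(new ResponseEntry(null, p::downConvert));
                }
            }
            return entries;
        }
    }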


Jenkins build is back to normal : kafka-trunk-jdk7 #3344

2018-04-16 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6650) The controller should be able to handle a partially deleted topic

2018-04-16 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-6650.

   Resolution: Fixed
Fix Version/s: 2.0.0

merged the PR to trunk.

> The controller should be able to handle a partially deleted topic
> -
>
> Key: KAFKA-6650
> URL: https://issues.apache.org/jira/browse/KAFKA-6650
> Project: Kafka
>  Issue Type: Bug
>Reporter: Lucas Wang
>Assignee: Lucas Wang
>Priority: Minor
> Fix For: 2.0.0
>
>
> A previous controller could have deleted some partitions of a topic from ZK, 
> but not all partitions, and then died.
> In that case, the new controller should be able to handle the partially 
> deleted topic, and finish the deletion.
> In the current code base, if there is no leadership info for a replica's 
> partition, the transition to OfflineReplica state for the replica will fail. 
> Afterwards the transition to ReplicaDeletionStarted will fail as well since 
> the only valid previous state for ReplicaDeletionStarted is OfflineReplica. 
> Furthermore, it means the topic deletion will never finish.
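A simplified, hedged sketch of the state-machine constraint the ticket describes (not the controller's actual code): ReplicaDeletionStarted only accepts OfflineReplica as the previous state, so if the transition to OfflineReplica fails for a partially deleted topic, deletion can never make progress.

    import java.util.EnumSet;
    import java.util.Set;

    public class ReplicaStateSketch {

        enum ReplicaState { NEW, ONLINE, OFFLINE, DELETION_STARTED, DELETION_SUCCESSFUL, NONEXISTENT }

        // Simplified valid-previous-state table for illustration only.
        static Set<ReplicaState> validPreviousStates(ReplicaState target) {
            switch (target) {
                case OFFLINE:
                    return EnumSet.of(ReplicaState.NEW, ReplicaState.ONLINE);
                case DELETION_STARTED:
                    return EnumSet.of(ReplicaState.OFFLINE);   // the constraint at issue
                default:
                    return EnumSet.noneOf(ReplicaState.class);
            }
        }

        static boolean canTransition(ReplicaState current, ReplicaState target) {
            return validPreviousStates(target).contains(current);
        }
    }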



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #23

2018-04-16 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6727; Fix broken Config hashCode() and equals() (#4796)

[jason] MINOR: Log the exception thrown by Selector.poll (#4873)

--
[...truncated 1.48 MB...]

kafka.controller.ReplicaStateMachineTest > 
testOfflineReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testOfflineReplicaToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionSuccessfulToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionSuccessfulToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testOnlineReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testOnlineReplicaToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionSuccessfulTransition PASSED


[jira] [Created] (KAFKA-6796) Surprising UNKNOWN_TOPIC error for produce/fetch requests to non-replicas

2018-04-16 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6796:
--

 Summary: Surprising UNKNOWN_TOPIC error for produce/fetch requests 
to non-replicas
 Key: KAFKA-6796
 URL: https://issues.apache.org/jira/browse/KAFKA-6796
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.1, 1.1.0
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Currently if the client sends a produce request or a fetch request to a broker 
which isn't a replica, we return UNKNOWN_TOPIC_OR_PARTITION. This is a bit 
surprising to see when the topic actually exists. It would be better to return 
NOT_LEADER to avoid confusion. Clients typically handle both errors by 
refreshing metadata and retrying, so changing this should not cause any change 
in behavior on the client. This case can be hit following a partition 
reassignment after the leader is moved and the local replica is deleted.
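A hedged illustration of the client-side behavior described above (not the actual client code): both errors are treated the same way, by refreshing metadata and retrying, which is why changing the returned error should be transparent to clients.

    public class RetriableErrorSketch {

        enum ErrorCode { NONE, UNKNOWN_TOPIC_OR_PARTITION, NOT_LEADER_FOR_PARTITION }

        interface MetadataCache { void requestUpdate(); }

        static boolean shouldRetryAfterMetadataRefresh(ErrorCode error, MetadataCache metadata) {
            switch (error) {
                case UNKNOWN_TOPIC_OR_PARTITION:
                case NOT_LEADER_FOR_PARTITION:
                    metadata.requestUpdate();  // refresh leadership information
                    return true;               // then retry the produce/fetch
                default:
                    return false;
            }
        }
    }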



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk7 #3343

2018-04-16 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6727; Fix broken Config hashCode() and equals() (#4796)

--
[...truncated 414.49 KB...]
kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED


Re: [VOTE] KIP-281: ConsumerPerformance: Increase Polling Loop Timeout and Make It Reachable by the End User

2018-04-16 Thread Ted Yu
+1

On Mon, Apr 16, 2018 at 2:25 PM, Alex Dunayevsky 
wrote:

> Hello friends,
>
> Let's start the vote for KIP-281: ConsumerPerformance: Increase Polling
> Loop Timeout and Make It Reachable by the End User:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 281%3A+ConsumerPerformance%3A+Increase+Polling+Loop+Timeout+
> and+Make+It+Reachable+by+the+End+User
>
> Thank you,
> Alexander Dunayevsky
>


[VOTE] KIP-281: ConsumerPerformance: Increase Polling Loop Timeout and Make It Reachable by the End User

2018-04-16 Thread Alex Dunayevsky
Hello friends,

Let's start the vote for KIP-281: ConsumerPerformance: Increase Polling
Loop Timeout and Make It Reachable by the End User:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-281%3A+ConsumerPerformance%3A+Increase+Polling+Loop+Timeout+and+Make+It+Reachable+by+the+End+User

Thank you,
Alexander Dunayevsky


Re: [DISCUSS] KIP-281 ConsumerPerformance: Increase Polling Loop Timeout and Make It Reachable by the End User

2018-04-16 Thread Alex Dunayevsky
Hi Jason,

Thanks for your feedback and attention to detail (some credit to Kafka 0.8).
I've decreased the default polling loop timeout value to 10 seconds and
renamed the parameter to --timeout. The updates are reflected in the PR and KIP.
Let's start the vote.

Cheers,
Alex Dunayevsky

On Thu, Apr 5, 2018, 21:37 Alex Dunayevsky  wrote:

> Sure, updated the table under 'Public Interfaces' by adding the TimeUnit
> column.
> Thank you
>
> > In the table under 'Public Interfaces', please add a column with TimeUnit.
> > Looks good overall.
>
>
>
>
>
> ‌
>


Jenkins build is back to normal : kafka-trunk-jdk10 #22

2018-04-16 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #2555

2018-04-16 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Mention that -1 disables retention by time (#4881)

[jason] KAFKA-6514; Add API version as a tag for the RequestsPerSec metric

[wangguoz] Kafka-6792: Fix wrong pointer in the link for stream dsl (#4876)

--
[...truncated 418.11 KB...]
kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED


[jira] [Created] (KAFKA-6795) Add unit test for ReplicaAlterLogDirsThread

2018-04-16 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-6795:
---

 Summary: Add unit test for ReplicaAlterLogDirsThread
 Key: KAFKA-6795
 URL: https://issues.apache.org/jira/browse/KAFKA-6795
 Project: Kafka
  Issue Type: Improvement
Reporter: Anna Povzner
Assignee: Anna Povzner


ReplicaAlterLogDirsThread was added as part of the KIP-113 implementation, but 
there are no unit tests for it. 

[~lindong] I assigned this to myself, since ideally I wanted to add unit tests 
for KAFKA-6361 related changes (KIP-279), but feel free to re-assign. 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-270 A Scala wrapper library for Kafka Streams

2018-04-16 Thread Guozhang Wang
Thanks Debasish for the KIP!

Will make another pass on the PR itself, but the KIP itself looks good. I'm
+1 (binding).

Guozhang

On Mon, Apr 16, 2018 at 10:51 AM, John Roesler  wrote:

> Thanks again for this effort. I'm +1 (non-binding).
> -John
>
> On Mon, Apr 16, 2018 at 9:39 AM, Ismael Juma  wrote:
>
> > Thanks for the contribution. I haven't reviewed all the new APIs in
> detail,
> > but the general approach sounds good to me. +1 (binding).
> >
> > Ismael
> >
> > On Wed, Apr 11, 2018 at 3:09 AM, Debasish Ghosh <
> > debasish.gh...@lightbend.com> wrote:
> >
> > > Hello everyone -
> > >
> > > This is in continuation to the discussion regarding
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 270+-+A+Scala+Wrapper+Library+for+Kafka+Streams,
> > > which is a KIP for implementing a Scala wrapper library for Kafka
> > Streams.
> > >
> > > We have had a PR (https://github.com/apache/kafka/pull/4756) for quite
> > > some
> > > time now and we have had lots of discussions and suggested
> improvements.
> > > Thanks to all who participated in the discussion and came up with all
> the
> > > suggestions for improvements.
> > >
> > > The purpose of this thread is to get an agreement on the implementation
> > and
> > > have it included as part of Kafka.
> > >
> > > Looking forward ..
> > >
> > > regards.
> > >
> > > --
> > > Debasish Ghosh
> > > Principal Engineer
> > >
> > > Twitter: @debasishg
> > > Blog: http://debasishg.blogspot.com
> > > Code: https://github.com/debasishg
> > >
> >
>



-- 
-- Guozhang


[jira] [Resolved] (KAFKA-4327) Move Reset Tool from core to streams

2018-04-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4327.
--
Resolution: Won't Fix

I'm resolving this ticket as won't fix for now.

> Move Reset Tool from core to streams
> 
>
> Key: KAFKA-4327
> URL: https://issues.apache.org/jira/browse/KAFKA-4327
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Jorge Quilcate
>Priority: Minor
>
> This is a follow up on https://issues.apache.org/jira/browse/KAFKA-4008
> Currently, the Kafka Streams Application Reset Tool is part of the {{core}} module 
> due to its ZK dependency. Now that KIP-4 has been merged, this dependency can be 
> dropped and the Reset Tool can be moved to the {{streams}} module.
> This should also update {{InternalTopicManager#filterExistingTopics}} that 
> refers to ResetTool in an exception message:
> {{"Use 'kafka.tools.StreamsResetter' tool"}}
> -> {{"Use '" + kafka.tools.StreamsResetter.getClass().getName() + "' tool"}}
> Doing this JIRA also requires updating the docs with regard to broker 
> backward compatibility -- not all brokers support the "topic delete request" and 
> thus the reset tool will not be backward compatible with all broker versions.
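A hedged sketch of the suggested exception-message change: derive the tool name from the class instead of hard-coding the string, so a rename or package move is picked up automatically (in Java this would typically be StreamsResetter.class.getName()). The StreamsResetter class below is a stand-in for kafka.tools.StreamsResetter, used only for illustration.

    public class ResetToolMessageSketch {
        // Stand-in for kafka.tools.StreamsResetter, for illustration only.
        static class StreamsResetter { }

        static String existingTopicsMessage() {
            return "Use '" + StreamsResetter.class.getName() + "' tool to clean up internal topics.";
        }

        public static void main(String[] args) {
            System.out.println(existingTopicsMessage());
        }
    }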



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk7 #3342

2018-04-16 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Mention that -1 disables retention by time (#4881)

[jason] KAFKA-6514; Add API version as a tag for the RequestsPerSec metric

[wangguoz] Kafka-6792: Fix wrong pointer in the link for stream dsl (#4876)

--
[...truncated 411.68 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED
ERROR: Could not install GRADLE_3_5_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:895)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:458)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:666)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:631)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:391)
at hudson.scm.SCM.poll(SCM.java:408)
at hudson.model.AbstractProject._poll(AbstractProject.java:1384)
at hudson.model.AbstractProject.poll(AbstractProject.java:1287)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:594)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:640)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at 

[jira] [Resolved] (KAFKA-6514) Add API version as a tag for the RequestsPerSec metric

2018-04-16 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6514.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Add API version as a tag for the RequestsPerSec metric
> --
>
> Key: KAFKA-6514
> URL: https://issues.apache.org/jira/browse/KAFKA-6514
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.0.0
>Reporter: Allen Wang
>Priority: Major
> Fix For: 1.2.0
>
>
> After we upgrade the broker to a new version, one important insight is to see how 
> many clients have been upgraded, so that we can switch the message format once most 
> of the clients have also been updated to the new version, minimizing the 
> performance penalty. 
> RequestsPerSec with the version tag will give us that insight.
>  
>  
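A hedged illustration of how the per-version request rate could be read over JMX once such a tag exists. The MBean name pattern and the "version" tag below are assumptions based on this ticket, not a confirmed interface.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class RequestsPerSecByVersion {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Wildcard over the assumed version tag so all API versions are listed.
            ObjectName pattern = new ObjectName(
                    "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce,version=*");
            for (ObjectName name : server.queryNames(pattern, null)) {
                Object rate = server.getAttribute(name, "OneMinuteRate");
                System.out.println("version " + name.getKeyProperty("version") + " -> " + rate);
            }
        }
    }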



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2554

2018-04-16 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
[FINDBUGS] Collecting findbugs analysis files...
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
[FINDBUGS] Computing warning deltas based on reference build #2551
Recording test results
ERROR: Build step failed with exception
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at hudson.FilePath.act(FilePath.java:986)
at hudson.FilePath.act(FilePath.java:975)
at 
hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114)
at 
hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:136)
at 
hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:166)
at 
hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:153)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
at hudson.model.Run.execute(Run.java:1749)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 
hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
 does not exist.
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:460)
at 

Build failed in Jenkins: kafka-trunk-jdk10 #21

2018-04-16 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:862)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1996)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1964)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1960)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1597)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1609)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1243)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H23
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281)
at com.sun.proxy.$Proxy109.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:850)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

Build failed in Jenkins: kafka-trunk-jdk7 #3341

2018-04-16 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:862)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1996)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1964)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1960)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1597)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1609)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1243)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H23
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281)
at com.sun.proxy.$Proxy109.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:850)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

[jira] [Resolved] (KAFKA-6792) Wrong pointer in the link for stream dsl

2018-04-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6792.
--
   Resolution: Fixed
Fix Version/s: 1.0.0

> Wrong pointer in the link for stream dsl
> 
>
> Key: KAFKA-6792
> URL: https://issues.apache.org/jira/browse/KAFKA-6792
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Reporter: robin m
>Priority: Major
> Fix For: 1.0.0
>
>
> Wrong pointer in the link for stream dsl.
> actual: 
> http://kafka.apache.org/11/documentation/streams/developer-guide#streams_dsl
> correct: 
> http://kafka.apache.org/11/documentation/streams/developer-guide/dsl-api.html#streams-dsl



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-270 A Scala wrapper library for Kafka Streams

2018-04-16 Thread John Roesler
Thanks again for this effort. I'm +1 (non-binding).
-John

On Mon, Apr 16, 2018 at 9:39 AM, Ismael Juma  wrote:

> Thanks for the contribution. I haven't reviewed all the new APIs in detail,
> but the general approach sounds good to me. +1 (binding).
>
> Ismael
>
> On Wed, Apr 11, 2018 at 3:09 AM, Debasish Ghosh <
> debasish.gh...@lightbend.com> wrote:
>
> > Hello everyone -
> >
> > This is in continuation to the discussion regarding
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 270+-+A+Scala+Wrapper+Library+for+Kafka+Streams,
> > which is a KIP for implementing a Scala wrapper library for Kafka
> Streams.
> >
> > We have had a PR (https://github.com/apache/kafka/pull/4756) for quite
> > some
> > time now and we have had lots of discussions and suggested improvements.
> > Thanks to all who participated in the discussion and came up with all the
> > suggestions for improvements.
> >
> > The purpose of this thread is to get an agreement on the implementation
> and
> > have it included as part of Kafka.
> >
> > Looking forward ..
> >
> > regards.
> >
> > --
> > Debasish Ghosh
> > Principal Engineer
> >
> > Twitter: @debasishg
> > Blog: http://debasishg.blogspot.com
> > Code: https://github.com/debasishg
> >
>


[jira] [Created] (KAFKA-6794) Support for incremental replica reassignment

2018-04-16 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6794:
--

 Summary: Support for incremental replica reassignment
 Key: KAFKA-6794
 URL: https://issues.apache.org/jira/browse/KAFKA-6794
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


Say you have a replication factor of 4 and you trigger a reassignment which 
moves all replicas to new brokers. Now 8 replicas are fetching at the same time 
which means you need to account for 8 times the current load plus the catch-up 
replication. To make matters worse, the replicas won't all become in-sync at 
the same time; in the worst case, you could have 7 replicas in-sync while one 
is still catching up. Currently, the old replicas won't be disabled until all 
new replicas are in-sync. This makes configuring the throttle tricky since ISR 
traffic is not subject to it.

Rather than trying to bring all 4 new replicas online at the same time, a 
friendlier approach would be to do it incrementally: bring one replica online, 
bring it in-sync, then remove one of the old replicas. Repeat until all 
replicas have been changed. This would reduce the impact of a reassignment and 
make configuring the throttle easier at the cost of a slower overall 
reassignment.
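A hedged sketch of the incremental strategy described above (illustrative only, not controller code): add one new replica, wait for it to join the ISR, then retire one old replica, and repeat, so only one catch-up fetcher runs per partition at a time.

    import java.util.ArrayList;
    import java.util.List;

    public class IncrementalReassignmentSketch {

        interface Cluster {
            void addReplica(int partition, int broker);
            void removeReplica(int partition, int broker);
            void awaitInSync(int partition, int broker) throws InterruptedException;
        }

        static void reassignIncrementally(Cluster cluster, int partition,
                                          List<Integer> oldReplicas,
                                          List<Integer> newReplicas) throws InterruptedException {
            List<Integer> toRemove = new ArrayList<>(oldReplicas);
            for (int newBroker : newReplicas) {
                if (toRemove.remove(Integer.valueOf(newBroker))) {
                    continue;  // broker already hosts a replica; nothing to move
                }
                cluster.addReplica(partition, newBroker);     // start one catch-up fetcher
                cluster.awaitInSync(partition, newBroker);    // wait until it joins the ISR
                if (!toRemove.isEmpty()) {
                    cluster.removeReplica(partition, toRemove.remove(0));  // retire one old replica
                }
            }
        }
    }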



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2553

2018-04-16 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
[FINDBUGS] Collecting findbugs analysis files...
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
[FINDBUGS] Computing warning deltas based on reference build #2551
Recording test results
ERROR: Build step failed with exception
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H23
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at hudson.FilePath.act(FilePath.java:986)
at hudson.FilePath.act(FilePath.java:975)
at 
hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:114)
at 
hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:136)
at 
hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:166)
at 
hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:153)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
at hudson.model.Run.execute(Run.java:1749)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 
hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
 does not exist.
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:460)
at 

Build failed in Jenkins: kafka-trunk-jdk10 #20

2018-04-16 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:862)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1996)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1964)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1960)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1597)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1609)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1243)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H23
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281)
at com.sun.proxy.$Proxy109.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:850)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

Re: [VOTE] KIP-279: Fix log divergence between leader and follower after fast leader fail over

2018-04-16 Thread Ismael Juma
Thanks for the detailed KIP. +1 (binding)

Ismael

On Sat, Apr 14, 2018 at 3:54 PM, Anna Povzner  wrote:

> Hi All,
>
>
> I would like to start the vote on KIP-279: Fix log divergence between
> leader and follower after fast leader fail over.
>
>
> For reference, here's the KIP wiki:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 279%3A+Fix+log+divergence+between+leader+and+follower+
> after+fast+leader+fail+over
>
>
>
> and discussion thread:
> https://www.mail-archive.com/dev@kafka.apache.org/msg86753.html
>
>
> Thanks,
>
> Anna
>


Re: [VOTE] KIP-279: Fix log divergence between leader and follower after fast leader fail over

2018-04-16 Thread Ben Stopford
+1 thanks Anna

On Mon, Apr 16, 2018 at 5:37 PM Jason Gustafson  wrote:

> +1 Thanks for the KIP!
>
> On Sun, Apr 15, 2018 at 4:04 PM, Damian Guy  wrote:
>
> > Thanks Anna +1
> > On Sun, 15 Apr 2018 at 15:40, Guozhang Wang  wrote:
> >
> > > Anna, thanks for the KIP! +1 from me.
> > >
> > > Just one minor comment: `Broker A will respond with offset 11`, seems
> it
> > > should be `21`.
> > >
> > > On Sun, Apr 15, 2018 at 3:48 AM, Dong Lin  wrote:
> > >
> > > > Thanks for the KIP! LGTM. +1
> > > >
> > > > On Sat, Apr 14, 2018 at 5:31 PM, Ted Yu  wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Sat, Apr 14, 2018 at 3:54 PM, Anna Povzner 
> > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > >
> > > > > > I would like to start the vote on KIP-279: Fix log divergence
> > between
> > > > > > leader and follower after fast leader fail over.
> > > > > >
> > > > > >
> > > > > > For reference, here's the KIP wiki:
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 279%3A+Fix+log+divergence+between+leader+and+follower+
> > > > > > after+fast+leader+fail+over
> > > > > >
> > > > > >
> > > > > >
> > > > > > and discussion thread:
> > > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg86753.html
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Anna
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: [VOTE] KIP-279: Fix log divergence between leader and follower after fast leader fail over

2018-04-16 Thread Jason Gustafson
+1 Thanks for the KIP!

On Sun, Apr 15, 2018 at 4:04 PM, Damian Guy  wrote:

> Thanks Anna +1
> On Sun, 15 Apr 2018 at 15:40, Guozhang Wang  wrote:
>
> > Anna, thanks for the KIP! +1 from me.
> >
> > Just one minor comment: `Broker A will respond with offset 11`, seems it
> > should be `21`.
> >
> > On Sun, Apr 15, 2018 at 3:48 AM, Dong Lin  wrote:
> >
> > > Thanks for the KIP! LGTM. +1
> > >
> > > On Sat, Apr 14, 2018 at 5:31 PM, Ted Yu  wrote:
> > >
> > > > +1
> > > >
> > > > On Sat, Apr 14, 2018 at 3:54 PM, Anna Povzner 
> > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > >
> > > > > I would like to start the vote on KIP-279: Fix log divergence
> between
> > > > > leader and follower after fast leader fail over.
> > > > >
> > > > >
> > > > > For reference, here's the KIP wiki:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 279%3A+Fix+log+divergence+between+leader+and+follower+
> > > > > after+fast+leader+fail+over
> > > > >
> > > > >
> > > > >
> > > > > and discussion thread:
> > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg86753.html
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Anna
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [DISCUSS] KIP-269: Substitution Within Configuration Values

2018-04-16 Thread Ron Dagostino
Hi Rajini.  I think a good, illustrative OAuth example is the situation
where a Kafka client (whether non-broker or broker, where the latter occurs
when OAUTHBEARER is the inter-broker SASL mechanism) needs to authenticate
to the token endpoint to retrieve an access token.  There are actually 2
potential authentications that may occur.  First, the client image itself
may authenticate as per https://tools.ietf.org/html/rfc6749#section-2.3,
and as the spec says, the mechanism can be anything ("The authorization
server MAY accept any form of client authentication meeting its security
requirements").  The most common form is a client_id and a client_secret as
per https://tools.ietf.org/html/rfc6749#section-2.3.1.  There may also be
(and almost always is) authentication of the entity (usually a human).  In
the Kafka case there is no human involved, but there still may be an
identity associated beyond just the image identity, and that would probably
(though again, not necessarily) take the form of a username and password as
per https://tools.ietf.org/html/rfc6749#section-4.3.  If the image identity
alone is sufficient then the client credentials grant may be used as per
https://tools.ietf.org/html/rfc6749#section-4.4.  The first general point
to take from this is that OAuth 2 is very flexible, so the amount and type
of secret material that the producer/consumer/broker will need will vary
from installation to installation.

The second point to address is how the login module is to retrieve the
secret material.  More and more customers are going to orchestrate their
images in Kubernetes clusters, and secrets are injected via file paths in
that environment.  So to get the client_id and client_secret (for example),
or the username and password, or all 4 if that is what the installation
requires, a login module running in Kubernetes is going to have to read 4
files.  One way to do that is for the login module to leverage JAAS module
options that would look like this:

client_id="$[file=/path/to/secrets/client_id]"
client_secret="$[file=/path/to/secrets/client_secret]"
username="$[file=/path/to/secrets/username]"
password="$[file=/path/to/secrets/password]"

The code in the login callback handler would then be as follows:

String clientId = getModuleOption("client_id");
String clientSecret = getModuleOption("client_secret");
String username = getModuleOption("username");
String password = getModuleOption("password");

It is certainly possible that the login callback handler could just read
those files directly as opposed to asking for the value of the module
option and letting the substitution occur.  The module options and code for
that approach might look like this:

pathToSecrets="/path/to/secrets/"

String pathToSecrets = getModuleOption("pathToSecrets");
String clientId = readFile(pathToSecrets, "client_id");
String clientSecret = readFile(pathToSecrets, "client_secret");
String username = readFile(pathToSecrets, "username");
String password = readFile(pathToSecrets, "password");

private String readFile(String dirName, String fileName) throws IOException
{
// etc...
}
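
A complete version of that helper might look like the following (one possible implementation using java.nio.file; trimming the trailing newline is an assumption about how secret files are typically mounted, not something the KIP mandates):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

final class SecretFiles {
    private SecretFiles() {}

    // One possible body for the readFile(dirName, fileName) helper sketched above.
    static String readFile(String dirName, String fileName) throws IOException {
        byte[] raw = Files.readAllBytes(Paths.get(dirName, fileName));
        // Secret files mounted by orchestrators often end with a newline; strip it.
        return new String(raw, StandardCharsets.UTF_8).trim();
    }
}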

The advantage of the first approach where the code simply asks for the
value of a module option is that it doesn't force a developer to create 4
files.  In development situations where standalone/IDE as opposed to full
PROD (e.g. k8s) infrastructure is a real possibility, it is convenient to
be able to simply hard-code values in the JAAS config; encapsulating the
retrieval of the values behind the module option facilitates being able to
do that.  The same code that runs in production will run in DEV.  Otherwise
the developer has to create the 4 files -- it wouldn't be the end of the
world, of course, but it is easier to not have to deal with it.

Substitution doesn't seem to be a critical issue when we look at it like
this, but I think there are other considerations.


   - There are secret values that can't be stored in Zookeeper even for
   those who migrate to dynamic configuration (password.encoder.secret and
   password.encoder.old.secret). Storage of those values in configuration
   files is likely to be one of the weaker links in the security setup, and
   the possibility of a stronger approach (i.e. injection at runtime,
   abstracted behind a configuration value that does substitution) is
   interesting.  The issue of notification of changes as previously discussed
   does not apply here because these values are only used at broker startup.
   - My guess is that migration to dynamic configuration and/or KIP-86
   callback handler replacement will proceed piecemeal with some customers
   waiting much longer than others -- and some may never migrate to one or
   both.  There are going to be people who will appreciate the ability to
   abstract out a retrieval mechanism behind a configuration value (whether
   that be a JAAS module option or a producer/consumer/broker config).  The
   issue of notification of changes as previously 

Re: [VOTE] KIP-235 Add DNS alias support for secured connection

2018-04-16 Thread Ted Yu
Looks good to me.

BTW KAFKA-6195 contains more technical details than the KIP. See if you can
enrich the Motivation section with some of the details.

Thanks

On Fri, Mar 23, 2018 at 12:05 PM, Skrzypek, Jonathan <
jonathan.skrzy...@gs.com> wrote:

> Hi,
>
> I would like to start a vote for KIP-235
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 235%3A+Add+DNS+alias+support+for+secured+connection
>
> This is a proposition to add an option for reverse dns lookup of
> bootstrap.servers hosts, allowing the use of dns aliases on clusters using
> SASL authentication.
>
>
>
>


RE: [VOTE] KIP-235 Add DNS alias support for secured connection

2018-04-16 Thread Skrzypek, Jonathan
Hi,

Could anyone take a look?
Does the proposal sound reasonable?

Jonathan Skrzypek


From: Skrzypek, Jonathan [Tech]
Sent: 23 March 2018 19:05
To: dev@kafka.apache.org
Subject: [VOTE] KIP-235 Add DNS alias support for secured connection

Hi,

I would like to start a vote for KIP-235

https://cwiki.apache.org/confluence/display/KAFKA/KIP-235%3A+Add+DNS+alias+support+for+secured+connection

This is a proposition to add an option for reverse dns lookup of 
bootstrap.servers hosts, allowing the use of dns aliases on clusters using SASL 
authentication.
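
For illustration (not the KIP's implementation), the reverse lookup behind this proposal can be sketched with the standard java.net APIs; the alias below is a made-up example:

import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch only: expand a DNS alias into the canonical host names behind it, so that
// SASL (e.g. Kerberos) principals can be built against the real broker host names.
public class BootstrapAliasExpander {
    public static void main(String[] args) throws UnknownHostException {
        String alias = "kafka-bootstrap.example.com"; // hypothetical bootstrap alias
        for (InetAddress address : InetAddress.getAllByName(alias)) {
            // getCanonicalHostName() performs the reverse lookup for each resolved address.
            System.out.println(address.getHostAddress() + " -> " + address.getCanonicalHostName());
        }
    }
}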





Re: [VOTE] KIP-270 A Scala wrapper library for Kafka Streams

2018-04-16 Thread Ismael Juma
Thanks for the contribution. I haven't reviewed all the new APIs in detail,
but the general approach sounds good to me. +1 (binding).

Ismael

On Wed, Apr 11, 2018 at 3:09 AM, Debasish Ghosh <
debasish.gh...@lightbend.com> wrote:

> Hello everyone -
>
> This is in continuation to the discussion regarding
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 270+-+A+Scala+Wrapper+Library+for+Kafka+Streams,
> which is a KIP for implementing a Scala wrapper library for Kafka Streams.
>
> We have had a PR (https://github.com/apache/kafka/pull/4756) for quite
> some
> time now and we have had lots of discussions and suggested improvements.
> Thanks to all who participated in the discussion and came up with all the
> suggestions for improvements.
>
> The purpose of this thread is to get an agreement on the implementation and
> have it included as part of Kafka.
>
> Looking forward ..
>
> regards.
>
> --
> Debasish Ghosh
> Principal Engineer
>
> Twitter: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: https://github.com/debasishg
>


Re: [VOTE] KIP-270 A Scala wrapper library for Kafka Streams

2018-04-16 Thread Bill Bejeck
+1

-Bill

On Mon, Apr 16, 2018 at 5:14 AM, Matthias J. Sax 
wrote:

> +1 (binding)
>
> On 4/16/18 9:33 AM, Michael Noll wrote:
> > +1 (non-binding)
> >
> > Thanks for contributing and shepherding this, Debasish and team!
> >
> > And thanks also to Alexis Seigneurin for the original start of the Scala
> > wrapper API.
> >
> > Best,
> > Michael
> >
> >
> >
> > On Thu, Apr 12, 2018 at 10:24 AM, Damian Guy 
> wrote:
> >
> >> Thanks for the KIP Debasish - +1 binding
> >>
> >> On Wed, 11 Apr 2018 at 12:16 Ted Yu  wrote:
> >>
> >>> +1
> >>>  Original message From: Debasish Ghosh <
> >>> debasish.gh...@lightbend.com> Date: 4/11/18  3:09 AM  (GMT-08:00) To:
> >>> dev@kafka.apache.org Subject: [VOTE] KIP-270 A Scala wrapper library
> for
> >>> Kafka Streams
> >>> Hello everyone -
> >>>
> >>> This is in continuation to the discussion regarding
> >>>
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 270+-+A+Scala+Wrapper+Library+for+Kafka+Streams
> >>> ,
> >>> which is a KIP for implementing a Scala wrapper library for Kafka
> >> Streams.
> >>>
> >>> We have had a PR (https://github.com/apache/kafka/pull/4756) for quite
> >>> some
> >>> time now and we have had lots of discussions and suggested
> improvements.
> >>> Thanks to all who participated in the discussion and came up with all
> the
> >>> suggestions for improvements.
> >>>
> >>> The purpose of this thread is to get an agreement on the implementation
> >> and
> >>> have it included as part of Kafka.
> >>>
> >>> Looking forward ..
> >>>
> >>> regards.
> >>>
> >>> --
> >>> Debasish Ghosh
> >>> Principal Engineer
> >>>
> >>> Twitter: @debasishg
> >>> Blog: http://debasishg.blogspot.com
> >>> Code: https://github.com/debasishg
> >>>
> >>
> >
>
>


[jira] [Created] (KAFKA-6793) Unnecessary warning log message

2018-04-16 Thread Anna O (JIRA)
Anna O created KAFKA-6793:
-

 Summary: Unnecessary warning log message 
 Key: KAFKA-6793
 URL: https://issues.apache.org/jira/browse/KAFKA-6793
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.1.0
Reporter: Anna O


When KafkaStreams was upgraded from 0.11.0.2 to 1.1.0, the following warning log message 
started to appear:

level: WARN
logger: org.apache.kafka.clients.consumer.ConsumerConfig
message: The configuration 'admin.retries' was supplied but isn't a known 
config.

The config is not explicitly supplied to the streams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] #2 KIP-248: Create New ConfigCommand That Uses The New AdminClient

2018-04-16 Thread Viktor Somogyi
Hi Rajini,

It would still be possible to use the current ConfigCommand, so those who wish
to set up SCRAM credentials or initial quotas could continue doing so through
kafka-run-class.sh.
In an ideal world I'd keep that functionality in the current ConfigCommand so we
wouldn't mix the ZooKeeper and admin client configs. Perhaps I could create
a kafka-configs-zookeeper.sh shell script to shorten the
kafka-run-class invocation.
What do you and others think?

Thanks,
Viktor



On Mon, Apr 16, 2018 at 10:15 AM, Rajini Sivaram 
wrote:

> Hi Viktor,
>
> The KIP proposes to remove the ability to configure using ZooKeeper. This
> means we will no longer have the ability to start up a cluster with SCRAM
> credentials since we first need to create SCRAM credentials before brokers
> can start if the broker uses SCRAM for inter-broker communication and we
> need SCRAM credentials for the AdminClient before we can create new ones.
> For quotas as well, we will no longer be able to configure quotas until at
> least one broker has been started. Perhaps, we ought to retain the ability
> to configure using ZooKeeper for these initialization scenarios and support
> only AdminClient for dynamic updates?
>
> What do others think?
>
> Regards,
>
> Rajini
>
> On Sun, Apr 15, 2018 at 10:28 AM, Ted Yu  wrote:
>
> > +1
> >  Original message From: zhenya Sun 
> > Date: 4/15/18  12:42 AM  (GMT-08:00) To: dev  Cc:
> > dev  Subject: Re: [VOTE] #2 KIP-248: Create New
> > ConfigCommand That Uses The New AdminClient
> > non-binding +1
> >
> >
> >
> >
> >
> > from my iphone!
> > On 04/15/2018 15:41, Attila Sasvári wrote:
> > Thanks for updating the KIP.
> >
> > +1 (non-binding)
> >
> > Viktor Somogyi  ezt írta (időpont: 2018. ápr.
> 9.,
> > H 16:49):
> >
> > > Hi Magnus,
> > >
> > > Thanks for the heads up, added the endianness to the KIP. Here is the
> > > current text:
> > >
> > > "Double
> > > A new type needs to be added to transfer quota values. Since the
> protocol
> > > classes in Kafka already use ByteBuffers, it is logical to use their
> > > functionality for serializing doubles. The serialization is basically a
> > > representation of the specified floating-point value according to the
> > IEEE
> > > 754 floating-point "double format" bit layout. The ByteBuffer
> serializer
> > > writes eight bytes containing the given double value, in Big Endian
> byte
> > > order, into this buffer at the current position, and then increments
> the
> > > position by eight.
> > >
> > > The implementation will be defined in
> > > org.apache.kafka.common.protocol.types with the other protocol types
> > and it
> > > will have no default value much like the other types available in the
> > > protocol."
> > >
> > > Also, I haven't changed the protocol docs yet but will do so in my PR
> for
> > > this feature.
> > >
> > > Let me know if you'd still add something.
> > >
> > > Regards,
> > > Viktor
> > >
> > >
> > > On Mon, Apr 9, 2018 at 3:32 PM, Magnus Edenhill 
> > > wrote:
> > >
> > > > Hi Viktor,
> > > >
> > > > since serialization of floats isn't as straight forward as integers,
> > > please
> > > > specify the exact serialization format of DOUBLE in the protocol docs
> > > > (e.g., IEEE 754),
> > > > including endianness (big-endian please).
> > > >
> > > > This will help the non-java client ecosystem.
> > > >
> > > > Thanks,
> > > > Magnus
> > > >
> > > >
> > > > 2018-04-09 15:16 GMT+02:00 Viktor Somogyi :
> > > >
> > > > > Hi Attila,
> > > > >
> > > > > 1. It uses ByteBuffers, which in turn will use
> > Double.doubleToLongBits
> > > to
> > > > > convert the double value to a long and that long will be written in
> > the
> > > > > buffer. I've updated the KIP with this.
> > > > > 2. Good idea, modified it.
> > > > > 3. During the discussion I remember we didn't really decide which
> one
> > > > would
> > > > > be the better one but I agree that a wrapper class that makes sure
> > the
> > > > list
> > > > > that is used as a key is immutable is a good idea and would ease
> the
> > > life
> > > > > of people using the interface. Also more importantly would make
> sure
> > > that
> > > > > we always use the same hashCode. I have created wrapper classes for
> > the
> > > > map
> > > > > value as well but that was reverted to keep things consistent.
> > Although
> > > > for
> > > > > the key I think we wouldn't break consistency. I updated the KIP.
> > > > >
> > > > > Thanks,
> > > > > Viktor
> > > > >
> > > > >
> > > > > On Tue, Apr 3, 2018 at 1:27 PM, Attila Sasvári <
> asasv...@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Thanks for working on it Viktor.
> > > > > >
> > > > > > It looks good to me, but I have some questions:
> > > > > > - I see a new type DOUBLE is used for quota_value , and it is not
> > > > listed
> > > > > > among the primitive types 
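
For reference, the DOUBLE wire format described earlier in this thread (IEEE 754 bit layout, eight bytes, big-endian) can be sketched with plain java.nio; this is illustrative only, and the real protocol type would live in org.apache.kafka.common.protocol.types:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DoubleWireFormat {
    // Writes the IEEE 754 bit pattern of the value as eight bytes in the buffer's byte order.
    public static void write(ByteBuffer buffer, double value) {
        buffer.putDouble(value);
    }

    public static double read(ByteBuffer buffer) {
        return buffer.getDouble();
    }

    public static void main(String[] args) {
        // ByteBuffers are big-endian by default; the order is set explicitly here for clarity.
        ByteBuffer buffer = ByteBuffer.allocate(8).order(ByteOrder.BIG_ENDIAN);
        write(buffer, 1024.5);            // e.g. a quota value
        buffer.flip();
        System.out.println(read(buffer)); // prints 1024.5
    }
}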

Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2018-04-16 Thread Matthias J. Sax
Thanks for the update Vito!

It's up to you to keep the details part in the KIP or not.


The (incomplete) question was whether we need `StateStoreFailException` or
whether the existing `InvalidStateStoreException` could be used. Do you suggest
that `InvalidStateStoreException` is not thrown at all anymore, but only
the new sub-classes (just to get a better understanding)?


Not sure what this sentence means:

> The internal exception will be wrapped as category exception finally.

Can you elaborate?


Can you explain the purpose of the "internal exceptions"? It's unclear
to me atm why they are introduced.


-Matthias
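
For readers following along, the kind of hierarchy being discussed could be sketched as below, assuming the existing org.apache.kafka.streams.errors.InvalidStateStoreException as the parent; the sub-class names are hypothetical and only illustrate the "rediscover" vs. "retry" categories, not the final KIP API:

import org.apache.kafka.streams.errors.InvalidStateStoreException;

// Hypothetical sub-class: the store was migrated to another instance, so the
// caller needs to rediscover the store before querying again.
class StateStoreMigratedException extends InvalidStateStoreException {
    StateStoreMigratedException(String message) { super(message); }
}

// Hypothetical sub-class: the instance is rebalancing, so the caller can simply retry later.
class StateStoreRetriableException extends InvalidStateStoreException {
    StateStoreRetriableException(String message) { super(message); }
}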

On 4/10/18 12:33 AM, vito jeng wrote:
> Matthias,
> 
> Thanks for the review.
> I reply separately in the following sections.
> 
> 
> ---
> Vito
> 
> On Sun, Apr 8, 2018 at 1:30 PM, Matthias J. Sax 
> wrote:
> 
>> Thanks for updating the KIP and sorry for the long pause...
>>
>> Seems you did a very thorough investigation of the code. It's useful to
>> understand what user facing interfaces are affected.
> 
> (Some parts might be even too detailed for a KIP.)
>>
> 
> I also think it is too detailed, especially the section `Changes in call trace`.
> Do you think it should be removed?
> 
> 
>>
>> To summarize my current understanding of your KIP, the main change is to
>> introduce new exceptions that extend `InvalidStateStoreException`.
>>
> 
> Yep. Keeping compatibility in this KIP is important.
> I think the best way is for all new exceptions to extend
> `InvalidStateStoreException`.
> 
> 
>>
>> Some questions:
>>
>>  - Why do we need ```? Could `InvalidStateStoreException` be used for
>> this purpose?
>>
> 
> Is this question missing a word?
> 
> 
>>
>>  - What is the superclass of `StateStoreStreamThreadNotRunningException`?
>> Should it be `InvalidStateStoreException` or `StateStoreFailException`?
>>
>>  - Is `StateStoreClosed` a fatal or a retryable exception?
>>
>>
> I apologize for the poorly written parts. I recently modified some code and
> updated the KIP accordingly.
> The modification is now complete. Please take another look.
> 
> 
>>
>>
>> -Matthias
>>
>>
>> On 2/21/18 5:15 PM, vito jeng wrote:
>>> Matthias,
>>>
>>> Sorry for not response these days.
>>> I just finished it. Please have a look. :)
>>>
>>>
>>>
>>> ---
>>> Vito
>>>
>>> On Wed, Feb 14, 2018 at 5:45 AM, Matthias J. Sax 
>>> wrote:
>>>
 Is there any update on this KIP?

 -Matthias

 On 1/3/18 12:59 AM, vito jeng wrote:
> Matthias,
>
> Thank you for your response.
>
> I think you are right. We need to look at the state of both
> KafkaStreams and StreamThread.
>
> After gaining a better understanding of the KafkaStreams threads and state stores,
> I am currently rewriting the KIP.
>
>
>
>
> ---
> Vito
>
> On Fri, Dec 29, 2017 at 4:32 AM, Matthias J. Sax <
>> matth...@confluent.io>
> wrote:
>
>> Vito,
>>
>> Sorry for this late reply.
>>
>> There can be two cases:
>>
>>  - either a store got migrated way and thus, is not hosted an the
>> application instance anymore,
>>  - or, a store is hosted but the instance is in state rebalance
>>
>> For the first case, users need to rediscover the store. For the second
>> case, they need to wait until rebalance is finished.
>>
>> If KafkaStreams is in state ERROR, PENDING_SHUTDOWN, or NOT_RUNNING,
>> uses cannot query at all and thus they cannot rediscover or retry.
>>
>> Does this make sense?
>>
>> -Matthias
>>
>> On 12/20/17 12:54 AM, vito jeng wrote:
>>> Matthias,
>>>
>>> I try to clarify some concept.
>>>
>>> When streams state is REBALANCING, it means the user can just plain
>> retry.
>>>
>>> When streams state is ERROR or PENDING_SHUTDOWN or NOT_RUNNING, it
 means
>>> state store migrated to another instance, the user needs to
>> rediscover
>> the
>>> store.
>>>
>>> Is my understanding correct?
>>>
>>>
>>> ---
>>> Vito
>>>
>>> On Sun, Nov 5, 2017 at 12:30 AM, Matthias J. Sax <
 matth...@confluent.io>
>>> wrote:
>>>
 Thanks for the KIP Vito!

 I agree with what Guozhang said. The original idea of the Jira was,
>> to
 give different exceptions for different "recovery" strategies to the
>> user.

 For example, if a store is currently recreated, a user just need to
 wait
 and can query the store later. On the other hand, if a store go
 migrated
 to another instance, a user needs to rediscover the store instead
>> of a
 "plain retry".

 Fatal errors might be a third category.

 Not sure if there is something else?

 Anyway, the KIP should contain a section that talks about this ideas
 and
 reasoning.


 -Matthias


Re: [DISCUSS] KIP-269: Substitution Within Configuration Values

2018-04-16 Thread Rajini Sivaram
Hi Ron,

Thanks for the analysis; this is very useful. Reducing the feature to the
minimum required for the scenarios helps (though I was hoping that the
redact flag would one day help improve SASL diagnostics; that can be
for another day).

In the past when users requested different password storing mechanisms like
in KIP-76, the main reason was to avoid storing clear passwords in a file.
Now that we can store encrypted passwords in ZK, that is less of a
motivation for SSL keystore passwords (at least on the broker-side). But
integrating with external password vaults is perhaps still useful for SASL.
But an important point is that none of the built-in substitution mechanisms
helps with this security requirement.

Colin had suggested that it would be better to do this type of substitution
outside of Kafka, e.g. using scripts. So if we can store encrypted
passwords in ZooKeeper, users can write scripts to extract passwords from
their favourite password vault and store them encrypted in ZK, do we still
need substitution?

I think there are two use cases for SASL:

   1. Client configuration (Producer/Consumer/AdminClient)
   2. Dynamic password updates with caching (as you mentioned) for both
   clients and brokers

Looking at only SASL, we can do 1) with SASL callback handlers (KIP-86) for
both clients and brokers. Callback handlers can do 2) as well. On the
broker-side, as you mentioned, dynamic configs in ZK would be better for 2)
(we don't currently support changing SASL configs for existing listeners,
but eventually we probably would).

So if we assume that SSL passwords can be handled by encrypted passwords in
ZK, SASL/PLAIN and SCRAM can be handled by callback handlers, can we narrow
down the OAuth scenarios that really benefit from substitution? Do we need
specific built-in substitution mechanisms for these scenarios given that
any built-in substitution mechanism would be inherently insecure? Do we
have to support custom substitution mechanisms for these? Perhaps an
example would help since most of us are not that familiar with the
different OAuth options and you have already thought this through in the
context of OAuth?

Once we identify what we need for OAuth, we could go back and look at
whether this would also help other SASL mechanisms, SSL passwords, other
configs etc.

Thank you,

Rajini



On Sun, Apr 15, 2018 at 3:54 AM, Ron Dagostino  wrote:

> I am unsure if substitution should be supported for just JAAS configs or if
> we should allow it for cluster/broker/consumer configs.  What I think would
> be helpful would be to boil down this proposal to its most essential
> requirement, and I think the discussion has helped us arrive at what that
> looks like: an ability to inject secrets from external sources.
> SASL/OAUTHBEARER (KIP 255) requires this, and there is value for other SASL
> mechanisms at the JAAS config level as described in this KIP.  If we focus
> on just this functionality, and keep in mind the feedback about
> configuration getting out of hand, then we likely arrive at the following
> conclusions:
>
>- *If* we allow substitution in producer/consumer/broker configs (and
>again, I am not sure if this is a good idea yet) then we would only
> allow
>it for configs of type Password (which includes sasl.jaas.config); since
>these are by definition sensitive values, it is redundant to specify
>redact, and therefore redact is no longer needed as an explicit modifier
>-- it is always in effect.
>- We can and should eliminate dependencies by removing the ability to
>refer to another configuration value; this means defaultKey and
>fromValueOfKey are no longer needed as modifiers, and the keyValue
>substitution type goes away.
>- It doesn't seem reasonable to allow an empty or blank secret to be
>injected, which means notBlank and notEmpty are redundant and
>unnecessary modifiers.
>- A default value wouldn't seem to make sense with secret injection, so
>we don't need the defaultValue modifier.
>
> There are still some outstanding issues.
>
> Are substituted values cached (and if so, for how long?) or are they
> recalculated each time they are requested?  For example, if a password
> comes from some external source (file, password vault, etc.), is that
> external source queried every time the password is required, or is queried
> once and then always reused, or is it queried once and then reused for some
> amount of time before the external source is queried again?  Compare this
> to dynamic configuration as implemented via KIP 226, which is a
> push/event-driven mechanism via kafka-configs.sh, Zookeeper, and the
> Reconfigurable interface.  Changes to substituted values as proposed in
> this KIP are not push/event-driven; as proposed it will be up to Kafka to
> somehow check for changes, or to set a cache timeout as described above.
> This is clearly inferior to the push/event-driven mechanism provided 

Build failed in Jenkins: kafka-trunk-jdk8 #2552

2018-04-16 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6696 Trogdor should support destroying tasks (#4759)

[rajinisivaram] KAFKA-6771. Make specifying partitions more flexible (#4850)

--
[...truncated 416.73 KB...]

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED


Re: [VOTE] KIP-270 A Scala wrapper library for Kafka Streams

2018-04-16 Thread Matthias J. Sax
+1 (binding)

On 4/16/18 9:33 AM, Michael Noll wrote:
> +1 (non-binding)
> 
> Thanks for contributing and shepherding this, Debasish and team!
> 
> And thanks also to Alexis Seigneurin for the original start of the Scala
> wrapper API.
> 
> Best,
> Michael
> 
> 
> 
> On Thu, Apr 12, 2018 at 10:24 AM, Damian Guy  wrote:
> 
>> Thanks for the KIP Debasish - +1 binding
>>
>> On Wed, 11 Apr 2018 at 12:16 Ted Yu  wrote:
>>
>>> +1
>>>  Original message From: Debasish Ghosh <
>>> debasish.gh...@lightbend.com> Date: 4/11/18  3:09 AM  (GMT-08:00) To:
>>> dev@kafka.apache.org Subject: [VOTE] KIP-270 A Scala wrapper library for
>>> Kafka Streams
>>> Hello everyone -
>>>
>>> This is in continuation to the discussion regarding
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 270+-+A+Scala+Wrapper+Library+for+Kafka+Streams
>>> ,
>>> which is a KIP for implementing a Scala wrapper library for Kafka
>> Streams.
>>>
>>> We have had a PR (https://github.com/apache/kafka/pull/4756) for quite
>>> some
>>> time now and we have had lots of discussions and suggested improvements.
>>> Thanks to all who participated in the discussion and came up with all the
>>> suggestions for improvements.
>>>
>>> The purpose of this thread is to get an agreement on the implementation
>> and
>>> have it included as part of Kafka.
>>>
>>> Looking forward ..
>>>
>>> regards.
>>>
>>> --
>>> Debasish Ghosh
>>> Principal Engineer
>>>
>>> Twitter: @debasishg
>>> Blog: http://debasishg.blogspot.com
>>> Code: https://github.com/debasishg
>>>
>>
> 





Build failed in Jenkins: kafka-trunk-jdk10 #19

2018-04-16 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6696 Trogdor should support destroying tasks (#4759)

[rajinisivaram] KAFKA-6771. Make specifying partitions more flexible (#4850)

--
[...truncated 1.48 MB...]
at java.lang.Thread.run(Thread.java:748)

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED


Re: [DISCUSS] KIP-282: Add the listener name to the authentication context

2018-04-16 Thread Mickael Maison
I have not seen any replies yet =(

Considering how minor it is, if there are no objections I'll start a
vote in a few days

On Thu, Apr 5, 2018 at 5:21 PM, Mickael Maison  wrote:
> Hi all,
>
> I have submitted KIP-282 to add the listener name to the authentication 
> context:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-282%3A+Add+the+listener+name+to+the+authentication+context
>
> This is a very minor KIP to simplify identifying the source of a
> connection instead of relying on parsing the client address when
> building Principals.
>
> Feedback and suggestions are welcome!
>
> Thanks
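
To illustrate the intent, a custom principal builder could branch on the listener instead of parsing the client address. Note that listenerName() is what this KIP proposes to add (it does not exist yet), and the interface, listener, and principal names below are made up for the sketch:

import java.net.InetAddress;

// Rough sketch, not real Kafka interfaces: a principal builder that classifies
// connections by the (proposed) listener name rather than by client IP.
interface AuthenticationContextSketch {
    String listenerName();       // the accessor proposed by KIP-282
    InetAddress clientAddress(); // already available today
}

class ListenerAwarePrincipalBuilder {
    String build(AuthenticationContextSketch context) {
        // Hypothetical convention: connections on the "INTERNAL" listener are trusted services.
        if ("INTERNAL".equals(context.listenerName())) {
            return "User:internal-service";
        }
        return "User:external-" + context.clientAddress().getHostAddress();
    }
}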


Re: [VOTE] #2 KIP-248: Create New ConfigCommand That Uses The New AdminClient

2018-04-16 Thread Rajini Sivaram
Hi Viktor,

The KIP proposes to remove the ability to configure using ZooKeeper. This
means we will no longer have the ability to start up a cluster with SCRAM
credentials since we first need to create SCRAM credentials before brokers
can start if the broker uses SCRAM for inter-broker communication and we
need SCRAM credentials for the AdminClient before we can create new ones.
For quotas as well, we will no longer be able to configure quotas until at
least one broker has been started. Perhaps, we ought to retain the ability
to configure using ZooKeeper for these initialization scenarios and support
only AdminClient for dynamic updates?

What do others think?

Regards,

Rajini

On Sun, Apr 15, 2018 at 10:28 AM, Ted Yu  wrote:

> +1
>  Original message From: zhenya Sun 
> Date: 4/15/18  12:42 AM  (GMT-08:00) To: dev  Cc:
> dev  Subject: Re: [VOTE] #2 KIP-248: Create New
> ConfigCommand That Uses The New AdminClient
> non-binding +1
>
>
>
>
>
> from my iphone!
> On 04/15/2018 15:41, Attila Sasvári wrote:
> Thanks for updating the KIP.
>
> +1 (non-binding)
>
> Viktor Somogyi  ezt írta (időpont: 2018. ápr. 9.,
> H 16:49):
>
> > Hi Magnus,
> >
> > Thanks for the heads up, added the endianness to the KIP. Here is the
> > current text:
> >
> > "Double
> > A new type needs to be added to transfer quota values. Since the protocol
> > classes in Kafka already use ByteBuffers, it is logical to use their
> > functionality for serializing doubles. The serialization is basically a
> > representation of the specified floating-point value according to the
> IEEE
> > 754 floating-point "double format" bit layout. The ByteBuffer serializer
> > writes eight bytes containing the given double value, in Big Endian byte
> > order, into this buffer at the current position, and then increments the
> > position by eight.
> >
> > The implementation will be defined in
> > org.apache.kafka.common.protocol.types with the other protocol types
> and it
> > will have no default value much like the other types available in the
> > protocol."
> >
> > Also, I haven't changed the protocol docs yet but will do so in my PR for
> > this feature.
> >
> > Let me know if you'd still add something.
> >
> > Regards,
> > Viktor
> >
> >
> > On Mon, Apr 9, 2018 at 3:32 PM, Magnus Edenhill 
> > wrote:
> >
> > > Hi Viktor,
> > >
> > > since serialization of floats isn't as straight forward as integers,
> > please
> > > specify the exact serialization format of DOUBLE in the protocol docs
> > > (e.g., IEEE 754),
> > > including endianness (big-endian please).
> > >
> > > This will help the non-java client ecosystem.
> > >
> > > Thanks,
> > > Magnus
> > >
> > >
> > > 2018-04-09 15:16 GMT+02:00 Viktor Somogyi :
> > >
> > > > Hi Attila,
> > > >
> > > > 1. It uses ByteBuffers, which in turn will use
> Double.doubleToLongBits
> > to
> > > > convert the double value to a long and that long will be written in
> the
> > > > buffer. I've updated the KIP with this.
> > > > 2. Good idea, modified it.
> > > > 3. During the discussion I remember we didn't really decide which one
> > > would
> > > > be the better one but I agree that a wrapper class that makes sure
> the
> > > list
> > > > that is used as a key is immutable is a good idea and would ease the
> > life
> > > > of people using the interface. Also more importantly would make sure
> > that
> > > > we always use the same hashCode. I have created wrapper classes for
> the
> > > map
> > > > value as well but that was reverted to keep things consistent.
> Although
> > > for
> > > > the key I think we wouldn't break consistency. I updated the KIP.
> > > >
> > > > Thanks,
> > > > Viktor
> > > >
> > > >
> > > > On Tue, Apr 3, 2018 at 1:27 PM, Attila Sasvári 
> > > > wrote:
> > > >
> > > > > Thanks for working on it Viktor.
> > > > >
> > > > > It looks good to me, but I have some questions:
> > > > > - I see a new type DOUBLE is used for quota_value , and it is not
> > > listed
> > > > > among the primitive types on the Kafka protocol guide. Can you add
> > some
> > > > > more details?
> > > > > - I am not sure that using an environment (i.e.
> > > USE_OLD_COMMAND) variable
> > > > is
> > > > > the best way to control behaviour of kafka-config.sh . In other
> > scripts
> > > > > (e.g. console-consumer) an argument is passed (e.g.
> --new-consumer).
> > If
> > > > we
> > > > > still want to use it, then I would suggest something like
> > > > > USE_OLD_KAFKA_CONFIG_COMMAND. What do you think?
> > > > > - I have seen maps like Map > > Collection>.
> > > > > If List is the key type, you should make sure that
> > this
> > > > > List is immutable. Have you considered to introduce a new wrapper
> > > class?
> > > > >
> > > > > Regards,
> > > > > - Attila
> > > > >
> > > > > On Thu, Mar 29, 2018 at 1:46 PM, zhenya Sun 

Re: [VOTE] KIP-270 A Scala wrapper library for Kafka Streams

2018-04-16 Thread Michael Noll
+1 (non-binding)

Thanks for contributing and shepherding this, Debasish and team!

And thanks also to Alexis Seigneurin for the original start of the Scala
wrapper API.

Best,
Michael



On Thu, Apr 12, 2018 at 10:24 AM, Damian Guy  wrote:

> Thanks for the KIP Debasish - +1 binding
>
> On Wed, 11 Apr 2018 at 12:16 Ted Yu  wrote:
>
> > +1
> >  Original message From: Debasish Ghosh <
> > debasish.gh...@lightbend.com> Date: 4/11/18  3:09 AM  (GMT-08:00) To:
> > dev@kafka.apache.org Subject: [VOTE] KIP-270 A Scala wrapper library for
> > Kafka Streams
> > Hello everyone -
> >
> > This is in continuation to the discussion regarding
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 270+-+A+Scala+Wrapper+Library+for+Kafka+Streams
> > ,
> > which is a KIP for implementing a Scala wrapper library for Kafka
> Streams.
> >
> > We have had a PR (https://github.com/apache/kafka/pull/4756) for quite
> > some
> > time now and we have had lots of discussions and suggested improvements.
> > Thanks to all who participated in the discussion and came up with all the
> > suggestions for improvements.
> >
> > The purpose of this thread is to get an agreement on the implementation
> and
> > have it included as part of Kafka.
> >
> > Looking forward ..
> >
> > regards.
> >
> > --
> > Debasish Ghosh
> > Principal Engineer
> >
> > Twitter: @debasishg
> > Blog: http://debasishg.blogspot.com
> > Code: https://github.com/debasishg
> >
>