[jira] [Created] (KAFKA-7028) super.users doesn't work with custom principals

2018-06-08 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-7028:
--

 Summary: super.users doesn't work with custom principals
 Key: KAFKA-7028
 URL: https://issues.apache.org/jira/browse/KAFKA-7028
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
 Fix For: 2.0.0


SimpleAclAuthorizer creates a KafkaPrincipal for the users defined in the 
super.users broker config. However, it should use the configured 
KafkaPrincipalBuilder so that it works with a custom-defined one.
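
A minimal sketch of the kind of mismatch this can cause (the CustomPrincipal subclass below is hypothetical, not Kafka source; the exact failure depends on how the custom builder defines its principals):

{code:java}
import org.apache.kafka.common.security.auth.KafkaPrincipal;

public class SuperUserMismatch {
    // Hypothetical principal type a custom KafkaPrincipalBuilder might return.
    static class CustomPrincipal extends KafkaPrincipal {
        CustomPrincipal(String type, String name) { super(type, name); }
        @Override
        public boolean equals(Object o) {
            // Custom principals only compare equal to other custom principals.
            return o instanceof CustomPrincipal && super.equals(o);
        }
    }

    public static void main(String[] args) {
        // super.users entries are parsed into plain KafkaPrincipal instances...
        KafkaPrincipal fromConfig = new KafkaPrincipal(KafkaPrincipal.USER_TYPE, "admin");
        // ...while the session principal comes from the configured builder.
        KafkaPrincipal fromBuilder = new CustomPrincipal(KafkaPrincipal.USER_TYPE, "admin");
        System.out.println(fromBuilder.equals(fromConfig)); // false: super-user check misses
    }
}
{code}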



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


The purpose of ProducerRecord Headers

2018-06-08 Thread ????????????
Hi team:
While reading KafkaProducer's doSend() I noticed that each record carries a 
Header[] headers field. 
But I don't understand what the headers are meant to be used for. 
Could someone help me?  
thanks a lot ~
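
For context, headers were added (KIP-82) so that applications can attach
metadata (e.g. tracing or routing info) to a record without baking it into
the key or value. A minimal sketch using the public producer API (topic name
and header key are made up):

    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.nio.charset.StandardCharsets;

    ProducerRecord<String, String> record =
        new ProducerRecord<>("my-topic", "key", "value");
    // Consumers and interceptors can read this back via record.headers()
    // without having to parse the payload.
    record.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));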

Re: [DISCUSS] KIP-312: Add Overloaded StreamsBuilder Build Method to Accept java.util.Properties

2018-06-08 Thread Ted Yu
Since there're only two values for the optional optimization config
introduced by KAFKA-6935, I wonder whether the overloaded build method (with a
Properties instance) would make the config unnecessary.

nit:
* @return @return the {@link Topology} that represents the specified
processing logic

Double @return above.

Cheers

On Fri, Jun 8, 2018 at 3:20 PM, Bill Bejeck  wrote:

> All,
>
> I'd like to start the discussion for adding an overloaded method to
> StreamsBuilder taking a java.util.Properties instance.
>
> The KIP is located here :
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-312%3A+Add+Overloaded+StreamsBuilder+Build+Method+to+Accept+java.util.Properties
>
> I look forward to your comments.
>
> Thanks,
> Bill
>


[DISCUSS] KIP-312: Add Overloaded StreamsBuilder Build Method to Accept java.util.Properties

2018-06-08 Thread Bill Bejeck
All,

I'd like to start the discussion for adding an overloaded method to
StreamsBuilder taking a java.util.Properties instance.

The KIP is located here :
https://cwiki.apache.org/confluence/display/KAFKA/KIP-312%3A+Add+Overloaded+StreamsBuilder+Build+Method+to+Accept+java.util.Properties
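
For reference, a minimal sketch of what the proposed overload could look like
in use (the build(Properties) shape is assumed from the KIP title; see the KIP
for the actual proposal):

    Properties props = new Properties();
    props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
    StreamsBuilder builder = new StreamsBuilder();
    // ... define the topology ...
    Topology topology = builder.build(props); // proposed overload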

I look forward to your comments.

Thanks,
Bill


[jira] [Created] (KAFKA-7027) Overloaded StreamsBuilder Build Method to Accept java.util.Properties

2018-06-08 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-7027:
--

 Summary: Overloaded StreamsBuilder Build Method to Accept 
java.util.Properties
 Key: KAFKA-7027
 URL: https://issues.apache.org/jira/browse/KAFKA-7027
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 2.1.0


Add an overloaded method to {{StreamsBuilder}} accepting a 
{{java.util.Properties}} instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-2.0-jdk8 #8

2018-06-08 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-313: Add KStream.flatTransform and KStream.flatTransformValues

2018-06-08 Thread Bruno Cadonna
Hi list,

I created KIP-313 [1] for JIRA issue KAFKA-4217 [2] and I would like to
put the KIP up for discussion.

Best regards,
Bruno


[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-313%3A+Add+KStream.flatTransform+and+KStream.flatTransformValues

[2] https://issues.apache.org/jira/browse/KAFKA-4217


[jira] [Created] (KAFKA-7026) Sticky assignor could assign a partition to multiple consumers

2018-06-08 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-7026:
--

 Summary: Sticky assignor could assign a partition to multiple 
consumers
 Key: KAFKA-7026
 URL: https://issues.apache.org/jira/browse/KAFKA-7026
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian


In the following scenario, the sticky assignor assigns a topic partition to two 
consumers in the group:
 # Create a topic {{test}} with a single partition
 # Start consumer {{c1}} in group {{sticky-group}} ({{c1}} becomes group leader 
and gets {{test-0}})
 # Start consumer {{c2}}  in group {{sticky-group}} ({{c1}} holds onto 
{{test-0}}, {{c2}} does not get any partition) 
 # Pause {{c1}} (e.g. using Java debugger) ({{c2}} becomes leader and takes 
over {{test-0}}, {{c1}} leaves the group)
 # Resume {{c1}}

At this point both {{c1}} and {{c2}} will have {{test-0}} assigned to them.
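
For reference, a minimal consumer setup for reproducing the scenario above
(broker address and deserializers are assumed):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.StickyAssignor;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StickyGroupMember {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        config.put(ConsumerConfig.GROUP_ID_CONFIG, "sticky-group");
        config.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                   StickyAssignor.class.getName());
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                   StringDeserializer.class.getName());
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                   StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config);
        consumer.subscribe(Collections.singletonList("test"));
        while (true) {
            consumer.poll(100); // each of c1 and c2 runs this loop
        }
    }
}
{code}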

 

The reason is that {{c1}} has kept its previous assignment ({{test-0}}) from 
the last assignment it received from the leader (itself) and did not get the 
next round of assignments (when {{c2}} became leader) because it was paused. 
Both {{c1}} and {{c2}} enter the rebalance supplying {{test-0}} as their 
existing assignment. The sticky assignor code does not currently check for this 
duplication.

 


Note: This issue was originally reported on 
[StackOverflow|https://stackoverflow.com/questions/50761842/kafka-stickyassignor-breaking-delivery-to-single-consumer-in-the-group].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2717

2018-06-08 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5697: Use nonblocking poll in Streams (#5107)

--
[...truncated 975.87 KB...]
kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumVerifyOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumVerifyOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutTopicsOption PASSED


Build failed in Jenkins: kafka-2.0-jdk8 #7

2018-06-08 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-7006 - remove duplicate Scala ResourceNameType in preference to…

--
[...truncated 969.18 KB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldReturnNotCooridnatorErrorIfTransactionIdPartitionNotOwned STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldReturnNotCooridnatorErrorIfTransactionIdPartitionNotOwned PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testValidateTransactionTimeout STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testValidateTransactionTimeout PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldWriteTxnMarkersForTransactionInPreparedCommitState STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldWriteTxnMarkersForTransactionInPreparedCommitState PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldOnlyConsiderTransactionsInTheOngoingStateToAbort STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldOnlyConsiderTransactionsInTheOngoingStateToAbort PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldRemoveCompleteAbortExpiredTransactionalIds STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldRemoveCompleteAbortExpiredTransactionalIds PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhilePendingStateChanged STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhilePendingStateChanged PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testLoadAndRemoveTransactionsForPartition STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testLoadAndRemoveTransactionsForPartition PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToNotCoordinatorError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToNotCoordinatorError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldNotRemovePrepareCommitTransactionalIds STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldNotRemovePrepareCommitTransactionalIds PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorLoadingError STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #2716

2018-06-08 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-7006 - remove duplicate Scala ResourceNameType in preference to…

[jason] KAFKA-6264; Split log segments as needed if offsets overflow the indexes

--
[...truncated 979.17 KB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 

Re: kafka ack=all and min-isr

2018-06-08 Thread James Cheng
I wrote a blog post on min.isr that explains it in more detail:
https://logallthethings.com/2016/07/11/min-insync-replicas-what-does-it-actually-do/

The post is 2 years old, but I think it's still correct.

-James

> On Jun 7, 2018, at 10:31 PM, Carl Samuelson  wrote:
> 
> Hi
> 
> Hopefully this is the correct email address and forum for this.
> I asked this question on stack overflow, but did not get an answer:
> https://stackoverflow.com/questions/50689177/kafka-ack-all-and-min-isr
> 
> *Summary*
> 
> The docs and code comments for Kafka suggest that when the producer setting
> acks is set to all, then an ack will only be sent to the producer when *all
> in-sync replicas have caught up*, but the code (Partition.scala,
> checkEnoughReplicasReachOffset) seems to suggest that the ack is sent as
> soon as *min in-sync replicas have caught up*.
> 
> *Details*
> 
> The kafka docs have this:
> 
> acks=all This means the leader will wait for the full set of in-sync
> replicas to acknowledge the record. source
> 
> 
> Also, looking at the Kafka source code - Partition.scala
> checkEnoughReplicasReachOffset() has the following comment (emphasis mine):
> 
> Note that this method will only be called if requiredAcks = -1 and we are
> waiting for *all replicas* in ISR to be fully caught up to the (local)
> leader's offset corresponding to this produce request before we acknowledge
> the produce request.
> 
> Finally, this answer on Stack Overflow (emphasis mine again)
> 
> Also the min in-sync replica setting specifies the minimum number of
> replicas that need to be in-sync for the partition to remain available for
> writes. When a producer specifies ack (-1 / all config) it will still wait
> for acks from *all in sync replicas* at that moment (independent of the
> setting for min in-sync replicas).
> 
> But when I look at the code in Partition.scala (note minIsr <=
> curInSyncReplicas.size):
> 
> def checkEnoughReplicasReachOffset(requiredOffset: Long): (Boolean, Errors) = 
> {
>  ...
>  val minIsr = leaderReplica.log.get.config.minInSyncReplicas
>  if (leaderReplica.highWatermark.messageOffset >= requiredOffset) {
>if (minIsr <= curInSyncReplicas.size)
>  (true, Errors.NONE)
> 
> The code that calls this returns the ack:
> 
> if (error != Errors.NONE || hasEnough) {
>  status.acksPending = false
>  status.responseStatus.error = error
> }
> 
> So, the code looks like it returns an ack as soon as the in-sync replica
> set is at least as large as min in-sync replicas. However, the documentation and
> comments suggest that the ack is only sent once all in-sync replicas have
> caught up. What am I missing? At the very least, the comment above
> checkEnoughReplicasReachOffset looks like it should be changed.
> Regards,
> 
> Carl



[jira] [Resolved] (KAFKA-5697) StreamThread.shutdown() need to interrupt the stream threads to break the loop

2018-06-08 Thread Guozhang Wang (JIRA)


 [ https://issues.apache.org/jira/browse/KAFKA-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang resolved KAFKA-5697.
--
   Resolution: Fixed
Fix Version/s: (was: 2.1.0)
   2.0.0

> StreamThread.shutdown() need to interrupt the stream threads to break the loop
> --
>
> Key: KAFKA-5697
> URL: https://issues.apache.org/jira/browse/KAFKA-5697
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: John Roesler
>Priority: Major
>  Labels: newbie
> Fix For: 2.0.0
>
>
> In {{StreamThread.shutdown()}} we currently do nothing but set the state, 
> hoping the stream thread may eventually check it and shut itself down. 
> However, under certain scenarios the thread may get blocked within a single 
> loop and hence will never check this state enum. For example, its 
> {{consumer.poll}} call triggers {{ensureCoordinatorReady()}}, which will block 
> until the coordinator can be found. If the coordinator broker is never up and 
> running, then the Streams instance will be blocked forever.
> A simple way to reproduce this issue is to start the word count demo without 
> starting the ZK / Kafka broker; it will then get stuck in a single loop, and 
> even `ctrl-C` will not stop it, since the state it sets will never be read by 
> the thread:
> {code:java}
> [2017-08-03 15:17:39,981] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-03 15:17:40,046] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-03 15:17:40,101] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-03 15:17:40,206] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-03 15:17:40,261] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-03 15:17:40,366] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> [2017-08-03 15:17:40,472] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> ^C[2017-08-03 15:17:40,580] WARN Connection to node -1 could not be 
> established. Broker may not be available. 
> (org.apache.kafka.clients.NetworkClient)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Need Access to Create KIP - Second Time Requesting

2018-06-08 Thread Matthias J. Sax
Your request was already granted.

Same day when you sent the first email.

Did you not see my reply on the dev list? (cc'ed you this time to make
sure you get the email).


-Matthias

On 6/8/18 5:05 AM, Adam Bellemare wrote:
> Hello
> 
> Sending a second request to get access to make a KIP.
> 
> As per: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> 
> I request access to be able to create a KIP for https://issues.apache.org/jira/browse/KAFKA-4628.
> 
> Wiki username: adam.bellemare
> 
> Thanks,
> Adam
> 



signature.asc
Description: OpenPGP digital signature


Re: [VOTE] KIP-266: Add TimeoutException for KafkaConsumer#position

2018-06-08 Thread Colin McCabe
On Wed, Jun 6, 2018, at 13:10, Guozhang Wang wrote:
> The reason that I'm hesitant to use the term "timeout" is that it's being
> over-used for multiple semantics: request RPC timeout, consumer session
> heartbeat liveness "timeout", and API blocking timeout. We can argue that
> in English both of them are indeed called a "timeout" value, but personally
> I'm afraid that for normal users having the same word `timeout` would be
> confusing, and hence I'm proposing to use a slightly different term.

Hmm.  I can see why you want to have a different-sounding name from the 
existing timeouts.  However, I think it could be less clear to omit the word 
timeout.  If your operation times out, and you get a TimeoutException, what 
configuration do you raise?  The timeout.  If the configuration name doesn't 
tell you that it's a timeout, it's harder to understand what needs to be 
changed.

For example, if "group.min.session.timeout.ms" was called something like 
"group.min.session.block.ms" or "group.min.session.heartbeat.ms", it would not 
be as clear.  

> Comparing with adding a new config, I'm actually more concerned about
> leveraging the request.timeout value for a default blocking timeout, since
> the default value is hard to decide, since for different blocking calls, it
> may have different rpc round trips behind the scene, so simply setting it
> as request.timeout + a delta may not be always good enough.

Yes, I agree that we need a new configuration key.  I don't think we should try 
to hard-code this.

I think we should just bite the bullet and create a new configuration key like 
"default.api.timeout.ms" that sets the default timeout for API calls.  The hard 
part is adding the new configuration in a way that doesn't disrupt existing 
configurations.

There are at least a few cases to worry about:

1. Someone uses the default (pretty long) timeouts for everything.
2. Someone has configured a short request.timeout.ms, in an effort to see 
failures more quickly
3. Someone has configured a very long (or maybe infinite) request.timeout.ms

Case #2 is probably the one which is hardest to support well.  We could 
probably do it with logic like this:

A. If default.api.timeout.ms is explicitly set, we use that value.  Otherwise...
B. If request.timeout.ms is longer than 2 minutes, we set 
default.api.timeout.ms to request.timeout.ms + 1500.  Otherwise...
C. We set default.api.timeout.ms to request.timeout.ms.
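
To make the fallback concrete, a sketch of that A/B/C logic (hypothetical
code, not a committed API):

    static long defaultApiTimeoutMs(Long explicitDefaultApiTimeout, long requestTimeoutMs) {
        if (explicitDefaultApiTimeout != null)
            return explicitDefaultApiTimeout;          // A: explicit setting wins
        if (requestTimeoutMs > 2 * 60 * 1000L)
            return requestTimeoutMs + 1500;            // B: pad long request timeouts
        return requestTimeoutMs;                       // C: reuse request.timeout.ms
    }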

best,
Colin

> 
> 
> Guozhang
> 
> 
> On Tue, Jun 5, 2018 at 5:18 PM, Ted Yu  wrote:
> 
> > I see where the 0.5 in your previous response came about.
> >
> > The reason I wrote 'request.timeout.ms + 15000' was that I treat this value
> > as standing in for the default.block.ms.
> > According to your earlier description:
> >
> > bq. request.timeout.ms controls something different: the amount of time
> > we're willing to wait for an RPC to complete.
> >
> > Basically we're in agreement. It is just that figuring out good default is
> > non-trivial.
> >
> > On Tue, Jun 5, 2018 at 4:44 PM, Colin McCabe  wrote:
> >
> > > On Tue, Jun 5, 2018, at 16:35, Ted Yu wrote:
> > > > bq. could probably come up with a good default
> > > >
> > > > That's the difficult part.
> > > >
> > > > bq. max(1000, 0.5 * request.timeout.ms)
> > > >
> > > > Looking at some existing samples:
> > > > In tests/kafkatest/tests/connect/templates/connect-distributed.properties,
> > > > we have:
> > > >   request.timeout.ms=30000
> > > >
> > > > Isn't the above formula putting an upper bound 15000 for the RPC
> > timeout
> > > ?
> > >
> > > Correct.  It would put a 15 second default on the RPC timeout in this
> > > case.  If that's too short, the user has the option to change it.
> > >
> > > If we feel that 15 seconds is too short, we could put a floor of 30
> > > seconds or whatever on the RPC timeout, instead of 1 second.
> > >
> > > >
> > > > By fixed duration, I meant something like
> > > >   request.timeout.ms + 15000
> > >
> > > The RPC timeout should be shorter than the request timeout, so that we
> > get
> > > multiple tries if the RPC hangs due to network issues.  It should not be
> > > longer.
> > >
> > > best,
> > > Colin
> > >
> > > >
> > > > Cheers
> > > >
> > > > On Tue, Jun 5, 2018 at 4:27 PM, Colin McCabe 
> > wrote:
> > > >
> > > > > I don't think it can be fixed.  The RPC duration is something that
> > you
> > > > > might reasonably want to tune.  For example, if it's too low, you
> > > might not
> > > > > be able to make progress at all on a heavily loaded server.
> > > > >
> > > > > We could probably come up with a good default, however.
> > > rpc.timeout.ms
> > > > > could be set to something like
> > > > > max(1000, 0.5 * request.timeout.ms)
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > >
> > > > > On Tue, Jun 5, 2018, at 16:21, Ted Yu wrote:
> > > > > > bq. we must make the API timeout longer than the RPC timeout
> > > > > >
> > > > > > I agree with the above.
> > > > > >
> > > > > > How about adding a 

[jira] [Resolved] (KAFKA-7006) Remove duplicate Scala ResourceNameType class

2018-06-08 Thread Jun Rao (JIRA)


 [ https://issues.apache.org/jira/browse/KAFKA-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jun Rao resolved KAFKA-7006.

Resolution: Fixed

merged the PR to trunk and 2.0 branch.

> Remove duplicate Scala ResourceNameType class
> -
>
> Key: KAFKA-7006
> URL: https://issues.apache.org/jira/browse/KAFKA-7006
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, security
>Reporter: Andy Coates
>Assignee: Andy Coates
>Priority: Major
> Fix For: 2.0.0
>
>
> Relating to one of the outstanding work items in PR 
> [#5117|https://github.com/apache/kafka/pull/5117]:
> The kafka.security.auth.ResourceNameType class should be dropped in favour of 
> the Java equivalent.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7025) Android client support

2018-06-08 Thread Martin Vysny (JIRA)
Martin Vysny created KAFKA-7025:
---

 Summary: Android client support
 Key: KAFKA-7025
 URL: https://issues.apache.org/jira/browse/KAFKA-7025
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.0.1
Reporter: Martin Vysny


Adding kafka-clients:1.0.0 (also 1.0.1 and 1.1.0) to an Android project would 
make  the compilation fail: com.android.tools.r8.ApiLevelException: 
MethodHandle.invoke and MethodHandle.invokeExact are only supported starting 
with Android O (--min-api 26)

 

Would it be possible to make the kafka-clients backward compatible with 
reasonable Android API (say, 4.4.x) please?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7024) Rocksdb state directory should be created before opening the DB

2018-06-08 Thread Abhishek Agarwal (JIRA)
Abhishek Agarwal created KAFKA-7024:
---

 Summary: Rocksdb state directory should be created before opening 
the DB
 Key: KAFKA-7024
 URL: https://issues.apache.org/jira/browse/KAFKA-7024
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Abhishek Agarwal


After enabling RocksDB logging, we continually see these errors in Kafka Streams 
logs every time a new window segment is created:
```
Error when reading  
```
While it's not a problem in itself, since RocksDB internally will create the 
directory, it does so only after logging the above error. The unnecessary 
logging could be avoided if the segment directory were created in advance. Right 
now, only the parent directories are created for a RocksDB segment. The logging 
is more prominent when there are many partitions and the segment size is small 
(a minute or two). 
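
For illustration, a minimal sketch of the suggested ordering (paths are made
up; this is not the Streams source):
```
import java.io.File;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class OpenSegment {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        File segmentDir = new File("/tmp/kafka-streams/my-store/segment-0");
        // Create the segment directory itself (not just its parents) up front,
        // so RocksDB does not log "Error when reading ..." before creating it.
        segmentDir.mkdirs();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, segmentDir.getAbsolutePath())) {
            db.put("k".getBytes(), "v".getBytes());
        }
    }
}
```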



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Need Access to Create KIP - Second Time Requesting

2018-06-08 Thread Adam Bellemare
Hello

Sending a second request to get access to make a KIP.

As per: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

I request access to be able to create a KIP for https://issues.apache.org/jira/browse/KAFKA-4628.

Wiki username: adam.bellemare

Thanks,
Adam


Re: [DISCUSS] 0.10.2.2 bug fix release

2018-06-08 Thread Rajini Sivaram
+1

Thanks, Matthias!


On Fri, Jun 8, 2018 at 1:58 AM, Ismael Juma  wrote:

> One more thing: I suggest we go with what we have for all 3 releases you
> are doing. We should aim to make the bug fix release process as smooth as
> possible and we should strive to avoid last-minute additions to older
> release branches. We can be a bit more flexible for 1.0.2 since it's more
> recent.
>
> Ismael
>
> On 7 Jun 2018 4:34 pm, "Ismael Juma"  wrote:
>
> Thanks for doing this Matthias, +1.
>
> Ismael
>
> On Thu, Jun 7, 2018 at 1:50 PM Matthias J. Sax 
> wrote:
>
> > Dear all,
> >
> > I want to propose a 0.10.2.2 bug fix release. 0.10.2.1 is over a year
> > old and a couple of critical fixes are available for 0.10.2.2.
> >
> > Please find a list of all 24 resolved tickets here:
> >
> >
> > https://issues.apache.org/jira/browse/KAFKA-6566?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%200.10.2.2%20AND%20resolution%20!%3D%20Unresolved%20ORDER%20BY%20priority%20DESC%2C%20status%20DESC%2C%20updated%20DESC
> >
> > There are no open tickets with target version 0.10.2.2 at the moment. If
> > there are any tickets you want to get included in 0.10.2.2 please let us
> > know as soon as possible.
> >
> >
> > If nobody objects, I plan to create the first RC for 0.10.2.2 next
> > Thursday. Please find a summary in the wiki:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.2.2
> >
> >
> > Thanks a lot,
> >   Matthias
> >
> >
> >
> >
>


Re: [DISCUSS] 0.11.0.3 bug fix release

2018-06-08 Thread Rajini Sivaram
+1

Thanks, Matthias!

On Fri, Jun 8, 2018 at 1:54 AM, Ismael Juma  wrote:

> +1, thanks!
>
> On Thu, 7 Jun 2018, 11:16 Matthias J. Sax,  wrote:
>
> > Dear all,
> >
> > I want to propose a 0.11.0.3 bug fix release. 0.11.0.2 is 6 months old
> > and a couple of critical fixes are available for 0.11.0.3.
> >
> > Please find a list of all 16 resolved tickets here:
> >
> >
> > https://issues.apache.org/jira/browse/KAFKA-6925?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%200.11.0.3
> >
> > There are no open tickets with target version 0.11.0.3 at the moment. If
> > there are any tickets you want to get included in 0.11.0.3 please let us
> > know as soon as possible.
> >
> >
> > If nobody objects, I plan to create the first RC for 0.11.0.3 next
> > Thursday. Please find a summary in the wiki:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.3
> >
> >
> > Thanks a lot,
> >   Matthias
> >
> >
> >
>


Re: [DISCUSS] 1.0.2 bug fix release

2018-06-08 Thread Rajini Sivaram
+1

Thank you, Matthias!

On Fri, Jun 8, 2018 at 1:53 AM, Ismael Juma  wrote:

> +1, thanks!
>
> On Thu, 7 Jun 2018, 11:16 Matthias J. Sax,  wrote:
>
> > Dear all,
> >
> > I want to propose a 1.0.2 bug fix release. 1.0.1 is 3 months old and a
> > couple of critical fixes are available for 1.0.2.
> >
> > Please find a list of all 14 resolved tickets here:
> >
> >
> > https://issues.apache.org/jira/browse/KAFKA-6937?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%201.0.2
> >
> > There are 7 open tickets with target version 1.0.2.
> >
> >
> > https://issues.apache.org/jira/browse/KAFKA-6083?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%201.0.2
> >
> > If you own one of these tickets, please let us know if you plan to
> > resolve them soon. Otherwise, please change the target version to a
> > future release.
> >
> > If there are any other tickets you want to get included in 1.0.2 please
> > let us know as soon as possible.
> >
> >
> > If nobody objects, I plan to create the first RC for 1.0.2 next
> > Thursday. Please find a summary in the wiki:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/Copy+of+Release+Plan+1.0.2
> >
> >
> > Thanks a lot,
> >   Matthias
> >
> >
> >
>


Jenkins build is back to normal : kafka-trunk-jdk8 #2715

2018-06-08 Thread Apache Jenkins Server
See 




Re: kafka ack=all and min-isr

2018-06-08 Thread Ismael Juma
The key point is:

if (leaderReplica.highWatermark.messageOffset >= requiredOffset) {

The high watermark only moves when all the replicas in ISR have that
particular offset. Does that clarify it?
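
To spell it out, a small sketch (not the Kafka source) of the decision in
checkEnoughReplicasReachOffset:

    // The ack is gated on the high watermark, which only advances once every
    // replica currently in the ISR has the offset; minIsr is just a floor on
    // the ISR size, not the trigger for the ack.
    static boolean enoughReplicasReachedOffset(long highWatermark, long requiredOffset,
                                               int isrSize, int minIsr) {
        return highWatermark >= requiredOffset && minIsr <= isrSize;
    }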

Ismael

On Thu, Jun 7, 2018 at 10:31 PM Carl Samuelson 
wrote:

> Hi
>
> Hopefully this is the correct email address and forum for this.
> I asked this question on stack overflow, but did not get an answer:
> https://stackoverflow.com/questions/50689177/kafka-ack-all-and-min-isr
>
> *Summary*
>
> The docs and code comments for Kafka suggest that when the producer setting
> acks is set to all, then an ack will only be sent to the producer when *all
> in-sync replicas have caught up*, but the code (Partition.scala,
> checkEnoughReplicasReachOffset) seems to suggest that the ack is sent as
> soon as *min in-sync replicas have caught up*.
>
> *Details*
>
> The kafka docs have this:
>
> acks=all This means the leader will wait for the full set of in-sync
> replicas to acknowledge the record. source
> 
>
> Also, looking at the Kafka source code - Partition.scala
> checkEnoughReplicasReachOffset() has the following comment (emphasis mine):
>
> Note that this method will only be called if requiredAcks = -1 and we are
> waiting for *all replicas* in ISR to be fully caught up to the (local)
> leader's offset corresponding to this produce request before we acknowledge
> the produce request.
>
> Finally, this answer on Stack Overflow (emphasis mine again)
>
> Also the min in-sync replica setting specifies the minimum number of
> replicas that need to be in-sync for the partition to remain available for
> writes. When a producer specifies ack (-1 / all config) it will still wait
> for acks from *all in sync replicas* at that moment (independent of the
> setting for min in-sync replicas).
>
> But when I look at the code in Partition.scala (note minIsr <=
> curInSyncReplicas.size):
>
> def checkEnoughReplicasReachOffset(requiredOffset: Long): (Boolean,
> Errors) = {
>   ...
>   val minIsr = leaderReplica.log.get.config.minInSyncReplicas
>   if (leaderReplica.highWatermark.messageOffset >= requiredOffset) {
> if (minIsr <= curInSyncReplicas.size)
>   (true, Errors.NONE)
>
> The code that calls this returns the ack:
>
> if (error != Errors.NONE || hasEnough) {
>   status.acksPending = false
>   status.responseStatus.error = error
> }
>
> So, the code looks like it returns an ack as soon as the in-sync replica
> set is at least as large as min in-sync replicas. However, the documentation and
> comments suggest that the ack is only sent once all in-sync replicas have
> caught up. What am I missing? At the very least, the comment above
> checkEnoughReplicasReachOffset looks like it should be changed.
> Regards,
>
> Carl
>