Build failed in Jenkins: kafka-1.0-jdk7 #193

2018-05-25 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-6937: In-sync replica delayed during fetch if replica throttle is

--
[...truncated 216.31 KB...]
kafka.api.TransactionsTest > testSendOffsets PASSED

kafka.api.ConsumerBounceTest > testCloseDuringRebalance STARTED

kafka.api.ConsumerBounceTest > testCloseDuringRebalance PASSED

kafka.api.ConsumerBounceTest > testClose STARTED

kafka.api.ConsumerBounceTest > testClose PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testSubscribeWhenTopicUnavailable STARTED

kafka.api.ConsumerBounceTest > testSubscribeWhenTopicUnavailable PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures SKIPPED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.UserQuotaTest > testThrottledRequest STARTED

kafka.api.UserQuotaTest > testThrottledRequest PASSED

kafka.api.ApiVersionTest > testMinVersionForMessageFormat STARTED

kafka.api.ApiVersionTest > testMinVersionForMessageFormat PASSED

kafka.api.ApiVersionTest > testApply STARTED

kafka.api.ApiVersionTest > testApply PASSED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms STARTED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms PASSED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption PASSED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec STARTED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.MetricsTest > testMetrics STARTED

kafka.api.MetricsTest > testMetrics PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testClose STARTED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush STARTED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition STARTED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset STARTED

kafka.api.SslProducerSendTest > 

Build failed in Jenkins: kafka-trunk-jdk10 #138

2018-05-25 Thread Apache Jenkins Server
See 


Changes:

[lindong28] MINOR: AdminClient should respect retry backoff

--
[...truncated 1.49 MB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldReturnNotCooridnatorErrorIfTransactionIdPartitionNotOwned STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldReturnNotCooridnatorErrorIfTransactionIdPartitionNotOwned PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testValidateTransactionTimeout STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testValidateTransactionTimeout PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldWriteTxnMarkersForTransactionInPreparedCommitState STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldWriteTxnMarkersForTransactionInPreparedCommitState PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldOnlyConsiderTransactionsInTheOngoingStateToAbort STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldOnlyConsiderTransactionsInTheOngoingStateToAbort PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldRemoveCompleteAbortExpiredTransactionalIds STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldRemoveCompleteAbortExpiredTransactionalIds PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhilePendingStateChanged STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhilePendingStateChanged PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testLoadAndRemoveTransactionsForPartition STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testLoadAndRemoveTransactionsForPartition PASSED


Jenkins build is back to normal : kafka-trunk-jdk8 #2673

2018-05-25 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk10 #137

2018-05-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6937: In-sync replica delayed during fetch if replica throttle is

[ismael] KAFKA-6930: Convert byte array to string in KafkaZkClient debug log

[junrao] Minor: Fixed ConsumerOffset#path (#5060)

--
[...truncated 1.49 MB...]
kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed 

Jenkins build is back to normal : kafka-trunk-jdk10 #136

2018-05-25 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6930) Update KafkaZkClient debug log

2018-05-25 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6930.

Resolution: Fixed

> Update KafkaZkClient debug log
> --
>
> Key: KAFKA-6930
> URL: https://issues.apache.org/jira/browse/KAFKA-6930
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, zkclient
>Affects Versions: 1.1.0
>Reporter: darion yaphet
>Priority: Trivial
> Attachments: [KAFKA-6930]_Update_KafkaZkClient_debug_log.patch, 
> snapshot.png
>
>
> Currently, KafkaZkClient prints data as Array[Byte] in its debug log; we 
> should print the data as a String instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6937) In-sync replica delayed during fetch if replica throttle is exceeded

2018-05-25 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-6937.

   Resolution: Fixed
Fix Version/s: 1.1.1
   1.0.2
   2.0.0

Merged the PR.

> In-sync replica delayed during fetch if replica throttle is exceeded
> 
>
> Key: KAFKA-6937
> URL: https://issues.apache.org/jira/browse/KAFKA-6937
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.1, 1.1.0, 1.0.1
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Major
> Fix For: 2.0.0, 1.0.2, 1.1.1
>
>
> When replication throttling is enabled, in-sync replica's traffic should 
> never be throttled. However, in DelayedFetch.tryComplete(), we incorrectly 
> delay the completion of an in-sync replica fetch request if replication 
> throttling is engaged. 
> The impact is that the producer may see increased latency if acks = all. The 
> delivery of the message to the consumer may also be delayed.
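The corrected behaviour can be summarised as: apply the replication throttle only to out-of-sync followers. A minimal Python sketch of that condition (hypothetical names and a deliberate simplification — not Kafka's actual DelayedFetch code):

```python
def should_delay_fetch(replica_id, in_sync_replica_ids, quota_exceeded):
    """Decide whether a follower fetch should be delayed by replication throttling.

    The bug: the fetch was delayed whenever the throttle quota was exceeded,
    even for in-sync replicas. The fix: only out-of-sync followers are subject
    to the throttle; in-sync replica traffic is never delayed.
    """
    is_in_sync = replica_id in in_sync_replica_ids
    return quota_exceeded and not is_in_sync

# An in-sync follower is never delayed, even when the quota is exceeded.
print(should_delay_fetch(replica_id=1, in_sync_replica_ids={1, 2}, quota_exceeded=True))   # False
# An out-of-sync follower is delayed while the quota is exceeded.
print(should_delay_fetch(replica_id=3, in_sync_replica_ids={1, 2}, quota_exceeded=True))   # True
```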





[jira] [Created] (KAFKA-6951) Implement offset expiration semantics for unsubscribed topics

2018-05-25 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6951:
--

 Summary: Implement offset expiration semantics for unsubscribed 
topics
 Key: KAFKA-6951
 URL: https://issues.apache.org/jira/browse/KAFKA-6951
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Vahid Hashemian
Assignee: Vahid Hashemian
 Fix For: 2.1.0


[This 
portion|https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets#KIP-211:ReviseExpirationSemanticsofConsumerGroupOffsets-UnsubscribingfromaTopic]
 of KIP-211 will be implemented separately from the main PR.





Re: [VOTE] KIP-176: Remove deprecated new-consumer option for tools

2018-05-25 Thread Ismael Juma
+1 (binding).

Ismael

On Wed, 23 May 2018, 09:04 Paolo Patierno,  wrote:

> Sorry ... I hope it's not too late but I created the KIP-176 on September
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-176%3A+Remove+deprecated+new-consumer+option+for+tools
>
> but since it is a breaking change, I needed to wait for a major release ...
> and the right time is now.
> Can you vote for it and add it to the release plan, please?
>
> Thanks,
>
> Paolo Patierno
> Principal Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>


Re: [VOTE] KIP-176: Remove deprecated new-consumer option for tools

2018-05-25 Thread Gwen Shapira
+1

Thank you for contributing and for waiting for the right release.

On Wed, May 23, 2018 at 9:04 AM, Paolo Patierno  wrote:

> Sorry ... I hope it's not too late but I created the KIP-176 on September
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 176%3A+Remove+deprecated+new-consumer+option+for+tools
>
> but since it is a breaking change, I needed to wait for a major release ...
> and the right time is now.
> Can you vote for it and add it to the release plan, please?
>
> Thanks,
>
> Paolo Patierno
> Principal Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Re: [VOTE] KIP-176: Remove deprecated new-consumer option for tools

2018-05-25 Thread Ewen Cheslack-Postava
+1 (binding)

Just following up on the existing version of the KIP, so nothing new here.
Possibly a bit disruptive given how quickly the 1.0 -> 2.0 jump happened, but
it's the right time to remove it.

-Ewen

On Thu, May 24, 2018 at 8:13 AM Viktor Somogyi 
wrote:

> +1 (non-binding)
>
> I like this KIP. The new-consumer option is actually sometimes confusing;
> commands work without it, and 2.0.0 seems like a good time to remove it.
> We should make sure to add enough tests to provide coverage.
>
> On Wed, May 23, 2018 at 7:32 PM, Ted Yu  wrote:
>
> > lgtm
> >
> > On Wed, May 23, 2018 at 9:04 AM, Paolo Patierno 
> > wrote:
> >
> > > Sorry ... I hope it's not too late but I created the KIP-176 on
> September
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 176%3A+Remove+deprecated+new-consumer+option+for+tools
> > >
> > > but since it is a breaking change, I needed to wait for a major release
> ...
> > > and the right time is now.
> > > Can you vote for it and add it to the release plan, please?
> > >
> > > Thanks,
> > >
> > > Paolo Patierno
> > > Principal Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> >
>


Re: New contributor, Jira KAFKA-6616

2018-05-25 Thread Matthias J. Sax
Added you to the list of contributors. You can now assign Jiras to
yourself.

-Matthias

On 5/25/18 10:59 AM, Victor Popov wrote:
> Hi,
> 
> I'm new to contributing and I want to start working on this issue.
> I see it's not being worked on by anybody, so could someone assign it to
> me, or maybe give me the rights to assign it to myself?
> My Jira username is vpopov89.
> 
> Regards,
> Viktor
> 





Re: [VOTE] KIP-298: Error Handling in Connect kafka

2018-05-25 Thread Arjun Satish
All,

The PR discussions prompted us to consider adding a new configuration
property for the dead letter queue. This (boolean) property, when enabled,
would add headers to the raw record containing details about the error
(such as the original topic, partition, and offset of the record, the
connector name, task number, processing stage, exception class, and error
message). By default, the property would be set to false and no headers
would be added.
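As a sketch of how this would surface to users, a sink connector config enabling the dead letter queue with the proposed header flag might look like the following (the topic name is a placeholder, and the exact property names are subject to the final KIP-298 text):

```properties
# Tolerate per-record failures instead of failing the task.
errors.tolerance=all
# Route records that fail in the sink connector to a dead letter queue topic.
errors.deadletterqueue.topic.name=my-connector-dlq
# The new (boolean) property discussed above: when true, error context
# (topic, partition, offset, connector name, task, stage, exception) is
# attached to each DLQ record as headers. Defaults to false.
errors.deadletterqueue.context.headers.enable=true
```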

I understand that the KIP freeze deadline has passed, but this minor
feature brings good value to the dead letter queue.

Please find the updates to the KIP in the section here, and let me know
what you think:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-
298%3A+Error+Handling+in+Connect#KIP-298:ErrorHandlinginConnect-
DeadLetterQueue(forSinkConnectorsonly)

Thanks very much,




On Tue, May 22, 2018 at 2:37 PM, Arjun Satish 
wrote:

> All,
>
> I'm closing this voting thread with 5 binding votes (Gwen, Guozhang,
> Matthias, Ewen, Jason) and 3 non-binding votes (Konstantine, Magesh,
> Randall).
>
> Thanks everyone for your comments and feedback. Much appreciated!
>
> Best,
> Arjun
>
> On Tue, May 22, 2018 at 11:39 AM, Jason Gustafson 
> wrote:
>
>> +1. Thanks for the KIP!
>>
>> On Mon, May 21, 2018 at 1:34 PM, Arjun Satish 
>> wrote:
>>
>> > All,
>> >
>> > Thanks so much for your feedback on this KIP. I've made some small
>> > modifications today. I'll wait till midnight today (PDT) to close this
>> > vote. Please let me know if there are any further comments.
>> >
>> > Best,
>> >
>> > On Mon, May 21, 2018 at 11:29 AM, Ewen Cheslack-Postava <
>> e...@confluent.io
>> > >
>> > wrote:
>> >
>> > > +1 binding. I had one last comment in the DISCUSS thread, but not
>> really
>> > a
>> > > blocker.
>> > >
>> > > -Ewen
>> > >
>> > > On Mon, May 21, 2018 at 9:48 AM Matthias J. Sax <
>> matth...@confluent.io>
>> > > wrote:
>> > >
>> > > > +1 (binding)
>> > > >
>> > > >
>> > > >
>> > > > On 5/21/18 9:30 AM, Randall Hauch wrote:
>> > > > > Thanks, Arjun. +1 (non-binding)
>> > > > >
>> > > > > Regards,
>> > > > > Randall
>> > > > >
>> > > > > On Mon, May 21, 2018 at 11:14 AM, Guozhang Wang <
>> wangg...@gmail.com>
>> > > > wrote:
>> > > > >
>> > > > >> Thanks for the KIP. +1 (binding)
>> > > > >>
>> > > > >>
>> > > > >> Guozhang
>> > > > >>
>> > > > >> On Fri, May 18, 2018 at 3:36 PM, Gwen Shapira > >
>> > > > wrote:
>> > > > >>
>> > > > >>> +1
>> > > > >>>
>> > > > >>> Thank you! Error handling in Connect will be a huge improvement.
>> > > > >>>
>> > > > >>> On Thu, May 17, 2018, 1:58 AM Arjun Satish <
>> arjun.sat...@gmail.com
>> > >
>> > > > >> wrote:
>> > > > >>>
>> > > >  All,
>> > > > 
>> > > >  Many thanks for all the feedback on KIP-298. Highly appreciate
>> the
>> > > > time
>> > > > >>> and
>> > > >  effort you all put into it.
>> > > > 
>> > > >  I've updated the KIP accordingly, and would like to start a
>> > > > >> vote
>> > > >  on it.
>> > > > 
>> > > >  KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > >  298%3A+Error+Handling+in+Connect
>> > > >  JIRA: https://issues.apache.org/jira/browse/KAFKA-6738
>> > > >  Discussion Thread: https://www.mail-archive.com/
>> > > >  dev@kafka.apache.org/msg87660.html
>> > > > 
>> > > >  Thanks very much!
>> > > > 
>> > > > >>>
>> > > > >>
>> > > > >>
>> > > > >>
>> > > > >> --
>> > > > >> -- Guozhang
>> > > > >>
>> > > > >
>> > > >
>> > > >
>> > >
>> >
>>
>
>


[jira] [Created] (KAFKA-6950) Add mechanism to delay response to failed client authentication

2018-05-25 Thread Dhruvil Shah (JIRA)
Dhruvil Shah created KAFKA-6950:
---

 Summary: Add mechanism to delay response to failed client 
authentication
 Key: KAFKA-6950
 URL: https://issues.apache.org/jira/browse/KAFKA-6950
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Dhruvil Shah
Assignee: Dhruvil Shah
 Fix For: 2.0.0


This Jira is for tracking the implementation for 
[KIP-306|https://cwiki.apache.org/confluence/display/KAFKA/KIP-306%3A+Configuration+for+Delaying+Response+to+Failed+Client+Authentication].





[jira] [Resolved] (KAFKA-3649) Add capability to query broker process for configuration properties

2018-05-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3649.
--
Resolution: Fixed

The requested features were added in KIP-133 (Describe and Alter Configs Admin
APIs) and KIP-226 (Dynamic Broker Configuration).

> Add capability to query broker process for configuration properties
> ---
>
> Key: KAFKA-3649
> URL: https://issues.apache.org/jira/browse/KAFKA-3649
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, config, core
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: David Tucker
>Assignee: Liquan Pei
>Priority: Major
>
> Developing an API by which running brokers could be queried for their various 
> configuration settings is an important feature for managing a Kafka cluster.
> Long term, the API could be enhanced to allow updates for those properties 
> that can be changed at run time ... but this involves a more thorough 
> evaluation of configuration properties (which ones can be modified in a 
> running broker and which require a restart {of individual nodes or the entire 
> cluster}).





New contributor, Jira KAFKA-6616

2018-05-25 Thread Victor Popov
Hi,

I'm new to contributing and I want to start working on this issue.
I see it's not being worked on by anybody, so could someone assign it to
me, or maybe give me the rights to assign it to myself?
My Jira username is vpopov89.

Regards,
Viktor


[jira] [Resolved] (KAFKA-6921) Remove old Scala producer and all related code, tests, and tools

2018-05-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6921.
--
Resolution: Fixed

> Remove old Scala producer and all related code, tests, and tools
> 
>
> Key: KAFKA-6921
> URL: https://issues.apache.org/jira/browse/KAFKA-6921
> Project: Kafka
>  Issue Type: Task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 2.0.0
>
>






[VOTE] KIP-264: Add a consumer metric to record raw fetch size

2018-05-25 Thread Vahid S Hashemian
In the absence of additional feedback on this KIP I'd like to start a 
vote.

To summarize, the KIP simply proposes to add a consumer metric to track 
the size of raw (uncompressed) fetched messages.
The KIP can be found here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-264%3A+Add+a+consumer+metric+to+record+raw+fetch+size

Thanks.
--Vahid




Re: Contributor Access to Kafka Project

2018-05-25 Thread Matthias J. Sax
What is your Jira user name ?

-Matthias



On 5/25/18 8:20 AM, vrspraveen atluri wrote:
> Hi Team,
> 
> 
> I am trying to assign a few bugs in Jira to myself but am not able to do so.
> Could you help me by granting the required permissions?
> 
> Thanks,
> Praveen.
> 





Re: Contributing to Apache Kafka

2018-05-25 Thread Matthias J. Sax
Done.

On 5/25/18 9:12 AM, Nikolay Izhikov wrote:
> Hello, guys!
> 
> I want to contribute to Apache Kafka.
> Please, give me permission to assign jira ticket to myself.
> 
> My Jira ID - NIzhikov
> 





Re: [DISCUSS] KIP-290: Support for wildcard suffixed ACLs

2018-05-25 Thread Andy Coates
> Since Resource is a concrete class now, we can't make it an interface
without breaking API compatibility.

Very true... hummm hacky, but we could sub-class Resource.

> Even if it were possible to do compatibly, I would argue it's a bad
idea.  If we need to add another bit of state like case insensitivity, we
don't want to have LiteralCaseInsensitiveResource,
WildcardSuffixedCaseInsensitiveResource, etc. etc.  You need 2^N subclasses
to represent N bits of state.

Not sure I agree - I would implement such dimensions using composition, not
different implementations, e.g. new CaseInsensitiveResourceFilter(new
PrefixedResourceFilter("/foo")) to get a case-insensitive prefixed filter.
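The composition Andy describes is essentially the decorator pattern: each orthogonal behaviour is one wrapper class, so N behaviours need N classes rather than 2^N subclasses. A small Python sketch of the idea (illustrative names only — not Kafka's actual classes):

```python
class PrefixedResourceFilter:
    """Matches any resource name that starts with the given (lowercase) prefix."""
    def __init__(self, prefix):
        self.prefix = prefix

    def matches(self, name):
        return name.startswith(self.prefix)

class CaseInsensitiveResourceFilter:
    """Decorator: lower-cases the name, then delegates to the wrapped filter.
    Any filter can be made case-insensitive without a new subclass per combination."""
    def __init__(self, delegate):
        self.delegate = delegate

    def matches(self, name):
        return self.delegate.matches(name.lower())

f = CaseInsensitiveResourceFilter(PrefixedResourceFilter("/foo"))
print(f.matches("/FOO/bar"))  # True
print(f.matches("/baz"))      # False
```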



On 22 May 2018 at 05:15, Colin McCabe  wrote:

> On Mon, May 21, 2018, at 04:53, Andy Coates wrote:
> > Hey Piyush,
> >
> > Thanks for the updated KIP! Couple of minor points from me:
> >
> > When storing wildcard-suffixed Acls in ZK, drop the asterisk off the end
> > of the path, e.g. change "/kafka-wildcard-acl/Topic/teamA*" to
> > "/kafka-wildcard-acl/Topic/teamA". This reduces error conditions,
> i.e.
> > this is a place for storing wildcard-suffixed Acls, so it implicitly ends
> > in an asterisk. If you include the asterisk in the path then you need to
> > validate each entry, when reading, ends with an asterisk, and do
> something
> > if they don't. If you don't include the trailing asterisk then there is
> > nothing to validate and you can treat the prefix as a literal, (i.e. no
> > escaping needed).  TBH I'd probably drop the asterisk from the in-memory
> > representation as well, for the same reason.
>
> Hi Andy,
>
> I agree.  If everything in ZK under /kafka-wildcard-acl/ is a prefix ACL,
> there is no need to include the star at the end.  And really, it should be
> called something like /kafka-prefix-acl/, since it's only vaguely related
> to the idea of wildcards.
>
> >
> > Rather than creating an enum to indicate the type of a resource, you
> could
> > instead use polymorphism, e.g. make Resource an interface and have two
> > implementations: LiteralResource and WildcardSuffixedResource.  This is
> > also extensible, but may also allow for a cleaner implementation.
>
> Since Resource is a concrete class now, we can't make it an interface
> without breaking API compatibility.
>
> Even if it were possible to do compatibly, I would argue it's a bad idea.
> If we need to add another bit of state like case insensitivity, we don't
> want to have LiteralCaseInsensitiveResource,
> WildcardSuffixedCaseInsensitiveResource, etc. etc.  You need 2^N
> subclasses to represent N bits of state.
>
> I would argue that there should be a field in Resource like NameType which
> can be LITERAL or PREFIX.  That leaves us in a good position when someone
> inevitably comes up with a new NameType.
>
> Does this still have a chance to get in, or has the KIP window closed?  I
> would argue with one or two minor changes it's ready to go.  Pretty much
> all of the compatibility problems are solved with the separate ZK hierarchy.
>
> best,
> Colin
>
> >
> > Andy
> >
> > On 21 May 2018 at 01:58, Rajini Sivaram  wrote:
> >
> > > Hi Piyush, Thanks for the KIP!
> > >
> > > +1 (binding)
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Sun, May 20, 2018 at 2:53 PM, Andy Coates 
> wrote:
> > >
> > > > Awesome last minute effort Piyush.
> > > >
> > > > Really appreciate your time and input,
> > > >
> > > > Andy
> > > >
> > > > Sent from my iPhone
> > > >
> > > > > On 19 May 2018, at 03:43, Piyush Vijay 
> wrote:
> > > > >
> > > > > Updated the KIP.
> > > > >
> > > > > 1. New enum field 'ResourceNameType' in Resource and ResourceFilter
> > > > classes.
> > > > > 2. modify getAcls() and rely on the 'ResourceNameType' field in
> Resource to
> > > > > return either exact matches or all matches based on
> wildcard-suffix.
> > > > > 3. CLI changes to identify if resource name is literal or
> > > wildcard-suffix
> > > > > 4. Escaping doesn't work and isn't required if we're keeping a
> separate
> > > > > path on ZK (kafka-wildcard-acl) to store wildcard-suffix ACLs.
> > > > > 5. New API keys for Create / Delete / Describe Acls request with a
> new
> > > > > field in schemas for 'ResourceNameType'.
> > > > >
> > > > > Looks ready to me for the vote, will start voting thread now.
> Thanks
> > > > > everyone for the valuable feedback.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > Piyush Vijay
> > > > >
> > > > >
> > > > >> On Fri, May 18, 2018 at 6:07 PM, Andy Coates 
> > > wrote:
> > > > >>
> > > > >> Hi Piyush,
> > > > >>
> > > > >> We're fast approaching the KIP deadline. Are you actively working
> on
> > > > this?
> > > > >> If you're not I can take over.
> > > > >>
> > > > >> Thanks,
> > > > >>
> > > > >> Andy
> > > > >>
> > > > >>> On 18 May 2018 at 14:25, Andy Coates 

Re: subscribe mail list

2018-05-25 Thread Matthias J. Sax
To subscribe, please follow instructions here:
https://kafka.apache.org/contact


On 5/24/18 8:16 PM,  wrote:
> subscribe mail list
> 



signature.asc
Description: OpenPGP digital signature


subscribe mail list

2018-05-25 Thread ????????????
subscribe mail list

Load testing Apache Kafka using JMeter

2018-05-25 Thread Vihar Nani
Hi Team,

Can anyone help me with the steps for integrating Apache Kafka with Apache JMeter?

I have been trying to load test Kafka with JMeter, but I couldn't find
proper information on this.







Thanks,
~Vihar Gangineni


Contributor Access to Kafka Project

2018-05-25 Thread vrspraveen atluri
Hi Team,


I am trying to assign a few bugs in Jira to myself but am not able to do
so. Could you help me by granting the required permissions?

Thanks,
Praveen.


Build failed in Jenkins: kafka-trunk-jdk10 #135

2018-05-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Use reflection for signal handler and do not enable it for IBM

--
[...truncated 1.49 MB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldReturnNotCooridnatorErrorIfTransactionIdPartitionNotOwned STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldReturnNotCooridnatorErrorIfTransactionIdPartitionNotOwned PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testValidateTransactionTimeout STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testValidateTransactionTimeout PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldWriteTxnMarkersForTransactionInPreparedCommitState STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldWriteTxnMarkersForTransactionInPreparedCommitState PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldOnlyConsiderTransactionsInTheOngoingStateToAbort STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldOnlyConsiderTransactionsInTheOngoingStateToAbort PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldRemoveCompleteAbortExpiredTransactionalIds STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldRemoveCompleteAbortExpiredTransactionalIds PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhilePendingStateChanged STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhilePendingStateChanged PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 

Contributing to Apache Kafka

2018-05-25 Thread Nikolay Izhikov
Hello, guys!

I want to contribute to Apache Kafka.
Please give me permission to assign Jira tickets to myself.

My Jira ID - NIzhikov

signature.asc
Description: This is a digitally signed message part


Jenkins build is back to normal : kafka-1.1-jdk7 #136

2018-05-25 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #2672

2018-05-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Use reflection for signal handler and do not enable it for IBM

--
[...truncated 415.57 KB...]
kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo STARTED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator STARTED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testPeriodicTokenExpiry STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testPeriodicTokenExpiry PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken 
PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED


Re: [DISCUSS] KIP-289: Improve the default group id behavior in KafkaConsumer

2018-05-25 Thread Vahid S Hashemian
Hi Victor,

Thanks for reviewing the KIP.

Yes, to minimize the backward compatibility impact, there would be no harm 
in letting a stand-alone consumer consume messages under a "" group id (as 
long as there is no offset commit).
It would have to knowingly seek to an offset or rely on the 
auto.offset.reset config for the starting offset.
This way the existing functionality would be preserved for the most part 
(with the argument that using the default group id for offset commit 
should not be the user's intention in practice).

Does it seem reasonable?
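To illustrate, the stand-alone pattern being preserved looks roughly like
the configuration below: no group.id, no offset commits, and the starting
position driven by auto.offset.reset or an explicit seek. (This is a
sketch of the configuration only; the config keys are Kafka's standard
ones, but whether an empty/absent group id stays legal for commits is
exactly what the KIP changes.)

```java
import java.util.Properties;

// Sketch of a stand-alone consumer configuration as described above:
// no group.id is set, so the consumer must assign() partitions itself,
// rely on auto.offset.reset (or seek()) for its starting position,
// and never commit offsets.
public class StandaloneConsumerConfig {
    public static Properties build(String bootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        // Intentionally no "group.id": group management and offset
        // commits are not used by this consumer.
        props.put("enable.auto.commit", "false");   // never commit offsets
        props.put("auto.offset.reset", "earliest"); // starting position
        return props;
    }

    public static void main(String[] args) {
        Properties p = build("localhost:9092");
        if (p.containsKey("group.id")) throw new AssertionError();
        if (!"false".equals(p.getProperty("enable.auto.commit")))
            throw new AssertionError();
        System.out.println("ok");
    }
}
```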

Thanks.
--Vahid




From:   Viktor Somogyi 
To: dev 
Date:   05/25/2018 04:56 AM
Subject:Re: [DISCUSS] KIP-289: Improve the default group id 
behavior in KafkaConsumer



Hi Vahid,

When reading your KIP I couldn't fully understand why you decided to fail
with "offset_commit" in case #2. Can't we fail with an empty group
id even in "fetch" or "fetch_offset"? What was the reason for deciding to
fail at "offset_commit"? Was it because of upgrade compatibility reasons?

Thanks,
Viktor

On Thu, May 24, 2018 at 12:06 AM, Ted Yu  wrote:

> Looks good to me.
>  Original message From: Vahid S Hashemian <
> vahidhashem...@us.ibm.com> Date: 5/23/18  11:19 AM  (GMT-08:00) To:
> dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-289: Improve the default
> group id behavior in KafkaConsumer
> Hi Ted,
>
> Thanks for reviewing the KIP. I updated the KIP and introduced an error
> code for the scenario described.
>
> --Vahid
>
>
>
>
> From:   Ted Yu 
> To: dev@kafka.apache.org
> Date:   04/27/2018 04:31 PM
> Subject:Re: [DISCUSS] KIP-289: Improve the default group id
> behavior in KafkaConsumer
>
>
>
> bq. If they attempt an offset commit they will receive an error.
>
> Can you outline what specific error would be encountered ?
>
> Thanks
>
> On Fri, Apr 27, 2018 at 2:17 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi all,
> >
> > I have drafted a proposal for improving the behavior of KafkaConsumer
> when
> > using the default group id:
> >
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-

>
> > 289%3A+Improve+the+default+group+id+behavior+in+KafkaConsumer
> > The proposal based on the issue and suggestion reported in KAFKA-6774.
> >
> > Your feedback is welcome!
> >
> > Thanks.
> > --Vahid
> >
> >
>
>
>
>
>






[jira] [Created] (KAFKA-6949) NoSuchElementException in ReplicaManager

2018-05-25 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6949:
--

 Summary: NoSuchElementException in ReplicaManager
 Key: KAFKA-6949
 URL: https://issues.apache.org/jira/browse/KAFKA-6949
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


I found this in a failed execution of 
kafka.admin.ReassignPartitionsClusterTest.shouldExpandCluster. Looks like we're 
missing some option checking.

{code}
[2018-05-25 08:03:53,310] ERROR [ReplicaManager broker=100] Error while 
changing replica dir for partition my-topic-2 (kafka.server.ReplicaManager:76)
java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at 
kafka.server.ReplicaManager$$anonfun$alterReplicaLogDirs$1.apply(ReplicaManager.scala:584)
at 
kafka.server.ReplicaManager$$anonfun$alterReplicaLogDirs$1.apply(ReplicaManager.scala:576)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
kafka.server.ReplicaManager.alterReplicaLogDirs(ReplicaManager.scala:576)
at 
kafka.server.KafkaApis.handleAlterReplicaLogDirsRequest(KafkaApis.scala:2037)
at kafka.server.KafkaApis.handle(KafkaApis.scala:138)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6907) Not able to delete topic

2018-05-25 Thread praveen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

praveen resolved KAFKA-6907.

Resolution: Fixed

> Not able to delete topic
> 
>
> Key: KAFKA-6907
> URL: https://issues.apache.org/jira/browse/KAFKA-6907
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 1.1.0
> Environment: Development
>Reporter: praveen
>Priority: Minor
> Fix For: 1.1.0
>
>
> Not able to delete a Kafka topic
>  
> ./kafka-topics.sh --delete --zookeeper zoo1:2181 --topic test1
> Topic test1 is marked for deletion.
> Note: This will have no impact if delete.topic.enable is not set to true.
>  ./kafka-topics.sh --describe --zookeeper zoo1:2181 --topic test1
> Topic:test1 PartitionCount:1 ReplicationFactor:2 Configs: 
> MarkedForDeletion:true
> Topic: test1 Partition: 0 Leader: -1 Replicas: 1,0 Isr: 0
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-305: Add Connect primitive number converters

2018-05-25 Thread Randall Hauch
Thanks, everyone!

This vote passes with 3 binding +1 votes, 5 non-binding +1 votes, and no -1
votes. All binding votes were in before the KIP deadline for 2.0, so this
has been added to the 2.0 release plan:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820

Best regards,
Randall

On Wed, May 23, 2018 at 4:49 AM, Rahul Singh 
wrote:

> +1 non binding
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
> On May 22, 2018, 9:31 PM -0400, Yeva Byzek , wrote:
> > +1
> >
> > Thanks,
> > Yeva
> >
> >
> > On Tue, May 22, 2018 at 7:48 PM, Magesh Nandakumar  > wrote:
> >
> > > +1 (non-binding)
> > >
> > > Thanks
> > > Magesh
> > >
> > > On Tue, May 22, 2018 at 4:23 PM, Randall Hauch 
> wrote:
> > >
> > > > (bump so a few new subscribers see this thread.)
> > > >
> > > > On Tue, May 22, 2018 at 4:39 PM, Randall Hauch 
> wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > On Tue, May 22, 2018 at 4:05 PM, Matthias J. Sax <
> > > matth...@confluent.io
> > > > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > On 5/22/18 1:49 PM, Gwen Shapira wrote:
> > > > > > > +1 (I can't believe we didn't have it until now...)
> > > > > > >
> > > > > > > On Tue, May 22, 2018 at 1:26 PM, Ewen Cheslack-Postava <
> > > > > > e...@confluent.io
> > > > > > > wrote:
> > > > > > >
> > > > > > > > +1 (binding)
> > > > > > > >
> > > > > > > > -Ewen
> > > > > > > >
> > > > > > > > On Tue, May 22, 2018 at 9:29 AM Ted Yu  > > wrote:
> > > > > > > >
> > > > > > > > > +1
> > > > > > > > >
> > > > > > > > > On Tue, May 22, 2018 at 9:19 AM, Randall Hauch <
> rha...@gmail.com
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > I'd like to start a vote of a very straightforward
> proposal for
> > > > > > Connect
> > > > > > > > > to
> > > > > > > > > > add converters for the basic primitive number types:
> integer,
> > > > short,
> > > > > > > > > long,
> > > > > > > > > > double, and float that reuse Kafka's corresponding
> serdes. Here
> > > is
> > > > > > the
> > > > > > > > > KIP:
> > > > > > > > > >
> > > > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > > > 305%3A+Add+Connect+primitive+number+converters
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Best regards,
> > > > > > > > > >
> > > > > > > > > > Randall
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
>


Re: [VOTE] KIP-277 - Fine Grained ACL for CreateTopics API

2018-05-25 Thread Edoardo Comar
Thanks Ismael, noted on the KIP

On 21 May 2018 at 18:29, Ismael Juma  wrote:
> Thanks for the KIP, +1 (binding). Can you also please describe the
> compatibility impact of changing the error code from
> CLUSTER_AUTHORIZATION_FAILED to TOPIC_AUTHORIZATION_FAILED?
>
> Ismael
>
> On Wed, Apr 25, 2018 at 2:45 AM Edoardo Comar  wrote:
>
>> Hi,
>>
>> The discuss thread on KIP-277 (
>> https://www.mail-archive.com/dev@kafka.apache.org/msg86540.html )
>> seems to have been fruitful and concerns have been addressed, please allow
>> me start a vote on it:
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-277+-+Fine+Grained+ACL+for+CreateTopics+API
>>
>> I will update the small PR to the latest KIP semantics if the vote passes
>> (as I hope :-).
>>
>> cheers
>> Edo
>> --
>>
>> Edoardo Comar
>>
>> IBM Message Hub
>>
>> IBM UK Ltd, Hursley Park, SO21 2JN
>> Unless stated otherwise above:
>> IBM United Kingdom Limited - Registered in England and Wales with number
>> 741598.
>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>>



-- 
"When the people fear their government, there is tyranny; when the
government fears the people, there is liberty." [Thomas Jefferson]


Re: [DISCUSS] KIP-289: Improve the default group id behavior in KafkaConsumer

2018-05-25 Thread Viktor Somogyi
Hi Vahid,

When reading your KIP I couldn't fully understand why you decided to fail
with "offset_commit" in case #2. Can't we fail with an empty group
id even in "fetch" or "fetch_offset"? What was the reason for deciding to
fail at "offset_commit"? Was it because of upgrade compatibility reasons?

Thanks,
Viktor

On Thu, May 24, 2018 at 12:06 AM, Ted Yu  wrote:

> Looks good to me.
>  Original message From: Vahid S Hashemian <
> vahidhashem...@us.ibm.com> Date: 5/23/18  11:19 AM  (GMT-08:00) To:
> dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-289: Improve the default
> group id behavior in KafkaConsumer
> Hi Ted,
>
> Thanks for reviewing the KIP. I updated the KIP and introduced an error
> code for the scenario described.
>
> --Vahid
>
>
>
>
> From:   Ted Yu 
> To: dev@kafka.apache.org
> Date:   04/27/2018 04:31 PM
> Subject:Re: [DISCUSS] KIP-289: Improve the default group id
> behavior in KafkaConsumer
>
>
>
> bq. If they attempt an offset commit they will receive an error.
>
> Can you outline what specific error would be encountered ?
>
> Thanks
>
> On Fri, Apr 27, 2018 at 2:17 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi all,
> >
> > I have drafted a proposal for improving the behavior of KafkaConsumer
> when
> > using the default group id:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>
> > 289%3A+Improve+the+default+group+id+behavior+in+KafkaConsumer
> > The proposal based on the issue and suggestion reported in KAFKA-6774.
> >
> > Your feedback is welcome!
> >
> > Thanks.
> > --Vahid
> >
> >
>
>
>
>
>


Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-05-25 Thread Edoardo Comar
Hi Jonathan,
I'm ok with an expandable enum for the config that could be extended
in the future.
It is marginally better than multiple potentially conflicting config entries.

Though since the change for KIP-302 is independent of KIP-235
and they do not conflict,
when we look back at it post-2.0 we can see whether it is more valuable
to shoehorn its config into an expanded enum or not.

thanks,
Edo
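
For reference, the client-side behavior KIP-302 proposes — resolving all
DNS A records for a host and trying each IP in turn — can be sketched with
the JDK alone. (The connect step is stubbed out here; a real client would
attempt a TCP connection per address.)

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.function.Predicate;

// Sketch of the KIP-302 idea: instead of using only the first resolved
// address, iterate over all A records and fall through to the next IP
// when a connection attempt fails. The "connect" step is a stub.
public class AllDnsIpsSketch {
    static InetAddress firstReachable(String host, Predicate<InetAddress> connect)
            throws UnknownHostException {
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            if (connect.test(addr)) {
                return addr;  // connected: use this IP
            }
            // otherwise try the next resolved address
        }
        return null;  // all resolved IPs failed
    }

    public static void main(String[] args) throws Exception {
        // "localhost" resolves locally, so this runs without a network.
        InetAddress ok = firstReachable("localhost", a -> true);
        if (ok == null) throw new AssertionError();
        InetAddress none = firstReachable("localhost", a -> false);
        if (none != null) throw new AssertionError();
        System.out.println("ok");
    }
}
```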

On 24 May 2018 at 16:50, Skrzypek, Jonathan  wrote:
> Hi,
>
> As Rajini suggested in the thread for KIP 235 (attached), we could try to 
> have an enum that would drive what does the client expands/resolves.
>
> I suggest a client config called client.dns.lookup with different values 
> possible :
>
> - no : no dns lookup
> - hostnames.only : perform dns lookup on both bootstrap.servers and
> advertised listeners
> - canonical.hostnames.only : perform dns lookup on both bootstrap.servers and
> advertised listeners, resolving to canonical host names
> - bootstrap.hostnames.only : perform dns lookup on the bootstrap.servers list
> and expand it
> - bootstrap.canonical.hostnames.only : perform dns lookup on the
> bootstrap.servers list and expand it to canonical host names
> - advertised.listeners.hostnames.only : perform dns lookup on advertised
> listeners
> - advertised.listeners.canonical.hostnames.only : perform dns lookup on
> advertised listeners, resolving to canonical host names
>
> I realize this is a bit heavy but this gives users the ability to pick and 
> choose.
> I didn't include a setting to mix hostnames and canonical hostnames as I'm 
> not sure there would be a valid use case.
>
> Alternatively, to have less possible values, we could have 2 parameters :
>
> - dns.lookup.type with values : hostname / canonical.host.name
> - dns.lookup.behaviour : bootstrap.servers, advertised.listeners, both
>
> Thoughts ?
>
> Jonathan Skrzypek
>
>
> -Original Message-
> From: Edoardo Comar [mailto:edoco...@gmail.com]
> Sent: 17 May 2018 23:50
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS resolved 
> IP addresses
>
> Hi Jonathan,
>
>> A solution might be to expose to users the choice of using hostname or 
>> canonical host name on both sides.
>> Say having one setting that collapses functionalities from both KIPs 
>> (bootstrap expansion + advertised lookup)
>> and an additional parameter that defines how the resolution is performed, 
>> using getCanonicalHostName() or not.
>
> thanks sounds to me *less* simple than independent config options, sorry.
>
> I would like to say once again that by itself  KIP-302 only speeds up
> the client behavior that can happen anyway when the client restarts
> multiple times,
> as every time there is no guarantee that - in presence of multiple A
> DNS records - the same IP is returned. Attempting to use additional IPs
> if the first fails just makes client recovery faster.
>
> cheers
> Edo
>
> On 17 May 2018 at 12:12, Skrzypek, Jonathan  wrote:
>> Yes, makes sense.
>> You mentioned multiple times you see no overlap and no issue with your KIP, 
>> and that they solve different use cases.
>>
>> Appreciate you have an existing use case that would work, but we need to 
>> make sure this isn't confusing to users and that any combination will always 
>> work, across security protocols.
>>
>> A solution might be to expose to users the choice of using hostname or 
>> canonical host name on both sides.
>> Say having one setting that collapses functionalities from both KIPs 
>> (bootstrap expansion + advertised lookup) and an additional parameter that 
>> defines how the resolution is performed, using getCanonicalHostName() or not.
>>
>> Maybe that gives less flexibility as users wouldn't be able to decide to 
>> only perform DNS lookup on bootstrap.servers or on advertised listeners.
>> But this would ensure consistency so that a user can decide to use cnames or 
>> not (depending on their certificates and Kerberos principals in their 
>> environment) and it would work.
>>
>> Jonathan Skrzypek
>>
>> -Original Message-
>> From: Edoardo Comar [mailto:edoco...@gmail.com]
>> Sent: 16 May 2018 21:59
>> To: dev@kafka.apache.org
>> Subject: Re: [DISCUSS] KIP-302 - Enable Kafka clients to use all DNS 
>> resolved IP addresses
>>
>> Hi Jonathan,
>> I am afraid that may not work for everybody.
>>
>> It would not work for us.
>> With our current DNS, my Kafka clients are perfectly happy to use any IPs -
>> DNS has multiple A records for the 'myhostname.mydomain' used for
>> bootstrap and advertised listeners.
>> The hosts all serve TLS certificates that include
>> 'myhostname.mydomain'  and the clients are happy.
>>
>> However, applying getCanonicalHostName to those IPs would return
>> hostnames that would not match the TLS certificates.
>>
>> So once again I believe your solution and ours solve different use cases.
>>
>> cheers
>> Edo
>>
>> On 16 May 2018 at 18:29, Skrzypek, Jonathan  wrote:
>>> I think there are combinations that will break SASL 

[jira] [Created] (KAFKA-6948) Avoid overflow in timestamp comparison

2018-05-25 Thread Giovanni Liva (JIRA)
Giovanni Liva created KAFKA-6948:


 Summary: Avoid overflow in timestamp comparison
 Key: KAFKA-6948
 URL: https://issues.apache.org/jira/browse/KAFKA-6948
 Project: Kafka
  Issue Type: Improvement
Reporter: Giovanni Liva


Some comparisons with timestamp values are not safe. These comparisons can 
trigger errors like those found in some other issues, e.g. KAFKA-4290 or 
KAFKA-6608.

The following classes contain comparisons between timestamps that can 
overflow.
 * org.apache.kafka.clients.NetworkClientUtils
 * org.apache.kafka.clients.consumer.internals.ConsumerCoordinator
 * org.apache.kafka.common.security.kerberos.KerberosLogin
 * org.apache.kafka.connect.runtime.WorkerSinkTask
 * org.apache.kafka.connect.tools.MockSinkTask
 * org.apache.kafka.connect.tools.MockSourceTask
 * org.apache.kafka.streams.processor.internals.GlobalStreamThread
 * org.apache.kafka.streams.processor.internals.StateDirectory
 * org.apache.kafka.streams.processor.internals.StreamThread
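
The overflow in question is the classic deadline-comparison bug: computing
"start + timeout" can wrap past Long.MAX_VALUE, so a "now > start + timeout"
check misfires for very large timeouts. Comparing elapsed time instead is
safe. (This is a generic sketch of the idiom, not the actual Kafka code.)

```java
// Sketch of the overflow-safe idiom for timestamp comparisons.
// Unsafe: (startMs + timeoutMs) can overflow to a negative value when
// timeoutMs is huge (e.g. Long.MAX_VALUE), making the deadline appear past.
// Safe: compare the elapsed difference, which avoids the addition entirely.
public class TimestampCompareSketch {

    static boolean expiredUnsafe(long nowMs, long startMs, long timeoutMs) {
        return nowMs > startMs + timeoutMs;  // overflows for large timeouts
    }

    static boolean expiredSafe(long nowMs, long startMs, long timeoutMs) {
        return nowMs - startMs > timeoutMs;  // no addition, no overflow
    }

    public static void main(String[] args) {
        long now = 1_000L, start = 1L, huge = Long.MAX_VALUE;
        // start + huge wraps negative, so the unsafe check wrongly reports expiry:
        if (!expiredUnsafe(now, start, huge)) throw new AssertionError();
        // the safe check correctly says the (effectively infinite) timeout holds:
        if (expiredSafe(now, start, huge)) throw new AssertionError();
        System.out.println("ok");
    }
}
```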

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6947) Mirrormaker Closing producer due to send failure

2018-05-25 Thread Andrew Holford (JIRA)
Andrew Holford created KAFKA-6947:
-

 Summary: Mirrormaker Closing producer due to send failure
 Key: KAFKA-6947
 URL: https://issues.apache.org/jira/browse/KAFKA-6947
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Andrew Holford


Hi

On occasion our mirrormakers fail with the below error

[2018-05-25 05:10:31,695] ERROR Error when sending message to topic 
com_snapshot--demo with key: 13 bytes, value: 355 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.TimeoutException: Expiring 38 record(s) for 
com_snapshot--demo-5: 91886 ms has passed since last append

[2018-05-25 05:10:31,710] INFO Closing producer due to send failure. 
(kafka.tools.MirrorMaker$)

[2018-05-25 05:10:31,710] INFO Closing the Kafka producer with timeoutMillis = 
0 ms. (org.apache.kafka.clients.producer.KafkaProducer)

[2018-05-25 05:10:31,710] INFO Proceeding to force close the producer since 
pending requests could not be completed within timeout 0 ms. 
(org.apache.kafka.clients.producer.KafkaProducer)

 

On the broker side we see:

WARN Attempting to send response via channel for which there is no open 
connection, connection id 10.82.6.105:9093-172.27.205.216:32796 
(kafka.network.Processor)

Does anyone know what could cause this and what a possible solution could be?

I'm a little confused by the "timeoutMillis = 0 ms" mentioned as well; is this 
some setting which needs adjusting somewhere? We have request.timeout.ms=6 
on the producer config with most other settings left as the defaults. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2061) Offer a --version flag to print the kafka version

2018-05-25 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-2061.

   Resolution: Fixed
Fix Version/s: 2.0.0

> Offer a --version flag to print the kafka version
> -
>
> Key: KAFKA-2061
> URL: https://issues.apache.org/jira/browse/KAFKA-2061
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Pennebaker
>Priority: Minor
> Fix For: 2.0.0
>
>
> As a newbie, I want kafka command line tools to offer a --version flag to 
> print the kafka version, so that it's easier to work with the community to 
> troubleshoot things.
> As a mitigation, users can query the package management system. But that's A) 
> Not necessarily a newbie's first instinct and B) Not always possible when 
> kafka is installed manually from tarballs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)