[GitHub] kafka pull request #4064: MINOR: add unit test for StateStoreSerdes

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4064


---


[GitHub] kafka pull request #4071: MINOR: a few web doc and javadoc fixes

2017-10-12 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4071

MINOR: a few web doc and javadoc fixes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KMinor-javadoc-gaps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4071.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4071
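The close-by-commit convention above can be exercised locally; a minimal sketch (the throwaway repository, identity, and commit message below are illustrative only):

```shell
# Sketch of the close-via-commit workflow described above, in a scratch repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Example Dev"
# The merge commit's message body carries the closing directive:
git commit -q --allow-empty -m "MINOR: a few web doc and javadoc fixes

This closes #4071"
git log -1 --format=%B   # prints the message, including the closing line
```

When such a commit lands on the project's master/trunk branch, the ASF GitHub tooling (asfgit) closes the referenced pull request automatically.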


commit 8a962a25dff7d6388d64174c23e190c157ace199
Author: Guozhang Wang 
Date:   2017-10-13T05:47:18Z

made a first pass




---


[GitHub] kafka pull request #4070: MINOR: update comments in config/producer.properti...

2017-10-12 Thread omkreddy
GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/4070

MINOR: update comments in config/producer.properties



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka prodcuer.config

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4070.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4070


commit c923b0bf04574eb442952f14ac2a0ccb7a8f46d9
Author: Manikumar Reddy 
Date:   2017-10-13T04:44:27Z

MINOR: update comments in config/producer.properties




---


Build failed in Jenkins: kafka-trunk-jdk8 #2135

2017-10-12 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Bump the request timeout for the transactional message copier

--
[...truncated 3.28 MB...]
kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
PASSED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn STARTED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError 

Build failed in Jenkins: kafka-trunk-jdk7 #2888

2017-10-12 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Bump the request timeout for the transactional message copier

--
[...truncated 1.82 MB...]

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest
 STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest
 PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets 
STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest
 STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest
 PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 

[jira] [Created] (KAFKA-6059) Kafka can't delete old log files on Windows

2017-10-12 Thread rico (JIRA)
rico created KAFKA-6059:
---

 Summary: Kafka can't delete old log files on Windows
 Key: KAFKA-6059
 URL: https://issues.apache.org/jira/browse/KAFKA-6059
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.11.0.0, 0.10.2.1, 0.10.2.0, 0.10.1.1, 0.10.1.0, 
0.10.0.1, 0.10.0.0
 Environment: OS:windows 2016
Reporter: rico
Priority: Critical






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-1.0-jdk7 #33

2017-10-12 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Bump the request timeout for the transactional message copier

--
[...truncated 368.84 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 

[jira] [Resolved] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-10-12 Thread jang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jang resolved KAFKA-5734.
-
Resolution: Resolved

I was using the producer in an unintended way.

> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx, jconsole.png
>
>
> I set up kafka server on ubuntu with 4GB ram.
> Heap ( Old generation space ) size is increasing gradually like attached 
> excel file which recorded gc info in 1 minute interval.
> Finally OU occupies 2.6GB and GC expend too much time ( And out of memory 
> exception )
> kafka process argumens are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_
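Since the report tracks old-generation occupancy via the `-Xloggc` output, one rough way to pull the after-GC heap figure out of such a G1 log is sketched below. The sample lines and the exact line format are assumptions; real `PrintGCDetails` output varies by JVM version.

```python
import re

# Illustrative G1 GC log lines; real -Xloggc output differs by JVM version.
sample_log = """\
   [Eden: 24.0M(24.0M)->0.0B(21.0M) Survivors: 3072.0K->3072.0K Heap: 1200.5M(3000.0M)->1180.2M(3000.0M)]
   [Eden: 21.0M(21.0M)->0.0B(20.0M) Survivors: 3072.0K->3072.0K Heap: 1250.9M(3000.0M)->1238.7M(3000.0M)]
"""

# "Heap: <before>(<cap>)-><after>(<cap>)" -- keep the after-GC figure, whose
# steady growth across collections is what the report describes.
heap_after = [
    float(m.group(1))
    for m in re.finditer(r"Heap: [\d.]+M\([\d.]+M\)->([\d.]+)M", sample_log)
]
print(heap_after)   # after-GC heap in MB, one entry per collection
growing = all(a <= b for a, b in zip(heap_after, heap_after[1:]))
print(growing)      # True if live heap never shrinks, the symptom reported
```

Plotting `heap_after` over time (as the attached spreadsheet does) makes the monotonic growth of the old generation easy to see.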



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4039: MINOR: Bump the request timeout for the transactio...

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4039


---


Re: [VOTE] 1.0.0 RC0

2017-10-12 Thread Ismael Juma
The only public classes are the ones in the javadoc. SecurityProtocol was
not public, but it now is.

Ismael
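The `SecurityProtocol` import relocation discussed in this thread can be applied mechanically in a downstream source tree; a hedged sketch (the demo file is fabricated here, and GNU `sed -i` semantics are assumed):

```shell
# Rewrite the old SecurityProtocol import to its new package, as a downstream
# project compiling against 1.0.0 would need to do.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
cat > "$tmp/src/Demo.java" <<'EOF'
import org.apache.kafka.common.protocol.SecurityProtocol;
EOF
grep -rl 'org\.apache\.kafka\.common\.protocol\.SecurityProtocol' "$tmp/src" |
  xargs sed -i 's/common\.protocol\.SecurityProtocol/common.security.auth.SecurityProtocol/'
cat "$tmp/src/Demo.java"   # now imports org.apache.kafka.common.security.auth.SecurityProtocol
```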

On 13 Oct 2017 12:16 am, "Ted Yu"  wrote:

> Thanks for the explanation.
>
> SecurityProtocol was declared public in previous releases, hence I didn't
> notice that it is internal.
>
> On Thu, Oct 12, 2017 at 4:07 PM, Guozhang Wang  wrote:
>
> > For internal classes that are designed to be abstracted away from normal
> > users, I think it is OK to not mention it in the upgrade guides.
> >
> > For developers rather than users of Kafka, they are assumed to be
> familiar
> > with the codebase and not only rely on upgrade guide docs for such
> > information.
> >
> >
> > Guozhang
> >
> > On Thu, Oct 12, 2017 at 2:58 PM, Ted Yu  wrote:
> >
> > > bq. Internal classes which had previously been located in this package
> > have
> > > been moved elsewhere
> > >
> > > It would be clearer to Kafka users if the relocation of
> > > org.apache.kafka.common.protocol.SecurityProtocol is mentioned
> > explicitly.
> > > Otherwise they need to dig into the code to find out.
> > >
> > > Just my two cents.
> > >
> > > On Thu, Oct 12, 2017 at 2:24 PM, Guozhang Wang 
> > wrote:
> > >
> > > > Ted,
> > > >
> > > > I found that we do have a corresponding doc change for this
> > renaming:
> > > >
> > > > https://github.com/apache/kafka/pull/3863/files#diff-
> > > > 8100f2416b657c1e1e4238dabf8a15e0
> > > >
> > > > And from the web docs:
> > > >
> > > > http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > > > kafka_2.11-1.0.0-site-docs.tgz
> > > >
> > > > I can indeed find it in the upgrade.html.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Thu, Oct 12, 2017 at 11:39 AM, Guozhang Wang 
> > > > wrote:
> > > >
> > > > > Thanks Ted,
> > > > >
> > > > > I'm looking into this for possible doc changes now.
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Wed, Oct 11, 2017 at 3:23 PM, Ted Yu 
> wrote:
> > > > >
> > > > >> Looks like the following change is needed for some downstream
> > project
> > > to
> > > > >> compile their code (which was using 0.11.0.1):
> > > > >>
> > > > >> -import org.apache.kafka.common.protocol.SecurityProtocol;
> > > > >> +import org.apache.kafka.common.security.auth.SecurityProtocol;
> > > > >>
> > > > >> I took a look at docs/upgrade.html but didn't see any mentioning.
> > > > >>
> > > > >> Should this be documented ?
> > > > >>
> > > > >> On Tue, Oct 10, 2017 at 6:34 PM, Guozhang Wang <
> wangg...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >> > Hello Kafka users, developers and client-developers,
> > > > >> >
> > > > >> > This is the first candidate for release of Apache Kafka 1.0.0.
> > > > >> >
> > > > >> > It's worth noting that starting in this version we are using a
> > > > different
> > > > >> > version protocol with three digits: *major.minor.bug-fix*
> > > > >> >
> > > > >> > Any and all testing is welcome, but the following areas are
> worth
> > > > >> > highlighting:
> > > > >> >
> > > > >> > 1. Client developers should verify that their clients can
> > > > >> produce/consume
> > > > >> > to/from 1.0.0 brokers (ideally with compressed and uncompressed
> > > data).
> > > > >> > 2. Performance and stress testing. Heroku and LinkedIn have
> helped
> > > > with
> > > > >> > this in the past (and issues have been found and fixed).
> > > > >> > 3. End users can verify that their apps work correctly with the
> > new
> > > > >> > release.
> > > > >> >
> > > > >> > This is a major version release of Apache Kafka. It includes 29
> > new
> > > > >> KIPs.
> > > > >> > See the release notes and release plan
> > > > >> > (*https://cwiki.apache.org/confluence/pages/viewpage.
> > > > >> > action?pageId=71764913
> > > > >> >  > > > >> pageId=71764913
> > > > >> > >*)
> > > > >> > for more details. A few feature highlights:
> > > > >> >
> > > > >> > * Java 9 support with significantly faster TLS and CRC32C
> > > > >> implementations
> > > > >> > (KIP)
> > > > >> > * JBOD improvements: disk failure only disables failed disk but
> > not
> > > > the
> > > > >> > broker (KIP-112/KIP-113)
> > > > >> > * Newly added metrics across all the modules (KIP-164, KIP-168,
> > > > KIP-187,
> > > > >> > KIP-188, KIP-196)
> > > > >> > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 /
> 160
> > /
> > > > >> 161),
> > > > >> > and drop compatibility "Evolving" annotations
> > > > >> >
> > > > >> > Release notes for the 1.0.0 release:
> > > > >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > > RELEASE_NOTES.html
> > > > >> >  > > RELEASE_NOTES.html
> > > > >*
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > *** Please download, test and vote by Friday, October 13, 8pm PT
> > > > >> >
> > > > >> > Kafka's KEYS file containing PGP keys we use to 

Re: [VOTE] 1.0.0 RC0

2017-10-12 Thread Ted Yu
Thanks for the explanation.

SecurityProtocol was declared public in previous releases, hence I didn't
notice that it is internal.

On Thu, Oct 12, 2017 at 4:07 PM, Guozhang Wang  wrote:

> For internal classes that are designed to be abstracted away from normal
> users, I think it is OK to not mention it in the upgrade guides.
>
> For developers rather than users of Kafka, they are assumed to be familiar
> with the codebase and not only rely on upgrade guide docs for such
> information.
>
>
> Guozhang
>
> On Thu, Oct 12, 2017 at 2:58 PM, Ted Yu  wrote:
>
> > bq. Internal classes which had previously been located in this package
> have
> > been moved elsewhere
> >
> > It would be clearer to Kafka users if the relocation of
> > org.apache.kafka.common.protocol.SecurityProtocol is mentioned
> explicitly.
> > Otherwise they need to dig into the code to find out.
> >
> > Just my two cents.
> >
> > On Thu, Oct 12, 2017 at 2:24 PM, Guozhang Wang 
> wrote:
> >
> > > Ted,
> > >
> > > I found that we do have a corresponding doc change for this
> renaming:
> > >
> > > https://github.com/apache/kafka/pull/3863/files#diff-
> > > 8100f2416b657c1e1e4238dabf8a15e0
> > >
> > > And from the web docs:
> > >
> > > http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > > kafka_2.11-1.0.0-site-docs.tgz
> > >
> > > I can indeed find it in the upgrade.html.
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Thu, Oct 12, 2017 at 11:39 AM, Guozhang Wang 
> > > wrote:
> > >
> > > > Thanks Ted,
> > > >
> > > > I'm looking into this for possible doc changes now.
> > > >
> > > > Guozhang
> > > >
> > > > On Wed, Oct 11, 2017 at 3:23 PM, Ted Yu  wrote:
> > > >
> > > >> Looks like the following change is needed for some downstream
> project
> > to
> > > >> compile their code (which was using 0.11.0.1):
> > > >>
> > > >> -import org.apache.kafka.common.protocol.SecurityProtocol;
> > > >> +import org.apache.kafka.common.security.auth.SecurityProtocol;
> > > >>
> > > >> I took a look at docs/upgrade.html but didn't see any mentioning.
> > > >>
> > > >> Should this be documented ?
> > > >>
> > > >> On Tue, Oct 10, 2017 at 6:34 PM, Guozhang Wang 
> > > >> wrote:
> > > >>
> > > >> > Hello Kafka users, developers and client-developers,
> > > >> >
> > > >> > This is the first candidate for release of Apache Kafka 1.0.0.
> > > >> >
> > > >> > It's worth noting that starting in this version we are using a
> > > different
> > > >> > version protocol with three digits: *major.minor.bug-fix*
> > > >> >
> > > >> > Any and all testing is welcome, but the following areas are worth
> > > >> > highlighting:
> > > >> >
> > > >> > 1. Client developers should verify that their clients can
> > > >> produce/consume
> > > >> > to/from 1.0.0 brokers (ideally with compressed and uncompressed
> > data).
> > > >> > 2. Performance and stress testing. Heroku and LinkedIn have helped
> > > with
> > > >> > this in the past (and issues have been found and fixed).
> > > >> > 3. End users can verify that their apps work correctly with the
> new
> > > >> > release.
> > > >> >
> > > >> > This is a major version release of Apache Kafka. It includes 29
> new
> > > >> KIPs.
> > > >> > See the release notes and release plan
> > > >> > (*https://cwiki.apache.org/confluence/pages/viewpage.
> > > >> > action?pageId=71764913
> > > >> >  > > >> pageId=71764913
> > > >> > >*)
> > > >> > for more details. A few feature highlights:
> > > >> >
> > > >> > * Java 9 support with significantly faster TLS and CRC32C
> > > >> implementations
> > > >> > (KIP)
> > > >> > * JBOD improvements: disk failure only disables failed disk but
> not
> > > the
> > > >> > broker (KIP-112/KIP-113)
> > > >> > * Newly added metrics across all the modules (KIP-164, KIP-168,
> > > KIP-187,
> > > >> > KIP-188, KIP-196)
> > > >> > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160
> /
> > > >> 161),
> > > >> > and drop compatibility "Evolving" annotations
> > > >> >
> > > >> > Release notes for the 1.0.0 release:
> > > >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > RELEASE_NOTES.html
> > > >> >  > RELEASE_NOTES.html
> > > >*
> > > >> >
> > > >> >
> > > >> >
> > > >> > *** Please download, test and vote by Friday, October 13, 8pm PT
> > > >> >
> > > >> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > >> > http://kafka.apache.org/KEYS
> > > >> >
> > > >> > * Release artifacts to be voted upon (source and binary):
> > > >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > > >> > *
> > > >> >
> > > >> > * Maven artifacts to be voted upon:
> > > >> > https://repository.apache.org/content/groups/staging/
> > > >> >
> > > >> > * Javadoc:
> > > >> > 

Re: [VOTE] 1.0.0 RC0

2017-10-12 Thread Guozhang Wang
For internal classes that are designed to be abstracted away from normal
users, I think it is OK to not mention it in the upgrade guides.

For developers rather than users of Kafka, they are assumed to be familiar
with the codebase and not only rely on upgrade guide docs for such
information.


Guozhang

On Thu, Oct 12, 2017 at 2:58 PM, Ted Yu  wrote:

> bq. Internal classes which had previously been located in this package have
> been moved elsewhere
>
> It would be clearer to Kafka users if the relocation of
> org.apache.kafka.common.protocol.SecurityProtocol is mentioned explicitly.
> Otherwise they need to dig into the code to find out.
>
> Just my two cents.
>
> On Thu, Oct 12, 2017 at 2:24 PM, Guozhang Wang  wrote:
>
> > Ted,
> >
> > I found that we do have a corresponding doc change for this renaming:
> >
> > https://github.com/apache/kafka/pull/3863/files#diff-
> > 8100f2416b657c1e1e4238dabf8a15e0
> >
> > And from the web docs:
> >
> > http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > kafka_2.11-1.0.0-site-docs.tgz
> >
> > I can indeed find it in the upgrade.html.
> >
> >
> > Guozhang
> >
> >
> > On Thu, Oct 12, 2017 at 11:39 AM, Guozhang Wang 
> > wrote:
> >
> > > Thanks Ted,
> > >
> > > I'm looking into this for possible doc changes now.
> > >
> > > Guozhang
> > >
> > > On Wed, Oct 11, 2017 at 3:23 PM, Ted Yu  wrote:
> > >
> > >> Looks like the following change is needed for some downstream project
> to
> > >> compile their code (which was using 0.11.0.1):
> > >>
> > >> -import org.apache.kafka.common.protocol.SecurityProtocol;
> > >> +import org.apache.kafka.common.security.auth.SecurityProtocol;
> > >>
> > >> I took a look at docs/upgrade.html but didn't see any mentioning.
> > >>
> > >> Should this be documented ?
> > >>
> > >> On Tue, Oct 10, 2017 at 6:34 PM, Guozhang Wang 
> > >> wrote:
> > >>
> > >> > Hello Kafka users, developers and client-developers,
> > >> >
> > >> > This is the first candidate for release of Apache Kafka 1.0.0.
> > >> >
> > >> > It's worth noting that starting in this version we are using a
> > different
> > >> > version protocol with three digits: *major.minor.bug-fix*
> > >> >
> > >> > Any and all testing is welcome, but the following areas are worth
> > >> > highlighting:
> > >> >
> > >> > 1. Client developers should verify that their clients can
> > >> produce/consume
> > >> > to/from 1.0.0 brokers (ideally with compressed and uncompressed
> data).
> > >> > 2. Performance and stress testing. Heroku and LinkedIn have helped
> > with
> > >> > this in the past (and issues have been found and fixed).
> > >> > 3. End users can verify that their apps work correctly with the new
> > >> > release.
> > >> >
> > >> > This is a major version release of Apache Kafka. It includes 29 new
> > >> KIPs.
> > >> > See the release notes and release plan
> > >> > (*https://cwiki.apache.org/confluence/pages/viewpage.
> > >> > action?pageId=71764913
> > >> >  > >> pageId=71764913
> > >> > >*)
> > >> > for more details. A few feature highlights:
> > >> >
> > >> > * Java 9 support with significantly faster TLS and CRC32C
> > >> implementations
> > >> > (KIP)
> > >> > * JBOD improvements: disk failure only disables failed disk but not
> > the
> > >> > broker (KIP-112/KIP-113)
> > >> > * Newly added metrics across all the modules (KIP-164, KIP-168,
> > KIP-187,
> > >> > KIP-188, KIP-196)
> > >> > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 /
> > >> 161),
> > >> > and drop compatibility "Evolving" annotations
> > >> >
> > >> > Release notes for the 1.0.0 release:
> > >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> RELEASE_NOTES.html
> > >> >  RELEASE_NOTES.html
> > >*
> > >> >
> > >> >
> > >> >
> > >> > *** Please download, test and vote by Friday, October 13, 8pm PT
> > >> >
> > >> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > >> > http://kafka.apache.org/KEYS
> > >> >
> > >> > * Release artifacts to be voted upon (source and binary):
> > >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > >> > *
> > >> >
> > >> > * Maven artifacts to be voted upon:
> > >> > https://repository.apache.org/content/groups/staging/
> > >> >
> > >> > * Javadoc:
> > >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/javadoc/
> > >> > *
> > >> >
> > >> > * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc0 tag:
> > >> >
> > >> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > >> > 2f97bc6a9ee269bf90b019e50b4eeb43df2f1143
> > >> >
> > >> > * Documentation:
> > >> > Note the documentation can't be pushed live due to changes that will
> > >> not go
> > >> > live until the release. You can manually 

Re: [DISCUSS] KIP-171: Extend Consumer Group Reset Offset for Stream Application

2017-10-12 Thread Matthias J. Sax
Jorge,

thanks for the update.

I would suggest not reusing `ConsumerGroupCommand` and instead reimplementing
what we need in `StreamsResetter` directly.

Even if we need to keep `StreamsResetter` in `core` for now, I think we
should not introduce new dependencies.

Currently, we still use the old `kafka.admin.AdminClient` in
`StreamsResetter`. We need the new `KafkaAdminClient` to support "describe
consumer group" to get rid of this part. Then we can move
`StreamsResetter` to the `streams` package.

Cf. https://issues.apache.org/jira/browse/KAFKA-6058
https://issues.apache.org/jira/browse/KAFKA-5965

Feel free to pick up KAFKA-6058 and KAFKA-5965.


-Matthias



On 10/9/17 12:54 AM, Jorge Esteban Quilcate Otoya wrote:
> Matthias,
> 
> Thanks for the heads up!
> 
> I think the main dependency is from `StreamsResetter` to the
> `ConsumerGroupCommand` class, to actually reuse the `#reset-offsets`
> functionality.
> 
> Not sure what would be the best way to remove it. To expose commands
> (e.g. `ConsumerGroupCommand`) as part of AdminClient, they have to be
> re-implemented in the `client` module, right? Is this an option? If not, I
> think we should keep `StreamsResetter` as part of the `core` module until we
> have `ConsumerGroupCommand` in the `client` module as well.
> 
> El vie., 6 oct. 2017 a las 0:05, Matthias J. Sax ()
> escribió:
> 
>> Jorge,
>>
>> KIP-198 (that got merged already) overlaps with this KIP. Can you please
>> update your KIP accordingly?
>>
>> Also, while working on KIP-198, we identified some shortcomings in
>> AdminClient that do not allow us to move StreamsResetter out of the core
>> package. We want to address those shortcomings in another KIP to add the
>> missing functionality to the new AdminClient.
>>
>> Having said this, and remembering a discussion about dependencies that
>> might be introduced by this KIP, it might be good to understand those
>> dependencies in detail. Maybe we can resolve those dependencies somehow
>> and thus be able to move StreamsResetter out of the core package. Could you
>> summarize those dependencies in the KIP, or just as a reply?
>>
>> Thanks!
>>
>>
>> -Matthias
>>
>> On 9/11/17 3:02 PM, Jorge Esteban Quilcate Otoya wrote:
>>> Thanks Guozhang!
>>>
>>> I have updated the KIP to:
>>>
>>> 1. Only one scenario param is allowed. If none, `to-earliest` will be
>> used,
>>> behaving as the current version.
>>>
>>> 2.
>>>   1. An exception will be printed mentioning that there are no existing
>>> offsets registered.
>>>   2. The inputTopics format could support defining partition numbers, as in
>>> the reset-offsets option for kafka-consumer-groups.
>>>
>>> 3. That should be handled by KIP-198.
>>>
>>> I will start the VOTE thread in a following email.
>>>
>>>
>>> El mié., 30 ago. 2017 a las 2:01, Guozhang Wang ()
>>> escribió:
>>>
 Hi Jorge,

 Thanks for the KIP. It would be a great feature to add to the reset
>> tool.
 I made a pass over it and it looks good to me overall. I have a few
 comments:

 1. For all the scenarios, do we allow users to specify more than one
 parameter? If not, could you make that clear in the wiki, e.g. we would
 return an error message saying that only one is allowed; if yes,
>> then
 what precedence order are we following?

 2. Personally I feel that "--by-duration", "--to-offset" and "--shift-by"
 are a tad overkill, because 1) they assume there exists a committed
 offset for each of the topics, but that may not always be true, and 2)
 a single offset / time shift amount may not be a good fit for all topics
 universally, i.e. one could imagine that we want to reset all input topics
 to their offsets at a given time, but resetting all topics' offsets to the
 same value, or shifting all of them by the same amount, is usually not
 applicable. "--by-duration" also seems easily supported by "to-date".

 For the general consumer group reset tool, since offsets can be set per
 partition, these parameters may be more useful.

 3. As for the implementation details, when removing the zookeeper config in
 `kafka-streams-application-reset`, we should consider returning a meaningful
 error message; otherwise it would be an "unrecognized config" error.


 If you feel confident about the wiki after discussing these
>> points,
 please feel free to move on to start a voting thread. Note that we are
 about 3 weeks away from KIP deadline and 4 weeks away from feature
 deadline.


 Guozhang





 On Tue, Aug 22, 2017 at 1:45 PM, Matthias J. Sax >>
 wrote:

> Thanks for the update Jorge.
>
> I don't have any further comments.
>
>
> -Matthias
>
> On 8/12/17 6:43 PM, Jorge Esteban Quilcate Otoya wrote:
>> I have updated the KIP:
>>
>> - Change execution parameters, using `--dry-run`
>> - 
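The KIP-171 argument rules discussed in this thread — at most one reset-scenario option with `to-earliest` as the default, and an input-topic format that may carry partition numbers as in the `kafka-consumer-groups` reset-offsets option — can be sketched roughly as follows. The option names come from the discussion above; the validation and parsing logic is illustrative, not the actual StreamsResetter implementation:

```python
# Illustrative sketch of the KIP-171 argument rules discussed in this thread.
# Option names come from the discussion; the logic is a stand-in, not the
# real StreamsResetter code.

SCENARIO_OPTIONS = {"--to-earliest", "--to-latest", "--to-offset",
                    "--to-datetime", "--by-duration", "--shift-by"}

def pick_scenario(args):
    """Allow at most one scenario option; default to --to-earliest if none given."""
    given = sorted(SCENARIO_OPTIONS.intersection(args))
    if len(given) > 1:
        raise ValueError("only one reset scenario is allowed, got: " + ", ".join(given))
    return given[0] if given else "--to-earliest"

def parse_input_topic(spec):
    """Parse 'topic' or 'topic:0,1,2' into (topic, partition list or None)."""
    if ":" in spec:
        topic, partitions = spec.split(":", 1)
        return topic, [int(p) for p in partitions.split(",")]
    return spec, None  # no partitions listed: reset all partitions
```

For example, `parse_input_topic("my-topic:0,2")` would restrict the reset to partitions 0 and 2, while a bare topic name resets all of its partitions.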

[jira] [Created] (KAFKA-6058) Add "describe consumer group" to KafkaAdminClient

2017-10-12 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6058:
--

 Summary: Add "describe consumer group" to KafkaAdminClient
 Key: KAFKA-6058
 URL: https://issues.apache.org/jira/browse/KAFKA-6058
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Matthias J. Sax


{{KafkaAdminClient}} does not allow getting information about consumer groups.
This feature is supported by the old {{kafka.admin.AdminClient}}, though.

We should add {{KafkaAdminClient#describeConsumerGroup()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Jenkins build is back to normal : kafka-trunk-jdk7 #2887

2017-10-12 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4069: KAFKA-6057: Users forget `--execute` in the offset...

2017-10-12 Thread gilles-degols
GitHub user gilles-degols opened a pull request:

https://github.com/apache/kafka/pull/4069

KAFKA-6057: Users forget `--execute` in the offset reset tool



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gilles-degols/kafka 
kafka-6057-Users-forget-execute-in-the-offset-reset-tool

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4069.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4069


commit be80d959494460ea140b10ad83ec098488ce7d79
Author: Gilles Degols 
Date:   2017-10-12T22:14:33Z

Users forget `--execute` in the offset reset tool




---


Re: [VOTE] 1.0.0 RC0

2017-10-12 Thread Ted Yu
bq. Internal classes which had previously been located in this package have
been moved elsewhere

It would be clearer to Kafka users if the relocation of
org.apache.kafka.common.protocol.SecurityProtocol is mentioned explicitly.
Otherwise they need to dig into the code to find out.

Just my two cents.

On Thu, Oct 12, 2017 at 2:24 PM, Guozhang Wang  wrote:

> Ted,
>
> I can found that we do have a corresponding doc change for this renaming:
>
> https://github.com/apache/kafka/pull/3863/files#diff-
> 8100f2416b657c1e1e4238dabf8a15e0
>
> And from the web docs:
>
> http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> kafka_2.11-1.0.0-site-docs.tgz
>
> I can indeed find it in the upgrade.html.
>
>
> Guozhang
>
>
> On Thu, Oct 12, 2017 at 11:39 AM, Guozhang Wang 
> wrote:
>
> > Thanks Ted,
> >
> > I'm looking into this for possible doc changes now.
> >
> > Guozhang
> >
> > On Wed, Oct 11, 2017 at 3:23 PM, Ted Yu  wrote:
> >
> >> Looks like the following change is needed for some downstream project to
> >> compile their code (which was using 0.11.0.1):
> >>
> >> -import org.apache.kafka.common.protocol.SecurityProtocol;
> >> +import org.apache.kafka.common.security.auth.SecurityProtocol;
> >>
> >> I took a look at docs/upgrade.html but didn't see any mentioning.
> >>
> >> Should this be documented ?
> >>
> >> On Tue, Oct 10, 2017 at 6:34 PM, Guozhang Wang 
> >> wrote:
> >>
> >> > Hello Kafka users, developers and client-developers,
> >> >
> >> > This is the first candidate for release of Apache Kafka 1.0.0.
> >> >
> >> > It's worth noting that starting in this version we are using a
> different
> >> > version protocol with three digits: *major.minor.bug-fix*
> >> >
> >> > Any and all testing is welcome, but the following areas are worth
> >> > highlighting:
> >> >
> >> > 1. Client developers should verify that their clients can
> >> produce/consume
> >> > to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> >> > 2. Performance and stress testing. Heroku and LinkedIn have helped
> with
> >> > this in the past (and issues have been found and fixed).
> >> > 3. End users can verify that their apps work correctly with the new
> >> > release.
> >> >
> >> > This is a major version release of Apache Kafka. It includes 29 new
> >> KIPs.
> >> > See the release notes and release plan
> >> > (*https://cwiki.apache.org/confluence/pages/viewpage.
> >> > action?pageId=71764913
> >> >  >> pageId=71764913
> >> > >*)
> >> > for more details. A few feature highlights:
> >> >
> >> > * Java 9 support with significantly faster TLS and CRC32C
> >> implementations
> >> > (KIP)
> >> > * JBOD improvements: disk failure only disables failed disk but not
> the
> >> > broker (KIP-112/KIP-113)
> >> > * Newly added metrics across all the modules (KIP-164, KIP-168,
> KIP-187,
> >> > KIP-188, KIP-196)
> >> > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 /
> >> 161),
> >> > and drop compatibility "Evolving" annotations
> >> >
> >> > Release notes for the 1.0.0 release:
> >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/RELEASE_NOTES.html
> >> >  >*
> >> >
> >> >
> >> >
> >> > *** Please download, test and vote by Friday, October 13, 8pm PT
> >> >
> >> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >> > http://kafka.apache.org/KEYS
> >> >
> >> > * Release artifacts to be voted upon (source and binary):
> >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> >> > *
> >> >
> >> > * Maven artifacts to be voted upon:
> >> > https://repository.apache.org/content/groups/staging/
> >> >
> >> > * Javadoc:
> >> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/javadoc/
> >> > *
> >> >
> >> > * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc0 tag:
> >> >
> >> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> >> > 2f97bc6a9ee269bf90b019e50b4eeb43df2f1143
> >> >
> >> > * Documentation:
> >> > Note the documentation can't be pushed live due to changes that will
> >> not go
> >> > live until the release. You can manually verify by downloading
> >> > http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> >> > kafka_2.11-1.0.0-site-docs.tgz
> >> >
> >> > * Successful Jenkins builds for the 1.0.0 branch:
> >> > Unit/integration tests: https://builds.apache.org/job/
> >> kafka-1.0-jdk7/20/
> >> >
> >> >
> >> > /**
> >> >
> >> >
> >> > Thanks,
> >> > -- Guozhang
> >> >
> >>
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
>
> --
> -- Guozhang
>


Re: [VOTE] 1.0.0 RC0

2017-10-12 Thread Guozhang Wang
Ted,

I found that we do have a corresponding doc change for this renaming:

https://github.com/apache/kafka/pull/3863/files#diff-8100f2416b657c1e1e4238dabf8a15e0

And from the web docs:

http://home.apache.org/~guozhang/kafka-1.0.0-rc0/kafka_2.11-1.0.0-site-docs.tgz

I can indeed find it in the upgrade.html.


Guozhang


On Thu, Oct 12, 2017 at 11:39 AM, Guozhang Wang  wrote:

> Thanks Ted,
>
> I'm looking into this for possible doc changes now.
>
> Guozhang
>
> On Wed, Oct 11, 2017 at 3:23 PM, Ted Yu  wrote:
>
>> Looks like the following change is needed for some downstream project to
>> compile their code (which was using 0.11.0.1):
>>
>> -import org.apache.kafka.common.protocol.SecurityProtocol;
>> +import org.apache.kafka.common.security.auth.SecurityProtocol;
>>
>> I took a look at docs/upgrade.html but didn't see any mentioning.
>>
>> Should this be documented ?
>>
>> On Tue, Oct 10, 2017 at 6:34 PM, Guozhang Wang 
>> wrote:
>>
>> > Hello Kafka users, developers and client-developers,
>> >
>> > This is the first candidate for release of Apache Kafka 1.0.0.
>> >
>> > It's worth noting that starting in this version we are using a different
>> > version protocol with three digits: *major.minor.bug-fix*
>> >
>> > Any and all testing is welcome, but the following areas are worth
>> > highlighting:
>> >
>> > 1. Client developers should verify that their clients can
>> produce/consume
>> > to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
>> > 2. Performance and stress testing. Heroku and LinkedIn have helped with
>> > this in the past (and issues have been found and fixed).
>> > 3. End users can verify that their apps work correctly with the new
>> > release.
>> >
>> > This is a major version release of Apache Kafka. It includes 29 new
>> KIPs.
>> > See the release notes and release plan
>> > (*https://cwiki.apache.org/confluence/pages/viewpage.
>> > action?pageId=71764913
>> > > pageId=71764913
>> > >*)
>> > for more details. A few feature highlights:
>> >
>> > * Java 9 support with significantly faster TLS and CRC32C
>> implementations
>> > (KIP)
>> > * JBOD improvements: disk failure only disables failed disk but not the
>> > broker (KIP-112/KIP-113)
>> > * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
>> > KIP-188, KIP-196)
>> > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 /
>> 161),
>> > and drop compatibility "Evolving" annotations
>> >
>> > Release notes for the 1.0.0 release:
>> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/RELEASE_NOTES.html
>> > *
>> >
>> >
>> >
>> > *** Please download, test and vote by Friday, October 13, 8pm PT
>> >
>> > Kafka's KEYS file containing PGP keys we use to sign the release:
>> > http://kafka.apache.org/KEYS
>> >
>> > * Release artifacts to be voted upon (source and binary):
>> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
>> > *
>> >
>> > * Maven artifacts to be voted upon:
>> > https://repository.apache.org/content/groups/staging/
>> >
>> > * Javadoc:
>> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/javadoc/
>> > *
>> >
>> > * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc0 tag:
>> >
>> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
>> > 2f97bc6a9ee269bf90b019e50b4eeb43df2f1143
>> >
>> > * Documentation:
>> > Note the documentation can't be pushed live due to changes that will
>> not go
>> > live until the release. You can manually verify by downloading
>> > http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
>> > kafka_2.11-1.0.0-site-docs.tgz
>> >
>> > * Successful Jenkins builds for the 1.0.0 branch:
>> > Unit/integration tests: https://builds.apache.org/job/
>> kafka-1.0-jdk7/20/
>> >
>> >
>> > /**
>> >
>> >
>> > Thanks,
>> > -- Guozhang
>> >
>>
>
>
>
> --
> -- Guozhang
>



-- 
-- Guozhang


[GitHub] kafka pull request #4068: MINOR: Update JavaDoc to use new API

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4068


---


[GitHub] kafka-site pull request #97: Port changes from PR4017 and PR3862 to 0110

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/97


---


Re: [VOTE] 1.0.0 RC0

2017-10-12 Thread Guozhang Wang
Thanks Ted,

I'm looking into this for possible doc changes now.

Guozhang

On Wed, Oct 11, 2017 at 3:23 PM, Ted Yu  wrote:

> Looks like the following change is needed for some downstream project to
> compile their code (which was using 0.11.0.1):
>
> -import org.apache.kafka.common.protocol.SecurityProtocol;
> +import org.apache.kafka.common.security.auth.SecurityProtocol;
>
> I took a look at docs/upgrade.html but didn't see any mentioning.
>
> Should this be documented ?
>
> On Tue, Oct 10, 2017 at 6:34 PM, Guozhang Wang  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 1.0.0.
> >
> > It's worth noting that starting in this version we are using a different
> > version protocol with three digits: *major.minor.bug-fix*
> >
> > Any and all testing is welcome, but the following areas are worth
> > highlighting:
> >
> > 1. Client developers should verify that their clients can produce/consume
> > to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> > 2. Performance and stress testing. Heroku and LinkedIn have helped with
> > this in the past (and issues have been found and fixed).
> > 3. End users can verify that their apps work correctly with the new
> > release.
> >
> > This is a major version release of Apache Kafka. It includes 29 new KIPs.
> > See the release notes and release plan
> > (*https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=71764913
> >  action?pageId=71764913
> > >*)
> > for more details. A few feature highlights:
> >
> > * Java 9 support with significantly faster TLS and CRC32C implementations
> > (KIP)
> > * JBOD improvements: disk failure only disables failed disk but not the
> > broker (KIP-112/KIP-113)
> > * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> > KIP-188, KIP-196)
> > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> > and drop compatibility "Evolving" annotations
> >
> > Release notes for the 1.0.0 release:
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/RELEASE_NOTES.html
> > *
> >
> >
> >
> > *** Please download, test and vote by Friday, October 13, 8pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > *
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc0/javadoc/
> > *
> >
> > * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc0 tag:
> >
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > 2f97bc6a9ee269bf90b019e50b4eeb43df2f1143
> >
> > * Documentation:
> > Note the documentation can't be pushed live due to changes that will not
> go
> > live until the release. You can manually verify by downloading
> > http://home.apache.org/~guozhang/kafka-1.0.0-rc0/
> > kafka_2.11-1.0.0-site-docs.tgz
> >
> > * Successful Jenkins builds for the 1.0.0 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/20/
> >
> >
> > /**
> >
> >
> > Thanks,
> > -- Guozhang
> >
>



-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk7 #2886

2017-10-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Don't register signal handlers if running on Windows

[damian.guy] MINOR: improve Store parameter checks

--
[...truncated 366.73 KB...]
kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testExceedProducerIdLimit 
PASSED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId STARTED

kafka.coordinator.transaction.ProducerIdManagerTest > testGetProducerId PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSaveForLaterWhenLeaderUnknownButNotAvailable PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateEmptyMapWhenNoRequestsOutstanding PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCreateMetricsOnStarting PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldAbortAppendToLogOnEndTxnWhenNotCoordinatorError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRetryAppendToLogOnEndTxnWhenCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldCompleteAppendToLogOnEndTxnWhenSendMarkersSucceed PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldGenerateRequestPerPartitionPerBroker PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldRemoveMarkersForTxnPartitionWhenPartitionEmigrated PASSED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound STARTED

kafka.coordinator.transaction.TransactionMarkerChannelManagerTest > 
shouldSkipSendMarkersWhenLeaderNotFound PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
PASSED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn STARTED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendTransactionToLogWhileProducerFenced PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testCompleteTransitionWhenAppendSucceeded PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToCoordinatorNotAvailableError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testAppendFailToUnknownError PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #2132

2017-10-12 Thread Apache Jenkins Server
See 




[GitHub] kafka-site pull request #97: Port changes from PR4017 and PR3862 to 0110

2017-10-12 Thread joel-hamill
GitHub user joel-hamill opened a pull request:

https://github.com/apache/kafka-site/pull/97

Port changes from PR4017 and PR3862 to 0110

Port changes from https://github.com/apache/kafka/pull/4017 and 
https://github.com/apache/kafka/pull/3862 to 0110.

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joel-hamill/kafka-site joel-hamill/0110-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/97.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #97


commit 72b3bc737c4d53a8a4685adca2ef0e5a0f65c355
Author: Joel Hamill 
Date:   2017-10-04T00:51:03Z

MINOR: Update verbiage on landing page

Author: Joel Hamill 
Author: Joel Hamill <11722533+joel-ham...@users.noreply.github.com>

Reviewers: Guozhang Wang , Michael G. Noll 
, Damian Guy 

Closes #77 from joel-hamill/joel-hamill/nav-fixes-streams

commit 4f17bfb125142c9f333f80153414446c8406cd6b
Author: Joel Hamill 
Date:   2017-10-05T00:05:42Z

Back out changes to index

Add footer space

commit b3ecdd0b13199e41af548c22ce1f1ffcc669b6ed
Author: Joel Hamill 
Date:   2017-10-12T15:53:42Z

Port changes from PR4017 and PR3862 to 0110




---


Build failed in Jenkins: kafka-trunk-jdk7 #2885

2017-10-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update `config/consumer.properties` to have new consumer

--
[...truncated 1.82 MB...]
org.apache.kafka.streams.KafkaStreamsTest > testStateGlobalThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics STARTED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 

Jenkins build is back to normal : kafka-1.0-jdk7 #30

2017-10-12 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6057) Users forget `--execute` in the offset reset tool

2017-10-12 Thread Yeva Byzek (JIRA)
Yeva Byzek created KAFKA-6057:
-

 Summary: Users forget `--execute` in the offset reset tool
 Key: KAFKA-6057
 URL: https://issues.apache.org/jira/browse/KAFKA-6057
 Project: Kafka
  Issue Type: Improvement
  Components: consumer, core, tools
Reporter: Yeva Byzek


Sometimes users try to reset offsets but forget the {{--execute}} parameter. If 
it is omitted, no action is performed, but this is not conveyed to the user. 

This JIRA is to augment the tool so that, if {{--execute}} is omitted, it gives 
users feedback that no action was performed and that {{--execute}} is required 
to apply the reset.
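
The behavior the JIRA asks for can be sketched with a toy argument parser. This is an illustration of the proposed feedback, not the actual kafka-consumer-groups tool; the function name and messages are invented for the example.

```python
# Hedged sketch (not the real offset reset tool): when --execute is
# omitted, tell the user explicitly that nothing was changed instead
# of staying silent.
import argparse

def plan_action(argv):
    parser = argparse.ArgumentParser(prog="offset-reset-sketch")
    parser.add_argument("--execute", action="store_true",
                        help="actually apply the offset reset")
    args = parser.parse_args(argv)
    if not args.execute:
        # The key change: an explicit dry-run notice instead of silence.
        return "DRY RUN: no offsets were changed; re-run with --execute to apply"
    return "offsets reset"
```

With no arguments the sketch reports a dry run; with `--execute` it applies the reset.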



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4063: MINOR: improve Store parameter checks

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4063


---


[GitHub] kafka pull request #4068: MINOR: Update JavaDoc to use new API

2017-10-12 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/4068

MINOR: Update JavaDoc to use new API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka MINOR_fix_java_doc_example_for_1_0_API

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4068.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4068


commit e80b72f82098e3bec8096ace30c2217b796c6aa7
Author: Bill Bejeck 
Date:   2017-10-12T14:47:45Z

MINOR: Update JavaDoc to use new API




---


[GitHub] kafka pull request #4066: MINOR: Don't register signal handlers if running o...

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4066


---


[GitHub] kafka pull request #4067: MINOR: Merge script improvements

2017-10-12 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4067

MINOR: Merge script improvements

- Remove "list commits" since we never use it
- Fix release branch detection to just look for branches that start with digits
- Make script executable
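
The branch-detection change described above can be sketched as a simple predicate. This is a hypothetical helper written for illustration, not the actual merge script code:

```python
import re

def is_release_branch(name):
    # Assumption from the PR description: release branches are named
    # like "0.11.0" or "1.0", i.e. they start with a digit.
    return re.match(r"\d", name) is not None
```

So `1.0` and `0.11.0` are detected as release branches, while `trunk` or a feature branch like `merge-script-improvements` are not.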

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka merge-script-improvements

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4067.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4067






---


Build failed in Jenkins: kafka-1.0-jdk7 #29

2017-10-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Use port 0 in ResetIntegrationWithSslTest

--
[...truncated 1.81 MB...]

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED


Build failed in Jenkins: kafka-trunk-jdk7 #2884

2017-10-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Use port 0 in ResetIntegrationWithSslTest

--
[...truncated 369.96 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 

[GitHub] kafka pull request #4055: MINOR: Update `config/consumer.properties` to have...

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4055


---


[GitHub] kafka pull request #4065: MINOR: Use OS-assigned port, in case 9092 is alrea...

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4065


---


[GitHub] kafka pull request #4052: KAFKA-6046 DeleteRecordsRequest to a non-leader sh...

2017-10-12 Thread tedyu
Github user tedyu closed the pull request at:

https://github.com/apache/kafka/pull/4052


---


Re: [VOTE] 1.0.0 RC0

2017-10-12 Thread Ismael Juma
See inline.

On Thu, Oct 12, 2017 at 6:43 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:
>
>
> [2017-10-11 21:45:11,642] FATAL  (kafka.Kafka$)
> java.lang.IllegalArgumentException: Unknown signal: HUP
> at sun.misc.Signal.<init>(Unknown Source)
> at kafka.Kafka$.registerHandler$1(Kafka.scala:67)
> at kafka.Kafka$.registerLoggingSignalHandler(Kafka.scala:73)
> at kafka.Kafka$.main(Kafka.scala:82)
> at kafka.Kafka.main(Kafka.scala)
>

https://github.com/apache/kafka/pull/4066

Ismael


[GitHub] kafka pull request #4066: MINOR: Don't register signal handlers if running o...

2017-10-12 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4066

MINOR: Don't register signal handlers if running on Windows

Also remove stray printStackTrace in test.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka dont-register-signal-handler-windows

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4066.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4066


commit a0f21badac9794b0dc456df7a5c5d15c4bedd8f8
Author: Ismael Juma 
Date:   2017-10-12T12:02:09Z

MINOR: Don't register signal handlers if running on Windows

Also remove stray printStackTrace in test.




---


[GitHub] kafka pull request #4065: MINOR: Use OS-assigned port, in case 9092 is alrea...

2017-10-12 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/4065

MINOR: Use OS-assigned port, in case 9092 is already bound

I found this by running the tests while I happened to have a kafka broker 
running.

@mjsax please could you review?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka MINOR-random-port

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4065.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4065


commit f5cea3d5ba7011eb2909008ab6dd2b80fc261f8f
Author: Tom Bentley 
Date:   2017-10-12T11:31:56Z

MINOR: Use OS-assigned port, in case 9092 is already bound




---


Jenkins build is back to normal : kafka-trunk-jdk8 #2130

2017-10-12 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk7 #2882

2017-10-12 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6053) NoSuchMethodError when creating ProducerRecord in upgrade system tests

2017-10-12 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6053.

   Resolution: Fixed
Fix Version/s: 1.1.0
   1.0.0

> NoSuchMethodError when creating ProducerRecord in upgrade system tests
> --
>
> Key: KAFKA-6053
> URL: https://issues.apache.org/jira/browse/KAFKA-6053
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0, 1.1.0
>
>
> This patch https://github.com/apache/kafka/pull/4029 used a new constructor 
> for {{ProducerRecord}} which doesn't exist in older clients. Hence system 
> tests which use older clients fail with: 
> {noformat}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> org.apache.kafka.clients.producer.ProducerRecord.<init>(Ljava/lang/String;Ljava/lang/Integer;Ljava/lang/Long;Ljava/lang/Object;Ljava/lang/Object;)V
> at 
> org.apache.kafka.tools.VerifiableProducer.send(VerifiableProducer.java:232)
> at 
> org.apache.kafka.tools.VerifiableProducer.run(VerifiableProducer.java:462)
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:500)
> {"timestamp":1507711495458,"name":"shutdown_complete"}
> {"timestamp":1507711495459,"name":"tool_data","sent":0,"acked":0,"target_throughput":1,"avg_throughput":0.0}
> amehta-macbook-pro:worker6 apurva$
> {noformat}
> A trivial fix is to only use the new constructor if a message create time is 
> explicitly passed to the VerifiableProducer, since older versions of the test 
> would never use it.
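
The fix described above (fall back to the old record form unless a create time was explicitly passed) can be sketched in plain Python. Tuples stand in for ProducerRecord here; this is an illustration of the compatibility pattern, not the actual VerifiableProducer code:

```python
def build_record(topic, key, value, partition=None, create_time=None):
    # Only use the newer five-field form when a create time is given,
    # so code paths exercised by older clients never touch it.
    if create_time is not None:
        return (topic, partition, create_time, key, value)
    return (topic, key, value)
```

Callers that never pass a create time keep producing old-style records, which is why older system-test clients stop hitting the missing constructor.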



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4058: MINOR: Fixed format string

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4058


---


[GitHub] kafka pull request #4062: KAFKA-6055: Fix a typo in JVM configuration of Win...

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4062


---


Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-10-12 Thread Edoardo Comar
Thanks Tom. With the last additions (changes to the protocol) it now 
supersedes KIP-170.

+1 non-binding
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Tom Bentley 
To: dev@kafka.apache.org
Date:   11/10/2017 09:21
Subject:[VOTE] KIP-201: Rationalising policy interfaces



I would like to start a vote on KIP-201, which proposes to replace the
existing policy interfaces with a single new policy interface that also
extends policy support to cover new and existing APIs in the AdminClient.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-201%3A+Rationalising+Policy+interfaces


Thanks for your time.

Tom



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


[GitHub] kafka pull request #4057: KAFKA-6053: Fix NoSuchMethodError when creating Pr...

2017-10-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4057


---


Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-10-12 Thread Mickael Maison
Thanks for driving this!
+1 (non binding)

On Wed, Oct 11, 2017 at 9:21 AM, Tom Bentley  wrote:
> I would like to start a vote on KIP-201, which proposes to replace the
> existing policy interfaces with a single new policy interface that also
> extends policy support to cover new and existing APIs in the AdminClient.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-201%3A+Rationalising+Policy+interfaces
>
> Thanks for your time.
>
> Tom


Kafka Connect Source questions

2017-10-12 Thread Stephane Maarek
Hi,

 

I had a look at the Connect Source Worker code and have two questions:
1. When a Source Task commits offsets, does it perform compaction / optimisation 
before sending them off? E.g. I read from 1 source partition, and I read 1000 
messages. Will the offset flush send 1000 messages to the offset storage, or 
just 1 (the last one)?
2. Why is WorkerSourceTask trying to flush outstanding messages before 
committing the offsets (cf 
https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L328
 )? I would expect the commit to cover only the offsets of the messages we know 
for sure have been flushed at the moment the commit is requested. That would 
avoid a massive timeout when the source task pulls a lot of messages and the 
producer is overwhelmed / can't complete the message flush within the 5-second 
timeout.
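
The compaction asked about in the first question can be sketched as a keyed overwrite: keep only the latest offset per source partition. This models the desired behavior under that assumption; it is not a claim about what WorkerSourceTask actually does.

```python
def compact_offsets(records):
    # Keep only the latest offset per source partition: later entries
    # for the same partition overwrite earlier ones, so a flush of
    # 1000 reads from one partition would send a single offset.
    latest = {}
    for partition, offset in records:
        latest[partition] = offset
    return latest
```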
 

Thanks a lot for the responses. I may open JIRAs based on the answers of the 
questions, if that would help bring some performance improvements. 

 

Stephane