[GitHub] kafka-site issue #68: Docs TOC adjustments

2017-07-21 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/68
  
LGTM. Merged to asf-site.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka-site pull request #68: Docs TOC adjustments

2017-07-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/68




Re: Using JMXMP to access Kafka metrics

2017-07-21 Thread Fernando Vega
We use jmxtrans and it works pretty well.


*Fernando Vega*
Sr. Operations Engineer
*cell* (415) 810-0242
901 Marshall St, Suite 200, Redwood City, CA 94063


turn.com    |   @TurnPlatform



On Wed, Jul 19, 2017 at 4:05 PM, Vijay Prakash <
vijay.prak...@microsoft.com.invalid> wrote:

> Jolokia is not really an option for me although I've looked at it. I just
> want to know if there's some way to enable using JMXMP to access the
> metrics pushed to JMX.
>
> -Original Message-
> From: Svante Karlsson [mailto:svante.karls...@csi.se]
> Sent: Wednesday, July 19, 2017 12:24 PM
> To: us...@kafka.apache.org
> Cc: dev@kafka.apache.org
> Subject: Re: Using JMXMP to access Kafka metrics
>
> I've used jolokia which gets JMX metrics without RMI (actually json over
> http)
> https://jolokia.org/
>
> Integrates nicely with telegraf (and influxdb)
>
> 2017-07-19 20:47 GMT+02:00 Vijay Prakash <
> vijay.prak...@microsoft.com.invalid>:
>
> > Hey,
> >
> > Is there a way to use JMXMP instead of RMI to access Kafka metrics
> > through JMX? I tried creating a JMXMP JMXConnector but the connect
> > attempt just hangs forever.
> >
> > Thanks,
> > Vijay
> >
>
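For anyone following the JMXMP question above, here is a minimal sketch of what the connection attempt looks like (class, host, and port names are mine, purely illustrative). The key caveat: the JMXMP protocol handler lives in the optional JMX Remote API jar (jmxremote_optional.jar), which must be on the classpath of both the client and the target JVM. A stock Kafka broker with JMX_PORT set only runs an RMI connector server, which is consistent with a plain JMXMP connect attempt hanging as reported.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxmpSketch {

    // Build a JMXMP service URL; contrast with the RMI form
    // "service:jmx:rmi:///jndi/rmi://host:port/jmxrmi" that a stock
    // Kafka broker exposes when JMX_PORT is set.
    static JMXServiceURL jmxmpUrl(String host, int port) throws Exception {
        return new JMXServiceURL("service:jmx:jmxmp://" + host + ":" + port);
    }

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = jmxmpUrl("broker-host", 9999);
        // URL construction does not need the optional jar; only connecting does.
        System.out.println(url.getProtocol());  // jmxmp

        // The connect call below only succeeds if the target JVM actually runs
        // a JMXMP connector server; against a broker that speaks RMI only it
        // cannot work, matching the hang described in the thread.
        if (args.length > 0 && "--connect".equals(args[0])) {
            try (JMXConnector c = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = c.getMBeanServerConnection();
                ObjectName name = new ObjectName(
                        "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
                System.out.println(mbsc.getAttribute(name, "OneMinuteRate"));
            }
        }
    }
}
```

jmxtrans and jolokia, mentioned above, sidestep the protocol question: jmxtrans polls over standard remote JMX/RMI, while jolokia runs as an in-process agent and serves the metrics as JSON over HTTP.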


Jenkins build is back to normal : kafka-trunk-jdk8 #1842

2017-07-21 Thread Apache Jenkins Server
See 




[GitHub] kafka-site pull request #68: Docs TOC adjustments

2017-07-21 Thread derrickdoo
GitHub user derrickdoo opened a pull request:

https://github.com/apache/kafka-site/pull/68

Docs TOC adjustments

Docs TOC updates for Streams

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/derrickdoo/kafka-site docs-toc-adjustments

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/68.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #68


commit 4144bb86f6c0f60f4d8c2ba06dd63fef00871908
Author: Derrick Or 
Date:   2017-07-18T00:17:42Z

updated docs nav and docs toc

commit 030c30a1ba10a18f63a66dfb32a1071b5e52cf89
Author: Derrick Or 
Date:   2017-07-22T04:17:48Z

add separate section for Streams API in docs toc






[GitHub] kafka pull request #3564: MINOR: Don't imply that ConsumerGroupCommand won't...

2017-07-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3564




Build failed in Jenkins: kafka-trunk-jdk8 #1841

2017-07-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5481: ListOffsetResponse isn't logged in the right way with trace

--
[...truncated 2.64 MB...]
org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorNameConflictsWithWorkerGroupId STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorNameConflictsWithWorkerGroupId PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRequestProcessingOrder STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRequestProcessingOrder PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToLeader STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testTaskConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinLeaderCatchUpFails PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED


[GitHub] kafka pull request #3540: Streams landing page

2017-07-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3540




Jenkins build is back to normal : kafka-0.10.2-jdk7 #187

2017-07-21 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-0.10.1-jdk7 #126

2017-07-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #2555

2017-07-21 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-5481: ListOffsetResponse isn't logged in the right way with trace

--
[...truncated 969.49 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED
ERROR: Could not install GRADLE_3_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:886)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:419)
at 

[GitHub] kafka pull request #3383: KAFKA-5481: ListOffsetResponse isn't logged in the...

2017-07-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3383




Build failed in Jenkins: kafka-trunk-jdk7 #2554

2017-07-21 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Added safe deserialization implementation

--
[...truncated 976.96 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED


Re: [DISCUSS] 2017 October release planning and release version

2017-07-21 Thread Sriram Subramanian
+1!

On Fri, Jul 21, 2017 at 12:35 PM, Jay Kreps  wrote:

> 1.0! Let's do it!
>
> -Jay
>
> On Tue, Jul 18, 2017 at 3:36 PM, Guozhang Wang  wrote:
>
> > Hi all,
> >
> > With 0.11.0.0 out of the way, I would like to volunteer to be the release
> > manager for our next time-based feature release. See
> > https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan
> > if you missed previous communication on time-based releases or need a
> > reminder.
> >
> > I put together a draft release plan with October 2017 as the release
> > month (as previously agreed) and a list of KIPs that have already been
> > voted:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2017.Oct
> > As of today we already have 10 KIPs voted, including 2 merged and 3 with
> > PRs under review. As we start the process, more KIPs are expected to be
> > added until the KIP freeze date.
> >
> > In addition to the current release plan, I would also like to propose
> > setting the release version to 1.0.0. More specifically, we will bump the
> > major version from 0.11 to 1.0 and change the semantics of the release
> > digits to:
> >
> > major.minor.bugfix[.release-candidate]
> >
> > to be better aligned with software versioning (https://en.wikipedia.org/
> > wiki/Software_versioning). Moving forward we can use three digits instead
> > of four in most places that do not need to indicate the rc number. Here
> > is my motivation:
> >
> > 1) Kafka has evolved significantly from its first Apache release, 0.7.0
> > (incubating), as a pub-sub messaging system into a distributed streaming
> > platform for publishing, storing, and processing real-time data streams,
> > with the addition of replication (0.8.0), quota / security support for
> > multi-tenancy (0.8.2, 0.9.0), the Connect and Streams APIs (0.9.0,
> > 0.10.0), and most recently exactly-once semantics (0.11.0); I think now
> > is a good time to mark the release as a major milestone in the evolution
> > of Apache Kafka.
> >
> > 2) Some people believe 0.x means that the software is immature or not
> > stable, or that the public APIs may be subject to incompatible change,
> > despite the fact that Kafka has been widely adopted in production and the
> > community has made a great effort to maintain backward compatibility.
> > Making Kafka 1.x will help correct that perception when promoting the
> > project.
> >
> > 3) Having three digits, as in "major.minor.bugfix", is more natural from
> > a software versioning point of view and aligned with other open source
> > projects as well.
> >
> > How do people feel about 1.0.0.x as the next Kafka version? Please share
> > your thoughts.
> >
> > -- Guozhang
> >
>
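The proposed scheme can be made concrete with a small parsing sketch (the class and its fields are mine, purely illustrative; only the "major.minor.bugfix[.release-candidate]" shape comes from the proposal quoted above):

```java
public class KafkaVersion {
    final int major, minor, bugfix;

    // Parses "major.minor.bugfix" with an optional trailing
    // release-candidate component, e.g. "1.0.0" or "1.0.0.rc1";
    // anything after the third digit is ignored.
    KafkaVersion(String v) {
        String[] parts = v.split("\\.");
        if (parts.length < 3) {
            throw new IllegalArgumentException("expected major.minor.bugfix: " + v);
        }
        major = Integer.parseInt(parts[0]);
        minor = Integer.parseInt(parts[1]);
        bugfix = Integer.parseInt(parts[2]);
    }

    @Override
    public String toString() {
        // Three digits suffice everywhere the rc number is not needed.
        return major + "." + minor + "." + bugfix;
    }

    public static void main(String[] args) {
        System.out.println(new KafkaVersion("1.0.0"));      // 1.0.0
        System.out.println(new KafkaVersion("1.0.0.rc1"));  // 1.0.0
    }
}
```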


[GitHub] kafka pull request #3563: Added safe deserialization impl

2017-07-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3563




[GitHub] kafka pull request #3528: Kafka-4763 (Used for triggering test only)

2017-07-21 Thread lindong28
Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/3528




[GitHub] kafka pull request #3564: MINOR: Don't imply that ConsumerGroupCommand won't...

2017-07-21 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/3564

MINOR: Don't imply that ConsumerGroupCommand won't work with non-Java 
clients



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
minor-consumer-group-command-message

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3564.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3564


commit d32d92b06877012259332bfba5934babe7e31ec4
Author: Ewen Cheslack-Postava 
Date:   2017-07-21T22:47:08Z

MINOR: Don't imply that ConsumerGroupCommand won't work with non-Java 
clients






[jira] [Created] (KAFKA-5626) Producer should be able to negotiate ProduceRequest version with broker

2017-07-21 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-5626:
---

 Summary: Producer should be able to negotiate ProduceRequest 
version with broker
 Key: KAFKA-5626
 URL: https://issues.apache.org/jira/browse/KAFKA-5626
 Project: Kafka
  Issue Type: Improvement
Reporter: Dong Lin
Assignee: Dong Lin






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Kafka doc : official repo or web site repo ?

2017-07-21 Thread Guozhang Wang
Ewen, Paolo:

There is already a kafka-repo PR from Derrick, who also did the kafka-site
PR: https://github.com/apache/kafka/pull/3540

We usually push to kafka first and then piggy-back the docs onto kafka-site
upon releases; for changes that we'd like to publish in the middle of a
release cycle we do double PRs and two commits.


Guozhang

On Fri, Jul 21, 2017 at 2:19 PM, Ewen Cheslack-Postava 
wrote:

> Paolo,
>
> Good question and good catch! I think those updates caused things to
> diverge because the author needed to make updates to both the regular docs
> and some of the styling assets that are only in the kafka-site repo. I've
> pinged the folks that worked on it to make sure the changes make it
> properly back into the main Kafka repo.
>
> -Ewen
>
> On Fri, Jul 21, 2017 at 6:23 AM, Paolo Patierno 
> wrote:
>
> > Hi Ewen,
> >
> >
> > thank you very much for your answer. I know that you guys are really busy
> > and I pushed my PRs during the 0.11.0 release cycle so ... I can
> > understand, no worries ;)
> >
> >
> > Regarding the documentation happy to know that I pushed in the right repo
> > but ...
> >
> > ... regarding the shiny new Kafka Streams documentation, I see that the
> > main page on the web site has the following phrase at the top: "The
> > easiest way to write mission-critical ..."
> >
> > If I search for it in the Kafka repo I can't find it (I imagine it
> > should be in docs/streams/index.html ... but it's not there).
> >
> > Instead, if I search for the same phrase in the kafka-site repo I find
> > it in 0110/streams/index.html.
> >
> >
> > So it seems the doc is in the kafka-site repo but not in the kafka
> > repo ... where am I wrong?
> >
> >
> > Thanks,
> >
> > Paolo.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Windows Embedded & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Ewen Cheslack-Postava 
> > Sent: Friday, July 21, 2017 2:57 AM
> > To: dev@kafka.apache.org
> > Subject: Re: Kafka doc : official repo or web site repo ?
> >
> > Hi Paolo,
> >
> > Docs are a little bit confusing. Some of the docs are version-specific
> and
> > those live in the main repo and are managed by the same branches that
> > releases are managed on. Then there is
> > https://github.com/apache/kafka-site, which contains the actual contents
> > of the site. During a release, the current snapshot of version-specific
> > docs is copied into the kafka-site repo (as the current version, or a
> > historical version as you would see at, e.g.,
> > http://kafka.apache.org/0102/documentation.html).
> >
> > For your PR, you submitted to the correct location. Sorry for the delay
> > in review, we tend to get a little backlogged -- not enough committers to
> > review and commit the volume of PRs we are getting. I've reviewed and
> > committed that first one and I see you have a bunch of others that we'll
> > try to get to as well. Once you get a feel for which reviewers can/will
> > review different parts of the code, you'll be able to more easily tag
> > committers for review and commit (in the case of Connect, myself
> > (@ewencp), @hachikuji, and @gwenshap are good targets).
> >
> > Thanks for contributing! I especially appreciate docs fixes :)
> >
> > -Ewen
> >
> > On Thu, Jul 13, 2017 at 1:51 AM, Paolo Patierno 
> > wrote:
> >
> > > Hi guys,
> > >
> > > I have a big doubt about where the docs live and how to update them.
> > >
> > > I see the docs folder in the Kafka repo (where I have had a Kafka
> > > Connect PR open for a month, not yet merged), but then I see the Kafka
> > > web site repo where the same docs live.
> > >
> > > Reading the new Kafka Streams doc, I noticed that it seems to live only
> > > in the Kafka web site repo and not in the Kafka repo.
> > >
> > >
> > > Can you clarify where to submit PRs for the docs? To which repo?
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Windows Embedded & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request #3563: Added safe deserialization impl

2017-07-21 Thread rhauch
GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/3563

Added safe deserialization impl



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka deserialization-validation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3563.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3563


commit fcd4d5f069aa084a6bab9dea4f81de0d63bcfbe8
Author: Hooman Broujerdi 
Date:   2017-07-20T02:50:17Z

Added safe deserialization Impl






Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-21 Thread Vahid S Hashemian
Hi Jason,

Yes, I meant as a separate KIP.
I can start a KIP for that sometime soon.

Thanks.
--Vahid




From:   Jason Gustafson 
To: dev@kafka.apache.org
Cc: Kafka Users 
Date:   07/21/2017 11:37 AM
Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for 
ConsumerGroupCommand



>
> Regarding your comment about the current limitation on the information
> returned for a consumer group, do you think it's worth expanding the API
> to return some additional info (e.g. generation id, group leader, ...)?


Seems outside the scope of this KIP. Up to you, but I'd probably leave it
for future work.

-Jason

On Thu, Jul 20, 2017 at 4:21 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Jason,
>
> Regarding your comment about the current limitation on the information
> returned for a consumer group, do you think it's worth expanding the API
> to return some additional info (e.g. generation id, group leader, ...)?
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Jason Gustafson 
> To: Kafka Users 
> Cc: dev@kafka.apache.org
> Date:   07/19/2017 01:46 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hey Vahid,
>
> Thanks for the updates. Looks pretty good. A couple comments:
>
> 1. For the --state option, should we use the same column-oriented format
> as
> we use for the other options? I realize there would only be one row, but
> the inconsistency is a little vexing. Also, since this tool is working
> only
> with consumer groups, perhaps we can leave out "protocol type" and use
> "assignment strategy" in place of "protocol"? It would be nice to also
> include the group generation, but it seems we didn't add that to the
> DescribeGroup response. Perhaps we could also include a count of the
> number
> of members?
> 2. It's a little annoying that --subscription and --members share so much
> in common. Maybe we could drop --subscription and use a --verbose flag to
> control whether or not to include the subscription and perhaps the
> assignment as well? Not sure if that's more annoying or less, but maybe a
> generic --verbose will be useful in other contexts.
>
> As for your question on whether we need the --offsets option at all, I
> don't have a strong opinion, but it seems to make the command semantics a
> little more consistent.
>
> -Jason
>
> On Tue, Jul 18, 2017 at 12:56 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Jason,
> >
> > I updated the KIP based on your earlier suggestions:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
> > The only thing I am wondering at this point is whether it's worth having
> > a `--describe --offsets` option that behaves exactly like `--describe`.
> >
> > Thanks.
> > --Vahid
> >
> >
> >
> > From:   "Vahid S Hashemian" 
> > To: dev@kafka.apache.org
> > Cc: Kafka Users 
> > Date:   07/17/2017 03:24 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> > ConsumerGroupCommand
> >
> >
> >
> > Hi Jason,
> >
> > Thanks for your quick feedback. Your suggestions seem reasonable.
> > I'll start updating the KIP accordingly and will send out another note
> > when it's ready.
> >
> > Regards.
> > --Vahid
> >
> >
> >
> >
> > From:   Jason Gustafson 
> > To: dev@kafka.apache.org
> > Cc: Kafka Users 
> > Date:   07/17/2017 02:11 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> > ConsumerGroupCommand
> >
> >
> >
> > Hey Vahid,
> >
> > Hmm... If possible, it would be nice to avoid cluttering the default
> > option
> > too much, especially if it is information which is going to be the same
> > for
> > all members (such as the generation). My preference would be to use the
> > --state option that you've suggested for that info so that we can
> > represent
> > it more concisely.
> >
> > The reason I prefer the current output is that it is clear every entry
> > corresponds to a partition for which we have a committed offset. Entries
> > like
> > this look strange:
> >
> > TOPIC  PARTITION  CURRENT-OFFSET 
LOG-END-OFFSET
> > LAGCONSUMER-ID
> > HOST   CLIENT-ID
> > -  -  -   -
> > -  consumer4-e173f09d-c761-4f4e-95c7-6fb73bb8fbff
> > /127.0.0.1
> > consumer4
> > -  -  -   -
> > -  consumer5-7b80e428-f8ff-43f3-8360-afd1c8ba43ea
> > /127.0.0.1
> > consumer5
> >
> > It makes me think that the consumers have committed offsets for an
> unknown
> > partition. The --members option seems like a clearer way to communicate
> > the
> > fact that there are some members with no assigned 
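As an illustration of the column-oriented output being debated above, a renderer for such rows might look like this (a sketch only; the column widths, the `Row` shape, and the helper names are assumptions, not the actual ConsumerGroupCommand code):

```java
import java.util.List;

class GroupDescribeView {
    // One row of the --describe output; null offsets model members that
    // currently have no assigned partitions, rendered as "-".
    static class Row {
        final String topic;
        final Integer partition;
        final Long currentOffset;
        final Long logEndOffset;
        final String consumerId, host, clientId;

        Row(String topic, Integer partition, Long currentOffset, Long logEndOffset,
            String consumerId, String host, String clientId) {
            this.topic = topic;
            this.partition = partition;
            this.currentOffset = currentOffset;
            this.logEndOffset = logEndOffset;
            this.consumerId = consumerId;
            this.host = host;
            this.clientId = clientId;
        }
    }

    // Render null values as "-" like in the sample output quoted above.
    static String dash(Object v) { return v == null ? "-" : v.toString(); }

    static String render(List<Row> rows) {
        String fmt = "%-8s %-10s %-15s %-15s %-8s %-46s %-12s %s%n";
        StringBuilder sb = new StringBuilder(String.format(fmt, "TOPIC", "PARTITION",
                "CURRENT-OFFSET", "LOG-END-OFFSET", "LAG", "CONSUMER-ID", "HOST", "CLIENT-ID"));
        for (Row r : rows) {
            // Lag is only defined when both offsets are known.
            String lag = (r.currentOffset == null || r.logEndOffset == null)
                    ? "-" : String.valueOf(r.logEndOffset - r.currentOffset);
            sb.append(String.format(fmt, dash(r.topic), dash(r.partition),
                    dash(r.currentOffset), dash(r.logEndOffset), lag,
                    r.consumerId, r.host, r.clientId));
        }
        return sb.toString();
    }
}
```

Members with no assigned partitions simply carry null fields and render as dash-filled rows, matching the sample output quoted in the thread.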

Re: Kafka doc : official repo or web site repo ?

2017-07-21 Thread Ewen Cheslack-Postava
Paolo,

Good question and good catch! I think those updates caused things to
diverge because the author needed to make updates to both the regular docs
and some of the styling assets that are only in the kafka-site repo. I've
pinged the folks that worked on it to make sure the changes make it
properly back into the main Kafka repo.

-Ewen

On Fri, Jul 21, 2017 at 6:23 AM, Paolo Patierno  wrote:

> Hi Ewen,
>
>
> thank you very much for your answer. I know that you guys are really busy
> and I pushed my PRs during the 0.11.0 release cycle so ... I can
> understand, no worries ;)
>
>
> Regarding the documentation, I'm happy to know that I pushed to the right
> repo, but ...
>
> ... regarding the Kafka Streams new shiny documentation I see that the
> main page on the web site has the following phrase on the top "The easiest
> way to write mission-critical ..."
>
> If I try to search it in the Kafka repo I can't find it (I imagine that it
> should be in docs/streams/index.html ... but it's not there).
>
> Instead, if I search for the same phrase in the kafka-site repo I can find
> it in 0110/streams/index.html.
>
>
> So it seems that the doc is in the kafka-site repo but not in the kafka
> repo ... where am I wrong ?
>
>
> Thanks,
>
> Paolo.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Ewen Cheslack-Postava 
> Sent: Friday, July 21, 2017 2:57 AM
> To: dev@kafka.apache.org
> Subject: Re: Kafka doc : official repo or web site repo ?
>
> Hi Paolo,
>
> Docs are a little bit confusing. Some of the docs are version-specific and
> those live in the main repo and are managed by the same branches that
> releases are managed on. Then there is the
> https://github.com/apache/kafka-site which contains the actual contents of
> the site. During release, the current snapshot of version-specific docs are
> copied into the kafka-site repo (as the current version, or a historical
> version as you would see at, e.g.,
> http://kafka.apache.org/0102/documentation.html).
>
> For your PR, you submitted to the correct location. Sorry for the delay in
> review, we tend to get a little backlogged -- not enough committers to
> review and commit the volume of PRs we are getting. I've reviewed and
> committed that first one and I see you have a bunch of others that we'll
> try to get to as well. Once you get a feel for which reviewers can/will
> review different parts of the code, you'll be able to more easily tag
> committers for review and commit (in the case of Connect, myself (@ewencp),
> @hachikuji, and @gwenshap are good targets).
>
> Thanks for contributing! I especially appreciate docs fixes :)
>
> -Ewen
>
> On Thu, Jul 13, 2017 at 1:51 AM, Paolo Patierno 
> wrote:
>
> > Hi guys,
> >
> > I have a big doubt about where the doc lives and what to do to update it.
> >
> > I see the docs folder in the Kafka repo (but even there I have a PR
> opened
> > for one month on Kafka Connect not yet merged) but then I see the Kafka
> web
> > site repo where there is the same doc.
> >
> > Reading the new Kafka Stream doc I noticed that it seems to be only in
> the
> > Kafka web site repo and not in the Kafka repo.
> >
> >
> > Can you clarify where to submit PRs for the docs? On which side?
> >
> >
> > Thanks,
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Windows Embedded & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
>


Re: [DISCUSS] KIP-180: Add a broker metric specifying the number of consumer group rebalances in progress

2017-07-21 Thread Colin McCabe
That's a good point.  I revised the KIP to add metrics for all the group
states.

best,
Colin


On Fri, Jul 21, 2017, at 12:08, Guozhang Wang wrote:
> Ah, that's right Jason.
> 
> With that I can be convinced to add one metric per each state.
> 
> Guozhang
> 
> On Fri, Jul 21, 2017 at 11:44 AM, Jason Gustafson 
> wrote:
> 
> > >
> > > "Dead" and "Empty" states are transient: groups usually only stay in
> > > this
> > > state for a short while and are then deleted or transition to other
> > > states.
> >
> >
> > This is not strictly true for the "Empty" state which we also use to
> > represent simple groups which only use the coordinator to store offsets. I
> > think we may as well cover all the states if we're going to cover any of
> > them specifically.
> >
> > -Jason
> >
> >
> >
> > On Fri, Jul 21, 2017 at 9:45 AM, Guozhang Wang  wrote:
> >
> > > My two cents:
> > >
> > > "Dead" and "Empty" states are transient: groups usually only stay in
> > > this
> > > state for a short while and are then deleted or transition to other
> > > states.
> > >
> > > Since we have the existing "NumGroups" metric, `NumGroups -
> > > NumGroupsRebalancing - NumGroupsAwaitingSync` should cover the above
> > > three, where "Stable"
> > > should be contributing most of the counts: If we have a bug that causes
> > the
> > > num.Dead / Empty to keep increasing, then we would observe `NumGroups`
> > keep
> > > increasing which should be sufficient for alerting. And trouble shooting
> > of
> > > the issue could be relying on the log4j.
> > >
> > > Guozhang
> > >
> > > On Fri, Jul 21, 2017 at 7:19 AM, Ismael Juma  wrote:
> > >
> > > > Thanks for the KIP, Colin. This will definitely be useful. One
> > question:
> > > > would it be useful to have a metric for the number of groups in
> > each
> > > > possible state? The KIP suggests "PreparingRebalance" and
> > "AwaitingSync".
> > > > That leaves "Stable", "Dead" and "Empty". Are those not useful?
> > > >
> > > > Ismael
> > > >
> > > > On Thu, Jul 20, 2017 at 6:52 PM, Colin McCabe 
> > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I posted "KIP-180: Add a broker metric specifying the number of
> > > consumer
> > > > > group rebalances in progress" for discussion:
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 180%3A+Add+a+broker+metric+specifying+the+number+of+
> > > > > consumer+group+rebalances+in+progress
> > > > >
> > > > > Check it out.
> > > > >
> > > > > cheers,
> > > > > Colin
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> 
> 
> 
> -- 
> -- Guozhang
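The per-state bookkeeping discussed in this thread could be sketched as follows (illustrative only; the states follow the coordinator states named above, but the class and method names are assumptions, not the KIP's implementation):

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

class GroupStateMetrics {
    enum GroupState { PREPARING_REBALANCE, AWAITING_SYNC, STABLE, DEAD, EMPTY }

    private final Map<GroupState, AtomicInteger> counts = new EnumMap<>(GroupState.class);

    GroupStateMetrics() {
        for (GroupState s : GroupState.values()) counts.put(s, new AtomicInteger());
    }

    // New groups start in EMPTY before any member joins.
    void groupCreated() { counts.get(GroupState.EMPTY).incrementAndGet(); }

    // Decrement the old state's gauge and increment the new one on every
    // transition, so each per-state count stays consistent.
    void onTransition(GroupState from, GroupState to) {
        counts.get(from).decrementAndGet();
        counts.get(to).incrementAndGet();
    }

    int count(GroupState s) { return counts.get(s).get(); }

    // The sum over all states matches the existing NumGroups metric.
    int totalGroups() {
        return counts.values().stream().mapToInt(AtomicInteger::get).sum();
    }
}
```

With a gauge per state, a leak of Dead or Empty groups shows up directly instead of only as a slow creep in the aggregate NumGroups count.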


[jira] [Created] (KAFKA-5625) Invalid subscription data may cause streams app to throw BufferUnderflowException

2017-07-21 Thread JIRA
Xavier Léauté created KAFKA-5625:


 Summary: Invalid subscription data may cause streams app to throw 
BufferUnderflowException
 Key: KAFKA-5625
 URL: https://issues.apache.org/jira/browse/KAFKA-5625
 Project: Kafka
  Issue Type: Bug
Reporter: Xavier Léauté
Priority: Minor


I was able to cause my streams app to crash with the following error when 
attempting to join the same consumer group with a rogue client.

At the very least I would expect streams to throw a {{TaskAssignmentException}} 
to indicate invalid subscription data.

{code}
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506)
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361)
at 
org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.decode(SubscriptionInfo.java:97)
at 
org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:302)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:365)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:512)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:93)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:462)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:445)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:798)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:778)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:168)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:358)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:574)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:545)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:519)
{code}
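The underflow occurs because the decoder reads from the buffer without checking how many bytes remain. A minimal sketch of the defensive check the reporter is asking for (the exception type mirrors the suggested `TaskAssignmentException`; the decode layout shown is an assumption, not the real `SubscriptionInfo` wire format):

```java
import java.nio.ByteBuffer;

class SafeSubscriptionDecoder {
    // Stand-in for Kafka Streams' TaskAssignmentException.
    static class TaskAssignmentException extends RuntimeException {
        TaskAssignmentException(String msg) { super(msg); }
    }

    // A rogue group member can send arbitrary subscription userdata, so
    // validate the buffer before reading the leading version int instead of
    // letting BufferUnderflowException escape to the assignor.
    static int decodeVersion(ByteBuffer data) {
        if (data == null || data.remaining() < Integer.BYTES) {
            throw new TaskAssignmentException(
                    "Unable to decode subscription data: insufficient bytes");
        }
        return data.getInt();
    }
}
```

The same length check would apply before every subsequent read, so malformed data from a rogue client fails with a meaningful error rather than a raw buffer exception.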



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] 2017 October release planning and release version

2017-07-21 Thread Guozhang Wang
That's fair enough too.

Guozhang

On Fri, Jul 21, 2017 at 12:13 PM, Ismael Juma  wrote:

> Yes, I agree that the choice of version number can be done separately.
> That's why I said I'd file a separate JIRA for the documentation
> improvements. Having said that, there are some expectations that people
> have for projects that have reached 1.0.0 and we should try to allocate
> time for the important ones.
>
> Ismael
>
> On 21 Jul 2017 8:07 pm, "Guozhang Wang"  wrote:
>
> > Thanks Ismael. I agree with you on all these points, and for some of
> these
> > points like 3) we never have a written-down policy though in practice we
> > tend to follow some patterns.
> >
> > To me deciding what's the version number of the next major release does
> not
> > necessarily mean we need now to change any of these or to set the hard
> > rules along with it; I'd like to keep them as two separate discussions as
> > they seem semi-orthogonal to me.
> >
> >
> > Guozhang
> >
> >
> > On Fri, Jul 21, 2017 at 8:44 AM, Ismael Juma  wrote:
> >
> > > On the topic of documentation, we should also document which releases
> are
> > > still supported and which are not. There a few factors to consider:
> > >
> > > 1. Which branches receive bug fixes. We typically backport fixes to the
> > two
> > > most recent stable branches (the most recent stable branch typically
> gets
> > > more backports than the older one).
> > >
> > > 2. Which branches receive security fixes. This could be the same as
> `1`,
> > > but we could attempt to backport more aggressively for security fixes
> as
> > > they tend to be rare (so far at least) and the impact could be severe.
> > >
> > > 3. The release policy for stable branches. We tend to stop releasing
> > from a
> > > given stable branch before we stop backporting fixes. Maybe that's OK,
> > but
> > > it would be good to document how we decide that a bug fix release is
> > > needed.
> > >
> > > 4. How long are direct upgrades supported for. During the time-based
> > > releases discussion, we agreed to support direct upgrades for 2 years.
> As
> > > it happens, direct upgrades from 0.8.2 to 1.0.0 would not be supported
> if
> > > we follow this strictly. Not clear if we want to do that.
> > >
> > > 5. How long are older clients supported for. Current brokers support
> > > clients all the way to 0.8.x.
> > >
> > > 6. How long are older brokers supported for. Current clients support
> > 0.10.x
> > > and newer brokers.
> > >
> > > 7. How long are message formats supported for. We never discussed
> this. I
> > > think 5 years would probably be the minimum.
> > >
> > > Ismael
> > >
> > > P.S. I'll file a JIRA to capture this information.
> > >
> > > On Wed, Jul 19, 2017 at 10:12 AM, Ismael Juma 
> wrote:
> > >
> > > > Hi Stevo,
> > > >
> > > > Thanks for your feedback. We should definitely do a better job of
> > > > documenting things. We basically follow semantic versioning, but it's
> > > > currently a bit confusing because:
> > > >
> > > > 1. There are 4 segments in the version. The "0." part should be
> ignored
> > > > when deciding what is major, minor and patch at the moment, but many
> > > people
> > > > don't know this. Once we move to 1.0.0, that problem goes away.
> > > >
> > > > 2. To know what is a public API, you must check the Javadoc (
> > > > https://kafka.apache.org/0110/javadoc/index.html?org/
> > > > apache/kafka/clients/consumer/KafkaConsumer.html). If it's not
> listed
> > > > there, it's not public API. Ideally, it would be obvious from the
> > package
> > > > name (i.e. there would be "internals" in the name), but we are not
> > there
> > > > yet. The exception are the old Scala APIs, but they have all been
> > > > deprecated and they will be removed eventually (the old Scala
> consumers
> > > > won't be removed until the June 2018 release at the earliest in order
> > to
> > > > give people time to migrate).
> > > >
> > > > 3. Even though we are following reasonably common practices, we
> haven't
> > > > documented them all in one place. It would be great to do it during
> the
> > > > next release cycle.
> > > >
> > > > A few comments below.
> > > >
> > > > On Wed, Jul 19, 2017 at 1:31 AM, Stevo Slavić 
> > wrote:
> > > >
> > > >> - APIs not labeled or labeled as stable
> > > >> -- change in major version is only one that can break backward
> > > >> compatibility (client APIs or behavior)
> > > >>
> > > >
> > > > To clarify, stable APIs should not be changed in an incompatible way
> > > > without a deprecation cycle. Independently of whether it's a major
> > > release
> > > > or not.
> > > >
> > > >
> > > >> -- change in minor version can introduce new features, but not break
> > > >> backward compatibility
> > > >> -- change in patch version, is for bug fixes only.
> > > >>
> > > >
> > > > Right, this has been the case for a while already. Also see
> annotations
> > > > below.
> > > >
> > > >
> 

Re: [DISCUSS] 2017 October release planning and release version

2017-07-21 Thread Jay Kreps
1.0! Let's do it!

-Jay

On Tue, Jul 18, 2017 at 3:36 PM, Guozhang Wang  wrote:

> Hi all,
>
> With 0.11.0.0 out of the way, I would like to volunteer to be the
> release manager
> for our next time-based feature release. See https://cwiki.apache.org/
> confluence/display/KAFKA/Time+Based+Release+Plan if you missed
> previous communication
> on time-based releases or need a reminder.
>
> I put together a draft release plan with October 2017 as the release month
> (as previously agreed) and a list of KIPs that have already been voted:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2017.Oct
> As of today we already have 10 KIPs voted, including 2 merged and 3 with
> PRs under review. As we start the process more KIPs are expected to be
> added until the KIP freeze date.
>
> In addition to the current release plan, I would also like to propose to
> set the release version to 1.0.0. More specifically, we will bump up the
> major version from 0.11 to 1.0, and change the semantics of release digits
> as:
>
> major.minor.bugfix[.release-candidate]
>
> To be better aligned with software versioning (https://en.wikipedia.org/
> wiki/Software_versioning). Moving forward we can use three digits instead
> of four in most places that do not require to indicate the rc number. Here
> is my motivation:
>
> 1) Kafka has significantly evolved from its first Apache release of 0.7.0
> (incubating) as a pub-sub messaging system into a distributed streaming
> platform that can enable publish / store / process real-time data streams,
> with the addition of replication (0.8.0), quota / security support for
> multi-tenancy (0.8.2, 0.9.0), Connect and Streams API (0.9.0, 0.10.0), and
> most recently the exactly-once support to have the desired
> semantics (0.11.0); I think now is a good time to mark the release as a
> major milestone in the evolution of Apache Kafka.
>
> 2) Some people believe 0.x means that the software is immature or not
> stable, or that the public APIs may be subject to incompatible change,
> regardless of the fact that Kafka has been widely adopted in production
> and the community has made a great effort to maintain backward
> compatibility. Making Kafka 1.x will help correct that perception.
>
> 3) Having three digits, as in "major.minor.bugfix", is more natural from a
> software versioning point of view and aligned with other open source
> projects as well.
>
> How do people feel about 1.0.0.x as the next Kafka version? Please share
> your thoughts.
>
> -- Guozhang
>
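The proposed `major.minor.bugfix` ordering can be illustrated with a small comparator (for illustration only; this is not Kafka's actual version-handling code, and an optional fourth release-candidate segment is simply ignored here):

```java
class KafkaVersion implements Comparable<KafkaVersion> {
    final int major, minor, bugfix;

    KafkaVersion(int major, int minor, int bugfix) {
        this.major = major;
        this.minor = minor;
        this.bugfix = bugfix;
    }

    // Parses "1.0.0"-style strings; any segments past the third (e.g. an
    // rc indicator) are ignored for ordering purposes.
    static KafkaVersion parse(String s) {
        String[] parts = s.split("\\.");
        return new KafkaVersion(Integer.parseInt(parts[0]),
                Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
    }

    // Lexicographic comparison: major first, then minor, then bugfix.
    @Override
    public int compareTo(KafkaVersion o) {
        if (major != o.major) return Integer.compare(major, o.major);
        if (minor != o.minor) return Integer.compare(minor, o.minor);
        return Integer.compare(bugfix, o.bugfix);
    }
}
```

Under this scheme 1.0.0 orders after 0.11.0 because the leading "0." is no longer part of the comparison semantics, which is exactly the confusion the proposal removes.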


Re: [DISCUSS] 2017 October release planning and release version

2017-07-21 Thread Ismael Juma
Yes, I agree that the choice of version number can be done separately.
That's why I said I'd file a separate JIRA for the documentation
improvements. Having said that, there are some expectations that people
have for projects that have reached 1.0.0 and we should try to allocate
time for the important ones.

Ismael

On 21 Jul 2017 8:07 pm, "Guozhang Wang"  wrote:

> Thanks Ismael. I agree with you on all these points, and for some of these
> points like 3) we never have a written-down policy though in practice we
> tend to follow some patterns.
>
> To me deciding what's the version number of the next major release does not
> necessarily mean we need now to change any of these or to set the hard
> rules along with it; I'd like to keep them as two separate discussions as
> they seem semi-orthogonal to me.
>
>
> Guozhang
>
>
> On Fri, Jul 21, 2017 at 8:44 AM, Ismael Juma  wrote:
>
> > On the topic of documentation, we should also document which releases are
> > still supported and which are not. There are a few factors to consider:
> >
> > 1. Which branches receive bug fixes. We typically backport fixes to the
> two
> > most recent stable branches (the most recent stable branch typically gets
> > more backports than the older one).
> >
> > 2. Which branches receive security fixes. This could be the same as `1`,
> > but we could attempt to backport more aggressively for security fixes as
> > they tend to be rare (so far at least) and the impact could be severe.
> >
> > 3. The release policy for stable branches. We tend to stop releasing
> from a
> > given stable branch before we stop backporting fixes. Maybe that's OK,
> but
> > it would be good to document how we decide that a bug fix release is
> > needed.
> >
> > 4. How long are direct upgrades supported for. During the time-based
> > releases discussion, we agreed to support direct upgrades for 2 years. As
> > it happens, direct upgrades from 0.8.2 to 1.0.0 would not be supported if
> > we follow this strictly. Not clear if we want to do that.
> >
> > 5. How long are older clients supported for. Current brokers support
> > clients all the way to 0.8.x.
> >
> > 6. How long are older brokers supported for. Current clients support
> 0.10.x
> > and newer brokers.
> >
> > 7. How long are message formats supported for. We never discussed this. I
> > think 5 years would probably be the minimum.
> >
> > Ismael
> >
> > P.S. I'll file a JIRA to capture this information.
> >
> > On Wed, Jul 19, 2017 at 10:12 AM, Ismael Juma  wrote:
> >
> > > Hi Stevo,
> > >
> > > Thanks for your feedback. We should definitely do a better job of
> > > documenting things. We basically follow semantic versioning, but it's
> > > currently a bit confusing because:
> > >
> > > 1. There are 4 segments in the version. The "0." part should be ignored
> > > when deciding what is major, minor and patch at the moment, but many
> > people
> > > don't know this. Once we move to 1.0.0, that problem goes away.
> > >
> > > 2. To know what is a public API, you must check the Javadoc (
> > > https://kafka.apache.org/0110/javadoc/index.html?org/
> > > apache/kafka/clients/consumer/KafkaConsumer.html). If it's not listed
> > > there, it's not public API. Ideally, it would be obvious from the
> package
> > > name (i.e. there would be "internals" in the name), but we are not
> there
> > > yet. The exception are the old Scala APIs, but they have all been
> > > deprecated and they will be removed eventually (the old Scala consumers
> > > won't be removed until the June 2018 release at the earliest in order
> to
> > > give people time to migrate).
> > >
> > > 3. Even though we are following reasonably common practices, we haven't
> > > documented them all in one place. It would be great to do it during the
> > > next release cycle.
> > >
> > > A few comments below.
> > >
> > > On Wed, Jul 19, 2017 at 1:31 AM, Stevo Slavić 
> wrote:
> > >
> > >> - APIs not labeled or labeled as stable
> > >> -- change in major version is only one that can break backward
> > >> compatibility (client APIs or behavior)
> > >>
> > >
> > > To clarify, stable APIs should not be changed in an incompatible way
> > > without a deprecation cycle. Independently of whether it's a major
> > release
> > > or not.
> > >
> > >
> > >> -- change in minor version can introduce new features, but not break
> > >> backward compatibility
> > >> -- change in patch version, is for bug fixes only.
> > >>
> > >
> > > Right, this has been the case for a while already. Also see annotations
> > > below.
> > >
> > >
> > >> - APIs labeled as evolving can be broken in backward incompatible way
> in
> > >> any release, but are assumed less likely to be broken compared to
> > unstable
> > >> APIs
> > >> - APIs labeled as unstable can be broken in backward incompatible way
> in
> > >> any release, major, minor or patch
> > >>
> > >
> > > The relevant annotations do explain this:
> > >
> > 

Re: [VOTE] KIP-167 (Addendum): Add interface for the state store restoration process

2017-07-21 Thread Guozhang Wang
+1

On Thu, Jul 20, 2017 at 11:00 PM, Matthias J. Sax 
wrote:

> +1
>
> On 7/20/17 4:22 AM, Bill Bejeck wrote:
> > Hi,
> >
> > After working on the PR for this KIP I discovered that we need to add an
> > additional parameter (TopicPartition) to the StateRestoreListener
> interface
> > methods.
> >
> > The addition of the TopicPartition is required as the
> StateRestoreListener
> > is for the entire application, thus all tasks with recovering state
> stores
> > call the same listener instance.  The TopicPartition is needed to
> > disambiguate the progress of the state store recovery.
> >
> > For those that have voted before, please review the updated KIP
> >  167:+Add+interface+for+the+state+store+restoration+process>
> > and
> > re-vote.
> >
> > Thanks,
> > Bill
> >
>
>


-- 
-- Guozhang
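Based on the addendum described above, the updated listener would look roughly like this (a sketch; `TopicPartition` is a minimal stand-in for `org.apache.kafka.common.TopicPartition`, and the method names are taken from the KIP draft rather than guaranteed final):

```java
import java.util.HashMap;
import java.util.Map;

class RestoreListenerSketch {
    // Minimal stand-in for org.apache.kafka.common.TopicPartition.
    record TopicPartition(String topic, int partition) {}

    // Every callback now carries the TopicPartition, so a single
    // application-wide listener can tell which store/partition's
    // restoration is progressing.
    interface StateRestoreListener {
        void onRestoreStart(TopicPartition tp, String storeName, long startOffset, long endOffset);
        void onBatchRestored(TopicPartition tp, String storeName, long batchEndOffset, long numRestored);
        void onRestoreEnd(TopicPartition tp, String storeName, long totalRestored);
    }

    // Example listener tracking restored record counts per partition.
    static class CountingListener implements StateRestoreListener {
        final Map<TopicPartition, Long> restored = new HashMap<>();

        public void onRestoreStart(TopicPartition tp, String store, long start, long end) {
            restored.put(tp, 0L);
        }

        public void onBatchRestored(TopicPartition tp, String store, long batchEnd, long n) {
            restored.merge(tp, n, Long::sum);
        }

        public void onRestoreEnd(TopicPartition tp, String store, long total) { }
    }
}
```

Without the TopicPartition parameter, concurrent restores from several tasks would all report into the same listener with no way to disambiguate their progress, which is the problem the addendum addresses.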


Re: [DISCUSS] KIP-180: Add a broker metric specifying the number of consumer group rebalances in progress

2017-07-21 Thread Guozhang Wang
Ah, that's right Jason.

With that I can be convinced to add one metric per each state.

Guozhang

On Fri, Jul 21, 2017 at 11:44 AM, Jason Gustafson 
wrote:

> >
> > "Dead" and "Empty" states are transient: groups usually only stay in
> > this
> > state for a short while and are then deleted or transition to other
> > states.
>
>
> This is not strictly true for the "Empty" state which we also use to
> represent simple groups which only use the coordinator to store offsets. I
> think we may as well cover all the states if we're going to cover any of
> them specifically.
>
> -Jason
>
>
>
> On Fri, Jul 21, 2017 at 9:45 AM, Guozhang Wang  wrote:
>
> > My two cents:
> >
> > "Dead" and "Empty" states are transient: groups usually only stay in
> > this
> > state for a short while and are then deleted or transition to other
> > states.
> >
> > Since we have the existing "NumGroups" metric, `NumGroups -
> > NumGroupsRebalancing - NumGroupsAwaitingSync` should cover the above
> > three, where "Stable"
> > should be contributing most of the counts: If we have a bug that causes
> the
> > num.Dead / Empty to keep increasing, then we would observe `NumGroups`
> keep
> > increasing which should be sufficient for alerting. And trouble shooting
> of
> > the issue could be relying on the log4j.
> >
> > Guozhang
> >
> > On Fri, Jul 21, 2017 at 7:19 AM, Ismael Juma  wrote:
> >
> > > Thanks for the KIP, Colin. This will definitely be useful. One
> question:
> > > would it be useful to have a metric for for the number of groups in
> each
> > > possible state? The KIP suggests "PreparingRebalance" and
> "AwaitingSync".
> > > That leaves "Stable", "Dead" and "Empty". Are those not useful?
> > >
> > > Ismael
> > >
> > > On Thu, Jul 20, 2017 at 6:52 PM, Colin McCabe 
> > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I posted "KIP-180: Add a broker metric specifying the number of
> > consumer
> > > > group rebalances in progress" for discussion:
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 180%3A+Add+a+broker+metric+specifying+the+number+of+
> > > > consumer+group+rebalances+in+progress
> > > >
> > > > Check it out.
> > > >
> > > > cheers,
> > > > Colin
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


Re: [DISCUSS] 2017 October release planning and release version

2017-07-21 Thread Guozhang Wang
Thanks Ismael. I agree with you on all these points, and for some of these
points like 3) we never have a written-down policy though in practice we
tend to follow some patterns.

To me deciding what's the version number of the next major release does not
necessarily mean we need now to change any of these or to set the hard
rules along with it; I'd like to keep them as two separate discussions as
they seem semi-orthogonal to me.


Guozhang


On Fri, Jul 21, 2017 at 8:44 AM, Ismael Juma  wrote:

> On the topic of documentation, we should also document which releases are
> still supported and which are not. There are a few factors to consider:
>
> 1. Which branches receive bug fixes. We typically backport fixes to the two
> most recent stable branches (the most recent stable branch typically gets
> more backports than the older one).
>
> 2. Which branches receive security fixes. This could be the same as `1`,
> but we could attempt to backport more aggressively for security fixes as
> they tend to be rare (so far at least) and the impact could be severe.
>
> 3. The release policy for stable branches. We tend to stop releasing from a
> given stable branch before we stop backporting fixes. Maybe that's OK, but
> it would be good to document how we decide that a bug fix release is
> needed.
>
> 4. How long are direct upgrades supported for. During the time-based
> releases discussion, we agreed to support direct upgrades for 2 years. As
> it happens, direct upgrades from 0.8.2 to 1.0.0 would not be supported if
> we follow this strictly. Not clear if we want to do that.
>
> 5. How long are older clients supported for. Current brokers support
> clients all the way to 0.8.x.
>
> 6. How long are older brokers supported for. Current clients support 0.10.x
> and newer brokers.
>
> 7. How long are message formats supported for. We never discussed this. I
> think 5 years would probably be the minimum.
>
> Ismael
>
> P.S. I'll file a JIRA to capture this information.
>
> On Wed, Jul 19, 2017 at 10:12 AM, Ismael Juma  wrote:
>
> > Hi Stevo,
> >
> > Thanks for your feedback. We should definitely do a better job of
> > documenting things. We basically follow semantic versioning, but it's
> > currently a bit confusing because:
> >
> > 1. There are 4 segments in the version. The "0." part should be ignored
> > when deciding what is major, minor and patch at the moment, but many
> people
> > don't know this. Once we move to 1.0.0, that problem goes away.
> >
> > 2. To know what is a public API, you must check the Javadoc (
> > https://kafka.apache.org/0110/javadoc/index.html?org/
> > apache/kafka/clients/consumer/KafkaConsumer.html). If it's not listed
> > there, it's not public API. Ideally, it would be obvious from the package
> > name (i.e. there would be "internals" in the name), but we are not there
> > yet. The exception are the old Scala APIs, but they have all been
> > deprecated and they will be removed eventually (the old Scala consumers
> > won't be removed until the June 2018 release at the earliest in order to
> > give people time to migrate).
> >
> > 3. Even though we are following reasonably common practices, we haven't
> > documented them all in one place. It would be great to do it during the
> > next release cycle.
> >
> > A few comments below.
> >
> > On Wed, Jul 19, 2017 at 1:31 AM, Stevo Slavić  wrote:
> >
> >> - APIs not labeled or labeled as stable
> >> -- change in major version is only one that can break backward
> >> compatibility (client APIs or behavior)
> >>
> >
> > To clarify, stable APIs should not be changed in an incompatible way
> > without a deprecation cycle. Independently of whether it's a major
> release
> > or not.
> >
> >
> >> -- change in minor version can introduce new features, but not break
> >> backward compatibility
> >> -- change in patch version, is for bug fixes only.
> >>
> >
> > Right, this has been the case for a while already. Also see annotations
> > below.
> >
> >
> >> - APIs labeled as evolving can be broken in backward incompatible way in
> >> any release, but are assumed less likely to be broken compared to
> unstable
> >> APIs
> >> - APIs labeled as unstable can be broken in backward incompatible way in
> >> any release, major, minor or patch
> >>
> >
> > The relevant annotations do explain this:
> >
> > https://kafka.apache.org/0110/javadoc/org/apache/kafka/
> common/annotation/
> > InterfaceStability.html
> > https://kafka.apache.org/0110/javadoc/org/apache/kafka/
> common/annotation/
> > InterfaceStability.Stable.html
> > https://kafka.apache.org/0110/javadoc/org/apache/kafka/
> common/annotation/
> > InterfaceStability.Evolving.html
> > https://kafka.apache.org/0110/javadoc/org/apache/kafka/
> common/annotation/
> > InterfaceStability.Unstable.html
> >
> > But we should have a section in our documentation as well.
> >
> >
> >> - deprecated stable APIs are treated as any stable APIs, they can be
> >> removed 

Re: [DISCUSS] KIP-180: Add a broker metric specifying the number of consumer group rebalances in progress

2017-07-21 Thread Jason Gustafson
>
> "Dead" and "Empty" states are transient: groups usually only leaves in this
> state for a short while and then being deleted or transited to other
> states.


This is not strictly true for the "Empty" state, which we also use to
represent simple groups that only use the coordinator to store offsets. I
think we may as well cover all the states if we're going to cover any of
them specifically.

-Jason



On Fri, Jul 21, 2017 at 9:45 AM, Guozhang Wang  wrote:

> My two cents:
>
> "Dead" and "Empty" states are transient: groups usually only leaves in this
> state for a short while and then being deleted or transited to other
> states.
>
> Since we have the existing "NumGroups" metric, `NumGroups -
> NumGroupsRebalancing - NumGroupsAwaitingSync` should cover the above
> three, where "Stable"
> should be contributing most of the counts: If we have a bug that causes the
> num.Dead / Empty to keep increasing, then we would observe `NumGroups` keep
> increasing, which should be sufficient for alerting. And troubleshooting of
> the issue could rely on the log4j logs.
>
> *Guozhang*
>
> On Fri, Jul 21, 2017 at 7:19 AM, Ismael Juma  wrote:
>
> > Thanks for the KIP, Colin. This will definitely be useful. One question:
> > would it be useful to have a metric for the number of groups in each
> > possible state? The KIP suggests "PreparingRebalance" and "AwaitingSync".
> > That leaves "Stable", "Dead" and "Empty". Are those not useful?
> >
> > Ismael
> >
> > On Thu, Jul 20, 2017 at 6:52 PM, Colin McCabe 
> wrote:
> >
> > > Hi all,
> > >
> > > I posted "KIP-180: Add a broker metric specifying the number of
> consumer
> > > group rebalances in progress" for discussion:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 180%3A+Add+a+broker+metric+specifying+the+number+of+
> > > consumer+group+rebalances+in+progress
> > >
> > > Check it out.
> > >
> > > cheers,
> > > Colin
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-07-21 Thread Jason Gustafson
>
> Regarding your comment about the current limitation on the information
> returned for a consumer group, do you think it's worth expanding the API
> to return some additional info (e.g. generation id, group leader, ...)?


Seems outside the scope of this KIP. Up to you, but I'd probably leave it
for future work.

-Jason

On Thu, Jul 20, 2017 at 4:21 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Jason,
>
> Regarding your comment about the current limitation on the information
> returned for a consumer group, do you think it's worth expanding the API
> to return some additional info (e.g. generation id, group leader, ...)?
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Jason Gustafson 
> To: Kafka Users 
> Cc: dev@kafka.apache.org
> Date:   07/19/2017 01:46 PM
> Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> ConsumerGroupCommand
>
>
>
> Hey Vahid,
>
> Thanks for the updates. Looks pretty good. A couple comments:
>
> 1. For the --state option, should we use the same column-oriented format
> as
> we use for the other options? I realize there would only be one row, but
> the inconsistency is a little vexing. Also, since this tool is working
> only
> with consumer groups, perhaps we can leave out "protocol type" and use
> "assignment strategy" in place of "protocol"? It would be nice to also
> include the group generation, but it seems we didn't add that to the
> DescribeGroup response. Perhaps we could also include a count of the
> number
> of members?
> 2. It's a little annoying that --subscription and --members share so much
> in common. Maybe we could drop --subscription and use a --verbose flag to
> control whether or not to include the subscription and perhaps the
> assignment as well? Not sure if that's more annoying or less, but maybe a
> generic --verbose will be useful in other contexts.
>
> As for your question on whether we need the --offsets option at all, I
> don't have a strong opinion, but it seems to make the command semantics a
> little more consistent.
>
> -Jason
>
> On Tue, Jul 18, 2017 at 12:56 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Jason,
> >
> > I updated the KIP based on your earlier suggestions:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
> > The only thing I am wondering at this point is whether it's worth having
> > a `--describe --offsets` option that behaves exactly like `--describe`.
> >
> > Thanks.
> > --Vahid
> >
> >
> >
> > From:   "Vahid S Hashemian" 
> > To: dev@kafka.apache.org
> > Cc: Kafka Users 
> > Date:   07/17/2017 03:24 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> > ConsumerGroupCommand
> >
> >
> >
> > Hi Jason,
> >
> > Thanks for your quick feedback. Your suggestions seem reasonable.
> > I'll start updating the KIP accordingly and will send out another note
> > when it's ready.
> >
> > Regards.
> > --Vahid
> >
> >
> >
> >
> > From:   Jason Gustafson 
> > To: dev@kafka.apache.org
> > Cc: Kafka Users 
> > Date:   07/17/2017 02:11 PM
> > Subject:Re: [DISCUSS] KIP-175: Additional '--describe' views for
> > ConsumerGroupCommand
> >
> >
> >
> > Hey Vahid,
> >
> > Hmm... If possible, it would be nice to avoid cluttering the default
> > option
> > too much, especially if it is information which is going to be the same
> > for
> > all members (such as the generation). My preference would be to use the
> > --state option that you've suggested for that info so that we can
> > represent
> > it more concisely.
> >
> > The reason I prefer the current output is that it is clear every entry
> > corresponds to a partition for which we have committed offset. Entries
> > like
> > this look strange:
> >
> > TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET
> > LAGCONSUMER-ID
> > HOST   CLIENT-ID
> > -  -  -   -
> > -  consumer4-e173f09d-c761-4f4e-95c7-6fb73bb8fbff
> > /127.0.0.1
> > consumer4
> > -  -  -   -
> > -  consumer5-7b80e428-f8ff-43f3-8360-afd1c8ba43ea
> > /127.0.0.1
> > consumer5
> >
> > It makes me think that the consumers have committed offsets for an
> unknown
> > partition. The --members option seems like a clearer way to communicate
> > the
> > fact that there are some members with no assigned partitions.
> >
> > A few additional suggestions:
> >
> > 1. Maybe we can rename --partitions to --offsets or --committed-offsets
> > and
> > the output could match the default output (in other words, --offsets is
> > treated as the default switch). Seems no harm including the assignment
> > information if we have it.
> > 2. Along the lines of Onur's comment, 

Jenkins build is back to normal : kafka-trunk-jdk7 #2553

2017-07-21 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : kafka-0.11.0-jdk7 #241

2017-07-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5624) Unsafe use of expired sensors

2017-07-21 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5624:
--

 Summary: Unsafe use of expired sensors
 Key: KAFKA-5624
 URL: https://issues.apache.org/jira/browse/KAFKA-5624
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


There seem to be a couple of unhandled cases following sensor expiration:

1. Static sensors (such as {{ClientQuotaManager.}}) can be expired due to 
inactivity, but the references will remain valid and usable. Probably a good 
idea to either ensure we use a "get or create" pattern when accessing the 
sensor or add a new static registration option which makes the sensor 
ineligible for expiration.
2. It is possible to register metrics through the sensor even after it is 
expired. We should probably raise an exception instead.
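The "get or create" pattern suggested in item 1 might look like the following. This is a minimal, illustrative Python sketch of the idea only, not the actual Kafka `Metrics`/`Sensor` API; all names here (`MetricsRegistry`, `get_or_create_sensor`, `expire_inactive`) are hypothetical:

```python
class MetricsRegistry:
    """Toy registry: sensors may be expired (removed) due to inactivity,
    so callers should re-resolve a sensor instead of caching a reference."""

    def __init__(self):
        self._sensors = {}

    def get_or_create_sensor(self, name):
        # Re-resolving on every access means an expired sensor is simply
        # recreated, never used through a stale reference.
        sensor = self._sensors.get(name)
        if sensor is None:
            sensor = {"name": name}
            self._sensors[name] = sensor
        return sensor

    def expire_inactive(self, name):
        # Models inactivity-based sensor expiration.
        self._sensors.pop(name, None)

registry = MetricsRegistry()
first = registry.get_or_create_sensor("throttle-time")
registry.expire_inactive("throttle-time")
second = registry.get_or_create_sensor("throttle-time")
print(first is second)  # -> False: a fresh sensor, not a stale expired one
```

The alternative in item 1 (a static registration option that makes a sensor ineligible for expiration) would amount to skipping such entries in `expire_inactive`.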



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-180: Add a broker metric specifying the number of consumer group rebalances in progress

2017-07-21 Thread Guozhang Wang
My two cents:

"Dead" and "Empty" states are transient: groups usually only leaves in this
state for a short while and then being deleted or transited to other states.

Since we have the existing "NumGroups" metric, `NumGroups -
NumGroupsRebalancing - NumGroupsAwaitingSync` should cover the above
three, where "Stable"
should be contributing most of the counts: If we have a bug that causes the
num.Dead / Empty to keep increasing, then we would observe `NumGroups` keep
increasing, which should be sufficient for alerting. And troubleshooting of
the issue could rely on the log4j logs.
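The metric arithmetic above can be sketched concretely. The following is an illustrative Python model of the relationship between the proposed metrics only, not Kafka code; the state names are the coordinator states named in this thread:

```python
from collections import Counter

def remaining_groups(group_states):
    """NumGroups - NumGroupsRebalancing - NumGroupsAwaitingSync, which
    covers the Stable, Dead, and Empty states combined."""
    counts = Counter(group_states)
    num_groups = len(group_states)
    num_rebalancing = counts["PreparingRebalance"]
    num_awaiting_sync = counts["AwaitingSync"]
    return num_groups - num_rebalancing - num_awaiting_sync

states = ["Stable", "Stable", "PreparingRebalance", "AwaitingSync", "Empty", "Dead"]
# 6 groups total, 2 in rebalance-related states -> 4 in Stable/Dead/Empty.
print(remaining_groups(states))  # -> 4
```

So if a bug caused Dead or Empty counts to grow without bound, NumGroups itself would grow, which is the alerting signal described above.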

*Guozhang*

On Fri, Jul 21, 2017 at 7:19 AM, Ismael Juma  wrote:

> Thanks for the KIP, Colin. This will definitely be useful. One question:
> would it be useful to have a metric for the number of groups in each
> possible state? The KIP suggests "PreparingRebalance" and "AwaitingSync".
> That leaves "Stable", "Dead" and "Empty". Are those not useful?
>
> Ismael
>
> On Thu, Jul 20, 2017 at 6:52 PM, Colin McCabe  wrote:
>
> > Hi all,
> >
> > I posted "KIP-180: Add a broker metric specifying the number of consumer
> > group rebalances in progress" for discussion:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 180%3A+Add+a+broker+metric+specifying+the+number+of+
> > consumer+group+rebalances+in+progress
> >
> > Check it out.
> >
> > cheers,
> > Colin
> >
>



-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk7 #2552

2017-07-21 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5512; Awake the heartbeat thread when timetoNextHeartbeat is equal

--
[...truncated 973.96 KB...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > 

[GitHub] kafka pull request #3442: KAFKA-5512: Awake the heartbeat thread when timeto...

2017-07-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3442




Re: [DISCUSS] 2017 October release planning and release version

2017-07-21 Thread Ismael Juma
On the topic of documentation, we should also document which releases are
still supported and which are not. There are a few factors to consider:

1. Which branches receive bug fixes. We typically backport fixes to the two
most recent stable branches (the most recent stable branch typically gets
more backports than the older one).

2. Which branches receive security fixes. This could be the same as `1`,
but we could attempt to backport more aggressively for security fixes as
they tend to be rare (so far at least) and the impact could be severe.

3. The release policy for stable branches. We tend to stop releasing from a
given stable branch before we stop backporting fixes. Maybe that's OK, but
it would be good to document how we decide that a bug fix release is needed.

4. How long are direct upgrades supported for. During the time-based
releases discussion, we agreed to support direct upgrades for 2 years. As
it happens, direct upgrades from 0.8.2 to 1.0.0 would not be supported if
we follow this strictly. Not clear if we want to do that.

5. How long are older clients supported for. Current brokers support
clients all the way to 0.8.x.

6. How long are older brokers supported for. Current clients support 0.10.x
and newer brokers.

7. How long are message formats supported for. We never discussed this. I
think 5 years would probably be the minimum.

Ismael

P.S. I'll file a JIRA to capture this information.

On Wed, Jul 19, 2017 at 10:12 AM, Ismael Juma  wrote:

> Hi Stevo,
>
> Thanks for your feedback. We should definitely do a better job of
> documenting things. We basically follow semantic versioning, but it's
> currently a bit confusing because:
>
> 1. There are 4 segments in the version. The "0." part should be ignored
> when deciding what is major, minor and patch at the moment, but many people
> don't know this. Once we move to 1.0.0, that problem goes away.
>
> 2. To know what is a public API, you must check the Javadoc (
> https://kafka.apache.org/0110/javadoc/index.html?org/
> apache/kafka/clients/consumer/KafkaConsumer.html). If it's not listed
> there, it's not public API. Ideally, it would be obvious from the package
> name (i.e. there would be "internals" in the name), but we are not there
> yet. The exception are the old Scala APIs, but they have all been
> deprecated and they will be removed eventually (the old Scala consumers
> won't be removed until the June 2018 release at the earliest in order to
> give people time to migrate).
>
> 3. Even though we are following reasonably common practices, we haven't
> documented them all in one place. It would be great to do it during the
> next release cycle.
>
> A few comments below.
>
> On Wed, Jul 19, 2017 at 1:31 AM, Stevo Slavić  wrote:
>
>> - APIs not labeled or labeled as stable
>> -- change in major version is only one that can break backward
>> compatibility (client APIs or behavior)
>>
>
> To clarify, stable APIs should not be changed in an incompatible way
> without a deprecation cycle. Independently of whether it's a major release
> or not.
>
>
>> -- change in minor version can introduce new features, but not break
>> backward compatibility
>> -- change in patch version, is for bug fixes only.
>>
>
> Right, this has been the case for a while already. Also see annotations
> below.
>
>
>> - APIs labeled as evolving can be broken in backward incompatible way in
>> any release, but are assumed less likely to be broken compared to unstable
>> APIs
>> - APIs labeled as unstable can be broken in backward incompatible way in
>> any release, major, minor or patch
>>
>
> The relevant annotations do explain this:
>
> https://kafka.apache.org/0110/javadoc/org/apache/kafka/common/annotation/
> InterfaceStability.html
> https://kafka.apache.org/0110/javadoc/org/apache/kafka/common/annotation/
> InterfaceStability.Stable.html
> https://kafka.apache.org/0110/javadoc/org/apache/kafka/common/annotation/
> InterfaceStability.Evolving.html
> https://kafka.apache.org/0110/javadoc/org/apache/kafka/common/annotation/
> InterfaceStability.Unstable.html
>
> But we should have a section in our documentation as well.
>
>
>> - deprecated stable APIs are treated as any stable APIs, they can be
>> removed only in major release, are not allowed to be changed in backward
>> incompatible way in either patch or minor version release
>>
>
> Right, but note that stable non-deprecated APIs provide stronger
> guarantees in major releases (they can't be changed in an incompatible way).
>
>>
>> This means one should be able to upgrade server and recompile/deploy apps
>> with clients to new minor.patch release with dependency version change
>> being only change needed and there would be no drama.
>>
>
> That should have been the case for a while as long as you are using stable
> public APIs.
>
>>
>> Practice/"features" like protocol version being a parameter, and
>> defaulting
>> to latest so auto updated with dependency update which 
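The four-segment version scheme described above (ignore the leading "0." when reading major/minor/patch) can be sketched as follows. This is an illustrative Python helper, not part of any Kafka tooling; the function name `effective_version` is hypothetical:

```python
def effective_version(version):
    """Map a Kafka version string to (major, minor, patch), dropping the
    leading '0.' segment used by pre-1.0 releases."""
    parts = [int(p) for p in version.split(".")]
    if parts[0] == 0:
        parts = parts[1:]  # four-segment scheme: '0.11.0.1' reads as 11.0.1
    # Pad with zeros in case a trailing segment is omitted (e.g. '1.0').
    parts += [0] * (3 - len(parts))
    return tuple(parts[:3])

print(effective_version("0.11.0.1"))  # -> (11, 0, 1): a patch release
print(effective_version("1.0.0"))     # -> (1, 0, 0)
```

Once the project moves to 1.0.0, the special case for the leading zero disappears and the string reads as plain semantic versioning.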

[jira] [Resolved] (KAFKA-1421) Error: Could not find or load main class kafka.perf.SimpleConsumerPerformance

2017-07-21 Thread Daryl Erwin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryl Erwin resolved KAFKA-1421.

Resolution: Done

> Error: Could not find or load main class kafka.perf.SimpleConsumerPerformance
> -
>
> Key: KAFKA-1421
> URL: https://issues.apache.org/jira/browse/KAFKA-1421
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0
>Reporter: Daryl Erwin
>Assignee: Jun Rao
>Priority: Minor
>  Labels: performance
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Did a base install with 
> sbt update
> sbt package
> I am able to successfully run the console-producer, consumer. I am trying to 
> run the perf scripts (./kafka-simple-consumer-perf-test.sh) but it appears 
> the jar file is not generated. 
> Are the steps that I need to run to create this jar file?
> .. same as:
> Error: Could not find or load main class kafka.perf.ProducerPerformance





Re: [DISCUSS] 2017 October release planning and release version

2017-07-21 Thread Michal Borowiecki

+1 for 1.0 (not for 12.0)


On 21/07/17 15:59, Michael Pearce wrote:

+1

Why not just drop the leading 0 and call the next version 12.0.0 instead of
1.0.0? In my head I’ve always just dropped the leading 0 anyhow.

On 21/07/2017, 04:15, "Neha Narkhede"  wrote:

 +1 on 1.0. It's about time :)
 On Thu, Jul 20, 2017 at 5:11 PM Ewen Cheslack-Postava 
 wrote:

 > Ack on the deprecation, as long as we actually give people the window. I
 > guess we're not doing a good job of communicating what that period is
 > anyway, so we can play a bit fast and loose.
 >
 > -Ewen
 >
 > On Thu, Jul 20, 2017 at 4:59 PM, Guozhang Wang  
wrote:
 >
 > > We do have the KIP in the candidate list of the next release (see wiki
 > > 
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2017.Oct)
 > > as KIP-118:
 > > Drop Support for Java 7 in Kafka 0.11
 > >  > 118%3A+Drop+Support+for+Java+7+in+Kafka+0.11>
 > > .
 > >
 > > Guozhang
 > >
 > > On Thu, Jul 20, 2017 at 4:51 AM, Damian Guy 
 > wrote:
 > >
 > > > +1 on 1.0!
 > > > Are we also going to move to java 8?
 > > > I also think we should drop the Unstable annotations completely.
 > > >
 > > > Cheers,
 > > > Damian
 > > >
 > > > On Wed, 19 Jul 2017 at 21:36 Guozhang Wang  
wrote:
 > > >
 > > > > Hi Stevo,
 > > > >
 > > > > Just trying to add to what Ismael has already replied you:
 > > > >
 > > > >
 > > > > > Practice/"features" like protocol version being a parameter, and
 > > > > defaulting
 > > > > > to latest so auto updated with dependency update which introduces
 > new
 > > > > > protocol/behavior should not be used in public client APIs. To
 > switch
 > > > > > between backward incompatible APIs (contract and behaviors),
 > ideally
 > > > user
 > > > > > should explicitly have to change code and not dependency only, 
but
 > at
 > > > > least
 > > > > > it should be clearly communicated that there are breaking changes
 > to
 > > > > expect
 > > > > > even with just dependency update by e.g. giving major version
 > release
 > > > > clear
 > > > > > meaning. If app dependency on Kafka client library minor.patch on
 > > same
 > > > > > major is updated, and if there's a change in behavior or API
 > > requiring
 > > > > app
 > > > > > code change - it's a bug.
 > > > > >
 > > > > > Change introduced contrary to the SLO, is OK to be reported as 
bug.
 > > > > > Everything else is improvement or feature request.
 > > > > >
 > > > > > If this was the case, and 1.0.0 was released today with APIs as
 > they
 > > > are
 > > > > > now, Scala client APIs even though deprecated would not break and
 > > > require
 > > > > > refactoring with every 1.* minor/patch release, and would only be
 > > > allowed
 > > > > > to be broken or removed in future major release, like 2.0.0
 > > > >
 > > > > Just to clarify, my proposal is that moving forward beyond the next
 > > > release
 > > > > we will not make any public API breaking changes in any of the 
major
 > or
 > > > > minor releases, but will only mark them as "deprecated", and
 > deprecated
 > > > > public APIs will be only considered for removing as early as the 
next
 > > > major
 > > > > release: so if we mark the scala consumer APIs as deprecated in
 > 1.0.0,
 > > we
 > > > > should only consider removing them at 2.0.0 or even later.
 > > > >
 > > > > > It should be also clear how long is each version supported - e.g.
 > if
 > > > > > minor.patch had meaning that there are no backward incompatible
 > > > changes,
 > > > > > it's OK to file a bug only for current major.minor.patch; 
previous
 > > > major
 > > > > > and its last minor.patch can only have patches released up to 
some
 > > time
 > > > > > like 1 up to 3 months.
 > > > >
 > > > > Currently in practice we have not ever done, for example a bugfix
 > > release
 > > > > on an older major / minor release: i.e. once we have released say
 > > > 0.10.2.0
 > > > > we did not release 0.10.1.2 any more. So practically speaking we do
 > not
 > > > > have a "support period" for older versions yet, and in the next
 > coming
 > > > > release I do not have plans to propose some concrete policy for 
that
 > > > > matter.
 > > > >
 > > > >
 > > > > Guozhang
 > > > >
 > > > >
 > > > >
 > > > > On Wed, Jul 19, 2017 at 2:12 AM, Ismael Juma 
 > > wrote:
 > > > >
 > > > > > Hi Stevo,
 > > > > >
 > > > > > Thanks for your feedback. We 

Re: [DISCUSS] 2017 October release planning and release version

2017-07-21 Thread Michael Pearce
+1

Why not just drop the leading 0 and call the next version 12.0.0 instead of
1.0.0? In my head I’ve always just dropped the leading 0 anyhow.

On 21/07/2017, 04:15, "Neha Narkhede"  wrote:

+1 on 1.0. It's about time :)
On Thu, Jul 20, 2017 at 5:11 PM Ewen Cheslack-Postava 
wrote:

> Ack on the deprecation, as long as we actually give people the window. I
> guess we're not doing a good job of communicating what that period is
> anyway, so we can play a bit fast and loose.
>
> -Ewen
>
> On Thu, Jul 20, 2017 at 4:59 PM, Guozhang Wang  wrote:
>
> > We do have the KIP in the candidate list of the next release (see wiki
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2017.Oct)
> > as KIP-118:
> > Drop Support for Java 7 in Kafka 0.11
> >  > 118%3A+Drop+Support+for+Java+7+in+Kafka+0.11>
> > .
> >
> > Guozhang
> >
> > On Thu, Jul 20, 2017 at 4:51 AM, Damian Guy 
> wrote:
> >
> > > +1 on 1.0!
> > > Are we also going to move to java 8?
> > > I also think we should drop the Unstable annotations completely.
> > >
> > > Cheers,
> > > Damian
> > >
> > > On Wed, 19 Jul 2017 at 21:36 Guozhang Wang  wrote:
> > >
> > > > Hi Stevo,
> > > >
> > > > Just trying to add to what Ismael has already replied you:
> > > >
> > > >
> > > > > Practice/"features" like protocol version being a parameter, and
> > > > defaulting
> > > > > to latest so auto updated with dependency update which introduces
> new
> > > > > protocol/behavior should not be used in public client APIs. To
> switch
> > > > > between backward incompatible APIs (contract and behaviors),
> ideally
> > > user
> > > > > should explicitly have to change code and not dependency only, but
> at
> > > > least
> > > > > it should be clearly communicated that there are breaking changes
> to
> > > > expect
> > > > > even with just dependency update by e.g. giving major version
> release
> > > > clear
> > > > > meaning. If app dependency on Kafka client library minor.patch on
> > same
> > > > > major is updated, and if there's a change in behavior or API
> > requiring
> > > > app
> > > > > code change - it's a bug.
> > > > >
> > > > > Change introduced contrary to the SLO, is OK to be reported as 
bug.
> > > > > Everything else is improvement or feature request.
> > > > >
> > > > > If this was the case, and 1.0.0 was released today with APIs as
> they
> > > are
> > > > > now, Scala client APIs even though deprecated would not break and
> > > require
> > > > > refactoring with every 1.* minor/patch release, and would only be
> > > allowed
> > > > > to be broken or removed in future major release, like 2.0.0
> > > >
> > > > Just to clarify, my proposal is that moving forward beyond the next
> > > release
> > > > we will not make any public API breaking changes in any of the major
> or
> > > > minor releases, but will only mark them as "deprecated", and
> deprecated
> > > > public APIs will be only considered for removing as early as the 
next
> > > major
> > > > release: so if we mark the scala consumer APIs as deprecated in
> 1.0.0,
> > we
> > > > should only be consider removing it at 2.0.0 or even later.
> > > >
> > > > > It should be also clear how long is each version supported - e.g.
> if
> > > > > minor.patch had meaning that there are no backward incompatible
> > > changes,
> > > > > it's OK to file a bug only for current major.minor.patch; previous
> > > major
> > > > > and its last minor.patch can only have patches released up to some
> > time
> > > > > like 1 up to 3 months.
> > > >
> > > > Currently in practice we have not ever done, for example a bugfix
> > release
> > > > on an older major / minor release: i.e. once we have released say
> > > 0.10.2.0
> > > > we did not release 0.10.1.2 any more. So practically speaking we do
> not
> > > > have a "support period" for older versions yet, and in the next
> coming
> > > > release I do not have plans to propose some concrete policy for that
> > > > matter.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > >
> > > > On Wed, Jul 19, 2017 at 2:12 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Hi Stevo,
> > > > >
> > > > > Thanks for your feedback. We should definitely do a better job of
> > > > > documenting things. We basically follow semantic versioning, but
> it's
> > > > > currently a bit confusing because:
> > > > >
> > > > 

Re: [DISCUSS] KIP-180: Add a broker metric specifying the number of consumer group rebalances in progress

2017-07-21 Thread Ismael Juma
Thanks for the KIP, Colin. This will definitely be useful. One question:
would it be useful to have a metric for the number of groups in each
possible state? The KIP suggests "PreparingRebalance" and "AwaitingSync".
That leaves "Stable", "Dead" and "Empty". Are those not useful?

Ismael
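
For illustration, per-state gauges amount to counting groups by their current state. A minimal Python sketch (state names taken from this thread; the `GroupCounts` helper is hypothetical, not Kafka code):

```python
from collections import Counter

# Hypothetical helper: tracks how many consumer groups are in each state,
# mirroring the per-state gauges discussed in this thread. Not Kafka code.
STATES = ("PreparingRebalance", "AwaitingSync", "Stable", "Dead", "Empty")

class GroupCounts:
    def __init__(self):
        self._by_group = {}  # group id -> current state

    def transition(self, group_id, state):
        # Record the group's latest state; earlier states are overwritten.
        assert state in STATES
        self._by_group[group_id] = state

    def gauges(self):
        # One gauge value per possible state, defaulting to 0.
        counts = Counter(self._by_group.values())
        return {s: counts.get(s, 0) for s in STATES}
```

A broker-side metric reporter could then expose each entry of `gauges()` as its own JMX gauge.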

On Thu, Jul 20, 2017 at 6:52 PM, Colin McCabe  wrote:

> Hi all,
>
> I posted "KIP-180: Add a broker metric specifying the number of consumer
> group rebalances in progress" for discussion:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 180%3A+Add+a+broker+metric+specifying+the+number+of+
> consumer+group+rebalances+in+progress
>
> Check it out.
>
> cheers,
> Colin
>


Build failed in Jenkins: kafka-0.11.0-jdk7 #240

2017-07-21 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5608; Follow-up to fix potential NPE and clarify method name

--
[...truncated 2.42 MB...]
org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldWriteToInnerOnPutIfAbsentNoPreviousValue STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldWriteToInnerOnPutIfAbsentNoPreviousValue PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldWriteKeyValueBytesToInnerStoreOnPut STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldWriteKeyValueBytesToInnerStoreOnPut PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnCurrentValueOnPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnCurrentValueOnPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogChangesOnPutAll STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogChangesOnPutAll PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToChangeLogOnPutIfAbsentWhenValueForKeyExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToChangeLogOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED


Re: Consumer throughput drop

2017-07-21 Thread Ismael Juma
Thanks for reporting the results. Maybe you could submit a PR that updates
the ops section?

https://github.com/apache/kafka/blob/trunk/docs/ops.html

Ismael

On Fri, Jul 21, 2017 at 2:49 PM, Ovidiu-Cristian MARCU <
ovidiu-cristian.ma...@inria.fr> wrote:

> After some tuning, I got better results. What I changed, as suggested:
>
> dirty_ratio = 10 (previously 20)
> dirty_background_ratio=3 (previously 10)
>
> > As a result, disk read I/O is almost completely 0 (I have enough cache, and
> > the consumer is keeping up with the producer).
>
> - producer throughput remains constant ~ 400K/s;
> - consumer throughput (a Flink app) stays in this interval [300K/s,
> 500K/s] even when the cache is filled (there are some variations but are
> not influenced by system’s cache);
>
> I don’t know if Kafka’s documentation already covers this, but it could
> be added to the documentation if you also reproduce my tests and
> consider it useful.
>
> Thanks,
> Ovidiu
>
> > On 21 Jul 2017, at 01:57, Apurva Mehta  wrote:
> >
> > Hi Ovidu,
> >
> > The see-saw behavior is inevitable with linux when you have concurrent
> reads and writes. However, tuning the following two settings may help
> achieve more stable performance (from Jay's link):
> >
> > dirty_ratio
> > Defines a percentage value. Writeout of dirty data begins (via pdflush)
> when dirty data comprises this percentage of total system memory. The
> default value is 20.
> > Red Hat recommends a slightly lower value of 15 for database workloads.
> >
> > dirty_background_ratio
> > Defines a percentage value. Writeout of dirty data begins in the
> background (via pdflush) when dirty data comprises this percentage of total
> memory. The default value is 10. For database workloads, Red Hat recommends
> a lower value of 3.
> >
> > Thanks,
> > Apurva
> >
> >
> > On Thu, Jul 20, 2017 at 12:25 PM, Ovidiu-Cristian MARCU <
> ovidiu-cristian.ma...@inria.fr >
> wrote:
> > Yes, I’m using Debian Jessie 2.6 installed on this hardware [1].
> >
> > It is also my understanding that Kafka is based on system’s cache (Linux
> in this case) which is based on Clock-Pro for page replacement policy,
> doing complex things for general workloads. I will check the tuning
> > parameters, but I was hoping for some advice to avoid disk at all when
> reading, considering the system's cache is used completely by Kafka and is
> huge ~128GB - that is to tune Clock-Pro to be smarter when used for
> streaming access patterns.
> >
> > Thanks,
> > Ovidiu
> >
> > [1] https://www.grid5000.fr/mediawiki/index.php/Rennes:Hardware#Dell_Poweredge_R630_.28paravance.29
> >
> > > On 20 Jul 2017, at 21:06, Jay Kreps > wrote:
> > >
> > > I suspect this is on Linux right?
> > >
> > > The way Linux works is it uses a percent of memory to buffer new
> writes, at a certain point it thinks it has too much buffered data and it
> gives high priority to writing that out. The good news about this is that
> the writes are very linear, well laid out, and high-throughput. The
> problem is that it leads to a bit of see-saw behavior.
> > >
> > > Now obviously the drop in performance isn't wrong. When your disk is
> writing data out it is doing work and obviously the read throughput will be
> higher when you are just reading and not writing then when you are doing
> both reading and writing simultaneously. So obviously you can't get the
> no-writing performance when you are also writing (unless you add I/O
> capacity).
> > >
> > > But still these big see-saws in performance are not ideal. You'd
> rather have more constant performance all the time rather than have linux
> bounce back and forth from writing nothing and then frantically writing
> full bore. Fortunately linux provides a set of pagecache tuning parameters
> that let you control this a bit.
> > >
> > > I think these docs cover some of the parameters:
> > > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-tunables.html
> > >
> > > -Jay
> > >
> > > On Thu, Jul 20, 2017 at 10:24 AM, Ovidiu-Cristian MARCU <
> ovidiu-cristian.ma...@inria.fr 
> 

Re: Consumer throughput drop

2017-07-21 Thread Ovidiu-Cristian MARCU
After some tuning, I got better results. What I changed, as suggested:

dirty_ratio = 10 (previously 20)
dirty_background_ratio=3 (previously 10)

As a result, disk read I/O is almost completely 0 (I have enough cache, and the
consumer is keeping up with the producer).

- producer throughput remains constant ~ 400K/s;
- consumer throughput (a Flink app) stays in this interval [300K/s, 500K/s] 
even when the cache is filled (there are some variations but are not influenced 
by system’s cache);

I don’t know if Kafka’s documentation already covers this, but it could be
added to the documentation if you also reproduce my tests and consider it
useful.

Thanks,
Ovidiu
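
As a concrete reference, here is a small sketch (assumption: a Linux host, where the kernel exposes these tunables under `/proc/sys/vm`) that reads the writeback thresholds discussed above; changing them requires root, e.g. `sysctl -w vm.dirty_ratio=10`:

```python
from pathlib import Path

def read_vm_tunable(name: str) -> int:
    # Linux exposes the page-cache writeback thresholds under /proc/sys/vm,
    # e.g. /proc/sys/vm/dirty_ratio. Values are percentages of total memory.
    return int(Path("/proc/sys/vm", name).read_text())

if __name__ == "__main__":
    for name in ("dirty_ratio", "dirty_background_ratio"):
        print(f"vm.{name} = {read_vm_tunable(name)}")
```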

> On 21 Jul 2017, at 01:57, Apurva Mehta  wrote:
> 
> Hi Ovidu, 
> 
> The see-saw behavior is inevitable with linux when you have concurrent reads 
> and writes. However, tuning the following two settings may help achieve more 
> stable performance (from Jay's link): 
> 
> dirty_ratio
> Defines a percentage value. Writeout of dirty data begins (via pdflush) when 
> dirty data comprises this percentage of total system memory. The default 
> value is 20.
> Red Hat recommends a slightly lower value of 15 for database workloads.
>  
> dirty_background_ratio
> Defines a percentage value. Writeout of dirty data begins in the background 
> (via pdflush) when dirty data comprises this percentage of total memory. The 
> default value is 10. For database workloads, Red Hat recommends a lower value 
> of 3.
> 
> Thanks,
> Apurva 
> 
> 
> On Thu, Jul 20, 2017 at 12:25 PM, Ovidiu-Cristian MARCU 
> > 
> wrote:
> Yes, I’m using Debian Jessie 2.6 installed on this hardware [1].
> 
> It is also my understanding that Kafka is based on system’s cache (Linux in 
> this case) which is based on Clock-Pro for page replacement policy, doing 
> complex things for general workloads. I will check the tuning parameters, but 
I was hoping for some advice on avoiding disk reads entirely, considering 
the system's cache is used completely by Kafka and is huge (~128GB) - that is, 
tuning Clock-Pro to be smarter when used for streaming access patterns.
> 
> Thanks,
> Ovidiu
> 
> [1] 
> https://www.grid5000.fr/mediawiki/index.php/Rennes:Hardware#Dell_Poweredge_R630_.28paravance.29
> 
> > On 20 Jul 2017, at 21:06, Jay Kreps  > > wrote:
> >
> > I suspect this is on Linux right?
> >
> > The way Linux works is it uses a percent of memory to buffer new writes, at 
> > a certain point it thinks it has too much buffered data and it gives high 
> > priority to writing that out. The good news about this is that the writes 
> > are very linear, well layed out, and high-throughput. The problem is that 
> > it leads to a bit of see-saw behavior.
> >
> > Now obviously the drop in performance isn't wrong. When your disk is 
> > writing data out it is doing work and obviously the read throughput will be 
> > higher when you are just reading and not writing then when you are doing 
> > both reading and writing simultaneously. So obviously you can't get the 
> > no-writing performance when you are also writing (unless you add I/O 
> > capacity).
> >
> > But still these big see-saws in performance are not ideal. You'd rather 
> > have more constant performance all the time rather than have linux bounce 
> > back and forth from writing nothing and then frantically writing full bore. 
> > Fortunately linux provides a set of pagecache tuning parameters that let 
> > you control this a bit.
> >
> > I think these docs cover some of the parameters:
> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-tunables.html
> >
> > -Jay
> >
> > On Thu, Jul 20, 2017 at 10:24 AM, Ovidiu-Cristian MARCU 
> >  
> >  > >> wrote:
> > Hi guys,
> >
> > I’m relatively new to Kafka’s world. I have an issue I describe below, 
> > maybe you can help me understand this behaviour.
> >
> > I’m running a benchmark using the following setup: one producer 

Jenkins build is back to normal : kafka-trunk-jdk8 #1836

2017-07-21 Thread Apache Jenkins Server
See 




Re: Kafka doc : official repo or web site repo ?

2017-07-21 Thread Paolo Patierno
Hi Ewen,


thank you very much for your answer. I know that you guys are really busy and I 
pushed my PRs during the 0.11.0 release cycle so ... I can understand, no 
worries ;)


Regarding the documentation happy to know that I pushed in the right repo but 
...

... regarding the Kafka Streams new shiny documentation I see that the main 
page on the web site has the following phrase on the top "The easiest way to 
write mission-critical ..."

If I try to search it in the Kafka repo I can't find it (I imagine that it 
should be in docs/streams/index.html ... but it's not there).

Instead, if I search for the same phrase in the kafka-site repo I can find it 
in 0110/streams/index.html.


So it seems that the doc is in the kafka-site repo but not in the kafka repo 
... where am I wrong ?


Thanks,

Paolo.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Ewen Cheslack-Postava 
Sent: Friday, July 21, 2017 2:57 AM
To: dev@kafka.apache.org
Subject: Re: Kafka doc : official repo or web site repo ?

Hi Paolo,

Docs are a little bit confusing. Some of the docs are version-specific and
those live in the main repo and are managed by the same branches that
releases are managed on. Then there is the
https://github.com/apache/kafka-site which contains the actual contents of
the site. During release, the current snapshot of version-specific docs are
copied into the kafka-site repo (as the current version, or a historical
version as you would see at, e.g.,
http://kafka.apache.org/0102/documentation.html).

For your PR, you submitted to the correct location. Sorry for the delay in
review, we tend to get a little backlogged -- not enough committers to
review and commit the volume of PRs we are getting. I've reviewed and
committed that first one and I see you have a bunch of others that we'll
try to get to as well. Once you get a feel for which reviewers can/will
review different parts of the code, you'll be able to more easily tag
committers for review and commit (in the case of Connect, myself (@ewencp),
@hachikuji, and @gwenshap are good targets).

Thanks for contributing! I especially appreciate docs fixes :)

-Ewen

On Thu, Jul 13, 2017 at 1:51 AM, Paolo Patierno  wrote:

> Hi guys,
>
> I have big doubt on where is the doc and what to do for upgrading that.
>
> I see the docs folder in the Kafka repo (but even there I have a PR opened
> for one month on Kafka Connect not yet merged) but then I see the Kafka web
> site repo where there is the same doc.
>
> Reading the new Kafka Stream doc I noticed that it seems to be only in the
> Kafka web site repo and not in the Kafka repo.
>
>
> Can you clarifying me where to submit PRs for doc ? On which side ?
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>


Re: [VOTE]: KIP-149: Enabling key access in ValueTransformer, ValueMapper, and ValueJoiner

2017-07-21 Thread Jeyhun Karimov
Thanks Damian.

This vote is now closed and KIP-149 is
accepted with +3 binding and +2 non-binding votes.

Cheers,
Jeyhun

On Fri, Jul 21, 2017 at 3:52 PM Damian Guy  wrote:

> Hi Jeyhun,
>
> Feel free to close the vote. It has been accepted.
>
> Thanks,
> Damian
>
> On Mon, 17 Jul 2017 at 06:18 Guozhang Wang  wrote:
>
> > +1. Thanks!
> >
> > On Sat, Jul 8, 2017 at 1:35 AM, Damian Guy  wrote:
> >
> > > +1
> > > On Fri, 7 Jul 2017 at 16:08, Eno Thereska 
> > wrote:
> > >
> > > > +1 (non-binding) Thanks.
> > > >
> > > > Eno
> > > > > On 6 Jul 2017, at 21:49, Gwen Shapira  wrote:
> > > > >
> > > > > +1
> > > > >
> > > > > On Wed, Jul 5, 2017 at 9:25 AM Matthias J. Sax <
> > matth...@confluent.io>
> > > > > wrote:
> > > > >
> > > > >> +1
> > > > >>
> > > > >> On 6/27/17 1:41 PM, Jeyhun Karimov wrote:
> > > > >>> Dear all,
> > > > >>>
> > > > >>> I would like to start the vote on KIP-149 [1].
> > > > >>>
> > > > >>>
> > > > >>> Cheers,
> > > > >>> Jeyhun
> > > > >>>
> > > > >>>
> > > > >>> [1]
> > > > >>>
> > > > >>
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 149%3A+Enabling+key+access+in+ValueTransformer%2C+
> > > ValueMapper%2C+and+ValueJoiner
> > > > >>>
> > > > >>
> > > > >>
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
-- 
-Cheers

Jeyhun


[GitHub] kafka pull request #3553: KAFKA-5608: Follow-up to fix potential NPE and cla...

2017-07-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3553


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3562: MINOR: Improve log warning to include the log name

2017-07-21 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3562

MINOR: Improve log warning to include the log name



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka tweak-log-warning

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3562.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3562


commit 601da3310c9921d108b4037843bba53ec0cd7966
Author: Ismael Juma 
Date:   2017-07-21T12:38:43Z

Improve log warning to include the log name






[GitHub] kafka pull request #3560: MINOR: Improve log warning to include the log name

2017-07-21 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/3560




[GitHub] kafka pull request #3561: MINOR: Give correct instructions for retaining pre...

2017-07-21 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3561

MINOR: Give correct instructions for retaining previous unclean leader 
election behaviour



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
fix-upgrade-note-for-unclean-leader-election

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3561.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3561


commit c09ab368c2840b89ca7b0e1d2e8f20dcef217c7f
Author: Ismael Juma 
Date:   2017-07-21T12:39:50Z

MINOR: Give correct instructions for retaining previous unclean leader 
election behaviour






[GitHub] kafka pull request #3560: MINOR: Improve log warning to include the log name

2017-07-21 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3560

MINOR: Improve log warning to include the log name



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka tweak-log-and-upgrade-note

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3560.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3560


commit 601da3310c9921d108b4037843bba53ec0cd7966
Author: Ismael Juma 
Date:   2017-07-21T12:38:43Z

Improve log warning to include the log name






Re: [VOTE]: KIP-149: Enabling key access in ValueTransformer, ValueMapper, and ValueJoiner

2017-07-21 Thread Damian Guy
Hi Jeyhun,

Feel free to close the vote. It has been accepted.

Thanks,
Damian

On Mon, 17 Jul 2017 at 06:18 Guozhang Wang  wrote:

> +1. Thanks!
>
> On Sat, Jul 8, 2017 at 1:35 AM, Damian Guy  wrote:
>
> > +1
> > On Fri, 7 Jul 2017 at 16:08, Eno Thereska 
> wrote:
> >
> > > +1 (non-binding) Thanks.
> > >
> > > Eno
> > > > On 6 Jul 2017, at 21:49, Gwen Shapira  wrote:
> > > >
> > > > +1
> > > >
> > > > On Wed, Jul 5, 2017 at 9:25 AM Matthias J. Sax <
> matth...@confluent.io>
> > > > wrote:
> > > >
> > > >> +1
> > > >>
> > > >> On 6/27/17 1:41 PM, Jeyhun Karimov wrote:
> > > >>> Dear all,
> > > >>>
> > > >>> I would like to start the vote on KIP-149 [1].
> > > >>>
> > > >>>
> > > >>> Cheers,
> > > >>> Jeyhun
> > > >>>
> > > >>>
> > > >>> [1]
> > > >>>
> > > >>
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 149%3A+Enabling+key+access+in+ValueTransformer%2C+
> > ValueMapper%2C+and+ValueJoiner
> > > >>>
> > > >>
> > > >>
> > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


[GitHub] kafka pull request #3459: KAFKA-3741: allow users to specify default topic c...

2017-07-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3459




[jira] [Resolved] (KAFKA-3741) Allow setting of default topic configs via StreamsConfig

2017-07-21 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-3741.
---
   Resolution: Fixed
Fix Version/s: 0.11.1.0

Issue resolved by pull request 3459
[https://github.com/apache/kafka/pull/3459]

> Allow setting of default topic configs via StreamsConfig
> 
>
> Key: KAFKA-3741
> URL: https://issues.apache.org/jira/browse/KAFKA-3741
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Roger Hoover
>Assignee: Damian Guy
>  Labels: api
> Fix For: 0.11.1.0
>
>
> Kafka Streams currently allows you to specify a replication factor for 
> changelog and repartition topics that it creates.  It should also allow you 
> to specify any other TopicConfig. These should be used as defaults when 
> creating Internal topics. The defaults should be overridden by any configs 
> provided by the StateStoreSuppliers etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
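
The default/override semantics described in KAFKA-3741 above can be sketched as a plain config merge (illustrative only; `effective_topic_config` is not Kafka's actual API):

```python
def effective_topic_config(defaults: dict, overrides: dict) -> dict:
    # Topic-level defaults from configuration apply first; any config supplied
    # for a particular internal topic (e.g. by a state store) takes precedence.
    merged = dict(defaults)
    merged.update(overrides)
    return merged
```

For example, a default `cleanup.policy` would survive while a per-store `retention.ms` replaces the default value.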


Re: [DISCUSS] KIP-178: Change ReassignPartitionsCommand to use AdminClient

2017-07-21 Thread Tom Bentley
Due to the incorrect initial numbering of this KIP, discussion has moved to
this thread:
http://mail-archives.apache.org/mod_mbox/kafka-dev/201707.mbox/%3CCAMd5Ysy3bY7Fq2xA3sk6BWW6%3D9TjT4%2Bya7mufRf6Wgre-S-UPg%40mail.gmail.com%3E

On 20 July 2017 at 16:47, Ismael Juma  wrote:

> Hi Tom,
>
> Yes, a poll-based approach sounds good since reassignment can take a long
> time. I even think that it should be manual. That is, the user should run a
> command to ask for the status of the rebalance instead of the tool doing it
> automatically.
>
> Ismael
>
> On Thu, Jul 20, 2017 at 4:12 AM, Tom Bentley 
> wrote:
>
> > Hi Ismael,
> >
> > I've been working on the progress reporting assuming that it would be
> > acceptable for the ReassignPartitionsCommand to poll the AdminClient API
> > (and in turn the AdminClient API to poll the broker) in order to report
> > progress in an interactive way.
> >
> > The alternative would, of course, be for the broker to push notify the
> > AdminClient when it became aware of changes in progress. I didn't think
> > this is what you meant as it would be a departure from the Kafka norm of
> > request-response, but thought it worthwhile to check before I spend any
> > more time on a polling-based approach.
> >
> > Thanks,
> >
> > Tom
> >
> >
> >
> > On 19 July 2017 at 16:08, Tom Bentley  wrote:
> >
> > > Ah, thank you! I took the number from the "Next KIP Number: 178" on the
> > > KIP index and didn't check the tables. So this is now KIP-179. The old
> > link
> > > will point you to the right place.
> > >
> > > On 19 July 2017 at 15:55, Ismael Juma  wrote:
> > >
> > >> One more thing, it looks like there is already a KIP-178:
> > >>
> > >> KIP-178: Size-based log directory selection strategy
> > >>
> > >> Ismael
> > >>
> > >> On Wed, Jul 19, 2017 at 7:05 AM, Tom Bentley 
> > >> wrote:
> > >>
> > >> > OK, I will work on adding support for this to the KIP, with the
> > >> intention
> > >> > of a two part implementation.
> > >> >
> > >> > On 19 July 2017 at 14:59, Ismael Juma  wrote:
> > >> >
> > >> > > Hi Tom,
> > >> > >
> > >> > > It's fine for the tool not to have this functionality from the
> > start.
> > >> > > However, since we're adding new Kafka protocol APIs, we need to
> > >> consider
> > >> > > some of these details to ensure we're building towards the end
> > state,
> > >> if
> > >> > > that makes sense. Protocol APIs are used by multiple clients, so
> > >> there is
> > >> > > value in thinking ahead a bit when it comes to the design. The
> > >> > > implementation can often be done in stages.
> > >> > >
> > >> > > Does that make sense?
> > >> > >
> > >> > > Ismael
> > >> > >
> > >> > > On Wed, Jul 19, 2017 at 6:23 AM, Tom Bentley <
> t.j.bent...@gmail.com
> > >
> > >> > > wrote:
> > >> > >
> > >> > > > Hi Ismael,
> > >> > > >
> > >> > > > Answers in-line:
> > >> > > >
> > >> > > > 1. Have you considered how progress would be reported? Partition
> > >> > > > > reassignment can take a long time and it would be good to
> have a
> > >> > > > mechanism
> > >> > > > > for progress reporting.
> > >> > > > >
> > >> > > >
> > >> > > > The ReassignPartitionsCommand doesn't currently have a mechanism
> > to
> > >> > track
> > >> > > > progress. All you can do at the moment is initiate a
> reassignment
> > >> (with
> > >> > > > --execute), and later check whether the assignment is in the
> state
> > >> you
> > >> > > > asked for (with --verify). I agree it would be nice to be able
> to
> > >> track
> > >> > > > progress.
> > >> > > >
> > >> > > > This will be the first 'big' bit of work I've done on Kafka, so
> I
> > >> would
> > >> > > > prefer to limit the scope of this KIP where possible. That
> said, I
> > >> > > suppose
> > >> > > > it could be done by having receiving controllers publish their
> > >> progress
> > >> > > to
> > >> > > > ZooKeeper, and adding Protocol and AdminClient API for getting
> > this
> > >> > > > information. If you're keen on this I can certainly modify the
> KIP
> > >> to
> > >> > add
> > >> > > > this.
> > >> > > >
> > >> > > > Alternatively I could write a second KIP to add this ability.
> What
> > >> > other
> > >> > > > long running tasks are there for which we'd like the ability to
> > >> report
> > >> > > > progress? If there are others it might be possible to come up
> > with a
> > >> > > common
> > >> > > > mechanism.
> > >> > > >
> > >> > > >
> > >> > > > > 2. Removals can only happen in major releases. In your
> example,
> > >> the
> > >> > > > removal
> > >> > > > > could only happen in 2.0.0.
> > >> > > > >
> > >> > > >
> > >> > > > OK, I'll update the KIP.
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> > >
> >
>


[DISCUSS] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-07-21 Thread Tom Bentley
Aside: I've started this new DISCUSS thread for KIP-179 since the original
one had the incorrect KIP number 178. The original thread can be found
here:
http://mail-archives.apache.org/mod_mbox/kafka-dev/201707.mbox/%3cCAMd5YszudP+-8z5KTbFh6JscT2p4xFi1=vzwwx+5dccpxry...@mail.gmail.com%3e

I've just updated KIP-179 to support Ismael's request for the command to be
able to support progress reporting of an ongoing partition reassignment.

I'll call out two things which I'm not sure about since I don't yet have
much experience of Kafka being used operationally:

1. As currently constructed the --progress option could report an overall
progress percentage, per-partition percentages and errors. It cannot
provide any kind of time-to-completion estimate. Part of me is loath to do
this, as I'm sure we all remember file transfer dialogs that provide
amusing/baffling time-to-completion estimates. So it might be hard to do
_well_. On the other hand I expect the thing people will be interested in
will often be "when will it be finished?"

2. There is no option for the tool to wait for reassignment completion. I
can imagine users might want to script something to happen after the
reassignment is complete, and without some kind of --wait option they will
have to poll for completion "manually". Having a --wait option and putting
this polling in the tool means we have a lot more control over how often
such polling happens.
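To make the polling idea concrete, here is a rough sketch of what a --wait loop inside the tool could look like. This is purely illustrative — the class and method names are hypothetical and not part of the proposed AdminClient API, and the "fetch current assignment" step is abstracted behind a supplier since the actual lookup would go through the new protocol API:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

public class ReassignmentWaitSketch {

    // A reassignment is complete once every partition's current replica list
    // matches the target list (compared order-insensitively here).
    static boolean isComplete(Map<String, List<Integer>> target,
                              Map<String, List<Integer>> current) {
        for (Map.Entry<String, List<Integer>> e : target.entrySet()) {
            List<Integer> cur = current.get(e.getKey());
            if (cur == null || !new HashSet<>(cur).equals(new HashSet<>(e.getValue())))
                return false;
        }
        return true;
    }

    // A --wait style loop: the tool owns the poll interval and timeout,
    // rather than every user script reinventing its own polling.
    static boolean waitForCompletion(Map<String, List<Integer>> target,
                                     Supplier<Map<String, List<Integer>>> fetchCurrent,
                                     long timeoutMs,
                                     long pollIntervalMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (isComplete(target, fetchCurrent.get()))
                return true;
            Thread.sleep(pollIntervalMs);
        }
        return false;
    }
}
```

The point of centralising the loop is that the poll interval becomes a tool concern, so a sensible default (and back-off, if needed later) can be chosen once instead of per script.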

The KIP is available here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-179+-+Change+ReassignPartitionsCommand+to+use+AdminClient

Thanks,

Tom


Re: [VOTE] KIP-173: Add prefix to StreamsConfig to enable setting default internal topic configs

2017-07-21 Thread Damian Guy
Sorry, I mentioned Guozhang twice in the vote.

Actual votes:
3 binding (Guozhang, Damian, Ismael)
2 non-binding (Eno, Matthias)

Thanks,
Damian

On Fri, 21 Jul 2017 at 10:00 Damian Guy  wrote:

> Hi,
> The Vote for this KIP is now closed.
> KIP-173 has been accepted with
> 3 binding (Guozhang, Damian, Ismael)
> 2 non-binding (Eno, Gouzhang)
>
> Thanks,
> Damian
>
> On Thu, 20 Jul 2017 at 14:29 Ismael Juma  wrote:
>
>> Yes, I know it is a separate thread in the archives, it's just that
>> Gmail's
>> algorithm decided to place it under the same thread. This is a recurring
>> problem and it affects discoverability of vote threads for Gmail users (I
>> assume there are many such users).
>>
>> Ismael
>>
>> On Thu, Jul 20, 2017 at 1:32 AM, Damian Guy  wrote:
>>
>> > Thanks Ismael, AFAICT it is already a separate thread:
>> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201707.mbox/%
>> > 3ccajiktexrb1hbojrm7tmy1hdm_+t+psrz_6pawhnlbokp46o...@mail.gmail.com%3e
>> >
>> > On Wed, 19 Jul 2017 at 23:24 Guozhang Wang  wrote:
>> >
>> > > Per 1. I suggested exposing the constant since we are doing so for
>> > consumer
>> > > and producer configs prefix as well (CONSUMER_PREFIX, etc).
>> > >
>> > > Guozhang
>> > >
>> > > On Wed, Jul 19, 2017 at 6:01 AM, Ismael Juma 
>> wrote:
>> > >
>> > > > Thanks for the KIP, Damian. +1 (binding). A couple of minor
>> comments:
>> > > >
>> > > > 1. Do we need to expose the TOPIC_PREFIX constant?
>> > > > 2. The vote thread ended up inside the discuss thread in Gmail. It
>> may
>> > be
>> > > > worth sending another email to make it clear that the vote is
>> ongoing.
>> > > You
>> > > > can link back to this thread so that the existing votes are still
>> > > counted.
>> > > >
>> > > > Ismael
>> > > >
>> > > > On Mon, Jul 17, 2017 at 4:43 AM, Damian Guy 
>> > > wrote:
>> > > >
>> > > > > Hi,
>> > > > >
>> > > > > I'd like to kick off the vote for KIP-173:
>> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > > > 173%3A+Add+prefix+to+StreamsConfig+to+enable+
>> > setting+default+internal+
>> > > > > topic+configs
>> > > > >
>> > > > > A PR for this can be found here: https://github.com/apache/
>> > > > kafka/pull/3459
>> > > > >
>> > > > > Thanks,
>> > > > > Damian
>> > > > >
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > -- Guozhang
>> > >
>> >
>>
>
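As a rough illustration of the prefixing pattern discussed in this thread (mirroring the existing CONSUMER_PREFIX/PRODUCER_PREFIX approach), the sketch below shows how a "topic." prefix might be applied and stripped. The constant and helper names here are illustrative assumptions, not a quote of the final KIP-173 API:

```java
import java.util.HashMap;
import java.util.Map;

public class TopicPrefixSketch {
    // Assumed prefix for default internal-topic configs, analogous to the
    // existing consumer/producer config prefixes in StreamsConfig.
    static final String TOPIC_PREFIX = "topic.";

    // Prefix a plain topic-level config key, e.g. "segment.bytes"
    // becomes "topic.segment.bytes".
    static String topicPrefix(String prop) {
        return TOPIC_PREFIX + prop;
    }

    // Strip the prefix back off when building the configs passed to
    // internal (changelog/repartition) topic creation; non-prefixed
    // entries are ignored.
    static Map<String, Object> extractTopicConfigs(Map<String, Object> streamsConfigs) {
        Map<String, Object> topicConfigs = new HashMap<>();
        for (Map.Entry<String, Object> e : streamsConfigs.entrySet()) {
            if (e.getKey().startsWith(TOPIC_PREFIX))
                topicConfigs.put(e.getKey().substring(TOPIC_PREFIX.length()), e.getValue());
        }
        return topicConfigs;
    }
}
```

Exposing the prefix via a helper like this (rather than only a raw constant) keeps user code from hand-concatenating strings, which was part of the rationale for the consumer/producer prefix helpers.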


Re: [VOTE] KIP-173: Add prefix to StreamsConfig to enable setting default internal topic configs

2017-07-21 Thread Damian Guy
Hi,
The Vote for this KIP is now closed.
KIP-173 has been accepted with
3 binding (Guozhang, Damian, Ismael)
2 non-binding (Eno, Gouzhang)

Thanks,
Damian

On Thu, 20 Jul 2017 at 14:29 Ismael Juma  wrote:

> Yes, I know it is a separate thread in the archives, it's just that Gmail's
> algorithm decided to place it under the same thread. This is a recurring
> problem and it affects discoverability of vote threads for Gmail users (I
> assume there are many such users).
>
> Ismael
>
> On Thu, Jul 20, 2017 at 1:32 AM, Damian Guy  wrote:
>
> > Thanks Ismael, AFAICT it is already a separate thread:
> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201707.mbox/%
> > 3ccajiktexrb1hbojrm7tmy1hdm_+t+psrz_6pawhnlbokp46o...@mail.gmail.com%3e
> >
> > On Wed, 19 Jul 2017 at 23:24 Guozhang Wang  wrote:
> >
> > > Per 1. I suggested exposing the constant since we are doing so for
> > consumer
> > > and producer configs prefix as well (CONSUMER_PREFIX, etc).
> > >
> > > Guozhang
> > >
> > > On Wed, Jul 19, 2017 at 6:01 AM, Ismael Juma 
> wrote:
> > >
> > > > Thanks for the KIP, Damian. +1 (binding). A couple of minor comments:
> > > >
> > > > 1. Do we need to expose the TOPIC_PREFIX constant?
> > > > 2. The vote thread ended up inside the discuss thread in Gmail. It
> may
> > be
> > > > worth sending another email to make it clear that the vote is
> ongoing.
> > > You
> > > > can link back to this thread so that the existing votes are still
> > > counted.
> > > >
> > > > Ismael
> > > >
> > > > On Mon, Jul 17, 2017 at 4:43 AM, Damian Guy 
> > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I'd like to kick off the vote for KIP-173:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 173%3A+Add+prefix+to+StreamsConfig+to+enable+
> > setting+default+internal+
> > > > > topic+configs
> > > > >
> > > > > A PR for this can be found here: https://github.com/apache/
> > > > kafka/pull/3459
> > > > >
> > > > > Thanks,
> > > > > Damian
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #1835

2017-07-21 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Make Kafka Connect step in quickstart more Windows friendly

--
[...truncated 972.70 KB...]

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

unit.kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

unit.kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest 
STARTED

unit.kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest 
PASSED

unit.kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion STARTED

unit.kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 PASSED

unit.kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted STARTED

unit.kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted PASSED

unit.kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset STARTED

unit.kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset PASSED

unit.kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest 
STARTED

unit.kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest 
PASSED

unit.kafka.server.KafkaApisTest > 

Re: Kafka Connect Parquet Support?

2017-07-21 Thread Ewen Cheslack-Postava
Theoretically yes, we would want to support this in Confluent's S3
connector. One of the stumbling blocks is that apparently the Parquet code
is somewhat tied to HDFS currently which causes problems when you're not
using the HDFS S3 connectivity. See, e.g.,
https://github.com/confluentinc/kafka-connect-storage-cloud/issues/26, on
the cloud storage/S3 connector.

On Wed, May 31, 2017 at 11:17 AM, Colin McCabe  wrote:

> Hi Clayton,
>
> It seems like an interesting improvement.  Given that Parquet is
> columnar, you would expect some space savings.  I guess the big question
> is, would each batch of records become a single parquet file?  And how
> does this integrate with the existing logic, which might assume that
> each record can be serialized on its own?
>
> best,
> Colin
>
>
> On Sun, May 7, 2017, at 02:36, Clayton Wohl wrote:
> > With the Kafka Connect S3 sink, I can choose Avro or JSON output format.
> > Is
> > there any chance that Parquet will be supported?
> >
> > For record-at-a-time processing, Parquet isn't a good fit. But for
> > reading/writing batches of records, which is what the Kafka Connect sink
> > writes, Parquet is generally better than Avro.
> >
> > Would attempting writing support for this be wise to try or not?
>


Jenkins build is back to normal : kafka-trunk-jdk7 #2548

2017-07-21 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-5089) JAR mismatch in KafkaConnect leads to NoSuchMethodError in HDP 2.6

2017-07-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5089.
--
Resolution: Fixed

Going to close this since it should be resolved by 
[KIP-146|https://cwiki.apache.org/confluence/display/KAFKA/KIP-146+-+Classloading+Isolation+in+Connect]
 which provides better classloader isolation. Please reopen if this is still an 
issue even after that feature was added.

> JAR mismatch in KafkaConnect leads to NoSuchMethodError in HDP 2.6
> --
>
> Key: KAFKA-5089
> URL: https://issues.apache.org/jira/browse/KAFKA-5089
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.1
> Environment: HDP 2.6, Centos 7.3.1611, 
> kafka-0.10.1.2.6.0.3-8.el6.noarch
>Reporter: Christoph Körner
>
> When I follow the steps on the Getting Started Guide of KafkaConnect 
> (https://kafka.apache.org/quickstart#quickstart_kafkaconnect), it throws an 
> NoSuchMethodError error. 
> {code:borderStyle=solid}
> [root@devbox kafka-broker]# ./bin/connect-standalone.sh 
> config/connect-standalone.properties config/connect-file-source.properties 
> config/ connect-file-sink.properties
> [2017-04-19 14:38:36,583] INFO StandaloneConfig values:
> access.control.allow.methods =
> access.control.allow.origin =
> bootstrap.servers = [localhost:6667]
> internal.key.converter = class 
> org.apache.kafka.connect.json.JsonConverter
> internal.value.converter = class 
> org.apache.kafka.connect.json.JsonConverter
> key.converter = class org.apache.kafka.connect.json.JsonConverter
> offset.flush.interval.ms = 1
> offset.flush.timeout.ms = 5000
> offset.storage.file.filename = /tmp/connect.offsets
> rest.advertised.host.name = null
> rest.advertised.port = null
> rest.host.name = null
> rest.port = 8083
> task.shutdown.graceful.timeout.ms = 5000
> value.converter = class org.apache.kafka.connect.json.JsonConverter
>  (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:180)
> [2017-04-19 14:38:36,756] INFO Logging initialized @714ms 
> (org.eclipse.jetty.util.log:186)
> [2017-04-19 14:38:36,871] INFO Kafka Connect starting 
> (org.apache.kafka.connect.runtime.Connect:52)
> [2017-04-19 14:38:36,872] INFO Herder starting 
> (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:70)
> [2017-04-19 14:38:36,872] INFO Worker starting 
> (org.apache.kafka.connect.runtime.Worker:114)
> [2017-04-19 14:38:36,873] INFO Starting FileOffsetBackingStore with file 
> /tmp/connect.offsets 
> (org.apache.kafka.connect.storage.FileOffsetBackingStore:60)
> [2017-04-19 14:38:36,877] INFO Worker started 
> (org.apache.kafka.connect.runtime.Worker:119)
> [2017-04-19 14:38:36,878] INFO Herder started 
> (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:72)
> [2017-04-19 14:38:36,878] INFO Starting REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer:98)
> [2017-04-19 14:38:37,077] INFO jetty-9.2.15.v20160210 
> (org.eclipse.jetty.server.Server:327)
> [2017-04-19 14:38:37,154] WARN FAILED 
> o.e.j.s.ServletContextHandler@3c46e67a{/,null,STARTING}: 
> java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map; 
> (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
> java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map;
> at 
> org.glassfish.jersey.server.ApplicationHandler.(ApplicationHandler.java:331)
> at 
> org.glassfish.jersey.servlet.WebComponent.(WebComponent.java:392)
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:177)
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:369)
> at javax.servlet.GenericServlet.init(GenericServlet.java:241)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:616)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:396)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>   

Re: [VOTE] KIP-167 (Addendum): Add interface for the state store restoration process

2017-07-21 Thread Matthias J. Sax
+1

On 7/20/17 4:22 AM, Bill Bejeck wrote:
> Hi,
> 
> After working on the PR for this KIP I discovered that we need to add an
> additional parameter (TopicPartition) to the StateRestoreListener interface
> methods.
> 
> The addition of the TopicPartition is required as the StateRestoreListener
> is for the entire application, thus all tasks with recovering state stores
> call the same listener instance.  The TopicPartition is needed to
> disambiguate the progress of the state store recovery.
> 
> For those that have voted before, please review the updated KIP
> 
> and
> re-vote.
> 
> Thanks,
> Bill
> 


