Build failed in Jenkins: kafka-trunk-jdk7 #2720

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: close iterator on doc example

--
[...truncated 921.81 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 

Re: [DISCUSS] KIP-91 Provide Intuitive User Timeouts in The Producer

2017-09-06 Thread Sumant Tambe
I'm not sure whether it's a good idea to have two different ways to control
expiration. One option, as you suggested, is to expire batches based on
whichever happens first (delivery.timeout.ms is exceeded or retries are
exhausted). The second option is to effectively ignore retries, even if it's
set to a very low value, and just treat it as MAX_INT regardless of user config.

On second thought, expiring after retries may make sense in cases where the
broker responds *quickly* with some exception to produce requests. From the
client's point of view, only (retry.backoff.ms * retries) of time passes, as
opposed to the much slower (request.timeout.ms + retry.backoff.ms) * retries
when a broker is down or there's a network problem. In the former case, a
delivery.timeout.ms of 120 seconds may amount to hundreds of retries, which
feels quite unnecessary.
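The arithmetic above can be sketched as a back-of-the-envelope helper. This is purely illustrative (the class and method names are invented, not producer code), assuming the defaults of the time: retry.backoff.ms=100 and request.timeout.ms=30000.

```java
// Hypothetical helper illustrating the two failure modes described above.
public class RetryBudget {

    // Roughly how many retries fit inside the delivery timeout when each
    // attempt takes perAttemptMs milliseconds before failing.
    public static long retriesWithin(long deliveryTimeoutMs, long perAttemptMs) {
        return deliveryTimeoutMs / perAttemptMs;
    }

    public static void main(String[] args) {
        long retryBackoffMs = 100;        // retry.backoff.ms default
        long requestTimeoutMs = 30_000;   // request.timeout.ms default
        long deliveryTimeoutMs = 120_000; // proposed delivery.timeout.ms default

        // Broker fails fast with a retriable error: roughly 1200 attempts.
        System.out.println(retriesWithin(deliveryTimeoutMs, retryBackoffMs));

        // Broker down or network problem: only a handful of attempts fit.
        System.out.println(retriesWithin(deliveryTimeoutMs,
                requestTimeoutMs + retryBackoffMs));
    }
}
```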

-Sumant

On 6 September 2017 at 21:27, Becket Qin  wrote:

> Hey Sumant,
>
> I agree with Jun that we can set the default value of retries to MAX_INT.
>
> Initially I was also thinking that retries could be deprecated. But on
> second thought, I feel it may not be necessary to deprecate retries. With
> the newly added delivery.timeout.ms, the producer will expire a batch
> either when delivery.timeout.ms is hit or when retries have been exhausted,
> whichever comes first. Different users may choose different flavors. It
> introduces one more option, but that seems reasonable; in some cases, users
> may want to retry at least once before expiring a batch.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Wed, Sep 6, 2017 at 3:14 PM, Jun Rao  wrote:
>
> > Hi, Sumant,
> >
> > The diagram in the wiki seems to imply that delivery.timeout.ms doesn't
> > include the batching time.
> >
> > For retries, probably we can just default it to MAX_INT?
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Wed, Sep 6, 2017 at 10:26 AM, Sumant Tambe  wrote:
> >
> > > 120 seconds default sounds good to me. Throwing ConfigException instead
> > of
> > > WARN is fine. Added clarification that the producer waits the full
> > > request.timeout.ms for the in-flight request. This implies that user
> > might
> > > be notified of batch expiry while a batch is still in-flight.
> > >
> > > I don't recall if we discussed our point of view that existing configs
> > like
> > > retries become redundant/deprecated with this feature. IMO, retries
> > config
> > > becomes meaningless due to the possibility of incorrect configs like
> > > delivery.timeout.ms > linger.ms + retries * (request.timeout.ms +
> > > retry.backoff.ms); should retries basically be interpreted as MAX_INT?
> > > What will be the default?
> > >
> > > So do we ignore retries config or throw a ConfigException if weirdness
> > like
> > > above is detected?
> > >
> > > -Sumant
> > >
> > >
> > > On 5 September 2017 at 17:34, Ismael Juma  wrote:
> > >
> > > > Thanks for updating the KIP, Sumant. A couple of points:
> > > >
> > > > 1. I think the default for delivery.timeout.ms should be higher than
> > 30
> > > > seconds given that we previously would reset the clock once the batch
> > was
> > > > sent. The value should be large enough that batches are not expired
> due
> > > to
> > > > expected events like a new leader being elected due to broker
> failure.
> > > > Would it make sense to use a conservative value like 120 seconds?
> > > >
> > > > 2. The producer currently throws an exception for configuration
> > > > combinations that don't make sense. We should probably do the same
> here
> > > for
> > > > consistency (the KIP currently proposes a log warning).
> > > >
> > > > 3. We should mention that we will not cancel in flight requests until
> > the
> > > > request timeout even though we'll expire the batch early if needed.
> > > >
> > > > I think we should start the vote tomorrow so that we have a chance of
> > > > hitting the KIP freeze for 1.0.0.
> > > >
> > > > Ismael
> > > >
> > > > On Wed, Sep 6, 2017 at 1:03 AM, Sumant Tambe 
> > wrote:
> > > >
> > > > > I've updated the KIP-91 writeup
> > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer>
> > > > > to capture some of the discussion here. Please confirm if it's
> > > > sufficiently
> > > > > accurate. Feel free to edit it if you think some explanation can be
> > > > better
> > > > > and has been agreed upon here.
> > > > >
> > > > > How do we proceed from here?
> > > > >
> > > > > -Sumant
> > > > >
> > > > > On 30 August 2017 at 12:59, Jun Rao  wrote:
> > > > >
> > > > > > Hi, Jiangjie,
> > > > > >
> > > > > > I misunderstood Jason's approach earlier. It does seem to be a
> > good
> > > > one.
> > > > > > We still need to calculate the selector timeout based on the
> > > remaining
> > > > > > delivery.timeout.ms to call the callback on time, but we can
> > always
> > > > wait
> > > > > > for an inflight request based on request.timeout.ms.
> 
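The thread above debates whether an inconsistent timeout configuration should raise a ConfigException or only log a warning. A minimal sketch of such a check follows; the names are invented, and IllegalArgumentException stands in for Kafka's ConfigException so the snippet is self-contained.

```java
// Illustrative config sanity check; not the actual producer validation logic.
public class TimeoutConfigCheck {

    // A batch can never be delivered if the delivery timeout is shorter than
    // the linger time plus a single full request attempt.
    public static void validate(long deliveryTimeoutMs, long lingerMs,
                                long requestTimeoutMs) {
        if (deliveryTimeoutMs < lingerMs + requestTimeoutMs)
            throw new IllegalArgumentException(
                "delivery.timeout.ms should be >= linger.ms + request.timeout.ms");
    }

    public static boolean isValid(long deliveryTimeoutMs, long lingerMs,
                                  long requestTimeoutMs) {
        try {
            validate(deliveryTimeoutMs, lingerMs, requestTimeoutMs);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The proposed defaults are internally consistent.
        System.out.println(isValid(120_000, 0, 30_000));
    }
}
```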

[GitHub] kafka pull request #3714: close iterator on doc example

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3714


---


Jenkins build is back to normal : kafka-trunk-jdk8 #1984

2017-09-06 Thread Apache Jenkins Server
See 




Re: A quick question on StickyAssignor

2017-09-06 Thread Vahid S Hashemian
Hi Hu Xi,

This is a typo. It should read

if a consumer A has 2+ fewer topic partitions assigned to it compared to 
another consumer B, none of the topic partitions assigned to A can be 
assigned to B.

For this assignor, the partition movement should not widen the existing 
balance gap among consumers.

Thanks for catching it. I'll fix the typo in the KIP to avoid further 
confusion.

--Vahid



From:   Hu Xi 
To: "dev@kafka.apache.org" 
Date:   09/06/2017 08:42 PM
Subject: A quick question on StickyAssignor



Hello Dev,
I am a little confused about the statement below in KIP-54 (StickyAssignor 
things):


  *   if a consumer A has 2+ fewer topic partitions assigned to it 
compared to another consumer B, none of the topic partitions assigned to B 
can be assigned to A.

Is it a typo, or am I misunderstanding something? Why can't the partitions 
assigned to B be reassigned to A in such a situation, when A has fewer 
partitions than B? What is the thinking behind this design? Thanks.
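The corrected rule can be expressed as a small predicate. This is a hypothetical illustration of the constraint only, with invented names, not the actual StickyAssignor code.

```java
// Encodes only the corrected constraint quoted above: a partition may not be
// moved from consumer A to consumer B if B already has 2+ more partitions
// than A, because such a move would widen the existing balance gap.
public class StickyBalanceRule {

    public static boolean mayMove(int sourceLoad, int destinationLoad) {
        return destinationLoad - sourceLoad < 2;
    }

    public static void main(String[] args) {
        System.out.println(mayMove(1, 3)); // disallowed: widens the gap
        System.out.println(mayMove(3, 1)); // allowed: moves toward balance
    }
}
```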







Re: [DISCUSS] KIP-91 Provide Intuitive User Timeouts in The Producer

2017-09-06 Thread Becket Qin
Hey Sumant,

I agree with Jun that we can set the default value of retries to MAX_INT.

Initially I was also thinking that retries could be deprecated. But on
second thought, I feel it may not be necessary to deprecate retries. With
the newly added delivery.timeout.ms, the producer will expire a batch
either when delivery.timeout.ms is hit or when retries have been exhausted,
whichever comes first. Different users may choose different flavors. It
introduces one more option, but that seems reasonable; in some cases, users
may want to retry at least once before expiring a batch.

Thanks,

Jiangjie (Becket) Qin

On Wed, Sep 6, 2017 at 3:14 PM, Jun Rao  wrote:

> Hi, Sumant,
>
> The diagram in the wiki seems to imply that delivery.timeout.ms doesn't
> include the batching time.
>
> For retries, probably we can just default it to MAX_INT?
>
> Thanks,
>
> Jun
>
>
> On Wed, Sep 6, 2017 at 10:26 AM, Sumant Tambe  wrote:
>
> > 120 seconds default sounds good to me. Throwing ConfigException instead
> of
> > WARN is fine. Added clarification that the producer waits the full
> > request.timeout.ms for the in-flight request. This implies that user
> might
> > be notified of batch expiry while a batch is still in-flight.
> >
> > I don't recall if we discussed our point of view that existing configs
> like
> > retries become redundant/deprecated with this feature. IMO, retries
> config
> > becomes meaningless due to the possibility of incorrect configs like
> > delivery.timeout.ms > linger.ms + retries * (request.timeout.ms +
> > retry.backoff.ms); should retries basically be interpreted as MAX_INT?
> > What will be the default?
> >
> > So do we ignore retries config or throw a ConfigException if weirdness
> like
> > above is detected?
> >
> > -Sumant
> >
> >
> > On 5 September 2017 at 17:34, Ismael Juma  wrote:
> >
> > > Thanks for updating the KIP, Sumant. A couple of points:
> > >
> > > 1. I think the default for delivery.timeout.ms should be higher than
> 30
> > > seconds given that we previously would reset the clock once the batch
> was
> > > sent. The value should be large enough that batches are not expired due
> > to
> > > expected events like a new leader being elected due to broker failure.
> > > Would it make sense to use a conservative value like 120 seconds?
> > >
> > > 2. The producer currently throws an exception for configuration
> > > combinations that don't make sense. We should probably do the same here
> > for
> > > consistency (the KIP currently proposes a log warning).
> > >
> > > 3. We should mention that we will not cancel in flight requests until
> the
> > > request timeout even though we'll expire the batch early if needed.
> > >
> > > I think we should start the vote tomorrow so that we have a chance of
> > > hitting the KIP freeze for 1.0.0.
> > >
> > > Ismael
> > >
> > > On Wed, Sep 6, 2017 at 1:03 AM, Sumant Tambe 
> wrote:
> > >
> > > > I've updated the KIP-91 writeup
> > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer>
> > > > to capture some of the discussion here. Please confirm if it's
> > > sufficiently
> > > > accurate. Feel free to edit it if you think some explanation can be
> > > better
> > > > and has been agreed upon here.
> > > >
> > > > How do we proceed from here?
> > > >
> > > > -Sumant
> > > >
> > > > On 30 August 2017 at 12:59, Jun Rao  wrote:
> > > >
> > > > > Hi, Jiangjie,
> > > > >
> > > > > I misunderstood Jason's approach earlier. It does seem to be a
> good
> > > one.
> > > > > We still need to calculate the selector timeout based on the
> > remaining
> > > > > delivery.timeout.ms to call the callback on time, but we can
> always
> > > wait
> > > > > for an inflight request based on request.timeout.ms.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > > On Tue, Aug 29, 2017 at 5:16 PM, Becket Qin 
> > > > wrote:
> > > > >
> > > > > > Yeah, I think expiring a batch but still wait for the response is
> > > > > probably
> > > > > > reasonable given the result is not guaranteed anyways.
> > > > > >
> > > > > > @Jun,
> > > > > >
> > > > > > I think the frequent PID reset may still be possible if we do not
> > > wait
> > > > > for
> > > > > > the in-flight response to return. Consider two partitions p0 and
> > p1,
> > > > the
> > > > > > deadline of the batches for p0 are T + 10, T + 30, T + 50... The
> > > > deadline
> > > > > > of the batches for p1 are T + 20, T + 40, T + 60... Assuming each
> > > > request
> > > > > > takes more than 10 ms to get the response. The following sequence
> > may
> > > > be
> > > > > > possible:
> > > > > >
> > > > > > T: PID0 send batch0_p0(PID0), batch0_p1(PID0)
> > > > > > T + 10: PID0 expires batch0_p0(PID0), without resetting PID,
> sends
> > > > > > batch1_p0(PID0) and batch0_p1(PID0, retry)
> > > > > > T + 20: PID0 expires 

A quick question on StickyAssignor

2017-09-06 Thread Hu Xi
Hello Dev,
I am a little confused about the statement below in KIP-54 (StickyAssignor 
things):


  *   if a consumer A has 2+ fewer topic partitions assigned to it compared to 
another consumer B, none of the topic partitions assigned to B can be assigned 
to A.

Is it a typo, or am I misunderstanding something? Why can't the partitions 
assigned to B be reassigned to A in such a situation, when A has fewer 
partitions than B? What is the thinking behind this design? Thanks.



[GitHub] kafka pull request #3667: KAFKA-5606: Review consumer's RequestFuture usage ...

2017-09-06 Thread jedichien
Github user jedichien closed the pull request at:

https://github.com/apache/kafka/pull/3667


---


Jenkins build is back to normal : kafka-trunk-jdk7 #2718

2017-09-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #1983

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Log encountered exception during rebalance

--
[...truncated 2.03 MB...]

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testSingleOption STARTED

org.apache.kafka.common.security.JaasContextTest > testSingleOption PASSED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes STARTED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes PASSED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
STARTED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
PASSED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue STARTED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameAndFallback STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameAndFallback PASSED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionName STARTED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionName PASSED

org.apache.kafka.common.security.JaasContextTest > testControlFlag STARTED

org.apache.kafka.common.security.JaasContextTest > testControlFlag PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 

[GitHub] kafka pull request #3796: MINOR: KIP-138 renaming of string names

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3796


---


[GitHub] kafka pull request #3769: MINOR: Log encountered exception during rebalance

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3769


---


Jenkins build is back to normal : kafka-trunk-jdk8 #1982

2017-09-06 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-192 - Provide cleaner semantics when idempotence is enabled

2017-09-06 Thread Apurva Mehta
Hi Jason,

Thanks for the comments:

   1. I have also updated the KIP to indicate that the
   DuplicateSequenceException could be the new error code returned in the
   producer callback as a result of tightening up the semantics of the
   OutOfOrderSequenceException.
   2. I think returning the message format version in the TopicMetadata of
   the MetadataResponse is a great idea. It is general enough, and could be put
   to other uses in the future. Also, it is more efficient to send the right
   message format to the broker to begin with, so that's another thing going
   for this solution. I have updated the KIP to reflect this change.

Tom: I have updated the KIP so that the 'safe' value of 'enable.idempotence'
is converted to 'requested' everywhere.

Thanks,
Apurva



On Tue, Sep 5, 2017 at 11:17 AM, Jason Gustafson  wrote:

> The proposal looks good. Two minor comments:
>
> 1. Can we call out how we handle the duplicate case? This is a change in
> behavior since we currently raise OutOfOrderSequence in this case.
>
> 2. Instead of passing through `idempotenceLevel` in the ProduceRequest, I
> wonder if we should have a field for the minimum required message format.
> When using enable.idempotence=required, we could set the minimum required
> version to v2. For enable.idempotence=requested, we could use v0. The
> advantage is that we may find other uses for a more general field in the
> future. Alternatively, maybe we really should be returning the message
> format version of each topic in the TopicMetadata response. A nice bonus of
> doing so is that it gives the producer the ability to craft the right
> format version ahead of time and avoid the need for conversion on the
> broker.
>
> Thanks,
> Jason
>
> On Tue, Aug 29, 2017 at 4:32 PM, Apurva Mehta  wrote:
>
> > Hi,
> >
> > In the discussion of KIP-185 (enable idempotence by default), we
> discovered
> > some shortcomings of the existing idempotent producer implementation.
> > Fixing these issues requires changes to the ProduceRequest and
> > ProduceResponse protocols as well as changes to the values of the
> > 'enable.idempotence' producer config.
> >
> > Hence, I split off those changes into another KIP so as to decouple the
> two
> > issues. Please have a look at my follow up KIP below:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 192+%3A+Provide+cleaner+semantics+when+idempotence+is+enabled
> >
> > KIP-185 depends on KIP-192, and I hope to make progress on the latter
> > independently.
> >
> > Thanks,
> > Apurva
> >
>
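The 'requested'/'required' values of enable.idempotence discussed above could behave as in this hypothetical sketch. All names are invented; it assumes, per the thread, that the producer learns each topic's message format version from TopicMetadata and that idempotent produce requires message format v2.

```java
// Hypothetical decision sketch for the proposed enable.idempotence semantics:
// 'required' fails fast when a topic cannot support idempotence, while
// 'requested' silently degrades to non-idempotent sends.
public class IdempotenceMode {

    public static boolean enableFor(String mode, int topicFormatVersion) {
        boolean supported = topicFormatVersion >= 2; // idempotence needs v2 records
        switch (mode) {
            case "required":
                if (!supported)
                    throw new IllegalStateException(
                        "enable.idempotence=required but topic message format is v"
                            + topicFormatVersion);
                return true;
            case "requested":
                return supported; // degrade silently on old-format topics
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        // A topic still on an old message format: idempotence is skipped.
        System.out.println(enableFor("requested", 1));
    }
}
```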


Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-09-06 Thread Vahid S Hashemian
+1. Thanks for the KIP.

--Vahid



From:   Guozhang Wang 
To: "dev@kafka.apache.org" 
Date:   09/06/2017 03:41 PM
Subject: Re: [VOTE] KIP-176 : Remove deprecated new-consumer option 
for tools



+1. Thanks.

On Wed, Sep 6, 2017 at 7:57 AM, Ismael Juma  wrote:

> Thanks for the KIP. +1 (binding). Please make it clear in the KIP that
> removal will happen in 2.0.0.
>
> Ismael
>
> On Tue, Aug 8, 2017 at 11:53 AM, Paolo Patierno 
> wrote:
>
> > Hi devs,
> >
> >
> > I didn't see any more comments about this KIP. The JIRAs related to 
the
> > first step (so making --new-consumer as deprecated with warning 
messages)
> > are merged.
> >
> > I'd like to start a vote for this KIP.
> >
> >
> > Thanks,
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Windows Embedded & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno <http://twitter.com/ppatierno>
> > Linkedin : paolopatierno <http://it.linkedin.com/in/paolopatierno>
> > Blog : DevExperience <http://paolopatierno.wordpress.com/>
> >
>



-- 
-- Guozhang






Build failed in Jenkins: kafka-trunk-jdk7 #2717

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: fix typos on web doc's upgrade guide

--
[...truncated 921.67 KB...]

[jira] [Created] (KAFKA-5849) Add partitioned produce consume test

2017-09-06 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5849:
--

 Summary: Add partitioned produce consume test
 Key: KAFKA-5849
 URL: https://issues.apache.org/jira/browse/KAFKA-5849
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Add partitioned produce consume test



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-183 - Change PreferredReplicaLeaderElectionCommand to use AdminClient

2017-09-06 Thread Colin McCabe
On Wed, Sep 6, 2017, at 01:18, Tom Bentley wrote:
> Hi Colin,
> 
> Thanks for taking the time to respond.
> 
> On 5 September 2017 at 22:22, Colin McCabe  wrote:
> 
> > ...
> > Why does there need to be a map at all in the API?
> 
> 
> From a purely technical PoV there doesn't, but doing something else would
> make the API inconsistent with other similar AdminClient *Results
> classes,
> which all expose a Map directly.
> 
> 
> > Why not just have
> > something like this:
> >
> 
> I agree this would be a better solution. I will update the KIP and ask
> people to vote again. (Is that the right process?)
> 
> It might be worth bearing this in mind for future AdminClient APIs:
> Exposing a Map directly means you can't retrofit handling a null argument
> to mean "all the things", whereas wrapping the map would allow that.

That's a good point.

I guess the important thing to keep in mind is that if you return a map
from a results class, it has to be instantiated eagerly.  It has to be
something you know before any RPCs are made, async actions are
performed, etc.

best,
Colin

> 
> Thanks again,
> 
> Tom
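
[Editor's note: a minimal, self-contained sketch of Colin's point above — a results class that exposes a Map directly must build that map eagerly, before any RPC completes, so a later "null means everything" overload cannot be retrofitted. The class and method names are illustrative, and CompletableFuture stands in for KafkaFuture; this is not the actual AdminClient code.]

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// A results class in the style discussed above. Because values() hands the
// caller a Map, the full key set (one entry per partition) must be known at
// construction time -- before any broker RPC has returned.
public class ElectionResults {
    private final Map<String, CompletableFuture<Void>> futures;

    public ElectionResults(Map<String, CompletableFuture<Void>> futures) {
        this.futures = futures;
    }

    // Exposing the Map directly: convenient and consistent with other
    // *Results classes, but the map is fixed eagerly at construction.
    public Map<String, CompletableFuture<Void>> values() {
        return futures;
    }

    public static void main(String[] args) {
        Map<String, CompletableFuture<Void>> m = new HashMap<>();
        m.put("topic-0", new CompletableFuture<>());
        ElectionResults results = new ElectionResults(m);
        // The key set is available before any future completes.
        System.out.println(results.values().keySet()); // prints [topic-0]
    }
}
```

Wrapping the map in an accessor object instead would leave room to interpret a null argument as "all partitions" later, at the cost of diverging from the existing *Results classes.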


[GitHub] kafka pull request #3804: MINOR: fixed typos

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3804


---


[GitHub] kafka-site pull request #74: Added portoseguro, micronauticsresearch & cj lo...

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/74


---


Re: [DISCUSS] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-09-06 Thread Colin McCabe
On Wed, Sep 6, 2017, at 00:20, Tom Bentley wrote:
> Hi Ted and Colin,
> 
> Thanks for the comments.
> 
> It seems you're both happier with reassign rather than assign, so I'm
> happy
> to stick with that.
> 
> 
> On 5 September 2017 at 18:46, Colin McCabe  wrote:
> 
> > ...
> 
> 
> > Do we expect that reducing the number of partitions will ever be
> > supported by this API?  It seems like decreasing would require a
> > different API-- one which supported data movement, had a "check status
> > of this operation" feature, etc. etc.  If this API is only ever going to
> > be used to increase the number of partitions, I think we should just
> > call it "increasePartitionCount" to avoid confusion.
> >
> 
> I thought a little about the decrease possibility (hence the static
> factory
> methods on PartitionCount), but not really in enough detail. I suppose a
> decrease process could look like this:
> 
> 1. Producers start partitioning based on the decreased partition count.
> 2. Consumers continue to consume from all partitions.
> 3. At some point all the records in the old partitions have expired and
> they can be deleted.
> 
> This wouldn't work for compacted topics. Of course a more aggressive
> strategy is also possible where we forcibly move data from the partitions
> to be deleted.
> 
> Anyway, in either case the process would be long running, whereas the
> increase case is fast, so the semantics are quite different. So I agree,
> let's rename the method to make it clear that it's only for increasing the
> partition count.
> 
> >
> > > Outstanding questions:
> > >
> > > 1. Is the proposed alterInterReplicaThrottle() API really better than
> > > changing the throttle via alterConfigs()?
> >
> > That's a good point.  I would argue that we should just use alterConfigs
> > to set the broker configuration, rather than having a special RPC just
> > for this.
> >
> 
> Yes, I'm minded to agree.
> 
> The reason I originally thought a special API might be better was if
> people
> felt that the DynamicConfig mechanism (which seems to exist only to
> support
> these throttles) was an implementation detail of the throttles. But I now
> realise that they're visible via kafka-topics.sh, so they're effectively
> already a public API.
> 
> 
> >
> > ...
> > > Would it be a problem that
> > > triggering the reassignment required ClusterAction on the Cluster, but
> > > throttling the assignment required Alter on the Topic? What if a user had
> > > the former permission, but not the latter?
> >
> > We've been trying to reserve ClusterAction on Cluster for
> > broker-initiated operations.  Alter on Cluster is the ACL for "root
> > stuff" and I would argue that it should be what we use here.
> >
> > For reconfiguring the broker, I think we should follow KIP-133 and use
> > AlterConfigs on the Broker resource.  (Of course, if you just use the
> > existing alterConfigs call, you get this without any special effort.)
> >
> 
> Yes, sorry, what I put in the email about authorisation wasn't what I put
> in the KIP (I revised the KIP after drafting the email and then forgot to
> update the email).
> 
> Although KIP-133 proposes a Broker resource, I don't see one in the code
> and KIP-133 was supposedly delivered in 0.11. Can anyone fill in the
> story
> here? Is it simply because the functionality to update broker configs
> hasn't been implemented yet?

Look in
./clients/src/main/java/org/apache/kafka/common/config/ConfigResource.java,
for the BROKER resource.  I bet you're looking at the Resource class
used for ACLs, which is a different class.
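
[Editor's note: a simplified stand-in (not the real Kafka classes) illustrating the distinction Colin points out — broker configs are addressed through ConfigResource, which has a BROKER type, a different class from the Resource used for ACLs. The throttle config name mirrors the real broker config "leader.replication.throttled.rate", but treat the details as a sketch.]

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigResourceSketch {
    // Mirrors ConfigResource.Type from
    // clients/src/main/java/org/apache/kafka/common/config/ConfigResource.java
    enum Type { BROKER, TOPIC, UNKNOWN }

    static final class ConfigResource {
        final Type type;
        final String name; // for BROKER, the broker id as a string

        ConfigResource(Type type, String name) {
            this.type = type;
            this.name = name;
        }

        @Override
        public String toString() {
            return type + ":" + name;
        }
    }

    public static void main(String[] args) {
        // An alterConfigs-style request targeting broker 0, e.g. to set a
        // replication throttle rather than using a dedicated throttle RPC.
        Map<ConfigResource, Map<String, String>> request = new HashMap<>();
        Map<String, String> configs = new HashMap<>();
        configs.put("leader.replication.throttled.rate", "10000000");
        request.put(new ConfigResource(Type.BROKER, "0"), configs);
        System.out.println(request);
    }
}
```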

> 
> As currently proposed, both reassignPartitions() and
> alterInterBrokerThrottle()
> require Alter on the Cluster. If we used alterConfigs() to set the
> throttles then we create a situation where the triggering of the
> reassignment required Alter(Cluster), but the throttling required
> Alter(Broker), and the user might have the former but not the latter. I
> don't think this is likely to be a big deal in practice, but maybe others
> disagree?

Alter:Cluster is essentially root, though.  If you have Alter:Cluster
and you don't have AlterConfigs:Broker, you can just create a new ACL
giving it to yourself (creating and deleting ACLs is one of the powers
> of Alter:Cluster).

cheers,
Colin

> 
> 
> > >
> > > 2. Is reassignPartitions() really the best name? I find the distinction
> > > between reassigning and just assigning to be of limited value, and
> > > "reassign" is misleading when the API is now used for changing the
> > > replication factor too. Since the API is asynchronous/long running, it
> > > might be better to indicate that in the name some how. What about
> > > startPartitionAssignment()?
> >
> > Good idea -- I like the idea of using "start" or "initiate" to indicate
> > that this is kicking off a long-running operation.
> >
> > "reassign" seemed like a natural choice to me since this is changing an
> > existing assignment.  It was assigned (when the topic it was 

Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-09-06 Thread Guozhang Wang
+1. Thanks.

On Wed, Sep 6, 2017 at 7:57 AM, Ismael Juma  wrote:

> Thanks for the KIP. +1 (binding). Please make it clear in the KIP that
> removal will happen in 2.0.0.
>
> Ismael
>
> On Tue, Aug 8, 2017 at 11:53 AM, Paolo Patierno 
> wrote:
>
> > Hi devs,
> >
> >
> > I didn't see any more comments about this KIP. The JIRAs related to the
> > first step (so making --new-consumer as deprecated with warning messages)
> > are merged.
> >
> > I'd like to start a vote for this KIP.
> >
> >
> > Thanks,
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Windows Embedded & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
>



-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk8 #1981

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5726; KafkaConsumer.subscribe() overload that takes just Pattern

--
[...truncated 4.85 MB...]
org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin 

Re: [DISCUSS] KIP-91 Provide Intuitive User Timeouts in The Producer

2017-09-06 Thread Jun Rao
Hi, Sumant,

The diagram in the wiki seems to imply that delivery.timeout.ms doesn't
include the batching time.

For retries, probably we can just default it to MAX_INT?

Thanks,

Jun
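
[Editor's note: a sketch of the kind of hard config check discussed in this thread — throwing an exception for impossible timeout combinations instead of only logging a warning. The exact rule and names are illustrative, not the final KIP-91 behaviour.]

```java
// Reject a delivery.timeout.ms that cannot possibly be honoured: the batch
// needs at least linger.ms to fill plus request.timeout.ms for one attempt.
public class TimeoutCheck {
    static void validate(long deliveryTimeoutMs, long lingerMs, long requestTimeoutMs) {
        if (deliveryTimeoutMs < lingerMs + requestTimeoutMs) {
            throw new IllegalArgumentException(
                "delivery.timeout.ms (" + deliveryTimeoutMs + ") must be at least "
                + "linger.ms + request.timeout.ms (" + (lingerMs + requestTimeoutMs) + ")");
        }
    }

    public static void main(String[] args) {
        // OK: a 120s delivery timeout easily covers a 30s request timeout.
        validate(120_000, 0, 30_000);
        try {
            // Impossible: the batch cannot complete within 10s.
            validate(10_000, 5_000, 30_000);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```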


On Wed, Sep 6, 2017 at 10:26 AM, Sumant Tambe  wrote:

> 120 seconds default sounds good to me. Throwing ConfigException instead of
> WARN is fine. Added clarification that the producer waits the full
> request.timeout.ms for the in-flight request. This implies that the user might
> be notified of batch expiry while a batch is still in-flight.
>
> I don't recall if we discussed our point of view that existing configs like
> retries become redundant/deprecated with this feature. IMO, the retries
> config becomes meaningless given the possibility of incorrect configs like
> delivery.timeout.ms > linger.ms + retries * (request.timeout.ms +
> retry.backoff.ms). Should retries basically be interpreted as MAX_INT?
> What will be the default?
>
> So do we ignore retries config or throw a ConfigException if weirdness like
> above is detected?
>
> -Sumant
>
>
> On 5 September 2017 at 17:34, Ismael Juma  wrote:
>
> > Thanks for updating the KIP, Sumant. A couple of points:
> >
> > 1. I think the default for delivery.timeout.ms should be higher than 30
> > seconds given that we previously would reset the clock once the batch was
> > sent. The value should be large enough that batches are not expired due
> to
> > expected events like a new leader being elected due to broker failure.
> > Would it make sense to use a conservative value like 120 seconds?
> >
> > 2. The producer currently throws an exception for configuration
> > combinations that don't make sense. We should probably do the same here
> for
> > consistency (the KIP currently proposes a log warning).
> >
> > 3. We should mention that we will not cancel in flight requests until the
> > request timeout even though we'll expire the batch early if needed.
> >
> > I think we should start the vote tomorrow so that we have a chance of
> > hitting the KIP freeze for 1.0.0.
> >
> > Ismael
> >
> > On Wed, Sep 6, 2017 at 1:03 AM, Sumant Tambe  wrote:
> >
> > > I've updated the kip-91 writeup
> > >  > > 91+Provide+Intuitive+User+Timeouts+in+The+Producer>
> > > to capture some of the discussion here. Please confirm if it's
> > sufficiently
> > > accurate. Feel free to edit it if you think some explanation can be
> > better
> > > and has been agreed upon here.
> > >
> > > How do we proceed from here?
> > >
> > > -Sumant
> > >
> > > On 30 August 2017 at 12:59, Jun Rao  wrote:
> > >
> > > > Hi, Jiangjie,
> > > >
> > > > I mis-understood Jason's approach earlier. It does seem to be a good
> > one.
> > > > We still need to calculate the selector timeout based on the
> remaining
> > > > delivery.timeout.ms to call the callback on time, but we can always
> > wait
> > > > for an inflight request based on request.timeout.ms.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, Aug 29, 2017 at 5:16 PM, Becket Qin 
> > > wrote:
> > > >
> > > > > Yeah, I think expiring a batch but still wait for the response is
> > > > probably
> > > > > reasonable given the result is not guaranteed anyways.
> > > > >
> > > > > @Jun,
> > > > >
> > > > > I think the frequent PID reset may still be possible if we do not
> > wait
> > > > for
> > > > > the in-flight response to return. Consider two partitions p0 and
> p1,
> > > the
> > > > > deadline of the batches for p0 are T + 10, T + 30, T + 50... The
> > > deadline
> > > > > of the batches for p1 are T + 20, T + 40, T + 60... Assuming each
> > > request
> > > > > takes more than 10 ms to get the response. The following sequence
> may
> > > be
> > > > > possible:
> > > > >
> > > > > T: PID0 send batch0_p0(PID0), batch0_p1(PID0)
> > > > > T + 10: PID0 expires batch0_p0(PID0), without resetting PID, sends
> > > > > batch1_p0(PID0) and batch0_p1(PID0, retry)
> > > > > T + 20: PID0 expires batch0_p1(PID0, retry), resets the PID to
> PID1,
> > > > sends
> > > > > batch1_p0(PID0, retry) and batch1_p1(PID1)
> > > > > T + 30: PID1 expires batch1_p0(PID0, retry), without resetting PID,
> > > sends
> > > > > batch2_p0(PID1) and batch1_p1(PID1, retry)
> > > > > T + 40: PID1 expires batch1_p1(PID1, retry), resets the PID to
> PID2,
> > > > sends
> > > > > batch2_p0(PID1, retry) and sends batch2_p1(PID2)
> > > > > 
> > > > >
> > > > > In the above example, the producer will reset PID once every two
> > > > requests.
> > > > > The example did not take retry backoff into consideration, but it
> > still
> > > > > seems possible to encounter frequent PID reset if we do not wait
> for
> > > the
> > > > > request to finish. Also, in this case we will have a lot of retries
> > and
> > > > > mixture of PIDs which seem to be pretty complicated.
> > > > >
> > > > > I think Jason's suggestion will address both concerns, 

Re: [VOTE] KIP-188 - Add new metrics to support health checks

2017-09-06 Thread Jun Rao
Hi, Rajini,

Thanks for the KIP. +1. Just a minor comment.

Since we only measure MessageConversionsTimeMs at the request type level,
is it useful to collect the following metrics at the topic level?

*MBean*:
kafka.server:type=BrokerTopicMetrics,name=FetchMessageConversionsPerSec,topic=([-.\w]+)

*MBean*:
kafka.server:type=BrokerTopicMetrics,name=ProduceMessageConversionsPerSec,topic=([-.\w]+)
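
[Editor's note: a quick check of the topic tag pattern quoted above — `[-.\w]+` covers the characters legal in Kafka topic names (letters, digits, '.', '_', '-').]

```java
import java.util.regex.Pattern;

public class MBeanTopicPattern {
    public static void main(String[] args) {
        // The topic tag from the MBean names above.
        Pattern topicTag = Pattern.compile("[-.\\w]+");
        System.out.println(topicTag.matcher("my-topic.v1_log").matches()); // true
        System.out.println(topicTag.matcher("bad topic").matches());       // false (space)
    }
}
```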


Also, for the ZK latency metric, Onur added a new ZookeeperClient wrapper
and is in the middle of converting existing zkClient usage to the new
wrapper. So, we probably want to add the latency metric in the new wrapper.

Jun

On Thu, Aug 24, 2017 at 10:50 AM, Rajini Sivaram 
wrote:

> Hi all,
>
> I would like to start the vote on KIP-188 that adds additional metrics to
> support health checks for Kafka Ops. Details are here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 188+-+Add+new+metrics+to+support+health+checks
>
> Thank you,
>
> Rajini
>


Re: [VOTE] KIP-191: KafkaConsumer.subscribe() overload that takes just Pattern

2017-09-06 Thread Guozhang Wang
Thanks for confirming.

On Wed, Sep 6, 2017 at 2:07 PM, Ismael Juma  wrote:

> The PR for this was merged an hour or two ago.
>
> Ismael
>
> On 6 Sep 2017 9:51 pm, "Guozhang Wang"  wrote:
>
> > Thanks Attila,
> >
> > Please note that the feature freeze deadline is about two weeks away to
> > have your PR merged to trunk for the coming 1.0.0 release.
> >
> >
> > Guozhang
> >
> >
> > On Wed, Sep 6, 2017 at 1:31 AM, Attila Kreiner 
> wrote:
> >
> > > Hi All,
> > >
> > > No more votes received for this, so the final decision is: accepted.
> > >
> > > Thx,
> > > Attila
> > >
> > > 2017-09-02 6:57 GMT+02:00 Attila Kreiner :
> > >
> > > > Hi Jason,
> > > >
> > > > Sorry, I didn't know that one. Then I assume we should wait a few
> days
> > > > before moving forward.
> > > >
> > > > Regards,
> > > > Attila
> > > >
> > > > 2017-09-01 18:01 GMT+02:00 Jason Gustafson :
> > > >
> > > >> Hi Attila,
> > > >>
> > > >> We should allow 3 days for the vote even if it is clear it will
> pass.
> > > >>
> > > >> Thanks,
> > > >> Jason
> > > >>
> > > >> On Fri, Sep 1, 2017 at 8:16 AM, Attila Kreiner 
> > > wrote:
> > > >>
> > > >> > Hi All,
> > > >> >
> > > >> > Looks like accepted. Updating KIP page.
> > > >> >
> > > >> > Thx,
> > > >> > Attila
> > > >> >
> > > >> > 2017-08-31 19:22 GMT+02:00 Matthias J. Sax  >:
> > > >> >
> > > >> > > +1
> > > >> > >
> > > >> > > On 8/31/17 8:49 AM, Jason Gustafson wrote:
> > > >> > > > +1
> > > >> > > >
> > > >> > > > On Thu, Aug 31, 2017 at 8:36 AM, Mickael Maison <
> > > >> > > mickael.mai...@gmail.com>
> > > >> > > > wrote:
> > > >> > > >
> > > >> > > >> +1 non binding
> > > >> > > >> Thanks
> > > >> > > >>
> > > >> > > >> On Thu, Aug 31, 2017 at 8:35 AM, Guozhang Wang <
> > > wangg...@gmail.com
> > > >> >
> > > >> > > wrote:
> > > >> > > >>> +1
> > > >> > > >>>
> > > >> > > >>> On Thu, Aug 31, 2017 at 7:57 AM, Bill Bejeck <
> > bbej...@gmail.com
> > > >
> > > >> > > wrote:
> > > >> > > >>>
> > > >> > >  +1
> > > >> > > 
> > > >> > >  Thanks,
> > > >> > >  Bill
> > > >> > > 
> > > >> > >  On Thu, Aug 31, 2017 at 10:31 AM, Vahid S Hashemian <
> > > >> > >  vahidhashem...@us.ibm.com> wrote:
> > > >> > > 
> > > >> > > > +1 (non-binding)
> > > >> > > >
> > > >> > > > Thanks.
> > > >> > > > --Vahid
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > > From:   Molnár Bálint 
> > > >> > > > To: dev@kafka.apache.org
> > > >> > > > Date:   08/31/2017 02:13 AM
> > > >> > > > Subject:Re: [VOTE] KIP-191:
> > KafkaConsumer.subscribe()
> > > >> > > overload
> > > >> > > > that takes just Pattern
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > > +1 (non-binding)
> > > >> > > >
> > > >> > > > 2017-08-31 10:33 GMT+02:00 Manikumar <
> > > manikumar.re...@gmail.com
> > > >> >:
> > > >> > > >
> > > >> > > >> +1 (non-binding)
> > > >> > > >>
> > > >> > > >> On Thu, Aug 31, 2017 at 1:53 PM, Ismael Juma <
> > > >> isma...@gmail.com>
> > > >> > >  wrote:
> > > >> > > >>
> > > >> > > >>> Thanks for the KIP, +1 (binding).
> > > >> > > >>>
> > > >> > > >>> Ismael
> > > >> > > >>>
> > > >> > > >>> On 31 Aug 2017 8:38 am, "Attila Kreiner" <
> > att...@kreiner.hu
> > > >
> > > >> > > >> wrote:
> > > >> > > >>>
> > > >> > > >>> Hi All,
> > > >> > > >>>
> > > >> > > >>> Thx for the comments, I pretty much see a consensus
> here.
> > So
> > > >> I'd
> > > >> > > >> like
> > > >> > > > to
> > > >> > > >>> start the vote for:
> > > >> > > >>>
> > > >> > > > https://urldefense.proofpoint.
> com/v2/url?u=https-3A__cwiki.
> > > >> > > > apache.org_confluence_display_KAFKA_KIP-2D=DwIBaQ=jf_
> > > >> > > > iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-
> > > >> > > kjJc7uSVcviKUc=-
> > > >> > > > tCQqMGtotzV0UuQdwhdYJ745XQzBJp5lz9O1oM9QgA=Wq-_8a-
> > > >> > > > 94g2UGfy8hlJcx9WMdxK0WJRZ7V2ex2qKPpY=
> > > >> > > >
> > > >> > > >>> 191%3A+KafkaConsumer.
> > > >> > > >>> subscribe%28%29+overload+that+takes+just+Pattern
> > > >> > > >>>
> > > >> > > >>> Cheers,
> > > >> > > >>> Attila
> > > >> > > >>>
> > > >> > > >>
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > >
> > > >> > > 
> > > >> > > >>>
> > > >> > > >>>
> > > >> > > >>>
> > > >> > > >>> --
> > > >> > > >>> -- Guozhang
> > > >> > > >>
> > > >> > > >
> > > >> > >
> > > >> > >
> > > >> >
> > > >>
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-5848) KafkaConsumer should validate topics/TopicPartitions on subscribe/assign

2017-09-06 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5848:
--

 Summary: KafkaConsumer should validate topics/TopicPartitions on 
subscribe/assign
 Key: KAFKA-5848
 URL: https://issues.apache.org/jira/browse/KAFKA-5848
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
Reporter: Matthias J. Sax
Priority: Minor


Currently, {{KafkaConsumer}} checks if the provided topics on {{subscribe()}} 
and {{TopicPartition}} on {{assign()}} don't contain topic names that are 
{{null}} or an empty string. 

However, it could do some more validation:
 - check if invalid topic characters are in the string (this might be feasible 
for {{Pattern}}s, too?)
 - check if provided partition numbers are valid (i.e., not negative and maybe 
not larger than the available partitions?)
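
[Editor's note: a sketch of the extra client-side validation this ticket suggests, based on the broker-side topic naming rules (legal characters [a-zA-Z0-9._-], a maximum length, and "." / ".." being reserved). The exact limits here are illustrative.]

```java
import java.util.regex.Pattern;

public class TopicValidation {
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]+");
    private static final int MAX_LENGTH = 249; // broker-side topic name limit

    static boolean isValidTopic(String topic) {
        return topic != null
                && !topic.isEmpty()
                && !topic.equals(".")      // "." and ".." are reserved
                && !topic.equals("..")
                && topic.length() <= MAX_LENGTH
                && LEGAL.matcher(topic).matches();
    }

    static boolean isValidPartition(int partition) {
        return partition >= 0; // upper bound would need cluster metadata
    }

    public static void main(String[] args) {
        System.out.println(isValidTopic("orders.v2"));  // true
        System.out.println(isValidTopic("bad topic!")); // false
        System.out.println(isValidPartition(-1));       // false
    }
}
```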



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] KIP-189 - Improve principal builder interface and add support for SASL

2017-09-06 Thread Jason Gustafson
Hi All,

When implementing this, I found that the SecurityProtocol class has some
internal details which we might not want to expose to users (in particular
to enable testing). Since it's still useful to know the security protocol
in use in some cases, and since the security protocol names are already
exposed in configuration (and hence cannot easily change), I have modified
the method in AuthenticationContext to return the name of the security
protocol instead. Let me know if there are any concerns with this change.
Otherwise, I will close out the vote.
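
[Editor's note: a minimal sketch of the change Jason describes — the authentication context reporting the security protocol by its String name rather than exposing the SecurityProtocol class. All names here are illustrative stand-ins, not the shipped interface.]

```java
import java.net.InetAddress;

public class AuthContextSketch {
    private final String securityProtocolName;
    private final InetAddress clientAddress;

    AuthContextSketch(String securityProtocolName, InetAddress clientAddress) {
        this.securityProtocolName = securityProtocolName;
        this.clientAddress = clientAddress;
    }

    // Returns a name like "PLAINTEXT", "SSL", "SASL_PLAINTEXT" or "SASL_SSL"
    // -- strings already fixed by listener configuration, so exposing them
    // does not leak internals of the SecurityProtocol class.
    public String securityProtocol() {
        return securityProtocolName;
    }

    public InetAddress clientAddress() {
        return clientAddress;
    }

    public static void main(String[] args) {
        AuthContextSketch ctx =
            new AuthContextSketch("SASL_SSL", InetAddress.getLoopbackAddress());
        System.out.println(ctx.securityProtocol()); // prints SASL_SSL
    }
}
```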

Thanks,
Jason

On Tue, Sep 5, 2017 at 11:10 AM, Ismael Juma  wrote:

> Thanks for the KIP, +1 (binding).
>
> Ismael
>
> On Wed, Aug 30, 2017 at 4:51 PM, Jason Gustafson 
> wrote:
>
> > I'd like to open the vote for KIP-189:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 189%3A+Improve+principal+builder+interface+and+add+support+for+SASL.
> > Thanks to everyone who helped review.
> >
> > -Jason
> >
>


Re: [VOTE] KIP-191: KafkaConsumer.subscribe() overload that takes just Pattern

2017-09-06 Thread Ismael Juma
The PR for this was merged an hour or two ago.

Ismael

On 6 Sep 2017 9:51 pm, "Guozhang Wang"  wrote:

> Thanks Attila,
>
> Please note that the feature freeze deadline is about two weeks away to
> have your PR merged to trunk for the coming 1.0.0 release.
>
>
> Guozhang
>
>
> On Wed, Sep 6, 2017 at 1:31 AM, Attila Kreiner  wrote:
>
> > Hi All,
> >
> > No more votes received for this, so the final decision is: accepted.
> >
> > Thx,
> > Attila
> >
> > 2017-09-02 6:57 GMT+02:00 Attila Kreiner :
> >
> > > Hi Jason,
> > >
> > > Sorry, I didn't know that one. Then I assume we should wait a few days
> > > before moving forward.
> > >
> > > Regards,
> > > Attila
> > >
> > > 2017-09-01 18:01 GMT+02:00 Jason Gustafson :
> > >
> > >> Hi Attila,
> > >>
> > >> We should allow 3 days for the vote even if it is clear it will pass.
> > >>
> > >> Thanks,
> > >> Jason
> > >>
> > >> On Fri, Sep 1, 2017 at 8:16 AM, Attila Kreiner 
> > wrote:
> > >>
> > >> > Hi All,
> > >> >
> > >> > Looks like accepted. Updating KIP page.
> > >> >
> > >> > Thx,
> > >> > Attila
> > >> >
> > >> > 2017-08-31 19:22 GMT+02:00 Matthias J. Sax :
> > >> >
> > >> > > +1
> > >> > >
> > >> > > On 8/31/17 8:49 AM, Jason Gustafson wrote:
> > >> > > > +1
> > >> > > >
> > >> > > > On Thu, Aug 31, 2017 at 8:36 AM, Mickael Maison <
> > >> > > mickael.mai...@gmail.com>
> > >> > > > wrote:
> > >> > > >
> > >> > > >> +1 non binding
> > >> > > >> Thanks
> > >> > > >>
> > >> > > >> On Thu, Aug 31, 2017 at 8:35 AM, Guozhang Wang <
> > wangg...@gmail.com
> > >> >
> > >> > > wrote:
> > >> > > >>> +1
> > >> > > >>>
> > >> > > >>> On Thu, Aug 31, 2017 at 7:57 AM, Bill Bejeck <
> bbej...@gmail.com
> > >
> > >> > > wrote:
> > >> > > >>>
> > >> > >  +1
> > >> > > 
> > >> > >  Thanks,
> > >> > >  Bill
> > >> > > 
> > >> > >  On Thu, Aug 31, 2017 at 10:31 AM, Vahid S Hashemian <
> > >> > >  vahidhashem...@us.ibm.com> wrote:
> > >> > > 
> > >> > > > +1 (non-binding)
> > >> > > >
> > >> > > > Thanks.
> > >> > > > --Vahid
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > > From:   Molnár Bálint 
> > >> > > > To: dev@kafka.apache.org
> > >> > > > Date:   08/31/2017 02:13 AM
> > >> > > > Subject:Re: [VOTE] KIP-191:
> KafkaConsumer.subscribe()
> > >> > > overload
> > >> > > > that takes just Pattern
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > > +1 (non-binding)
> > >> > > >
> > >> > > > 2017-08-31 10:33 GMT+02:00 Manikumar <
> > manikumar.re...@gmail.com
> > >> >:
> > >> > > >
> > >> > > >> +1 (non-binding)
> > >> > > >>
> > >> > > >> On Thu, Aug 31, 2017 at 1:53 PM, Ismael Juma <
> > >> isma...@gmail.com>
> > >> > >  wrote:
> > >> > > >>
> > >> > > >>> Thanks for the KIP, +1 (binding).
> > >> > > >>>
> > >> > > >>> Ismael
> > >> > > >>>
> > >> > > >>> On 31 Aug 2017 8:38 am, "Attila Kreiner" <
> att...@kreiner.hu
> > >
> > >> > > >> wrote:
> > >> > > >>>
> > >> > > >>> Hi All,
> > >> > > >>>
> > >> > > >>> Thx for the comments, I pretty much see a consensus here.
> So
> > >> I'd
> > >> > > >> like
> > >> > > > to
> > >> > > >>> start the vote for:
> > >> > > >>>
> > >> > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> > >> > > > apache.org_confluence_display_KAFKA_KIP-2D=DwIBaQ=jf_
> > >> > > > iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-
> > >> > > kjJc7uSVcviKUc=-
> > >> > > > tCQqMGtotzV0UuQdwhdYJ745XQzBJp5lz9O1oM9QgA=Wq-_8a-
> > >> > > > 94g2UGfy8hlJcx9WMdxK0WJRZ7V2ex2qKPpY=
> > >> > > >
> > >> > > >>> 191%3A+KafkaConsumer.
> > >> > > >>> subscribe%28%29+overload+that+takes+just+Pattern
> > >> > > >>>
> > >> > > >>> Cheers,
> > >> > > >>> Attila
> > >> > > >>>
> > >> > > >>
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > 
> > >> > > >>>
> > >> > > >>>
> > >> > > >>>
> > >> > > >>> --
> > >> > > >>> -- Guozhang
> > >> > > >>
> > >> > > >
> > >> > >
> > >> > >
> > >> >
> > >>
> > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: 1.0.0 KIPs Update

2017-09-06 Thread Guozhang Wang
Hi Vahid,

Yes I have just added it while sending this email :)


Guozhang

On Wed, Sep 6, 2017 at 1:54 PM, Vahid S Hashemian  wrote:

> Hi Guozhang,
>
> Thanks for the heads-up.
>
> Can KIP-163 be added to the list?
> The proposal for this KIP is accepted, and the PR is ready for review.
>
> Thanks.
> --Vahid
>
>
>
> From:   Guozhang Wang 
> To: "dev@kafka.apache.org" 
> Date:   09/06/2017 01:45 PM
> Subject:1.0.0 KIPs Update
>
>
>
> Hello folks,
>
> This is a heads up on 1.0.0 progress:
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.a
> pache.org_confluence_pages_viewpage.action-3FpageId-3D717649
> 13=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_
> xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=bLvgeykOujjty9joOuWXD4wZab
> o1CV0pULY4eqBxqzk=90UN7ejzCQmdPOyRR_2z304xLUSBCtOYi0KqhAo4EyU=
>
>
> We have one week left towards the KIP deadline, which is Sept. 13th. There
> are still a lot of KIPs that are under discussion or in the voting process.
> For the KIP proposers, please keep in mind that the voting has to be done
> before the deadline in order to be added into the coming release.
>
>
> Thanks,
> -- Guozhang
>
>
>
>
>


-- 
-- Guozhang


Re: 1.0.0 KIPs Update

2017-09-06 Thread Vahid S Hashemian
Hi Guozhang,

Thanks for the heads-up.

Can KIP-163 be added to the list?
The proposal for this KIP is accepted, and the PR is ready for review.

Thanks.
--Vahid



From:   Guozhang Wang 
To: "dev@kafka.apache.org" 
Date:   09/06/2017 01:45 PM
Subject:1.0.0 KIPs Update



Hello folks,

This is a heads up on 1.0.0 progress:

https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_pages_viewpage.action-3FpageId-3D71764913=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=bLvgeykOujjty9joOuWXD4wZabo1CV0pULY4eqBxqzk=90UN7ejzCQmdPOyRR_2z304xLUSBCtOYi0KqhAo4EyU=
 


We have one week left towards the KIP deadline, which is Sept. 13th. There
are still a lot of KIPs that are under discussion or in the voting process.
For the KIP proposers, please keep in mind that the voting has to be done
before the deadline in order to be added into the coming release.


Thanks,
-- Guozhang






Re: [VOTE] KIP-191: KafkaConsumer.subscribe() overload that takes just Pattern

2017-09-06 Thread Guozhang Wang
Thanks Attila,

Please note that the feature freeze deadline is about two weeks away to
have your PR merged to trunk for the coming 1.0.0 release.


Guozhang


On Wed, Sep 6, 2017 at 1:31 AM, Attila Kreiner  wrote:

> Hi All,
>
> No more votes received for this, so the final decision is: accepted.
>
> Thx,
> Attila
>
> 2017-09-02 6:57 GMT+02:00 Attila Kreiner :
>
> > Hi Jason,
> >
> > Sorry, I didn't know that one. Then I assume we should wait a few days
> > before moving forward.
> >
> > Regards,
> > Attila
> >
> > 2017-09-01 18:01 GMT+02:00 Jason Gustafson :
> >
> >> Hi Attila,
> >>
> >> We should allow 3 days for the vote even if it is clear it will pass.
> >>
> >> Thanks,
> >> Jason
> >>
> >> On Fri, Sep 1, 2017 at 8:16 AM, Attila Kreiner 
> wrote:
> >>
> >> > Hi All,
> >> >
> >> > Looks like accepted. Updating KIP page.
> >> >
> >> > Thx,
> >> > Attila
> >> >
> >> > 2017-08-31 19:22 GMT+02:00 Matthias J. Sax :
> >> >
> >> > > +1
> >> > >
> >> > > On 8/31/17 8:49 AM, Jason Gustafson wrote:
> >> > > > +1
> >> > > >
> >> > > > On Thu, Aug 31, 2017 at 8:36 AM, Mickael Maison <
> >> > > mickael.mai...@gmail.com>
> >> > > > wrote:
> >> > > >
> >> > > >> +1 non binding
> >> > > >> Thanks
> >> > > >>
> >> > > >> On Thu, Aug 31, 2017 at 8:35 AM, Guozhang Wang <
> wangg...@gmail.com
> >> >
> >> > > wrote:
> >> > > >>> +1
> >> > > >>>
> >> > > >>> On Thu, Aug 31, 2017 at 7:57 AM, Bill Bejeck  >
> >> > > wrote:
> >> > > >>>
> >> > >  +1
> >> > > 
> >> > >  Thanks,
> >> > >  Bill
> >> > > 
> >> > >  On Thu, Aug 31, 2017 at 10:31 AM, Vahid S Hashemian <
> >> > >  vahidhashem...@us.ibm.com> wrote:
> >> > > 
> >> > > > +1 (non-binding)
> >> > > >
> >> > > > Thanks.
> >> > > > --Vahid
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > > From:   Molnár Bálint 
> >> > > > To: dev@kafka.apache.org
> >> > > > Date:   08/31/2017 02:13 AM
> >> > > > Subject:Re: [VOTE] KIP-191: KafkaConsumer.subscribe()
> >> > > overload
> >> > > > that takes just Pattern
> >> > > >
> >> > > >
> >> > > >
> >> > > > +1 (non-binding)
> >> > > >
> >> > > > 2017-08-31 10:33 GMT+02:00 Manikumar <
> manikumar.re...@gmail.com
> >> >:
> >> > > >
> >> > > >> +1 (non-binding)
> >> > > >>
> >> > > >> On Thu, Aug 31, 2017 at 1:53 PM, Ismael Juma <
> >> isma...@gmail.com>
> >> > >  wrote:
> >> > > >>
> >> > > >>> Thanks for the KIP, +1 (binding).
> >> > > >>>
> >> > > >>> Ismael
> >> > > >>>
> >> > > >>> On 31 Aug 2017 8:38 am, "Attila Kreiner"  >
> >> > > >> wrote:
> >> > > >>>
> >> > > >>> Hi All,
> >> > > >>>
> >> > > >>> Thx for the comments, I pretty much see a consensus here. So
> >> I'd
> >> > > >> like
> >> > > > to
> >> > > >>> start the vote for:
> >> > > >>>
> >> > > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-191%3A+KafkaConsumer.subscribe%28%29+overload+that+takes+just+Pattern
> >> > > >>>
> >> > > >>> Cheers,
> >> > > >>> Attila
> >> > > >>>
> >> > > >>
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > 
> >> > > >>>
> >> > > >>>
> >> > > >>>
> >> > > >>> --
> >> > > >>> -- Guozhang
> >> > > >>
> >> > > >
> >> > >
> >> > >
> >> >
> >>
> >
> >
>



-- 
-- Guozhang


1.0.0 KIPs Update

2017-09-06 Thread Guozhang Wang
Hello folks,

This is a heads up on 1.0.0 progress:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913

We have one week left until the KIP deadline, which is Sept. 13th. There
are still a lot of KIPs that are under discussion or in the voting process.
KIP proposers, please keep in mind that the vote has to be completed before
the deadline in order for the KIP to be included in the coming release.


Thanks,
-- Guozhang


Re: [VOTE] KIP-175: Additional '--describe' views for ConsumerGroupCommand

2017-09-06 Thread Guozhang Wang
Sorry for the delay, just made a pass over the wiki page. +1

On Mon, Aug 14, 2017 at 12:02 AM, Manikumar 
wrote:

> +1 (non-binding)
>
> On Fri, Aug 11, 2017 at 8:09 PM, Mickael Maison 
> wrote:
>
> > +1 non-binding, thanks Vahid
> >
> > On Wed, Aug 9, 2017 at 9:31 PM, Jason Gustafson 
> > wrote:
> > > Thanks for the KIP. +1
> > >
> > > On Thu, Jul 27, 2017 at 2:04 PM, Vahid S Hashemian <
> > > vahidhashem...@us.ibm.com> wrote:
> > >
> > >> Hi all,
> > >>
> > >> Thanks to everyone who participated in the discussion on KIP-175, and
> > >> provided feedback.
> > >> The KIP can be found at
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> 175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
> > >> .
> > >> I believe the concerns have been addressed in the recent version of
> the
> > >> KIP; so I'd like to start a vote.
> > >>
> > >> Thanks.
> > >> --Vahid
> > >>
> > >>
> >
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-5847) Plugin option to filter consumer and producer messages on the broker

2017-09-06 Thread Brian Hawkins (JIRA)
Brian Hawkins created KAFKA-5847:


 Summary: Plugin option to filter consumer and producer messages on 
the broker
 Key: KAFKA-5847
 URL: https://issues.apache.org/jira/browse/KAFKA-5847
 Project: Kafka
  Issue Type: Wish
  Components: core
Affects Versions: 0.10.2.1
Reporter: Brian Hawkins


The idea is that I could specify a plugin that would receive a message, after 
authorization but before it is written to the log.  The plugin could then 
modify or reject the message before passing it on.  A good place for this would 
be in KafkaApis.scala in handleProducerRequest.

Similarly a message could be modified before it is sent to the consumer.

I have two use cases in mind:
1. Dealing with large messages: the interceptor/filter would write the message
to a large storage server (think S3).
2. Encrypting data before it is written to the log.

I'm planning on doing this work, just curious if others are interested so I can 
make a pull request of it.
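
For illustration, such a hook might look roughly like the following. This is a hypothetical sketch: the `RecordFilter` interface and `MaxSizeFilter` plugin are invented here to show the shape of the idea, and are not an existing Kafka API.

```java
import java.util.Optional;

// Hypothetical broker-side plugin interface; Kafka does not ship this today.
// A plugin sees each record after authorization but before the log append,
// and may pass it through, modify it, or reject it.
interface RecordFilter {
    // Return a (possibly modified) value, or empty to reject the record.
    Optional<byte[]> onProduce(String topic, byte[] value);
}

// Example plugin: reject oversized values, pass everything else through
// (use case 1 would instead offload the large value to external storage).
class MaxSizeFilter implements RecordFilter {
    private final int maxBytes;

    MaxSizeFilter(int maxBytes) {
        this.maxBytes = maxBytes;
    }

    @Override
    public Optional<byte[]> onProduce(String topic, byte[] value) {
        return value.length <= maxBytes ? Optional.of(value) : Optional.empty();
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        RecordFilter filter = new MaxSizeFilter(4);
        // A small record passes; an oversized one is rejected.
        System.out.println(filter.onProduce("t", new byte[]{1, 2}).isPresent());
        System.out.println(filter.onProduce("t", new byte[]{1, 2, 3, 4, 5}).isPresent());
    }
}
```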



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5846) Use singleton NoOpConsumerRebalanceListener in subscribe() call where listener is not specified

2017-09-06 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-5846:
-

 Summary: Use singleton NoOpConsumerRebalanceListener in 
subscribe() call where listener is not specified
 Key: KAFKA-5846
 URL: https://issues.apache.org/jira/browse/KAFKA-5846
 Project: Kafka
  Issue Type: Task
Reporter: Ted Yu
Priority: Minor


Currently KafkaConsumer creates instance of NoOpConsumerRebalanceListener for 
each subscribe() call where ConsumerRebalanceListener is not specified:
{code}
public void subscribe(Pattern pattern) {
    subscribe(pattern, new NoOpConsumerRebalanceListener());
}
{code}
We can create a singleton NoOpConsumerRebalanceListener to be used in such 
scenarios.
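
The proposed change amounts to replacing the per-call allocation with one shared instance, which is safe because the listener is stateless. A sketch with a stand-in listener type (the real class lives in Kafka's consumer internals; names here are illustrative):

```java
// Stand-in for Kafka's NoOpConsumerRebalanceListener: a stateless listener
// whose callbacks do nothing, so a single instance can be shared safely.
class NoOpListener {
    // Eagerly initialized singleton; fine for a stateless, trivially
    // constructed class.
    static final NoOpListener INSTANCE = new NoOpListener();

    private NoOpListener() {}

    void onRebalance() { /* intentionally a no-op */ }
}

public class SingletonDemo {
    // Before: `new NoOpListener()` allocated on every subscribe() call.
    // After: the shared instance is reused, avoiding per-call garbage.
    static NoOpListener defaultListener() {
        return NoOpListener.INSTANCE;
    }

    public static void main(String[] args) {
        // Both calls return the exact same object.
        System.out.println(defaultListener() == defaultListener());
    }
}
```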



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka-site pull request #50: Typo in kafka 0.10.2 topics operation doc

2017-09-06 Thread jlisam
Github user jlisam closed the pull request at:

https://github.com/apache/kafka-site/pull/50


---


[GitHub] kafka-site issue #50: Typo in kafka 0.10.2 topics operation doc

2017-09-06 Thread jlisam
Github user jlisam commented on the issue:

https://github.com/apache/kafka-site/pull/50
  
sure thing


---


[GitHub] kafka-site issue #50: Typo in kafka 0.10.2 topics operation doc

2017-09-06 Thread ewencp
Github user ewencp commented on the issue:

https://github.com/apache/kafka-site/pull/50
  
@jlisam Could you close out this PR since the changes were retargeted to 
the main Kafka repo?


---


Build failed in Jenkins: kafka-0.11.0-jdk7 #301

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5756; Connect WorkerSourceTask synchronization issue on flush

--
[...truncated 2.46 MB...]

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnNullIfKeyDoesntExist PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionDuringRebalance STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionDuringRebalance PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionOnAllDuringRebalance STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionOnAllDuringRebalance PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportRange STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportRange PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldFindValueForKeyWhenMultiStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldFindValueForKeyWhenMultiStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionOnRangeDuringRebalance STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionOnRangeDuringRebalance PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportRangeAcrossMultipleKVStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportRangeAcrossMultipleKVStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportAllAcrossMultipleStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportAllAcrossMultipleStores PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldCleanupSegmentsThatHaveExpired STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldCleanupSegmentsThatHaveExpired PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldBaseSegmentIntervalOnRetentionAndNumSegments STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldBaseSegmentIntervalOnRetentionAndNumSegments PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentsWithinTimeRangeOutOfOrder STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentsWithinTimeRangeOutOfOrder PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldCloseAllOpenSegments STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldCloseAllOpenSegments PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentsWithinTimeRange STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentsWithinTimeRange PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldNotCreateSegmentThatIsAlreadyExpired STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldNotCreateSegmentThatIsAlreadyExpired PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > shouldCreateSegments 
STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > shouldCreateSegments 
PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldOpenExistingSegments STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldOpenExistingSegments PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentIdsFromTimestamp STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentIdsFromTimestamp PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > shouldRollSegments 
STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > shouldRollSegments 
PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentNameFromId STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentNameFromId PASSED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentForTimestamp STARTED

org.apache.kafka.streams.state.internals.SegmentsTest > 
shouldGetSegmentForTimestamp PASSED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldFetchExactKeys STARTED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldFetchExactKeys PASSED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > shouldRemove 
STARTED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > shouldRemove 
PASSED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 

[GitHub] kafka-site pull request #74: Added portoseguro, micronauticsresearch & cj lo...

2017-09-06 Thread ewencp
Github user ewencp commented on a diff in the pull request:

https://github.com/apache/kafka-site/pull/74#discussion_r137371720
  
--- Diff: powered-by.html ---
@@ -412,7 +412,22 @@
 "logo": "rabobank.jpg",
 "logoBgColor": "#ff",
 "description": "Rabobank is one of the 3 largest banks in the 
Netherlands. Its digital nervous system, the Business Event Bus, is powered by 
Apache Kafka and Kafka Streams."
-}
+},{
+"link":  "http://www.portoseguro.com.br/;,
--- End diff --

Nit: can we make indentation match the above for consistency? If we don't 
do this we tend to get people's editors reformatting things which adds noise to 
the diffs


---


[GitHub] kafka-site pull request #75: Fix HTML markup

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/75


---


Build failed in Jenkins: kafka-trunk-jdk7 #2716

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5726; KafkaConsumer.subscribe() overload that takes just Pattern

--
[...truncated 921.84 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED


[GitHub] kafka pull request #3805: MINOR: Implement toString for NetworkClient

2017-09-06 Thread bbaugher
GitHub user bbaugher opened a pull request:

https://github.com/apache/kafka/pull/3805

MINOR: Implement toString for NetworkClient



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbaugher/kafka inFlightRequest_toString

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3805.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3805


commit b8a8f89cacfc5f60d87dd55ea85818b9175fa96a
Author: Bryan Baugher 
Date:   2017-09-06T19:36:43Z

MINOR: Implement toString for NetworkClient
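
Judging from the PR title and the `inFlightRequest_toString` branch name, the change exposes client state via `toString()` for easier logging. A generic sketch of that pattern with invented stand-in fields (not the actual `NetworkClient` internals):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal stand-in illustrating the pattern: summarize internal state
// (here, a queue of in-flight requests) in toString() so that log
// statements show something useful instead of an object hash.
class SimpleClient {
    private final Deque<String> inFlightRequests = new ArrayDeque<>();

    void send(String request) {
        inFlightRequests.add(request);
    }

    @Override
    public String toString() {
        return "SimpleClient(inFlightRequests=" + inFlightRequests + ")";
    }
}

public class ToStringDemo {
    public static void main(String[] args) {
        SimpleClient client = new SimpleClient();
        client.send("metadata");
        // Prints: SimpleClient(inFlightRequests=[metadata])
        System.out.println(client);
    }
}
```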




---


Re: integration between pull request and JIRA

2017-09-06 Thread Matthias J. Sax
You can subscribe to a single PR if you want, too. (That actually happens
when you get tagged or comment on one, i.e., you get auto-subscribed to
the PR.)

There is a "Subscribe" button on the right hand side.

-Matthias


On 9/5/17 8:57 PM, Ted Yu wrote:
> bq. I did get tagged or I did comment on etc.
> 
> What if nobody tags me on the PR and I don't comment on it ?
> 
> Cheers
> 
> On Tue, Sep 5, 2017 at 8:55 PM, Matthias J. Sax 
> wrote:
> 
 If a person watches github PR, that person watches conversations on all
 PRs,
>>
>> One can just "not watch" Kafka's Github repo. I don't watch it either
>> and thus I get emails for only those PRs I did get tagged or I did
>> comment on etc.
>>
>> Would this not work for you?
>>
>>
>> -Matthias
>>
>> On 9/5/17 7:31 PM, Ted Yu wrote:
>>> If a person watches GitHub PRs, that person watches conversations on all
>>> PRs, not just the one he / she intends to pay attention to.
>>>
>>> Quite often this leads to a ton of emails in his / her inbox, which is
>>> distracting.
>>>
>>> If the conversation is posted from the PR to JIRA, the watcher is per PR /
>>> JIRA. This is much more focused.
>>>
>>> Cheers
>>>
>>> On Tue, Sep 5, 2017 at 7:20 PM, Matthias J. Sax 
>>> wrote:
>>>
 This integration was never set up for Kafka.

 I personally don't see any advantage in this, as it just duplicates
 everything and does not add value IMHO. The PRs are linked and one can
 go to the PR to read the discussion if interested.

 Or what do you think the value would be?


 -Matthias


 On 9/5/17 6:16 PM, Ted Yu wrote:
> Hi,
> Currently the conversations on pull request are not posted back to
>> JIRA.
>
> Is there technical hurdle preventing this from being done ?
>
> Other Apache projects, such as Flink, establish automatic post from
>> pull
> request to JIRA.
>
> Cheers
>


>>>
>>
>>
> 



signature.asc
Description: OpenPGP digital signature


[GitHub] kafka pull request #3804: MINOR: fixed typos

2017-09-06 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/3804

MINOR: fixed typos



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3804.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3804


commit 60c2f5e560a1f622fe6d828e5a3dfaac25310464
Author: Matthias J. Sax 
Date:   2017-09-06T19:05:00Z

MINOR: fixed typos




---


Build failed in Jenkins: kafka-trunk-jdk7 #2715

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5756; Connect WorkerSourceTask synchronization issue on flush

--
[...truncated 922.40 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED


[GitHub] kafka pull request #3669: KAFKA-5726: KafkaConsumer.subscribe() overload tha...

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3669


---


[GitHub] kafka-site issue #75: Fix HTML markup

2017-09-06 Thread mjsax
Github user mjsax commented on the issue:

https://github.com/apache/kafka-site/pull/75
  
Call for review and merging @guozhangwang 


---


[GitHub] kafka-site pull request #75: Fix HTML markup

2017-09-06 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka-site/pull/75

Fix HTML markup



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka-site hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/75.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #75


commit b78dfa8b463ce261cbf4f46414bb958a29077e30
Author: Matthias J. Sax 
Date:   2017-09-06T18:26:55Z

Fix HTML markup




---


[jira] [Resolved] (KAFKA-5756) Synchronization issue on flush

2017-09-06 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5756.

   Resolution: Fixed
Fix Version/s: 0.11.0.1
   1.0.0

Issue resolved by pull request 3702
[https://github.com/apache/kafka/pull/3702]

> Synchronization issue on flush
> --
>
> Key: KAFKA-5756
> URL: https://issues.apache.org/jira/browse/KAFKA-5756
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Oleg Kuznetsov
> Fix For: 1.0.0, 0.11.0.1
>
>
> Access to *OffsetStorageWriter#toFlush* is not synchronized in *doFlush()* 
> method, whereas this collection can be accessed from 2 different threads:
> - *WorkerSourceTask.execute()*, finally block
> - *SourceTaskOffsetCommitter*, from periodic flush task



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3702: KAFKA-5756 Synchronization issue on flush

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3702


---


Re: [DISCUSS] KIP-91 Provide Intuitive User Timeouts in The Producer

2017-09-06 Thread Sumant Tambe
120 seconds default sounds good to me. Throwing ConfigException instead of
WARN is fine. Added clarification that the producer waits the full
request.timeout.ms for the in-flight request. This implies that the user might
be notified of batch expiry while a batch is still in flight.

I don't recall if we discussed our point of view that existing configs like
retries become redundant/deprecated with this feature. IMO, the retries config
becomes meaningless given the possibility of inconsistent configs like
delivery.timeout.ms > linger.ms + retries * (request.timeout.ms +
retry.backoff.ms). Should retries basically be interpreted as MAX_INT? What
will be the default?

So do we ignore the retries config, or throw a ConfigException if weirdness
like the above is detected?

-Sumant
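
The ConfigException approach under discussion could be sketched as follows. This is a hypothetical validation helper using a plain Java exception, not the producer's actual config-checking code; the specific inequality (the delivery timeout must cover at least lingering plus one request attempt) is an illustrative assumption.

```java
public class ProducerTimeoutCheck {
    // Reject configurations where the overall delivery timeout cannot even
    // cover one send attempt, mirroring the "throw instead of WARN" idea.
    static void validate(long deliveryTimeoutMs, long lingerMs,
                         long requestTimeoutMs) {
        if (deliveryTimeoutMs < lingerMs + requestTimeoutMs) {
            throw new IllegalArgumentException(
                "delivery.timeout.ms must be >= linger.ms + request.timeout.ms");
        }
    }

    public static void main(String[] args) {
        // 120s default covers linger (0) plus one 30s request attempt.
        validate(120_000, 0, 30_000);
        try {
            // Too small: 10s cannot cover 5s linger + 30s request timeout.
            validate(10_000, 5_000, 30_000);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```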


On 5 September 2017 at 17:34, Ismael Juma  wrote:

> Thanks for updating the KIP, Sumant. A couple of points:
>
> 1. I think the default for delivery.timeout.ms should be higher than 30
> seconds given that we previously would reset the clock once the batch was
> sent. The value should be large enough that batches are not expired due to
> expected events like a new leader being elected due to broker failure.
> Would it make sense to use a conservative value like 120 seconds?
>
> 2. The producer currently throws an exception for configuration
> combinations that don't make sense. We should probably do the same here for
> consistency (the KIP currently proposes a log warning).
>
> 3. We should mention that we will not cancel in flight requests until the
> request timeout even though we'll expire the batch early if needed.
>
> I think we should start the vote tomorrow so that we have a chance of
> hitting the KIP freeze for 1.0.0.
>
> Ismael
>
> On Wed, Sep 6, 2017 at 1:03 AM, Sumant Tambe  wrote:
>
> > I've updated the KIP-91 writeup
> > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer>
> > to capture some of the discussion here. Please confirm if it's
> sufficiently
> > accurate. Feel free to edit it if you think some explanation can be
> better
> > and has been agreed upon here.
> >
> > How do we proceed from here?
> >
> > -Sumant
> >
> > On 30 August 2017 at 12:59, Jun Rao  wrote:
> >
> > > Hi, Jiangjie,
> > >
> > > I mis-understood Jason's approach earlier. It does seem to be a good
> one.
> > > We still need to calculate the selector timeout based on the remaining
> > > delivery.timeout.ms to call the callback on time, but we can always
> wait
> > > for an inflight request based on request.timeout.ms.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Tue, Aug 29, 2017 at 5:16 PM, Becket Qin 
> > wrote:
> > >
> > > > Yeah, I think expiring a batch but still waiting for the response is
> > > > probably reasonable given the result is not guaranteed anyway.
> > > >
> > > > @Jun,
> > > >
> > > > I think the frequent PID reset may still be possible if we do not
> wait
> > > for
> > > > the in-flight response to return. Consider two partitions p0 and p1,
> > the
> > > > deadline of the batches for p0 are T + 10, T + 30, T + 50... The
> > deadline
> > > > of the batches for p1 are T + 20, T + 40, T + 60... Assuming each
> > request
> > > > takes more than 10 ms to get the response. The following sequence may
> > be
> > > > possible:
> > > >
> > > > T: PID0 send batch0_p0(PID0), batch0_p1(PID0)
> > > > T + 10: PID0 expires batch0_p0(PID0), without resetting PID, sends
> > > > batch1_p0(PID0) and batch0_p1(PID0, retry)
> > > > T + 20: PID0 expires batch0_p1(PID0, retry), resets the PID to PID1,
> > > sends
> > > > batch1_p0(PID0, retry) and batch1_p1(PID1)
> > > > T + 30: PID1 expires batch1_p0(PID0, retry), without resetting PID,
> > sends
> > > > batch2_p0(PID1) and batch1_p1(PID1, retry)
> > > > T + 40: PID1 expires batch1_p1(PID1, retry), resets the PID to PID2,
> > > sends
> > > > batch2_p0(PID1, retry) and sends batch2_p1(PID2)
> > > > 
> > > >
> > > > In the above example, the producer will reset PID once every two
> > > requests.
> > > > The example did not take retry backoff into consideration, but it
> still
> > > > seems possible to encounter frequent PID reset if we do not wait for
> > the
> > > > request to finish. Also, in this case we will have a lot of retries
> and
> > > > mixture of PIDs which seem to be pretty complicated.
> > > >
> > > > I think Jason's suggestion will address both concerns, i.e. we fire
> the
> > > > callback at exactly delivery.timeout.ms, but we will still wait for
> > the
> > > > response to be returned before sending the next request.
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > > >
> > > > On Tue, Aug 29, 2017 at 4:00 PM, Jun Rao  wrote:
> > > >
> > > > > Hmm, I thought delivery.timeout.ms bounds the time from a message
> is
> > > in
> > > > > the
> > > > > accumulator (i.e., when send() returns) to the time when the
> 

[GitHub] kafka pull request #3803: KAFKA-5845; KafkaController should send LeaderAndI...

2017-09-06 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/3803

KAFKA-5845; KafkaController should send LeaderAndIsrRequest to broker which 
starts very soon after shutdown



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-5845

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3803.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3803


commit 58e8100769610cad883e73cb38302ae76b48d66d
Author: Dong Lin 
Date:   2017-08-27T23:58:08Z

KAFKA-5845; KafkaController should send LeaderAndIsrRequest to brokers 
which starts very soon after shutdown




---


[jira] [Created] (KAFKA-5845) KafkaController should send LeaderAndIsrRequest to brokers which starts very soon after shutdown

2017-09-06 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-5845:
---

 Summary: KafkaController should send LeaderAndIsrRequest to 
brokers which starts very soon after shutdown
 Key: KAFKA-5845
 URL: https://issues.apache.org/jira/browse/KAFKA-5845
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Problem: undesired data in a temporary topic

2017-09-06 Thread Louis Verret
Hello, 

I am writing to you because we are facing a technical issue regarding the 
development of our data streaming application using Kafka. 

Our application consists of two Java main classes: the Producer and the 
Consumer. The Producer periodically reads data lines from a file and sends 
them (as messages) into a Kafka topic. 
The Consumer uses a Kafka Streams processor (i.e. a Java application that 
implements the Kafka Streams Processor interface). Hence, this Java 
application overrides the two methods process() and punctuate() of the 
Processor interface. The process() method stores the lines of a Kafka message 
from a topic in a KeyValueStore instance. The offset is committed so that a 
message is not read twice. The punctuate() method reads the KeyValueStore 
instance and processes the data. 
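A minimal sketch of such a processor, using the Processor API of that era; the store name "lines-store" and the punctuation interval are assumptions for illustration, and the store must be registered with the topology separately:

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

// Sketch only: buffers lines in a state store, processes them in punctuate().
public class LineStoreProcessor implements Processor<String, String> {
    private ProcessorContext context;
    private KeyValueStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(final ProcessorContext context) {
        this.context = context;
        this.store = (KeyValueStore<String, String>) context.getStateStore("lines-store");
        context.schedule(1000); // request punctuate() roughly every second
    }

    @Override
    public void process(final String key, final String line) {
        store.put(key, line); // buffer the line for later processing
        context.commit();     // commit so the message is not read twice
    }

    @Override
    public void punctuate(final long timestamp) {
        // Iterate over the store and process the buffered lines here.
    }

    @Override
    public void close() {}
}
```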

The problem is this: between two runs of the Producer and Consumer (in 
parallel), data lines are left behind as residue in a temporary topic called 
test-name_of_the_store-changelog. When executing the Consumer and the Launcher 
a second time, we believe the residual data is reloaded from the 
test-name_of_the_store-changelog topic into the current topic. The first data 
read by the Launcher in the second execution is then unwanted data (the last 
data of the first execution). When we delete the 
test-name_of_the_store-changelog topic from the command line, the problem is 
fixed. Since command-line invocations differ between OS distributions, we want 
to delete this topic from the application's Java code at the beginning of the 
run. Is there a way to do that using the Kafka Streams API? 
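For what it's worth, Kafka Streams itself does not expose a topic-deletion API; the usual approaches are KafkaStreams#cleanUp() plus the application reset tool (which removes internal topics), or deleting the changelog directly with the Java AdminClient (available since 0.11). A hedged sketch with a placeholder bootstrap address:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ChangelogCleanup {
    // Internal changelog topics are named <application.id>-<store name>-changelog.
    public static String changelogTopic(final String applicationId, final String storeName) {
        return applicationId + "-" + storeName + "-changelog";
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // all().get() blocks until the broker confirms the deletion.
            admin.deleteTopics(Collections.singletonList(
                    changelogTopic("test", "name_of_the_store"))).all().get();
        }
    }
}
```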

Best regards, 

Louis 




Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-09-06 Thread Ismael Juma
Thanks for the KIP. +1 (binding). Please make it clear in the KIP that
removal will happen in 2.0.0.

Ismael

On Tue, Aug 8, 2017 at 11:53 AM, Paolo Patierno  wrote:

> Hi devs,
>
>
> I didn't see any more comments about this KIP. The JIRAs related to the
> first step (so making --new-consumer as deprecated with warning messages)
> are merged.
>
> I'd like to start a vote for this KIP.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>


Re: Fw: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-09-06 Thread Ted Yu
Looks good to me.

bq. *specifying the --zookeeper for the farmer *

*farmer -> former*

On Wed, Sep 6, 2017 at 1:42 AM, Paolo Patierno  wrote:

> Hi devs,
>
>
> I haven't seen any votes for this since last month.
>
> Is there something that should be addressed in the KIP? (There were no
> further comments, which is why I started the vote.)
>
>
> Thanks.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Paolo Patierno 
> Sent: Tuesday, August 8, 2017 10:53 AM
> To: dev@kafka.apache.org
> Subject: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools
>
>
> Hi devs,
>
>
> I didn't see any more comments about this KIP. The JIRAs related to the
> first step (so making --new-consumer as deprecated with warning messages)
> are merged.
>
> I'd like to start a vote for this KIP.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>


Re: [VOTE] KIP-183 - Change PreferredReplicaLeaderElectionCommand to use AdminClient

2017-09-06 Thread Tom Bentley
Unfortunately I've had to make a small change to the
ElectPreferredLeadersResult, because exposing a Map was incompatible with the case where
electPreferredLeaders() was called with a null partitions argument. The
change exposes methods to access the map which return futures, rather than
exposing the map (and crucially its keys) directly.
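For context, a rough sketch of how the revised result type would be used. All names here follow the KIP-183 proposal, not a released API, and `partition` is a hypothetical TopicPartition — treat this as illustrative pseudocode:

```java
// KIP-183 proposal sketch -- not a released API.
// A null partitions argument means "elect for all partitions", which is why
// the result cannot expose a Map keyed by partition up front.
ElectPreferredLeadersResult result = adminClient.electPreferredLeaders(null);

// Rather than exposing the Map (and its keys) directly, the result offers
// future-returning accessors:
result.all().get();                      // completes when every election finishes
result.partitionResult(partition).get(); // hypothetical per-partition accessor
```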

This is described in more detail in the [DISCUSS] thread.

Please take a look and recast your votes:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-183+-+Change+PreferredReplicaLeaderElectionCommand+to+use+AdminClient#KIP-183-ChangePreferredReplicaLeaderElectionCommandtouseAdminClient-AdminClient:electPreferredLeaders()

Thanks,

Tom

On 4 September 2017 at 10:52, Ismael Juma  wrote:

> Hi Tom,
>
> You can update the KIP for minor things like that. Worth updating the
> thread if it's something that is done during the PR review.
>
> With regards to exceptions, yes, that's definitely desired. I filed a JIRA
> a while back for this:
>
> https://issues.apache.org/jira/browse/KAFKA-5445
>
> Ideally, new methods that we add would have this so that we don't increase
> the tech debt that already exists.
>
> Ismael
>
> On Mon, Sep 4, 2017 at 10:11 AM, Tom Bentley 
> wrote:
>
> > Hi Jun,
> >
> > You're correct about those other expected errors. If it's OK to update
> the
> > KIP after the vote I'll add those.
> >
> > But this makes me wonder about the value of documenting expected errors
> in
> > the Javadocs for the AdminClient (on the Results class, to be specific).
> > Currently we don't do this, but it would be helpful for people using the
> > AdminClient to know the kinds of errors they should expect, for testing
> > purposes for example. On the other hand it's a maintenance burden. Should
> > we start documenting likely errors like this?
> >
> > Cheers,
> >
> > Tom
> >
> > On 4 September 2017 at 10:10, Tom Bentley  wrote:
> >
> > > I see three +1s, no +0s and no -1, so the vote passes.
> > >
> > > Thanks to those who voted and/or commented on the discussion thread.
> > >
> > > On 1 September 2017 at 07:36, Gwen Shapira  wrote:
> > >
> > >> Thank you! +1 (binding).
> > >>
> > >> On Thu, Aug 31, 2017 at 9:48 AM Jun Rao  wrote:
> > >>
> > >> > Hi, Tom,
> > >> >
> > >> > Thanks for the KIP. +1. Just one more minor comment. It seems that
> the
> > >> > ElectPreferredLeadersResponse
> > >> > should expect at least 3 other types of errors : (1) request timeout
> > >> > exception, (2) leader rebalance in-progress exception, (3) can't
> move
> > to
> > >> > the preferred replica exception (i.e., preferred replica not in sync
> > >> yet).
> > >> >
> > >> > Jun
> > >> >
> > >> > On Tue, Aug 29, 2017 at 8:56 AM, Tom Bentley  >
> > >> > wrote:
> > >> >
> > >> > > Hi all,
> > >> > >
> > >> > > I would like to start the vote on KIP-183 which will provide an
> > >> > AdminClient
> > >> > > interface for electing the preferred replica, and refactor the
> > >> > > kafka-preferred-replica-election.sh tool to use this interface.
> > More
> > >> > > details here:
> > >> > >
> > >> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-183+-+Change+PreferredReplicaLeaderElectionCommand+to+use+AdminClient
> > >> > >
> > >> > >
> > >> > > Regards,
> > >> > >
> > >> > > Tom
> > >> > >
> > >> >
> > >>
> > >
> > >
> >
>


[GitHub] kafka pull request #3802: KAFKA-5844: add groupBy(selector, serialized) to K...

2017-09-06 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3802

KAFKA-5844: add groupBy(selector, serialized) to Ktable

add `KTable#groupBy(KeyValueMapper, Serialized)` and deprecate the overload 
with `Serde` params
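A brief sketch of the new overload, assuming the KIP-182 StreamsBuilder/Serialized API and a placeholder input topic:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KGroupedTable;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Serialized;

public class GroupBySerializedExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // "input-topic" is a placeholder topic name.
        KTable<String, String> table = builder.table("input-topic");

        // New overload: serdes are supplied via Serialized instead of the
        // deprecated (Serde, Serde) parameters.
        KGroupedTable<String, String> grouped = table.groupBy(
                (key, value) -> KeyValue.pair(value, key),         // regroup by value
                Serialized.with(Serdes.String(), Serdes.String()));

        System.out.println(builder.build().describe());
    }
}
```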


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka kip-182-ktable-groupby

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3802.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3802


commit 56bfba7ab8b58a20287f93ce662d9a2b3a0ae141
Author: Damian Guy 
Date:   2017-09-06T13:48:37Z

add groupBy(selector, serialized) to Ktable




---


[jira] [Created] (KAFKA-5844) Add groupBy(KeyValueMapper, Serialized) to KTable

2017-09-06 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5844:
-

 Summary: Add groupBy(KeyValueMapper, Serialized) to KTable
 Key: KAFKA-5844
 URL: https://issues.apache.org/jira/browse/KAFKA-5844
 Project: Kafka
  Issue Type: Sub-task
  Components: streams
Reporter: Damian Guy
Assignee: Damian Guy
 Fix For: 1.0.0


part of KIP-182
add {{KTable#groupBy(KeyValueMapper, Serialized)}} and deprecate the overload 
with {{Serde}} params





[GitHub] kafka pull request #3801: MINOR: Include response in request log

2017-09-06 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3801

MINOR: Include response in request log

It's implemented such that there is no overhead if request logging is
disabled.

Also:
- Reduce metrics computation duplication in `updateRequestMetrics`
- Change a couple of log calls to use string interpolation instead of 
`format`
- Fix a few compiler warnings related to unused imports and unused default
arguments.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka log-response-in-request-log

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3801.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3801


commit f3d3c7436e5049b833655797757af736b0d8f7a1
Author: Ismael Juma 
Date:   2017-09-06T12:23:44Z

MINOR: Add response to the request log

commit 529a6fdd2bb929c04abe76d9e2644ca1fcd47d85
Author: Ismael Juma 
Date:   2017-09-06T12:39:22Z

Attempt at reducing duplication in `updateRequestMetrics`

commit da2dd69c92ca44b44ca436d1bf682f79596c8c0d
Author: Ismael Juma 
Date:   2017-09-06T12:40:12Z

Use string interpolation instead of `format` in a couple of log calls

commit 200915466ca5647ffd20198d35790412ae90f5c9
Author: Ismael Juma 
Date:   2017-09-06T12:40:54Z

Fix some compiler warnings




---


Build failed in Jenkins: kafka-trunk-jdk7 #2714

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] KAFKA-5819; Add Joined class and relevant KStream join overloads

--
[...truncated 921.66 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED


Build failed in Jenkins: kafka-trunk-jdk7 #2713

2017-09-06 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] KAFKA-5817; Add Serialized class and overloads to KStream#groupBy 
and

--
[...truncated 2.02 MB...]
org.apache.kafka.clients.producer.KafkaProducerTest > 
testTopicRefreshInMetadata PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testPartitionsForWithNullTopic STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testPartitionsForWithNullTopic PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testNoSerializerProvided 
STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > testNoSerializerProvided 
PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorFailureCloseResource STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorFailureCloseResource PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorWithSerializers STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testConstructorWithSerializers PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testHeaders STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > testHeaders PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > testMetadataFetch STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > testMetadataFetch PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testMetadataFetchOnStaleMetadata STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testMetadataFetchOnStaleMetadata PASSED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInterceptorPartitionSetOnTooLargeRecord STARTED

org.apache.kafka.clients.producer.KafkaProducerTest > 
testInterceptorPartitionSetOnTooLargeRecord PASSED

org.apache.kafka.clients.producer.ProducerRecordTest > testInvalidRecords 
STARTED

org.apache.kafka.clients.producer.ProducerRecordTest > testInvalidRecords PASSED

org.apache.kafka.clients.producer.ProducerRecordTest > testEqualsAndHashCode 
STARTED

org.apache.kafka.clients.producer.ProducerRecordTest > testEqualsAndHashCode 
PASSED

org.apache.kafka.clients.producer.RecordMetadataTest > testNullChecksum STARTED

org.apache.kafka.clients.producer.RecordMetadataTest > testNullChecksum PASSED

org.apache.kafka.clients.producer.RecordMetadataTest > 
testConstructionWithRelativeOffset STARTED

org.apache.kafka.clients.producer.RecordMetadataTest > 
testConstructionWithRelativeOffset PASSED

org.apache.kafka.clients.producer.RecordMetadataTest > 
testConstructionWithMissingRelativeOffset STARTED

org.apache.kafka.clients.producer.RecordMetadataTest > 
testConstructionWithMissingRelativeOffset PASSED

org.apache.kafka.clients.ClientUtilsTest > testOnlyBadHostname STARTED

org.apache.kafka.clients.ClientUtilsTest > testOnlyBadHostname PASSED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses STARTED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses PASSED

org.apache.kafka.clients.ClientUtilsTest > testNoPort STARTED

org.apache.kafka.clients.ClientUtilsTest > testNoPort PASSED

org.apache.kafka.clients.NodeApiVersionsTest > 
testUsableVersionNoDesiredVersionReturnsLatestUsable STARTED

org.apache.kafka.clients.NodeApiVersionsTest > 
testUsableVersionNoDesiredVersionReturnsLatestUsable PASSED

org.apache.kafka.clients.NodeApiVersionsTest > 
testUsableVersionCalculationNoKnownVersions STARTED

org.apache.kafka.clients.NodeApiVersionsTest > 
testUsableVersionCalculationNoKnownVersions PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testVersionsToString STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testVersionsToString PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testUnsupportedVersionsToString 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUnsupportedVersionsToString 
PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testDesiredVersionTooLarge 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testDesiredVersionTooLarge PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testDesiredVersionTooSmall 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testDesiredVersionTooSmall PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testDesiredVersion STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testDesiredVersion PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionOutOfRange 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionOutOfRange 
PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testUnknownApiVersionsToString 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUnknownApiVersionsToString 
PASSED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionLatestVersions 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionLatestVersions 
PASSED

org.apache.kafka.clients.NetworkClientTest > testSimpleRequestResponse STARTED


[GitHub] kafka pull request #3776: KAFKA-5819: Add Joined class and relevant KStream ...

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3776


---


Re: [VOTE] 0.11.0.1 RC0

2017-09-06 Thread Damian Guy
Resending as I wasn't part of the kafka-clients mailing list.

On Tue, 5 Sep 2017 at 21:34 Damian Guy  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 0.11.0.1.
>
> This is a bug fix release and it includes fixes and improvements from 49
> JIRAs (including a few critical bugs).
>
> Release notes for the 0.11.0.1 release:
> http://home.apache.org/~damianguy/kafka-0.11.0.1-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Saturday, September 9, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~damianguy/kafka-0.11.0.1-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~damianguy/kafka-0.11.0.1-rc0/javadoc/
>
> * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.1 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=a8aa61266aedcf62e45b3595a2cf68c819ca1a6c
>
>
> * Documentation:
> Note the documentation can't be pushed live due to changes that will not
> go live until the release. You can manually verify by downloading
> http://home.apache.org/~damianguy/kafka-0.11.0.1-rc0/kafka_2.11-0.11.0.1-site-docs.tgz
>
>
> * Protocol:
> http://kafka.apache.org/0110/protocol.html
>
> * Successful Jenkins builds for the 0.11.0 branch:
> Unit/integration tests:
> https://builds.apache.org/job/kafka-0.11.0-jdk7/298
>
> System tests:
> http://confluent-kafka-0-11-0-system-test-results.s3-us-west-2.amazonaws.com/2017-09-05--001.1504612096--apache--0.11.0--7b6e5f9/report.html
>
>
> Thanks,
> Damian
>


[GitHub] kafka pull request #3772: KAFKA-5817: Add Serialized class and overloads to ...

2017-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3772


---


Re: [VOTE] KIP-188 - Add new metrics to support health checks

2017-09-06 Thread Rajini Sivaram
Ok, thanks, leaving as is.

On Wed, Sep 6, 2017 at 1:22 AM, Jason Gustafson  wrote:

> >
> > I think I prefer the names with `Message` in them. For people less
> familiar
> > with Kafka, it makes it a bit clearer, I think.
>
>
> Works for me.
>
> On Tue, Sep 5, 2017 at 5:19 PM, Ismael Juma  wrote:
>
> > I think I prefer the names with `Message` in them. For people less
> familiar
> > with Kafka, it makes it a bit clearer, I think.
> >
> > Ismael
> >
> > On Wed, Sep 6, 2017 at 12:39 AM, Rajini Sivaram  >
> > wrote:
> >
> > > I am ok with dropping 'Message'. So the names would be
> > > FetchConversionsPerSec,
> > > ProduceConversionsPerSec and ConversionsTimeMs. The first two sound
> fine.
> > > Not so sure about ConversionsTimeMs, but since it appears with
> > > Produce/Fetch as the request tag, it should be ok. I haven't updated
> the
> > > KIP yet. If there are no objections, I will update the KIP tomorrow.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Tue, Sep 5, 2017 at 7:23 PM, Jason Gustafson 
> > > wrote:
> > >
> > > > >
> > > > > I was wondering about the message versus record question. The fact
> > that
> > > > we
> > > > > already have MessagesInPerSec seemed to favour the former. The
> other
> > > > aspect
> > > > > is that for produce requests, we can up convert as well, so it
> seemed
> > > > > better to keep it generic.
> > > >
> > > >
> > > > Yeah, so I thought maybe we could bypass the question and drop
> > `Message`
> > > > from the names if they were already clear enough. I'm fine with
> either
> > > way.
> > > >
> > > > On Tue, Sep 5, 2017 at 11:09 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > I was wondering about the message versus record question. The fact
> > that
> > > > we
> > > > > already have MessagesInPerSec seemed to favour the former. The
> other
> > > > aspect
> > > > > is that for produce requests, we can up convert as well, so it
> seemed
> > > > > better to keep it generic.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Tue, Sep 5, 2017 at 6:51 PM, Jason Gustafson <
> ja...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > +1 Lots of good stuff in here.
> > > > > >
> > > > > > One minor nit: in the name `FetchDownConversionsPerSec`, it's
> > > implicit
> > > > > that
> > > > > > down-conversion is for messages. Could we do the same for
> > > > > > `MessageConversionsTimeMs` and drop the `Message`? Then we don't
> > have
> > > > to
> > > > > > decide if it should be 'Record' instead.
> > > > > >
> > > > > > On Tue, Sep 5, 2017 at 10:20 AM, Ismael Juma 
> > > > wrote:
> > > > > >
> > > > > > > Thanks Rajini.
> > > > > > >
> > > > > > > 1. I meant a topic metric, but we could have one for fetch and
> > one
> > > > for
> > > > > > > produce differentiated by the additional tag. The advantage is
> > that
> > > > the
> > > > > > > name would be consistent with the request metric for message
> > > > > conversions.
> > > > > > > However, on closer inspection, this would make the name
> > > inconsistent
> > > > > with
> > > > > > > the broker topic metrics:
> > > > > > >
> > > > > > > val totalProduceRequestRate =
> > > > > > > newMeter(BrokerTopicStats.TotalProduceRequestsPerSec,
> > "requests",
> > > > > > > TimeUnit.SECONDS, tags)
> > > > > > > val totalFetchRequestRate =
> > > > > > > newMeter(BrokerTopicStats.TotalFetchRequestsPerSec,
> "requests",
> > > > > > > TimeUnit.SECONDS, tags)
> > > > > > >
> > > > > > > So, maybe we can instead go for FetchMessageConversionsPerSecond
> > > > > > > and ProduceMessageConversionsPerSec.
> > > > > > >
> > > > > > > 2. Sounds good.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Tue, Sep 5, 2017 at 5:46 PM, Rajini Sivaram <
> > > > > rajinisiva...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Ismael,
> > > > > > > >
> > > > > > > > 1. At the moment FetchDownConversionsPerSec is a topic metric
> > > while
> > > > > > > > MessageConversionTimeMs is a request metric which indicates
> > > > > > Produce/Fetch
> > > > > > > > as a tag. Are you suggesting that we should convert
> > > > > > > > FetchDownConversionsPerSec to a request metric called
> > > > > > > > MessageConversionsPerSec
> > > > > > > > for fetch requests?
> > > > > > > >
> > > > > > > > 2. TemporaryMessageSize for Produce/Fetch would indicate the
> > > space
> > > > > > > > allocated for conversions. For other requests, this metric
> will
> > > not
> > > > > be
> > > > > > > > created (unless we find a request where the size is large and
> > > this
> > > > > > > > information is useful).
> > > > > > > >
> > > > > > > > Thank you,
> > > > > > > >
> > > > > > > > Rajini
> > > > > > > >
> > > > > > > >
> > > > > > > > On Tue, Sep 5, 2017 at 4:55 PM, Ismael Juma <
> ism...@juma.me.uk
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks Rajini, +1 (binding) from me. Just a few 

Fw: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-09-06 Thread Paolo Patierno
Hi devs,


I haven't seen any votes for this since last month.

Is there something that should be addressed in the KIP? (There were no further 
comments, which is why I started the vote.)


Thanks.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Paolo Patierno 
Sent: Tuesday, August 8, 2017 10:53 AM
To: dev@kafka.apache.org
Subject: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools


Hi devs,


I didn't see any more comments about this KIP. The JIRAs related to the first 
step (so making --new-consumer as deprecated with warning messages) are merged.

I'd like to start a vote for this KIP.


Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Re: [VOTE] KIP-191: KafkaConsumer.subscribe() overload that takes just Pattern

2017-09-06 Thread Attila Kreiner
Hi All,

No more votes received for this, so the final decision is: accepted.

Thx,
Attila
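For reference, a minimal sketch of the newly accepted overload; the broker address, group id, and topic pattern are placeholders:

```java
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PatternSubscribe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "demo-group");              // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // KIP-191 overload: subscribe by Pattern without having to pass
            // a ConsumerRebalanceListener.
            consumer.subscribe(Pattern.compile("metrics-.*"));
        }
    }
}
```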

2017-09-02 6:57 GMT+02:00 Attila Kreiner :

> Hi Jason,
>
> Sorry, I didn't know that one. Then I assume we should wait a few days
> before moving forward.
>
> Regards,
> Attila
>
> 2017-09-01 18:01 GMT+02:00 Jason Gustafson :
>
>> Hi Attila,
>>
>> We should allow 3 days for the vote even if it is clear it will pass.
>>
>> Thanks,
>> Jason
>>
>> On Fri, Sep 1, 2017 at 8:16 AM, Attila Kreiner  wrote:
>>
>> > Hi All,
>> >
>> > Looks like accepted. Updating KIP page.
>> >
>> > Thx,
>> > Attila
>> >
>> > 2017-08-31 19:22 GMT+02:00 Matthias J. Sax :
>> >
>> > > +1
>> > >
>> > > On 8/31/17 8:49 AM, Jason Gustafson wrote:
>> > > > +1
>> > > >
>> > > > On Thu, Aug 31, 2017 at 8:36 AM, Mickael Maison <
>> > > mickael.mai...@gmail.com>
>> > > > wrote:
>> > > >
>> > > >> +1 non binding
>> > > >> Thanks
>> > > >>
>> > > >> On Thu, Aug 31, 2017 at 8:35 AM, Guozhang Wang > >
>> > > wrote:
>> > > >>> +1
>> > > >>>
>> > > >>> On Thu, Aug 31, 2017 at 7:57 AM, Bill Bejeck 
>> > > wrote:
>> > > >>>
>> > >  +1
>> > > 
>> > >  Thanks,
>> > >  Bill
>> > > 
>> > >  On Thu, Aug 31, 2017 at 10:31 AM, Vahid S Hashemian <
>> > >  vahidhashem...@us.ibm.com> wrote:
>> > > 
>> > > > +1 (non-binding)
>> > > >
>> > > > Thanks.
>> > > > --Vahid
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > From:   Molnár Bálint 
>> > > > To: dev@kafka.apache.org
>> > > > Date:   08/31/2017 02:13 AM
>> > > > Subject:Re: [VOTE] KIP-191: KafkaConsumer.subscribe()
>> > > overload
>> > > > that takes just Pattern
>> > > >
>> > > >
>> > > >
>> > > > +1 (non-binding)
>> > > >
>> > > > 2017-08-31 10:33 GMT+02:00 Manikumar > >:
>> > > >
>> > > >> +1 (non-binding)
>> > > >>
>> > > >> On Thu, Aug 31, 2017 at 1:53 PM, Ismael Juma <
>> isma...@gmail.com>
>> > >  wrote:
>> > > >>
>> > > >>> Thanks for the KIP, +1 (binding).
>> > > >>>
>> > > >>> Ismael
>> > > >>>
>> > > >>> On 31 Aug 2017 8:38 am, "Attila Kreiner" 
>> > > >> wrote:
>> > > >>>
>> > > >>> Hi All,
>> > > >>>
>> > > >>> Thx for the comments, I pretty much see a consensus here. So
>> I'd
>> > > >> like
>> > > > to
>> > > >>> start the vote for:
>> > > >>>
>> > > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-191%3A+KafkaConsumer.subscribe%28%29+overload+that+takes+just+Pattern
>> > > >>>
>> > > >>> Cheers,
>> > > >>> Attila
>> > > >>>
>> > > >>
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > 
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>> --
>> > > >>> -- Guozhang
>> > > >>
>> > > >
>> > >
>> > >
>> >
>>
>
>
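[Editor's note] For readers landing on this thread from the archive: KIP-191 adds a `KafkaConsumer.subscribe(Pattern)` overload, so a pattern subscription no longer requires passing a `ConsumerRebalanceListener` alongside the regex. Below is a self-contained sketch of the matching semantics only (the topic names and regex are invented for illustration; it reproduces the matching locally rather than calling the consumer):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class PatternSubscriptionDemo {
    // A pattern subscription matches every topic whose full name matches
    // the regex; this helper reproduces that matching locally.
    public static List<String> matchingTopics(Pattern pattern, List<String> allTopics) {
        return allTopics.stream()
                .filter(t -> pattern.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("orders-.*");
        List<String> topics = List.of("orders-eu", "orders-us", "payments");
        // With KIP-191 the consumer call is simply consumer.subscribe(pattern)
        // instead of subscribe(pattern, rebalanceListener).
        System.out.println(matchingTopics(pattern, topics)); // [orders-eu, orders-us]
    }
}
```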


Re: [VOTE] KIP-138: Change punctuate semantics

2017-09-06 Thread Michael Noll
Thanks, Guozhang.

FWIW here's my (non-binding) +1.  Sorry for the delay, Gmail decided to
mark this thread as spam :roll-eyes:.

-Michael



On Tue, Sep 5, 2017 at 10:36 PM, Guozhang Wang  wrote:

> Thanks for your inputs. The main motivation is indeed to achieve
> consistency as we use "wall-clock-time" in some existing classes already,
> e.g.
> http://kafka.apache.org/0101/javadoc/index.html?org/apache/
> kafka/streams/processor/WallclockTimestampExtractor.html
> .
>
> Since there is no objections I will go ahead and do the renaming now.
>
> Guozhang
>
>
> On Thu, Aug 31, 2017 at 1:24 AM, Michal Borowiecki <
> michal.borowie...@openbet.com> wrote:
>
> > +0 ambivalent about the naming but do agree that it should be kept
> consistent
> >
> >
> >
> > On 31/08/17 00:43, Matthias J. Sax wrote:
> >
> >> +1
> >>
> >> On 8/30/17 12:00 PM, Bill Bejeck wrote:
> >>
> >>> +1
> >>>
> >>> On Wed, Aug 30, 2017 at 1:06 PM, Damian Guy 
> >>> wrote:
> >>>
> >>> +1
> 
>  On Wed, 30 Aug 2017 at 17:49 Guozhang Wang 
> wrote:
> 
>  Hello Michal and community:
> >
> > While working on updating the web docs and java docs for this KIP, I
> > felt
> > that the term SYSTEM_TIME was a bit confusing sometimes from a reader's
> > perspective as we are actually talking about wall-clock time. I'd
> hence
> > like to propose a minor addendum to this adopted KIP before the
> > release
> >
>  to
> 
> > rename this enum:
> >
> > SYSTEM_TIME
> >
> > to
> >
> > WALL_CLOCK_TIME
> >
> >
> > For people who have voted on this KIP, could you vote again for this
> > addendum (detailed discussions can be found in this PR:
> > https://github.com/apache/kafka/pull/3732#issuecomment-326043657)?
> >
> >
> > Guozhang
> >
> >
> >
> >
> > On Sat, May 13, 2017 at 8:13 AM, Michal Borowiecki <
> > michal.borowie...@openbet.com> wrote:
> >
> > Thank you all!
> >>
> >> This KIP passed the vote with 3 binding and 5 non-binding +1s:
> >>
> >> +1 (binding) from Guozhang Wang, Ismael Juma and Ewen
> Cheslack-Postava
> >>
> >> +1 (non-binding) from Matthias J. Sax, Bill Bejeck, Eno Thereska,
> Arun
> >> Mathew and Thomas Becker
> >>
> >>
> >> Created KAFKA-5233 <https://issues.apache.org/jira/browse/KAFKA-5233>
> >> to track implementation.
> >> It's been a fantastic experience for me working with this great
> >>
> > community
> 
> > to produce a KIP for the first time.
> >> Big thank you to everyone who contributed!
> >>
> >> Cheers,
> >> Michał
> >>
> >>
> >> On 12/05/17 02:01, Ismael Juma wrote:
> >>
> >> Michal, you have enough votes, would you like to close the vote?
> >>
> >> Ismael
> >>
> >> On Thu, May 11, 2017 at 4:49 PM, Ewen Cheslack-Postava <
> >>
> > e...@confluent.io> 
> 
> > wrote:
> >>
> >>
> >> +1 (binding)
> >>
> >> -Ewen
> >>
> >> On Thu, May 11, 2017 at 7:12 AM, Ismael Juma  <
> >>
> > ism...@juma.me.uk> wrote:
> 
> >
> >> Thanks for the KIP, Michal. +1(binding) from me.
> >>
> >> Ismael
> >>
> >> On Sat, May 6, 2017 at 6:18 PM, Michal Borowiecki <
> >>
> > michal.borowie...@openbet.com> wrote:
> 
> >
> >> Hi all,
> >>
> >> Given I'm not seeing any contentious issues remaining on the
> >> discussion
> >> thread, I'd like to initiate the vote for:
> >>
> >> KIP-138: Change punctuate semantics
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%
> >> 3A+Change+punctuate+semantics
> >>
> >>
> >> Thanks,
> >> Michał
> >> -- Michal Borowiecki
> >> Senior Software Engineer L4
> >> T: +44 208 742 1600
> >> +44 203 249 8448
> >> E: michal.borowie...@openbet.com
> >> W: www.openbet.com
> >> OpenBet Ltd
> >> Chiswick Park Building 9
> >> 566 Chiswick High Rd
> >> London
> >> W4 5XT
> >> UK
> 
> > This message is confidential and intended only for the addressee. If
> >> you
> >> have received this message in error, please immediately notify
> >>
> > thepostmas...@openbet.com and delete it from your system as well as
>  any
> 
> > copies. The content of e-mails as well as traffic data may be
> monitored
> >>
> >> by
> >>
> >> OpenBet for employment and security purposes. To protect the
> >> environment
> >> please 

Re: [DISCUSS] KIP-183 - Change PreferredReplicaLeaderElectionCommand to use AdminClient

2017-09-06 Thread Tom Bentley
Hi Colin,

Thanks for taking the time to respond.

On 5 September 2017 at 22:22, Colin McCabe  wrote:

> ...
> Why does there need to be a map at all in the API?


From a purely technical PoV there doesn't, but doing something else would
make the API inconsistent with other similar AdminClient *Results classes,
which all expose a Map directly.


> Why not just have
> something like this:
>

I agree this would be a better solution. I will update the KIP and ask
people to vote again. (Is that the right process?)

It might be worth bearing this in mind for future AdminClient APIs:
Exposing a Map directly means you can't retrofit handling a null argument
to mean "all the things", whereas wrapping the map would allow that.

Thanks again,

Tom
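[Editor's note] Tom's point about wrapping the map can be made concrete with a hypothetical Results shape (class and method names below are invented for illustration, not the actual AdminClient API): putting accessors in front of the map leaves room to later accept a null argument meaning "all the things", because the result type no longer has to be a Map keyed by a caller-supplied set.

```java
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// Hypothetical sketch only -- names invented, not the real AdminClient API.
public class ElectPreferredLeadersResult {
    // Maps "topic-partition" to an error, or Optional.empty() on success.
    private final Map<String, Optional<Throwable>> outcomes;

    public ElectPreferredLeadersResult(Map<String, Optional<Throwable>> outcomes) {
        this.outcomes = outcomes;
    }

    // Accessors instead of the raw Map: a future "null means all partitions"
    // request could still return this type, with the set resolved server-side.
    public Optional<Throwable> errorFor(String topicPartition) {
        return outcomes.getOrDefault(topicPartition, Optional.empty());
    }

    public Set<String> partitions() {
        return outcomes.keySet();
    }
}
```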


[GitHub] kafka pull request #3800: KAFKA-5764: Add toLowerCase support to sasl.kerber...

2017-09-06 Thread omkreddy
GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/3800

KAFKA-5764:  Add toLowerCase support to sasl.kerberos.principal.to.local 
rule



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka KAFKA-5764-REGEX

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3800.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3800


commit b155f3fd869d6ad00a02b76ca18303baac31e617
Author: Manikumar Reddy 
Date:   2017-09-05T10:35:05Z

KAFKA-5764:  Add toLowerCase support to sasl.kerberos.principal.to.local 
rules




---
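[Editor's note] For context on what the PR enables (my summary, not text from the PR): principal-to-local rules apply a regex substitution to the Kerberos principal to derive the short name, and KAFKA-5764 adds a modifier that lowercases the result. A simplified model of that mapping step (the principal and regex are made up; this is not Kafka's actual rule parser):

```java
import java.util.Locale;
import java.util.regex.Pattern;

public class PrincipalRuleDemo {
    // Simplified model of one principal-to-local mapping step: apply a regex
    // substitution, then optionally lowercase the result -- the behaviour the
    // new modifier adds.
    public static String apply(String principal, String match,
                               String replacement, boolean toLowerCase) {
        String shortName = Pattern.compile(match)
                .matcher(principal)
                .replaceAll(replacement);
        return toLowerCase ? shortName.toLowerCase(Locale.ROOT) : shortName;
    }

    public static void main(String[] args) {
        // Strip the realm, then lowercase: Alice@EXAMPLE.COM -> alice
        System.out.println(apply("Alice@EXAMPLE.COM", "@.*", "", true)); // alice
    }
}
```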


[GitHub] kafka pull request #3799: KAFKA-5597 [WIP] Alternate way to do Metrics docs ...

2017-09-06 Thread wushujames
GitHub user wushujames opened a pull request:

https://github.com/apache/kafka/pull/3799

KAFKA-5597 [WIP] Alternate way to do Metrics docs generation

I was about to start on the next round of autogeneration of metrics docs, 
but I wanted to get @guozhangwang's opinion on this first. This is a possible 
alternate way to do autogeneration of metrics docs that may look a 
little nicer for the developer. Posting this to get some feedback on whether 
the original way or this new way looks better.

Instead of having the metrics registry and the 
org.apache.kafka.common.metrics.Metrics object be separate things, have the 
metrics registry hold a copy of the Metrics object. That way, all the 
metricInstance stuff is hidden, and we don't have to make sure that the metrics 
registry and the Metrics object are configured identically (with the same 
tags).

I personally think this looks a little better. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wushujames/kafka 
producer_sender_metrics_docs_different

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3799.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3799


commit 7b48ab7030c58879103bccf8c3f7dff59444a6a6
Author: James Cheng 
Date:   2017-09-06T07:23:23Z

Instead of having the metrics registry and the 
org.apache.kafka.common.metrics.Metrics object be separate things, have the 
metrics registry hold a copy of the Metrics object. That way, all the 
metricInstance stuff is hidden, and we don't have to make sure they are 
configured identically (with the same tags).




---


Re: [DISCUSS] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-09-06 Thread Tom Bentley
Hi Ted and Colin,

Thanks for the comments.

It seems you're both happier with reassign rather than assign, so I'm happy
to stick with that.


On 5 September 2017 at 18:46, Colin McCabe  wrote:

> ...


> Do we expect that reducing the number of partitions will ever be
> supported by this API?  It seems like decreasing would require a
> different API-- one which supported data movement, had a "check status
> of this operation" feature, etc. etc.  If this API is only ever going to
> be used to increase the number of partitions, I think we should just
> call it "increasePartitionCount" to avoid confusion.
>

I thought a little about the decrease possibility (hence the static factory
methods on PartitionCount), but not really in enough detail. I suppose a
decrease process could look like this:

1. Producers start partitioning based on the decreased partition count.
2. Consumers continue to consume from all partitions.
3. At some point all the records in the old partitions have expired and
they can be deleted.

This wouldn't work for compacted topics. Of course a more aggressive
strategy is also possible where we forcibly move data from the partitions
to be deleted.

Anyway, in either case the process would be long running, whereas the
increase case is fast, so the semantics are quite different. So I agree,
lets rename the method to make clear that it's only for increasing the
partition count.
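Step 1 above is the crux: the producer's default partitioner derives the target partition from the current partition count, so the same key maps to a different partition once the count changes. A toy sketch of that dependency (Kafka's real default partitioner hashes the serialized key with murmur2, not String.hashCode):

```java
public class PartitionCountDemo {
    // Toy partitioner: hash the key, mask off the sign bit, take it modulo
    // the partition count -- the same shape as Kafka's default partitioner.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Once the count changes, the same key can land elsewhere, which is
        // why step 3 must wait for records written under the old count to
        // expire before the surplus partitions can be deleted.
        System.out.println(partitionFor("order-42", 12));
        System.out.println(partitionFor("order-42", 8));
    }
}
```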

>
> > Outstanding questions:
> >
> > 1. Is the proposed alterInterReplicaThrottle() API really better than
> > changing the throttle via alterConfigs()?
>
> That's a good point.  I would argue that we should just use alterConfigs
> to set the broker configuration, rather than having a special RPC just
> for this.
>

Yes, I'm minded to agree.

The reason I originally thought a special API might be better was that people
might feel the DynamicConfig mechanism (which seems to exist only to support
these throttles) was an implementation detail of the throttles. But I now
realise that they're visible via kafka-topics.sh, so they're effectively
already a public API.


>
> ...
> > Would it be a problem that
> > triggering the reassignment required ClusterAction on the Cluster, but
> > throttling the assignment required Alter on the Topic? What if a user had
> > the former permission, but not the latter?
>
> We've been trying to reserve ClusterAction on Cluster for
> broker-initiated operations.  Alter on Cluster is the ACL for "root
> stuff" and I would argue that it should be what we use here.
>
> For reconfiguring the broker, I think we should follow KIP-133 and use
> AlterConfigs on the Broker resource.  (Of course, if you just use the
> existing alterConfigs call, you get this without any special effort.)
>

Yes, sorry, what I put in the email about authorisation wasn't what I put
in the KIP (I revised the KIP after drafting the email and then forgot to
update the email).

Although KIP-133 proposes a Broker resource, I don't see one in the code
and KIP-133 was supposedly delivered in 0.11. Can anyone fill in the story
here? Is it simply because the functionality to update broker configs
hasn't been implemented yet?

As currently proposed, both reassignPartitions() and alterInterBrokerThrottle()
require Alter on the Cluster. If we used alterConfigs() to set the
throttles then we create a situation where the triggering of the
reassignment required Alter(Cluster), but the throttling required
Alter(Broker), and the user might have the former but not the latter. I
don't think this is likely to be a big deal in practice, but maybe others
disagree?


> >
> > 2. Is reassignPartitions() really the best name? I find the distinction
> > between reassigning and just assigning to be of limited value, and
> > "reassign" is misleading when the APIs now used for changing the
> > replication factor too. Since the API is asynchronous/long running, it
> > might be better to indicate that in the name some how. What about
> > startPartitionAssignment()?
>
> Good idea -- I like the idea of using "start" or "initiate" to indicate
> that this is kicking off a long-running operation.
>
> "reassign" seemed like a natural choice to me since this is changing an
> existing assignment.  It was assigned (when the topic it was created)--
> now it's being re-assigned.  If you change it to just "assign" then it
> feels confusing to me.  A new user might ask if "assigning partitions"
> is a step that they should take after creating a topic, similar to how
> subscribing to topics is a step you take after creating a consumer.
>

Yeah, I find this new user argument persuasive, so I'm happy to stick with
reassign.

Thanks again for the feedback,

Tom


[GitHub] kafka pull request #3798: KAFKA-5841: AbstractIndex should offer `makeReadOn...

2017-09-06 Thread huxihx
GitHub user huxihx opened a pull request:

https://github.com/apache/kafka/pull/3798

KAFKA-5841: AbstractIndex should offer `makeReadOnly` method

AbstractIndex should offer a `makeReadOnly` method that changes the 
underlying MappedByteBuffer to read-only.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka KAFKA-5841

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3798.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3798


commit a2d97ff6c814368ac7e7eadc63569de36d3965af
Author: huxihx 
Date:   2017-09-06T06:48:54Z

KAFKA-5841: AbstractIndex should offer `makeReadOnly` method as mentioned 
in comments




---