Re: About Testing Stream Applications Documentation

2019-09-18 Thread uğur
Hi Bruno,

Let me check whether it is possible for me to edit it.

Regards,
Ugur Yeter

On Thu, 19 Sep 2019, 00:48 Bruno Cadonna,  wrote:

> Hi Ugur,
>
> Your finding looks correct to me. Do you mind fixing this issue?
>
> Best,
> Bruno
>
> On Tue, Sep 17, 2019 at 12:54 PM uğur  wrote:
> >
> > Hi,
> >
> > I am not sure if it is the right email address to write about this topic,
> > please correct me if I am wrong.
> >
> > As I read the Testing Streams Applications documentation, I believe I
> > noticed a wrongly named method, shouldNotUpdateStoreForLargerValue. As I
> > understand it, either the name should change
> > to shouldUpdateStoreForLargerValue or the assertion should be different.
> >
> > Regards,
> > Ugur Yeter
>
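For context, the test Ugur refers to shows up in the TopologyTestDriver runs
below (shouldNotUpdateStoreForLargerValue). Here is a minimal reconstruction
of the documented pattern, a topology that keeps the maximum value per key;
the topology, topic, and store names are assumptions, not the verbatim docs
code. The store *is* updated for a larger value, which is why the name does
not match the assertion:

import static org.junit.Assert.assertEquals;

import java.util.Properties;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.test.ConsumerRecordFactory;
import org.junit.Test;

public class MaxStoreDocExampleTest {

    @Test
    public void shouldUpdateStoreForLargerValue() {
        // Topology that keeps the maximum value seen per key.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.Long()))
               .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
               .reduce(Math::max, Materialized.as("max-store"));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "doc-example-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            ConsumerRecordFactory<String, Long> factory = new ConsumerRecordFactory<>(
                "input-topic", new StringSerializer(), new LongSerializer());
            driver.pipeInput(factory.create("input-topic", "a", 21L));
            driver.pipeInput(factory.create("input-topic", "a", 42L));

            // The larger value does replace the smaller one, so the test name
            // must say "shouldUpdate...", or the assertion must expect 21L.
            KeyValueStore<String, Long> store = driver.getKeyValueStore("max-store");
            assertEquals(Long.valueOf(42L), store.get("a"));
        }
    }
}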


Build failed in Jenkins: kafka-trunk-jdk11 #821

2019-09-18 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update dependencies for Kafka 2.4 (part 2) (#7333)

--
[...truncated 2.63 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3910

2019-09-18 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] Add recent versions of Kafka to the matrix of ConnectDistributedTest

--
[...truncated 2.44 MB...]

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls 
STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls 
PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.LogAppendTimeTest > testProduceConsume STARTED

kafka.api.LogAppendTimeTest > testProduceConsume PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED
ERROR: Could not install GRADLE_4_10_3_HOME
java.lang.NullPointerException

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.ConsumerBounceTest > testCloseDuringRebalance STARTED

kafka.api.ConsumerBounceTest > testCloseDuringRebalance PASSED

kafka.api.ConsumerBounceTest > testClose STARTED
kafka.api.ConsumerBounceTest.testClose failed, log available in 

Jenkins build is back to normal : kafka-trunk-jdk11 #820

2019-09-18 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-479 Add Materialized to Join

2019-09-18 Thread Bill Bejeck
Good catch!  I meant to propose the name "StreamJoin". I have updated
the KIP accordingly.

As for the name, I originally had "StreamJoined" and updated it after some
comments on the KIP.
I do feel that the name "StreamJoin" is better in this case since it is
used to represent a stream join configuration, whereas "StreamJoined"
feels more like it's being used as a verb (past tense).

WDYT?
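A rough sketch of how the proposed config object surfaces in the DSL,
whatever the final class name ends up being; this assumes the "StreamJoined"
spelling and the with(...) factory from the KIP, and the topics, serdes, and
store name are illustrative:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;

public class StreamJoinedSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> left =
            builder.stream("left-topic", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> right =
            builder.stream("right-topic", Consumed.with(Serdes.String(), Serdes.String()));

        left.join(right,
                  (l, r) -> l + "," + r,
                  JoinWindows.of(Duration.ofMinutes(5)),
                  // The new config object: serdes for the key and both values,
                  // plus an explicit name for the join state stores, which
                  // previously could only be derived from Joined.withName.
                  StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String())
                              .withStoreName("my-join-store"));

        builder.build();
    }
}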




On Wed, Sep 18, 2019 at 4:48 PM Guozhang Wang  wrote:

> Hello Bill,
>
> The KIP's proposal has the code snippet name as "StreamJoined" but the
> class name defined is StreamJoin. Which one did you propose? Personally I
> think StreamJoined is better aligned with other control objects, but if
> you think otherwise I can be convinced too :)
>
>
> Guozhang
>
> On Wed, Sep 18, 2019 at 4:38 PM Bill Bejeck  wrote:
>
> > All, since we have updated KIP-479
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoin+config+object+to+Join
> > and
> > seem to have completed the discussion for the updates, I'd like to call
> for
> > everyone to vote again.
> >
> > Thanks,
> > Bill
> >
> > On Fri, Aug 2, 2019 at 10:46 AM Bill Bejeck  wrote:
> >
> > > +1 (binding) from myself.
> > >
> > >
> > > This vote has been open for 7 days now, so I'm closing this vote
> thread.
> > >
> > > KIP-479 had the following votes:
> > >
> > > binding +1s: 3 (Guozhang, Matthias, and Bill)
> > > -1 votes: none
> > >
> > > Thanks to everyone who voted and participated in the discussion for
> this
> > > KIP!
> > >
> > > -Bill
> > >
> > > On Mon, Jul 29, 2019 at 6:03 PM Guozhang Wang 
> > wrote:
> > >
> > >> +1 (binding)
> > >>
> > >> On Thu, Jul 25, 2019 at 7:39 PM Matthias J. Sax <
> matth...@confluent.io>
> > >> wrote:
> > >>
> > >> > +1 (binding)
> > >> >
> > >> > On 7/25/19 1:05 PM, Bill Bejeck wrote:
> > >> > > All,
> > >> > >
> > >> > > After a great discussion on KIP-479 (
> > >> > >
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+Materialized+to+Join
> > >> > )
> > >> > > I'd
> > >> > > like to start a vote.
> > >> > >
> > >> > > Thanks,
> > >> > > Bill
> > >> > >
> > >> >
> > >> >
> > >>
> > >> --
> > >> -- Guozhang
> > >>
> > >
> >
>
>
> --
> -- Guozhang
>


Re: [VOTE] KIP-479 Add Materialized to Join

2019-09-18 Thread Guozhang Wang
Hello Bill,

The KIP's proposal has the code snippet name as "StreamJoined" but the
class name defined is StreamJoin. Which one did you propose? Personally I
think StreamJoined is better aligned with other control objects, but if
you think otherwise I can be convinced too :)


Guozhang

On Wed, Sep 18, 2019 at 4:38 PM Bill Bejeck  wrote:

> All, since we have updated KIP-479
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoin+config+object+to+Join
> and
> seem to have completed the discussion for the updates, I'd like to call for
> everyone to vote again.
>
> Thanks,
> Bill
>
> On Fri, Aug 2, 2019 at 10:46 AM Bill Bejeck  wrote:
>
> > +1 (binding) from myself.
> >
> >
> > This vote has been open for 7 days now, so I'm closing this vote thread.
> >
> > KIP-479 had the following votes:
> >
> > binding +1s: 3 (Guozhang, Matthias, and Bill)
> > -1 votes: none
> >
> > Thanks to everyone who voted and participated in the discussion for this
> > KIP!
> >
> > -Bill
> >
> > On Mon, Jul 29, 2019 at 6:03 PM Guozhang Wang 
> wrote:
> >
> >> +1 (binding)
> >>
> >> On Thu, Jul 25, 2019 at 7:39 PM Matthias J. Sax 
> >> wrote:
> >>
> >> > +1 (binding)
> >> >
> >> > On 7/25/19 1:05 PM, Bill Bejeck wrote:
> >> > > All,
> >> > >
> >> > > After a great discussion on KIP-479 (
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+Materialized+to+Join
> >> > )
> >> > > I'd
> >> > > like to start a vote.
> >> > >
> >> > > Thanks,
> >> > > Bill
> >> > >
> >> >
> >> >
> >>
> >> --
> >> -- Guozhang
> >>
> >
>


-- 
-- Guozhang


Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2019-09-18 Thread Boyang Chen
Bumping this thread to see if someone could also review!

On Mon, Sep 9, 2019 at 5:00 PM Boyang Chen 
wrote:

> Thank you Jason! Addressed the comments.
>
> Thank you Guozhang for explaining. I will document the timeout setting
> reasoning in the design doc.
>
>
> On Mon, Sep 9, 2019 at 1:49 PM Guozhang Wang  wrote:
>
>> On Fri, Sep 6, 2019 at 6:33 PM Boyang Chen 
>> wrote:
>>
>> > Thanks Guozhang, I have polished the design doc to make it sync with
>> > current KIP. As for overriding default timeout values, I guess it's
>> already
>> > stated in the KIP to set txn timeout to 10s, are you suggesting we
>> should
>> > also put down this recommendation on the KIP for non-stream EOS users?
>> >
>> > My comment is not for changing any producer / consumer default config
>> values, but for the Streams configs, to make sure that our
>> overridden config values respect the above rules. That is, we check the
>> actual values used in the configs if they are ever overridden by users, and
>> if the above rules do not hold we can log a warning that the settings may
>> cause unnecessary rebalances.
>>
>> Again, this is not something we need to include in the KIP since it is not
>> part of public APIs, just to emphasize that the internal implementation
>> can
>> have some safety guard like this.
>>
>> Guozhang
>>
>>
>>
>> > Boyang
>> >
>> > On Thu, Sep 5, 2019 at 8:43 PM Guozhang Wang 
>> wrote:
>> >
>> > > Hello Boyang,
>> > >
>> > > Just realized one thing about timeout configurations that we should
>> > > consider including in this KIP as well:
>> > >
>> > > 1) In Producer we have: max.block.ms (default value 60sec),
>> > > request.timeout
>> > > (30sec), delivery.timeout.ms (120sec), transaction.timeout (60sec)
>> > > 2) In Consumer we have: session.timeout (10sec), request.timeout
>> (30sec),
>> > > default.api.timeout.ms (60sec).
>> > >
>> > > Within a transaction (i.e. after we've called beginTxn), we could potentially
>> > call
>> > > consumer blocking APIs that depend on default.api.timeout.ms, and
>> call
>> > > producer blocking APIs that depend on max.block.ms. Also, if the
>> user is
>> > > following a consumer->producer pattern, then it could be kicked and
>> > fenced
>> > > either by txn or by consumer group session.
>> > >
>> > > So we want to make sure that in the caller, e.g. Kafka Streams:
>> > >
>> > > 1) transaction.timeout < max.block.ms
>> > > 2) transaction.timeout < delivery.timeout.ms
>> > > 3) transaction.timeout < default.api.timeout.ms
>> > > 4) transaction.timeout ~= default.api.timeout.ms (I think this is
>> > already
>> > > mentioned in the KIP, just wanted to bring this up again)
>> > >
>> > > We do not need to override the default since not every users are
>> > following
>> > > the consumer -> producer pattern, but in cases like Streams where it
>> is
>> > > indeed the case, we should override the default values to obey the
>> above
>> > > rules.
>> > >
>> > > Guozhang
>> > >
>> > >
>> > >
>> > > On Thu, Sep 5, 2019 at 5:47 PM Guozhang Wang 
>> wrote:
>> > >
>> > > > Thanks Boyang, I'm +1 on the KIP.
>> > > >
>> > > > Could you also update the detailed design doc
>> > > >
>> > >
>> >
>> https://docs.google.com/document/d/1LhzHGeX7_Lay4xvrEXxfciuDWATjpUXQhrEIkph9qRE/edit
>> > > which
>> > > > seems a bit out-dated with the latest proposal?
>> > > >
>> > > >
>> > > > Guozhang
>> > > >
>> > > > On Wed, Sep 4, 2019 at 10:45 AM Boyang Chen <
>> > reluctanthero...@gmail.com>
>> > > > wrote:
>> > > >
>> > > >> Hey all,
>> > > >>
>> > > >> I would like to start the vote for KIP-447
>> > > >> <
>> > > >>
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-447%3A+Producer+scalability+for+exactly+once+semantics
>> > > >> >.
>> > > >> This is a very important step to improve Kafka Streams scalability
>> in
>> > > >> exactly-once semantics, by avoiding linearly increasing number of
>> > > >> producers
>> > > >> with topic partition increases.
>> > > >>
>> > > >> Thanks,
>> > > >> Boyang
>> > > >>
>> > > >
>> > > >
>> > > > --
>> > > > -- Guozhang
>> > > >
>> > >
>> > >
>> > > --
>> > > -- Guozhang
>> > >
>> >
>>
>>
>> --
>> -- Guozhang
>>
>
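To make the timeout relationships above concrete, here is a minimal sketch
of the kind of internal safety guard Guozhang describes. The config keys and
defaults are the real producer/consumer ones quoted in the thread; the guard
itself is illustrative, not Kafka's actual implementation:

import java.util.Map;

public class TxnTimeoutGuard {

    // Warn if user-overridden timeouts violate the rules listed above;
    // defaults match the producer/consumer defaults quoted in this thread.
    public static void check(Map<String, Integer> configs) {
        int txnTimeout = configs.getOrDefault("transaction.timeout.ms", 60_000);
        warnIfNotLess(txnTimeout, configs.getOrDefault("max.block.ms", 60_000), "max.block.ms");
        warnIfNotLess(txnTimeout, configs.getOrDefault("delivery.timeout.ms", 120_000), "delivery.timeout.ms");
        warnIfNotLess(txnTimeout, configs.getOrDefault("default.api.timeout.ms", 60_000), "default.api.timeout.ms");
    }

    private static void warnIfNotLess(int txnTimeout, int other, String name) {
        if (txnTimeout >= other) {
            System.err.printf("transaction.timeout.ms (%d) should be smaller than %s (%d); "
                + "a fenced or timed-out transaction may cause unnecessary rebalances.%n",
                txnTimeout, name, other);
        }
    }
}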


Re: [VOTE] KIP-479 Add Materialized to Join

2019-09-18 Thread Bill Bejeck
All, since we have updated KIP-479
https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoin+config+object+to+Join
and
seem to have completed the discussion for the updates, I'd like to call for
everyone to vote again.

Thanks,
Bill

On Fri, Aug 2, 2019 at 10:46 AM Bill Bejeck  wrote:

> +1 (binding) from myself.
>
>
> This vote has been open for 7 days now, so I'm closing this vote thread.
>
> KIP-479 had the following votes:
>
> binding +1s: 3 (Guozhang, Matthias, and Bill)
> -1 votes: none
>
> Thanks to everyone who voted and participated in the discussion for this
> KIP!
>
> -Bill
>
> On Mon, Jul 29, 2019 at 6:03 PM Guozhang Wang  wrote:
>
>> +1 (binding)
>>
>> On Thu, Jul 25, 2019 at 7:39 PM Matthias J. Sax 
>> wrote:
>>
>> > +1 (binding)
>> >
>> > On 7/25/19 1:05 PM, Bill Bejeck wrote:
>> > > All,
>> > >
>> > > After a great discussion on KIP-479 (
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+Materialized+to+Join
>> > )
>> > > I'd
>> > > like to start a vote.
>> > >
>> > > Thanks,
>> > > Bill
>> > >
>> >
>> >
>>
>> --
>> -- Guozhang
>>
>


Re: [DISCUSS] KIP-479: Add Materialized to Join

2019-09-18 Thread Bill Bejeck
Since we seem to have a consensus with the updated KIP, I'll start a
re-vote thread.

On Tue, Sep 17, 2019 at 9:51 PM John Roesler  wrote:

> Hi Bill,
>
> For the record, the current proposal looks good to me also.
>
> Thanks,
> -John
>
> On Tue, Sep 17, 2019 at 5:06 PM Matthias J. Sax 
> wrote:
> >
> > > Just to clarify, I'll update `as(StoreSupplier, StoreSupplier)` to
> > > `with(..., ...)` and change the class name from `StreamJoined` to
> > > `StreamJoin`.
> >
> >
> > Thanks Bill. SGTM.
> >
> >
> >
> > -Matthias
> >
> >
> > On 9/17/19 4:52 PM, aishwarya kumar wrote:
> > > Hi Bill,
> > >
> > > Thanks for clarifying, and the KIP looks great!!
> > >
> > > Best regards,
> > > Aishwarya
> > >
> > > On Tue, Sep 17, 2019, 6:16 PM Bill Bejeck  wrote:
> > >
> > >> Hi Aishwarya,
> > >>
> > >> On Tue, Sep 17, 2019 at 1:41 PM aishwarya kumar 
> > >> wrote:
> > >>
> > >>> Will this be applicable to Kstream-Ktable joins as well? Or do users
> > >> always
> > >>> materialize these joins explicitly?
> > >>>
> > >>
> > >> No, this change applies to KStream-KStream joins only.  With
> KStream-KTable
> > >> joins KafkaStreams has materialized the KTable already, and we don't
> need
> > >> to do anything with the KStream instance in this case.
> > >>
> > >>
> > >>> I'm not sure if its even necessary (or makes sense), just trying to
> > >>> understand why the change is applicable to Kstream joins only?
> > >>>
> > >>
> > >> The full details are in the KIP, but in a nutshell, we needed to break
> > >> the naming of the StateStore from `Joined.withName` and provide users a
> > >> way to name the StateStore explicitly.  While making the changes, we
> > >> realized it would be beneficial to give users the ability to use
> > >> different WindowStore implementations.  The most likely reason to use
> > >> different WindowStores would be to enable in-memory joins.
> > >>
> > >>
> > >>> Best,
> > >>> Aishwarya
> > >>>
> > >>
> > >> Regards,
> > >> Bill
> > >>
> > >>
> > >>>
> > >>> On Tue, Sep 17, 2019 at 4:05 PM Bill Bejeck 
> wrote:
> > >>>
> >  Guozhang,
> > 
> >  Thanks for the comments.
> > 
> >  Yes, you are correct in your assessment regarding names, *if* the users
> >  provide their own StoreSuppliers.  When constructing a StoreSupplier, the
> >  name can't be null, and additionally, there is no way to update the name.
> >  Further downstream, the underlying StateStore instances use the provided
> >  name for registration, so the names must be unique.  If users don't
> >  provide distinct names for the StoreSuppliers, KafkaStreams will throw a
> >  StreamsException when building the topology.
> > 
> >  Thanks,
> >  Bill
> > 
> > 
> > 
> >  On Tue, Sep 17, 2019 at 9:39 AM Guozhang Wang 
> > >>> wrote:
> > 
> > > Hello Bill,
> > >
> > > Thanks for the updated KIP. I made a pass on the StreamJoined section.
> > > Just a quick question from a user's perspective: if a user wants to
> > > provide a customized store-supplier, she is forced to also provide a
> > > name via the store-supplier. So there's no way to say "I want to provide
> > > my own store engine but let the library decide its name", is that right?
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Tue, Sep 17, 2019 at 8:53 AM Bill Bejeck 
> > >> wrote:
> > >
> > >> Bumping this discussion as we need to re-vote before the KIP
> > >>> deadline.
> > >>
> > >> On Fri, Sep 13, 2019 at 10:29 AM Bill Bejeck 
> >  wrote:
> > >>
> > >>> Hi All,
> > >>>
> > >>> While working on the implementation of KIP-479, some issues came
> > >> to
> > > light
> > >>> that the KIP as written won't work.  I have updated the KIP with
> > >> a
> > >> solution
> > >>> I believe will solve the original problem as well as address the
> > >>> impediment to the initial approach.
> > >>>
> > >>> This update is a significant change, so please review the updated
> > >>> KIP
> > >>>
> > >>
> > >
> > 
> > >>>
> > >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoined+config+object+to+Join
> > >> and
> > >>> provide feedback.  After we conclude the discussion, there will
> > >> be
> > >>> a
> > >>> re-vote.
> > >>>
> > >>> Thanks!
> > >>> Bill
> > >>>
> > >>> On Wed, Jul 17, 2019 at 7:01 PM Guozhang Wang <
> > >> wangg...@gmail.com>
> > >> wrote:
> > >>>
> >  Hi Bill, thanks for your explanations. I'm on board with your
> >  decision
> >  too.
> > 
> > 
> >  Guozhang
> > 
> >  On Wed, Jul 17, 2019 at 10:20 AM Bill Bejeck  > >>>
> > > wrote:
> > 
> > > Thanks for the response, John.
> > >
> > >> If I can offer my thoughts, it seems better to just document
> > >>> on
> > > 
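The in-memory join Bill mentions as the main motivation for pluggable
WindowStores can be sketched as follows, assuming the "StreamJoined"
spelling and Kafka's Stores factory; sizes and names are illustrative, and
per Bill's note above, the two supplier names must be distinct:

import java.time.Duration;
import org.apache.kafka.streams.kstream.StreamJoined;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowBytesStoreSupplier;

public class InMemoryJoinStoreSketch {
    public static void main(String[] args) {
        // Join stores must retain duplicates, and the two suppliers need
        // distinct names or building the topology throws a StreamsException.
        WindowBytesStoreSupplier leftStore = Stores.inMemoryWindowStore(
            "left-join-store", Duration.ofMinutes(10), Duration.ofMinutes(5), true);
        WindowBytesStoreSupplier rightStore = Stores.inMemoryWindowStore(
            "right-join-store", Duration.ofMinutes(10), Duration.ofMinutes(5), true);

        // Passed to KStream#join in place of the serde-only variant.
        StreamJoined<String, String, String> joined =
            StreamJoined.with(leftStore, rightStore);
        System.out.println(joined);
    }
}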

Re: About Testing Stream Applications Documentation

2019-09-18 Thread Bruno Cadonna
Hi Ugur,

Your finding looks correct to me. Do you mind fixing this issue?

Best,
Bruno

On Tue, Sep 17, 2019 at 12:54 PM uğur  wrote:
>
> Hi,
>
> I am not sure if it is the right email address to write about this topic,
> please correct me if I am wrong.
>
> As I read the Testing Streams Applications documentation, I believe I noticed
> a wrongly named method, shouldNotUpdateStoreForLargerValue. As I
> understand it, either the name should change
> to shouldUpdateStoreForLargerValue or the assertion should be different.
>
> Regards,
> Ugur Yeter


Re: [DISCUSS] KIP-519: Make SSL context/engine configuration extensible

2019-09-18 Thread Maulin Vasavada
Hi Clement

Here are my thoughts based on my latest re-write attempt and learnings,

1. I think there will be great value in keeping both classes separate -
SslFactory and SslEngineFactory - and having the method
reconfigurableConfigs() in SslEngineFactory. Here is the reasoning:

a. It is kind of a Decorator pattern to me - even without being named like
one, SslFactory acts as a decorator/surrogate for the SslEngineFactory,
helping it get created and re-created as needed based on the
terms/conditions specified by the SslEngineFactory (via the
reconfigurableConfigs() method).

b. SslEngineFactory will be a pluggable class. Keeping SslFactory
reconfigurable and delegating reconfigurableConfigs() to the
SslEngineFactory allows the implementation of SslEngineFactory to be
worry-free about how Kafka manages reconfigurations. The contract is:
Kafka's SslFactory will ask the implementation to specify which
configurations it is ready to have reconfigured. The rest of the logic for
triggering, reconfiguring, and validation lives in SslFactory.

c. The current validation in SslFactory of the inter-broker SSL handshake,
AND of verifying that the certificate chain doesn't change via dynamic
config changes, is rightly owned by SslFactory. We should not give
SslEngineFactory the flexibility to decide whether it wants that validation
or not.

d. If the SslEngineFactory fails to be re-created with the new dynamic
config changes, the constructor will throw an exception and SslFactory will
fail the validateReconfiguration() call, resulting in no change. Hence,
validating whether the new config is right is still controlled by the
SslEngineFactory without necessarily having an explicit validate method
(assuming your point was that we should keep validation of changed configs
in the pluggable class).


2. About the keystore validation in SslFactory - as I mentioned in the
points above,

a. I feel it is Kafka's policy that it wants to mandate that validation
regardless of the SslEngineFactory implementation: regardless of any
customization, it is doing a 'logical' enforcement. I don't see many cases
where you will end up changing the certificate chain (I can't say the same
about SANs entries though; see my points below). Hence it is reasonable for
that validation to be generally enforced for dynamic config changes. If you
change something violating that validation, you can avoid making such
changes via dynamic configuration and do a rolling restart of the boxes.

b. If the implementation doesn't use a keystore, then automatically no
validation will happen. Hence I don't see any issue with SslEngineFactory
implementations that are not required to use keystores.

c. There could be an argument, however, about what it validates currently
and whether there is scope for change. Example: it validates SANs entries,
and that to me is a challenge because I have had scenarios where I kept
adding more VIPs to my certs' SANs entries without really changing any
certificate chain. The existing validation would fail that setup
unnecessarily. Given that, there could be a change in SslFactory, but that
still doesn't make the validation eligible to move to SslEngineFactory
implementations.


3. I am still in two minds about your point on not having SslFactory use
the existing SSL reconfigurable configs on top of SslEngineFactory's
reconfigurable configs. The reason is:

a. I agree with you that we should not worry about the existing SSL
reconfigurable configs in the new SslFactory code. Why depend on something
you really don't need? However, Rajini's point is that if we decide to add
more configs to the SSL reconfigurable configs which may be common across
SslEngineFactory implementations, it will make things easier. Again, we
should not do it upfront just to make things easier. So now you see why I
am in two minds about it, while leaning more toward your suggestion.

4. I think I totally miss what you are referring to with
config.getConfiguredInstance(key, Class). Which existing Kafka class are
you referring to when you do that? Do we have that in KafkaConfig? If you
can clarify that for me, I can think more about your input on it.

5. All of the above means:

a. We will have createEngine(), reconfigurableConfigs(), keystore(), and
truststore() methods in the SslEngineFactory interface. However, the return
type for the keystore and truststore methods can't be the existing
SecurityStore. For that I already thought of a replacement, a
KeystoreHolder class which only contains references to Java's KeyStore
object and Kafka's Password object, making it feasible for us to return a
non-implementation-specific type.
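To make point 5a concrete, the interface shape under discussion looks
roughly like the sketch below. The method names come from this thread, while
the parameter lists and the KeystoreHolder shape are assumptions on my part,
not the final KIP-519 API:

import java.security.KeyStore;
import java.util.Set;
import javax.net.ssl.SSLEngine;
import org.apache.kafka.common.config.types.Password;

interface SslEngineFactory {
    // Create a new SSLEngine for a connection to/from the given peer.
    SSLEngine createEngine(String peerHost, int peerPort);

    // Config keys this factory supports reconfiguring; Kafka's SslFactory
    // wrapper uses this to decide when to rebuild the factory (point 1b).
    Set<String> reconfigurableConfigs();

    // Current stores, in a neutral holder so SslFactory can run its
    // certificate-chain validation without knowing the implementation (2a).
    KeystoreHolder keystore();
    KeystoreHolder truststore();
}

final class KeystoreHolder {
    private final KeyStore keyStore;
    private final Password password;

    KeystoreHolder(KeyStore keyStore, Password password) {
        this.keyStore = keyStore;
        this.password = password;
    }

    KeyStore keyStore() { return keyStore; }
    Password password() { return password; }
}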

b. We haven't talked about shouldBeRebuilt() at all so far, given other
important conflicts to resolve. We will get to it once we can hash out the
other stuff.

6. On Rajini's point on 'push notifications' for client side code when the
key/trust store changes,

" - For client-side, custom SslEngineFactory implementations could
   reconfigure themselves, we don't really need SslFactory to 

Re: [DISCUSS] KIP-525 - Return topic metadata and configs in CreateTopics response

2019-09-18 Thread Colin McCabe
Hi Rajini,

Thanks for the KIP.  I think this will be a great improvement.

For NumPartitions, ReplicationFactor, and Configs, we need some reasonable 
default value in the RPC which can be used for requests that are too old to 
contain this information.  I'd suggest 0, 0, and null, respectively.  That way 
we can, for example, distinguish between a response with zero configs and a 
response that's too old to have config information.

I'm curious what you think about having three functions in CreateTopicsResult
rather than one.  Maybe:

 >  public KafkaFuture<Config> config(String topic);
 >  public KafkaFuture<Integer> numPartitions(String topic);
 >  public KafkaFuture<Integer> replicationFactor(String topic);

Or is it better to have the "public KafkaFuture<TopicMetadataAndConfig>
topicConfig(String topic)" method?

best,
Colin


On Tue, Sep 17, 2019, at 02:12, Rajini Sivaram wrote:
> Hi all,
> 
> Since this is minor KIP, I will start vote tomorrow if there are no
> concerns.
> 
> Thank you,
> 
> Rajini
> 
> On Fri, Sep 13, 2019 at 10:17 PM Rajini Sivaram 
> wrote:
> 
> > Hi all,
> >
> > I would like to start discussion on KIP-525 to return topic configs in
> > CreateTopics response:
> >
> >-
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-525+-+Return+topic+metadata+and+configs+in+CreateTopics+response
> >
> >
> > When validateOnly=false, this will be the actual configs of the created
> > topic. If validateOnly=true, this will be the configs with which the topic
> > would have been created. This provides an alternative to
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-234%3A+add+support+for+getting+topic+defaults+from+AdminClient
> > .
> >
> > Comments and suggestions welcome.
> >
> > Thank you,
> >
> > Rajini
> >
> >
>


[jira] [Resolved] (KAFKA-8922) Failed to get end offsets for topic partitions of global store

2019-09-18 Thread Raman Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raman Gupta resolved KAFKA-8922.

Resolution: Invalid

Closing as the error had nothing to do with streams -- just general broker 
unavailability which was reported with a poor error message by the client. 
Still don't know why the brokers were unavailable but, hey, that's Kafka!

> Failed to get end offsets for topic partitions of global store
> --
>
> Key: KAFKA-8922
> URL: https://issues.apache.org/jira/browse/KAFKA-8922
> Project: Kafka
>  Issue Type: Bug
>Reporter: Raman Gupta
>Priority: Major
>
> I have a Kafka stream that fails with this error on startup every time:
> {code}
> org.apache.kafka.streams.errors.StreamsException: Failed to get end offsets 
> for topic partitions of global store test-uiService-dlq-events-table-store 
> after 0 retry attempts. You can increase the number of retries via 
> configuration parameter `retries`.
> at 
> org.apache.kafka.streams.processor.internals.GlobalStateManagerImpl.register(GlobalStateManagerImpl.java:186)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.AbstractProcessorContext.register(AbstractProcessorContext.java:101)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:207)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.state.internals.KeyValueToTimestampedKeyValueByteStoreAdapter.init(KeyValueToTimestampedKeyValueByteStoreAdapter.java:87)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:58)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:112)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.GlobalStateManagerImpl.initialize(GlobalStateManagerImpl.java:123)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.GlobalStateUpdateTask.initialize(GlobalStateUpdateTask.java:61)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.GlobalStreamThread$StateConsumer.initialize(GlobalStreamThread.java:229)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.GlobalStreamThread.initialize(GlobalStreamThread.java:345)
>  ~[kafka-streams-2.3.0.jar:?]
> at 
> org.apache.kafka.streams.processor.internals.GlobalStreamThread.run(GlobalStreamThread.java:270)
>  ~[kafka-streams-2.3.0.jar:?]
> Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to get 
> offsets by times in 30001ms
> {code}
> The stream was working fine and then this started happening.
> The stream now throws this error on every start. I am now going to attempt to 
> reset the stream and delete its local state.
> I hate to say it, but Kafka Streams sucks. It's problem after problem.
> UPDATE: Some more info: it appears that the brokers may have gotten into some 
> kind of crazy state, for an unknown reason, and now they are just shrinking 
> and expanding ISRs repeatedly. Trying to figure out the root cause of this 
> craziness.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
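For anyone hitting the same error before the root cause is fixed, the
message above points at the Streams `retries` knob; a minimal sketch of
raising it while the broker issue is investigated, with illustrative values:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class GlobalStoreRetryConfig {
    public static Properties props() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        // Retry the global-store end-offset lookup instead of failing fast
        // when brokers are briefly unavailable.
        props.put(StreamsConfig.RETRIES_CONFIG, 10);
        props.put(StreamsConfig.RETRY_BACKOFF_MS_CONFIG, 1000);
        return props;
    }
}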


Build failed in Jenkins: kafka-trunk-jdk11 #819

2019-09-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8841; Reduce overhead of ReplicaManager.updateFollowerFetchState

--
[...truncated 2.63 MB...]

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
sourceShouldNotBeEqualForDifferentNamesWithSameTopicList STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
sourceShouldNotBeEqualForDifferentNamesWithSameTopicList PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSourceWithSameTopic STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testTopicGroupsByStateStore STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testTopicGroupsByStateStore PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddSourceWithoutOffsetReset STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddSourceWithoutOffsetReset PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStoreWithDuplicates STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddStateStoreWithDuplicates PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullStateStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullStateStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddTimestampExtractorPerSource STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddTimestampExtractorPerSource PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testPatternSourceTopic STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testPatternSourceTopic PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsExternal STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsExternal PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithEmptyParents STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithEmptyParents PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldThrowIfBothTopicAndPatternAreNotNull STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldThrowIfBothTopicAndPatternAreNotNull PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithWrongParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkWithWrongParent PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testPatternMatchesAlreadyProvidedTopicSource STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testPatternMatchesAlreadyProvidedTopicSource PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddSinkConnectedWithMultipleParent PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testSubscribeTopicNameAndPattern STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testSubscribeTopicNameAndPattern PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullTopicChooserWhenAddingSink STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAllowNullTopicChooserWhenAddingSink PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddPatternSourceWithoutOffsetReset STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldAddPatternSourceWithoutOffsetReset PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAddNullInternalTopic STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
shouldNotAddNullInternalTopic PASSED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithWrongParent STARTED

org.apache.kafka.streams.processor.internals.InternalTopologyBuilderTest > 
testAddProcessorWithWrongParent PASSED


Re: [VOTE] KIP-511: Collect and Expose Client's Name and Version in the Brokers

2019-09-18 Thread Colin McCabe
That's fair.  We could use the existing error code in the response and pass 
back something like INVALID_REQUEST.

I'm not sure if we want to add an error string field just for this (although 
they're a good idea in general...)

best,
Colin

On Wed, Sep 18, 2019, at 12:31, Magnus Edenhill wrote:
> > I think we should force client software names and versions to follow a
> regular expression and disconnect if they do not.
> 
> Disconnecting is not really a great error propagation method since it
> leaves the client oblivious to what went wrong.
> Instead I suggest we return an ApiVersionsResponse with an error code and a
> human-readable error message field.
> 
> 
> 
> On Wed, 18 Sep 2019 at 20:05, Colin McCabe  wrote:
> 
> > Hi David,
> >
> > Thanks for the KIP!
> >
> > Nitpick: in the intro paragraph, "Operators of Apache Kafka clusters have
> > literally no information about the clients connected to their clusters"
> > seems a bit too strong.  We have some information, right?  For example, the
> > client ID, where clients are connecting from, etc.  Maybe it would be
> > clearer to say "very little information about the type of client
> > software..."
> >
> > Instead of ClientName and ClientVersion, how about ClientSoftwareName and
> > ClientSoftwareVersion?  This might make it clearer what the fields are
> > for.  I can see people getting confused about the difference between
> > ClientName and ClientId, which sound pretty similar.  Adding "Software" to
> > the field name makes it much clearer what the difference is between these
> > fields.
> >
> > In the "ApiVersions Request/Response Handling" section, it seems like
> > there is some out-of-date text.  Specifically, it says "we propose to add
> > the supported version of the ApiVersionsRequest in the response sent back
> > to the client alongside the error...".  But on the other hand, elsewhere in
> > the KIP, we say "ApiVersionsResponse is bumped to version 3 but does not
> > have any changes in the schema"  Based on the offline discussion we had,
> > the correct text is the latter (we're not changing ApiVersionsResponse).
> > So we should remove the text about adding stuff to ApiVersionsResponse.
> >
> > In a similar vein, the comment "  // Version 3 is similar to version 2"
> > should be " // Version 3 is identical to version 2" or something like
> > that.  Although I guess technically things which are identical are also
> > similar, the current phrasing could be misleading.
> >
> > Now that KIP-482 has been accepted, I think there are a few things that
> > are worth clarifying in the KIP.  Firstly, ApiVersionsRequest v3 should be
> > a "flexible version".  Mainly, that means its request header will support
> > optional tagged fields.  However, ApiVersionsResponse v3 will *not* support
> > optional tagged fields in its response header.  This is necessary because--
> > as you said-- the broker must look at a fixed offset to find the error
> > code, regardless of the response version.
> >
> > I think we should force client software names and versions to follow a
> > regular expression and disconnect if they do not.  This will prevent issues
> > when using these strings in metrics names.  Probably we want something like:
> >
> > [\.\-A-Za-z0-9]?[\.\-A-Za-z0-9 ]*[\.\-A-Za-z0-9]?
> >
> > Notice this does *not* include underscores, since they get converted to
> > dots in JMX, causing ambiguity.  It also doesn't allow the first or last
> > character to be a space.
> >
> > best,
> > Colin
> >
> >
> > On Mon, Sep 16, 2019, at 04:39, Mickael Maison wrote:
> > > +1 (non binding)
> > > Thanks for the KIP!
> > >
> > > On Mon, Sep 16, 2019 at 12:07 PM David Jacot 
> > wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I would like to start a vote on KIP-511:
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> > > >
> > > > Best,
> > > > David
> > >
> >
>
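For reference, the proposed pattern can be exercised directly; a small
sketch, where the regex is the one from Colin's message and the class around
it is illustrative:

import java.util.regex.Pattern;

public class ClientSoftwareNameValidator {

    // Dots and hyphens allowed; underscores excluded because JMX converts
    // them to dots, which would make metric names ambiguous.
    private static final Pattern VALID =
        Pattern.compile("[\\.\\-A-Za-z0-9]?[\\.\\-A-Za-z0-9 ]*[\\.\\-A-Za-z0-9]?");

    public static boolean isValid(String value) {
        return value != null && VALID.matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("apache-kafka-java"));  // true
        System.out.println(isValid("librdkafka 1.2.0"));   // true
        System.out.println(isValid("my_client"));          // false: underscore
    }
}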


Re: [DISCUSS] KIP-522: Update BrokerApiVersionsCommand to use AdminClient

2019-09-18 Thread Colin McCabe
Hi Mickael,

Just to give you the context, this is something that's been discussed several 
times.

Check out https://issues.apache.org/jira/browse/KAFKA-5214 and 
https://issues.apache.org/jira/browse/KAFKA-5723 , as well as this pull 
request: https://github.com/apache/kafka/pull/3012

I think there was some concern that having a public API that returned API 
version information would encourage people to write code that only worked 
against specific broker versions.  I remember Ismael and Jay expressed this 
concern, though I can't find the email threads now...

best,
Colin


On Mon, Sep 16, 2019, at 02:39, Mickael Maison wrote:
> Hi all,
> 
> I've created a KIP to add listApiVersions support to the AdminClient.
> This will allow us to update the BrokerApiVersionsCommand tool and
> more importantly allow users to detect API support and build flexible
> client applications:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-522%3A+Update+BrokerApiVersionsCommand+to+use+AdminClient
> 
> As always, feedback and comments are welcome!
> Thanks
>
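The thread doesn't show the proposed interface, so the sketch below is
purely hypothetical: listApiVersions() and its result shape are guesses from
the KIP title, not the actual KIP-522 API. It only illustrates the kind of
flexible client application the KIP would enable:

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class ApiVersionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical call: a client could gate optional features on
            // whether every broker supports the API versions it needs.
            admin.listApiVersions().all().get()
                 .forEach((node, versions) -> System.out.println(node + " -> " + versions));
        }
    }
}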


[jira] [Created] (KAFKA-8923) Revisit OnlineReplica state change in reassignments

2019-09-18 Thread Stanislav Kozlovski (Jira)
Stanislav Kozlovski created KAFKA-8923:
--

 Summary: Revisit OnlineReplica state change in reassignments
 Key: KAFKA-8923
 URL: https://issues.apache.org/jira/browse/KAFKA-8923
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski


In the replica state machine, when switching a partition to an OnlineReplica, 
we conditionally send a LeaderAndIsr request when the partition is available in 
the `partitionLeadershipInfo` structure. This happens when we switch states 
during a partition reassignment. It does not happen when a partition is created 
for the first time, as it is not present in `partitionLeadershipInfo` at that 
time.


This is a bit weird, because an OnlineReplica implies that the replica is 
alive, not necessarily in the ISR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-511: Collect and Expose Client's Name and Version in the Brokers

2019-09-18 Thread Magnus Edenhill
> I think we should force client software names and versions to follow a
regular expression and disconnect if they do not.

Disconnecting is not really a great error propagation method since it
leaves the client oblivious to what went wrong.
Instead I suggest we return an ApiVersionsResponse with an error code and a
human-readable error message field.



On Wed, 18 Sep 2019 at 20:05, Colin McCabe  wrote:

> Hi David,
>
> Thanks for the KIP!
>
> Nitpick: in the intro paragraph, "Operators of Apache Kafka clusters have
> literally no information about the clients connected to their clusters"
> seems a bit too strong.  We have some information, right?  For example, the
> client ID, where clients are connecting from, etc.  Maybe it would be
> clearer to say "very little information about the type of client
> software..."
>
> Instead of ClientName and ClientVersion, how about ClientSoftwareName and
> ClientSoftwareVersion?  This might make it clearer what the fields are
> for.  I can see people getting confused about the difference between
> ClientName and ClientId, which sound pretty similar.  Adding "Software" to
> the field name makes it much clearer what the difference is between these
> fields.
>
> In the "ApiVersions Request/Response Handling" section, it seems like
> there is some out-of-date text.  Specifically, it says "we propose to add
> the supported version of the ApiVersionsRequest in the response sent back
> to the client alongside the error...".  But on the other hand, elsewhere in
> the KIP, we say "ApiVersionsResponse is bumped to version 3 but does not
> have any changes in the schema"  Based on the offline discussion we had,
> the correct text is the latter (we're not changing ApiVersionsResponse).
> So we should remove the text about adding stuff to ApiVersionsResponse.
>
> In a similar vein, the comment "  // Version 3 is similar to version 2"
> should be " // Version 3 is identical to version 2" or something like
> that.  Although I guess technically things which are identical are also
> similar, the current phrasing could be misleading.
>
> Now that KIP-482 has been accepted, I think there are a few things that
> are worth clarifying in the KIP.  Firstly, ApiVersionsRequest v3 should be
> a "flexible version".  Mainly, that means its request header will support
> optional tagged fields.  However, ApiVersionsResponse v3 will *not* support
> optional tagged fields in its response header.  This is necessary because--
> as you said-- the broker must look at a fixed offset to find the error
> code, regardless of the response version.
>
> I think we should force client software names and versions to follow a
> regular expression and disconnect if they do not.  This will prevent issues
> when using these strings in metrics names.  Probably we want something like:
>
> [\.\-A-Za-z0-9]?[\.\-A-Za-z0-9 ]*[\.\-A-Za-z0-9]?
>
> Notice this does *not* include underscores, since they get converted to
> dots in JMX, causing ambiguity.  It also doesn't allow the first or last
> character to be a space.
>
> best,
> Colin
>
>
> On Mon, Sep 16, 2019, at 04:39, Mickael Maison wrote:
> > +1 (non binding)
> > Thanks for the KIP!
> >
> > On Mon, Sep 16, 2019 at 12:07 PM David Jacot 
> wrote:
> > >
> > > Hi all,
> > >
> > > I would like to start a vote on KIP-511:
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> > >
> > > Best,
> > > David
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #3909

2019-09-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8841; Reduce overhead of ReplicaManager.updateFollowerFetchState

--
[...truncated 2.63 MB...]
org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowIfBuiltInMetricsVersionInvalid STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowIfBuiltInMetricsVersionInvalid PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingConsumerIsolationLevelIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyOptimizationWhenNotExplicitlyAddedToConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyOptimizationWhenNotExplicitlyAddedToConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetGlobalConsumerConfigs 
STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetGlobalConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfRestoreConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
testGetRestoreConsumerConfigsWithRestoreConsumerOverridenPrefix STARTED

org.apache.kafka.streams.StreamsConfigTest > 
testGetRestoreConsumerConfigsWithRestoreConsumerOverridenPrefix PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowConfigExceptionIfMaxInFlightRequestsPerConnectionIsInvalidStringIfEosEnabled
 STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowConfigExceptionIfMaxInFlightRequestsPerConnectionIsInvalidStringIfEosEnabled
 PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedGlobalConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedGlobalConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAcceptBuiltInMetricsVersion0100To23 STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAcceptBuiltInMetricsVersion0100To23 PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
consumerConfigShouldContainAdminClientConfigsForRetriesAndRetryBackOffMsWithAdminPrefix
 STARTED

org.apache.kafka.streams.StreamsConfigTest > 
consumerConfigShouldContainAdminClientConfigsForRetriesAndRetryBackOffMsWithAdminPrefix
 PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConifgsOnRestoreConsumer STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConifgsOnRestoreConsumer PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfGlobalConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfGlobalConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
testGetGlobalConsumerConfigsWithGlobalConsumerOverridenPrefix STARTED

org.apache.kafka.streams.StreamsConfigTest > 
testGetGlobalConsumerConfigsWithGlobalConsumerOverridenPrefix PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfMaxInFlightRequestsGreaterThanFiveIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfMaxInFlightRequestsGreaterThanFiveIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetInternalLeaveGroupOnCloseConfigToFalseInConsumer STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetInternalLeaveGroupOnCloseConfigToFalseInConsumer PASSED

org.apache.kafka.streams.StreamsConfigTest 

Re: [VOTE] KIP-475: New Metric to Measure Number of Tasks on a Connector

2019-09-18 Thread Cyrus Vafadari
I'd like to bump the thread to get review on the PR.
https://github.com/apache/kafka/pull/6843

Thanks, all!

On Wed, Jul 3, 2019 at 9:33 AM Cyrus Vafadari  wrote:

> Thanks for the input, everyone! I'm marking this KIP as accepted with
> 3 binding votes (Gwen, Guozhang, Randall)
> 3 nonbinding votes (Andrew, Konstantine, Ryanne)
>
> The PR is updated at https://github.com/apache/kafka/pull/6843/files
>
>
> On Sun, Jun 23, 2019 at 10:52 AM Andrew Schofield <
> andrew_schofi...@live.com> wrote:
>
>> +1 (non-binding). Nice KIP.
>>
>> On 23/06/2019, 17:27, "Gwen Shapira"  wrote:
>>
>> +1 from me too
>>
>> On Tue, Jun 18, 2019, 10:30 AM Guozhang Wang 
>> wrote:
>>
>> > Cyrus, thanks for the updates, +1.
>> >
>> > On Mon, Jun 17, 2019 at 10:58 PM Cyrus Vafadari > >
>> > wrote:
>> >
>> > > Many thanks for the feedback. Per Gwen's suggestion, I've updated
>> the KIP
>> > > to specify that the task count will be per-worker (no additional
>> MBean
>> > tag,
>> > > since each process is a worker) and per-connector (MBean tag).
>> > >
>> > > On Mon, Jun 17, 2019 at 8:24 PM Cyrus Vafadari <
>> cy...@confluent.io>
>> > wrote:
>> > >
>> > > > I meant to write:
>> > > > I've also updated the KIP to clarify that every task must have
>> exactly
>> > > one
>> > > > non-null *status* at all times.
>> > > >
>> > > > On Mon, Jun 17, 2019 at 6:55 PM Cyrus Vafadari <
>> cy...@confluent.io>
>> > > wrote:
>> > > >
>> > > >> Guozhang,
>> > > >>
>> > > >> Both of Kafka's implementations of "StatusBackingStore"
>> immediately
>> > > >> delete the task from the backing store when you try to set it
>> to
>> > > DESTROYED,
>> > > >> so we'd actually expect it to always be zero. A nonzero number
>> of
>> > > destroyed
>> > > >> tasks would either indicate a new implementation of
>> > StatusBackingStore,
>> > > or
>> > > >> a malfunctioning StatusBackingStore (e.g. caches out of sync
>> with
>> > > compacted
>> > > >> topic). This metric will usually be uninteresting, and was only
>> > included
>> > > >> for completeness. It could possibly catch a bug.
>> > > >>
>> > > >> Gwen,
>> > > >> I had not considered this option. I agree there is an
>> advantage to
>> > > having
>> > > >> more granular data about both connector and worker. The main
>> > > disadvantage
>> > > >> would be that it increases the number of metrics by a factor of
>> > > >> num_workers, but I'd say this is an acceptable tradeoff.
>> Another
>> > > advantage
>> > > >> of your suggestion is that the public interfaces for
>> WorkerConnector
>> > > would
>> > > >> be unchanged, and the new metrics can be added within the
>> Worker
>> > class.
>> > > >>
>> > > >> I've also updated the KIP to clarify that every task must have
>> exactly
>> > > >> one non-null task at all times.
>> > > >>
>> > > >> On Mon, Jun 17, 2019 at 1:41 PM Guozhang Wang <
>> wangg...@gmail.com>
>> > > wrote:
>> > > >>
>> > > >>> Hello Cyrus,
>> > > >>>
>> > > >>> Thanks for the KIP. I just have one nit question about Connect
>> > > destroyed
>> > > >>> tasks: is it an ever-increasing number? If yes, the
>> corresponding
>> > > metric
>> > > >>> value would be increasing indefinitely as well. Is that
>> intentional?
>> > > >>>
>> > > >>> Otherwise, lgtm.
>> > > >>>
>> > > >>>
>> > > >>> Guozhang
>> > > >>>
>> > > >>> On Mon, Jun 17, 2019 at 1:14 PM Gwen Shapira <
>> g...@confluent.io>
>> > > wrote:
>> > > >>>
>> > > >>> > Sorry to join so late, but did we consider a single set of
>> > task-count
>> > > >>> > metrics and using tags to scope each data point to a
>> specific
>> > > >>> > connector and worker (and in the future perhaps also user)?
>> > > >>> >
>> > > >>> > It will make analysis of the data easier - someone may want
>> to
>> > > >>> > breakdown tasks by both worker and connector to detect
>> imbalanced
>> > > >>> > assignments.
>> > > >>> >
>> > > >>> > Are there downsides to this approach?
>> > > >>> >
>> > > >>> > And a small nit: it will be good to capture in the KIP what
>> are the
>> > > >>> > expectations regarding overlap and disjointness of the
>> proposed
>> > > >>> > metrics. For example, is running+paused+failed = total? Can
>> a task
>> > be
>> > > >>> > failed and destroyed and therefore count in 2 of those
>> metrics?
>> > > >>> >
>> > > >>> > On Thu, Jun 6, 2019 at 12:29 PM Cyrus Vafadari <
>> cy...@confluent.io
>> > >
>> > > >>> wrote:
>> > > >>> > >
>> > > >>> > > Konstantine,
>> > > >>> > >
>> > > >>> > > This is a good suggestion. Since the suggestion to add 2
>> > additional
>> > > >>> > > statuses analogous to the 3 proposed, it is a very minor
>> change
>> > of
>> > > 

Re: [VOTE] KIP-496: Administrative API to delete consumer offsets

2019-09-18 Thread David Jacot
Indeed, I had forgotten to add the action. There will be a new action
"--delete-offsets". Sorry!

 *Proposed API*
kafka-consumer-groups.sh --bootstrap-server 
--delete-offsets --group  --topic :
ex: --bootstrap-server localhost:9092 --group my-group --topic
topic1 --topic topic2:0,1,2

When partitions are not provided, all partitions are used.

Best,
David
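
For completeness, the same deletion can also be performed programmatically through
the admin API this KIP introduces. A minimal sketch, assuming Kafka clients 2.4+
where KIP-496 lands (deleteConsumerGroupOffsets is the method proposed by the KIP):

import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.TopicPartition;

public class DeleteOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Mirrors the CLI example: --group my-group --topic topic2:0,1,2
            Set<TopicPartition> partitions = Set.of(
                new TopicPartition("topic2", 0),
                new TopicPartition("topic2", 1),
                new TopicPartition("topic2", 2));
            admin.deleteConsumerGroupOffsets("my-group", partitions).all().get();
        }
    }
}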

Le mer. 18 sept. 2019 à 20:09, Colin McCabe  a écrit :

> On Tue, Sep 17, 2019, at 09:07, David Jacot wrote:
> > Hi all,
> >
> > We haven't included the changes in the command line tool to support the
> new
> > API. Therefore,
> > I would like to amend the current KIP to cover the changes in the
> > `kafka-consumer-groups`
> > command line tool. The change is rather small and it does not need to add
> > any new arguments
> > to the command line tool, so it doesn't make sense to create a new KIP
> for
> > it.
> >
> > *Proposed API*
> > kafka-consumer-groups.sh --bootstrap-server  --group
> >  --topic :
> > ex: --bootstrap-server localhost:9092 --group my-group --topic topic1
> > --topic topic2:0,1,2
> >
> > When partitions are not provided, all partitions are used.
>
> Hmm.  I think I'm missing something here.  If you try the command you
> specified, you get:
>
> > Command must include exactly one action: --list, --describe, --delete,
> --reset-offsets
>
> Did you mean to add a new action here that was offsets-related?
>
> best,
> Colin
>
> >
> > What do you think?
> >
> > Best,
> > David
> >
> >
> > On Fri, Sep 13, 2019 at 6:42 PM Colin McCabe  wrote:
> >
> > > Hi David,
> > >
> > > Sounds good.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Fri, Sep 13, 2019, at 04:45, David Jacot wrote:
> > > > Hi all,
> > > >
> > > > I would like to do another modification to the proposal. In the
> proposal,
> > > > the OffsetDeleteResponse
> > > > doesn't have a top level error field so I would like to add one. Many
> > > > errors concern the whole
> > > > group (e.g. GROUP_ID_NOT_FOUND) so it would be great to have a way to
> > > > communicate them
> > > > back to the client without having to set such errors for all the
> > > requested
> > > > partitions. It makes the
> > > > error handling on the client easier and cleaner.
> > > >
> > > > *Proposed API with the ErrorCode:*
> > > > {
> > > >   "apiKey": 47,
> > > >   "type": "response",
> > > >   "name": "OffsetDeleteResponse",
> > > >   "validVersions": "0",
> > > >   "fields": [
> > > > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> > > >   "about": "The top-level error code, or 0 if there was no
> error." },
> > > > { "name": "ThrottleTimeMs",  "type": "int32",  "versions": "0+",
> > > > "ignorable": true,
> > > >   "about": "The duration in milliseconds for which the request
> was
> > > > throttled due to a quota violation, or zero if the request did not
> > > violate
> > > > any quota." },
> > > > { "name": "Topics", "type": "[]OffsetDeleteResponseTopic",
> > > "versions":
> > > > "0+",
> > > >   "about": "The responses for each topic.", "fields": [
> > > > { "name": "Name", "type": "string", "versions": "0+",
> "mapKey":
> > > > true,
> > > >   "about": "The topic name." },
> > > > { "name": "Partitions", "type":
> > > "[]OffsetDeleteResponsePartition",
> > > > "versions": "0+",
> > > >   "about": "The responses for each partition in the topic.",
> > > > "fields": [
> > > > { "name": "PartitionIndex", "type": "int32", "versions":
> > > "0+",
> > > > "mapKey": true,
> > > >   "about": "The partition index." },
> > > > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> > > >   "about": "The error code, or 0 if there was no error."
> }
> > > >   ]
> > > > }
> > > >   ]
> > > > }
> > > >   ]
> > > > }
> > > >
> > > > I would like to know if there are any concerns or objections
> regarding
> > > this
> > > > change before updating the KIP.
> > > >
> > > > Best,
> > > > David
> > > >
> > > > On Wed, Sep 4, 2019 at 9:24 AM David Jacot 
> wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > While implementing the KIP, I have realized that a new error code
> and
> > > > > exception is required to notify the caller that offsets of a topic
> can
> > > not
> > > > > be deleted because the group is actively subscribed to the topic.
> > > > >
> > > > > I would like to know if there are any concerns with these changes
> > > before
> > > > > updating the KIP.
> > > > >
> > > > > *Proposed API:*
> > > > > GROUP_SUBSCRIBED_TO_TOPIC(86, "The consumer group is actively
> > > subscribed
> > > > > to the topic", GroupSubscribedToTopicException::new);
> > > > >
> > > > > public class GroupSubscribedToTopicException extends ApiException {
> > > > > public GroupSubscribedToTopicException(String message) {
> > > > > super(message);
> > > > > }
> > > > > }
> > > > >
> > > > > Best,
> > > > > David
> > > > >
> > > > > On Fri, Aug 16, 2019 at 10:58 AM Mickael Maison <
> > > 

Re: [DISCUSS] Streams-Broker compatibility regression in 2.2.1 release

2019-09-18 Thread Matthias J. Sax
Thanks Ismael and Bill.

It seems that nobody objects with the proposal. Hence, I prepared the
following PR to update the upgrade notes.

 - https://github.com/apache/kafka/pull/7363 (trunk and 2.3)
 - https://github.com/apache/kafka/pull/7364 (2.2)
 - https://github.com/apache/kafka-site/pull/229

Will also revert the cherry-pick commit in 2.1 branch and update the
Jira ticket.


-Matthias


On 9/16/19 4:22 PM, Ismael Juma wrote:
> +1 to the proposal. Let's also highlight this in the release notes for
> 2.2.2 and 2.3.0 please.
> 
> Ismael
> 
> On Wed, Sep 11, 2019 at 10:23 AM Matthias J. Sax 
> wrote:
> 
>> Hi,
>>
>> recently a user reported an issue upgrading a Kafka Streams application
>> from 2.2.0 to 2.2.1 (cf
>> https://mail-archives.apache.org/mod_mbox/kafka-users/201908.mbox/
>> )
>>
>> After some investigation, we identified
>> https://issues.apache.org/jira/browse/KAFKA-7895 to be the root cause of
>> the problem.
>>
>> The fix for KAFKA-7895 is using message headers and thus requires broker
>> version 0.11.0 (or newer) and message format 0.11 (or newer). Hence,
>> while a Kafka Streams application version 2.2.0 is compatible to older
>> brokers (0.10.1 and 0.10.2) and only requires message format 0.10, the
>> backward compatibility was broken accidentally in 2.2.1.
>>
>> The fix is also contained in 2.3.0 release and cherry-picked to 2.1
>> branch (2.1.2 is not released yet, and thus 2.1 users are not affected
>> as this point).
>>
>> Note: only users that use `suppress()` operator in their program are
>> affected.
>>
>> We should not break streams-broker backward compatibility in bug-fix
>> releases at all, and should avoid doing so in minor releases. However, it seems
>> necessary to have the fix in 2.3.0 though -- otherwise, `suppress()` is
>> effectively useless and it does not seem to be a good idea to fix the
>> bug only in the next major release. Hence, trading-off some backward
>> compatibility in a minor release seems to be acceptable for this case,
>> considering that 0.11.0 was released 2 years ago.
>>
>> For 2.2.1, it is more challenging to decide how to move forward, because
>> we should not have broken streams-broker compatibility but 2.2.1 is
>> already released and we can only react after the fact.
>>
>>   From my point of view, the best way is to keep the fix and update the
>> release notes and documentation accordingly. The main reason for my
>> suggestions is that we would expect a majority of users to be on 0.11.0
>> brokers already and the fix will work for them -- reverting the fix in
>> 2.2.2 seems to be worse for all those users on newer broker versions. We
>> also know that `suppress()` is a highly demanded feature and a lot of
>> people waited for a fix.
>>
>>   The expected minority of users that are on 0.10.1 / 0.10.2 brokers, or
>> newer brokers but still on message format 0.10 would either need to stay
>> on Kafka Streams 2.2.0 or upgrade their brokers/message format
>> accordingly. However, upgrading brokers/message format is de-facto
>> already required for 2.2.1 and thus keeping the fix would not make the
>> situation worse.
>>
>> For 2.1, I would suggest to revert the fix to make sure we don't break
>> streams-broker compatibility for 2.1.x users. If those users need the
>> fix for `suppress()` they need to upgrade to 2.2.1/2.3.0 or newer and
>> make sure their brokers are on 0.11.0 with message format 0.11, too.
>>
>>
>> TL;DR; the proposal is:
>>
>> (1) revert the fix for KAFKA-7895 in 2.1 branch
>> (2) keep the fix for KAFKA-7895 in 2.2.1 and 2.3.0
>>
>> Impact:
>>
>>  - Kafka Streams 2.1.x and 2.2.0 applications are backward compatible
>> back to 0.10.1 brokers, requiring message format 0.10
>>  - Kafka Streams 2.2.2 / 2.3.0 application (or newer) are backward
>> compatible back to 0.11.0 brokers, requiring message format 0.11
>>
>>
>> Please let us know what you think about this proposal.
>>
>>
>> -Matthias
>>
>>
>>
>>
> 



signature.asc
Description: OpenPGP digital signature


Re: [VOTE] KIP-496: Administrative API to delete consumer offsets

2019-09-18 Thread Colin McCabe
On Tue, Sep 17, 2019, at 09:07, David Jacot wrote:
> Hi all,
> 
> We haven't included the changes in the command line tool to support the new
> API. Therefore,
> I would like to amend the current KIP to cover the changes in the
> `kafka-consumer-groups`
> command line tool. The change is rather small and it does not need to add
> any new arguments
> to the command line tool, so it doesn't make sense to create a new KIP for
> it.
> 
> *Proposed API*
> kafka-consumer-groups.sh --bootstrap-server  --group
>  --topic :
> ex: --bootstrap-server localhost:9092 --group my-group --topic topic1
> --topic topic2:0,1,2
> 
> When partitions are not provided, all partitions are used.

Hmm.  I think I'm missing something here.  If you try the command you 
specified, you get:

> Command must include exactly one action: --list, --describe, --delete, 
> --reset-offsets

Did you mean to add a new action here that was offsets-related?

best,
Colin

> 
> What do you think?
> 
> Best,
> David
> 
> 
> On Fri, Sep 13, 2019 at 6:42 PM Colin McCabe  wrote:
> 
> > Hi David,
> >
> > Sounds good.
> >
> > best,
> > Colin
> >
> >
> > On Fri, Sep 13, 2019, at 04:45, David Jacot wrote:
> > > Hi all,
> > >
> > > I would like to do another modification to the proposal. In the proposal,
> > > the OffsetDeleteResponse
> > > doesn't have a top level error field so I would like to add one. Many
> > > errors concern the whole
> > > group (e.g. GROUP_ID_NOT_FOUND) so it would be great to have a way to
> > > communicate them
> > > back to the client without having to set such errors for all the
> > requested
> > > partitions. It makes the
> > > error handling on the client easier and cleaner.
> > >
> > > *Proposed API with the ErrorCode:*
> > > {
> > >   "apiKey": 47,
> > >   "type": "response",
> > >   "name": "OffsetDeleteResponse",
> > >   "validVersions": "0",
> > >   "fields": [
> > > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> > >   "about": "The top-level error code, or 0 if there was no error." },
> > > { "name": "ThrottleTimeMs",  "type": "int32",  "versions": "0+",
> > > "ignorable": true,
> > >   "about": "The duration in milliseconds for which the request was
> > > throttled due to a quota violation, or zero if the request did not
> > violate
> > > any quota." },
> > > { "name": "Topics", "type": "[]OffsetDeleteResponseTopic",
> > "versions":
> > > "0+",
> > >   "about": "The responses for each topic.", "fields": [
> > > { "name": "Name", "type": "string", "versions": "0+", "mapKey":
> > > true,
> > >   "about": "The topic name." },
> > > { "name": "Partitions", "type":
> > "[]OffsetDeleteResponsePartition",
> > > "versions": "0+",
> > >   "about": "The responses for each partition in the topic.",
> > > "fields": [
> > > { "name": "PartitionIndex", "type": "int32", "versions":
> > "0+",
> > > "mapKey": true,
> > >   "about": "The partition index." },
> > > { "name": "ErrorCode", "type": "int16", "versions": "0+",
> > >   "about": "The error code, or 0 if there was no error." }
> > >   ]
> > > }
> > >   ]
> > > }
> > >   ]
> > > }
> > >
> > > I would like to know if there are any concerns or objections regarding
> > this
> > > change before updating the KIP.
> > >
> > > Best,
> > > David
> > >
> > > On Wed, Sep 4, 2019 at 9:24 AM David Jacot  wrote:
> > >
> > > > Hi all,
> > > >
> > > > While implementing the KIP, I have realized that a new error code and
> > > > exception is required to notify the caller that offsets of a topic can
> > not
> > > > be deleted because the group is actively subscribed to the topic.
> > > >
> > > > I would like to know if there are any concerns with these changes
> > before
> > > > updating the KIP.
> > > >
> > > > *Proposed API:*
> > > > GROUP_SUBSCRIBED_TO_TOPIC(86, "The consumer group is actively
> > subscribed
> > > > to the topic", GroupSubscribedToTopicException::new);
> > > >
> > > > public class GroupSubscribedToTopicException extends ApiException {
> > > > public GroupSubscribedToTopicException(String message) {
> > > > super(message);
> > > > }
> > > > }
> > > >
> > > > Best,
> > > > David
> > > >
> > > > On Fri, Aug 16, 2019 at 10:58 AM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > >> +1 (non binding)
> > > >> Thanks!
> > > >>
> > > >> On Thu, Aug 15, 2019 at 11:53 PM Colin McCabe 
> > wrote:
> > > >> >
> > > >> > On Thu, Aug 15, 2019, at 11:47, Jason Gustafson wrote:
> > > >> > > Hey Colin, I think deleting all offsets is equivalent to deleting
> > the
> > > >> > > group, which can be done with the `deleteConsumerGroups` api. I
> > > >> debated
> > > >> > > whether there should be a way to delete partitions for all
> > > >> unsubscribed
> > > >> > > topics, but I decided to start with a simple API.
> > > >> >
> > > >> > That's a fair point-- deleting the group covers the main use-case
> > for
> > > >> deleting 

Re: [VOTE] KIP-511: Collect and Expose Client's Name and Version in the Brokers

2019-09-18 Thread Colin McCabe
Hi David,

Thanks for the KIP!

Nitpick: in the intro paragraph, "Operators of Apache Kafka clusters have 
literally no information about the clients connected to their clusters" seems a 
bit too strong.  We have some information, right?  For example, the client ID, 
where clients are connecting from, etc.  Maybe it would be clearer to say "very 
little information about the type of client software..."

Instead of ClientName and ClientVersion, how about ClientSoftwareName and 
ClientSoftwareVersion?  This might make it clearer what the fields are for.  I 
can see people getting confused about the difference between ClientName and 
ClientId, which sound pretty similar.  Adding "Software" to the field name 
makes it much clearer what the difference is between these fields.

In the "ApiVersions Request/Response Handling" section, it seems like there is 
some out-of-date text.  Specifically, it says "we propose to add the supported 
version of the ApiVersionsRequest in the response sent back to the client 
alongside the error...".  But on the other hand, elsewhere in the KIP, we say 
"ApiVersionsResponse is bumped to version 3 but does not have any changes in 
the schema"  Based on the offline discussion we had, the correct text is the 
latter (we're not changing ApiVersionsResponse).  So we should remove the text 
about adding stuff to ApiVersionsResponse.

In a similar vein, the comment "  // Version 3 is similar to version 2" should 
be " // Version 3 is identical to version 2" or something like that.  Although 
I guess technically things which are identical are also similar, the current 
phrasing could be misleading.

Now that KIP-482 has been accepted, I think there are a few things that are 
worth clarifying in the KIP.  Firstly, ApiVersionsRequest v3 should be a 
"flexible version".  Mainly, that means its request header will support 
optional tagged fields.  However, ApiVersionsResponse v3 will *not* support 
optional tagged fields in its response header.  This is necessary because-- as 
you said-- the broker must look at a fixed offset to find the error code, 
regardless of the response version.

I think we should force client software names and versions to follow a regular 
expression and disconnect if they do not.  This will prevent issues when using 
these strings in metrics names.  Probably we want something like:

[\.\-A-Za-z0-9]?[\.\-A-Za-z0-9 ]*[\.\-A-Za-z0-9]?

Notice this does *not* include underscores, since they get converted to dots in 
JMX, causing ambiguity.  It also doesn't allow the first or last character to 
be a space.
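
As a quick illustration, the proposal can be checked with java.util.regex. One
caveat worth noting: because all three components of the pattern above are
optional, the space-admitting middle class can still match a whole string on its
own, so edge spaces slip through unless the anchoring is tightened. A minimal
sketch showing both the pattern as written and a tightened variant:

import java.util.regex.Pattern;

public class ClientSoftwareNameCheck {
    // The pattern exactly as proposed above.
    private static final Pattern PROPOSED =
        Pattern.compile("[\\.\\-A-Za-z0-9]?[\\.\\-A-Za-z0-9 ]*[\\.\\-A-Za-z0-9]?");
    // Tightened variant: non-space edge characters are mandatory for non-empty
    // input (the trailing '|' still admits the empty string, as the proposal does).
    private static final Pattern STRICT =
        Pattern.compile("[\\.\\-A-Za-z0-9](?:[\\.\\-A-Za-z0-9 ]*[\\.\\-A-Za-z0-9])?|");

    public static void main(String[] args) {
        System.out.println(PROPOSED.matcher("apache-kafka-java").matches()); // true
        System.out.println(PROPOSED.matcher("2.4.0").matches());             // true
        System.out.println(PROPOSED.matcher("my_client").matches());         // false: underscore
        System.out.println(PROPOSED.matcher(" padded ").matches());          // true: edge spaces slip through
        System.out.println(STRICT.matcher(" padded ").matches());            // false
    }
}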

best,
Colin


On Mon, Sep 16, 2019, at 04:39, Mickael Maison wrote:
> +1 (non binding)
> Thanks for the KIP!
> 
> On Mon, Sep 16, 2019 at 12:07 PM David Jacot  wrote:
> >
> > Hi all,
> >
> > I would like to start a vote on KIP-511:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> >
> > Best,
> > David
>


[jira] [Created] (KAFKA-8922) Failed to get end offsets for topic partitions of global store

2019-09-18 Thread Raman Gupta (Jira)
Raman Gupta created KAFKA-8922:
--

 Summary: Failed to get end offsets for topic partitions of global 
store
 Key: KAFKA-8922
 URL: https://issues.apache.org/jira/browse/KAFKA-8922
 Project: Kafka
  Issue Type: Bug
Reporter: Raman Gupta


I have a Kafka stream that fails with this error on startup every time:

{code}
org.apache.kafka.streams.errors.StreamsException: Failed to get end offsets for 
topic partitions of global store test-uiService-dlq-events-table-store after 0 
retry attempts. You can increase the number of retries via configuration 
parameter `retries`.
at 
org.apache.kafka.streams.processor.internals.GlobalStateManagerImpl.register(GlobalStateManagerImpl.java:186)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.processor.internals.AbstractProcessorContext.register(AbstractProcessorContext.java:101)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:207)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.state.internals.KeyValueToTimestampedKeyValueByteStoreAdapter.init(KeyValueToTimestampedKeyValueByteStoreAdapter.java:87)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:58)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:112)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.processor.internals.GlobalStateManagerImpl.initialize(GlobalStateManagerImpl.java:123)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.processor.internals.GlobalStateUpdateTask.initialize(GlobalStateUpdateTask.java:61)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.processor.internals.GlobalStreamThread$StateConsumer.initialize(GlobalStreamThread.java:229)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.processor.internals.GlobalStreamThread.initialize(GlobalStreamThread.java:345)
 ~[kafka-streams-2.3.0.jar:?]
at 
org.apache.kafka.streams.processor.internals.GlobalStreamThread.run(GlobalStreamThread.java:270)
 ~[kafka-streams-2.3.0.jar:?]
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to get 
offsets by times in 30001ms
{code}

The stream was working fine and then this started happening.

The stream now throws this error on every start. I am now going to attempt to 
reset the stream and delete its local state.

I hate to say it, but Kafka Streams sucks. It's problem after problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
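
For reference, the error message's suggestion maps onto the Streams client
configuration below. A minimal workaround sketch, assuming Kafka Streams 2.3 and
that the broker is merely slow to answer rather than unreachable (the application
id and bootstrap address are hypothetical):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class RetryConfigExample {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uiService");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Retry fetching the global-store end offsets instead of failing on
        // the first timeout, and back off one second between attempts.
        props.put(StreamsConfig.RETRIES_CONFIG, 10);
        props.put(StreamsConfig.RETRY_BACKOFF_MS_CONFIG, 1000L);
        return props;
    }
}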


[jira] [Created] (KAFKA-8921) Avoid excessive info logs in the client side for incremental fetch

2019-09-18 Thread Zhanxiang (Patrick) Huang (Jira)
Zhanxiang (Patrick) Huang created KAFKA-8921:


 Summary: Avoid excessive info logs in the client side for 
incremental fetch
 Key: KAFKA-8921
 URL: https://issues.apache.org/jira/browse/KAFKA-8921
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Zhanxiang (Patrick) Huang
Assignee: Zhanxiang (Patrick) Huang


 

Currently in FetchSessionHandler::handleResponse, the following info log will 
get printed out excessively when the session is evicted from the server-side 
cache, even though there is nothing wrong with the fetch request and the
client cannot do much to improve it.
{noformat}
Node xxx was unable to process the fetch request with (sessionId=xxx, 
epoch=xxx): FETCH_SESSION_ID_NOT_FOUND.
 
{noformat}
 

Moreover, when the fetch request gets throttled, the following info logs will 
also get printed out, which are very misleading.
{noformat}
Node xxx sent an invalid full fetch response with ... 
Node xxx sent an invalid incremental fetch response with ...
{noformat}
 

We should avoid logging these things at INFO level and print out more 
informative logs for throttling.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
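
Until the log levels are changed, operators can turn the noisy logger down on the
client side. A minimal stopgap sketch, assuming a log4j 1.x backend (the one
shipped with the default Kafka distribution); note it also hides any legitimate
INFO output from this class:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class QuietFetchSessionLogs {
    public static void quiet() {
        // FetchSessionHandler emits the INFO lines quoted above; raise its threshold.
        Logger.getLogger("org.apache.kafka.clients.FetchSessionHandler")
              .setLevel(Level.WARN);
    }
}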


[jira] [Resolved] (KAFKA-8913) Document topic based configs & ISR settings for Streams apps

2019-09-18 Thread Vinoth Chandar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinoth Chandar resolved KAFKA-8913.
---
Resolution: Fixed

> Document topic based configs & ISR settings for Streams apps
> 
>
> Key: KAFKA-8913
> URL: https://issues.apache.org/jira/browse/KAFKA-8913
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Reporter: Vinoth Chandar
>Assignee: Vinoth Chandar
>Priority: Major
>
> Noticed that it was not clear how to configure the internal topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8745) DumpLogSegments doesn't show keys, when the message is null

2019-09-18 Thread James Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Cheng resolved KAFKA-8745.

Fix Version/s: 2.4.0
 Reviewer: Guozhang Wang
   Resolution: Fixed

> DumpLogSegments doesn't show keys, when the message is null
> ---
>
> Key: KAFKA-8745
> URL: https://issues.apache.org/jira/browse/KAFKA-8745
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.3.0
>Reporter: James Cheng
>Assignee: James Cheng
>Priority: Major
> Fix For: 2.4.0
>
>
> When DumpLogSegments encounters a message with a message key, but no message 
> value, it doesn't print out the message key.
>  
> {noformat}
> $ ~/kafka_2.11-2.2.0/bin/kafka-run-class.sh kafka.tools.DumpLogSegments 
> --files compacted-0/.log --print-data-log
> Dumping compacted-0/.log
> Starting offset: 0
> baseOffset: 0 lastOffset: 3 count: 4 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 0 CreateTime: 1564696640073 size: 113 magic: 
> 2 compresscodec: NONE crc: 206507478 isvalid: true
> | offset: 2 CreateTime: 1564696640073 keysize: 4 valuesize: -1 sequence: -1 
> headerKeys: []
> | offset: 3 CreateTime: 1564696640073 keysize: 4 valuesize: -1 sequence: -1 
> headerKeys: []
> {noformat}
> It should print out the message key.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8841) Optimize Partition.maybeIncrementLeaderHW

2019-09-18 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8841.

Resolution: Fixed

> Optimize Partition.maybeIncrementLeaderHW
> -
>
> Key: KAFKA-8841
> URL: https://issues.apache.org/jira/browse/KAFKA-8841
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Lucas Bradstreet
>Priority: Major
>
> We've observed a noticeable cost in `Partition.maybeIncrementLeaderHW` during 
> profiling. Basically just all the unnecessary collection copies and 
> iterations over the replica set. Since this is on the hot path for handling 
> follower fetches, we should probably put some effort into optimizing this. 
> A couple ideas:
>  # Don't do all those copies.
>  # Checking whether hw needs incrementing is not necessary unless an end 
> offset actually changed



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
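
To make idea 2 above concrete, here is an illustrative sketch in plain Java of
gating the high-watermark recomputation behind an end-offset change. This is a
deliberately simplified model (the real code is Scala, tracks the ISR, and holds
locks), not the actual Partition implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LeaderHighWatermark {
    private final Map<Integer, Long> replicaEndOffsets = new ConcurrentHashMap<>();
    private volatile long highWatermark = 0L;

    /** Called on every follower fetch; brokerId/offset are illustrative parameters. */
    public void onFollowerFetch(int brokerId, long fetchedEndOffset) {
        Long previous = replicaEndOffsets.put(brokerId, fetchedEndOffset);
        // Idea 2: skip the recomputation entirely unless an end offset actually moved.
        if (previous == null || fetchedEndOffset != previous) {
            maybeIncrementHighWatermark();
        }
    }

    private void maybeIncrementHighWatermark() {
        // Idea 1: iterate in place rather than copying the replica collection.
        long min = Long.MAX_VALUE;
        for (long endOffset : replicaEndOffsets.values()) {
            min = Math.min(min, endOffset);
        }
        if (min != Long.MAX_VALUE && min > highWatermark) {
            highWatermark = min;
        }
    }
}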


Re: [VOTE] KIP-517: Add consumer metrics to observe user poll behavior

2019-09-18 Thread Mickael Maison
+1 (non binding)
Thanks for the KIP, I can see this being really useful!

On Wed, Sep 18, 2019 at 4:40 PM Kevin Lu  wrote:
>
> Hi All,
>
> Since we have a bit of support, I'd like to start the vote on KIP-517:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-517%3A+Add+consumer+metrics+to+observe+user+poll+behavior
>
> Thanks!
>
> Regards,
> Kevin
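
For anyone who wants to inspect the new metrics once the KIP ships, a minimal
sketch that polls once and dumps every poll-related consumer metric. The specific
metric names proposed in KIP-517 (e.g. last-poll-seconds-ago, time-between-poll-avg)
are assumptions until the implementation is merged; the topic name is hypothetical:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollMetricsProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "poll-metrics-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("topic1"));
            consumer.poll(Duration.ofSeconds(1));
            // Print every metric whose name mentions "poll".
            consumer.metrics().forEach((name, metric) -> {
                if (name.name().contains("poll")) {
                    System.out.println(name.name() + " = " + metric.metricValue());
                }
            });
        }
    }
}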


Re: [DISCUSS] KIP-521: Enable redirection of Connect's log4j messages to a file by default

2019-09-18 Thread Randall Hauch
Thanks, Konstantine. I'm looking forward to a vote.

On Wed, Sep 18, 2019 at 8:50 AM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Hi Randall.
>
> I'd also love to enable MDC by default.
> However I agree with your concerns regarding compatibility. Runtime logs
> are a form of public interface (as I believe has been discussed before) and
> therefore enabling MDC in the console requires a major version bump. Now,
> enabling it only for the file output, indeed, would not be backwards
> incompatible, given that this is a new configuration. However, same as you,
> I also err on the side of keeping these two views (console and file logs)
> in sync for the sake of simplicity. This will also allow users to keep
> changing settings for both loggers in one place, which I suspect will cover
> most of the common and simple use cases.
>
> So I think we agree, MDC still remains only one line change away but soon
> will become the default option.
> Thanks for your thoughts!
>
> Konstantine
>
> On Mon, Sep 16, 2019 at 6:42 PM Randall Hauch  wrote:
>
> > Thanks for tackling this, Konstantine.
> >
> > The KIP looks great. My only question is about whether to enable the
> recent
> > MDC variable in the file log format, but for backward compatibility
> reasons
> > keep it as-is for the console. I suspect using the same format in the log
> > files and the console would be preferred, and to have users change it in
> > one place. WDYT?
> >
> > Randall
> >
> > On Wed, Sep 11, 2019 at 7:06 PM Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > Thanks Gwen!
> > > Indeed, it's a common setup and it's been missing for some time. I
> agree,
> > > it'll be nice to have this in place by default.
> > > I'm guessing previous attempts missed that such a change needs a KIP.
> > >
> > > Cheers,
> > > Konstantine
> > >
> > >
> > >
> > > On Wed, Sep 11, 2019 at 2:16 PM Gwen Shapira 
> wrote:
> > >
> > > > Great idea. It will greatly improve the ops experience. Can't believe
> > > > we didn't do it before.
> > > >
> > > > On Wed, Sep 11, 2019 at 2:07 PM Konstantine Karantasis
> > > >  wrote:
> > > > >
> > > > > *** Missed the [DISCUSS] tag in the previous email. Reposting here,
> > > > please
> > > > > reply in this thread instead ***
> > > > >
> > > > > Hi all.
> > > > >
> > > > > While we are in the midst of some very interesting KIP discussions,
> > I'd
> > > > > like to bring a brief and useful KIP on the table as well.
> > > > >
> > > > > It's about enabling redirection of log4j logging to a file for
> Kafka
> > > > > Connect by default, in a way similar to how this is done for Kafka
> > > > brokers
> > > > > today.
> > > > >
> > > > > You might find it short and straightforward but, still, needs to be
> > > > > discussed as a KIP since it's an externally visible (yet
> compatible)
> > > > change
> > > > > in how Connect logs its status during runtime.
> > > > >
> > > > > Here's a link to the KIP:
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-521%3A+Enable+redirection+of+Connect%27s+log4j+messages+to+a+file+by+default
> > > > >
> > > > > Looking forward to your comments!
> > > > > Konstantine
> > > >
> > >
> >
>


[VOTE] KIP-517: Add consumer metrics to observe user poll behavior

2019-09-18 Thread Kevin Lu
Hi All,

Since we have a bit of support, I'd like to start the vote on KIP-517:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-517%3A+Add+consumer+metrics+to+observe+user+poll+behavior

Thanks!

Regards,
Kevin


Re: [VOTE] KIP-373: Allow users to create delegation tokens for other users

2019-09-18 Thread Viktor Somogyi-Vass
Hi All,

Harsha, Ryanne: thanks for the vote!

I'd like to bump this again as today is the KIP freeze date and there is
still one binding vote needed which I'm hoping to get in order to have this
included in 2.4.

Thanks,
Viktor
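
For reference while voting, a minimal sketch of what creating a token on behalf of
another user could look like through the admin client. Everything except the
owner(...) option is existing API; owner(...) itself is the KIP's proposed
addition and not part of any released client yet:

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreateDelegationTokenOptions;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.token.delegation.DelegationToken;

public class CreateTokenForOtherUser {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The superuser's SASL credentials would be configured here -- omitted.
        try (AdminClient admin = AdminClient.create(props)) {
            CreateDelegationTokenOptions options = new CreateDelegationTokenOptions()
                // Proposed by KIP-373: create the token for another principal,
                // e.g. to hand out to Spark workers.
                .owner(new KafkaPrincipal(KafkaPrincipal.USER_TYPE, "spark-user"));
            DelegationToken token =
                admin.createDelegationToken(options).delegationToken().get();
            System.out.println("tokenId = " + token.tokenInfo().tokenId());
        }
    }
}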

On Tue, Sep 17, 2019 at 1:18 AM Ryanne Dolan  wrote:

> +1 non-binding
>
> Ryanne
>
> On Mon, Sep 16, 2019, 5:11 PM Harsha Ch  wrote:
>
> > +1 (binding). Thanks for the KIP Viktor
> >
> > Thanks,
> >
> > Harsha
> >
> > On Mon, Sep 16, 2019 at 3:02 AM, Viktor Somogyi-Vass <
> > viktorsomo...@gmail.com > wrote:
> >
> > >
> > >
> > >
> > > Hi All,
> > >
> > >
> > >
> > > I'd like to bump this again in order to get some more binding votes
> > and/or
> > > feedback in the hope we can push this in for 2.4.
> > >
> > >
> > >
> > > Thank you Manikumar, Gabor and Ryanne so far for the votes! (the last
> two
> > > were on the discussion thread after starting the vote but I think it
> > still
> > > counts :) )
> > >
> > >
> > >
> > > Thanks,
> > > Viktor
> > >
> > >
> > >
> > > On Wed, Aug 21, 2019 at 1:44 PM Manikumar <manikumar.re...@gmail.com> wrote:
> > >
> > >
> > >>
> > >>
> > >> Hi,
> > >>
> > >>
> > >>
> > >> +1 (binding).
> > >>
> > >>
> > >>
> > >> Thanks for the updated KIP. LGTM.
> > >>
> > >>
> > >>
> > >> Thanks,
> > >> Manikumar
> > >>
> > >>
> > >>
> > >> On Tue, Aug 6, 2019 at 3:14 PM Viktor Somogyi-Vass <viktorsomo...@gmail.com> wrote:
> > >>
> > >>
> > >>>
> > >>>
> > >>> Hi All,
> > >>>
> > >>>
> > >>>
> > >>> Bumping this, I'd be happy to get some additional feedback and/or
> > votes.
> > >>>
> > >>>
> > >>>
> > >>> Thanks,
> > >>> Viktor
> > >>>
> > >>>
> > >>>
> > >>> On Wed, Jul 31, 2019 at 11:04 AM Viktor Somogyi-Vass <viktorsomo...@gmail.com> wrote:
> > >>>
> > >>>
> > 
> > 
> >  Hi All,
> > 
> > 
> > 
> >  I'd like to start a vote on this KIP.
> > 
> > 
> > >>>
> > >>>
> > >>
> > >>
> > >>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-373%3A+Allow+users+to+create+delegation+tokens+for+other+users
> > >>
> > >>
> > >>>
> > 
> > 
> >  To summarize it: the proposed feature would allow users (usually
> >  superusers) to create delegation tokens for other users. This is
> > 
> > 
> > >>>
> > >>>
> > >>>
> > >>> especially
> > >>>
> > >>>
> > 
> > 
> >  helpful in Spark where the delegation token created this way can be
> >  distributed to workers.
> > 
> > 
> > 
> >  I'd be happy to receive any votes or additional feedback.
> > 
> > 
> > 
> >  Viktor
> > 
> > 
> > >>>
> > >>>
> > >>
> > >>
> > >
> > >
> > >
>


Build failed in Jenkins: kafka-trunk-jdk11 #818

2019-09-18 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8913: Document topic based configs & ISR settings for Streams 
apps

--
[...truncated 2.40 MB...]

kafka.controller.ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers FAILED
java.lang.IllegalStateException: Shutdown in progress
at 
java.base/java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.base/java.lang.Runtime.addShutdownHook(Runtime.java:215)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:262)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:232)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:242)
at kafka.utils.TestUtils$.tempDir(TestUtils.scala:103)
at kafka.utils.TestUtils$.createBrokerConfig(TestUtils.scala:284)
at 
kafka.controller.ControllerChannelManagerTest.<init>(ControllerChannelManagerTest.scala:41)

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaInterBrokerProtocolVersion STARTED
kafka.controller.ControllerChannelManagerTest.testStopReplicaInterBrokerProtocolVersion
 failed, log available in 


kafka.controller.ControllerChannelManagerTest > 
testStopReplicaInterBrokerProtocolVersion FAILED
java.lang.IllegalStateException: Shutdown in progress
at 
java.base/java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.base/java.lang.Runtime.addShutdownHook(Runtime.java:215)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:262)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:232)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:242)
at kafka.utils.TestUtils$.tempDir(TestUtils.scala:103)
at kafka.utils.TestUtils$.createBrokerConfig(TestUtils.scala:284)
at 
kafka.controller.ControllerChannelManagerTest.<init>(ControllerChannelManagerTest.scala:41)

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers STARTED
kafka.controller.ControllerChannelManagerTest.testStopReplicaSentOnlyToLiveAndShuttingDownBrokers
 failed, log available in 


kafka.controller.ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers FAILED
java.lang.IllegalStateException: Shutdown in progress
at 
java.base/java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.base/java.lang.Runtime.addShutdownHook(Runtime.java:215)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:262)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:232)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:242)
at kafka.utils.TestUtils$.tempDir(TestUtils.scala:103)
at kafka.utils.TestUtils$.createBrokerConfig(TestUtils.scala:284)
at 
kafka.controller.ControllerChannelManagerTest.<init>(ControllerChannelManagerTest.scala:41)

kafka.controller.ControllerChannelManagerTest > testStopReplicaGroupsByBroker 
STARTED
kafka.controller.ControllerChannelManagerTest.testStopReplicaGroupsByBroker 
failed, log available in 


kafka.controller.ControllerChannelManagerTest > testStopReplicaGroupsByBroker 
FAILED
java.lang.IllegalStateException: Shutdown in progress
at 
java.base/java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.base/java.lang.Runtime.addShutdownHook(Runtime.java:215)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:262)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:232)
at org.apache.kafka.test.TestUtils.tempDirectory(TestUtils.java:242)
at kafka.utils.TestUtils$.tempDir(TestUtils.scala:103)
at kafka.utils.TestUtils$.createBrokerConfig(TestUtils.scala:284)
at 
kafka.controller.ControllerChannelManagerTest.<init>(ControllerChannelManagerTest.scala:41)

kafka.controller.ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr STARTED
kafka.controller.ControllerChannelManagerTest.testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr
 failed, log available in 

Re: [DISCUSS] KIP-521: Enable redirection of Connect's log4j messages to a file by default

2019-09-18 Thread Konstantine Karantasis
Hi Randall.

I'd also love to enable MDC by default.
However I agree with your concerns regarding compatibility. Runtime logs
are a form of public interface (as I believe has been discussed before) and
therefore enabling MDC in the console requires a major version bump. Now,
enabling it only for the file output, indeed, would not be backwards
incompatible, given that this is a new configuration. However, same as you,
I also err on the side of keeping these two views (console and file logs)
in sync for the sake of simplicity. This will also allow users to keep
changing settings for both loggers in one place, which I suspect will cover
most of the common and simple use cases.

So I think we agree, MDC still remains only one line change away but soon
will become the default option.
Thanks for your thoughts!

Konstantine

On Mon, Sep 16, 2019 at 6:42 PM Randall Hauch  wrote:

> Thanks for tackling this, Konstantine.
>
> The KIP looks great. My only question is about whether to enable the recent
> MDC variable in the file log format, but for backward compatibility reasons
> keep it as-is for the console. I suspect using the same format in the log
> files and the console would be preferred, and to have users change it in
> one place. WDYT?
>
> Randall
>
> On Wed, Sep 11, 2019 at 7:06 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Thanks Gwen!
> > Indeed, it's a common setup and it's been missing for some time. I agree,
> > it'll be nice to have this in place by default.
> > I'm guessing previous attempts missed that such a change needs a KIP.
> >
> > Cheers,
> > Konstantine
> >
> >
> >
> > On Wed, Sep 11, 2019 at 2:16 PM Gwen Shapira  wrote:
> >
> > > Great idea. It will greatly improve the ops experience. Can't believe
> > > we didn't do it before.
> > >
> > > On Wed, Sep 11, 2019 at 2:07 PM Konstantine Karantasis
> > >  wrote:
> > > >
> > > > *** Missed the [DISCUSS] tag in the previous email. Reposting here,
> > > please
> > > > reply in this thread instead ***
> > > >
> > > > Hi all.
> > > >
> > > > While we are in the midst of some very interesting KIP discussions,
> I'd
> > > > like to bring a brief and useful KIP on the table as well.
> > > >
> > > > It's about enabling redirection of log4j logging to a file for Kafka
> > > > Connect by default, in a way similar to how this is done for Kafka
> > > brokers
> > > > today.
> > > >
> > > > You might find it short and straightforward but, still, needs to be
> > > > discussed as a KIP since it's an externally visible (yet compatible)
> > > change
> > > > in how Connect logs its status during runtime.
> > > >
> > > > Here's a link to the KIP:
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-521%3A+Enable+redirection+of+Connect%27s+log4j+messages+to+a+file+by+default
> > > >
> > > > Looking forward to your comments!
> > > > Konstantine
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #3908

2019-09-18 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Remove duplicate definition of transactional.id.expiration.ms

[bbejeck] remove unused import (#7345)

[matthias] KAFKA-8913: Document topic based configs & ISR settings for Streams 
apps

--
[...truncated 6.02 MB...]
org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical STARTED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectWithDefaultValue 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectWithDefaultValue 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
dateToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
dateToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectOptional STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectOptional PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
timeToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
timeToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > floatToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testCacheSchemaToConnectConversion PASSED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > doubleToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectOptionalWithDefaultValue PASSED

[jira] [Created] (KAFKA-8920) Provide a new properties to enable/disable expired-token-cleaner scheduler

2019-09-18 Thread Chatchavit Nitipongpun (Jira)
Chatchavit Nitipongpun created KAFKA-8920:
-

 Summary: Provide a new properties to enable/disable 
expired-token-cleaner scheduler
 Key: KAFKA-8920
 URL: https://issues.apache.org/jira/browse/KAFKA-8920
 Project: Kafka
  Issue Type: Improvement
  Components: config
Affects Versions: 2.3.0
Reporter: Chatchavit Nitipongpun


Currently, the expired-token-cleaner scheduler is automatically started up when 
the delegation.token.master.key is set.

Providing a new property to enable/disable this scheduler would let developers 
manage expired tokens externally themselves, so the Kafka process would not need 
to spend resources cleaning up expired tokens, which is not part of its core 
functionality.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-2.1-jdk8 #230

2019-09-18 Thread Apache Jenkins Server
See