Build failed in Jenkins: kafka-trunk-jdk14 #254

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] make produce-sync flush (#8925)


--
[...truncated 3.19 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task 

Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2020-06-30 Thread Dániel Urbán
Hi Manikumar,

Thanks for the comments.
1. Will change this - thought that "command-config" is used for admin
clients.
2. It's not necessary, just felt like a nice quality-of-life feature - will
remove it.

Thanks,
Daniel

On Tue, Jun 30, 2020 at 4:16 AM Manikumar  wrote:

> Hi Daniel,
>
> Thanks for working on this KIP. Proposed changes look good to me.
>
> minor comments:
> 1. We use "command-config" option name in most of the cmdline tools to pass
> config
> properties file. We can use the same name here.
>
> 2. Not sure if we need a separate option to pass a consumer property;
> fewer options are better.
>
> Thanks,
> Manikumar
>
> On Wed, Jun 24, 2020 at 8:53 PM Dániel Urbán 
> wrote:
>
> > Hi,
> >
> > I see that this KIP has become somewhat inactive - I'd like to pick it up
> > and work on it if that is okay.
> > Part of the work is done, as switching to the Consumer API is already in
> > trunk, but some functionality is still missing.
> >
> > I've seen the current PR and the discussion so far, only have a few
> things
> > to add:
> > - I like the idea of the topic-partition argument, it would be useful to
> > filter down to specific partitions.
> > - Instead of a topic list arg, a pattern would be more powerful, and also
> > fit better with the other tools (e.g. how the kafka-topics tool works).
> >
> > Regards,
> > Daniel
> >
>
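The pattern-based topic selection discussed above (a regex instead of a fixed
topic list, matching how kafka-topics behaves) comes down to filtering topic
names against a pattern. A minimal, self-contained sketch of that idea — class
and method names here are illustrative, not GetOffsetShell's actual API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

/**
 * Sketch of regex-based topic filtering for an offset-listing tool.
 * Illustrative only; the real tool would fetch topic names from the
 * cluster metadata before filtering.
 */
public class TopicPatternFilter {

    /** Keep only topic names that fully match the given regex. */
    public static List<String> filterTopics(List<String> allTopics, String regex) {
        Pattern p = Pattern.compile(regex);
        return allTopics.stream()
                .filter(t -> p.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> topics = Arrays.asList("orders", "orders-v2", "payments", "__consumer_offsets");
        // One pattern can cover several topics in a single pass:
        System.out.println(filterTopics(topics, "orders.*")); // prints [orders, orders-v2]
    }
}
```

A single pattern also keeps the number of metadata lookups down, since all
matching topics can be resolved from one metadata response.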


Build failed in Jenkins: kafka-trunk-jdk11 #1606

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] make produce-sync flush (#8925)


--
[...truncated 3.18 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4678

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] make produce-sync flush (#8925)


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

[jira] [Created] (KAFKA-10221) Backport fix for KAFKA-9603 to 2.5

2020-06-30 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-10221:
-

 Summary: Backport fix for KAFKA-9603 to 2.5 
 Key: KAFKA-10221
 URL: https://issues.apache.org/jira/browse/KAFKA-10221
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.5.0
Reporter: Bruno Cadonna


The fix for [KAFKA-9603|https://issues.apache.org/jira/browse/KAFKA-9603] shall 
be backported to 2.5. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.5-jdk8 #161

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10212: Describing a topic with the TopicCommand fails if


--
[...truncated 2.93 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED


Re: [VOTE] KIP-623: Add "internal-topics" option to streams application reset tool

2020-06-30 Thread Bruno Cadonna
Hi,

I have already brought this up in the discussion thread.

Should we not run a dry-run in any case to avoid inadvertently
deleting topics of other applications?

I know it is a backward-incompatible change if users use it in
scripts, but I think it is still worth discussing. I would like to hear
what committers think about it.

Best,
Bruno

On Tue, Jun 30, 2020 at 5:33 AM Boyang Chen  wrote:
>
> Thanks John for the great suggestion. +1 for enforcing the prefix check for
> the `--internal-topics` list.
>
> On Mon, Jun 29, 2020 at 5:11 PM John Roesler  wrote:
>
> > Oh, I meant to say, if that’s the case, then I’m +1 (binding) also :)
> >
> > Thanks again,
> > John
> >
> > On Mon, Jun 29, 2020, at 19:09, John Roesler wrote:
> > > Thanks for the KIP, Joel!
> > >
> > > It seems like a good pattern to document would be to first run with
> > > —dry-run and without —internal-topics to list all potential topics, and
> > > then to use —internal-topics if needed to limit the internal topics to
> > > delete.
> > >
> > > Just to make sure, would we have a sanity check to forbid including
> > > arbitrary topics? I.e., it seems like —internal-topics should require
> > > all the topics to be prefixed with the app id. Is that right?
> > >
> > > Thanks,
> > > John
> > >
> > > On Mon, Jun 29, 2020, at 18:25, Guozhang Wang wrote:
> > > > Thanks Joel, the KIP lgtm.
> > > >
> > > > A minor suggestion is to explain where users can get the list of
> > internal
> > > > topics of a given application, and maybe also add it as part of the
> > helper
> > > > scripts, for example via topology description.
> > > >
> > > > Overall, I'm +1 as well (binding).
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Sat, Jun 27, 2020 at 4:33 AM Joel Wee  wrote:
> > > >
> > > > > Thanks Boyang, I think what you’ve said makes sense. I’ve made the
> > > > > motivation clearer now:
> > > > >
> > > > >
> > > > > Users may want to specify which internal topics should be deleted. At
> > > > > present, the streams reset tool deletes all topics that start with
> > > > > "<application.id>-" and there are no options to control it.
> > > > >
> > > > > The `--internal-topics` option is especially useful when there are
> > prefix
> > > > > conflicts between applications, e.g. "app" and "app-v2". In this
> > case, if
> > > > > we want to reset "app", the reset tool’s default behaviour will
> > delete both
> > > > > the internal topics of "app" and "app-v2" (since both are prefixed by
> > > > > "app-"). With the `--internal-topics` option, we can provide
> > internal topic
> > > > > names for "app" and delete the internal topics for "app" without
> > deleting
> > > > > the internal topics for "app-v2".
> > > > >
> > > > > Best
> > > > >
> > > > > Joel
> > > > >
> > > > > On 27 Jun 2020, at 2:07 AM, Boyang Chen  > > > > > wrote:
> > > > >
> > > > > Thanks for driving the proposal Joel, I have a minor suggestion:  we
> > should
> > > > > be more clear about why we introduce this flag, so it would be
> > better to
> > > > > also state clearly in the document for the default behavior as well,
> > such as:
> > > > >
> > > > > Comma-separated list of internal topics to be deleted. By default,
> > > > > Streams reset tool will delete all topics prefixed by the
> > > > > application.id.
> > > > >
> > > > > This flag is useful when you need to keep certain topics intact due
> > to
> > > > > the prefix conflict with another application (such as "app" vs
> > > > > "app-v2").
> > > > >
> > > > > With provided internal topic names for "app", the reset tool will
> > only
> > > > > delete internal topics associated with "app", instead of both "app"
> > > > > and "app-v2".
> > > > >
> > > > >
> > > > > Other than that, +1 from me (binding).
> > > > >
> > > > > On Wed, Jun 24, 2020 at 1:19 PM Joel Wee  >  > > > > joel@outlook.com>> wrote:
> > > > >
> > > > > Apologies. Changing the subject.
> > > > >
> > > > > On 24 Jun 2020, at 9:14 PM, Joel Wee  > > > > joel@outlook.com> > > > > joel@outlook.com>> wrote:
> > > > >
> > > > > Hi all
> > > > >
> > > > > I would like to start a vote for KIP-623, which adds the option
> > > > > --internal-topics to the streams-application-reset-tool:
> > > > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158862177
> > > > > .
> > > > >
> > > > > Please let me know what you think.
> > > > >
> > > > > Best
> > > > >
> > > > > Joel
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
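The sanity check John suggests — requiring every topic passed via
`--internal-topics` to carry the `<application.id>-` prefix — can be sketched
as below. Names are hypothetical, not the reset tool's actual code; note that
the plain prefix match is also exactly why the "app" vs "app-v2" conflict
exists in the default delete-everything-prefixed behaviour:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

/**
 * Sketch of a prefix sanity check for user-supplied internal topic names.
 * Illustrative only.
 */
public class InternalTopicCheck {

    /** Return the candidate topics that are NOT prefixed with "<appId>-". */
    public static List<String> invalidTopics(String appId, List<String> candidates) {
        String prefix = appId + "-";
        return candidates.stream()
                .filter(t -> !t.startsWith(prefix))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // "app-v2-store-changelog" also starts with "app-", so a prefix check
        // alone cannot tell "app" topics apart from "app-v2" topics -- which is
        // why the user must name the topics explicitly via --internal-topics.
        List<String> bad = invalidTopics("app",
                Arrays.asList("app-store-changelog", "app-v2-store-changelog", "other-topic"));
        System.out.println(bad); // prints [other-topic]
    }
}
```

With such a check, the tool would reject arbitrary topics outright, while the
explicit `--internal-topics` list handles the prefix-conflict case.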


Re: [DISCUSS] KIP-616: Rename implicit Serdes instances in kafka-streams-scala

2020-06-30 Thread Yuriy Badalyantc
Hi everybody!

Looks like a discussion about KIP-513 could take a while. I think we should
move forward with KIP-616 without waiting for KIP-513.

I created a new pull request for KIP-616:
https://github.com/apache/kafka/pull/8955. It contains a new
`org.apache.kafka.streams.scala.serialization.Serdes` object without the name
clash. The old one is marked as deprecated. This change is backward
compatible and could be merged in any future release.

On Wed, Jun 3, 2020 at 12:41 PM Yuriy Badalyantc  wrote:

> Hi, John
>
> Thanks for pointing that out. I expressed my thoughts about KIP-513 and
> its connection to KIP-616 in the KIP-513 mail list.
>
> - Yuriy
>
> On Sun, May 31, 2020 at 1:26 AM John Roesler  wrote:
>
>> Hi Yuriy,
>>
>> I was just looking back at KIP-513, and I’m wondering if there’s any
>> overlap we should consider here, or if they are just orthogonal.
>>
>> Thanks,
>> -John
>>
>> On Thu, May 28, 2020, at 21:36, Yuriy Badalyantc wrote:
>> > At the current moment, I think John's plan is better than the original
>> plan
>> > described in the KIP. I think we should create a new `Serdes` in another
>> > package. The old one will be deprecated.
>> >
>> > - Yuriy
>> >
>> > On Fri, May 29, 2020 at 8:58 AM John Roesler 
>> wrote:
>> >
>> > > Thanks, Matthias,
>> > >
>> > > If we go with the approach Yuriy and I agreed on, to deprecate and
>> replace
>> > > the whole class and not just a few of the methods, then the timeline
>> is
>> > > less of a concern. Under that plan, Yuriy can just write the new class
>> > > exactly the way he wants and people can cleanly swap over to the new
>> > > pattern when they are ready.
>> > >
>> > > The timeline was more significant if we were just going to deprecate
>> some
>> > > methods and add new methods to the existing class. That plan requires
>> two
>> > > implementation phases, where we first deprecate the existing methods
>> and
>> > > later swap the implicits at the same time we remove the deprecated
>> members.
>> > > Aside from the complexity of that approach, it’s not a breakage free
>> path,
>> > > as some users would be forced to continue using the deprecated members
>> > > until a future release drops them, breaking their source code, and
>> only
>> > > then can they update their code.
>> > >
>> > > That wouldn’t be the end of the world, and we’ve had to do the same
>> thing
>> > > in the past with the implicit conversions, but this is a much wider
>> > > scope, since it’s all the serdes. I’m happy with the new plan, since
>> it’s
>> > > not only one step, but also it provides everyone a breakage-free path.
>> > >
>> > > We can still consider dropping the deprecated class in 3.0; I just
>> wanted
>> > > to clarify how the timeline issue has changed.
>> > >
>> > > Thanks,
>> > > John
>> > >
>> > > On Thu, May 28, 2020, at 20:34, Matthias J. Sax wrote:
>> > > > I am not a Scala person, so I cannot really contribute much.
>> However for
>> > > > the deprecation period, if we get the change into 2.7, it might be
>> ok to
>> > > > remove the deprecated classes in 3.0.
>> > > >
>> > > > It would only be one minor release in between what is a little bit
>> short
>> > > > (we usually prefer at least two minor releases, better three), but
>> if we
>> > > > have a good reason for it, it might be ok.
>> > > >
>> > > > If we cannot remove it in 3.0, it seems there would be a 4.0 in
>> about a
>> > > > year(?) when ZK removal is finished and we can remove the deprecated
>> > > > code then.
>> > > >
>> > > >
>> > > > -Matthias
>> > > >
>> > > > On 5/28/20 7:39 AM, John Roesler wrote:
>> > > > > Hi Yuriy,
>> > > > >
>> > > > > Sounds good to me! I had a feeling we were bringing different
>> context
>> > > > > to the discussion; thanks for sticking with the conversation
>> until we
>> > > got
>> > > > > it hashed out.
>> > > > >
>> > > > > I'm glad you prefer Serde*s*, since having multiple different
>> classes
>> > > with
>> > > > > the same name leads to all kinds of trouble. "Serdes" seems
>> relatively
>> > > > > safe because people in the Scala lib won't be using the Java
>> Serdes
>> > > class,
>> > > > > and they won't be using the deprecated and non-deprecated one at
>> the
>> > > > > same time.
>> > > > >
>> > > > > Thank again,
>> > > > > -John
>> > > > >
>> > > > > On Thu, May 28, 2020, at 02:21, Yuriy Badalyantc wrote:
>> > > > >> Ok, I understood you, John. I wasn't sure about kafka deprecation
>> > > policy
>> > > > >> and thought that the full cycle could be done with 2.7 version.
>> > > Waiting for
>> > > > >> 3.0 is too much, I agree with it.
>> > > > >>
>> > > > >> So, I think creating one more `Serdes` in another package is our
>> way.
>> > > I
>> > > > >> suggest one of the following:
>> > > > >> 1. `org.apache.kafka.streams.scala.serde.Serdes`
>> > > > >> 2. `org.apache.kafka.streams.scala.serialization.Serdes`
>> > > > >>
>> > > > >> About `Serde` vs `Serdes`. I'm strongly against `Serde` because
>> it
>> > > would
>> > > > >> lead to a new name 

[jira] [Created] (KAFKA-10220) NPE when describing resources

2020-06-30 Thread Edoardo Comar (Jira)
Edoardo Comar created KAFKA-10220:
-

 Summary: NPE when describing resources
 Key: KAFKA-10220
 URL: https://issues.apache.org/jira/browse/KAFKA-10220
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Edoardo Comar


In the current trunk code, describing a topic can fail with an NPE in the
broker on the line

{{resource.configurationKeys.asScala.forall(_.contains(configName))}}

(configurationKeys is null?)

[2020-06-30 11:10:39,464] ERROR [Admin Manager on Broker 0]: Error processing 
describe configs request for resource DescribeConfigsResource(resourceType=2, 
resourceName='topic1', configurationKeys=null) (kafka.server.AdminManager)
java.lang.NullPointerException
	at kafka.server.AdminManager.$anonfun$describeConfigs$3(AdminManager.scala:395)
	at kafka.server.AdminManager.$anonfun$describeConfigs$3$adapted(AdminManager.scala:393)
	at scala.collection.TraversableLike.$anonfun$filterImpl$1(TraversableLike.scala:248)
	at scala.collection.Iterator.foreach(Iterator.scala:929)
	at scala.collection.Iterator.foreach$(Iterator.scala:929)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
	at scala.collection.IterableLike.foreach(IterableLike.scala:71)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike.filterImpl(TraversableLike.scala:247)
	at scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:245)
	at scala.collection.AbstractTraversable.filterImpl(Traversable.scala:104)
	at scala.collection.TraversableLike.filter(TraversableLike.scala:259)
	at scala.collection.TraversableLike.filter$(TraversableLike.scala:259)
	at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
	at kafka.server.AdminManager.createResponseConfig$1(AdminManager.scala:393)
	at kafka.server.AdminManager.$anonfun$describeConfigs$1(AdminManager.scala:412)
	at scala.collection.immutable.List.map(List.scala:283)
	at kafka.server.AdminManager.describeConfigs(AdminManager.scala:386)
	at kafka.server.KafkaApis.handleDescribeConfigsRequest(KafkaApis.scala:2595)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:165)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)
	at java.lang.Thread.run(Thread.java:748)
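A null-safe variant of the failing check could look like the following. This is only a sketch of a likely fix, not the patch actually applied; the class and method names are illustrative, and it assumes a null or empty `configurationKeys` (as in the failing request above) means "describe all configs".

```java
import java.util.List;

public class DescribeConfigsNullGuard {
    // Sketch of a null-safe check: configurationKeys == null (as sent by the
    // client in the NPE report) is treated as "no filter, include every config".
    static boolean includeConfig(List<String> configurationKeys, String configName) {
        return configurationKeys == null
                || configurationKeys.isEmpty()
                || configurationKeys.contains(configName);
    }

    public static void main(String[] args) {
        System.out.println(includeConfig(null, "retention.ms"));                      // true
        System.out.println(includeConfig(List.of("cleanup.policy"), "retention.ms")); // false
    }
}
```

Guarding at the filter site keeps the existing request handling path intact while restoring the pre-KIP behaviour for clients that omit the key list.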



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.4-jdk8 #230

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9509: Increase timeout when consuming records to fix flaky 
test in


--
[...truncated 7.68 MB...]
org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testLeftJoin STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testLeftJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testWindowing STARTED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldRequireCopartitionedStreams STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldRequireCopartitionedStreams PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldClearTableEntryOnNullValueUpdates STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldClearTableEntryOnNullValueUpdates PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldNotJoinOnTableUpdates STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldNotJoinOnTableUpdates PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldNotJoinWithEmptyTableOnStreamUpdates STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldNotJoinWithEmptyTableOnStreamUpdates PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldLogAndMeterWhenSkippingNullLeftKey STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldLogAndMeterWhenSkippingNullLeftKey PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldLogAndMeterWhenSkippingNullLeftValue STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldLogAndMeterWhenSkippingNullLeftValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldJoinOnlyIfMatchFoundOnStreamUpdates STARTED

org.apache.kafka.streams.kstream.internals.KStreamKTableJoinTest > 
shouldJoinOnlyIfMatchFoundOnStreamUpdates PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableRightJoinTest > 
shouldLogAndMeterSkippedRecordsDueToNullLeftKey STARTED

org.apache.kafka.streams.kstream.internals.KTableKTableRightJoinTest > 
shouldLogAndMeterSkippedRecordsDueToNullLeftKey PASSED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldCallInitOfAdapteeTransformer STARTED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldCallInitOfAdapteeTransformer PASSED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldCallTransformOfAdapteeTransformerAndReturnSingletonIterable STARTED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldCallTransformOfAdapteeTransformerAndReturnSingletonIterable PASSED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldAlwaysGetNewAdapterTransformer STARTED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldAlwaysGetNewAdapterTransformer PASSED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldCallTransformOfAdapteeTransformerAndReturnEmptyIterable STARTED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldCallTransformOfAdapteeTransformerAndReturnEmptyIterable PASSED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldCallCloseOfAdapteeTransformer STARTED

org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapterTest > 
shouldCallCloseOfAdapteeTransformer PASSED

org.apache.kafka.streams.kstream.internals.UnlimitedWindowTest > 
shouldAlwaysOverlap STARTED

org.apache.kafka.streams.kstream.internals.UnlimitedWindowTest > 
shouldAlwaysOverlap PASSED

org.apache.kafka.streams.kstream.internals.UnlimitedWindowTest > 
cannotCompareUnlimitedWindowWithDifferentWindowType STARTED

org.apache.kafka.streams.kstream.internals.UnlimitedWindowTest > 
cannotCompareUnlimitedWindowWithDifferentWindowType PASSED

org.apache.kafka.streams.kstream.internals.MaterializedInternalTest > 
shouldGenerateStoreNameWithPrefixIfProvidedNameIsNull STARTED

org.apache.kafka.streams.kstream.internals.MaterializedInternalTest > 
shouldGenerateStoreNameWithPrefixIfProvidedNameIsNull PASSED

org.apache.kafka.streams.kstream.internals.MaterializedInternalTest > 
shouldUseProvidedStoreNameWhenSet STARTED

org.apache.kafka.streams.kstream.internals.MaterializedInternalTest > 
shouldUseProvidedStoreNameWhenSet PASSED

org.apache.kafka.streams.kstream.internals.MaterializedInternalTest > 
shouldUseStoreNameOfSupplierWhenProvided STARTED

org.apache.kafka.streams.kstream.internals.MaterializedInternalTest > 

Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2020-06-30 Thread Dániel Urbán
Hi,

To help with the discussion, I also have a PR for this KIP now, reflecting
the current state of the KIP: https://github.com/apache/kafka/pull/8957.
I would like to ask a committer to start the test job on it.

One thing I realised though is that there is a KIP id collision; there is
another KIP with the same id:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993
What is the protocol in this case? Should I acquire a new id for the
GetOffsetShell KIP, and update it?

Thanks in advance,
Daniel

Dániel Urbán  wrote (at 9:23 on Tue, Jun 30, 2020):

> Hi Manikumar,
>
> Thanks for the comments.
> 1. Will change this - thought that "command-config" is used for admin
> clients.
> 2. It's not necessary, just felt like a nice quality-of-life feature - will
> remove it.
>
> Thanks,
> Daniel
>
> On Tue, Jun 30, 2020 at 4:16 AM Manikumar 
> wrote:
>
> > Hi Daniel,
> >
> > Thanks for working on this KIP. Proposed changes look good to me.
> >
> > minor comments:
> > 1. We use "command-config" option name in most of the cmdline tools to
> pass
> > config
> > properties file. We can use the same name here.
> >
> > 2. Not sure if we need a separate option to pass a consumer property.
> > fewer options are better.
> >
> > Thanks,
> > Manikumar
> >
> > On Wed, Jun 24, 2020 at 8:53 PM Dániel Urbán 
> > wrote:
> >
> > > Hi,
> > >
> > > I see that this KIP turned somewhat inactive - I'd like to pick it up
> and
> > > work on it if it is okay.
> > > Part of the work is done, as switching to the Consumer API is already
> in
> > > trunk, but some functionality is still missing.
> > >
> > > I've seen the current PR and the discussion so far, only have a few
> > things
> > > to add:
> > > - I like the idea of the topic-partition argument, it would be useful
> to
> > > filter down to specific partitions.
> > > - Instead of a topic list arg, a pattern would be more powerful, and
> also
> > > fit better with the other tools (e.g. how the kafka-topics tool works).
> > >
> > > Regards,
> > > Daniel
> > >
> >
>
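The pattern-based topic filtering suggested in the thread (a regex instead of a topic list, as kafka-topics-style tools do) can be sketched as follows; the class and method names are illustrative, not taken from the actual PR.

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternFilter {
    // Keep only the topics whose full name matches the given regex,
    // mirroring how pattern-based tool options typically behave.
    static List<String> matchTopics(List<String> allTopics, String pattern) {
        Pattern p = Pattern.compile(pattern);
        return allTopics.stream()
                .filter(t -> p.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> topics = List.of("orders", "orders-dlq", "payments");
        System.out.println(matchTopics(topics, "orders.*")); // [orders, orders-dlq]
    }
}
```

Note that `matches()` anchors the regex to the whole topic name, so `orders.*` selects `orders` and `orders-dlq` but not `payments`.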


Build failed in Jenkins: kafka-trunk-jdk14 #255

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9893: Configurable TCP connection timeout and improve the initial


--
[...truncated 3.18 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] 

Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2020-06-30 Thread Hu Xi
That's a great KIP for the GetOffsetShell tool. I have a question about the 
multiple-topic lookup situation.

In a secured environment, what does the tool output if it has DESCRIBE 
privileges for some topics but not for others?


From: Dániel Urbán 
Sent: June 30, 2020, 22:15
To: dev@kafka.apache.org 
Subject: Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for 
multiple topics, minimize the number of requests to server

Hi Manikumar,
Thanks, went ahead and assigned a new ID, it is KIP-635 now:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-635%3A+GetOffsetShell%3A+support+for+multiple+topics+and+consumer+configuration+override
Daniel

Manikumar  wrote (at 16:03 on Tue, Jun 30, 2020):

> Hi,
>
> Yes, we can assign new id to this KIP.
>
> Thanks.
>
> On Tue, Jun 30, 2020 at 6:59 PM Dániel Urbán 
> wrote:
>
> > Hi,
> >
> > To help with the discussion, I also have a PR for this KIP now,
> reflecting
> > the current state of the KIP: https://github.com/apache/kafka/pull/8957.
> > I would like to ask a committer to start the test job on it.
> >
> > One thing I realised though is that there is a KIP id collision, there is
> > another KIP with the same id:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993
> > What is the protocol in this case? Should I acquire a new id for the
> > GetOffsetShell KIP, and update it?
> >
> > Thanks in advance,
> > Daniel
> >
> > Dániel Urbán  wrote (at 9:23 on Tue, Jun 30, 2020):
> >
> > > Hi Manikumar,
> > >
> > > Thanks for the comments.
> > > 1. Will change this - thought that "command-config" is used for admin
> > > clients.
> > > 2. It's not necessary, just felt like a nice quality-of-life feature -
> > will
> > > remove it.
> > >
> > > Thanks,
> > > Daniel
> > >
> > > On Tue, Jun 30, 2020 at 4:16 AM Manikumar 
> > > wrote:
> > >
> > > > Hi Daniel,
> > > >
> > > > Thanks for working on this KIP. Proposed changes look good to me.
> > > >
> > > > minor comments:
> > > > 1. We use "command-config" option name in most of the cmdline tools
> to
> > > pass
> > > > config
> > > > properties file. We can use the same name here.
> > > >
> > > > 2. Not sure if we need a separate option to pass a consumer
> property.
> > > > fewer options are better.
> > > >
> > > > Thanks,
> > > > Manikumar
> > > >
> > > > On Wed, Jun 24, 2020 at 8:53 PM Dániel Urbán 
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I see that this KIP turned somewhat inactive - I'd like to pick it
> up
> > > and
> > > > > work on it if it is okay.
> > > > > Part of the work is done, as switching to the Consumer API is
> already
> > > in
> > > > > trunk, but some functionality is still missing.
> > > > >
> > > > > I've seen the current PR and the discussion so far, only have a few
> > > > things
> > > > > to add:
> > > > > - I like the idea of the topic-partition argument, it would be
> useful
> > > to
> > > > > filter down to specific partitions.
> > > > > - Instead of a topic list arg, a pattern would be more powerful,
> and
> > > also
> > > > > fit better with the other tools (e.g. how the kafka-topics tool
> > works).
> > > > >
> > > > > Regards,
> > > > > Daniel
> > > > >
> > > >
> > >
> >
>
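The topic-partition argument discussed above (filtering down to specific partitions) could be parsed along these lines. The `topic:0,1,2` syntax and all names here are assumptions for illustration only, not the KIP's final format.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopicPartitionArg {
    // Parse "topicName:0,1,2" into a topic plus requested partition ids;
    // an empty partition list means "all partitions of the topic".
    static Map.Entry<String, List<Integer>> parse(String arg) {
        String[] parts = arg.split(":", 2);
        List<Integer> partitions = (parts.length < 2 || parts[1].isEmpty())
                ? Collections.emptyList()
                : Arrays.stream(parts[1].split(","))
                        .map(Integer::parseInt)
                        .collect(Collectors.toList());
        return Map.entry(parts[0], partitions);
    }

    public static void main(String[] args) {
        System.out.println(parse("topic1:0,2")); // topic1=[0, 2]
        System.out.println(parse("topic1"));     // topic1=[]
    }
}
```

Treating a missing partition list as "all partitions" keeps the single-argument form backward compatible with a plain topic name.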


Build failed in Jenkins: kafka-trunk-jdk8 #4679

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9893: Configurable TCP connection timeout and improve the initial


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task 

Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2020-06-30 Thread Manikumar
Hi,

Yes, we can assign new id to this KIP.

Thanks.

On Tue, Jun 30, 2020 at 6:59 PM Dániel Urbán  wrote:

> Hi,
>
> To help with the discussion, I also have a PR for this KIP now, reflecting
> the current state of the KIP: https://github.com/apache/kafka/pull/8957.
> I would like to ask a committer to start the test job on it.
>
> One thing I realised though is that there is a KIP id collision, there is
> another KIP with the same id:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993
> What is the protocol in this case? Should I acquire a new id for the
> GetOffsetShell KIP, and update it?
>
> Thanks in advance,
> Daniel
>
> Dániel Urbán  wrote (at 9:23 on Tue, Jun 30, 2020):
>
> > Hi Manikumar,
> >
> > Thanks for the comments.
> > 1. Will change this - thought that "command-config" is used for admin
> > clients.
> > 2. It's not necessary, just felt like a nice quality-of-life feature -
> will
> > remove it.
> >
> > Thanks,
> > Daniel
> >
> > On Tue, Jun 30, 2020 at 4:16 AM Manikumar 
> > wrote:
> >
> > > Hi Daniel,
> > >
> > > Thanks for working on this KIP. Proposed changes look good to me.
> > >
> > > minor comments:
> > > 1. We use "command-config" option name in most of the cmdline tools to
> > pass
> > > config
> > > properties file. We can use the same name here.
> > >
> > > 2. Not sure if we need a separate option to pass a consumer property.
> > > fewer options are better.
> > >
> > > Thanks,
> > > Manikumar
> > >
> > > On Wed, Jun 24, 2020 at 8:53 PM Dániel Urbán 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I see that this KIP turned somewhat inactive - I'd like to pick it up
> > and
> > > > work on it if it is okay.
> > > > Part of the work is done, as switching to the Consumer API is already
> > in
> > > > trunk, but some functionality is still missing.
> > > >
> > > > I've seen the current PR and the discussion so far, only have a few
> > > things
> > > > to add:
> > > > - I like the idea of the topic-partition argument, it would be useful
> > to
> > > > filter down to specific partitions.
> > > > - Instead of a topic list arg, a pattern would be more powerful, and
> > also
> > > > fit better with the other tools (e.g. how the kafka-topics tool
> works).
> > > >
> > > > Regards,
> > > > Daniel
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2020-06-30 Thread Dániel Urbán
Hi Manikumar,
Thanks, went ahead and assigned a new ID, it is KIP-635 now:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-635%3A+GetOffsetShell%3A+support+for+multiple+topics+and+consumer+configuration+override
Daniel

Manikumar  wrote (at 16:03 on Tue, Jun 30, 2020):

> Hi,
>
> Yes, we can assign new id to this KIP.
>
> Thanks.
>
> On Tue, Jun 30, 2020 at 6:59 PM Dániel Urbán 
> wrote:
>
> > Hi,
> >
> > To help with the discussion, I also have a PR for this KIP now,
> reflecting
> > the current state of the KIP: https://github.com/apache/kafka/pull/8957.
> > I would like to ask a committer to start the test job on it.
> >
> > One thing I realised though is that there is a KIP id collision, there is
> > another KIP with the same id:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993
> > What is the protocol in this case? Should I acquire a new id for the
> > GetOffsetShell KIP, and update it?
> >
> > Thanks in advance,
> > Daniel
> >
> > Dániel Urbán  wrote (at 9:23 on Tue, Jun 30, 2020):
> >
> > > Hi Manikumar,
> > >
> > > Thanks for the comments.
> > > 1. Will change this - thought that "command-config" is used for admin
> > > clients.
> > > 2. It's not necessary, just felt like a nice quality-of-life feature -
> > will
> > > remove it.
> > >
> > > Thanks,
> > > Daniel
> > >
> > > On Tue, Jun 30, 2020 at 4:16 AM Manikumar 
> > > wrote:
> > >
> > > > Hi Daniel,
> > > >
> > > > Thanks for working on this KIP. Proposed changes look good to me.
> > > >
> > > > minor comments:
> > > > 1. We use "command-config" option name in most of the cmdline tools
> to
> > > pass
> > > > config
> > > > properties file. We can use the same name here.
> > > >
> > > > 2. Not sure if we need a separate option to pass a consumer
> property.
> > > > fewer options are better.
> > > >
> > > > Thanks,
> > > > Manikumar
> > > >
> > > > On Wed, Jun 24, 2020 at 8:53 PM Dániel Urbán 
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I see that this KIP turned somewhat inactive - I'd like to pick it
> up
> > > and
> > > > > work on it if it is okay.
> > > > > Part of the work is done, as switching to the Consumer API is
> already
> > > in
> > > > > trunk, but some functionality is still missing.
> > > > >
> > > > > I've seen the current PR and the discussion so far, only have a few
> > > > things
> > > > > to add:
> > > > > - I like the idea of the topic-partition argument, it would be
> useful
> > > to
> > > > > filter down to specific partitions.
> > > > > - Instead of a topic list arg, a pattern would be more powerful,
> and
> > > also
> > > > > fit better with the other tools (e.g. how the kafka-topics tool
> > works).
> > > > >
> > > > > Regards,
> > > > > Daniel
> > > > >
> > > >
> > >
> >
>


[Subject garbled by encoding: a question about the Kafka Streams API]

2020-06-30 Thread Koray
(The body of this message is mojibake; the recoverable fragments ask about
the KStreamBuilder class and its `from` method in Kafka 0.10.0, citing:
http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html)


Build failed in Jenkins: kafka-trunk-jdk14 #257

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update AlterConfigsOptions Javadoc (#8958)

[github] KAFKA-10200: Fix testability of PAPI with windowed stores (#8927)

[github] KAFKA-4996: Fix findbugs multithreaded correctness warnings for streams

[github] MINOR: Do not swallow exception when collecting PIDs (#8914)


--
[...truncated 3.19 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4681

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10200: Fix testability of PAPI with windowed stores (#8927)

[github] KAFKA-4996: Fix findbugs multithreaded correctness warnings for streams

[github] MINOR: Do not swallow exception when collecting PIDs (#8914)


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2020-06-30 Thread Dániel Urbán
That's a good question. In the PR I submitted, it would result in a list of
partitions of only those topics for which the user has the DESCRIBE privilege.
The tool utilizes Consumer.listTopics, so unauthorized topics are not
present in the response at all. The current version in trunk simply reports
that the topic does not exist if the user has no DESCRIBE privilege for it.

It would be hard (impossible?) to support detailed information about
unauthorized topics when using a pattern as a filter. It could be
manageable if the tool only supported a list of topics.

Maybe the only improvement needed is to explicitly document that the tool
only scans the authorized topics?

Hu Xi wrote (on Tue, 30 Jun 2020, 17:04):

> That's a great KIP for the GetOffsetShell tool. I have a question about the
> multiple-topic lookup situation.
>
> In a secured environment, what does the tool output if it has DESCRIBE
> privileges for some topics but not for others?
>
> 
> From: Dániel Urbán 
> Sent: 30 June 2020, 22:15
> To: dev@kafka.apache.org 
> Subject: Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support
> for multiple topics, minimize the number of requests to server
>
> Hi Manikumar,
> Thanks, went ahead and assigned a new ID, it is KIP-635 now:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-635%3A+GetOffsetShell%3A+support+for+multiple+topics+and+consumer+configuration+override
> Daniel
>
> Manikumar wrote (on Tue, 30 Jun 2020, 16:03):
>
> > Hi,
> >
> > Yes, we can assign new id to this KIP.
> >
> > Thanks.
> >
> > On Tue, Jun 30, 2020 at 6:59 PM Dániel Urbán 
> > wrote:
> >
> > > Hi,
> > >
> > > To help with the discussion, I also have a PR for this KIP now,
> > reflecting
> > > the current state of the KIP:
> https://github.com/apache/kafka/pull/8957.
> > > I would like to ask a committer to start the test job on it.
> > >
> > > One thing I realised though is that there is a KIP id collision, there
> is
> > > another KIP with the same id:
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993
> > > What is the protocol in this case? Should I acquire a new id for the
> > > GetOffsetShell KIP, and update it?
> > >
> > > Thanks in advance,
> > > Daniel
> > >
> > > Dániel Urbán wrote (on Tue, 30 Jun 2020, 9:23):
> > >
> > > > Hi Manikumar,
> > > >
> > > > Thanks for the comments.
> > > > 1. Will change this - thought that "command-config" is used for admin
> > > > clients.
> > > > 2. It's not necessary, just felt like a nice quality-of-life feature
> -
> > > will
> > > > remove it.
> > > >
> > > > Thanks,
> > > > Daniel
> > > >
> > > > On Tue, Jun 30, 2020 at 4:16 AM Manikumar  >
> > > > wrote:
> > > >
> > > > > Hi Daniel,
> > > > >
> > > > > Thanks for working on this KIP.  Proposed changes look good to me.
> > > > >
> > > > > minor comments:
> > > > > 1. We use "command-config" option name in most of the cmdline tools
> > to
> > > > pass
> > > > > config
> > > > > properties file. We can use the same name here.
> > > > >
> > > > > 2. Not sure if we need a separate option to pass a consumer
> > property.
> > > > > fewer options are better.
> > > > >
> > > > > Thanks,
> > > > > Manikumar
> > > > >
> > > > > On Wed, Jun 24, 2020 at 8:53 PM Dániel Urbán <
> urb.dani...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I see that this KIP turned somewhat inactive - I'd like to pick
> it
> > up
> > > > and
> > > > > > work on it if it is okay.
> > > > > > Part of the work is done, as switching to the Consumer API is
> > already
> > > > in
> > > > > > trunk, but some functionality is still missing.
> > > > > >
> > > > > > I've seen the current PR and the discussion so far, only have a
> few
> > > > > things
> > > > > > to add:
> > > > > > - I like the idea of the topic-partition argument, it would be
> > useful
> > > > to
> > > > > > filter down to specific partitions.
> > > > > > - Instead of a topic list arg, a pattern would be more powerful,
> > and
> > > > also
> > > > > > fit better with the other tools (e.g. how the kafka-topics tool
> > > works).
> > > > > >
> > > > > > Regards,
> > > > > > Daniel
> > > > > >
> > > > >
> > > >
> > >
> >
>
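The filtering behaviour Daniel describes, where unauthorized topics are simply absent from the `Consumer.listTopics` response and so can never match a pattern, can be sketched as follows. This is an illustrative Python stand-in (hypothetical names, not the tool's actual code): `listed_topics` plays the role of whatever the authorized `listTopics` call returns.

```python
import re

def resolve_partitions(listed_topics, topic_pattern):
    """listed_topics mimics Consumer.listTopics(): topics the principal lacks
    DESCRIBE on are absent entirely, so they can never match the pattern and
    the tool cannot distinguish 'unauthorized' from 'nonexistent'."""
    rx = re.compile(topic_pattern)
    return {topic: partitions
            for topic, partitions in listed_topics.items()
            if rx.fullmatch(topic)}

# Suppose the principal may DESCRIBE only "orders" and "payments":
visible = {"orders": [0, 1], "payments": [0]}

# "audit" may exist but be unauthorized; it silently yields nothing.
matched = resolve_partitions(visible, r"orders|audit")
# matched == {"orders": [0, 1]}
```

This is why supporting detailed output about unauthorized topics is hard with a pattern filter: the information needed to report them never reaches the client.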


Re: Running system tests on mac

2020-06-30 Thread Colin McCabe
Ducktape runs on Python 2.  You can't use it with Python 3, as you are trying 
to do here.

If anyone's interested in porting it to Python 3 it would be a good change.

Otherwise, using docker as suggested here seems to be the best way to go.

best,
Colin

On Mon, Jun 29, 2020, at 02:14, Gokul Ramanan Subramanian wrote:
> Hi.
> 
> Has anyone had luck running Kafka system tests on a Mac? I have a macOS
> Mojave 10.14.6. I got Python 3.6.9 using pyenv. However, the command
> *ducktape tests/kafkatest/tests* yields the following error, making it look
> like some Python incompatibility issue.
> 
> $ ducktape tests/kafkatest/tests
> Traceback (most recent call last):
>   File "/Users/gokusubr/.pyenv/versions/3.6.9/bin/ducktape", line 11, in
> 
> load_entry_point('ducktape', 'console_scripts', 'ducktape')()
>   File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> line 487, in load_entry_point
> return get_distribution(dist).load_entry_point(group, name)
>   File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> line 2728, in load_entry_point
> return ep.load()
>   File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> line 2346, in load
> return self.resolve()
>   File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> line 2352, in resolve
> module = __import__(self.module_name, fromlist=['__name__'], 
> level=0)
>   File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/ducktape-0.7.6-py3.6.egg/ducktape/command_line/main.py",
> line 127
> print "parameters are not valid json: " + str(e.message)
>   ^
> SyntaxError: invalid syntax
> 
> I followed the instructions in tests/README.md to set up a cluster of 9
> worker machines. That worked well. When I ran *python setup.py develop* to
> install the necessary dependencies (including ducktape), I got similar
> errors to above, but the overall command completed successfully.
> 
> Any help appreciated.
> 
> Thanks.
>
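The traceback pinpoints the incompatibility Colin mentions: line 127 of ducktape's `command_line/main.py` uses Python 2's print *statement*, which is a hard syntax error under Python 3. A quick way to see this, using the failing line as copied from the traceback above:

```python
# The line from ducktape-0.7.6's command_line/main.py, as quoted in the traceback:
py2_line = 'print "parameters are not valid json: " + str(e.message)'

try:
    compile(py2_line, "<main.py>", "exec")
    parses_under_py3 = True
except SyntaxError:
    parses_under_py3 = False   # Python 3 rejects the print statement outright

# The Python-3-compatible form uses the print() function:
py3_line = 'print("parameters are not valid json: " + str(e))'
compile(py3_line, "<main.py>", "exec")   # compiles cleanly
```

Note that `e.message` also went away in Python 3; `str(e)` is the portable spelling. Porting ducktape would mean fixing every such statement, which is why running it under a Python 2 interpreter (or in the provided Docker images) is the practical route here.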


Re: Running system tests on mac

2020-06-30 Thread Gokul Ramanan Subramanian
Thanks Colin.

While on the subject of system tests, I sometimes see tests time out (even on
a large machine such as an m5.4xlarge EC2 instance running Linux). Are
there any knobs that system tests provide to control timeouts / throughputs
across all tests?
Thanks.

On Tue, Jun 30, 2020 at 6:32 PM Colin McCabe  wrote:

> Ducktape runs on Python 2.  You can't use it with Python 3, as you are
> trying to do here.
>
> If anyone's interested in porting it to Python 3 it would be a good change.
>
> Otherwise, using docker as suggested here seems to be the best way to go.
>
> best,
> Colin
>
> On Mon, Jun 29, 2020, at 02:14, Gokul Ramanan Subramanian wrote:
> > Hi.
> >
> > Has anyone had luck running Kafka system tests on a Mac? I have a macOS
> > Mojave 10.14.6. I got Python 3.6.9 using pyenv. However, the command
> > *ducktape tests/kafkatest/tests* yields the following error, making it
> look
> > like some Python incompatibility issue.
> >
> > $ ducktape tests/kafkatest/tests
> > Traceback (most recent call last):
> >   File "/Users/gokusubr/.pyenv/versions/3.6.9/bin/ducktape", line 11, in
> > 
> > load_entry_point('ducktape', 'console_scripts', 'ducktape')()
> >   File
> >
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> > line 487, in load_entry_point
> > return get_distribution(dist).load_entry_point(group, name)
> >   File
> >
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> > line 2728, in load_entry_point
> > return ep.load()
> >   File
> >
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> > line 2346, in load
> > return self.resolve()
> >   File
> >
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> > line 2352, in resolve
> > module = __import__(self.module_name, fromlist=['__name__'],
> > level=0)
> >   File
> >
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/ducktape-0.7.6-py3.6.egg/ducktape/command_line/main.py",
> > line 127
> > print "parameters are not valid json: " + str(e.message)
> >   ^
> > SyntaxError: invalid syntax
> >
> > I followed the instructions in tests/README.md to set up a cluster of 9
> > worker machines. That worked well. When I ran *python setup.py develop*
> to
> > install the necessary dependencies (including ducktape), I got similar
> > errors to above, but the overall command completed successfully.
> >
> > Any help appreciated.
> >
> > Thanks.
> >
>


Permission to create a KIP

2020-06-30 Thread Jeremy Custenborder
Hello All,

Could someone grant permissions to create a KIP to the user jcustenborder?

Thanks!


Permission to create a KIP

2020-06-30 Thread Mohamed Chebbi

Hi

Could someone grant permission to create a KIP to user mhmdchebbi?


Best Regards



[jira] [Resolved] (KAFKA-4996) Fix findbugs multithreaded correctness warnings for streams

2020-06-30 Thread Leah Thomas (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leah Thomas resolved KAFKA-4996.

Resolution: Fixed

> Fix findbugs multithreaded correctness warnings for streams
> ---
>
> Key: KAFKA-4996
> URL: https://issues.apache.org/jira/browse/KAFKA-4996
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Colin McCabe
>Assignee: Leah Thomas
>Priority: Major
>  Labels: newbie
>
> Fix findbugs multithreaded correctness warnings for streams
> {code}
> Multithreaded correctness Warnings
>
> Code  Warning
> AT    Sequence of calls to java.util.concurrent.ConcurrentHashMap may not be
>       atomic in
>       org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(long, ProcessorContext)
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.KafkaStreams.stateListener; locked 66% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.processor.internals.StreamThread.stateListener; locked 66% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.processor.TopologyBuilder.applicationId; locked 50% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.state.internals.CachingKeyValueStore.context; locked 66% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.state.internals.CachingWindowStore.cache; locked 60% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.state.internals.CachingWindowStore.context; locked 66% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.state.internals.CachingWindowStore.name; locked 60% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.state.internals.CachingWindowStore.serdes; locked 70% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.state.internals.RocksDBStore.db; locked 63% of time
> IS    Inconsistent synchronization of
>       org.apache.kafka.streams.state.internals.RocksDBStore.serdes; locked 76% of time
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
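The AT warning above describes a classic check-then-act race: `getOrCreateSegment` checks the ConcurrentHashMap and then inserts in a separate call, so two threads can both pass the check and both construct a segment. The sketch below illustrates that pattern in standalone Python (hypothetical names, not the Kafka code; in Java the usual fix is `ConcurrentHashMap.computeIfAbsent`, which makes the lookup-and-create a single atomic step). A barrier forces the unlucky interleaving deterministically:

```python
import threading

segments = {}
creations = []                        # records every segment construction
barrier = threading.Barrier(2)        # forces both threads past the check
lock = threading.Lock()

def get_or_create_racy(segment_id):
    if segment_id not in segments:    # check ...
        barrier.wait()                # both threads pass the check first
        creations.append(segment_id)  # ... then act: both construct a segment
        segments[segment_id] = object()

threads = [threading.Thread(target=get_or_create_racy, args=(42,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
# creations == [42, 42]: the segment was built twice for a single key.

def get_or_create_safe(segment_id, store, log):
    with lock:                        # check and act under one lock, so the
        if segment_id not in store:   # construction can only ever run once
            log.append(segment_id)
            store[segment_id] = object()

safe_store, safe_log = {}, []
threads = [threading.Thread(target=get_or_create_safe, args=(42, safe_store, safe_log))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
# safe_log == [42]: only one construction.
```

The IS ("inconsistent synchronization") warnings are the related problem: a field accessed sometimes with a lock and sometimes without, so readers may observe stale or partially constructed state.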


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-06-30 Thread Colin McCabe
Hi Rajini,

OK.  Let's remove the encrypted credentials from ListScramUsersResponse and the 
associated API.  I have updated the KIP; take a look when you get a chance.

best,
Colin


On Fri, May 15, 2020, at 06:54, Rajini Sivaram wrote:
> Hi Colin,
> 
> We have used different approaches for kafka-configs using ZooKeeper and
> using brokers until now. This is based on the fact that whatever you can
> access using kafka-configs with ZooKeeper, you can also access directly
> using ZooKeeper shell. For example, you can retrieve any config stored in
> ZooKeeper including sensitive configs. They are encrypted, so you will need
> the secret for decoding it, but you can see all encrypted values. Similarly
> for SCRAM credentials, you can retrieve the encoded credentials. We allow
> this because if you have physical access to ZK, you could have obtained it
> from ZK anyway. Our recommendation is to use ZK for SCRAM only if ZK is
> secure.
> 
> With AdminClient, we have been more conservative because we are now giving
> access over the network. You cannot retrieve any sensitive broker configs,
> even in encrypted form. I think it would be better to follow the same model
> for SCRAM credentials. It is not easy to decode the encoded SCRAM
> credentials, but it is not impossible. In particular, it is prone to
> dictionary attacks. I think the only information we need to return from
> `listScramUsers` is the SCRAM mechanism that is supported for that user.
> 
> Regards,
> 
> Rajini
> 
> 
> On Fri, May 15, 2020 at 9:25 AM Tom Bentley  wrote:
> 
> > Hi Colin,
> >
> > The AdminClient should do the hashing, right?  I don't see any advantage to
> > > doing it externally.
> >
> >
> > I'm happy so long as the AdminClient interface doesn't require users to do
> > the hashing themselves.
> >
> > I do think we should support setting the salt explicitly, but really only
> > > for testing purposes.  Normally, it should be randomized.
> > >
> >
> > >
> > > I also wonder a little about consistency with the other APIs which have
> > > > separate create/alter/delete methods. I imagine you considered exposing
> > > > separate methods in the Java API,  implementing them using the same
> > RPC,
> > > > but can you share your rationale?
> > >
> > > I wanted this to match up with the command-line API, which doesn't
> > > distinguish between create and alter.
> > >
> >
> > OK, makes sense.
> >
> > Cheers,
> >
> > Tom
> >
>
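Rajini's concern about dictionary attacks is concrete: a SCRAM user record stores the salt, the iteration count, and the StoredKey, and anyone who can read that record can test candidate passwords offline with no broker interaction. The standalone sketch below follows RFC 5802's derivation to show why (the parameter values are made up for illustration; this is not Kafka code):

```python
import hashlib
import hmac

def stored_key(password, salt, iterations):
    # RFC 5802: SaltedPassword = Hi(password, salt, i), which is PBKDF2-HMAC
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    return hashlib.sha256(client_key).digest()      # StoredKey = H(ClientKey)

# What a SCRAM credential record persists (illustrative values):
salt, iterations = bytes(16), 4096
record = stored_key("s3cret", salt, iterations)

# Offline dictionary attack: replay the derivation for each candidate.
def crack(record, salt, iterations, wordlist):
    for candidate in wordlist:
        if stored_key(candidate, salt, iterations) == record:
            return candidate
    return None

recovered = crack(record, salt, iterations, ["password", "letmein", "s3cret"])
# recovered == "s3cret"
```

The iteration count slows each guess but does not prevent the attack, which is the rationale for having `listScramUsers` return only the supported mechanism rather than the encoded credentials.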


[jira] [Reopened] (KAFKA-10166) Excessive TaskCorruptedException seen in testing

2020-06-30 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman reopened KAFKA-10166:
-
  Assignee: (was: Bruno Cadonna)

Found two edge cases we missed earlier so I'm reopening this blocker; the fixes 
are minor so I'll have a PR ready shortly cc [~rhauch]

> Excessive TaskCorruptedException seen in testing
> 
>
> Key: KAFKA-10166
> URL: https://issues.apache.org/jira/browse/KAFKA-10166
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 2.6.0
>
>
> As the title indicates, long-running test applications with injected network 
> "outages" seem to hit TaskCorruptedException more than expected.
> Seen occasionally on the ALOS application (~20 times in two days in one case, 
> for example), and very frequently with EOS (many times per day)





Build failed in Jenkins: kafka-trunk-jdk11 #1608

2020-06-30 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix typo in ssl.client.auth config doc description (#8956)

[github] MINOR: Update AlterConfigsOptions Javadoc (#8958)

[github] KAFKA-10200: Fix testability of PAPI with windowed stores (#8927)

[github] KAFKA-4996: Fix findbugs multithreaded correctness warnings for streams

[github] MINOR: Do not swallow exception when collecting PIDs (#8914)


--
[...truncated 3.19 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED


Re: 回复: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for multiple topics, minimize the number of requests to server

2020-06-30 Thread wang120445...@sina.com
Maybe it just works like RBAC's "show tables;"



wang120445...@sina.com
 
From: Hu Xi
Sent: 2020-06-30 23:04
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for 
multiple topics, minimize the number of requests to server
That's a great KIP for the GetOffsetShell tool. I have a question about the 
multiple-topic lookup situation.
 
In a secured environment, what does the tool output if it has DESCRIBE 
privileges for some topics but not for others?
 

From: Dániel Urbán 
Sent: 30 June 2020, 22:15
To: dev@kafka.apache.org 
Subject: Re: [DISCUSS] KIP-308: GetOffsetShell: new KafkaConsumer API, support for 
multiple topics, minimize the number of requests to server
 
Hi Manikumar,
Thanks, went ahead and assigned a new ID, it is KIP-635 now:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-635%3A+GetOffsetShell%3A+support+for+multiple+topics+and+consumer+configuration+override
Daniel
 
Manikumar wrote (on Tue, 30 Jun 2020, 16:03):
 
> Hi,
>
> Yes, we can assign new id to this KIP.
>
> Thanks.
>
> On Tue, Jun 30, 2020 at 6:59 PM Dániel Urbán 
> wrote:
>
> > Hi,
> >
> > To help with the discussion, I also have a PR for this KIP now,
> reflecting
> > the current state of the KIP: https://github.com/apache/kafka/pull/8957.
> > I would like to ask a committer to start the test job on it.
> >
> > One thing I realised though is that there is a KIP id collision, there is
> > another KIP with the same id:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993
> > What is the protocol in this case? Should I acquire a new id for the
> > GetOffsetShell KIP, and update it?
> >
> > Thanks in advance,
> > Daniel
> >
> > Dániel Urbán wrote (on Tue, 30 Jun 2020, 9:23):
> >
> > > Hi Manikumar,
> > >
> > > Thanks for the comments.
> > > 1. Will change this - thought that "command-config" is used for admin
> > > clients.
> > > 2. It's not necessary, just felt like a nice quality-of-life feature -
> > will
> > > remove it.
> > >
> > > Thanks,
> > > Daniel
> > >
> > > On Tue, Jun 30, 2020 at 4:16 AM Manikumar 
> > > wrote:
> > >
> > > > Hi Daniel,
> > > >
> > > > Thanks for working on this KIP.  Proposed changes look good to me.
> > > >
> > > > minor comments:
> > > > 1. We use "command-config" option name in most of the cmdline tools
> to
> > > pass
> > > > config
> > > > properties file. We can use the same name here.
> > > >
> > > > 2. Not sure if we need a separate option to pass a consumer
> property.
> > > > fewer options are better.
> > > >
> > > > Thanks,
> > > > Manikumar
> > > >
> > > > On Wed, Jun 24, 2020 at 8:53 PM Dániel Urbán 
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I see that this KIP turned somewhat inactive - I'd like to pick it
> up
> > > and
> > > > > work on it if it is okay.
> > > > > Part of the work is done, as switching to the Consumer API is
> already
> > > in
> > > > > trunk, but some functionality is still missing.
> > > > >
> > > > > I've seen the current PR and the discussion so far, only have a few
> > > > things
> > > > > to add:
> > > > > - I like the idea of the topic-partition argument, it would be
> useful
> > > to
> > > > > filter down to specific partitions.
> > > > > - Instead of a topic list arg, a pattern would be more powerful,
> and
> > > also
> > > > > fit better with the other tools (e.g. how the kafka-topics tool
> > works).
> > > > >
> > > > > Regards,
> > > > > Daniel
> > > > >
> > > >
> > >
> >
>


Kafka Exactly-Once Semantics in .NET support

2020-06-30 Thread Saher Ahwal
Hi

I am working on exactly-once semantics with Kafka and I have a streaming
scenario of read-process-write. I noticed the new exactly-once scalability
design with correctness in case of partition reassignment here:
https://docs.google.com/document/d/1LhzHGeX7_Lay4xvrEXxfciuDWATjpUXQhrEIkph9qRE/edit#

It all looks great and I understand we need to rely on
PendingTransactionException and ProducerFencedException. However, we are
using the Confluent .NET library, and I don't see any of these exceptions.

How do I ensure correctness of fencing of zombie producer transactions when
using the Confluent .NET library? What exceptions are retryable and which are
not? Can I call abortTransaction on any Kafka exception? I don't find good
examples in the documentation.
Any pointers or answers are kindly appreciated.

Thanks in advance
Saher



-- 
*Saher Ahwal*

*Massachusetts Institute of Technology '13, '14*
*Department of Electrical Engineering and Computer Science*
*sa...@alum.mit.edu  | 617 680 4877*
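Independent of the .NET binding, the Java client documents the decision structure for a transactional read-process-write loop: on an abortable error, abort the transaction and retry; on a fatal error (producer fenced, out-of-order sequence, authorization failure), the producer cannot continue and must be closed and re-created. The sketch below illustrates only that decision structure, using stand-in exception classes; the real exception types, and whatever the Confluent .NET library surfaces in their place, must come from the client library itself:

```python
# Stand-in exception types; in the Java client the fatal ones would be
# ProducerFencedException, OutOfOrderSequenceException, AuthorizationException.
class ProducerFenced(Exception): ...
class OutOfOrderSequence(Exception): ...
class AuthorizationFailed(Exception): ...
class TransientKafkaError(Exception): ...   # e.g. a retriable timeout

FATAL = (ProducerFenced, OutOfOrderSequence, AuthorizationFailed)

def handle_transaction_error(exc):
    """Return the action a transactional producer should take on an error."""
    if isinstance(exc, FATAL):
        return "close-and-reinitialize"      # cannot abort: the producer is fenced/dead
    return "abort-and-retry"                 # safe to abortTransaction and retry

# A fenced zombie producer must not keep retrying; a transient error may:
assert handle_transaction_error(ProducerFenced()) == "close-and-reinitialize"
assert handle_transaction_error(TransientKafkaError()) == "abort-and-retry"
```

So "abort on any exception" is not quite safe: aborting is only meaningful for the non-fatal class of errors, and calling it after a fatal error will itself fail.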


Re: [DISCUSS] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2020-06-30 Thread William Bottrell
Thanks, John! I made the change. How much longer should I let there be
discussion before starting a VOTE?

On Sat, Jun 27, 2020 at 6:50 AM John Roesler  wrote:

> Thanks, Will,
>
> That looks good to me. I would only add "cached" or something
> to indicate that it wouldn't just transparently look up the current
> System.currentTimeMillis every time.
>
> For example:
> /**
>  * Returns current cached wall-clock system timestamp in milliseconds.
>  *
>  * @return the current cached wall-clock system timestamp in milliseconds
>  */
> long currentSystemTimeMs();
>
> I don't want to give specific information about _when_ exactly the
> timestamp cache will be updated, so that we can adjust it in the
> future, but it does seem important to make people aware that they
> won't see the timestamp advance during the execution of
> Processor.process(), for example.
>
> With that modification, I'll be +1 on this proposal.
>
> Thanks again for the KIP!
> -John
>
> On Thu, Jun 25, 2020, at 02:32, William Bottrell wrote:
> > Thanks, John! I appreciate you adjusting my lingo. I made the change to
> the
> > KIP. I will add the note about system time to the javadoc.
> >
> > On Wed, Jun 24, 2020 at 6:52 PM John Roesler 
> wrote:
> >
> > > Hi Will,
> > >
> > > This proposal looks good to me overall. Thanks for the contribution!
> > >
> > > Just a couple of minor notes:
> > >
> > > The system time method would return a cached timestamp that Streams
> looks
> > > up once when it starts processing a record. This may be confusing, so
> it
> > > might be good to state it in the javadoc.
> > >
> > > I thought the javadoc for the stream time might be a bit confusing. We
> > > normally talk about “Tasks” not “partition groups” in the public api.
> Maybe
> > > just saying that it’s “the maximum timestamp of any record yet
> processed by
> > > the task” would be both high level and accurate.
> > >
> > > Thanks again!
> > > -John
> > >
> > > On Mon, Jun 22, 2020, at 02:10, William Bottrell wrote:
> > > > Thanks, Bruno. I updated the KIP, so hopefully it makes more sense.
> > > Thanks
> > > > to Matthias J. Sax and Piotr Smolinski for helping with details.
> > > >
> > > > I welcome more feedback. Let me know if something doesn't make sense
> or I
> > > > need to provide more detail. Also, feel free to enlighten me. Thanks!
> > > >
> > > > On Thu, Jun 11, 2020 at 1:11 PM Bruno Cadonna 
> > > wrote:
> > > >
> > > > > Hi Will,
> > > > >
> > > > > Thank you for the KIP.
> > > > >
> > > > > 1. Could you elaborate a bit more on the motivation in the KIP? An
> > > > > example would make the motivation clearer.
> > > > >
> > > > > 2. In section "Proposed Changes" you do not need to show the
> > > > > implementation and describe internals. A description of the
> expected
> > > > > behavior of the newly added methods should suffice.
> > > > >
> > > > > 3. In "Compatibility, Deprecation, and Migration Plan" you should
> > > > > state that the change is backward compatible because the two
> methods
> > > > > will be added and no other method will be changed or removed.
> > > > >
> > > > > Best,
> > > > > Bruno
> > > > >
> > > > > On Wed, Jun 10, 2020 at 10:06 AM William Bottrell <
> bottre...@gmail.com
> > > >
> > > > > wrote:
> > > > > >
> > > > > > Add currentSystemTimeMs and currentStreamTimeMs to
> ProcessorContext
> > > > > > <
> > > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-622%3A+Add+currentSystemTimeMs+and+currentStreamTimeMs+to+ProcessorContext
> > > > > >
> > > > > >
> > > > > > I am extremely new to Kafka, but thank you to John Roesler and
> > > Matthias
> > > > > J.
> > > > > > Sax for pointing me in the right direction. I accept any and all
> > > > > feedback.
> > > > > >
> > > > > > Thanks,
> > > > > > Will
> > > > >
> > > >
> > >
> >
>
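
The caching behavior John describes above (one wall-clock lookup when record
processing starts, no advancement during Processor.process()) can be sketched
as follows. This is a hand-rolled illustration of the contract, not Streams'
actual implementation, and the method names simply mirror the KIP:

```java
// Illustration of a cached wall-clock: the timestamp is looked up once
// when record processing starts and stays fixed until the next record,
// so repeated calls during process() return the same value.
public class CachedTime {
    private long cachedSystemTimeMs;

    // What the runtime would do before each record is processed.
    void onRecordStart() {
        cachedSystemTimeMs = System.currentTimeMillis();
    }

    // What currentSystemTimeMs() would expose to the processor.
    long currentSystemTimeMs() {
        return cachedSystemTimeMs;
    }

    public static void main(String[] args) throws InterruptedException {
        CachedTime time = new CachedTime();
        time.onRecordStart();
        long first = time.currentSystemTimeMs();
        Thread.sleep(5);  // wall-clock advances...
        // ...but the cached value does not, until the next record starts.
        assert time.currentSystemTimeMs() == first;
        time.onRecordStart();
        assert time.currentSystemTimeMs() >= first;
        System.out.println("ok");
    }
}
```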


Re: [VOTE] KIP-623: Add "internal-topics" option to streams application reset tool

2020-06-30 Thread Boyang Chen
Hey Bruno,

I agree adding a prompt would be a nice precaution, but as you noted it is
not backward compatible, and it could make automation harder to achieve.

If you want, we could start a separate ticket to discuss whether to add a
prompt making users aware of the topics that are about to be deleted.
However, that would also invert the assumptions we made about `--dry-run`
mode, which would become redundant once a prompt asks users whether they
want to remove these topics completely or do nothing at all.

Back to KIP-623, I think that discussion could be held separately, since it
applies to more general concerns such as reducing human error.

Boyang
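
The `--dry-run` contract under discussion can be modeled minimally as
follows. This is an illustrative sketch, not the reset tool's actual code:
with the flag set, the tool reports what it would delete and deletes nothing.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the reset tool's --dry-run contract: the candidate list
// is always computed and reported, but deletion only happens when
// dryRun is false. Method and variable names are illustrative.
public class DryRunModel {

    static List<String> reset(List<String> internalTopics, boolean dryRun,
                              List<String> cluster) {
        List<String> candidates = new ArrayList<>(internalTopics);
        if (!dryRun) {
            cluster.removeAll(candidates);  // actually delete
        }
        return candidates;  // reported to the user in both modes
    }

    public static void main(String[] args) {
        List<String> cluster = new ArrayList<>(List.of("app-repartition", "other"));
        // Dry run: candidates are reported, nothing is deleted.
        List<String> reported = reset(List.of("app-repartition"), true, cluster);
        assert reported.contains("app-repartition");
        assert cluster.contains("app-repartition");
        // Real run: the topic is deleted.
        reset(List.of("app-repartition"), false, cluster);
        assert !cluster.contains("app-repartition");
        System.out.println("ok");
    }
}
```

This is why a dry run first, then a real run with the vetted list, covers
much of what an interactive prompt would.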

On Tue, Jun 30, 2020 at 12:55 AM Bruno Cadonna  wrote:

> Hi,
>
> I have already brought this up in the discussion thread.
>
> Should we not run a dry-run in any case to avoid inadvertently
> deleting topics of other applications?
>
> I know it is a backward incompatible change if users use it in
> scripts, but I think it is still worth discussing. I would like to hear
> what committers think about it.
>
> Best,
> Bruno
>
> On Tue, Jun 30, 2020 at 5:33 AM Boyang Chen 
> wrote:
> >
> > Thanks John for the great suggestion. +1 for enforcing the prefix check
> for
> > the `--internal-topics` list.
> >
> > On Mon, Jun 29, 2020 at 5:11 PM John Roesler 
> wrote:
> >
> > > Oh, I meant to say, if that’s the case, then I’m +1 (binding) also :)
> > >
> > > Thanks again,
> > > John
> > >
> > > On Mon, Jun 29, 2020, at 19:09, John Roesler wrote:
> > > > Thanks for the KIP, Joel!
> > > >
> > > > It seems like a good pattern to document would be to first run with
> > > > --dry-run and without --internal-topics to list all potential topics,
> > > > and then to use --internal-topics if needed to limit the internal
> > > > topics to delete.
> > > >
> > > > Just to make sure, would we have a sanity check to forbid including
> > > > arbitrary topics? I.e., it seems like --internal-topics should require
> > > > all the topics to be prefixed with the app id. Is that right?
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Mon, Jun 29, 2020, at 18:25, Guozhang Wang wrote:
> > > > > Thanks Joel, the KIP lgtm.
> > > > >
> > > > > A minor suggestion is to explain where users can get the list of
> > > internal
> > > > > topics of a given application, and maybe also add it as part of the
> > > helper
> > > > > scripts, for example via topology description.
> > > > >
> > > > > Overall, I'm +1 as well (binding).
> > > > >
> > > > > Guozhang
> > > > >
> > > > >
> > > > > On Sat, Jun 27, 2020 at 4:33 AM Joel Wee 
> wrote:
> > > > >
> > > > > > Thanks Boyang, I think what you’ve said makes sense. I’ve made
> the
> > > > > > motivation clearer now:
> > > > > >
> > > > > >
> > > > > > Users may want to specify which internal topics should be
> deleted. At
> > > > > > present, the streams reset tool deletes all topics that start
> > > > > > with "<application.id>-" and there are no options to
> > > > > > control it.
> > > > > >
> > > > > > The `--internal-topics` option is especially useful when there are
> > > > > > prefix conflicts between applications, e.g. "app" and "app-v2". In
> > > > > > this case, if we want to reset "app", the reset tool’s default
> > > > > > behaviour will delete both the internal topics of "app" and
> > > > > > "app-v2" (since both are prefixed by "app-"). With the
> > > > > > `--internal-topics` option, we can provide internal topic names for
> > > > > > "app" and delete the internal topics for "app" without deleting the
> > > > > > internal topics for "app-v2".
> > > > > >
> > > > > > Best
> > > > > >
> > > > > > Joel
> > > > > >
> > > > > > On 27 Jun 2020, at 2:07 AM, Boyang Chen
> > > > > >  wrote:
> > > > > >
> > > > > > Thanks for driving the proposal Joel, I have a minor
> suggestion:  we
> > > should
> > > > > > be more clear about why we introduce this flag, so it would be
> > > better to
> > > > > > also state clearly in the document for the default behavior as
> well,
> > > such
> > > > > > like:
> > > > > >
> > > > > > Comma-separated list of internal topics to be deleted. By
> default,
> > > > > > Streams reset tool will delete all topics prefixed by the
> > > > > > application.id.
> > > > > >
> > > > > > This flag is useful when you need to keep certain topics intact
> due
> > > to
> > > > > > the prefix conflict with another application (such like "app" vs
> > > > > > "app-v2").
> > > > > >
> > > > > > With provided internal topic names for "app", the reset tool will
> > > only
> > > > > > delete internal topics associated with "app", instead of both
> "app"
> > > > > > and "app-v2".
> > > > > >
> > > > > >
> > > > > > Other than that, +1 from me (binding).
> > > > > >
> > > > > > On Wed, Jun 24, 2020 at 1:19 PM Joel Wee  wrote:
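
The prefix conflict in Joel's motivation, and the app-id prefix check John
suggests for `--internal-topics`, can be sketched together. This is
illustrative code, not the reset tool's implementation:

```java
import java.util.List;
import java.util.stream.Collectors;

// Why a bare startsWith("<application.id>-") over-matches, and the
// sanity check that every explicitly listed internal topic must carry
// the application's prefix. Names are illustrative.
public class PrefixCheck {

    // Default behavior: everything prefixed "<application.id>-" matches,
    // which also captures topics of an application named "app-v2".
    static List<String> defaultMatches(String appId, List<String> topics) {
        return topics.stream()
                .filter(t -> t.startsWith(appId + "-"))
                .collect(Collectors.toList());
    }

    // Proposed sanity check for --internal-topics: reject topics that do
    // not belong to this application's namespace.
    static void validate(String appId, List<String> internalTopics) {
        for (String t : internalTopics) {
            if (!t.startsWith(appId + "-")) {
                throw new IllegalArgumentException(
                        "Not an internal topic of " + appId + ": " + t);
            }
        }
    }

    public static void main(String[] args) {
        List<String> topics = List.of("app-repartition", "app-v2-repartition");
        // The "app-v2" topic is caught by the "app-" prefix as well:
        assert defaultMatches("app", topics).size() == 2;
        // An explicit --internal-topics list avoids that, while the check
        // still rejects topics outside the app's namespace:
        validate("app", List.of("app-repartition"));  // passes
        boolean rejected = false;
        try {
            validate("app", List.of("unrelated-topic"));
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        assert rejected;
        System.out.println("ok");
    }
}
```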

Re: [DISCUSS] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2020-06-30 Thread Boyang Chen
Thanks Will for the KIP. A couple questions and suggestions:

1. For the new APIs to make the most sense, I think we should add a minimal
example demonstrating how useful they are compared with structuring unit
tests without the new APIs.
2. If this is a testing-only feature, could we only add it
to MockProcessorContext?
3. Regarding the API, since this will be added to the ProcessorContext with
many subclasses, does it make sense to provide default implementations as
well?

Boyang
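
Boyang's third point (default implementations on an interface with many
subclasses) can be illustrated with a minimal sketch. The method names mirror
the KIP, but this is not the proposed Streams code:

```java
// Sketch of a default implementation on a context interface: existing
// subclasses keep compiling unchanged, while a MockProcessorContext-style
// test double overrides the method with a settable clock.
public class DefaultMethodSketch {

    interface Context {
        // A default keeps the many existing subclasses source-compatible.
        default long currentSystemTimeMs() {
            throw new UnsupportedOperationException("not implemented");
        }
    }

    // A test double overrides the default with controllable time.
    static class MockContext implements Context {
        private long timeMs;

        void setCurrentSystemTimeMs(long timeMs) { this.timeMs = timeMs; }

        @Override
        public long currentSystemTimeMs() { return timeMs; }
    }

    public static void main(String[] args) {
        MockContext ctx = new MockContext();
        ctx.setCurrentSystemTimeMs(42L);
        assert ctx.currentSystemTimeMs() == 42L;

        // A subclass that never overrides still compiles; calling the
        // default simply signals that the method is unsupported.
        Context legacy = new Context() {};
        boolean unsupported = false;
        try {
            legacy.currentSystemTimeMs();
        } catch (UnsupportedOperationException e) {
            unsupported = true;
        }
        assert unsupported;
        System.out.println("ok");
    }
}
```

The trade-off is the usual one: a default keeps the change binary and source
compatible, at the cost of a runtime rather than compile-time failure in
implementations that never override it.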

On Tue, Jun 30, 2020 at 6:56 PM William Bottrell 
wrote:

> Thanks, John! I made the change. How much longer should I let there be
> discussion before starting a VOTE?
>
> On Sat, Jun 27, 2020 at 6:50 AM John Roesler  wrote:
>
> > Thanks, Will,
> >
> > That looks good to me. I would only add "cached" or something
> > to indicate that it wouldn't just transparently look up the current
> > System.currentTimeMillis every time.
> >
> > For example:
> > /**
> >  * Returns current cached wall-clock system timestamp in milliseconds.
> >  *
> >  * @return the current cached wall-clock system timestamp in milliseconds
> >  */
> > long currentSystemTimeMs();
> >
> > I don't want to give specific information about _when_ exactly the
> > timestamp cache will be updated, so that we can adjust it in the
> > future, but it does seem important to make people aware that they
> > won't see the timestamp advance during the execution of
> > Processor.process(), for example.
> >
> > With that modification, I'll be +1 on this proposal.
> >
> > Thanks again for the KIP!
> > -John
> >
> > On Thu, Jun 25, 2020, at 02:32, William Bottrell wrote:
> > > Thanks, John! I appreciate you adjusting my lingo. I made the change to
> > the
> > > KIP. I will add the note about system time to the javadoc.
> > >
> > > On Wed, Jun 24, 2020 at 6:52 PM John Roesler 
> > wrote:
> > >
> > > > Hi Will,
> > > >
> > > > This proposal looks good to me overall. Thanks for the contribution!
> > > >
> > > > Just a couple of minor notes:
> > > >
> > > > The system time method would return a cached timestamp that Streams
> > looks
> > > > up once when it starts processing a record. This may be confusing, so
> > it
> > > > might be good to state it in the javadoc.
> > > >
> > > > I thought the javadoc for the stream time might be a bit confusing.
> We
> > > > normally talk about “Tasks” not “partition groups” in the public api.
> > Maybe
> > > > just saying that it’s “the maximum timestamp of any record yet
> > processed by
> > > > the task” would be both high level and accurate.
> > > >
> > > > Thanks again!
> > > > -John
> > > >
> > > > On Mon, Jun 22, 2020, at 02:10, William Bottrell wrote:
> > > > > Thanks, Bruno. I updated the KIP, so hopefully it makes more sense.
> > > > Thanks
> > > > > to Matthias J. Sax and Piotr Smolinski for helping with details.
> > > > >
> > > > > I welcome more feedback. Let me know if something doesn't make
> sense
> > or I
> > > > > need to provide more detail. Also, feel free to enlighten me.
> Thanks!
> > > > >
> > > > > On Thu, Jun 11, 2020 at 1:11 PM Bruno Cadonna 
> > > > wrote:
> > > > >
> > > > > > Hi Will,
> > > > > >
> > > > > > Thank you for the KIP.
> > > > > >
> > > > > > 1. Could you elaborate a bit more on the motivation in the KIP?
> An
> > > > > > example would make the motivation clearer.
> > > > > >
> > > > > > 2. In section "Proposed Changes" you do not need to show the
> > > > > > implementation and describe internals. A description of the
> > expected
> > > > > > behavior of the newly added methods should suffice.
> > > > > >
> > > > > > 3. In "Compatibility, Deprecation, and Migration Plan" you should
> > > > > > state that the change is backward compatible because the two
> > methods
> > > > > > will be added and no other method will be changed or removed.
> > > > > >
> > > > > > Best,
> > > > > > Bruno
> > > > > >
> > > > > > On Wed, Jun 10, 2020 at 10:06 AM William Bottrell <
> > bottre...@gmail.com
> > > > >
> > > > > > wrote:
> > > > > > >
> > > > > > > Add currentSystemTimeMs and currentStreamTimeMs to
> > ProcessorContext
> > > > > > > <
> > > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-622%3A+Add+currentSystemTimeMs+and+currentStreamTimeMs+to+ProcessorContext
> > > > > > >
> > > > > > >
> > > > > > > I am extremely new to Kafka, but thank you to John Roesler and
> > > > Matthias
> > > > > > J.
> > > > > > > Sax for pointing me in the right direction. I accept any and
> all
> > > > > > feedback.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Will
> > > > > >
> > > > >
> > > >
> > >
> >
>