Jenkins build is back to normal : kafka-trunk-jdk11 #1488

2020-05-21 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #4559

2020-05-21 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-05-21 Thread Sophie Blee-Goldman
Hm, the case of `all()` does seem to present a dilemma in the case of
variable-length keys.

In the case of fixed-length keys, you can just compute the keys that
correspond to the maximum and minimum serialized bytes, then perform a
`range()` query instead of an `all()`. If your keys don't have a
well-defined ordering, such that you can't determine the MAX_KEY, then you
probably don't care about the iterator order anyway.

But with variable-length keys, there is no MAX_KEY. If all your keys were
just of the form 'a', 'aa', 'aaa', 'aaaa', ... then in fact the only way to
figure out the maximum key in the store is by using `all()` -- and without
a reverse iterator, you're doomed to iterate through every single key just
to answer that simple question.

That said, I still think determining the iterator order based on the
to/from bytes makes a lot of intuitive sense and gives the API a nice
symmetry. What if we solved the `all()` problem by just giving `all()` a
reverse form to complement it? I.e. we would have `all()` and
`reverseAll()`, or something to that effect.
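
(For illustration, a rough sketch of the two shapes discussed here --
`reverseAll()` is only a proposal, and the bound-swapping semantics of
`range()` are what KIP-617 proposes, not current behavior:)

    // Fixed-length keys: under the proposed "reverse parameters" semantics,
    // swapping the bounds would iterate the whole keyspace backwards.
    // minKey/maxKey must be derived from the serde's serialized byte order.
    KeyValueIterator<String, Long> backwards = store.range(maxKey, minKey);

    // Variable-length keys: the explicit counterpart proposed above.
    KeyValueIterator<String, Long> all = store.reverseAll();  // proposed API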

On Thu, May 21, 2020 at 3:41 PM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Thanks John.
>
> Agree. I like the first approach as well, with StreamsConfig flag passing
> by via ProcessorContext.
>
> Another positive effect with "reverse parameters" is that in the case of
> `fetch(keyFrom, keyTo, timeFrom, timeTo)` users can decide _which_ pair to
> flip, whereas with a `ReadDirection` enum it applies to both.
>
> The only issue I've found while reviewing the KIP is that `all()` won't fit
> within this approach.
>
> We could remove it from the KIP and argue that for WindowStore,
> `fetchAll(0, Long.MAX_VALUE)` can be used to get all in reverse order, and
> for KeyValueStore, no ordering guarantees are provided.
>
> If there is consensus on these changes, I will go ahead and update the KIP.
>
> On Thu, May 21, 2020 at 3:33 PM John Roesler  wrote:
>
> > Hi Jorge,
> >
> > Thanks for that idea. I agree, a feature flag would protect anyone
> > who may be depending on the current behavior.
> >
> > It seems better to locate the feature flag in the initialization logic of
> > the store, rather than have a method on the "live" store that changes
> > its behavior on the fly.
> >
> > It seems like there are two options here, one is to add a new config:
> >
> > StreamsConfig.ENABLE_BACKWARDS_ITERATION =
> >   "enable.backwards.iteration
> >
> > Or we can add a feature flag in Materialized, like
> >
> > Materialized.enableBackwardsIteration()
> >
> > I think I'd personally lean toward the config, for the following reason.
> > The concern that Sophie raised is that someone's program may depend
> > on the existing contract of getting an empty iterator. We don't want to
> > switch behavior when they aren't expecting it, so we provide them a
> > config to assert that they _are_ expecting the new behavior, which
> > means they take responsibility for updating their code to expect the new
> > behavior.
> >
> > There doesn't seem to be a reason to offer a choice of behaviors on a
> > per-query, or per-store basis. We just want people to be not surprised
> > by this change in general.
> >
> > What do you think?
> > Thanks,
> > -John
> >
> > On Wed, May 20, 2020, at 17:37, Jorge Quilcate wrote:
> > > Thank you both for the great feedback.
> > >
> > > I like the "fancy" proposal :), and how it removes the need for
> > > additional API methods. And with a feature flag on `StateStore`,
> > > disabled by default, it should not break current users.
> > >
> > > The only side-effect I can think of is that by moving the flag upwards,
> > > all later operations become affected, which might be ok for most (all?)
> > > cases. I can't think of a scenario where this would be an issue, just
> > > want to point this out.
> > >
> > > If moving to this approach, I'd like to check if I got this right before
> > > updating the KIP:
> > >
> > > - only `StateStore` will change by having a new method:
> > > `backwardIteration()`, `false` by default to keep things compatible.
> > > - then all `*Stores` will have to update their implementation based on
> > > this flag.
> > >
> > >
> > > On 20/05/2020 21:02, Sophie Blee-Goldman wrote:
> > > >> There's no possibility that someone could be relying
> > > >> on iterating over that range in increasing order, because that's not
> > > >> what happens. However, they could indeed be relying on getting an
> > > >> empty iterator
> > > >
> > > > I just meant that they might be relying on the assumption that the
> > > > range query will never return results with decreasing keys. The empty
> > > > iterator wouldn't break that contract, but of course a surprise
> > > > reverse iterator would.
> > > >
> > > > FWIW I actually am in favor of automatically converting to a reverse
> > > > iterator, I just thought we should consider whether this should be
> > > > off by default or even possible to disable at all.
> > > >
> > > 
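
(For readers following along, a rough sketch of the two enablement options
John lists above; both names are proposals from this thread, not part of any
released API:)

    // Option 1 (proposed): a global flag in StreamsConfig, read by stores
    // via the ProcessorContext at init time.
    Properties props = new Properties();
    props.put("enable.backwards.iteration", "true");

    // Option 2 (proposed): a per-store opt-in on Materialized.
    StreamsBuilder builder = new StreamsBuilder();
    builder.table("input-topic",
        Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("store")
            .enableBackwardsIteration());  // hypothetical method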

Build failed in Jenkins: kafka-trunk-jdk14 #116

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9980: Fix bug where alterClientQuotas could not set default 
client


--
[...truncated 3.10 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED


Re: Want to remove the archive

2020-05-21 Thread Luke Chen
Hi Satya,
I think Matthias had replied to your mail on 5/11. Please check it here:
https://lists.apache.org/thread.html/r6bd3cf8f092accdf59c4b5b878d62faaa1a9de074b31ecf4cb7a0b64%40%3Cdev.kafka.apache.org%3E

Thanks.

On Wed, May 20, 2020 at 9:08 PM Satya Kotni  wrote:

> Any update on this?
>
> Thanks & Regards
> Satya Kotni
>
> On Mon, 11 May 2020 at 10:54, Satya Kotni  wrote:
>
> >
> > Hi,
> >> Please help me in removing this from the below archive:
> >>
> >> https://www.mail-archive.com/dev@kafka.apache.org/msg104541.html
> >>
> >> Best Regards
> >> Satya Kotni
> >>
> >
>


[jira] [Resolved] (KAFKA-9942) ConfigCommand fails to set client quotas for default users with --bootstrap-server.

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-9942.
-
Fix Version/s: 2.5.1
   2.6.0
   Resolution: Fixed

> ConfigCommand fails to set client quotas for default users with 
> --bootstrap-server.
> ---
>
> Key: KAFKA-9942
> URL: https://issues.apache.org/jira/browse/KAFKA-9942
> Project: Kafka
>  Issue Type: Bug
>Reporter: Cheng Tan
>Assignee: Cheng Tan
>Priority: Major
> Fix For: 2.6.0, 2.5.1
>
>
> {quote}$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter 
> --add-config producer_byte_rate=10,consumer_byte_rate=10 
> --entity-type clients --entity-default
> {quote}
> This usage of --entity-default with --bootstrap-server for altering
> configs will trigger the exception below. Similarly for --describe:
> {quote}/opt/kafka-dev/bin/kafka-configs.sh --bootstrap-server ducker04:9093 
> --describe --entity-type clients --entity-default --command-config 
> /opt/kafka-dev/bin/hi.properties
> {quote}
>  
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownServerException: Path must not end with 
> / character
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
> at 
> kafka.admin.ConfigCommand$.getAllClientQuotasConfigs(ConfigCommand.scala:501)
> at kafka.admin.ConfigCommand$.getClientQuotasConfig(ConfigCommand.scala:487)
> at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:361)
> at kafka.admin.ConfigCommand$.processCommand(ConfigCommand.scala:292)
> at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:91)
> at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> Caused by: org.apache.kafka.common.errors.UnknownServerException: Path must 
> not end with / character
> {quote}
> However, if the --entity-type is brokers, the alteration works fine (no
> exception thrown, works properly):
> {quote}
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-default --alter --add-config unclean.leader.election.enable=true
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type brokers --entity-default
> {quote}
>  
> Update:
>  
> For --describe:
> Commands work properly:
> {quote}bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type brokers --entity-default
> bin/kafka-configs.sh --zookeeper localhost:2181 --describe --broker-defaults
> {quote}
> Commands do not work:
> {quote}bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type topics --entity-default
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type users --entity-default
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --entity-type clients --entity-default
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --client-defaults
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe 
> --user-defaults
>  
> {quote}
>  
> For --alter:
> Commands work properly:
> {quote}bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 
> max.messages.bytes=128000 --entity-type topics --entity-default (an entity 
> name must be specified with --alter of topics)
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 
> unclean.leader.election.enable=true --entity-type brokers --entity-default
> {quote}
>  
> Commands do not work:
> {quote}bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter 
> --add-config producer_byte_rate=10,consumer_byte_rate=10 
> --entity-type clients --entity-default
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 
> producer_byte_rate=4 --entity-type users --entity-default (No exception 
> thrown but failed to add the config)
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 
> producer_byte_rate=10,consumer_byte_rate=10 --client-defaults
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 
> producer_byte_rate=4 --user-defaults (No exception thrown but failed to 
> add the config)
>  
> {quote}
>  
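
(A plausible reconstruction of the failure mode -- illustrative only, not the
actual Kafka code path: if the default client entity resolves to an empty
string rather than a placeholder, the resulting ZooKeeper config path ends
with a slash, which ZooKeeper rejects:)

    // Hypothetical sketch of the suspected bug.
    String entityName = "";  // default entity lost in translation
    String path = "/config/clients/" + entityName;  // "/config/clients/"
    // ZooKeeper rejects paths ending in '/', surfacing to the client as:
    // UnknownServerException: Path must not end with / character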



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9980) Fix bug where alterClientQuotas could not set default client quotas

2020-05-21 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-9980.
-
Fix Version/s: 2.5.1
   Resolution: Fixed

> Fix bug where alterClientQuotas could not set default client quotas
> ---
>
> Key: KAFKA-9980
> URL: https://issues.apache.org/jira/browse/KAFKA-9980
> Project: Kafka
>  Issue Type: Bug
>Reporter: Cheng Tan
>Assignee: Brian Byrne
>Priority: Major
> Fix For: 2.5.1
>
>
> quota_tests.py is failing. Specifically for this test:
> {quote}
>  [INFO:2020-05-11 19:22:47,493]: RunnerClient: Loading test \{'directory': 
> '/opt/kafka-dev/tests/kafkatest/tests/client', 'file_name': 'quota_test.py', 
> 'method_name': 'test_quota', 'cls_name': 'QuotaTest', 'injected_args': 
> {'quota_type': 'client-id', 'override_quota': False}}
> {quote}
>  
> I log into the docker container and do
>  
> {quote}
>  /opt/kafka-dev/bin/kafka-configs.sh --bootstrap-server ducker03:9093 
> --describe --entity-type clients --command-config 
> /opt/kafka-dev/bin/hi.properties
> {quote}
>  
>  and the command returns
>  
> {quote}Configs for the default client-id are consumer_byte_rate=200.0, 
> producer_byte_rate=250.0
>  Configs for client-id 'overridden_id' are consumer_byte_rate=1.0E9, 
> producer_byte_rate=1.0E9
>  Seems like the config is set properly but the quota is not effective
>   
> {quote}
>  For investigation, I added logging at 
> {quote}{{AdminZKClient.changeConfigs()}}
> {quote}
>  
>  
> {quote}def changeConfigs(entityType: String, entityName: String, configs: Properties): Unit = {
>   warn(s"entityType = $entityType entityName = $entityName configs = $configs")
>   ...
> }
> {quote}
> And used --bootstrap-server and --zookeeper to --alter the default client
> quota. I got:
>  
> {quote}
>  Alter with --zookeeper: WARN entityType = clients entityName = <default>
> configs = \{producer_byte_rate=10, consumer_byte_rate=10}
> (kafka.zk.AdminZkClient)
> {quote}
>
>  and
>
> {quote}
>  Alter with --bootstrap-server: WARN entityType = clients entityName =
> %3Cdefault%3E configs = \{producer_byte_rate=10,
> consumer_byte_rate=10} (kafka.zk.AdminZkClient)
> {quote}
>  
>  I guess the encoding difference might cause the issue. The encoding happens 
> in
>  
> {quote}
>  Sanitizer.sanitize()
> {quote}
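
(To see the encoding difference directly, a minimal check against the class
in question, org.apache.kafka.common.utils.Sanitizer, assuming its
sanitize/desanitize pair behaves as on trunk:)

    import org.apache.kafka.common.utils.Sanitizer;

    public class SanitizerCheck {
        public static void main(String[] args) {
            // Matches the entityName logged on the --bootstrap-server path:
            System.out.println(Sanitizer.sanitize("<default>"));       // %3Cdefault%3E
            // And the reverse mapping back to the literal placeholder:
            System.out.println(Sanitizer.desanitize("%3Cdefault%3E")); // <default>
        }
    }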



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-612: Ability to Limit Connection Creation Rate on Brokers

2020-05-21 Thread Anna Povzner
The vote for KIP-612 has passed with 3 binding and 3 non-binding +1s, and
no objections.


Thanks everyone for reviews and feedback,

Anna

On Tue, May 19, 2020 at 2:41 AM Rajini Sivaram 
wrote:

> +1 (binding)
>
> Thanks for the KIP, Anna!
>
> Regards,
>
> Rajini
>
>
> On Tue, May 19, 2020 at 9:32 AM Alexandre Dupriez <
> alexandre.dupr...@gmail.com> wrote:
>
> > +1 (non-binding)
> >
> > Thank you for the KIP!
> >
> >
> > > On Tue, May 19, 2020 at 07:57, David Jacot wrote:
> > >
> > > +1 (non-binding)
> > >
> > > Thanks for the KIP, Anna!
> > >
> > > On Tue, May 19, 2020 at 7:12 AM Satish Duggana <
> satish.dugg...@gmail.com
> > >
> > > wrote:
> > >
> > > > +1 (non-binding)
> > > > Thanks Anna for the nice feature to control the connection creation
> > rate
> > > > from the clients.
> > > >
> > > > On Tue, May 19, 2020 at 8:16 AM Gwen Shapira 
> > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > Thank you for driving this, Anna
> > > > >
> > > > > On Mon, May 18, 2020 at 4:55 PM Anna Povzner 
> > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > I would like to start the vote on KIP-612: Ability to limit
> > connection
> > > > > > creation rate on brokers.
> > > > > >
> > > > > > For reference, here is the KIP wiki:
> > > > > >
> > > > > >
> > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-612%3A+Ability+to+Limit+Connection+Creation+Rate+on+Brokers
> > > > > >
> > > > > > And discussion thread:
> > > > > >
> > > > > >
> > > > >
> > > >
> >
> https://lists.apache.org/thread.html/r61162661fa307d0bc5c8326818bf223a689c49e1c828c9928ee26969%40%3Cdev.kafka.apache.org%3E
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Anna
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Gwen Shapira
> > > > > Engineering Manager | Confluent
> > > > > 650.450.2760 | @gwenshap
> > > > > Follow us: Twitter | blog
> > > > >
> > > >
> >
>


Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-05-21 Thread Jorge Esteban Quilcate Otoya
Thanks John.

Agree. I like the first approach as well, with StreamsConfig flag passing
by via ProcessorContext.

Another positive effect with "reverse parameters" is that in the case of
`fetch(keyFrom, keyTo, timeFrom, timeTo)` users can decide _which_ pair to
flip, whereas with a `ReadDirection` enum it applies to both.

The only issue I've found while reviewing the KIP is that `all()` won't fit
within this approach.

We could remove it from the KIP and argue that for WindowStore,
`fetchAll(0, Long.MAX_VALUE)` can be used to get all in reverse order, and
for KeyValueStore, no ordering guarantees are provided.

If there is consensus on these changes, I will go ahead and update the KIP.

On Thu, May 21, 2020 at 3:33 PM John Roesler  wrote:

> Hi Jorge,
>
> Thanks for that idea. I agree, a feature flag would protect anyone
> who may be depending on the current behavior.
>
> It seems better to locate the feature flag in the initialization logic of
> the store, rather than have a method on the "live" store that changes
> its behavior on the fly.
>
> It seems like there are two options here, one is to add a new config:
>
> StreamsConfig.ENABLE_BACKWARDS_ITERATION =
>   "enable.backwards.iteration
>
> Or we can add a feature flag in Materialized, like
>
> Materialized.enableBackwardsIteration()
>
> I think I'd personally lean toward the config, for the following reason.
> The concern that Sophie raised is that someone's program may depend
> on the existing contract of getting an empty iterator. We don't want to
> switch behavior when they aren't expecting it, so we provide them a
> config to assert that they _are_ expecting the new behavior, which
> means they take responsibility for updating their code to expect the new
> behavior.
>
> There doesn't seem to be a reason to offer a choice of behaviors on a
> per-query, or per-store basis. We just want people to be not surprised
> by this change in general.
>
> What do you think?
> Thanks,
> -John
>
> On Wed, May 20, 2020, at 17:37, Jorge Quilcate wrote:
> > Thank you both for the great feedback.
> >
> > I like the "fancy" proposal :), and how it removes the need for
> > additional API methods. And with a feature flag on `StateStore`,
> > disabled by default, it should not break current users.
> >
> > The only side-effect I can think of is that by moving the flag upwards,
> > all later operations become affected, which might be ok for most (all?)
> > cases. I can't think of a scenario where this would be an issue, just
> > want to point this out.
> >
> > If moving to this approach, I'd like to check if I got this right before
> > updating the KIP:
> >
> > - only `StateStore` will change by having a new method:
> > `backwardIteration()`, `false` by default to keep things compatible.
> > - then all `*Stores` will have to update their implementation based on
> > this flag.
> >
> >
> > On 20/05/2020 21:02, Sophie Blee-Goldman wrote:
> > >> There's no possibility that someone could be relying
> > >> on iterating over that range in increasing order, because that's not
> > >> what happens. However, they could indeed be relying on getting an
> > >> empty iterator
> > >
> > > I just meant that they might be relying on the assumption that the
> > > range query will never return results with decreasing keys. The empty
> > > iterator wouldn't break that contract, but of course a surprise
> > > reverse iterator would.
> > >
> > > FWIW I actually am in favor of automatically converting to a reverse
> > > iterator, I just thought we should consider whether this should be
> > > off by default or even possible to disable at all.
> > >
> > > On Tue, May 19, 2020 at 7:42 PM John Roesler wrote:
> > >
> > >> Thanks for the response, Sophie,
> > >>
> > >> I wholeheartedly agree we should take as much into account as possible
> > >> up front, rather than regretting our decisions later. I actually do
> > >> share your vague sense of worry, which was what led me to say initially
> > >> that I thought my counterproposal might be "too fancy". Sometimes, it's
> > >> better to be explicit instead of "elegant", if we think more people
> > >> will be confused than not.
> > >>
> > >> I really don't think that there's any danger of "relying on a bug"
> > >> here, although people certainly could be relying on current behavior.
> > >> One thing to be clear about (which I just left a more detailed comment
> > >> in KAFKA-8159 about) is that when we say something like key1 > key2,
> > >> this ordering is defined by the serde's output and nothing else.
> > >>
> > >> Currently, thanks to your fix in
> > >> https://github.com/apache/kafka/pull/6521,
> > >> the store contract is that for range scans, if from > to, then the
> > >> store must return an empty iterator. There's no possibility that
> > >> someone could be relying on iterating over that range in increasing
> > >> order, because that's not what happens. However, they could indeed be
> > >> relying on getting an empty iterator.
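
(To make John's ordering point concrete: "key1 > key2" above means comparison
of the serialized bytes. A minimal illustration using the existing
Bytes.BYTES_LEXICO_COMPARATOR helper; the String serde is just an example:)

    Serde<String> serde = Serdes.String();
    byte[] a = serde.serializer().serialize("topic", "key1");
    byte[] b = serde.serializer().serialize("topic", "key2");
    // The ordering the store sees is purely lexicographic over these bytes:
    int cmp = Bytes.BYTES_LEXICO_COMPARATOR.compare(a, b);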

Build failed in Jenkins: kafka-trunk-jdk14 #115

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9780: Deprecate commit records without record metadata (#8379)


--
[...truncated 377.20 KB...]

org.apache.kafka.common.record.MemoryRecordsTest > testFilterToBatchDiscard[28 
magic=2, firstOffset=57, compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testFilterToBatchDiscard[28 
magic=2, firstOffset=57, compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testBuildEndTxnMarker[28 
magic=2, firstOffset=57, compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testBuildEndTxnMarker[28 
magic=2, firstOffset=57, compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testHasRoomForMethod[28 
magic=2, firstOffset=57, compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testHasRoomForMethod[28 
magic=2, firstOffset=57, compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToPreservesLogAppendTime[28 magic=2, firstOffset=57, 
compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToPreservesLogAppendTime[28 magic=2, firstOffset=57, 
compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToPreservesPartitionLeaderEpoch[28 magic=2, firstOffset=57, 
compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToPreservesPartitionLeaderEpoch[28 magic=2, firstOffset=57, 
compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testFilterTo[28 magic=2, 
firstOffset=57, compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testFilterTo[28 magic=2, 
firstOffset=57, compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToWithUndersizedBuffer[28 magic=2, firstOffset=57, 
compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToWithUndersizedBuffer[28 magic=2, firstOffset=57, 
compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > 
testHasRoomForMethodWithHeaders[28 magic=2, firstOffset=57, 
compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > 
testHasRoomForMethodWithHeaders[28 magic=2, firstOffset=57, 
compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToPreservesProducerInfo[28 magic=2, firstOffset=57, 
compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToPreservesProducerInfo[28 magic=2, firstOffset=57, 
compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testWithRecords[28 magic=2, 
firstOffset=57, compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testWithRecords[28 magic=2, 
firstOffset=57, compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[28 magic=2, 
firstOffset=57, compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testIterator[28 magic=2, 
firstOffset=57, compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testEmptyBatchDeletion[28 
magic=2, firstOffset=57, compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testEmptyBatchDeletion[28 
magic=2, firstOffset=57, compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testEmptyBatchRetention[28 
magic=2, firstOffset=57, compressionType=LZ4] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testEmptyBatchRetention[28 
magic=2, firstOffset=57, compressionType=LZ4] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testChecksum[29 magic=2, 
firstOffset=57, compressionType=ZSTD] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testChecksum[29 magic=2, 
firstOffset=57, compressionType=ZSTD] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testNextBatchSize[29 
magic=2, firstOffset=57, compressionType=ZSTD] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > testNextBatchSize[29 
magic=2, firstOffset=57, compressionType=ZSTD] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToAlreadyCompactedLog[29 magic=2, firstOffset=57, 
compressionType=ZSTD] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToAlreadyCompactedLog[29 magic=2, firstOffset=57, 
compressionType=ZSTD] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToEmptyBatchRetention[29 magic=2, firstOffset=57, 
compressionType=ZSTD] STARTED

org.apache.kafka.common.record.MemoryRecordsTest > 
testFilterToEmptyBatchRetention[29 magic=2, firstOffset=57, 
compressionType=ZSTD] PASSED

org.apache.kafka.common.record.MemoryRecordsTest > testFilterToBatchDiscard[29 
magic=2, firstOffset=57, compressionType=ZSTD] STARTED


Build failed in Jenkins: kafka-2.5-jdk8 #129

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Correct MirrorMaker2 integration test configs for Connect


--
[...truncated 2.10 MB...]

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldEmitTombstoneWhenDeletingNonJoiningRecords[leftJoin=false, 
optimization=none, materialized=false, rejoin=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> joinShouldProduceNullsWhenValueHasNonMatchingForeignKey[leftJoin=false, 
optimization=none, materialized=false, rejoin=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> joinShouldProduceNullsWhenValueHasNonMatchingForeignKey[leftJoin=false, 
optimization=none, materialized=false, rejoin=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromLeftThenDeleteLeftEntity[leftJoin=false, optimization=none, 
materialized=false, rejoin=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromLeftThenDeleteLeftEntity[leftJoin=false, optimization=none, 
materialized=false, rejoin=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromRightThenDeleteRightEntity[leftJoin=false, optimization=none, 
materialized=false, rejoin=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromRightThenDeleteRightEntity[leftJoin=false, optimization=none, 
materialized=false, rejoin=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyInnerJoinMultiIntegrationTest
 > shouldInnerJoinMultiPartitionQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyInnerJoinMultiIntegrationTest
 > shouldInnerJoinMultiPartitionQueryable PASSED

org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[0: eosEnabled=false] STARTED

org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[0: eosEnabled=false] PASSED

org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[1: eosEnabled=true] STARTED

org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[1: eosEnabled=true] PASSED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldVerifyThatMetricsGetMeasurementsFromRocksDBForNonSegmentedStateStore[exactly_once]
 STARTED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldVerifyThatMetricsGetMeasurementsFromRocksDBForNonSegmentedStateStore[exactly_once]
 PASSED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldExposeRocksDBMetricsForSegmentedStateStoreBeforeAndAfterFailureWithEmptyStateDir[exactly_once]
 STARTED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldExposeRocksDBMetricsForSegmentedStateStoreBeforeAndAfterFailureWithEmptyStateDir[exactly_once]
 PASSED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldVerifyThatMetricsGetMeasurementsFromRocksDBForSegmentedStateStore[exactly_once]
 STARTED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldVerifyThatMetricsGetMeasurementsFromRocksDBForSegmentedStateStore[exactly_once]
 PASSED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldExposeRocksDBMetricsForNonSegmentedStateStoreBeforeAndAfterFailureWithEmptyStateDir[exactly_once]
 STARTED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldExposeRocksDBMetricsForNonSegmentedStateStoreBeforeAndAfterFailureWithEmptyStateDir[exactly_once]
 PASSED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldVerifyThatMetricsGetMeasurementsFromRocksDBForNonSegmentedStateStore[at_least_once]
 STARTED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldVerifyThatMetricsGetMeasurementsFromRocksDBForNonSegmentedStateStore[at_least_once]
 PASSED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldExposeRocksDBMetricsForSegmentedStateStoreBeforeAndAfterFailureWithEmptyStateDir[at_least_once]
 STARTED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldExposeRocksDBMetricsForSegmentedStateStoreBeforeAndAfterFailureWithEmptyStateDir[at_least_once]
 PASSED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldVerifyThatMetricsGetMeasurementsFromRocksDBForSegmentedStateStore[at_least_once]
 STARTED

org.apache.kafka.streams.integration.RocksDBMetricsIntegrationTest > 
shouldVerifyThatMetricsGetMeasurementsFromRocksDBForSegmentedStateStore[at_least_once]
 PASSED


Build failed in Jenkins: kafka-trunk-jdk14 #114

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Deploy VerifiableClient in constructor to avoid test timeouts


--
[...truncated 5.06 MB...]

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> joinShouldProduceNullsWhenValueHasNonMatchingForeignKey[leftJoin=false, 
optimization=all, materialized=true, rejoin=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> joinShouldProduceNullsWhenValueHasNonMatchingForeignKey[leftJoin=false, 
optimization=all, materialized=true, rejoin=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromLeftThenDeleteLeftEntity[leftJoin=false, optimization=all, 
materialized=true, rejoin=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromLeftThenDeleteLeftEntity[leftJoin=false, optimization=all, 
materialized=true, rejoin=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromRightThenDeleteRightEntity[leftJoin=false, optimization=all, 
materialized=true, rejoin=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromRightThenDeleteRightEntity[leftJoin=false, optimization=all, 
materialized=true, rejoin=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldUnsubscribeOldForeignKeyIfLeftSideIsUpdated[leftJoin=false, 
optimization=all, materialized=false, rejoin=true] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldUnsubscribeOldForeignKeyIfLeftSideIsUpdated[leftJoin=false, 
optimization=all, materialized=false, rejoin=true] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldNotEmitTombstonesWhenDeletingNonExistingRecords[leftJoin=false, 
optimization=all, materialized=false, rejoin=true] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldNotEmitTombstonesWhenDeletingNonExistingRecords[leftJoin=false, 
optimization=all, materialized=false, rejoin=true] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldEmitTombstoneWhenDeletingNonJoiningRecords[leftJoin=false, 
optimization=all, materialized=false, rejoin=true] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldEmitTombstoneWhenDeletingNonJoiningRecords[leftJoin=false, 
optimization=all, materialized=false, rejoin=true] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> joinShouldProduceNullsWhenValueHasNonMatchingForeignKey[leftJoin=false, 
optimization=all, materialized=false, rejoin=true] STARTED

org.apache.kafka.streams.integration.StoreQueryIntegrationTest > 
shouldQueryAllStalePartitionStores PASSED

org.apache.kafka.streams.integration.StoreQueryIntegrationTest > 
shouldQuerySpecificStalePartitionStores STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> joinShouldProduceNullsWhenValueHasNonMatchingForeignKey[leftJoin=false, 
optimization=all, materialized=false, rejoin=true] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromLeftThenDeleteLeftEntity[leftJoin=false, optimization=all, 
materialized=false, rejoin=true] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromLeftThenDeleteLeftEntity[leftJoin=false, optimization=all, 
materialized=false, rejoin=true] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromRightThenDeleteRightEntity[leftJoin=false, optimization=all, 
materialized=false, rejoin=true] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> doJoinFromRightThenDeleteRightEntity[leftJoin=false, optimization=all, 
materialized=false, rejoin=true] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldUnsubscribeOldForeignKeyIfLeftSideIsUpdated[leftJoin=false, 
optimization=all, materialized=false, rejoin=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldUnsubscribeOldForeignKeyIfLeftSideIsUpdated[leftJoin=false, 
optimization=all, materialized=false, rejoin=false] PASSED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldNotEmitTombstonesWhenDeletingNonExistingRecords[leftJoin=false, 
optimization=all, materialized=false, rejoin=false] STARTED

org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest 
> shouldNotEmitTombstonesWhenDeletingNonExistingRecords[leftJoin=false, 
optimization=all, materialized=false, rejoin=false] PASSED


Build failed in Jenkins: kafka-2.4-jdk8 #213

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Correct MirrorMaker2 integration test configs for Connect


--
[...truncated 2.29 MB...]

kafka.api.SaslSslAdminClientIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAttemptToCreateInvalidAcls 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAttemptToCreateInvalidAcls 
PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAclAuthorizationDenied STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAclAuthorizationDenied PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAclOperations STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAclOperations PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAclOperations2 STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAclOperations2 PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAclDelete STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAclDelete PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeReplicaLogDirs STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeReplicaLogDirs PASSED

kafka.api.SaslSslAdminClientIntegrationTest > 
testIncrementalAlterConfigsForLog4jLogLevels STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testIncrementalAlterConfigsForLog4jLogLevels SKIPPED

kafka.api.SaslSslAdminClientIntegrationTest > 
testIncrementalAlterConfigsForLog4jLogLevelsCannotResetRootLogger STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testIncrementalAlterConfigsForLog4jLogLevelsCannotResetRootLogger SKIPPED

kafka.api.SaslSslAdminClientIntegrationTest > 
testInvalidAlterPartitionReassignments STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testInvalidAlterPartitionReassignments PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testInvalidAlterConfigs STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testInvalidAlterConfigs PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testElectUncleanLeadersNoop 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testElectUncleanLeadersNoop PASSED

kafka.api.SaslSslAdminClientIntegrationTest > 
testAlterLogDirsAfterDeleteRecords STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testAlterLogDirsAfterDeleteRecords PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testElectUncleanLeadersAndNoop 
STARTED
ERROR: Could not install GRADLE_4_10_3_HOME
java.lang.NullPointerException

kafka.api.SaslSslAdminClientIntegrationTest > testElectUncleanLeadersAndNoop 
PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testElectPreferredLeaders STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testElectPreferredLeaders PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDeleteConsumerGroupOffsets 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDeleteConsumerGroupOffsets 
PASSED

kafka.api.SaslSslAdminClientIntegrationTest > 
testListReassignmentsDoesNotShowNonReassigningPartitions STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testListReassignmentsDoesNotShowNonReassigningPartitions PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testConsumeAfterDeleteRecords 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testConsumeAfterDeleteRecords 
PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testClose PASSED

kafka.api.SaslSslAdminClientIntegrationTest > 
testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords PASSED

kafka.api.SaslSslAdminClientIntegrationTest > 
testIncrementalAlterConfigsForLog4jLogLevelsDoesNotWorkWithInvalidConfigs 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testIncrementalAlterConfigsForLog4jLogLevelsDoesNotWorkWithInvalidConfigs 
SKIPPED

kafka.api.SaslSslAdminClientIntegrationTest > 
testDescribeConfigsForLog4jLogLevels STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testDescribeConfigsForLog4jLogLevels PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testMinimumRequestTimeouts STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testMinimumRequestTimeouts PASSED

kafka.api.SaslSslAdminClientIntegrationTest > 
testIncrementalAlterConfigsForLog4jLogLevelsCanResetLoggerToCurrentRoot STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testIncrementalAlterConfigsForLog4jLogLevelsCanResetLoggerToCurrentRoot SKIPPED

kafka.api.SaslSslAdminClientIntegrationTest > testForceClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testForceClose PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testListNodes STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testListNodes PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDelayedClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4558

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9780: Deprecate commit records without record metadata (#8379)


--
[...truncated 1.79 MB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceOffsetsIncreaseMonotonically STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceOffsetsIncreaseMonotonically PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceMonotonicallyIncreasingStartOffsets STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceMonotonicallyIncreasingStartOffsets PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.StopReplicaRequestTest > testStopReplicaRequest STARTED

kafka.server.StopReplicaRequestTest > testStopReplicaRequest PASSED

kafka.server.AlterReplicaLogDirsRequestTest > testAlterReplicaLogDirsRequest 
STARTED

kafka.server.AlterReplicaLogDirsRequestTest > testAlterReplicaLogDirsRequest 
PASSED

kafka.server.AlterReplicaLogDirsRequestTest > 
testAlterReplicaLogDirsRequestErrorCode STARTED

kafka.server.AlterReplicaLogDirsRequestTest > 
testAlterReplicaLogDirsRequestErrorCode PASSED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasBadClientId STARTED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasBadClientId PASSED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasBadConfigKey STARTED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasBadConfigKey PASSED

kafka.server.ClientQuotasRequestTest > testDescribeClientQuotasMatchPartial 
STARTED

kafka.server.ClientQuotasRequestTest > testDescribeClientQuotasMatchPartial 
PASSED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasRequestValidateOnly 
STARTED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasRequestValidateOnly 
PASSED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasBadUser STARTED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasBadUser PASSED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasEmptyEntity STARTED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasEmptyEntity PASSED

kafka.server.ClientQuotasRequestTest > testClientQuotasSanitized STARTED

kafka.server.ClientQuotasRequestTest > testClientQuotasSanitized PASSED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasBadConfigValue 
STARTED

kafka.server.ClientQuotasRequestTest > testAlterClientQuotasBadConfigValue 
PASSED

kafka.server.ClientQuotasRequestTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1487

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9780: Deprecate commit records without record metadata (#8379)


--
[...truncated 2.11 MB...]
org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForKeyValueStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForKeyValueStoreChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactAndDeleteTopicsForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactAndDeleteTopicsForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest > 
shouldUpgradeFromEosAlphaToEosBeta[false] PASSED

org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest > 
shouldUpgradeFromEosAlphaToEosBeta[true] STARTED

org.apache.kafka.streams.integration.SmokeTestDriverIntegrationTest > 
shouldWorkWithRebalance STARTED

Exception: java.lang.AssertionError thrown from the UncaughtExceptionHandler in 
thread "appDir2-StreamThread-1"

Exception: java.lang.AssertionError thrown from the UncaughtExceptionHandler in 
thread "appDir1-StreamThread-1"

Exception: java.lang.AssertionError thrown from the UncaughtExceptionHandler in 
thread "appDir2-StreamThread-1"

org.apache.kafka.streams.integration.SmokeTestDriverIntegrationTest > 
shouldWorkWithRebalance PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
shouldNotAccessJoinStoresWhenGivingName[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
shouldNotAccessJoinStoresWhenGivingName[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] PASSED


Re: [DISCUSS] KIP-602 - Change default value for client.dns.lookup

2020-05-21 Thread Ismael Juma
Badai, would you like to start a vote on this KIP?

Ismael

On Wed, May 20, 2020 at 7:45 AM Rajini Sivaram 
wrote:

> Deprecating for removal in 3.0 sounds good.
>
> On Wed, May 20, 2020 at 3:33 PM Ismael Juma  wrote:
>
> > Is there any reason to use "use_first_dns_ip"? Should we remove it
> > completely? Or at least deprecate it for removal in 3.0?
> >
> > Ismael
> >
> >
> > On Wed, May 20, 2020, 1:39 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi Badai,
> > >
> > > Thanks for the KIP, sounds like a useful change. Perhaps we should call
> > the
> > > new option `use_first_dns_ip` (not `_ips` since it refers to one). We
> > > should also mention in the KIP that only one type of address (ipv4 or
> > ipv6,
> > > based on the first one) will be used - that is the current behaviour
> for
> > > `use_all_dns_ips`.  Since we are changing `default` to be exactly the
> > same
> > > as `use_all_dns_ips`, it will be good to mention that explicitly under
> > > Public Interfaces.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > >
> > > On Mon, May 18, 2020 at 1:44 AM Badai Aqrandista 
> > > wrote:
> > >
> > > > Ismael
> > > >
> > > > What do you think of the PR and the explanation regarding the issue
> > > raised
> > > > in KIP-235?
> > > >
> > > > Should I go ahead and build a proper PR?
> > > >
> > > > Thanks
> > > > Badai
> > > >
> > > > On Mon, May 11, 2020 at 8:53 AM Badai Aqrandista  >
> > > > wrote:
> > > >
> > > > > Ismael
> > > > >
> > > > > PR created: https://github.com/apache/kafka/pull/8644/files
> > > > >
> > > > > Also, as this is my first PR, please let me know if I missed
> > anything.
> > > > >
> > > > > Thanks
> > > > > Badai
> > > > >
> > > > > On Mon, May 11, 2020 at 8:19 AM Badai Aqrandista <
> ba...@confluent.io
> > >
> > > > > wrote:
> > > > >
> > > > >> Ismael
> > > > >>
> > > > >> Thank you for responding.
> > > > >>
> > > > >> KIP-235 modified ClientUtils#parseAndValidateAddresses [1] to
> > resolve
> > > an
> > > > >> address alias (i.e. bootstrap server) into multiple addresses.
> This
> > is
> > > > why
> > > > >> it would break SSL hostname verification when the bootstrap server
> > is
> > > > an IP
> > > > >> address, i.e. it will resolve the IP address to an FQDN and use
> that
> > > > FQDN
> > > > >> in the SSL handshake.
> > > > >>
> > > > >> However, what I am proposing is to modify ClientUtils#resolve [2],
> > > which
> > > > >> is only used in ClusterConnectionStates#currentAddress [3], to get
> > the
> > > > >> resolved InetAddress of the address to connect to. And
> > > > >> ClusterConnectionStates#currentAddress is only used by
> > > > >> NetworkClient#initiateConnect [4] to create InetSocketAddress to
> > > > establish
> > > > >> the socket connection to the broker.
> > > > >>
> > > > >> Therefore, as far as I know, this change will not affect higher
> > level
> > > > >> protocol like SSL or SASL.
> > > > >>
> > > > >> PR coming after this.
> > > > >>
> > > > >> Thanks
> > > > >> Badai
> > > > >>
> > > > >> [1]
> > > > >>
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClientUtils.java#L51
> > > > >> [2]
> > > > >>
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClientUtils.java#L111
> > > > >> [3]
> > > > >>
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java#L403
> > > > >> [4]
> > > > >>
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/2.5.0/clients/src/main/java/org/apache/kafka/clients/NetworkClient.java#L955
> > > > >>
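> > > > >> To make the distinction concrete, here is a minimal sketch (illustrative
> > > > >> only, not the actual client code) of what `use_all_dns_ips` does compared
> > > > >> with the proposed `use_first_dns_ip`:
> > > > >>
> > > > >> import java.net.InetAddress;
> > > > >> import java.net.UnknownHostException;
> > > > >> import java.util.Arrays;
> > > > >> import java.util.Collections;
> > > > >> import java.util.List;
> > > > >>
> > > > >> public class DnsLookupSketch {
> > > > >>     // "use_all_dns_ips": every resolved A/AAAA record is a candidate,
> > > > >>     // so connection attempts can rotate through all of them.
> > > > >>     static List<InetAddress> resolveAll(String host) throws UnknownHostException {
> > > > >>         return Arrays.asList(InetAddress.getAllByName(host));
> > > > >>     }
> > > > >>
> > > > >>     // "use_first_dns_ip": only the first resolved address is ever used.
> > > > >>     static List<InetAddress> resolveFirst(String host) throws UnknownHostException {
> > > > >>         return Collections.singletonList(InetAddress.getAllByName(host)[0]);
> > > > >>     }
> > > > >> }
> > > > >>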
> > > > >> On Sun, May 10, 2020 at 10:18 AM Ismael Juma 
> > > wrote:
> > > > >>
> > > > >>> Hi Badai,
> > > > >>>
> > > > >>> I think this is a good change. Can you please address the issues
> > > raised
> > > > >>> by KIP-235? That was the reason why we did not do it previously.
> > > > >>>
> > > > >>> Ismael
> > > > >>>
> > > > >>> On Mon, Apr 27, 2020 at 5:46 PM Badai Aqrandista <
> > ba...@confluent.io
> > > >
> > > > >>> wrote:
> > > > >>>
> > > >  Hi everyone
> > > > 
> > > >  I have opened this KIP to have client.dns.lookup default value
> > > changed
> > > >  to
> > > >  "use_all_dns_ips".
> > > > 
> > > > 
> > > > 
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-602%3A+Change+default+value+for+client.dns.lookup
> > > > 
> > > >  Feedback appreciated.
> > > > 
> > > >  PS: I'm new here so please let me know if I miss anything.
> > > > 
> > > >  --
> > > >  Thanks,
> > > >  Badai
> > > > 
> > > > >>>
> > > > >>
> > > > >> --
> > > > >> Thanks,
> > > > >> Badai
> > > > >>
> > > > >>
> > > > >
> > > > > --
> > > > > Thanks,
> > > > > Badai
> > > > >
> > > > >
> > > >
> > > > --
> > > > Thanks,
> > > > Badai
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Nikolay Izhikov
Ismael, thanks for the clarification.

I updated the KIP according to your proposal.
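
To make the cipher-suite caveat below concrete: a client that pins cipher
suites explicitly would need something like the following (illustrative
values only; note the different naming convention for the TLS 1.3 suite):

ssl.enabled.protocols=TLSv1.2,TLSv1.3
# TLS 1.2 suites keep their old names; a TLS 1.3 suite must be added explicitly.
ssl.cipher.suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_AES_256_GCM_SHA384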

> On 21 May 2020, at 17:06, Ismael Juma wrote:
> 
> Given what we've seen in the test, it would be good to mention that TLS 1.3
> will not work for users who have configured ciphers explicitly. If such
> users want to use TLS 1.3, they will have to update the list of ciphers to
> include TLS 1.3 ciphers (which use a different naming convention). TLS 1.2
> will continue to work as usual, so there is no compatibility issue.
> 
> Ismael
> 
> On Tue, May 19, 2020 at 12:19 PM Nikolay Izhikov 
> wrote:
> 
>> PR - https://github.com/apache/kafka/pull/8695
>> 
>>> On 18 May 2020, at 23:30, Nikolay Izhikov wrote:
>>> 
>>> Hello, Colin
>>> 
>>> We need the hack only because TLSv1.3 is not supported in Java 8.
>>> 
 Java 8 will receive TLS 1.3 support later this year (
>> https://java.com/en/jre-jdk-cryptoroadmap.html)
>>> 
>>> We can
>>> 
>>> 1. Enable TLSv1.3 for Java 11 now, and remove the hack once Java 8
>> gets TLSv1.3 support.
>>> 2. Or we can wait and enable it only after the Java 8 update.
>>> 
>>> What do you think?
>>> 
 On 18 May 2020, at 22:51, Ismael Juma wrote:
 
 Yeah, agreed. One option is to actually only change this in Apache Kafka
 3.0 and avoid the hack altogether. We could make TLS 1.3 the default and
 have 1.2 as one of the enabled protocols.
 
 Ismael
 
 On Mon, May 18, 2020 at 12:24 PM Colin McCabe 
>> wrote:
 
> Hmm.  It would be good to figure out if we are going to remove this
> compatibility hack in the next major release of Kafka?  In other
>> words, in
> Kafka 3.0, will we enable TLS 1.3 by default even if the cipher suite
>> is
> specified?
> 
> best,
> Colin
> 
> 
> On Mon, May 18, 2020, at 09:26, Ismael Juma wrote:
>> Sounds good.
>> 
>> Ismael
>> 
>> 
>> On Mon, May 18, 2020, 9:03 AM Nikolay Izhikov 
> wrote:
>> 
 A safer approach may be to only add TLS 1.3 to the list if the
>> cipher
>>> suite config has not been specified.
 So, if TLS 1.3 is added to the list by Kafka, it would seem that it
>>> would not work if the user specified a list of cipher suites for
> previous
>>> TLS versions
>>> 
>>> Let’s just add test for this case?
>>> I can prepare the preliminary PR for this KIP and add this kind of
> test to
>>> it.
>>> 
>>> What do you think?
>>> 
>>> 
 On 18 May 2020, at 18:59, Nikolay Izhikov wrote:
 
> 1. I meant that `ssl.protocol` is TLSv1.2 while
> `ssl.enabled.protocols`
>>> is `TLSv1.2, TLSv1.3`. How do these two configs interact
 
 `ssl.protocol` is what will be used by default; in this KIP it stays
 unchanged (TLSv1.2). Please, see [1]
 `ssl.enabled.protocols` is the list of protocols that *can* be used. This
 value is just passed to the `SSLEngine` implementation.
 Please, see DefaultSslEngineFactory#createSslEngine [2]
 
> 2. My question is not about obsolete protocols, it is about people
>>> using TLS 1.2 with specified cipher suites. How will that behave when
> TLS
>>> 1.3 is enabled by default?
 
 They don't need to change anything; it all just works as expected on Java 11.
 
> 3. An additional question is how does this impact Java 8 users?
 
 Yes.
 If the SSLEngine doesn't support TLSv1.3, then Java 8 users should
 explicitly modify `ssl.enabled.protocols` and set it to `TLSv1.2`.
 
 [1]
>>> 
> 
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L218
 [2]
>>> 
> 
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L164
 
> On 18 May 2020, at 17:34, Ismael Juma wrote:
> 
> Nikolay,
> 
> Thanks for the comments. More below:
> 
> 1. I meant that `ssl.protocol` is TLSv1.2 while
> `ssl.enabled.protocols`
>>> is `TLSv1.2, TLSv1.3`. How do these two configs interact?
> 2. My question is not about obsolete protocols, it is about people
>>> using TLS 1.2 with specified cipher suites. How will that behave when
> TLS
>>> 1.3 is enabled by default?
> 3. An additional question is how does this impact Java 8 users?
> Java 8
>>> will receive TLS 1.3 support later this year (
>>> https://java.com/en/jre-jdk-cryptoroadmap.html), but it currently
>> does
>>> not support it. One way to handle this would be to check if the
> underlying
>>> JVM supports TLS 1.3 before enabling it.
> 
> I hope this clarifies my questions.
> 
> Ismael
> 
> On Mon, May 18, 2020 at 6:44 

Build failed in Jenkins: kafka-trunk-jdk11 #1486

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Deploy VerifiableClient in constructor to avoid test timeouts


--
[...truncated 3.10 MB...]
org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> 

Re: [VOTE] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Nikolay Izhikov
Thanks everyone!

After 3+ business days since this thread started, I'm concluding the vote
on KIP-573.

The KIP has passed with:

3 binding votes from Ismael Juma, Rajini Sivaram, Manikumar.

Thank you all for voting!

> On 21 May 2020, at 19:50, Ismael Juma wrote:
> 
> Nikolay, you have enough votes and 72 hours have passed, so you can close
> this vote as successful whenever you're ready.
> 
> Ismael
> 
> On Mon, Mar 2, 2020 at 10:55 AM Nikolay Izhikov  wrote:
> 
>> Hello.
>> 
>> I would like to start vote for KIP-573: Enable TLSv1.3 by default
>> 
>> KIP -
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
>> Discussion thread -
>> https://lists.apache.org/thread.html/r1158b6caf416e7db802780de71115b3e2d3ef2c4664b7ec8cb32ea86%40%3Cdev.kafka.apache.org%3E



[jira] [Resolved] (KAFKA-9780) Deprecate commit records without record metadata

2020-05-21 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9780.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk` after 
[KIP-586|https://cwiki.apache.org/confluence/display/KAFKA/KIP-586%3A+Deprecate+commit+records+without+record+metadata]
 has been adopted

> Deprecate commit records without record metadata
> 
>
> Key: KAFKA-9780
> URL: https://issues.apache.org/jira/browse/KAFKA-9780
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.4.1
>Reporter: Mario Molina
>Assignee: Mario Molina
>Priority: Minor
> Fix For: 2.6.0
>
>
> Since KIP-382 (MirrorMaker 2.0), a new {{commitRecord}} method was added to 
> the {{SourceTask}} class, to be called by the worker with a new parameter 
> carrying the record metadata. The old {{commitRecord}} method is called from 
> the new one and is preserved just for backwards compatibility.
> The idea is to deprecate the old method so that we can remove it in a future 
> release.
> There is a KIP for this ticket: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-586%3A+Deprecate+commit+records+without+record+metadata]
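> For reference, the relationship between the two {{commitRecord}} methods 
> looks roughly like this (a sketch based on the description above, not 
> necessarily the exact source):
> {code:java}
> public abstract class SourceTask implements Task {
>     // Old hook, kept only for backwards compatibility.
>     public void commitRecord(SourceRecord record) throws InterruptedException {
>         // no-op by default
>     }
> 
>     // New hook from KIP-382 (MirrorMaker 2.0): the worker calls this with
>     // the record metadata, and the default implementation delegates to the
>     // old method.
>     public void commitRecord(SourceRecord record, RecordMetadata metadata)
>             throws InterruptedException {
>         commitRecord(record);
>     }
> }
> {code}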



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-589: Add API to update Replica state in Controller

2020-05-21 Thread Jose Garcia Sancio
+1. LGTM David!

On Wed, May 20, 2020 at 12:22 PM David Arthur  wrote:
>
> Hello, all. I'd like to start the vote for KIP-589 which proposes to add a
> new AlterReplicaState RPC.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-589+Add+API+to+update+Replica+state+in+Controller
>
> Cheers,
> David



-- 
-Jose


Re: [VOTE] KIP-610: Error Reporting in Sink Connectors

2020-05-21 Thread Randall Hauch
The vote has been open for >72 hours, and the KIP is adopted with three +1
binding votes (Konstantine, Ewen, me), one +1 non-binding vote (Andrew),
and no -1 votes.

I'll update the KIP and the AK 2.6.0 plan.

Thanks, everyone.
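
For those catching up on the thread, the adopted API lets a sink task obtain
an errant-record reporter from its context. Usage looks roughly like the
sketch below (based on the KIP text; process() is a placeholder for the
connector's own delivery logic, so check the merged interfaces for the exact
signatures):

// Inside a SinkTask implementation:
private ErrantRecordReporter reporter;

@Override
public void start(Map<String, String> props) {
    // May be null on older workers that don't support error reporting.
    reporter = context.errantRecordReporter();
}

@Override
public void put(Collection<SinkRecord> records) {
    for (SinkRecord record : records) {
        try {
            process(record);  // connector-specific delivery (placeholder)
        } catch (RuntimeException e) {
            if (reporter != null) {
                reporter.report(record, e);  // routed per the error-handling config
            } else {
                throw e;
            }
        }
    }
}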

On Tue, May 19, 2020 at 4:33 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> +1 (binding)
>
> I like how the KIP looks now too. Quite active discussions within the past
> few days, which I found very useful.
>
> There's some room, in the future, to let connector developers decide
> whether they want greater control over error reporting or want the
> framework to keep providing the reasonable guarantees that this KIP now
> describes. The API is expressive enough to accommodate such improvements if
> they are warranted, but its current form seems quite adequate to support
> efficient end-to-end error reporting for sink connectors.
>
> Thanks for introducing this KIP Aakash!
>
> One last minor comment around naming:
> Currently both the names ErrantRecordReporter and failedRecordReporter are
> used. Using the same name everywhere seems preferable, so feel free to
> choose the one that you prefer.
>
> Regards,
> Konstantine
>
> On Tue, May 19, 2020 at 2:30 PM Ewen Cheslack-Postava 
> wrote:
>
> > +1 (binding)
> >
> > This will be a nice improvement. From the discussion thread it's clear
> this
> > is tricky to get right, nice work!
> >
> > On Tue, May 19, 2020 at 8:16 AM Andrew Schofield <
> > andrew_schofi...@live.com>
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > This is now looking very nice.
> > >
> > > Andrew Schofield
> > >
> > > On 19/05/2020, 16:11, "Randall Hauch"  wrote:
> > >
> > > Thank you, Aakash, for putting together this KIP and shepherding
> the
> > > discussion. Also, many thanks to all those that participated in the
> > > very
> > > active discussion. I'm actually very happy with the current
> proposal,
> > > am
> > > confident that it is a valuable improvement to the Connect
> framework,
> > > and
> > > know that it will be instrumental in making sink tasks easily able
> to
> > > report problematic records and keep running.
> > >
> > > +1 (binding)
> > >
> > > Best regards,
> > >
> > > Randall
> > >
> > > On Sun, May 17, 2020 at 6:59 PM Aakash Shah 
> > > wrote:
> > >
> > > > Hello all,
> > > >
> > > > I'd like to open a vote for KIP-610:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
> > > >
> > > > Thanks,
> > > > Aakash
> > > >
> > >
> > >
> >
>


Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-05-21 Thread Boyang Chen
Thanks David. I agree the wording here is not clear; the fellow broker
should just send a new CreateTopicsRequest in this case.

In the meantime, we had some offline discussion about the Envelope API as
well. Although it provides certain conveniences, such as data embedding and
principal embedding, it creates a security hole by letting a malicious user
impersonate any forwarding broker and thus pretend to be any admin user.
Passing the principal around also enlarges the attack surface compared
with standard alternatives such as passing a verified token, which is
unfortunately not fully supported by Kafka security today.

So, given the security concerns, we are abandoning the Envelope approach
and falling back to just forwarding the raw admin requests. Authentication
will happen on the receiving broker side instead of on the controller, so
that we can strip off the principal fields and only include the principal
name in the header, as an optional field for audit logging purposes.
Furthermore, we shall propose adding a separate endpoint for
broker-to-controller communication, on which enabling secure connections is
recommended, so that a malicious client cannot pretend to be a broker and
perform impersonation.
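
To illustrate the intended flow (all names below are hypothetical; the KIP
will pin down the actual RPCs and classes):

// Broker-side handling of a redirected admin request (sketch only).
void handleCreateTopics(CreateTopicsRequest request, RequestContext context) {
    if (isActiveController()) {
        // Authorize against the principal authenticated on this connection,
        // then apply the change.
        authorizeAndCreateTopics(request, context.principal());
    } else {
        // Forward the raw request over the dedicated broker-to-controller
        // endpoint; the original principal name travels only as an optional
        // header field, used for audit logging.
        controllerChannel.forward(request, context.principal().getName());
    }
}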

Let me know your thoughts.

Best,
Boyang

On Tue, May 19, 2020 at 12:17 AM David Jacot  wrote:

> Hi Boyang,
>
> I've got another question regarding the auto topic creation case. The KIP
> says: "Currently the target broker shall just utilize its own ZK client to
> create
> internal topics, which is disallowed in the bridge release. For above
> scenarios,
> non-controller broker shall just forward a CreateTopicRequest to the
> controller
> instead and let controller take care of the rest, while waiting for the
> response
> in the meantime." There will be no request to forward in this case, right?
> Instead,
> a CreateTopicsRequest is created and sent to the controller node.
>
> When the CreateTopicsRequest is sent as a side effect of the
> MetadataRequest,
> it would be good to know the principal and the clientId in the controller
> (quota,
> audit, etc.). Do you plan to use the Envelope API for this case as well or
> to call
> the regular API directly? Another way to phrase it would be: Shall the
> internal
> CreateTopicsRequest be sent with the original metadata (principal,
> clientId, etc.)
> of the MetadataRequest or as an admin request?
>
> Best,
> David
>
> On Fri, May 8, 2020 at 2:04 AM Guozhang Wang  wrote:
>
> > Just to add a bit more FYI here related to the last question from David:
> > in KIP-595 while implementing the new requests we are also adding a
> > "KafkaNetworkChannel" which is used for brokers to send vote / fetch
> > records, so besides the discussion on listeners I think implementation
> wise
> > we can also consider consolidating a lot of those into the same
> call-trace
> > as well -- of course this is not related to public APIs so maybe just
> needs
> > to be coordinated among developments:
> >
> > 1. Broker -> Controller: ISR Change, Topic Creation, Admin Redirect
> > (KIP-497).
> > 2. Controller -> Broker: LeaderAndISR / MetadataUpdate; though these are
> > likely going to be deprecated post KIP-500.
> > 3. Txn Coordinator -> Broker: TxnMarker
> >
> >
> > Guozhang
> >
> > On Wed, May 6, 2020 at 8:58 PM Boyang Chen 
> > wrote:
> >
> > > Hey David,
> > >
> > > thanks for the feedbacks!
> > >
> > > On Wed, May 6, 2020 at 2:06 AM David Jacot 
> wrote:
> > >
> > > > Hi Boyang,
> > > >
> > > > While re-reading the KIP, I've got few small questions/comments:
> > > >
> > > > 1. When auto topic creation is enabled, brokers will send a
> > > > CreateTopicRequest
> > > > to the controller instead of writing to ZK directly. It means that
> > > > creation of these
> > > > topics are subject to be rejected with an error if a
> CreateTopicPolicy
> > is
> > > > used. Today,
> > > > it bypasses the policy entirely. I suppose that clusters allowing
> auto
> > > > topic creation
> > > > don't have a policy in place so it is not a big deal. I suggest to
> call
> > > > out explicitly the
> > > > limitation in the KIP though.
> > > >
> > > > That's a good idea, will add to the KIP.
> > >
> > >
> > > > 2. In the same vein as my first point. How do you plan to handle
> errors
> > > > when internal
> > > > topics are created by a broker? Do you plan to retry retryable errors
> > > > indefinitely?
> > > >
> > > > I checked a bit on the admin client handling of the create topic RPC.
> > It
> > > seems that
> > > the only retriable exceptions at the moment are NOT_CONTROLLER and
> > > REQUEST_TIMEOUT.
> > > So I guess we just need to retry on these exceptions?
> > >
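> > > A sketch of that retry loop (helper names hypothetical):
> > >
> > > while (true) {
> > >     CreateTopicsResponse response = sendToController(request);
> > >     Errors error = topicError(response);
> > >     if (error == Errors.NOT_CONTROLLER || error == Errors.REQUEST_TIMED_OUT) {
> > >         // Retriable: refresh the cached controller and back off.
> > >         refreshControllerMetadata();
> > >         backoff();
> > >     } else {
> > >         break;  // success, or a fatal error to surface to the caller
> > >     }
> > > }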
> > >
> > > > 3. Could you clarify which listener will be used for the internal
> > > requests?
> > > > Do you plan
> > > > to use the control plane listener or perhaps the inter-broker
> listener?
> > > >
> > > > As we discussed in the KIP, currently the internal design for
> > > broker->controller channel has not been
> > > done yet, and I feel it 

Build failed in Jenkins: kafka-trunk-jdk8 #4557

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Added unit tests for ConnectionQuotas (#8650)

[github] MINOR: Deploy VerifiableClient in constructor to avoid test timeouts


--
[...truncated 1.48 MB...]

kafka.server.DelayedOperationTest > testRequestPurge STARTED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist STARTED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride PASSED

kafka.server.DelayedOperationTest > testTryCompleteLockContention STARTED

kafka.server.DelayedOperationTest > testTryCompleteLockContention PASSED

kafka.server.DelayedOperationTest > testTryCompleteWithMultipleThreads STARTED

kafka.server.DelayedOperationTest > testTryCompleteWithMultipleThreads PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testDelayedFuture STARTED

kafka.server.DelayedOperationTest > testDelayedFuture PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLock STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLock PASSED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithDeletePartitionAndExistingPartitionAndOlderLeaderEpoch 
STARTED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithDeletePartitionAndExistingPartitionAndOlderLeaderEpoch PASSED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithDeletePartitionAndExistingPartitionAndEqualLeaderEpoch 
STARTED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithDeletePartitionAndExistingPartitionAndEqualLeaderEpoch PASSED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithExistingPartitionAndDeleteSentinel STARTED

kafka.log.LogTest > testCorruptIndexRebuild PASSED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted STARTED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithExistingPartitionAndDeleteSentinel PASSED

kafka.server.ReplicaManagerTest > testPreferredReplicaAsLeader STARTED

kafka.server.ReplicaManagerTest > testPreferredReplicaAsLeader PASSED

kafka.server.ReplicaManagerTest > testFetchRequestRateMetrics STARTED

kafka.server.ReplicaManagerTest > testFetchRequestRateMetrics PASSED

kafka.server.ReplicaManagerTest > testReplicaSelector STARTED

kafka.server.ReplicaManagerTest > testReplicaSelector PASSED

kafka.server.ReplicaManagerTest > testFetchBeyondHighWatermark STARTED

kafka.log.LogTest > shouldDeleteTimeBasedSegmentsReadyToBeDeleted PASSED

kafka.log.LogTest > testReadWithTooSmallMaxLength STARTED

kafka.server.ReplicaManagerTest > testFetchBeyondHighWatermark PASSED

kafka.server.ReplicaManagerTest > testStopReplicaWithStaleControllerEpoch 
STARTED

kafka.server.ReplicaManagerTest > testStopReplicaWithStaleControllerEpoch PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping STARTED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > 
testBecomeFollowerWhenLeaderIsUnchangedButMissedLeaderUpdate STARTED

kafka.server.ReplicaManagerTest > 
testBecomeFollowerWhenLeaderIsUnchangedButMissedLeaderUpdate PASSED

kafka.server.ReplicaManagerTest > testFollowerStateNotUpdatedIfLogReadFails 
STARTED

kafka.server.ReplicaManagerTest > testFollowerStateNotUpdatedIfLogReadFails 
PASSED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithExistingPartitionAndNewerLeaderEpoch STARTED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithExistingPartitionAndNewerLeaderEpoch PASSED

kafka.server.ReplicaManagerTest > testFencedErrorCausedByBecomeLeader STARTED

kafka.server.ReplicaManagerTest > testFencedErrorCausedByBecomeLeader PASSED

kafka.server.ReplicaManagerTest > testClearFetchPurgatoryOnStopReplica STARTED

kafka.server.ReplicaManagerTest > testClearFetchPurgatoryOnStopReplica PASSED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithInexistentPartitionAndPartitionsDelete STARTED

kafka.server.ReplicaManagerTest > 
testStopReplicaWithInexistentPartitionAndPartitionsDelete PASSED

kafka.server.ReplicaManagerTest > testFetchFromLeaderAlwaysAllowed STARTED

kafka.server.ReplicaManagerTest > testFetchFromLeaderAlwaysAllowed PASSED

kafka.server.ReplicaManagerTest > 
testFetchMessagesWhenNotFollowerForOnePartition STARTED

kafka.server.ReplicaManagerTest > 

Build failed in Jenkins: kafka-trunk-jdk14 #113

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Correct MirrorMaker2 integration test configs for Connect

[github] MINOR: Added unit tests for ConnectionQuotas (#8650)


--
[...truncated 1.77 MB...]

kafka.admin.TopicCommandWithAdminClientTest > 
testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress PASSED

kafka.admin.TopicCommandWithAdminClientTest > 
testCreateWithNegativePartitionCount STARTED

kafka.admin.LeaderElectionCommandTest > testPreferredReplicaElection PASSED

kafka.admin.LeaderElectionCommandTest > testInvalidBroker STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testCreateWithNegativePartitionCount PASSED

kafka.admin.TopicCommandWithAdminClientTest > testAlterWhenTopicDoesntExist 
STARTED

kafka.admin.LeaderElectionCommandTest > testInvalidBroker PASSED

kafka.admin.LeaderElectionCommandTest > testPartitionWithoutTopic STARTED

kafka.admin.TopicCommandWithAdminClientTest > testAlterWhenTopicDoesntExist 
PASSED

kafka.admin.TopicCommandWithAdminClientTest > 
testParseAssignmentPartitionsOfDifferentSize STARTED

kafka.admin.LeaderElectionCommandTest > testPartitionWithoutTopic PASSED

kafka.admin.LeaderElectionCommandTest > testMissingElectionType STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testParseAssignmentPartitionsOfDifferentSize PASSED

kafka.admin.TopicCommandWithAdminClientTest > testCreateAlterTopicWithRackAware 
STARTED

kafka.admin.LeaderElectionCommandTest > testMissingElectionType PASSED

kafka.admin.LeaderElectionCommandTest > testMissingTopicPartitionSelection 
STARTED

kafka.admin.LeaderElectionCommandTest > testMissingTopicPartitionSelection 
PASSED

kafka.admin.LeaderElectionCommandTest > testTopicPartition STARTED

kafka.admin.TopicCommandWithAdminClientTest > testCreateAlterTopicWithRackAware 
PASSED

kafka.admin.TopicCommandWithAdminClientTest > testTopicDeletion STARTED

kafka.admin.TopicCommandWithAdminClientTest > testTopicDeletion PASSED

kafka.admin.TopicCommandWithAdminClientTest > testCreateWithDefaults STARTED

kafka.admin.TopicCommandWithAdminClientTest > testCreateWithDefaults PASSED

kafka.admin.TopicCommandWithAdminClientTest > 
testDescribeReportOverriddenConfigs STARTED

kafka.admin.LeaderElectionCommandTest > testTopicPartition PASSED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicOnly STARTED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicOnly PASSED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicOnly STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testDescribeReportOverriddenConfigs PASSED

kafka.admin.TopicCommandWithAdminClientTest > 
testCreateWithAssignmentAndPartitionCount STARTED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicOnly PASSED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testCreateWithAssignmentAndPartitionCount PASSED

kafka.admin.TopicCommandWithAdminClientTest > testListTopicsWithWhitelist 
STARTED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition PASSED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition STARTED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition PASSED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition STARTED

kafka.admin.TopicCommandWithAdminClientTest > testListTopicsWithWhitelist PASSED

kafka.admin.TopicCommandWithAdminClientTest > 
testAlterAssignmentWithMoreAssignmentThanPartitions STARTED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition PASSED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly STARTED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly PASSED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicPartition STARTED

kafka.admin.TopicCommandWithAdminClientTest > 
testAlterAssignmentWithMoreAssignmentThanPartitions PASSED

kafka.admin.TopicCommandWithAdminClientTest > testCreateWithDefaultPartitions 
STARTED

kafka.admin.DeleteOffsetsConsumerGroupCommandIntegrationTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1485

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Correct MirrorMaker2 integration test configs for Connect

[github] MINOR: Added unit tests for ConnectionQuotas (#8650)


--
[...truncated 1.58 MB...]

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.admin.DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted PASSED

kafka.admin.DeleteTopicTest > testIncreasePartitionCountDuringDeleteTopic 
STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testUnscheduleProducerTask STARTED

kafka.utils.SchedulerTest > testUnscheduleProducerTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testMockSchedulerLocking STARTED

kafka.utils.SchedulerTest > testMockSchedulerLocking PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg STARTED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED

kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters STARTED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist 
PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter PASSED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting STARTED

kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldReturnEmptyMapForEmptyFile STARTED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldReturnEmptyMapForEmptyFile PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldThrowIfVersionIsNotRecognised STARTED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldThrowIfVersionIsNotRecognised PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
testLazyOffsetCheckpointFileInvalidLogDir STARTED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
testLazyOffsetCheckpointFileInvalidLogDir PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldPersistAndOverwriteAndReloadFile STARTED

kafka.server.checkpoints.OffsetCheckpointFileTest > 
shouldPersistAndOverwriteAndReloadFile PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > shouldHandleMultipleLines 
STARTED

kafka.server.checkpoints.OffsetCheckpointFileTest > shouldHandleMultipleLines 
PASSED

kafka.server.checkpoints.OffsetCheckpointFileTest > 

Re: [VOTE] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Ismael Juma
Nikolay, you have enough votes and 72 hours have passed, so you can close
this vote as successful whenever you're ready.

Ismael

On Mon, Mar 2, 2020 at 10:55 AM Nikolay Izhikov  wrote:

> Hello.
>
> I would like to start vote for KIP-573: Enable TLSv1.3 by default
>
> KIP -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
> Discussion thread -
> https://lists.apache.org/thread.html/r1158b6caf416e7db802780de71115b3e2d3ef2c4664b7ec8cb32ea86%40%3Cdev.kafka.apache.org%3E


Build failed in Jenkins: kafka-trunk-jdk8 #4556

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Correct MirrorMaker2 integration test configs for Connect


--
[...truncated 1.44 MB...]

kafka.api.PlaintextAdminIntegrationTest > testAlterReplicaLogDirs STARTED

kafka.api.PlaintextAdminIntegrationTest > testAlterReplicaLogDirs PASSED

kafka.api.PlaintextAdminIntegrationTest > testLogStartOffsetCheckpoint STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.PlaintextAdminIntegrationTest > testLogStartOffsetCheckpoint PASSED

kafka.api.PlaintextAdminIntegrationTest > 
testAlterConfigsForLog4jLogLevelsDoesNotWork STARTED

kafka.api.PlaintextAdminIntegrationTest > 
testAlterConfigsForLog4jLogLevelsDoesNotWork SKIPPED

kafka.api.PlaintextAdminIntegrationTest > testAclOperations STARTED

kafka.api.PlaintextAdminIntegrationTest > testAclOperations PASSED

kafka.api.PlaintextAdminIntegrationTest > testDescribeCluster STARTED

kafka.api.PlaintextAdminIntegrationTest > testDescribeCluster PASSED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersForManyPartitions STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersForManyPartitions PASSED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersForUnknownPartitions STARTED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersForUnknownPartitions PASSED

kafka.api.PlaintextAdminIntegrationTest > testCreatePartitions STARTED

kafka.api.PlaintextAdminIntegrationTest > testCreatePartitions PASSED

kafka.api.PlaintextAdminIntegrationTest > testDescribeNonExistingTopic STARTED

kafka.api.PlaintextAdminIntegrationTest > testDescribeNonExistingTopic PASSED

kafka.api.PlaintextAdminIntegrationTest > testMetadataRefresh STARTED

kafka.api.PlaintextAdminIntegrationTest > testMetadataRefresh PASSED

kafka.api.PlaintextAdminIntegrationTest > testDescribeAndAlterConfigs STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.PlaintextAdminIntegrationTest > testDescribeAndAlterConfigs PASSED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersForAllPartitions STARTED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersForAllPartitions PASSED

kafka.api.PlaintextAdminIntegrationTest > testLogStartOffsetAfterDeleteRecords 
STARTED

kafka.api.PlaintextAdminIntegrationTest > testLogStartOffsetAfterDeleteRecords 
PASSED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersForOnePartition STARTED

kafka.api.PlaintextAdminIntegrationTest > 
testElectUncleanLeadersForOnePartition PASSED

kafka.api.PlaintextAdminIntegrationTest > testValidIncrementalAlterConfigs 
STARTED

kafka.api.PlaintextAdminIntegrationTest > testValidIncrementalAlterConfigs 
PASSED

kafka.api.PlaintextAdminIntegrationTest > testInvalidIncrementalAlterConfigs 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextAdminIntegrationTest > testInvalidIncrementalAlterConfigs 
PASSED

kafka.api.PlaintextAdminIntegrationTest > testSeekAfterDeleteRecords STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testClose STARTED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush STARTED

kafka.api.PlaintextAdminIntegrationTest > testSeekAfterDeleteRecords PASSED

kafka.api.PlaintextAdminIntegrationTest > testCallInFlightTimeouts STARTED

kafka.api.PlaintextAdminIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.PlaintextAdminIntegrationTest > testNullConfigs STARTED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition STARTED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset STARTED

kafka.api.PlaintextAdminIntegrationTest > testNullConfigs PASSED

kafka.api.PlaintextAdminIntegrationTest > testDescribeConfigsForTopic STARTED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime STARTED

kafka.api.PlaintextAdminIntegrationTest > testDescribeConfigsForTopic PASSED

kafka.api.PlaintextAdminIntegrationTest > testConsumerGroups STARTED

kafka.api.PlaintextAdminIntegrationTest > testConsumerGroups PASSED


Re: [VOTE] KIP-545 support automated consumer offset sync across clusters in MM 2.0

2020-05-21 Thread Harsha Ch
+1 (binding). Good addition to MM 2.

-Harsha

On Thu, May 21, 2020 at 8:36 AM, Manikumar <manikumar.re...@gmail.com> wrote:

> +1 (binding).
>
> Thanks for the KIP.
>
> On Thu, May 21, 2020 at 9:49 AM Maulin Vasavada <maulin.vasavada@gmail.com> wrote:
>
>> Thank you for the KIP. I sincerely hope we get enough votes on this KIP. I
>> was thinking of similar changes while working on DR capabilities and
>> offsets are Achilles Heels and this KIP addresses it.
>>
>> On Mon, May 18, 2020 at 6:10 PM Maulin Vasavada <maulin.vasavada@gmail.com> wrote:
>>
>>> +1 (non-binding)
>>>
>>> On Mon, May 18, 2020 at 9:41 AM Ryanne Dolan <ryannedolan@gmail.com> wrote:
>>>
>>>> Bump. Looks like we've got 6 non-binding votes and 1 binding.
>>>>
>>>> On Thu, Feb 20, 2020 at 11:25 AM Ning Zhang <ning2008wisc@gmail.com> wrote:
>>>>
>>>>> Hello committers,
>>>>>
>>>>> I am the author of the KIP-545 and if we still miss votes from the
>>>>> committers, please review the KIP and vote for it, so that the
>>>>> corresponding PR will be reviewed soon.
>>>>>
>>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
>>>>>
>>>>> Thank you
>>>>>
>>>>> On 2020/02/06 17:05:41, Edoardo Comar <edoardlists@gmail.com> wrote:
>>>>>> +1 (non-binding)
>>>>>> thanks for the KIP !
>>>>>>
>>>>>> On Tue, 14 Jan 2020 at 13:57, Navinder Brar <navinder_brar@yahoo.com.invalid> wrote:
>>>>>>
>>>>>>> +1 (non-binding)
>>>>>>> Navinder
>>>>>>> On Tuesday, 14 January, 2020, 07:24:02 pm IST, Ryanne Dolan <ryannedolan@gmail.com> wrote:
>>>>>>>
>>>>>>> Bump. We've got 4 non-binding and one binding vote.
>>>>>>>
>>>>>>> Ryanne
>>>>>>>
>>>>>>> On Fri, Dec 13, 2019, 1:44 AM Tom Bentley <tbentley@redhat.com> wrote:
>>>>>>>
>>>>>>>> +1 (non-binding)
>>>>>>>>
>>>>>>>> On Thu, Dec 12, 2019 at 6:33 PM Andrew Schofield <andrew_schofield@live.com> wrote:
>>>>>>>>
>>>>>>>>> +1 (non-binding)
>>>>>>>>>
>>>>>>>>> On 12/12/2019, 14:20, "Mickael Maison" <mickael.maison@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> +1 (binding)
>>>>>>>>> Thanks for the KIP!
>>>>>>>>>
>>>>>>>>> On Thu, Dec 5, 2019 at 12:56 AM Ryanne Dolan <ryannedolan@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Bump. We've got 2 non-binding votes so far.
>>>>>>>>>>
>>>>>>>>>> On Wed, Nov 13, 2019 at 3:32 PM Ning Zhang <ning2008wisc@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> My current plan is to implement this in "MirrorCheckpointTask"
>>>>>>>>>>>
>>>>>>>>>>> On 2019/11/02 03:30:11, Xu Jianhai <snow4young@gmail.com> wrote:

Re: [VOTE] KIP-545 support automated consumer offset sync across clusters in MM 2.0

2020-05-21 Thread Manikumar
+1 (binding).

Thanks for the KIP.

On Thu, May 21, 2020 at 9:49 AM Maulin Vasavada 
wrote:

> Thank you for the KIP. I sincerely hope we get enough votes on this KIP. I
> was thinking of similar changes while working on DR capabilities and
> offsets are Achilles Heels and this KIP addresses it.
>
> On Mon, May 18, 2020 at 6:10 PM Maulin Vasavada  >
> wrote:
>
> > +1 (non-binding)
> >
> > On Mon, May 18, 2020 at 9:41 AM Ryanne Dolan 
> > wrote:
> >
> >> Bump. Looks like we've got 6 non-binding votes and 1 binding.
> >>
> >> On Thu, Feb 20, 2020 at 11:25 AM Ning Zhang 
> >> wrote:
> >>
> >> > Hello committers,
> >> >
> >> > I am the author of the KIP-545 and if we still miss votes from the
> >> > committers, please review the KIP and vote for it, so that the
> >> > corresponding PR will be reviewed soon.
> >> >
> >> >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
> >> >
> >> > Thank you
> >> >
> >> > On 2020/02/06 17:05:41, Edoardo Comar  wrote:
> >> > > +1 (non-binding)
> >> > > thanks for the KIP !
> >> > >
> >> > > On Tue, 14 Jan 2020 at 13:57, Navinder Brar <
> navinder_b...@yahoo.com
> >> > .invalid>
> >> > > wrote:
> >> > >
> >> > > > +1 (non-binding)
> >> > > > Navinder
> >> > > > On Tuesday, 14 January, 2020, 07:24:02 pm IST, Ryanne Dolan <
> >> > > > ryannedo...@gmail.com> wrote:
> >> > > >
> >> > > >  Bump. We've got 4 non-binding and one binding vote.
> >> > > >
> >> > > > Ryanne
> >> > > >
> >> > > > On Fri, Dec 13, 2019, 1:44 AM Tom Bentley 
> >> wrote:
> >> > > >
> >> > > > > +1 (non-binding)
> >> > > > >
> >> > > > > On Thu, Dec 12, 2019 at 6:33 PM Andrew Schofield <
> >> > > > > andrew_schofi...@live.com>
> >> > > > > wrote:
> >> > > > >
> >> > > > > > +1 (non-binding)
> >> > > > > >
> >> > > > > > On 12/12/2019, 14:20, "Mickael Maison" <
> >> mickael.mai...@gmail.com>
> >> > > > > wrote:
> >> > > > > >
> >> > > > > >+1 (binding)
> >> > > > > >Thanks for the KIP!
> >> > > > > >
> >> > > > > >On Thu, Dec 5, 2019 at 12:56 AM Ryanne Dolan <
> >> > ryannedo...@gmail.com
> >> > > > >
> >> > > > > > wrote:
> >> > > > > >>
> >> > > > > >> Bump. We've got 2 non-binding votes so far.
> >> > > > > >>
> >> > > > > >> On Wed, Nov 13, 2019 at 3:32 PM Ning Zhang <
> >> > > > ning2008w...@gmail.com
> >> > > > > >
> >> > > > > > wrote:
> >> > > > > >>
> >> > > > > >> > My current plan is to implement this in
> >> > "MirrorCheckpointTask"
> >> > > > > >> >
> >> > > > > >> > On 2019/11/02 03:30:11, Xu Jianhai <
> snow4yo...@gmail.com
> >> >
> >> > > > wrote:
> >> > > > > >> > > I think this kip will implement a task in sinkTask ?
> >> > right?
> >> > > > > >> > >
> >> > > > > >> > > On Sat, Nov 2, 2019 at 1:06 AM Ryanne Dolan <
> >> > > > > > ryannedo...@gmail.com>
> >> > > > > >> > wrote:
> >> > > > > >> > >
> >> > > > > >> > > > Hey y'all, Ning Zhang and I would like to start the
> >> > vote for
> >> > > > > > the
> >> > > > > >> > following
> >> > > > > >> > > > small KIP:
> >> > > > > >> > > >
> >> > > > > >> > > >
> >> > > > > >> > > >
> >> > > > > >> >
> >> > > > > >
> >> > > > >
> >> > > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
> >> > > > > >> > > >
> >> > > > > >> > > > This is an elegant way to automatically write
> >> consumer
> >> > group
> >> > > > > > offsets to
> >> > > > > >> > > > downstream clusters without breaking existing use
> >> cases.
> >> > > > > > Currently, we
> >> > > > > >> > rely
> >> > > > > >> > > > on external tooling based on RemoteClusterUtils and
> >> > > > > >> > kafka-consumer-groups
> >> > > > > >> > > > command to write offsets. This KIP bakes this
> >> > functionality
> >> > > > > > into MM2
> >> > > > > >> > > > itself, reducing the effort required to
> >> > failover/failback
> >> > > > > > workloads
> >> > > > > >> > between
> >> > > > > >> > > > clusters.
> >> > > > > >> > > >
> >> > > > > >> > > > Thanks for the votes!
> >> > > > > >> > > >
> >> > > > > >> > > > Ryanne
> >> > > > > >> > > >
> >> > > > > >> > >
> >> > > > > >> >
> >> > > > > >
> >> > > > > >
> >> > > > > >
> >> > > > >
> >> > >
> >> >
> >>
> >
>
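
For anyone unfamiliar with the external tooling referenced above: a manual
offset translation today looks roughly like this (RemoteClusterUtils is the
real MM2 utility class; the surrounding variable names are illustrative):

import java.time.Duration;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

// Translate the group's offsets from the source cluster into the target
// cluster's coordinates -- the step this KIP has MM2 perform automatically.
Map<TopicPartition, OffsetAndMetadata> translated =
    RemoteClusterUtils.translateOffsets(
        targetClusterProps,   // client config for the target cluster
        "source",             // source cluster alias
        "my-consumer-group",
        Duration.ofSeconds(30));

// The translated offsets must then be committed on the target cluster,
// e.g. via AdminClient#alterConsumerGroupOffsets or a consumer's commitSync().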


Re: [VOTE] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Manikumar
+1 (binding)

Thanks for the KIP.

On Thu, May 21, 2020 at 7:42 PM Rajini Sivaram 
wrote:

> +1 (binding)
>
> Thanks for the KIP, Nikolay!
>
> Regards,
>
> Rajini
>
>
> On Thu, May 21, 2020 at 3:04 PM Ismael Juma  wrote:
>
> > Thanks for the KIP, +1 (binding)
> >
> > On Mon, Mar 2, 2020 at 10:55 AM Nikolay Izhikov 
> > wrote:
> >
> > > Hello.
> > >
> > > I would like to start vote for KIP-573: Enable TLSv1.3 by default
> > >
> > > KIP -
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
> > > Discussion thread -
> > >
> >
> https://lists.apache.org/thread.html/r1158b6caf416e7db802780de71115b3e2d3ef2c4664b7ec8cb32ea86%40%3Cdev.kafka.apache.org%3E
> >
>


Re: [VOTE]: KIP-604: Remove ZooKeeper Flags from the Administrative Tools

2020-05-21 Thread Colin McCabe
Hi all,

With 4 binding +1 votes from Guozhang Wang, Manikumar, Mickael Maison, and 
Jason Gustafson, and 1 non-binding vote from David Jacot, the vote passes.

thanks, all.
Colin


On Wed, May 20, 2020, at 18:16, Jason Gustafson wrote:
> Sounds good. +1 from me.
> 
> On Tue, May 19, 2020 at 5:41 PM Colin McCabe  wrote:
> 
> > On Tue, May 19, 2020, at 09:31, Jason Gustafson wrote:
> > > Hi Colin,
> > >
> > > Looks good. I just had one question. It sounds like your intent is to
> > > change kafka-configs.sh so that the --zookeeper flag is only supported
> > for
> > > bootstrapping. I assume in the case of SCRAM that we will only make this
> > > change after the broker API is available?
> > >
> > > Thanks,
> > > Jason
> >
> > Hi Jason,
> >
> > Yes, that's correct.  We will have the SCRAM API ready by the Kafka 3.0
> > release.
> >
> > best,
> > Colin
> >
> >
> > >
> > > On Tue, May 19, 2020 at 5:22 AM Mickael Maison  > >
> > > wrote:
> > >
> > > > +1 (binding)
> > > > Thanks Colin
> > > >
> > > > On Tue, May 19, 2020 at 10:57 AM Manikumar 
> > > > wrote:
> > > > >
> > > > > +1 (binding)
> > > > >
> > > > > Thanks for the KIP
> > > > >
> > > > > On Tue, May 19, 2020 at 12:29 PM David Jacot 
> > > > wrote:
> > > > >
> > > > > > +1 (non-binding).
> > > > > >
> > > > > > Thanks for the KIP.
> > > > > >
> > > > > > On Fri, May 15, 2020 at 12:41 AM Guozhang Wang  > >
> > > > wrote:
> > > > > >
> > > > > > > +1.
> > > > > > >
> > > > > > > Thanks Colin!
> > > > > > >
> > > > > > > Guozhang
> > > > > > >
> > > > > > > On Tue, May 12, 2020 at 3:45 PM Colin McCabe  > >
> > > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I'd like to start a vote on KIP-604: Remove ZooKeeper Flags
> > from
> > > > the
> > > > > > > > Administrative Tools.
> > > > > > > >
> > > > > > > > As a reminder, this KIP is for the next major release of Kafka,
> > > > the 3.0
> > > > > > > > release.   So it won't go into the upcoming 2.6 release.  It's
> > a
> > > > pretty
> > > > > > > > small change that just removes the --zookeeper flags from some
> > > > tools
> > > > > > and
> > > > > > > > removes a deprecated tool.  We haven't decided exactly when
> > we'll
> > > > do
> > > > > > 3.0
> > > > > > > > but I believe we will certainly want this change in that
> > release.
> > > > > > > >
> > > > > > > > The KIP does contain one small change relevant to Kafka 2.6:
> > adding
> > > > > > > > support for --if-exists and --if-not-exists in combination
> > with the
> > > > > > > > --bootstrap-server flag.
> > > > > > > >
> > > > > > > > best,
> > > > > > > > Colin
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > -- Guozhang
> > > > > > >
> > > > > >
> > > >
> > >
> >
>
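
As a concrete example of the kafka-topics.sh change called out above, the
following invocation (illustrative) would become possible without any
ZooKeeper access once --if-not-exists works with --bootstrap-server:

bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic my-topic --partitions 3 --replication-factor 2 \
  --if-not-exists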


Re: [DISCUSS] KIP-617: Allow Kafka Streams State Stores to be iterated backwards

2020-05-21 Thread John Roesler
Hi Jorge,

Thanks for that idea. I agree, a feature flag would protect anyone
who may be depending on the current behavior.

It seems better to locate the feature flag in the initialization logic of
the store, rather than have a method on the "live" store that changes
its behavior on the fly.

It seems like there are two options here, one is to add a new config:

StreamsConfig.ENABLE_BACKWARDS_ITERATION =
  "enable.backwards.iteration"

Or we can add a feature flag in Materialized, like

Materialized.enableBackwardsIteration()

I think I'd personally lean toward the config, for the following reason.
The concern that Sophie raised is that someone's program may depend
on the existing contract of getting an empty iterator. We don't want to
switch behavior when they aren't expecting it, so we provide them a
config to assert that they _are_ expecting the new behavior, which
means they take responsibility for updating their code to expect the new
behavior.

There doesn't seem to be a reason to offer a choice of behaviors on a
per-query, or per-store basis. We just want people to be not surprised
by this change in general.
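
For illustration, opting in via the config approach might look like this
(the constant and config name are only proposals at this point, not part
of StreamsConfig today):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Proposed opt-in flag: assert that the application expects reverse iteration.
props.put("enable.backwards.iteration", "true");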

What do you think?
Thanks,
-John

On Wed, May 20, 2020, at 17:37, Jorge Quilcate wrote:
> Thank you both for the great feedback.
> 
> I like the "fancy" proposal :), and how it removes the need for
> additional API methods. And with a feature flag on `StateStore`,
> disabled by default, it should not break current users.
> 
> The only side-effect I can think of is that, by moving the flag upwards,
> all later operations become affected, which might be OK for most (all?)
> cases. I can't think of a scenario where this would be an issue; I just
> want to point it out.
> 
> If moving to this approach, I'd like to check if I got this right before
> updating the KIP:
> 
> - only `StateStore` will change by having a new method:
> `backwardIteration()`, returning `false` by default to keep things compatible.
> - then all `*Stores` will have to update their implementation based on
> this flag.
> 
> 
> On 20/05/2020 21:02, Sophie Blee-Goldman wrote:
> >> There's no possibility that someone could be relying
> >> on iterating over that range in increasing order, because that's not what
> >> happens. However, they could indeed be relying on getting an empty
> > iterator
> >
> > I just meant that they might be relying on the assumption that the range
> > query
> > will never return results with decreasing keys. The empty iterator wouldn't
> > break that contract, but of course a surprise reverse iterator would.
> >
> > FWIW I actually am in favor of automatically converting to a reverse
> > iterator,
> > I just thought we should consider whether this should be off by default or
> > even possible to disable at all.
> >
> > On Tue, May 19, 2020 at 7:42 PM John Roesler  wrote:
> >
> >> Thanks for the response, Sophie,
> >>
> >> I wholeheartedly agree we should take as much into account as possible
> >> up front, rather than regretting our decisions later. I actually do share
> >> your vague sense of worry, which was what led me to say initially that I
> >> thought my counterproposal might be "too fancy". Sometimes, it's better
> >> to be explicit instead of "elegant", if we think more people will be
> >> confused
> >> than not.
> >>
> >> I really don't think that there's any danger of "relying on a bug" here,
> >> although
> >> people certainly could be relying on current behavior. One thing to be
> >> clear
> >> about (which I just left a more detailed comment in KAFKA-8159 about) is
> >> that
> >> when we say something like key1 > key2, this ordering is defined by the
> >> serde's output and nothing else.
> >>
> >> Currently, thanks to your fix in https://github.com/apache/kafka/pull/6521
> >> ,
> >> the store contract is that for range scans, if from > to, then the store
> >> must
> >> return an empty iterator. There's no possibility that someone could be
> >> relying
> >> on iterating over that range in increasing order, because that's not what
> >> happens. However, they could indeed be relying on getting an empty
> >> iterator.
> >>
> >> My counterproposal was to actually change this contract to say that the
> >> store
> >> must return an iterator over the keys in that range, but in the reverse
> >> order.
> >> So, in addition to considering whether this idea is "too fancy" (aka
> >> confusing),
> >> we should also consider the likelihood of breaking an existing program with
> >> this behavior/contract change.
> >>
> >> To echo your clarification, I'm also not advocating strongly in favor of my
> >> proposal. I just wanted to present it for consideration alongside Jorge's
> >> original one.
> >>
> >> Thanks for raising these very good points,
> >> -John
> >>
> >> On Tue, May 19, 2020, at 20:49, Sophie Blee-Goldman wrote:
>  Rather than working around it, I think we should just fix it
> >>> Now *that's* a "fancy" idea :P
> >>>
> >>> That was my primary concern, although I do have a vague sense of worry
> >>> that 

Re: [VOTE] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Rajini Sivaram
+1 (binding)

Thanks for the KIP, Nikolay!

Regards,

Rajini


On Thu, May 21, 2020 at 3:04 PM Ismael Juma  wrote:

> Thanks for the KIP, +1 (binding)
>
> On Mon, Mar 2, 2020 at 10:55 AM Nikolay Izhikov 
> wrote:
>
> > Hello.
> >
> > I would like to start vote for KIP-573: Enable TLSv1.3 by default
> >
> > KIP -
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
> > Discussion thread -
> >
> https://lists.apache.org/thread.html/r1158b6caf416e7db802780de71115b3e2d3ef2c4664b7ec8cb32ea86%40%3Cdev.kafka.apache.org%3E
>


Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Ismael Juma
Given what we've seen in the test, it would be good to mention that TLS 1.3
will not work for users who have configured ciphers explicitly. If such
users want to use TLS 1.3, they will have to update the list of ciphers to
include TLS 1.3 ciphers (which use a different naming convention). TLS 1.2
will continue to work as usual, so there is no compatibility issue.
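
For example, a client that pinned TLS 1.2 cipher names would need something
like the following to opt into TLS 1.3 as well (the concrete suites are just
an illustration; they are standard JSSE names):

    Properties props = new Properties();
    props.put("ssl.enabled.protocols", "TLSv1.2,TLSv1.3");
    // TLS 1.2 suites follow the TLS_<kx>_WITH_<cipher> convention,
    // TLS 1.3 suites follow the shorter TLS_<cipher> convention
    props.put("ssl.cipher.suites",
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_AES_256_GCM_SHA384");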

Ismael

On Tue, May 19, 2020 at 12:19 PM Nikolay Izhikov 
wrote:

> PR - https://github.com/apache/kafka/pull/8695
>
> > On May 18, 2020, at 23:30, Nikolay Izhikov 
> wrote:
> >
> > Hello, Colin
> >
> > We need the hack only because TLSv1.3 is not supported in Java 8.
> >
> >> Java 8 will receive TLS 1.3 support later this year (
> https://java.com/en/jre-jdk-cryptoroadmap.html)
> >
> > We can
> >
> > 1. Enable TLSv1.3 for Java 11 for now, and remove the workaround after
> Java 8 gets TLSv1.3 support.
> > 2. Or we can wait and enable it only after the Java 8 update.
> >
> > What do you think?
> >
> >> On May 18, 2020, at 22:51, Ismael Juma  wrote:
> >>
> >> Yeah, agreed. One option is to actually only change this in Apache Kafka
> >> 3.0 and avoid the hack altogether. We could make TLS 1.3 the default and
> >> have 1.2 as one of the enabled protocols.
> >>
> >> Ismael
> >>
> >> On Mon, May 18, 2020 at 12:24 PM Colin McCabe 
> wrote:
> >>
> >>> Hmm.  It would be good to figure out if we are going to remove this
> >>> compatibility hack in the next major release of Kafka?  In other
> words, in
> >>> Kafka 3.0, will we enable TLS 1.3 by default even if the cipher suite
> is
> >>> specified?
> >>>
> >>> best,
> >>> Colin
> >>>
> >>>
> >>> On Mon, May 18, 2020, at 09:26, Ismael Juma wrote:
>  Sounds good.
> 
>  Ismael
> 
> 
>  On Mon, May 18, 2020, 9:03 AM Nikolay Izhikov 
> >>> wrote:
> 
> >> A safer approach may be to only add TLS 1.3 to the list if the
> cipher
> > suite config has not been specified.
> >> So, if TLS 1.3 is added to the list by Kafka, it would seem that it
> > would not work if the user specified a list of cipher suites for
> >>> previous
> > TLS versions
> >
> > Let’s just add test for this case?
> > I can prepare the preliminary PR for this KIP and add this kind of
> >>> test to
> > it.
> >
> > What do you think?
> >
> >
> >> On May 18, 2020, at 18:59, Nikolay Izhikov 
> > wrote:
> >>
> >>> 1. I meant that `ssl.protocol` is TLSv1.2 while
> >>> `ssl.enabled.protocols`
> > is `TLSv1.2, TLSv1.3`. How do these two configs interact
> >>
> >> `ssl.protocol` is what will be used, by default, in this KIP is
> stays
> > unchanged (TLSv1.2) Please, see [1]
> >> `ssl.enabled.protocols` is list of protocols that  *can* be used.
> >>> This
> > value is just passed to the `SSLEngine` implementation.
> >> Please, see DefaultSslEngineFactory#createSslEngine [2]
> >>
> >>> 2. My question is not about obsolete protocols, it is about people
> > using TLS 1.2 with specified cipher suites. How will that behave when
> >>> TLS
> > 1.3 is enabled by default?
> >>
> >> They don’t change anything and it all just works as expected on Java 11.
> >>
> >>> 3. An additional question is how does this impact Java 8 users?
> >>
> >> Yes.
> >> If SSLEngine doesn’t support TLSv1.3 then Java 8 users should
> >>> explicitly
> > modify `ssl.enabled.protocols` and set it to `TLSv1.2`.
> >>
> >> [1]
> >
> >>>
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L218
> >> [2]
> >
> >>>
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L164
> >>
> >>> On May 18, 2020, at 17:34, Ismael Juma 
> >>> wrote:
> >>>
> >>> Nikolay,
> >>>
> >>> Thanks for the comments. More below:
> >>>
> >>> 1. I meant that `ssl.protocol` is TLSv1.2 while
> >>> `ssl.enabled.protocols`
> > is `TLSv1.2, TLSv1.3`. How do these two configs interact?
> >>> 2. My question is not about obsolete protocols, it is about people
> > using TLS 1.2 with specified cipher suites. How will that behave when
> >>> TLS
> > 1.3 is enabled by default?
> >>> 3. An additional question is how does this impact Java 8 users?
> >>> Java 8
> > will receive TLS 1.3 support later this year (
> > https://java.com/en/jre-jdk-cryptoroadmap.html), but it currently
> does
> > not support it. One way to handle this would be to check if the
> >>> underlying
> > JVM supports TLS 1.3 before enabling it.
> >>>
> >>> I hope this clarifies my questions.
> >>>
> >>> Ismael
> >>>
> >>> On Mon, May 18, 2020 at 6:44 AM Nikolay Izhikov <
> >>> nizhi...@apache.org>
> > wrote:
> >>> Hello, Ismael.
> >>>
> >>> Here is answers to your questions:
> >>>
>  Quick question, the following is meant to include 

Re: [VOTE] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Ismael Juma
Thanks for the KIP, +1 (binding)

On Mon, Mar 2, 2020 at 10:55 AM Nikolay Izhikov  wrote:

> Hello.
>
> I would like to start vote for KIP-573: Enable TLSv1.3 by default
>
> KIP -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
> Discussion thread -
> https://lists.apache.org/thread.html/r1158b6caf416e7db802780de71115b3e2d3ef2c4664b7ec8cb32ea86%40%3Cdev.kafka.apache.org%3E


[jira] [Created] (KAFKA-10030) Throw exception while fetching a key from a single partition

2020-05-21 Thread Dima R (Jira)
Dima R created KAFKA-10030:
--

 Summary: Throw exception while fetching a key from a single 
partition
 Key: KAFKA-10030
 URL: https://issues.apache.org/jira/browse/KAFKA-10030
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.5.0
 Environment: StreamsConfig.NUM_STREAM_THREADS_CONFIG=2
Reporter: Dima R
 Fix For: 2.6.0, 2.5.1


StreamThreadStateStoreProvider#stores throws an exception whenever the taskId is
not found, which is not correct behaviour in a multi-threaded environment where
state store partitions are distributed among several StreamTasks.
{code:java}
final Task task = tasks.get(keyTaskId);
if (task == null) {
    throw new InvalidStateStoreException(
        String.format("The specified partition %d for store %s does not exist.",
            storeQueryParams.partition(),
            storeName));
}
{code}
Reproducible when the number of stream threads is more than 1:

StoreQueryIntegrationTest#streamsConfiguration

config.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);

 

The suggested solution is to not throw an exception if at least one state store
is found, which is always the case when using StoreQueryParameters.withPartition.
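
A rough sketch of that direction (illustrative only, not an actual patch):

{code:java}
// In StreamThreadStateStoreProvider#stores: instead of failing when this
// thread does not own the task, return an empty list so the provider of
// the thread that does own the partition can serve the query.
final Task task = tasks.get(keyTaskId);
if (task == null) {
    return Collections.emptyList();
}
{code}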



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSSION] KIP-418: A method-chaining way to branch KStream

2020-05-21 Thread Ivan Ponomarev

Hi,

Thanks Matthias for your suggestion: yes, I agree that getting rid of 
`with[Java]Consumer` makes this thing 'as simple as possible, but not 
simpler'.


I made some quick API mocking in my IDE and tried to implement the
examples from the KIP.


1. Having to return something from a lambda is not a very big deal.

2. For a moment I thought that I wouldn't be able to use method references
for already written stream consumers, but then I realized that I can
just change my methods from returning void to returning the input
parameter and use references to them. Not very convenient, but passable;
see the sketch below.
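
For illustration (class context, types and topic name are made up):

    // before: void handle(KStream<String, String> s) { s.to("out"); }
    // after: returning the input lets `this::handle` fit a Function parameter
    KStream<String, String> handle(KStream<String, String> s) {
        s.to("out");
        return s;
    }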


So, I'm ready to agree: 1) we use only functions, no consumers; 2) when a
function returns null, we don't insert its branch into the resulting map.


Usually it's better to implement a non-perfect but workable solution as
a first approximation. And later we can always add to `Branched`
anything we want.


3. Do we have any guidelines on how parameter classes like Branched
should be built? First of all, it seems that `as` is now preferred
over `withName` (although as you probably know it clashes with Kotlin's
`as` operator).


Then, while trying to mock the APIs, I found out that my Java cannot 
infer types in the following construction:


.branch((key, value) -> value == null,
   Branched.as("foo").withChain(s -> s.mapValues(...)))


so I have to write

.branch((key, value) -> value == null,
   Branched.<K, V>as("foo").withChain(s -> s.mapValues(...)))


This is not tolerable IMO, so this is the list of `Branched` methods
that I came up with (could you please validate it):


static <K, V> Branched<K, V> as(String name);

static <K, V> Branched<K, V> with(Function<? super KStream<K, V>, ?
extends KStream<K, V>> chain);


static <K, V> Branched<K, V> with(Function<? super KStream<K, V>, ?
extends KStream<K, V>> chain, String name);


// non-static!
Branched<K, V> withChain(Function<? super KStream<K, V>, ? extends
KStream<K, V>> chain);
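
For example, with the static `with(chain, name)` variant, type inference
should work without a witness (sketch only, against the mocked API above):

    .branch((key, value) -> value == null,
        Branched.with(s -> s.mapValues(v -> "n/a"), "foo"))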



4. And one more. What do you think, do we need that flexibility:

Function<? super KStream<K, V>, ? extends KStream<K, V>> chain

vs.

Function<? super KStream<K, V>, ? extends KStream<? extends K, ? extends V>> chain


??

Regards,

Ivan


21.05.2020 6:54, John Roesler wrote:

Thanks for this thought, Matthias,

Your idea has a few aspects I find attractive:
1. There’s no ambiguity at all about what will be in the map, because there’s 
only one thing that could be there, which is whatever is returned from the 
chain function.
2. We keep the API smaller. Thanks to the extensible way this KIP is designed, 
it would be trivially easy to add the “terminal” chain later. As you say, fewer 
concepts leads to an API that is easier to learn.
3. We get to side-step the naming of this method. Although I didn’t complain 
about withJavaConsumer, it was only because I couldn’t think of a better name. 
Still, it’s somewhat unsatisfying to name a method after its argument type, 
since this provides no information at all about what the method does. I was 
willing to accept it because I didn’t have an alternative, but I would be happy 
to skip this method for now to avoid the problem until we have more inspiration.

The only con I see is that it makes the code a little less ergonomic to write 
when you don’t want to return the result of the chain (such as when the chain 
is terminal), since in your example, you have to declare a block with a return 
statement at the end. It’s not ideal, but it doesn’t seem too bad to me.

Lastly, on the null question, I’d be fine with allowing a null result, which 
would just remove the branch from the returned map. It seems nicer than forcing 
people to pick a stream to return when their chain is terminal and they don’t 
want to use the result later.

Thanks again for sharing the idea,
John

On Wed, May 20, 2020, at 18:17, Matthias J. Sax wrote:

Thanks for updating the KIP!

I guess the only open question is about `Branched.withJavaConsumer` and
its relationship to the returned `Map`.

Originally, we discussed two main patterns:

  (1) split a stream and return the substreams for further processing
  (2) split a stream and modify the substreams with in-place method chaining

To combine both patterns we wanted to allow for

   -> split a stream, modify the substreams, and return the _modified_
substreams for further processing


But is it also an issue? With Kafka Streams, we can split the topology graph at
any point. Technically, it's OK to do both: feed the KStream to a
[Java]Consumer AND save it in the resulting Map. If one doesn't need the stream
in the Map, one simply does not extract it from there.


That is of course possible. However, it introduces some "hidden" semantics:

  - using `withChain` I get the modified sub-stream
  - using `withJavaConsumer` I get the unmodified sub-stream

This seems to be quite subtle to me.



From my understanding, the original idea of `withJavaConsumer` was to
model a terminal operation, ie, it should be similar to:

Branched.withChain(s -> {
   s.to();
   return null;
})

However, I am not sure if we should even allow `withChain()` to return
`null`? IMHO, we should throw an exception for this case to avoid a `key

[jira] [Created] (KAFKA-10029) Selector.completedReceives should not be modified when channel is closed

2020-05-21 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-10029:
--

 Summary: Selector.completedReceives should not be modified when 
channel is closed
 Key: KAFKA-10029
 URL: https://issues.apache.org/jira/browse/KAFKA-10029
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 2.5.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.6.0, 2.5.1


Selector.completedReceives is processed using `forEach` by SocketServer and
NetworkClient when processing receives from a poll. Since we may close channels
while processing receives, changes to the map while closing channels can result
in ConcurrentModificationException. We clear the entire map after each poll
anyway, so we don't need to remove the channel from the map while closing
channels.
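
Illustrative repro of the hazard with plain java.util (not the Selector code
itself):

{code:java}
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
    public static void main(String[] args) {
        Map<String, String> completedReceives = new HashMap<>();
        completedReceives.put("channel-1", "receive-1");
        completedReceives.put("channel-2", "receive-2");
        try {
            completedReceives.forEach((channel, receive) ->
                // a close-channel path that mutates the map mid-iteration
                completedReceives.remove(channel));
        } catch (ConcurrentModificationException e) {
            // HashMap fails fast on structural modification during forEach
        }
        // the fix described above: leave entries alone while iterating,
        // then clear the whole map once after the poll
        completedReceives.clear();
    }
}
{code}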



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10028) KIP-584: Implement write path for versioning scheme for features

2020-05-21 Thread Kowshik Prakasam (Jira)
Kowshik Prakasam created KAFKA-10028:


 Summary: KIP-584: Implement write path for versioning scheme for 
features
 Key: KAFKA-10028
 URL: https://issues.apache.org/jira/browse/KAFKA-10028
 Project: Kafka
  Issue Type: Sub-task
Reporter: Kowshik Prakasam


The goal is to implement the various classes and integrations for the write path
of the feature versioning system
([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
This is preceded by the read path implementation (KAFKA-10027). The write path
implementation involves developing the new controller API, UpdateFeatures, which
enables transactional application of a set of cluster-wide feature updates to
the ZK {{'/features'}} node, along with the required ACL permissions.

 

Details about the write path are explained [in this 
part|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-ChangestoKafkaController]
 of the KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.4-jdk8 #212

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9950: Construct new ConfigDef for MirrorTaskConfig before 
defining


--
[...truncated 27.51 KB...]

org.apache.kafka.common.network.SslSelectorTest > 
testInboundConnectionsCountInConnectionCreationMetric STARTED

org.apache.kafka.common.network.SslSelectorTest > 
testInboundConnectionsCountInConnectionCreationMetric PASSED

org.apache.kafka.common.network.SslSelectorTest > testNoRouteToHost STARTED

org.apache.kafka.common.network.SslSelectorTest > testNoRouteToHost PASSED

org.apache.kafka.common.network.SslSelectorTest > testNormalOperation STARTED

org.apache.kafka.common.network.SslSelectorTest > testNormalOperation PASSED

org.apache.kafka.common.network.SslSelectorTest > testMuteOnOOM STARTED

org.apache.kafka.common.network.SslSelectorTest > testMuteOnOOM PASSED

org.apache.kafka.common.network.SslSelectorTest > testConnectionRefused STARTED

org.apache.kafka.common.network.SslSelectorTest > testConnectionRefused PASSED

org.apache.kafka.common.network.SslSelectorTest > testEmptyRequest STARTED

org.apache.kafka.common.network.SslSelectorTest > testEmptyRequest PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testListenerConfigOverride STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testListenerConfigOverride PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationCN STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationCN PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testNetworkThreadTimeRecorded STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testNetworkThreadTimeRecorded PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequestedValidProvided STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequestedValidProvided PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientEndpointNotValidated STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientEndpointNotValidated PASSED

org.apache.kafka.common.network.SslTransportLayerTest > testUnsupportedCiphers 
STARTED

org.apache.kafka.common.network.SslTransportLayerTest > testUnsupportedCiphers 
PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUnsupportedTLSVersion STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUnsupportedTLSVersion PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testIOExceptionsDuringHandshakeRead STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testIOExceptionsDuringHandshakeRead PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequiredNotProvided STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequiredNotProvided PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testGracefulRemoteCloseDuringHandshakeWrite STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testGracefulRemoteCloseDuringHandshakeWrite PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequestedNotProvided STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequestedNotProvided PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testIOExceptionsDuringHandshakeWrite STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testIOExceptionsDuringHandshakeWrite PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidKeystorePassword STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidKeystorePassword PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationDisabledNotProvided STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationDisabledNotProvided PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationSanDns STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationSanDns PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testEndpointIdentificationNoReverseLookup STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testEndpointIdentificationNoReverseLookup PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUngracefulRemoteCloseDuringHandshakeWrite STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testUngracefulRemoteCloseDuringHandshakeWrite PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testGracefulRemoteCloseDuringHandshakeRead STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testGracefulRemoteCloseDuringHandshakeRead PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 

Build failed in Jenkins: kafka-2.5-jdk8 #128

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9950: Construct new ConfigDef for MirrorTaskConfig before 
defining


--
Started by an SCM change
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H36 (ubuntu) in workspace 
/home/jenkins/jenkins-slave/workspace/kafka-2.5-jdk8
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init /home/jenkins/jenkins-slave/workspace/kafka-2.5-jdk8 # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.5^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.5^{commit} # timeout=10
Checking out Revision 887a7869f7e16aae7de71bfa813f6d855d4a1e14 
(refs/remotes/origin/2.5)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 887a7869f7e16aae7de71bfa813f6d855d4a1e14
Commit message: "KAFKA-9950: Construct new ConfigDef for MirrorTaskConfig 
before defining new properties (#8608)"
 > git rev-list --no-walk 06844e78e7c75820784928360c9221beb633d9e4 # timeout=10
[kafka-2.5-jdk8] $ /bin/bash -xe /tmp/jenkins3101982172652546570.sh
+ rm -rf /home/jenkins/jenkins-slave/workspace/kafka-2.5-jdk8/.gradle
[kafka-2.5-jdk8] $ /bin/bash -xe /tmp/jenkins406411417158751719.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon --continue -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
-PscalaVersion=2.12 clean test
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/5.6.2/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing

> Configure project :
Building project 'core' with Scala version 2.12.10
Building project 'streams-scala' with Scala version 2.12.10

> Task :clean
> Task :clients:clean UP-TO-DATE
> Task :connect:clean UP-TO-DATE
> Task :core:clean UP-TO-DATE
> Task :examples:clean UP-TO-DATE
> Task :generator:clean UP-TO-DATE
> Task :jmh-benchmarks:clean UP-TO-DATE
> Task :log4j-appender:clean UP-TO-DATE
> Task :streams:clean UP-TO-DATE
> Task :tools:clean UP-TO-DATE
> Task :connect:api:clean UP-TO-DATE
> Task :connect:basic-auth-extension:clean UP-TO-DATE
> Task :connect:file:clean UP-TO-DATE
> Task :connect:json:clean UP-TO-DATE
> Task :connect:mirror:clean UP-TO-DATE
> Task :connect:mirror-client:clean UP-TO-DATE
> Task :connect:runtime:clean UP-TO-DATE
> Task :connect:transforms:clean UP-TO-DATE
> Task :streams:examples:clean UP-TO-DATE
> Task :streams:streams-scala:clean UP-TO-DATE
> Task :streams:test-utils:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-10:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-11:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-20:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-21:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-22:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-23:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-24:clean UP-TO-DATE
> Task :compileJava NO-SOURCE
> Task :processResources NO-SOURCE
> Task :classes UP-TO-DATE

> Task :rat
Rat report: 
/home/jenkins/jenkins-slave/workspace/kafka-2.5-jdk8/build/rat/rat-report.html

> Task :compileTestJava NO-SOURCE
> Task :processTestResources NO-SOURCE
> Task :testClasses UP-TO-DATE
> Task :test NO-SOURCE
> Task :generator:compileJava
> Task :generator:processResources NO-SOURCE
> Task :generator:classes

> Task :clients:processMessages
MessageGenerator: processed 99 Kafka message JSON files(s).

> Task :clients:compileJava
> Task :clients:processResources
> Task :clients:classes
> Task :clients:checkstyleMain

> Task :clients:processTestMessages
MessageGenerator: processed 1 Kafka message JSON files(s).

> Task :clients:compileTestJava
> Task :clients:processTestResources
> Task :clients:testClasses
> Task :clients:checkstyleTest
> 

Build failed in Jenkins: kafka-trunk-jdk8 #4555

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8869: Remove task configs for deleted connectors from config

[github] KAFKA-9950: Construct new ConfigDef for MirrorTaskConfig before 
defining

[github] KAFKA-9855 - return cached Structs for Schemas with no fields (#8472)


--
[...truncated 2.08 MB...]

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotBlowUpOnNonExistentKeyWhenDeleting STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotBlowUpOnNonExistentKeyWhenDeleting PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotFlushAfterDelete STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotFlushAfterDelete PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotForwardCleanEntryOnEviction STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotForwardCleanEntryOnEviction PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotBlowUpOnNonExistentNamespaceWhenDeleting STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotBlowUpOnNonExistentNamespaceWhenDeleting PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
cacheOverheadsSmallValues STARTED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldFetchExactKeys PASSED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldFetchExactSession STARTED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldFetchExactSession PASSED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldLogAndMeasureExpiredRecordsWithBuiltInMetricsVersion0100To24 STARTED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldLogAndMeasureExpiredRecordsWithBuiltInMetricsVersion0100To24 PASSED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > shouldRemove 
STARTED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > shouldRemove 
PASSED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldFindValuesWithinMergingSessionWindowRange STARTED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldFindValuesWithinMergingSessionWindowRange PASSED

org.apache.kafka.streams.state.internals.RocksDBSessionStoreTest > 
shouldLogAndMeasureExpiredRecordsWithBuiltInMetricsVersionLatest STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
cacheOverheadsSmallValues PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > shouldPutIfAbsent 
STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > shouldPutIfAbsent 
PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldReturnFalseIfNoNextKey STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldReturnFalseIfNoNextKey PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldFlushDirtyEntriesForNamespace STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldFlushDirtyEntriesForNamespace PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > shouldPeekNextKey 
STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > shouldPeekNextKey 
PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldReturnNullIfKeyIsNull STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldReturnNullIfKeyIsNull PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotFlushCleanEntriesForNamespace STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldNotFlushCleanEntriesForNamespace PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldEvictAfterPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldEvictAfterPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldEvictAfterPutAll STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldEvictAfterPutAll PASSED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldEvictImmediatelyIfCacheSizeIsVerySmall STARTED

org.apache.kafka.streams.state.internals.ThreadCacheTest > 
shouldEvictImmediatelyIfCacheSizeIsVerySmall PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldNotHaveNextIfOnlyCacheItemsAndAllDeleted STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldNotHaveNextIfOnlyCacheItemsAndAllDeleted PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldNotHaveNextIfAllCachedItemsDeleted STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheKeyValueBytesStoreIteratorTest
 > shouldNotHaveNextIfAllCachedItemsDeleted PASSED


Build failed in Jenkins: kafka-trunk-jdk11 #1484

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9950: Construct new ConfigDef for MirrorTaskConfig before 
defining

[github] KAFKA-9855 - return cached Structs for Schemas with no fields (#8472)


--
Started by an SCM change
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu) in workspace 
/home/jenkins/jenkins-slave/workspace/kafka-trunk-jdk11
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init /home/jenkins/jenkins-slave/workspace/kafka-trunk-jdk11 # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision aa1b3c1107f53638ec2a4a6c2f06a4626545fad3 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f aa1b3c1107f53638ec2a4a6c2f06a4626545fad3
Commit message: "KAFKA-9855 - return cached Structs for Schemas with no fields 
(#8472)"
 > git rev-list --no-walk 82f5efabc9249e0accf530a6a82afc2f32e65ec6 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins7564565232869140222.sh
+ rm -rf /home/jenkins/jenkins-slave/workspace/kafka-trunk-jdk11/.gradle
ERROR: No tool found matching GRADLE_4_10_2_HOME
Setting GRADLE_4_10_3_HOME=/home/jenkins/tools/gradle/4.10.3
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins7886955468109536163.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon --continue -PmaxParallelForks=2 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean test -PscalaVersion=2.12
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/6.4.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing

> Configure project :
Building project 'core' with Scala version 2.12.11
Building project 'streams-scala' with Scala version 2.12.11

> Task :clean
> Task :clients:clean UP-TO-DATE
> Task :connect:clean UP-TO-DATE
> Task :core:clean UP-TO-DATE
> Task :examples:clean UP-TO-DATE
> Task :generator:clean UP-TO-DATE
> Task :jmh-benchmarks:clean UP-TO-DATE
> Task :log4j-appender:clean UP-TO-DATE
> Task :streams:clean UP-TO-DATE
> Task :tools:clean UP-TO-DATE
> Task :connect:api:clean UP-TO-DATE
> Task :connect:basic-auth-extension:clean UP-TO-DATE
> Task :connect:file:clean UP-TO-DATE
> Task :connect:json:clean UP-TO-DATE
> Task :connect:mirror:clean UP-TO-DATE
> Task :connect:mirror-client:clean UP-TO-DATE
> Task :connect:runtime:clean UP-TO-DATE
> Task :connect:transforms:clean UP-TO-DATE
> Task :streams:examples:clean UP-TO-DATE
> Task :streams:streams-scala:clean UP-TO-DATE
> Task :streams:test-utils:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-10:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-11:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-20:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-21:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-22:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-23:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-24:clean UP-TO-DATE
> Task :streams:upgrade-system-tests-25:clean UP-TO-DATE
> Task :compileJava NO-SOURCE
> Task :processResources NO-SOURCE
> Task :classes UP-TO-DATE

> Task :rat
Rat report: 
/home/jenkins/jenkins-slave/workspace/kafka-trunk-jdk11/build/rat/rat-report.html

> Task :compileTestJava NO-SOURCE
> Task :processTestResources NO-SOURCE
> Task :testClasses UP-TO-DATE
> Task :test NO-SOURCE
> Task :generator:compileJava
> Task :generator:processResources NO-SOURCE
> Task :generator:classes

> Task :clients:processMessages
MessageGenerator: processed 103 Kafka message JSON 

Build failed in Jenkins: kafka-2.5-jdk8 #127

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-8869: Remove task configs for deleted connectors from config


--
[...truncated 1.83 MB...]

kafka.security.authorizer.AclAuthorizerTest > testAuthorizeWithPrefixedResource 
PASSED

kafka.security.authorizer.AclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.authorizer.AclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.authorizer.AclAuthorizerTest > testAuthorizerNoZkConfig STARTED

kafka.security.authorizer.AclAuthorizerTest > testAuthorizerNoZkConfig PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.authorizer.AclAuthorizerTest > testAddAclsOnPrefixedResource 
STARTED

kafka.security.authorizer.AclAuthorizerTest > testAddAclsOnPrefixedResource 
PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testAuthorizerZkConfigFromKafkaConfigWithDefaults STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testAuthorizerZkConfigFromKafkaConfigWithDefaults PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.authorizer.AclAuthorizerTest > testNoAclFound STARTED

kafka.security.authorizer.AclAuthorizerTest > testNoAclFound PASSED

kafka.security.authorizer.AclAuthorizerTest > testAclInheritance STARTED

kafka.security.authorizer.AclAuthorizerTest > testAclInheritance PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.authorizer.AclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.authorizer.AclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testAuthorizerZkConfigFromPrefixOverrides STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testAuthorizerZkConfigFromPrefixOverrides PASSED

kafka.security.authorizer.AclAuthorizerTest > testAclsFilter STARTED

kafka.security.authorizer.AclAuthorizerTest > testAclsFilter PASSED

kafka.security.authorizer.AclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.authorizer.AclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.authorizer.AclAuthorizerTest > testWildCardAcls STARTED

kafka.security.authorizer.AclAuthorizerTest > testWildCardAcls PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.authorizer.AclAuthorizerTest > testTopicAcl STARTED

kafka.security.authorizer.AclAuthorizerTest > testTopicAcl PASSED

kafka.security.authorizer.AclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.authorizer.AclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.authorizer.AclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.authorizer.AclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.authorizer.AclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.authorizer.AclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.authorizer.AclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.authorizer.AclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.authorizer.AclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.authorizer.AclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.authorizer.AclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.authorizer.AclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.authorizer.AclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.authorizer.AclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.authorizer.AclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED


Build failed in Jenkins: kafka-2.3-jdk8 #203

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-8869: Remove task configs for deleted connectors from config


--
[...truncated 2.24 MB...]

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage STARTED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex STARTED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap STARTED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > 
testReloadLargestTimestampAndNextOffsetAfterTruncation STARTED

kafka.log.LogSegmentTest > 
testReloadLargestTimestampAndNextOffsetAfterTruncation PASSED

kafka.log.LogSegmentTest > testTruncate STARTED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testRecoverTransactionIndex STARTED

kafka.log.LogSegmentTest > testRecoverTransactionIndex PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset STARTED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage STARTED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes STARTED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testTruncateEmptySegment STARTED

kafka.log.LogSegmentTest > testTruncateEmptySegment PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptTimeIndex STARTED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptTimeIndex PASSED

kafka.log.LogSegmentTest > shouldTruncateEvenIfOffsetPointsToAGapInTheLog 
STARTED

kafka.log.LogSegmentTest > shouldTruncateEvenIfOffsetPointsToAGapInTheLog PASSED

kafka.log.LogSegmentTest > testMaxOffset STARTED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation STARTED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testFindOffsetByTimestamp STARTED

kafka.log.LogSegmentTest > testFindOffsetByTimestamp PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment STARTED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast STARTED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown STARTED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testAppendFromFile STARTED

kafka.log.LogSegmentTest > testAppendFromFile PASSED

kafka.log.LogSegmentTest > testTruncateFull STARTED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.TransactionIndexTest > testTruncate STARTED

kafka.log.TransactionIndexTest > testTruncate PASSED

kafka.log.TransactionIndexTest > testAbortedTxnSerde STARTED

kafka.log.TransactionIndexTest > testAbortedTxnSerde PASSED

kafka.log.TransactionIndexTest > testRenameIndex STARTED

kafka.log.TransactionIndexTest > testRenameIndex PASSED

kafka.log.TransactionIndexTest > testPositionSetCorrectlyWhenOpened STARTED

kafka.log.TransactionIndexTest > testPositionSetCorrectlyWhenOpened PASSED

kafka.log.TransactionIndexTest > testLastOffsetCannotDecrease STARTED

kafka.log.TransactionIndexTest > testLastOffsetCannotDecrease PASSED

kafka.log.TransactionIndexTest > testLastOffsetMustIncrease STARTED

kafka.log.TransactionIndexTest > testLastOffsetMustIncrease PASSED

kafka.log.TransactionIndexTest > testSanityCheck STARTED

kafka.log.TransactionIndexTest > testSanityCheck PASSED

kafka.log.TransactionIndexTest > testCollectAbortedTransactions STARTED

kafka.log.TransactionIndexTest > testCollectAbortedTransactions PASSED

kafka.api.MetricsTest > testMetrics STARTED

kafka.api.MetricsTest > testMetrics PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII STARTED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII STARTED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.ApiVersionTest > testApiVersionUniqueIds STARTED

kafka.api.ApiVersionTest > testApiVersionUniqueIds PASSED

kafka.api.ApiVersionTest > testMinSupportedVersionFor STARTED

kafka.api.ApiVersionTest > testMinSupportedVersionFor PASSED

kafka.api.ApiVersionTest > testShortVersion STARTED

kafka.api.ApiVersionTest > testShortVersion PASSED

kafka.api.ApiVersionTest > testApply STARTED

kafka.api.ApiVersionTest > testApply PASSED

kafka.api.ApiVersionTest > testApiVersionValidator STARTED

kafka.api.ApiVersionTest > testApiVersionValidator PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0 compressionType = 
none] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[0 compressionType = 
none] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1 compressionType = 
gzip] STARTED


Build failed in Jenkins: kafka-trunk-jdk14 #112

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9950: Construct new ConfigDef for MirrorTaskConfig before 
defining

[github] KAFKA-9855 - return cached Structs for Schemas with no fields (#8472)


--
[...truncated 47.73 KB...]

org.apache.kafka.common.network.SelectorTest > testImmediatelyConnectedCleaned 
STARTED

org.apache.kafka.common.network.SelectorTest > testImmediatelyConnectedCleaned 
PASSED

org.apache.kafka.common.network.SelectorTest > testExistingConnectionId STARTED

org.apache.kafka.common.network.SelectorTest > testExistingConnectionId PASSED

org.apache.kafka.common.network.SelectorTest > testCantSendWithoutConnecting 
STARTED

org.apache.kafka.common.network.SelectorTest > testCantSendWithoutConnecting 
PASSED

org.apache.kafka.common.network.SelectorTest > testCloseOldestConnection STARTED

org.apache.kafka.common.network.SelectorTest > testCloseOldestConnection PASSED

org.apache.kafka.common.network.SelectorTest > testServerDisconnect STARTED

org.apache.kafka.common.network.SelectorTest > testServerDisconnect PASSED

org.apache.kafka.common.network.SelectorTest > 
testMetricsCleanupOnSelectorClose STARTED

org.apache.kafka.common.network.SelectorTest > 
testMetricsCleanupOnSelectorClose PASSED

org.apache.kafka.common.network.SelectorTest > 
testPartialSendAndReceiveReflectedInMetrics STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testCustomClientAndServerSslEngineFactory[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testListenerConfigOverride[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testListenerConfigOverride[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationCN[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.network.SelectorTest > 
testPartialSendAndReceiveReflectedInMetrics PASSED

org.apache.kafka.common.network.SelectorTest > 
testWriteCompletesSendWithNoBytesWritten STARTED

org.apache.kafka.common.network.SelectorTest > 
testWriteCompletesSendWithNoBytesWritten PASSED

org.apache.kafka.common.network.SelectorTest > testIdleExpiryWithoutReadyKeys 
STARTED

org.apache.kafka.common.network.SelectorTest > testIdleExpiryWithoutReadyKeys 
PASSED

org.apache.kafka.common.network.SelectorTest > testConnectionsByClientMetric 
STARTED

org.apache.kafka.common.network.SelectorTest > testConnectionsByClientMetric 
PASSED

org.apache.kafka.common.network.SelectorTest > 
testInboundConnectionsCountInConnectionCreationMetric STARTED

org.apache.kafka.common.network.SelectorTest > 
testInboundConnectionsCountInConnectionCreationMetric PASSED

org.apache.kafka.common.network.SelectorTest > testNoRouteToHost STARTED

org.apache.kafka.common.network.SelectorTest > testNoRouteToHost PASSED

org.apache.kafka.common.network.SelectorTest > testPartialReceiveGracefulClose 
STARTED

org.apache.kafka.common.network.SelectorTest > testPartialReceiveGracefulClose 
PASSED

org.apache.kafka.common.network.SelectorTest > testNormalOperation STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testValidEndpointIdentificationCN[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testNetworkThreadTimeRecorded[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.network.SelectorTest > testNormalOperation PASSED

org.apache.kafka.common.network.SelectorTest > testMuteOnOOM STARTED

org.apache.kafka.common.network.SelectorTest > testMuteOnOOM PASSED

org.apache.kafka.common.network.SelectorTest > 
testCloseOldestConnectionWithMultiplePendingReceives STARTED

org.apache.kafka.common.network.SelectorTest > 
testCloseOldestConnectionWithMultiplePendingReceives PASSED

org.apache.kafka.common.network.SelectorTest > 
testExpireClosedConnectionWithPendingReceives STARTED

org.apache.kafka.common.network.SelectorTest > 
testExpireClosedConnectionWithPendingReceives PASSED

org.apache.kafka.common.network.SelectorTest > testConnectionRefused STARTED

org.apache.kafka.common.network.SelectorTest > testConnectionRefused PASSED

org.apache.kafka.common.network.SelectorTest > testEmptyRequest STARTED

org.apache.kafka.common.network.SelectorTest > testEmptyRequest PASSED

org.apache.kafka.common.network.SslSelectorTest > 
testBytesBufferedChannelAfterMute STARTED

org.apache.kafka.common.network.SslSelectorTest > 
testBytesBufferedChannelAfterMute PASSED

org.apache.kafka.common.network.SslSelectorTest > 
testBytesBufferedChannelWithNoIncomingBytes STARTED

org.apache.kafka.common.network.SslTransportLayerTest > 
testNetworkThreadTimeRecorded[tlsProtocol=TLSv1.2] PASSED

org.apache.kafka.common.network.SslTransportLayerTest > 
testClientAuthenticationRequestedValidProvided[tlsProtocol=TLSv1.2] STARTED

org.apache.kafka.common.network.SslSelectorTest > 
testBytesBufferedChannelWithNoIncomingBytes 

Build failed in Jenkins: kafka-2.4-jdk8 #211

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-8869: Remove task configs for deleted connectors from config


--
[...truncated 2.26 MB...]

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.TransactionsTest > testBasicTransactions STARTED

kafka.api.TransactionsTest > testBasicTransactions PASSED

kafka.api.TransactionsTest > testFencingOnSendOffsets STARTED

kafka.api.TransactionsTest > testFencingOnSendOffsets PASSED

kafka.api.TransactionsTest > testFencingOnAddPartitions STARTED

kafka.api.TransactionsTest > testFencingOnAddPartitions PASSED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration STARTED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration PASSED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction STARTED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction PASSED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction 
STARTED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction PASSED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions STARTED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions PASSED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
STARTED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
PASSED

kafka.api.TransactionsTest > testFencingOnSend STARTED

kafka.api.TransactionsTest > testFencingOnSend PASSED

kafka.api.TransactionsTest > testFencingOnCommit STARTED

kafka.api.TransactionsTest > testFencingOnCommit PASSED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader STARTED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader PASSED

kafka.api.TransactionsTest > testCommitTransactionTimeout STARTED

kafka.api.TransactionsTest > testCommitTransactionTimeout PASSED

kafka.api.TransactionsTest > testSendOffsets STARTED

kafka.api.TransactionsTest > testSendOffsets PASSED

kafka.api.CustomQuotaCallbackTest > testCustomQuotaCallback STARTED

kafka.api.CustomQuotaCallbackTest > testCustomQuotaCallback PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerGroupServiceWithAuthenticationSuccess STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerGroupServiceWithAuthenticationSuccess PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > testConsumerWithAuthenticationFailure PASSED
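
(These cases assert that clients surface bad SASL credentials as a clean, non-retriable error. A hedged sketch of the client side -- endpoint, group, topic, and credentials are made-up values:)

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.SaslAuthenticationException;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslFailureSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093"); // assumed SASL listener
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sketch-group");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"user\" password=\"wrong\";");
        try (KafkaConsumer<String, String> consumer =
                 new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer())) {
            consumer.subscribe(Collections.singletonList("some-topic"));
            consumer.poll(Duration.ofSeconds(1));
        } catch (SaslAuthenticationException e) {
            // invalid credentials surface as an AuthenticationException from poll(), not endless retries
        }
    }
}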

kafka.api.UserClientIdQuotaTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1483

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9409: Supplement immutability of ClusterConfigState class in

[github] KAFKA-8869: Remove task configs for deleted connectors from config


--
[...truncated 4.68 MB...]

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType STARTED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED

kafka.admin.ConfigCommandTest > testUserClientQuotaOptsUsingZookeeper STARTED

kafka.admin.ConfigCommandTest > testUserClientQuotaOptsUsingZookeeper PASSED

kafka.admin.ConfigCommandTest > shouldFailIfShortBrokerEntityTypeIsNotAnIntegerUsingZookeeper STARTED

kafka.admin.ConfigCommandTest > shouldFailIfShortBrokerEntityTypeIsNotAnIntegerUsingZookeeper PASSED

kafka.admin.ConfigCommandTest > shouldAddBrokerQuotaConfig STARTED

kafka.admin.ConfigCommandTest > shouldAddBrokerQuotaConfig PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityTypeUsingZookeeper STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityTypeUsingZookeeper PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateConfigIfNonExistingConfigIsDeletedUsingZookeper STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateConfigIfNonExistingConfigIsDeletedUsingZookeper PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedEntityName STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED

kafka.admin.ConfigCommandTest > shouldRaiseInvalidConfigurationExceptionWhenAddingInvalidBrokerLoggerConfig STARTED

kafka.admin.ConfigCommandTest > shouldRaiseInvalidConfigurationExceptionWhenAddingInvalidBrokerLoggerConfig PASSED

kafka.admin.ConfigCommandTest > testOptionEntityTypeNamesUsingZookeeper STARTED

kafka.admin.ConfigCommandTest > testOptionEntityTypeNamesUsingZookeeper PASSED

kafka.admin.ConfigCommandTest > shouldDescribeConfigSynonyms STARTED

kafka.admin.ConfigCommandTest > shouldDescribeConfigSynonyms PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForUsersEntityTypeUsingZookeeper STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForUsersEntityTypeUsingZookeeper PASSED

kafka.admin.ConfigCommandTest > testEntityDefaultOptionWithAlterBrokerLoggerIsNotAllowed STARTED

kafka.admin.ConfigCommandTest > testEntityDefaultOptionWithAlterBrokerLoggerIsNotAllowed PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig PASSED

kafka.admin.ConfigCommandTest > testParseConfigsToBeAddedForAddConfigFile STARTED

kafka.admin.ConfigCommandTest > testParseConfigsToBeAddedForAddConfigFile PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokerLoggersEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokerLoggersEntityType PASSED

kafka.admin.ConfigCommandTest > shouldNotUpdateConfigIfNonExistingConfigIsDeleted STARTED

kafka.admin.ConfigCommandTest > shouldNotUpdateConfigIfNonExistingConfigIsDeleted PASSED

kafka.admin.ConfigCommandTest > shouldFailIfShortBrokerEntityTypeIsNotAnInteger STARTED

kafka.admin.ConfigCommandTest > shouldFailIfShortBrokerEntityTypeIsNotAnInteger PASSED

kafka.admin.ConfigCommandTest > testDescribeAllBrokerConfig STARTED

kafka.admin.ConfigCommandTest > testDescribeAllBrokerConfig PASSED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnBrokerCommandError STARTED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnBrokerCommandError PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType PASSED

kafka.admin.ConfigCommandTest > testDynamicBrokerConfigUpdateUsingZooKeeper STARTED

kafka.admin.ConfigCommandTest > testDynamicBrokerConfigUpdateUsingZooKeeper PASSED

kafka.admin.ConfigCommandTest > testNoSpecifiedEntityOptionWithDescribeBrokersInZKIsAllowed STARTED

kafka.admin.ConfigCommandTest > testNoSpecifiedEntityOptionWithDescribeBrokersInZKIsAllowed PASSED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities STARTED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType STARTED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType PASSED

kafka.admin.ConfigCommandTest > testDescribeAllBrokerConfigBootstrapServerRequired STARTED

kafka.admin.ConfigCommandTest > testDescribeAllBrokerConfigBootstrapServerRequired PASSED

kafka.admin.ConfigCommandTest > shouldAlterTopicConfigFile STARTED

kafka.admin.ConfigCommandTest > shouldAlterTopicConfigFile PASSED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnArgError STARTED

kafka.admin.ConfigCommandTest > shouldExitWithNonZeroStatusOnArgError PASSED
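
(ConfigCommandTest covers the kafka-configs.sh tool. The dynamic broker-config update it performs -- roughly what `kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2` does -- can be sketched against the public Admin API; the broker id and config key below are illustrative assumptions:)

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterBrokerConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        try (Admin admin = Admin.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            AlterConfigOp set = new AlterConfigOp(
                new ConfigEntry("log.cleaner.threads", "2"), // illustrative dynamic config
                AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                Collections.singletonMap(broker, Collections.singletonList(set)))
                .all().get(); // resolves once the broker has applied the change
        }
    }
}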


Build failed in Jenkins: kafka-trunk-jdk14 #111

2020-05-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9409: Supplement immutability of ClusterConfigState class in

[github] KAFKA-8869: Remove task configs for deleted connectors from config


--
[...truncated 3.10 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED
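
(The TopologyTestDriverTest cases above run Streams topologies without a broker. A minimal sketch of the driver's intended use -- the topic names and the one-operator topology are illustrative:)

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.KStream;

public class DriverSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input");
        stream.mapValues(v -> v.toUpperCase()).to("output");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "driver-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted by the driver
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            TestInputTopic<String, String> input = driver.createInputTopic(
                "input", new StringSerializer(), new StringSerializer());
            TestOutputTopic<String, String> output = driver.createOutputTopic(
                "output", new StringDeserializer(), new StringDeserializer());
            input.pipeInput("k", "hello");
            System.out.println(output.readKeyValue()); // prints KeyValue(k, HELLO)
        }
    }
}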

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithTimestamp PASSED
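
(ConsumerRecordFactory is the older test-input mechanism that TestInputTopic, sketched above, replaced in 2.4; it is still exercised here because it remains supported in deprecated form. For reference, the equivalent piping with the factory looks roughly like the fragment below, which assumes a TopologyTestDriver `driver` as in the previous sketch:)

import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.test.ConsumerRecordFactory;

// assumes `driver` from the DriverSketch example above
ConsumerRecordFactory<String, String> factory = new ConsumerRecordFactory<>(
    "input", new StringSerializer(), new StringSerializer());
driver.pipeInput(factory.create("input", "k", "hello")); // deprecated since 2.4 in favor of TestInputTopic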