Re: MirrorMaker 2.0 - Offset Sync - Questions/Improvements

2021-03-16 Thread Ryanne Dolan
Georg, sorry for the delay, but hopefully I can still help.

1) I think the detail you're missing is that the offset syncs are very
sparse. Normally, you only get a new sync when the Connector first starts
running. You are right that it is possible for a consumer to lag behind the
most recent offset sync, but that will be a rare, transient condition, e.g.
when the Connector first starts running.

2) I think you are right -- disabling checkpoints probably should also
prevent those topics from being created. I'd support that change.

Ryanne

On Fri, Feb 26, 2021, 4:24 PM Georg Friedrich 
wrote:

> Hi,
>
> recently I've started to look deeper into the code of MirrorMaker 2.0 and
> was faced with some confusing details. Maybe you can point me in the right
> direction here.
>
>
>   *   The line at
> https://github.com/apache/kafka/blob/02226fa090513882b9229ac834fd493d71ae6d96/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/OffsetSyncStore.java#L52
> checks whether the offsets that get translated are smaller than the last
> offset sync.
> If this is the case, no translation happens. But I'm confused here: Isn't
> this a potential issue? What if some consumers are slow at processing
> messages from Kafka and fall behind the offset sync process of the
> MirrorMaker? In this case the MirrorMaker would stop translating any
> offsets. Am I missing something here, or is this really broken?
>   *   I'm wondering: One is able to deactivate emitting checkpoints to the
> target cluster. But when this happens, the offset sync topic is still
> written to the source cluster. Why is that? As far as I can see the only
> consumer of the offset sync topic is the checkpoint connector. So one could
> also deactivate the whole offset sync production entirely when disabling
> checkpoint emission. Or is there again something I'm missing? If not, is
> this worth a KIP?
>
> Thanks in advance for your answers and help.
>
> Kind regards
> Georg Friedrich
>
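For readers following the thread, the check being discussed can be illustrated with a toy model (hypothetical class and field names; the real logic lives in OffsetSyncStore). Only the most recent offset sync is consulted, so a consumer whose committed offset is older than that sync point temporarily gets no translation:

```java
import java.util.Optional;

// Toy model of MirrorMaker 2.0 offset translation (hypothetical names).
public class OffsetSyncSketch {
    final long upstreamSync;    // upstream offset recorded at the last sync
    final long downstreamSync;  // matching downstream offset

    OffsetSyncSketch(long upstream, long downstream) {
        this.upstreamSync = upstream;
        this.downstreamSync = downstream;
    }

    // Mirrors the check discussed above: a consumer offset older than the
    // last sync cannot be translated and yields empty.
    Optional<Long> translate(long upstreamConsumerOffset) {
        if (upstreamConsumerOffset < upstreamSync) {
            return Optional.empty(); // consumer lags behind the sync point
        }
        return Optional.of(downstreamSync + (upstreamConsumerOffset - upstreamSync));
    }

    public static void main(String[] args) {
        OffsetSyncSketch store = new OffsetSyncSketch(100, 40);
        System.out.println(store.translate(150)); // caught-up consumer: translated
        System.out.println(store.translate(50));  // lagging consumer: no translation
    }
}
```

As Ryanne notes, the lagging case is rare in practice because syncs are sparse, but the sketch shows why a slow consumer could momentarily see no checkpoint.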


Re: [DISCUSS] KIP-722: Enable connector client overrides by default

2021-03-16 Thread Ryanne Dolan
Thanks Randall, makes sense to me.

Ryanne

On Tue, Mar 16, 2021, 4:31 PM Randall Hauch  wrote:

> Hello all,
>
> I'd like to propose KIP-722 to change the default value of the existing
> `connector.client.config.override.policy` Connect worker configuration, so
that by default connectors can override client properties. The details are
> here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-722%3A+Enable+connector+client+overrides+by+default
>
> The feature and worker config property were originally added by KIP-458
> (approved and implemented in AK 2.3.0):
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy
>
> I look forward to your feedback!
>
> Best regards,
>
> Randall
>


Re: [ANNOUNCE] New Kafka PMC Member: Chia-Ping Tsai

2021-03-16 Thread Dongjin Lee
Congratulations, Chia-Ping!

Best,
Dongjin

On Tue, Mar 16, 2021 at 2:20 PM Konstantine Karantasis
 wrote:

> Congratulations Chia-Ping!
>
> Konstantine
>
> On Mon, Mar 15, 2021 at 4:31 AM Rajini Sivaram 
> wrote:
>
> > Congratulations, Chia-Ping, well deserved!
> >
> > Regards,
> >
> > Rajini
> >
> > On Mon, Mar 15, 2021 at 9:59 AM Bruno Cadonna  >
> > wrote:
> >
> > > Congrats, Chia-Ping!
> > >
> > > Best,
> > > Bruno
> > >
> > > On 15.03.21 09:22, David Jacot wrote:
> > > > Congrats Chia-Ping! Well deserved.
> > > >
> > > > On Mon, Mar 15, 2021 at 5:39 AM Satish Duggana <
> > satish.dugg...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> Congrats Chia-Ping!
> > > >>
> > > >> On Sat, 13 Mar 2021 at 13:34, Tom Bentley 
> > wrote:
> > > >>
> > > >>> Congratulations Chia-Ping!
> > > >>>
> > > >>> On Sat, Mar 13, 2021 at 7:31 AM Kamal Chandraprakash <
> > > >>> kamal.chandraprak...@gmail.com> wrote:
> > > >>>
> > >  Congratulations, Chia-Ping!!
> > > 
> > >  On Sat, Mar 13, 2021 at 11:38 AM Ismael Juma 
> > > >> wrote:
> > > 
> > > > Congratulations Chia-Ping! Well deserved.
> > > >
> > > > Ismael
> > > >
> > > > On Fri, Mar 12, 2021, 11:14 AM Jun Rao  >
> > > >>> wrote:
> > > >
> > > >> Hi, Everyone,
> > > >>
> > > >> Chia-Ping Tsai has been a Kafka committer since Oct. 15,  2020.
> He
> > > >>> has
> > > > been
> > > >> very instrumental to the community since becoming a committer.
> > It's
> > > >>> my
> > > >> pleasure to announce that Chia-Ping  is now a member of Kafka
> PMC.
> > > >>
> > > >> Congratulations Chia-Ping!
> > > >>
> > > >> Jun
> > > >> on behalf of Apache Kafka PMC
> > > >>
> > > >
> > > 
> > > >>>
> > > >>
> > > >
> > >
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin*


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-16 Thread Dongjin Lee
Congratulations, Tom! Your contributions are always great!!

+1. Thanks for supporting KIP-653: Upgrade log4j to log4j2 again.

Best,
Dongjin

On Tue, Mar 16, 2021 at 8:24 PM Rajini Sivaram 
wrote:

> Congratulations, Tom!
>
> Regards,
>
> Rajini
>
> On Tue, Mar 16, 2021 at 10:39 AM Satish Duggana 
> wrote:
>
> > Congratulations Tom!!
> >
> > On Tue, 16 Mar 2021 at 13:30, David Jacot 
> > wrote:
> >
> > > Congrats, Tom!
> > >
> > > On Tue, Mar 16, 2021 at 7:40 AM Kamal Chandraprakash <
> > > kamal.chandraprak...@gmail.com> wrote:
> > >
> > > > Congrats, Tom!
> > > >
> > > > On Tue, Mar 16, 2021 at 8:32 AM Konstantine Karantasis
> > > >  wrote:
> > > >
> > > > > Congratulations Tom!
> > > > > Well deserved.
> > > > >
> > > > > Konstantine
> > > > >
> > > > > On Mon, Mar 15, 2021 at 4:52 PM Luke Chen 
> wrote:
> > > > >
> > > > > > Congratulations!
> > > > > >
> > > > > > Federico Valeri  於 2021年3月16日 週二 上午4:11
> 寫道:
> > > > > >
> > > > > > > Congrats, Tom!
> > > > > > >
> > > > > > > Well deserved.
> > > > > > >
> > > > > > > On Mon, Mar 15, 2021, 8:09 PM Paolo Patierno <
> ppatie...@live.com
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Congratulations Tom!
> > > > > > > >
> > > > > > > > Get Outlook for Android
> > > > > > > >
> > > > > > > > 
> > > > > > > > From: Guozhang Wang 
> > > > > > > > Sent: Monday, March 15, 2021 8:02:44 PM
> > > > > > > > To: dev 
> > > > > > > > Subject: Re: [ANNOUNCE] New committer: Tom Bentley
> > > > > > > >
> > > > > > > > Congratulations Tom!
> > > > > > > >
> > > > > > > > Guozhang
> > > > > > > >
> > > > > > > > On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck
> > > > >  > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Congratulations, Tom!
> > > > > > > > >
> > > > > > > > > -Bill
> > > > > > > > >
> > > > > > > > > On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna
> > > > > > >  > > > > > > > >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Congrats, Tom!
> > > > > > > > > >
> > > > > > > > > > Best,
> > > > > > > > > > Bruno
> > > > > > > > > >
> > > > > > > > > > On 15.03.21 18:59, Mickael Maison wrote:
> > > > > > > > > > > Hi all,
> > > > > > > > > > >
> > > > > > > > > > > The PMC for Apache Kafka has invited Tom Bentley as a
> > > > > committer,
> > > > > > > and
> > > > > > > > > > > we are excited to announce that he accepted!
> > > > > > > > > > >
> > > > > > > > > > > Tom first contributed to Apache Kafka in June 2017 and
> > has
> > > > been
> > > > > > > > > > > actively contributing since February 2020.
> > > > > > > > > > > He has accumulated 52 commits and worked on a number of
> > > KIPs.
> > > > > > Here
> > > > > > > > are
> > > > > > > > > > > some of the most significant ones:
> > > > > > > > > > > KIP-183: Change
> PreferredReplicaLeaderElectionCommand
> > > to
> > > > > use
> > > > > > > > > > AdminClient
> > > > > > > > > > > KIP-195: AdminClient.createPartitions
> > > > > > > > > > > KIP-585: Filter and Conditional SMTs
> > > > > > > > > > > KIP-621: Deprecate and replace
> > > > DescribeLogDirsResult.all()
> > > > > > and
> > > > > > > > > > .values()
> > > > > > > > > > > KIP-707: The future of KafkaFuture (still in
> > > discussion)
> > > > > > > > > > >
> > > > > > > > > > > In addition, he is very active on the mailing list and
> > has
> > > > > helped
> > > > > > > > > > > review many KIPs.
> > > > > > > > > > >
> > > > > > > > > > > Congratulations Tom and thanks for all the
> contributions!
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > -- Guozhang
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



*github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin*


request permission to create KIP

2021-03-16 Thread Andrei Iatsuk
Hello! 

I created the improvement task https://issues.apache.org/jira/browse/KAFKA-12481
and opened pull request https://github.com/apache/kafka/pull/10333 that solves
it. ijuma says that according to
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
I should create a KIP.

So could you grant me (https://cwiki.apache.org/confluence/display/~a.iatsuk)
permission to create a KIP?

Best regards, 
Andrei Iatsuk.

Kafka logos and branding

2021-03-16 Thread Justin Mclean
Hi,

As I mentioned on the dev list I can see the logos for the Kafka project are 
out of date. [1] Just a suggestion - which you are free to ignore. From a 
branding and trademark perspective it's going to help the project if 3rd 
parties have access to the correct logos. Having something on your trademark 
page [2] might help people use the right logos.

Thanks,
Justin

1. https://apache.org/logos/#kafka
2. https://kafka.apache.org/trademark


Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #66

2021-03-16 Thread Apache Jenkins Server
See 


Changes:

[Ismael Juma] KAFKA-12455: Fix OffsetValidationTest.test_broker_rolling_bounce 
failure with Raft (#10322)


--
[...truncated 7.25 MB...]
SimpleAclAuthorizerTest > testAddAclsOnWildcardResource() STARTED

SimpleAclAuthorizerTest > testAddAclsOnWildcardResource() PASSED

SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2() STARTED

SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2() PASSED

SimpleAclAuthorizerTest > testAclManagementAPIs() STARTED

SimpleAclAuthorizerTest > testAclManagementAPIs() PASSED

SimpleAclAuthorizerTest > testWildCardAcls() STARTED

SimpleAclAuthorizerTest > testWildCardAcls() PASSED

SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2() STARTED

SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2() PASSED

SimpleAclAuthorizerTest > testTopicAcl() STARTED

SimpleAclAuthorizerTest > testTopicAcl() PASSED

SimpleAclAuthorizerTest > testSuperUserHasAccess() STARTED

SimpleAclAuthorizerTest > testSuperUserHasAccess() PASSED

SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource() STARTED

SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource() PASSED

SimpleAclAuthorizerTest > testDenyTakesPrecedence() STARTED

SimpleAclAuthorizerTest > testDenyTakesPrecedence() PASSED

SimpleAclAuthorizerTest > testSingleCharacterResourceAcls() STARTED

SimpleAclAuthorizerTest > testSingleCharacterResourceAcls() PASSED

SimpleAclAuthorizerTest > testNoAclFoundOverride() STARTED

SimpleAclAuthorizerTest > testNoAclFoundOverride() PASSED

SimpleAclAuthorizerTest > testEmptyAclThrowsException() STARTED

SimpleAclAuthorizerTest > testEmptyAclThrowsException() PASSED

SimpleAclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess() STARTED

SimpleAclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess() PASSED

SimpleAclAuthorizerTest > testAllowAccessWithCustomPrincipal() STARTED

SimpleAclAuthorizerTest > testAllowAccessWithCustomPrincipal() PASSED

SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource() STARTED

SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource() PASSED

SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED

SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED

SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED

SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED

SimpleAclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() 
STARTED

SimpleAclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() 
PASSED

SimpleAclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() 
STARTED

SimpleAclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() PASSED

SimpleAclAuthorizerTest > testAuthorizeWithEmptyResourceName() STARTED

SimpleAclAuthorizerTest > testAuthorizeWithEmptyResourceName() PASSED

SimpleAclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() STARTED

SimpleAclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() PASSED

SimpleAclAuthorizerTest > testDeleteAllAclOnPrefixedResource() STARTED

SimpleAclAuthorizerTest > testDeleteAllAclOnPrefixedResource() PASSED

SimpleAclAuthorizerTest > testAddAclsOnLiteralResource() STARTED

SimpleAclAuthorizerTest > testAddAclsOnLiteralResource() PASSED

SimpleAclAuthorizerTest > testGetAclsPrincipal() STARTED

SimpleAclAuthorizerTest > testGetAclsPrincipal() PASSED

SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource() STARTED

SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource() PASSED

SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() STARTED

SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() PASSED

SimpleAclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() 
STARTED

SimpleAclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() 
PASSED

SimpleAclAuthorizerTest > testLoadCache() STARTED

SimpleAclAuthorizerTest > testLoadCache() PASSED

OperationTest > testJavaConversions() STARTED

OperationTest > testJavaConversions() PASSED

AclEntryTest > testAclJsonConversion() STARTED

AclEntryTest > testAclJsonConversion() PASSED

MiniKdcTest > shouldNotStopImmediatelyWhenStarted() STARTED

MiniKdcTest > shouldNotStopImmediatelyWhenStarted() PASSED

RaftManagerTest > testShutdownIoThread() STARTED

RaftManagerTest > testShutdownIoThread() PASSED

RaftManagerTest > testUncaughtExceptionInIoThread() STARTED


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #576

2021-03-16 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #605

2021-03-16 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12487) Sink connectors do not work with the cooperative consumer rebalance protocol

2021-03-16 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-12487:
-

 Summary: Sink connectors do not work with the cooperative consumer 
rebalance protocol
 Key: KAFKA-12487
 URL: https://issues.apache.org/jira/browse/KAFKA-12487
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 3.0.0, 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
Reporter: Chris Egerton
Assignee: Chris Egerton


The {{ConsumerRebalanceListener}} used by the framework to respond to rebalance 
events in consumer groups for sink tasks is hard-coded with the assumption that 
the consumer performs rebalances eagerly. In other words, it assumes that 
whenever {{onPartitionsRevoked}} is called, all partitions have been revoked 
from that consumer, and whenever {{onPartitionsAssigned}} is called, the 
partitions passed in to that method comprise the complete set of topic 
partitions assigned to that consumer.

See the [WorkerSinkTask.HandleRebalance 
class|https://github.com/apache/kafka/blob/b96fc7892f1e885239d3290cf509e1d1bb41e7db/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L669-L730]
 for the specifics.

 

One issue this can cause is silently ignoring to-be-committed offsets provided 
by sink tasks, since the framework ignores offsets provided by tasks in their 
{{preCommit}} method if it does not believe that the consumer for that task is 
currently assigned the topic partition for that offset. See these lines in the 
[WorkerSinkTask::commitOffsets 
method|https://github.com/apache/kafka/blob/b96fc7892f1e885239d3290cf509e1d1bb41e7db/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L429-L430]
 for reference.

 

This may not be the only issue caused by configuring a sink connector's 
consumer to use cooperative rebalancing. Rigorous unit and integration testing 
should be added before claiming that the Connect framework supports the use of 
cooperative consumers with sink connectors.
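The eager-vs-cooperative mismatch described above can be sketched with plain collections (hypothetical names; the real code is WorkerSinkTask.HandleRebalance). Under the cooperative protocol the callbacks carry only deltas, so clear-then-add handling silently drops previously owned partitions:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of a rebalance listener's view of its partitions.
public class AssignmentSketch {
    final Set<String> owned = new HashSet<>();

    // Eager handling: treat each callback as the complete assignment.
    void handleAssignedEagerly(Set<String> partitions) {
        owned.clear();
        owned.addAll(partitions);
    }

    // Cooperative handling: callbacks carry only the added/removed delta.
    void handleAssignedCooperatively(Set<String> added) { owned.addAll(added); }
    void handleRevokedCooperatively(Set<String> removed) { owned.removeAll(removed); }
}
```

If a cooperative rebalance assigns t-1 while t-0 is retained, eager handling of that incremental callback wipes t-0 from the tracked set, which is why offsets for t-0 supplied via preCommit would then be ignored.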



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12488) Be more specific about enabled SASL mechnanisms in system tests

2021-03-16 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-12488:
-

 Summary: Be more specific about enabled SASL mechnanisms in system 
tests
 Key: KAFKA-12488
 URL: https://issues.apache.org/jira/browse/KAFKA-12488
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Ron Dagostino


The `SecurityConfig.enabled_sasl_mechanisms()` method simply returns all SASL 
mechanisms that are enabled for the test -- whether for brokers, clients, 
controllers, or Zookeeper.  These enabled mechanisms are used in JAAS config 
files to determine what appears in those config files.  For example, the entire 
list of enabled mechanisms is used in both KafkaClient{} and KafkaServer{} 
sections, but that's way too broad.  We should be more precise about what 
mechanisms we are interested in for the different sections of these JAAS config 
files.
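To illustrate (a hedged sketch, not the actual generated file): with both PLAIN and SCRAM enabled anywhere in the test, the current generation would place both login modules in every section, even if clients only ever use SCRAM:

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin" password="admin-secret" user_admin="admin-secret";
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin" password="admin-secret";
};

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="client" password="client-secret";
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="client" password="client-secret";
};
```

If only the brokers use PLAIN, the PlainLoginModule entry in KafkaClient{} is dead weight; scoping the mechanism list per section would remove it.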



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #634

2021-03-16 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add toString to various Kafka Metrics classes (#10330)

[github] KAFKA-12455: Fix OffsetValidationTest.test_broker_rolling_bounce 
failure with Raft (#10322)


--
[...truncated 3.70 MB...]
AclAuthorizerTest > testNoAclFoundOverride() STARTED

AclAuthorizerTest > testNoAclFoundOverride() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWildcardResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWildcardResourceDenyDominate() 
PASSED

AclAuthorizerTest > testEmptyAclThrowsException() STARTED

AclAuthorizerTest > testEmptyAclThrowsException() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeNoAclFoundOverride() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeNoAclFoundOverride() PASSED

AclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess() STARTED

AclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllOperationAce() PASSED

AclAuthorizerTest > testAllowAccessWithCustomPrincipal() STARTED

AclAuthorizerTest > testAllowAccessWithCustomPrincipal() PASSED

AclAuthorizerTest > testDeleteAclOnWildcardResource() STARTED

AclAuthorizerTest > testDeleteAclOnWildcardResource() PASSED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() STARTED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() PASSED

AclAuthorizerTest > testChangeListenerTiming() STARTED

AclAuthorizerTest > testChangeListenerTiming() PASSED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() STARTED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() STARTED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() PASSED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() STARTED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() PASSED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() STARTED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() PASSED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() STARTED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() PASSED

AclAuthorizerTest > testAddAclsOnLiteralResource() STARTED

AclAuthorizerTest > testAddAclsOnLiteralResource() PASSED

AclAuthorizerTest > testGetAclsPrincipal() STARTED

AclAuthorizerTest > testGetAclsPrincipal() PASSED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() STARTED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() PASSED

AclAuthorizerTest > testLoadCache() STARTED

AclAuthorizerTest > testLoadCache() PASSED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() PASSED


[jira] [Created] (KAFKA-12486) Utilize HighAvailabilityTaskAssignor to avoid downtime on corrupted task

2021-03-16 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12486:
--

 Summary: Utilize HighAvailabilityTaskAssignor to avoid downtime on 
corrupted task
 Key: KAFKA-12486
 URL: https://issues.apache.org/jira/browse/KAFKA-12486
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: A. Sophie Blee-Goldman


In KIP-441, we added the HighAvailabilityTaskAssignor to address certain common 
scenarios which tend to lead to heavy downtime for tasks, such as scaling out. 
The new assignor will always place an active task on a client which has a 
"caught-up" copy of that task's state, if any exists, while the intended 
recipient will instead get a standby task to warm up the state in the 
background. This way we keep tasks live as much as possible, and avoid the long 
downtime imposed by state restoration on active tasks.

We can actually expand on this to reduce downtime due to restoring state: 
specifically, we may throw a TaskCorruptedException on an active task which 
leads to wiping out the state stores of that task and restoring from scratch. 
There are a few cases where this may be thrown:
 # No checkpoint found with EOS
 # TimeoutException when processing a StreamTask
 # TimeoutException when committing offsets under eos
 # RetriableException in RecordCollectorImpl

(There is also the case of OffsetOutOfRangeException, but that is excluded here 
since it only applies to standby tasks).

We should consider triggering a rebalance when we hit TaskCorruptedException on 
an active task so that the assignor has the chance to redirect this to another 
client who can resume work on the task while the original owner works on 
restoring the state from scratch.
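The placement idea above can be sketched as a tiny lag-based chooser (hypothetical names and threshold; the real KIP-441 assignor is considerably more involved):

```java
import java.util.Map;

// Hypothetical sketch of the KIP-441 placement idea described above:
// keep the active task on a caught-up client and let the intended
// recipient warm up with a standby instead.
public class PlacementSketch {
    static String placeActive(Map<String, Long> lagByClient, String intended, long caughtUpLag) {
        // Keep the intended recipient if its local state is already caught up.
        if (lagByClient.getOrDefault(intended, Long.MAX_VALUE) <= caughtUpLag) {
            return intended;
        }
        // Otherwise prefer the most caught-up client, if any qualifies.
        return lagByClient.entrySet().stream()
                .filter(e -> e.getValue() <= caughtUpLag)
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(intended); // nobody is caught up: restore on the intended owner
    }
}
```

Applying the same chooser after a TaskCorruptedException would let a caught-up client take over while the original owner rebuilds its state.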



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-722: Enable connector client overrides by default

2021-03-16 Thread Randall Hauch
Hello all,

I'd like to propose KIP-722 to change the default value of the existing
`connector.client.config.override.policy` Connect worker configuration, so
that by default connectors can override client properties. The details are
here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-722%3A+Enable+connector+client+overrides+by+default

The feature and worker config property were originally added by KIP-458
(approved and implemented in AK 2.3.0):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy

I look forward to your feedback!

Best regards,

Randall
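For context, a hedged example of what the change enables out of the box (property names come from KIP-458; values are illustrative):

```properties
# Connect worker config: KIP-722 proposes "All" as the new default
# (previously "None", which rejected connector-supplied overrides).
connector.client.config.override.policy=All
```

With the policy set to All, a connector config can then carry prefixed client overrides such as `consumer.override.max.poll.records=100` or `producer.override.compression.type=lz4`.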


[DISCUSS] KIP-721: Enable connector log contexts in Connect Log4j configuration

2021-03-16 Thread Randall Hauch
Hello all,

I'd like to propose KIP-721 to change Connect's Log4J configuration that we
ship with AK. This KIP will enable by default Connect's valuable connector
log contexts, which was added as part of KIP-449 to include connector- and
task-specific information to every log message output by the connector, its
tasks, or the worker thread operating those components.

The details are here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-721%3A+Enable+connector+log+contexts+in+Connect+Log4j+configuration

The earlier KIP-449 (approved and implemented in AK 2.3.0) is here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs

I look forward to your feedback!

Best regards,

Randall
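Concretely (a sketch; the exact pattern shipped may differ), enabling the KIP-449 connector context means including the `connector.context` MDC key in the Log4j layout:

```properties
# connect-log4j.properties sketch: %X{connector.context} prepends
# "[my-connector|task-0] "-style context to each log line.
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n
```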


Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-16 Thread Bruno Cadonna

Hi Sophie,

Thank you for the explanation regarding "ClientTagsVersion". Now I got it!

After this discussion, for the sake of robustness I am in favor of just 
writing the strings into the subscription info. Maybe, we can limit the 
maximum length of a tag and the maximum number of tags. The limits are 
implementation details that we do not need to mention in the KIP, IMO.


"rack.aware.assignment.tags" sounds good to me. I also think Levani 
should have the last word on this.


Best,
Bruno

On 16.03.21 20:34, Sophie Blee-Goldman wrote:


Could we not reuse the version of the subscription
data? What are the main benefits of introducing "ClientTagsVersion"?



No, this version would have to be distinct from the protocol version that
we use for the subscription itself.
The reason being that the SubscriptionInfo version is only tied to the
metadata that we, as Kafka Streams,
choose to encode in a specific version. The "ClientTagsVersion" on the
other hand would be tied to the
specific tags that the *user* has chosen to encode in the
SubscriptionInfoData. So the SubscriptionInfo
version is a constant in the Streams source code, whereas you might have
any number of different
client tag versions as you evolve the client.tags used by your application.
They're orthogonal.

(As for how to determine the ClientTagsVersion, most likely we would need
to add an additional config
and push it to the user to bump the version when they change the tags. We
could try to derive it ourselves,
but then we would need some way to persist the current version & tags so
that we can tell when to bump
the version, and I couldn't think of an appropriate way to do this off the
top of my head. Anyways we don't
really need to have this conversation until we want to make the tags
evolvable, as long as we know it's at
least possible)

Of course, we could sidestep all of this by just serializing the name of
each client.tag instead of an encoded
key based on its position in the configured task.rack.aware.assignment.tags
list, right? I understand the intention
is to save on bytes, but (a) it was always my impression that the
AssignmentInfo was more at risk of hitting
the max message size than the SubscriptionInfo, since the assignment has to
encode info for all tasks
across the app while the subscription only encodes local info, and (b) how
long do we expect the client
tags to really be? Can't we just push this on to the user to come up with
their own schema and embed
an abbreviation code in the client.tags, or just tell them not to use
insanely long tags (which seems unlikely in
the first place)? WDYT?

That makes sense about purposely excluding "standby" from the config name.
In that case
I would be happy with just "task.rack.aware.assignment.tags", although I'd
propose to shorten
it further by removing the "task" part of "standby.task" as well, ie just
"rack.aware.assignment.tags"
But I'll leave it up to Levani to make this call :)

On Tue, Mar 16, 2021 at 6:34 AM Bruno Cadonna 
wrote:


Forgot to say ...

I am fine with the rest of the name you proposed, i.e.,
"task.rack.aware.assignment.tags".

Best,
Bruno

On 16.03.21 09:30, Bruno Cadonna wrote:

Hi Sophie,

I am +1 for explicitly documenting in the KIP that this list must be
identical in contents and order across all clients in the application. And
later in the docs of the config. If we are too much concerned about this
and we think that we should explicitly check the order and content, we
could think about another encoding that is order independent or not
encode the tags at all.

Good point about upgrading and evolving. That seems related to the
encoding of the tags. If the encoding contained a bit more information,
we could use the intersection of the tags to evolve the tags of existing
applications. That means, we only use tags that are present on all
clients. Having an encoding that contains more information would also
give us the possibility to support removing, changing, and reordering of
tags.

I do not completely understand your question about the
"ClientTagsVersion". Could we not reuse the version of the subscription
data? What are the main benefits of introducing "ClientTagsVersion"?
Versioning a field of the protocol seems a bit too detailed for me, but
I could easily be missing something important here.

Regarding "standby.task.rack.aware.assignment.tags", we intentionally
tried to avoid the words "standby" and "replica" in this name to not
limit this config to standby tasks. In future, we may want to use the
same config also for active tasks. Admittedly, I do not yet know how we
would use this config for active tasks but I think it is better to keep
it generic as much as reasonable.

Best,
Bruno


On 15.03.21 23:07, Sophie Blee-Goldman wrote:

Hey Levani, thanks for the KIP! This looks great.

A few comments/questions about the "task.assignment.rack.awareness"
config:
since this is used to determine the encoding of the client tags, we
should
make sure to specify that this 
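The size trade-off debated earlier in this thread (full tag names vs. positional keys in the subscription) can be made concrete with a toy byte count (hypothetical encoding scheme on both sides):

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

// Toy comparison of the two encodings discussed: full "key=value" tag
// names vs. a one-byte index into the configured tag list (hypothetical).
public class TagEncodingSketch {
    static int bytesByName(Map<String, String> tags) {
        int total = 0;
        for (Map.Entry<String, String> e : tags.entrySet())
            total += (e.getKey() + "=" + e.getValue()).getBytes(StandardCharsets.UTF_8).length;
        return total;
    }

    static int bytesByIndex(List<String> configuredTags, Map<String, String> tags) {
        int total = 0;
        for (String key : configuredTags)
            total += 1 + tags.get(key).getBytes(StandardCharsets.UTF_8).length;
        return total;
    }
}
```

For a tag like cluster=eu-1 the positional form saves roughly the length of each tag name per subscription, which is small unless tag names are long, supporting the argument that serializing plain names is an acceptable cost.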

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #633

2021-03-16 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12427: Don't update connection idle time for muted connections 
(#10267)

[github] KAFKA-12330; FetchSessionCache may cause starvation for partitions 
when FetchResponse is full (#10318)


--
[...truncated 3.68 MB...]
[STARTED/PASSED listing for AclAuthorizerTest and AuthorizerInterfaceDefaultTest elided; all tests shown in the original output passed]

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #575

2021-03-16 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12427: Don't update connection idle time for muted connections 
(#10267)

[github] KAFKA-12330; FetchSessionCache may cause starvation for partitions 
when FetchResponse is full (#10318)


--
[...truncated 3.67 MB...]
[STARTED/PASSED listing for TransactionsTest, SaslClientsWithInvalidCredentialsTest, UserClientIdQuotaTest, and ZooKeeperClientTest elided; all tests shown in the original output passed]

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #604

2021-03-16 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12427: Don't update connection idle time for muted connections 
(#10267)

[github] KAFKA-12330; FetchSessionCache may cause starvation for partitions 
when FetchResponse is full (#10318)


--
[...truncated 3.69 MB...]
[STARTED/PASSED listing for KafkaZkClientTest and LiteralAclStoreTest elided; all tests shown in the original output passed]

Re: Proposed breaking changes for Connect in AK 3.0.0

2021-03-16 Thread Israel Ekpo
Thanks Randall for kicking this off.

Are there similar pages for the other ecosystem components and APIs that
are part of the 3.0.0 release?

It would be great to have a central page and then have this page as a
subset of that collection.

On Tue, Mar 16, 2021 at 4:38 PM Randall Hauch  wrote:

> The next release of AK will be 3.0.0. Since this is a major release, we
> have an opportunity to:
>
>- remove previously deprecated worker configuration properties; and
>- change some of Connect's defaults that were chosen previously to
>maintain backward compatibility, but for which there are more sensible
>defaults.
>
> I've taken the liberty of creating a wiki page [1] that lists all of the
> Connect-related KIPs since AK 0.10.0.0 (the release after Connect was
> introduced), and identifies a small set of changes that are appropriate
> only for major releases.
>
> This page is not a KIP, but will hopefully help us identify any behaviors
> or APIs that we may wish to change in AK 3.0.0. Note that some changes have
> been already approved, and we need to decide whether to make those changes
> in AK 3.0.0. Other changes will still require a formal KIP with discussion
> and approval.
>
> Please use this thread to discuss these or other proposed changes.
>
> Thanks,
>
> Randall
>
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047362
>


Proposed breaking changes for Connect in AK 3.0.0

2021-03-16 Thread Randall Hauch
The next release of AK will be 3.0.0. Since this is a major release, we
have an opportunity to:

   - remove previously deprecated worker configuration properties; and
   - change some of Connect's defaults that were chosen previously to
   maintain backward compatibility, but for which there are more sensible
   defaults.

I've taken the liberty of creating a wiki page [1] that lists all of the
Connect-related KIPs since AK 0.10.0.0 (the release after Connect was
introduced), and identifies a small set of changes that are appropriate
only for major releases.

This page is not a KIP, but will hopefully help us identify any behaviors
or APIs that we may wish to change in AK 3.0.0. Note that some changes have
been already approved, and we need to decide whether to make those changes
in AK 3.0.0. Other changes will still require a formal KIP with discussion
and approval.

Please use this thread to discuss these or other proposed changes.

Thanks,

Randall

[1]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047362


Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #65

2021-03-16 Thread Apache Jenkins Server
See 


Changes:

[A. Sophie Blee-Goldman] HOTFIX: timeout issue in removeStreamThread() (#10321)


--
[...truncated 3.60 MB...]
[STARTED/PASSED listing for SocketServerTest, KafkaTimerTest, LinuxIoMetricsCollectorTest, TransactionsWithMaxInFlightOneTest, and DefaultMessageFormatterTest elided; all tests shown in the original output passed]

Request permission to create KIP

2021-03-16 Thread Andrei Iatsuk
Hello! 

I created improvement task https://issues.apache.org/jira/browse/KAFKA-12481
and opened pull request https://github.com/apache/kafka/pull/10333
that solves it. ijuma says that, according to
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals,
I should create a KIP.

So could you grant me (https://cwiki.apache.org/confluence/display/~a.iatsuk)
permission to create a KIP?

Best regards, 
Andrei Iatsuk.

[jira] [Created] (KAFKA-12485) Speed up Consumer#committed by returning cached offsets for owned partitions

2021-03-16 Thread A. Sophie Blee-Goldman (Jira)
A. Sophie Blee-Goldman created KAFKA-12485:
--

 Summary: Speed up Consumer#committed by returning cached offsets 
for owned partitions
 Key: KAFKA-12485
 URL: https://issues.apache.org/jira/browse/KAFKA-12485
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: A. Sophie Blee-Goldman


All of the KafkaConsumer#committed APIs will currently make a remote blocking 
call to the server to fetch the committed offsets. This is typically used to 
reset the offsets after a crash or restart, or to fetch offsets for other 
consumers in the group. However some users may wish to invoke this API on 
partitions which are currently owned by the Consumer, in which case the remote 
call is unnecessary since those offsets should already be known.

We should consider optimizing these APIs to just return the cached offsets in 
place of the remote call when passed only partitions that are currently 
owned. This is similar to what we do in Consumer#position, although there we 
have a guarantee that the partitions are owned by the Consumer, whereas in 
#committed we do not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #632

2021-03-16 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12484) Enable Connect's connector log contexts by default

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12484:
-

 Summary: Enable Connect's connector log contexts by default
 Key: KAFKA-12484
 URL: https://issues.apache.org/jira/browse/KAFKA-12484
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


Connect's Log4J configuration does not by default log the connector contexts. 
That feature was added in 
[KIP-449|https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs]
 and first appeared in AK 2.3.0, but it was not enabled by default since that 
would not have been backward compatible.

But with AK 3.0.0, we have the opportunity to change the default in 
{{config/connect-log4j.properties}} to enable connector log contexts.
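For reference, the change would roughly amount to shipping a conversion pattern in {{config/connect-log4j.properties}} that includes the KIP-449 connector-context MDC key; the exact pattern below is illustrative and may differ from the final default:

```properties
# Sketch: log layout with connector log contexts enabled (KIP-449).
# %X{connector.context} expands to the MDC context, e.g. "[my-connector|task-0] "
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n
```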



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12483) Enable client overrides in connector configs by default

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12483:
-

 Summary: Enable client overrides in connector configs by default
 Key: KAFKA-12483
 URL: https://issues.apache.org/jira/browse/KAFKA-12483
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


Connector-specific client overrides were added in 
[KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy],
 but that feature is not enabled by default since it would not have been 
backward compatible.

But with AK 3.0.0, we have the opportunity to enable connector client overrides 
by default by changing the worker config's 
{{connector.client.config.override.policy}} default value to {{All}}.
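As a sketch, the proposed 3.0.0 default would be equivalent to running the worker with the following; the connector-level override shown in the comment is illustrative:

```properties
# Worker config: proposed 3.0.0 default (was previously the "None" policy)
connector.client.config.override.policy=All

# With policy All, an individual connector config may then override client
# properties via prefixed keys, e.g. (illustrative connector JSON snippet):
#   "consumer.override.max.poll.records": "100"
```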



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12482) Remove deprecated Connect worker configs

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12482:
-

 Summary: Remove deprecated Connect worker configs
 Key: KAFKA-12482
 URL: https://issues.apache.org/jira/browse/KAFKA-12482
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


The following Connect worker configuration properties were deprecated and 
should be removed in 3.0.0:
 * {{rest.host.name}} (deprecated in 
[KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface])

 * {{rest.port}} (deprecated in 
[KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface])
 * {{internal.key.converter}} (deprecated in 
[KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig])
 * {{internal.value.converter}} (deprecated in 
[KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig])
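As a sketch of the migration path for these removals (values illustrative; KIP-208 introduced the consolidated {{listeners}} property):

```properties
# Before (deprecated since KIP-208):
#   rest.host.name=connect.example.com
#   rest.port=8083
# After: use the consolidated listeners property instead:
listeners=http://connect.example.com:8083

# internal.key.converter / internal.value.converter (KIP-174) are simply
# removed; the worker uses its internal JSON converter unconditionally.
```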



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-16 Thread Sophie Blee-Goldman
>
> Could we not reuse the version of the subscription
> data? What are the main benefits of introducing "ClientTagsVersion"?


No, this version would have to be distinct from the protocol version that
we use for the subscription itself.
The reason being that the SubscriptionInfo version is only tied to the
metadata that we, as Kafka Streams,
choose to encode in a specific version. The "ClientTagsVersion" on the
other hand would be tied to the
specific tags that the *user* has chosen to encode in the
SubscriptionInfoData. So the SubscriptionInfo
version is a constant in the Streams source code, whereas you might have
any number of different
client tag versions as you evolve the client.tags used by your application.
They're orthogonal.

(As for how to determine the ClientTagsVersion, most likely we would need
to add an additional config
and push it to the user to bump the version when they change the tags. We
could try to derive it ourselves,
but then we would need some way to persist the current version & tags so
that we can tell when to bump
the version, and I couldn't think of an appropriate way to do this off the
top of my head. Anyways we don't
really need to have this conversation until we want to make the tags
evolvable, as long as we know it's at
least possible)

Of course, we could sidestep all of this by just serializing the name of
each client.tag instead of an encoded
key based on its position in the configured task.rack.aware.assignment.tags
list, right? I understand the intention
is to save on bytes, but (a) it was always my impression that the
AssignmentInfo was more at risk of hitting
the max message size than the SubscriptionInfo, since the assignment has to
encode info for all tasks
across the app while the subscription only encodes local info, and (b) how
long do we expect the client
tags to really be? Can't we just push this on to the user to come up with
their own schema and embed
an abbreviation code in the client.tags, or just tell them not to use
insanely long tags (which seems unlikely in
the first place)? WDYT?
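To make the size tradeoff concrete, here is a toy Python sketch (not the actual Streams wire format; tag names, layout, and one-byte length fields are all made up for illustration) comparing position-based and name-based tag encoding:

```python
# Toy comparison of two client-tag encodings. Position-based encoding is
# smaller but only works if every client agrees on the same ordered list.

TAG_ORDER = ["cluster", "zone"]  # hypothetical rack.aware.assignment.tags

def encode_by_index(client_tags):
    # (1-byte index into TAG_ORDER, 1-byte value length, value bytes)
    out = bytearray()
    for key, value in client_tags.items():
        idx = TAG_ORDER.index(key)  # relies on identical order everywhere
        val = value.encode()
        out += bytes([idx, len(val)]) + val
    return bytes(out)

def encode_by_name(client_tags):
    # (1-byte key length, key bytes, 1-byte value length, value bytes)
    out = bytearray()
    for key, value in client_tags.items():
        k, v = key.encode(), value.encode()
        out += bytes([len(k)]) + k + bytes([len(v)]) + v
    return bytes(out)

tags = {"cluster": "k8s-cluster-1", "zone": "eu-central-1a"}
print(len(encode_by_index(tags)), len(encode_by_name(tags)))  # prints: 30 41
```

The difference here is modest, which supports the argument that simply serializing the tag names (and documenting a sane length limit) may be an acceptable price for avoiding the ordering constraint.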

That makes sense about purposely excluding "standby" from the config name.
In that case
I would be happy with just "task.rack.aware.assignment.tags", although I'd
propose to shorten
it further by removing the "task" part of "standby.task" as well, ie just
"rack.aware.assignment.tags"
But I'll leave it up to Levani to make this call :)

On Tue, Mar 16, 2021 at 6:34 AM Bruno Cadonna 
wrote:

> Forgot to say ...
>
> I am fine with the rest of the name you proposed, i.e.,
> "task.rack.aware.assignment.tags".
>
> Best,
> Bruno
>
> On 16.03.21 09:30, Bruno Cadonna wrote:
> > Hi Sophie,
> >
> > I am +1, for explicitly documenting that this list must be identical in
> > contents and order across all clients in the application in the KIP. And
> > later in the docs of the config. If we are too much concerned about this
> > and we think that we should explicitly check the order and content, we
> > could think about another encoding that is order independent or not
> > encode the tags at all.
> >
> > Good point about upgrading and evolving. That seems related to the
> > encoding of the tags. If the encoding contained a bit more information,
> > we could use the intersection of the tags to evolve the tags of existing
> > applications. That means, we only use tags that are present on all
> > clients. Having an encoding that contains more information would also
> > give us the possibility to support removing, changing, and reordering of
> > tags.
> >
> > I do not completely understand your question about the
> > "ClientTagsVersion". Could we not reuse the version of the subscription
> > data? What are the main benefits of introducing "ClientTagsVersion"?
> > Versioning a field of the protocol seems a bit too detailed for me, but
> > I could easily be missing something important here.
> >
> > Regarding "standby.task.rack.aware.assignment.tags", we intentionally
> > tried to avoid the words "standby" and "replica" in this name to not
> > limit this config to standby tasks. In future, we may want to use the
> > same config also for active tasks. Admittedly, I do not yet know how we
> > would use this config for active tasks but I think it is better to keep
> > it generic as much as reasonable.
> >
> > Best,
> > Bruno
> >
> >
> > On 15.03.21 23:07, Sophie Blee-Goldman wrote:
> >> Hey Levani, thanks for the KIP! This looks great.
> >>
> >> A few comments/questions about the "task.assignment.rack.awareness"
> >> config:
> >> since this is used to determine the encoding of the client tags, we
> >> should
> >> make sure to specify that this list must be identical in contents and
> >> order
> >> across all clients in the application. Unfortunately there doesn't
> >> seem to
> >> be a good way to actually enforce this, so we should call it out in the
> >> config doc string at the least.
> >>
> >> On that note, should it be possible for users to upgrade or evolve their
> >> tags over time? For 

[jira] [Resolved] (KAFKA-12330) FetchSessionCache may cause starvation for partitions when FetchResponse is full

2021-03-16 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-12330.

Fix Version/s: 3.0.0
   Resolution: Fixed

> FetchSessionCache may cause starvation for partitions when FetchResponse is 
> full
> 
>
> Key: KAFKA-12330
> URL: https://issues.apache.org/jira/browse/KAFKA-12330
> Project: Kafka
>  Issue Type: Bug
>Reporter: Lucas Bradstreet
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.0.0
>
>
> Incremental fetch sessions in the FetchSessionCache deprioritize partitions 
> for which a response is returned. This may happen if log metadata such as log 
> start offset, hwm, etc. is returned, or if data for that partition is returned.
> When a fetch response fills to maxBytes, data may not be returned for 
> partitions even if the fetch offset is lower than the fetch upper bound. 
> However, the fetch response will still contain updates to metadata such as 
> hwm if that metadata has changed. This can lead to degenerate behavior where 
> a partition's hwm or log start offset is updated resulting in the next fetch 
> being unnecessarily skipped for that partition. At first this appeared to be 
> worse, as hwm updates occur frequently, but starvation should result in hwm 
> movement becoming blocked, allowing a fetch to go through and then becoming 
> unstuck. However, it'll still require one more fetch request than necessary 
> to do so. Consumers may be affected more than replica fetchers, however they 
> often remove partitions with fetched data from the next fetch request and 
> this may be helping prevent starvation.
> I believe we should only reorder the partition fetch priority if data is 
> actually returned for a partition.
> {noformat}
> private class PartitionIterator(val iter: FetchSession.RESP_MAP_ITER,
>                                 val updateFetchContextAndRemoveUnselected: Boolean)
>   extends FetchSession.RESP_MAP_ITER {
>   var nextElement: util.Map.Entry[TopicPartition, FetchResponse.PartitionData[Records]] = null
>   override def hasNext: Boolean = {
>     while ((nextElement == null) && iter.hasNext) {
>       val element = iter.next()
>       val topicPart = element.getKey
>       val respData = element.getValue
>       val cachedPart = session.partitionMap.find(new CachedPartition(topicPart))
>       val mustRespond = cachedPart.maybeUpdateResponseData(respData, updateFetchContextAndRemoveUnselected)
>       if (mustRespond) {
>         nextElement = element
>         // Example POC change:
>         // Don't move partition to end of queue if we didn't actually fetch data.
>         // This should help avoid starvation even when we are filling the
>         // fetch response fully while returning metadata for these partitions.
>         if (updateFetchContextAndRemoveUnselected && respData.records != null && respData.records.sizeInBytes > 0) {
>           session.partitionMap.remove(cachedPart)
>           session.partitionMap.mustAdd(cachedPart)
>         }
>       } else {
>         if (updateFetchContextAndRemoveUnselected) {
>           iter.remove()
>         }
>       }
>     }
>     nextElement != null
>   }{noformat}
>  
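To make the starvation scenario concrete, here is a small, hypothetical Python simulation (illustrative names only, not Kafka's actual classes) comparing the current "move on any response" behavior with the proposed "move only when data is returned" behavior:

```python
from collections import deque

DATA_SLOTS = 2     # how many partitions' data fit into one full fetch response
STARVED = "p4"     # partition whose high watermark moves every round

def run(move_on_metadata_only, rounds=3):
    """Simulate fetch rounds over a session's partition queue."""
    queue = deque(f"p{i}" for i in range(5))
    served = {p: 0 for p in queue}
    for _ in range(rounds):
        # The first DATA_SLOTS partitions in the queue get data returned and
        # are (fairly) moved to the back of the queue under both policies.
        with_data = [queue[i] for i in range(DATA_SLOTS)]
        for p in with_data:
            served[p] += 1
            queue.remove(p)
            queue.append(p)
        # STARVED always has an hwm update; under the current behavior a
        # metadata-only response also demotes it to the back of the queue.
        if move_on_metadata_only and STARVED not in with_data:
            queue.remove(STARVED)
            queue.append(STARVED)
    return served

current = run(move_on_metadata_only=True)    # current FetchSessionCache behavior
proposed = run(move_on_metadata_only=False)  # proposed: reorder only on data
```

After three rounds the current policy never serves p4 (its metadata-only responses keep pushing it to the back), while the proposed policy eventually does.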



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12427) Broker does not close muted idle connections with buffered data

2021-03-16 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-12427.

Fix Version/s: 3.0.0
 Reviewer: Rajini Sivaram
   Resolution: Fixed

> Broker does not close muted idle connections with buffered data
> ---
>
> Key: KAFKA-12427
> URL: https://issues.apache.org/jira/browse/KAFKA-12427
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Reporter: David Mao
>Assignee: David Mao
>Priority: Major
> Fix For: 3.0.0
>
>






Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #603

2021-03-16 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR; Various code cleanups (#10319)


--
[...truncated 7.38 MB...]

BrokerEndPointTest > testFromJsonV5() PASSED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() STARTED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() PASSED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() STARTED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() PASSED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
STARTED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
PASSED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() STARTED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() PASSED

JsonValueTest > testJsonObjectIterator() STARTED

JsonValueTest > testJsonObjectIterator() PASSED

JsonValueTest > testDecodeLong() STARTED

JsonValueTest > testDecodeLong() PASSED

JsonValueTest > testAsJsonObject() STARTED

JsonValueTest > testAsJsonObject() PASSED

JsonValueTest > testDecodeDouble() STARTED

JsonValueTest > testDecodeDouble() PASSED

JsonValueTest > testDecodeOption() STARTED

JsonValueTest > testDecodeOption() PASSED

JsonValueTest > testDecodeString() STARTED

JsonValueTest > testDecodeString() PASSED

JsonValueTest > testJsonValueToString() STARTED

JsonValueTest > testJsonValueToString() PASSED

JsonValueTest > testAsJsonObjectOption() STARTED

JsonValueTest > testAsJsonObjectOption() PASSED

JsonValueTest > testAsJsonArrayOption() STARTED

JsonValueTest > testAsJsonArrayOption() PASSED

JsonValueTest > testAsJsonArray() STARTED

JsonValueTest > testAsJsonArray() PASSED

JsonValueTest > testJsonValueHashCode() STARTED

JsonValueTest > testJsonValueHashCode() PASSED

JsonValueTest > testDecodeInt() STARTED

JsonValueTest > testDecodeInt() PASSED

JsonValueTest > testDecodeMap() STARTED

JsonValueTest > testDecodeMap() PASSED

JsonValueTest > testDecodeSeq() STARTED

JsonValueTest > testDecodeSeq() PASSED

JsonValueTest > testJsonObjectGet() STARTED

JsonValueTest > testJsonObjectGet() PASSED

JsonValueTest > testJsonValueEquals() STARTED

JsonValueTest > testJsonValueEquals() PASSED

JsonValueTest > testJsonArrayIterator() STARTED

JsonValueTest > testJsonArrayIterator() PASSED

JsonValueTest > testJsonObjectApply() STARTED

JsonValueTest > testJsonObjectApply() PASSED

JsonValueTest > testDecodeBoolean() STARTED

JsonValueTest > testDecodeBoolean() PASSED

PasswordEncoderTest > testEncoderConfigChange() STARTED

PasswordEncoderTest > testEncoderConfigChange() PASSED

PasswordEncoderTest > testEncodeDecodeAlgorithms() STARTED

PasswordEncoderTest > testEncodeDecodeAlgorithms() PASSED

PasswordEncoderTest > testEncodeDecode() STARTED

PasswordEncoderTest > testEncodeDecode() PASSED

ThrottlerTest > testThrottleDesiredRate() STARTED

ThrottlerTest > testThrottleDesiredRate() PASSED

LoggingTest > testLoggerLevelIsResolved() STARTED

LoggingTest > testLoggerLevelIsResolved() PASSED

LoggingTest > testLog4jControllerIsRegistered() STARTED

LoggingTest > testLog4jControllerIsRegistered() PASSED

LoggingTest > testTypeOfGetLoggers() STARTED

LoggingTest > testTypeOfGetLoggers() PASSED

LoggingTest > testLogName() STARTED

LoggingTest > testLogName() PASSED

LoggingTest > testLogNameOverride() STARTED

LoggingTest > testLogNameOverride() PASSED

TimerTest > testAlreadyExpiredTask() STARTED

TimerTest > testAlreadyExpiredTask() PASSED

TimerTest > testTaskExpiration() STARTED

TimerTest > testTaskExpiration() PASSED

ReplicationUtilsTest > testUpdateLeaderAndIsr() STARTED

ReplicationUtilsTest > testUpdateLeaderAndIsr() PASSED

TopicFilterTest > testIncludeLists() STARTED

TopicFilterTest > testIncludeLists() PASSED

RaftManagerTest > testShutdownIoThread() STARTED

RaftManagerTest > testShutdownIoThread() PASSED

RaftManagerTest > testUncaughtExceptionInIoThread() STARTED

RaftManagerTest > testUncaughtExceptionInIoThread() PASSED

RequestChannelTest > testNonAlterRequestsNotTransformed() STARTED

RequestChannelTest > testNonAlterRequestsNotTransformed() PASSED

RequestChannelTest > testAlterRequests() STARTED

RequestChannelTest > testAlterRequests() PASSED

RequestChannelTest > testJsonRequests() STARTED

RequestChannelTest > testJsonRequests() PASSED

RequestChannelTest > testIncrementalAlterRequests() STARTED

RequestChannelTest > testIncrementalAlterRequests() PASSED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 STARTED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
PASSED


[jira] [Created] (KAFKA-12481) Add socket.nagle.disable config to reduce number of packets

2021-03-16 Thread Andrei Iatsuk (Jira)
Andrei Iatsuk created KAFKA-12481:
-

 Summary: Add socket.nagle.disable config to reduce number of 
packets 
 Key: KAFKA-12481
 URL: https://issues.apache.org/jira/browse/KAFKA-12481
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.6.1, 2.7.0, 2.5.1, 2.4.1, 2.3.1, 2.2.2, 2.1.1, 2.0.1, 
1.1.1, 1.0.2, 0.11.0.3, 0.10.2.2, 0.8.2.2
Reporter: Andrei Iatsuk
 Attachments: Screenshot 2021-03-13 at 00.29.10.png, Screenshot 
2021-03-13 at 00.44.43.png, Screenshot 2021-03-14 at 01.46.00.png, Screenshot 
2021-03-14 at 01.52.01.png, Screenshot 2021-03-16 at 21.05.03.png, Screenshot 
2021-03-16 at 21.12.17.png

*What to do?*

Add a _socket.nagle.disable_ parameter to the Apache Kafka config, like in 
[librdkafka|https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md].

*What is the reason for this improvement?*

A large number of topic-partitions on one broker causes a burst in the host's 
packets/sec metric. The traffic shaper in the cloud can no longer cope with 
such a load, causing service degradation.

*How to reproduce?*
 # Create a Kafka cluster with 4 brokers. The packets/sec rate is ~120.
 # Add 100 topics with 100 partitions each and replication factor = 3, i.e. 
30k topic-partitions in total. The packets/sec rate is ~15k.
{code:python}
import os

for i in range(100):
  print(f"create topic 'flower{i}'... ", end="")
  cmd = ("kafka-topics.sh --create --bootstrap-server {} --topic {} "
         "--partitions {} --replication-factor {}"
        ).format("databus.andrei-iatsuk.ec.odkl.ru:9092", f"flower{i}", 100, 3)
  code = os.system(cmd)
  print("ok" if code == 0 else "error")
{code}
!Screenshot 2021-03-16 at 21.05.03.png!

 # Generate server load by running the following script in 4 terminals. The 
packets/sec rate is ~130k.
{code:python}
import time
from pykafka import KafkaClient

client = KafkaClient(hosts="databus.andrei-iatsuk.ec.odkl.ru:9092")
while True:
  for i in range(100):
    print(f"sent message to 'flower{i}'")
    with client.topics[f"flower{i}"].get_sync_producer() as producer:
      for j in range(1000):
        producer.produce(str.encode(f'test message {j} in topic flower{i}' * 10))
{code}
!Screenshot 2021-03-13 at 00.44.43.png! 
 !Screenshot 2021-03-13 at 00.29.10.png!

 # Capture a dump of TCP connections via tcpdump for ~2 seconds:
{code:java}
$ tcpdump -i eth1 -w dump.pcap
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 
bytes
^C8873886 packets captured
9139050 packets received by filter
265028 packets dropped by kernel
{code}

 # Load the dump into Wireshark and inspect the statistics: ~99.999% of the 
packets are inter-broker messages of 40-160 bytes. In the screenshots, the 
hosts with IPs 10.16.23.[157-160] are the brokers:
 !Screenshot 2021-03-14 at 01.46.00.png! 
 !Screenshot 2021-03-14 at 01.52.01.png!

*How to fix?*
 # Add a boolean _socket.nagle.disable_ parameter to the Apache Kafka config 
and pass its value to the kafka.network.Acceptor.accept(key) method in 
[https://github.com/apache/kafka/blob/2.4/core/src/main/scala/kafka/network/SocketServer.scala#L646]
 # With TCP_NODELAY disabled:
 ## ~400 packets/s for an idle broker (instead of ~12k packets/s)
 ## ~3k packets/s for a loaded broker (instead of ~150k packets/s)
 !Screenshot 2021-03-16 at 21.12.17.png!
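For reference, the proposed config would toggle a single socket option. A hypothetical sketch in Python (the real change would be in the Scala SocketServer; the helper name is illustrative):

```python
import socket

def apply_nagle_config(sock: socket.socket, nagle_disable: bool) -> int:
    # Hypothetical mapping of the proposed socket.nagle.disable config:
    # TCP_NODELAY=1 disables Nagle's algorithm (small writes are sent
    # immediately, producing more packets); TCP_NODELAY=0 keeps Nagle
    # enabled, so small inter-broker writes are coalesced into fewer,
    # larger packets.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY,
                    1 if nagle_disable else 0)
    return sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
```

Today Kafka unconditionally sets TCP_NODELAY on accepted connections; the ticket proposes making that flag configurable.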





Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #574

2021-03-16 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR; Various code cleanups (#10319)


--
[...truncated 3.67 MB...]
LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
PASSED

ProducerStateManagerTest > 

[jira] [Created] (KAFKA-12480) Reuse bootstrap servers in clients when last alive broker in cluster metadata is unavailable

2021-03-16 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-12480:
-

 Summary: Reuse bootstrap servers in clients when last alive broker 
in cluster metadata is unavailable
 Key: KAFKA-12480
 URL: https://issues.apache.org/jira/browse/KAFKA-12480
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Ron Dagostino


https://issues.apache.org/jira/browse/KAFKA-12455 documented how a Java client 
can temporarily lose connectivity to a 2-broker cluster that is undergoing a 
roll because the client will repeatedly retry connecting to the last alive 
broker that it knows about in the cluster metadata even when that broker is 
unavailable.  The client could potentially fall back to its bootstrap brokers in 
this case and reconnect to the cluster more quickly.

For example, assume a 2-broker cluster has broker IDs 1 and 2 and both appear 
in the bootstrap servers for a consumer.  Assume broker 1 rolls such that the 
Java consumer receives a new METADATA response and only knows about broker 2 
being alive, and then broker 2 rolls before the consumer gets a new METADATA 
response indicating that broker 1 is also alive.  At this point the Java 
consumer will keep retrying broker 2, and it will not reconnect to the cluster 
unless/until broker 2 becomes available -- or the client itself is restarted so 
it can use its bootstrap servers again.  Another possibility is to fallback to 
the full bootstrap servers list when the last alive broker becomes unavailable.

I believe librdkafka-based clients may perform this fallback, though I am not 
certain.  We should consider it for Java clients.
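A minimal sketch of the proposed fallback (hypothetical helper names; the real logic would live in the Java client's network layer):

```python
def connect_candidates(alive_brokers, bootstrap_servers, unreachable):
    """Pick brokers to try connecting to.

    Hypothetical fallback from the ticket: if every broker in the current
    cluster metadata is unreachable, retry the original bootstrap list
    instead of spinning on the last known alive broker.
    """
    reachable = [b for b in alive_brokers if b not in unreachable]
    return reachable if reachable else list(bootstrap_servers)

# Scenario from the description: cluster metadata only knows broker 2,
# and broker 2 is rolling; fall back to the full bootstrap list.
candidates = connect_candidates(["broker2"], ["broker1", "broker2"], {"broker2"})
```

Once any broker responds with fresh metadata, the client would resume using the cluster metadata as usual.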






[GitHub] [kafka-site] tombentley commented on pull request #337: Add tombentley to committers

2021-03-16 Thread GitBox


tombentley commented on pull request #337:
URL: https://github.com/apache/kafka-site/pull/337#issuecomment-800299348


   Thanks @guozhangwang!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] tombentley merged pull request #337: Add tombentley to committers

2021-03-16 Thread GitBox


tombentley merged pull request #337:
URL: https://github.com/apache/kafka-site/pull/337


   







Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-16 Thread Bruno Cadonna

Forgot to say ...

I am fine with the rest of the name you proposed, i.e., 
"task.rack.aware.assignment.tags".


Best,
Bruno

On 16.03.21 09:30, Bruno Cadonna wrote:

Hi Sophie,

I am +1, for explicitly documenting that this list must be identical in 
contents and order across all clients in the application in the KIP. And 
later in the docs of the config. If we are too much concerned about this 
and we think that we should explicitly check the order and content, we 
could think about another encoding that is order independent or not 
encode the tags at all.


Good point about upgrading and evolving. That seems related to the 
encoding of the tags. If the encoding contained a bit more information, 
we could use the intersection of the tags to evolve the tags of existing 
applications. That means, we only use tags that are present on all 
clients. Having an encoding that contains more information would also 
give us the possibility to support removing, changing, and reordering of 
tags.


I do not completely understand your question about the 
"ClientTagsVersion". Could we not reuse the version of the subscription 
data? What are the main benefits of introducing "ClientTagsVersion"? 
Versioning a field of the protocol seems a bit too detailed for me, but 
I could easily be missing something important here.


Regarding "standby.task.rack.aware.assignment.tags", we intentionally 
tried to avoid the words "standby" and "replica" in this name to not 
limit this config to standby tasks. In future, we may want to use the 
same config also for active tasks. Admittedly, I do not yet know how we 
would use this config for active tasks but I think it is better to keep 
it generic as much as reasonable.


Best,
Bruno


On 15.03.21 23:07, Sophie Blee-Goldman wrote:

Hey Levani, thanks for the KIP! This looks great.

A few comments/questions about the "task.assignment.rack.awareness" 
config:
since this is used to determine the encoding of the client tags, we 
should
make sure to specify that this list must be identical in contents and 
order
across all clients in the application. Unfortunately there doesn't 
seem to

be a good way to actually enforce this, so we should call it out in the
config doc string at the least.

On that note, should it be possible for users to upgrade or evolve their
tags over time? For example if a user wants to leverage this feature 
for an
existing app, or add new tags to an app that already has some 
configured. I
think we would need to either enforce that you can only add new tags 
to the

end but never remove/change/reorder the existing ones, or else adopt a
similar strategy as to version probing and force all clients to remain on
the old protocol until everyone in the group has been updated to use the
new tags. It's fine with me if you'd prefer to leave this out of scope 
for
the time being, as long as we design this to be forwards-compatible as 
best

we can. Have you considered adding a "ClientTagsVersion" to the
SubscriptionInfo so that we're set up to extend this feature in the 
future?

In my experience so far, any time we *don't* version a protocol like this
we end up regretting it later.

Whatever you decide, it should be documented clearly -- in the KIP and in
the actual docs, ie upgrade guide and/or the config doc string -- so that
users know whether they can ever change the client tags on a running
application or not. (I think this is hinted at in the KIP, but not called
out explicitly)

By the way, I feel the "task.assignment.rack.awareness" config should 
have

the words "clients" and/or "tags" somewhere in it, otherwise it's a bit
unclear what it actually means. And maybe it should specify that it 
applies

to standby task placement only? Obviously we don't need to cover every
possible detail in the config name alone, but it could be a little more
specific. What about "standby.task.rack.aware.assignment.tags" or 
something

like that?

On Mon, Mar 15, 2021 at 2:12 PM Levani Kokhreidze 


wrote:


Hello all,

Bumping this thread as we are one binding vote short accepting this KIP.
Please let me know if you have any extra concerns and/or suggestions.

Regards,
Levani


On 12. Mar 2021, at 13:14, Levani Kokhreidze 

wrote:


Hi Guozhang,

Thanks for the feedback. I think it makes sense.
I updated the KIP with your proposal [1], it’s a nice optimisation.
I do agree that having the same configuration across Kafka Streams

instances is the reasonable requirement.


Best,
Levani

[1] -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams 


<
https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams 






On 12. Mar 2021, at 03:36, Guozhang Wang 
wangg...@gmail.com>> wrote:


Hello Levani,

Thanks for the great write-up! I think this proposal makes sense,

though I
have one minor suggestion regarding the protocol format change: 
note the
subscription info is part of the group metadata message that we 

[jira] [Created] (KAFKA-12479) Combine partition offset requests into single request in ConsumerGroupCommand

2021-03-16 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-12479:
--

 Summary: Combine partition offset requests into single request in 
ConsumerGroupCommand
 Key: KAFKA-12479
 URL: https://issues.apache.org/jira/browse/KAFKA-12479
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


We currently send one request per partition to obtain offset information. For a 
group with a large number of partitions, this can take several minutes. It 
would be more efficient to send a single request containing all partitions.
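The improvement amounts to reducing round trips from O(partitions) to O(1); an illustrative model (hypothetical names, not the actual ConsumerGroupCommand code):

```python
def offset_request_batches(partitions, batched):
    """Return the per-request partition groupings.

    Illustrative only: with one request per partition, a group with N
    partitions costs N round trips; a single batched request costs one.
    """
    if batched:
        return [list(partitions)]      # one request carrying all partitions
    return [[p] for p in partitions]   # one request per partition

parts = [f"topic-{i}" for i in range(3)]
unbatched = offset_request_batches(parts, batched=False)  # 3 round trips
batched = offset_request_batches(parts, batched=True)     # 1 round trip
```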





Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-16 Thread Rajini Sivaram
Congratulations, Tom!

Regards,

Rajini

On Tue, Mar 16, 2021 at 10:39 AM Satish Duggana 
wrote:

> Congratulations Tom!!
>
> On Tue, 16 Mar 2021 at 13:30, David Jacot 
> wrote:
>
> > Congrats, Tom!
> >
> > On Tue, Mar 16, 2021 at 7:40 AM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > Congrats, Tom!
> > >
> > > On Tue, Mar 16, 2021 at 8:32 AM Konstantine Karantasis
> > >  wrote:
> > >
> > > > Congratulations Tom!
> > > > Well deserved.
> > > >
> > > > Konstantine
> > > >
> > > > On Mon, Mar 15, 2021 at 4:52 PM Luke Chen  wrote:
> > > >
> > > > > Congratulations!
> > > > >
> > > > > > Federico Valeri  wrote on Tue, Mar 16, 2021 at 4:11 AM:
> > > > >
> > > > > > Congrats, Tom!
> > > > > >
> > > > > > Well deserved.
> > > > > >
> > > > > > On Mon, Mar 15, 2021, 8:09 PM Paolo Patierno  >
> > > > wrote:
> > > > > >
> > > > > > > Congratulations Tom!
> > > > > > >
> > > > > > > Get Outlook for Android
> > > > > > >
> > > > > > > 
> > > > > > > From: Guozhang Wang 
> > > > > > > Sent: Monday, March 15, 2021 8:02:44 PM
> > > > > > > To: dev 
> > > > > > > Subject: Re: [ANNOUNCE] New committer: Tom Bentley
> > > > > > >
> > > > > > > Congratulations Tom!
> > > > > > >
> > > > > > > Guozhang
> > > > > > >
> > > > > > > On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck
> > > >  > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Congratulations, Tom!
> > > > > > > >
> > > > > > > > -Bill
> > > > > > > >
> > > > > > > > On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna
> > > > > >  > > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Congrats, Tom!
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Bruno
> > > > > > > > >
> > > > > > > > > On 15.03.21 18:59, Mickael Maison wrote:
> > > > > > > > > > Hi all,
> > > > > > > > > >
> > > > > > > > > > The PMC for Apache Kafka has invited Tom Bentley as a
> > > > committer,
> > > > > > and
> > > > > > > > > > we are excited to announce that he accepted!
> > > > > > > > > >
> > > > > > > > > > Tom first contributed to Apache Kafka in June 2017 and
> has
> > > been
> > > > > > > > > > actively contributing since February 2020.
> > > > > > > > > > He has accumulated 52 commits and worked on a number of
> > KIPs.
> > > > > Here
> > > > > > > are
> > > > > > > > > > some of the most significant ones:
> > > > > > > > > > KIP-183: Change PreferredReplicaLeaderElectionCommand
> > to
> > > > use
> > > > > > > > > AdminClient
> > > > > > > > > > KIP-195: AdminClient.createPartitions
> > > > > > > > > > KIP-585: Filter and Conditional SMTs
> > > > > > > > > > KIP-621: Deprecate and replace
> > > DescribeLogDirsResult.all()
> > > > > and
> > > > > > > > > .values()
> > > > > > > > > > KIP-707: The future of KafkaFuture (still in
> > discussion)
> > > > > > > > > >
> > > > > > > > > > In addition, he is very active on the mailing list and
> has
> > > > helped
> > > > > > > > > > review many KIPs.
> > > > > > > > > >
> > > > > > > > > > Congratulations Tom and thanks for all the contributions!
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > -- Guozhang
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-16 Thread Satish Duggana
Congratulations Tom!!

On Tue, 16 Mar 2021 at 13:30, David Jacot 
wrote:

> Congrats, Tom!
>
> On Tue, Mar 16, 2021 at 7:40 AM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > Congrats, Tom!
> >
> > On Tue, Mar 16, 2021 at 8:32 AM Konstantine Karantasis
> >  wrote:
> >
> > > Congratulations Tom!
> > > Well deserved.
> > >
> > > Konstantine
> > >
> > > On Mon, Mar 15, 2021 at 4:52 PM Luke Chen  wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Federico Valeri  wrote on Tue, Mar 16, 2021 at 4:11 AM:
> > > >
> > > > > Congrats, Tom!
> > > > >
> > > > > Well deserved.
> > > > >
> > > > > On Mon, Mar 15, 2021, 8:09 PM Paolo Patierno 
> > > wrote:
> > > > >
> > > > > > Congratulations Tom!
> > > > > >
> > > > > > Get Outlook for Android
> > > > > >
> > > > > > 
> > > > > > From: Guozhang Wang 
> > > > > > Sent: Monday, March 15, 2021 8:02:44 PM
> > > > > > To: dev 
> > > > > > Subject: Re: [ANNOUNCE] New committer: Tom Bentley
> > > > > >
> > > > > > Congratulations Tom!
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Mon, Mar 15, 2021 at 11:25 AM Bill Bejeck
> > >  > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Congratulations, Tom!
> > > > > > >
> > > > > > > -Bill
> > > > > > >
> > > > > > > On Mon, Mar 15, 2021 at 2:08 PM Bruno Cadonna
> > > > >  > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Congrats, Tom!
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Bruno
> > > > > > > >
> > > > > > > > On 15.03.21 18:59, Mickael Maison wrote:
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > The PMC for Apache Kafka has invited Tom Bentley as a
> > > committer,
> > > > > and
> > > > > > > > > we are excited to announce that he accepted!
> > > > > > > > >
> > > > > > > > > Tom first contributed to Apache Kafka in June 2017 and has
> > been
> > > > > > > > > actively contributing since February 2020.
> > > > > > > > > He has accumulated 52 commits and worked on a number of
> KIPs.
> > > > Here
> > > > > > are
> > > > > > > > > some of the most significant ones:
> > > > > > > > > KIP-183: Change PreferredReplicaLeaderElectionCommand
> to
> > > use
> > > > > > > > AdminClient
> > > > > > > > > KIP-195: AdminClient.createPartitions
> > > > > > > > > KIP-585: Filter and Conditional SMTs
> > > > > > > > > KIP-621: Deprecate and replace
> > DescribeLogDirsResult.all()
> > > > and
> > > > > > > > .values()
> > > > > > > > > KIP-707: The future of KafkaFuture (still in
> discussion)
> > > > > > > > >
> > > > > > > > > In addition, he is very active on the mailing list and has
> > > helped
> > > > > > > > > review many KIPs.
> > > > > > > > >
> > > > > > > > > Congratulations Tom and thanks for all the contributions!
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] KIP-708: Rack awareness for Kafka Streams

2021-03-16 Thread Bruno Cadonna

Hi Sophie,

I am +1 for explicitly documenting in the KIP, and later in the docs of 
the config, that this list must be identical in contents and order across 
all clients in the application. If we are very concerned about this and 
think we should explicitly check the order and contents, we could 
consider another encoding that is order independent, or not encode the 
tags at all.


Good point about upgrading and evolving. That seems related to the 
encoding of the tags. If the encoding contained a bit more information, 
we could use the intersection of the tags to evolve the tags of existing 
applications; that is, we would only use tags that are present on all 
clients. An encoding that carries more information would also give us 
the possibility to support removing, changing, and reordering tags.
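As a rough sketch of the intersection idea (an assumption about how it might work, not proposed code): only tag keys present on every client would take effect, so clients with old and new tag lists could coexist in one group during an upgrade.

```python
# Sketch: during an upgrade, only tag keys present on every client are
# used by the assignor, so mixed old/new configurations stay compatible.
def effective_tag_keys(per_client_tag_keys):
    common = set(per_client_tag_keys[0])
    for keys in per_client_tag_keys[1:]:
        common &= set(keys)
    return sorted(common)  # deterministic order for assignment decisions

# One client already configures a new "rack" tag; the other does not yet:
print(effective_tag_keys([["zone", "cluster", "rack"], ["zone", "cluster"]]))
# -> ['cluster', 'zone']
```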


I do not completely understand your question about the 
"ClientTagsVersion". Could we not reuse the version of the subscription 
data? What are the main benefits of introducing "ClientTagsVersion"? 
Versioning a single field of the protocol seems a bit too fine-grained 
to me, but I could easily be missing something important here.


Regarding "standby.task.rack.aware.assignment.tags", we intentionally 
avoided the words "standby" and "replica" in this name so as not to 
limit this config to standby tasks. In the future, we may want to use 
the same config for active tasks as well. Admittedly, I do not yet know 
how we would use this config for active tasks, but I think it is better 
to keep it as generic as is reasonable.


Best,
Bruno


On 15.03.21 23:07, Sophie Blee-Goldman wrote:

Hey Levani, thanks for the KIP! This looks great.

A few comments/questions about the "task.assignment.rack.awareness" config:
since this is used to determine the encoding of the client tags, we should
make sure to specify that this list must be identical in contents and order
across all clients in the application. Unfortunately there doesn't seem to
be a good way to actually enforce this, so we should call it out in the
config doc string at the least.

On that note, should it be possible for users to upgrade or evolve their
tags over time? For example if a user wants to leverage this feature for an
existing app, or add new tags to an app that already has some configured. I
think we would need to either enforce that you can only add new tags to the
end but never remove/change/reorder the existing ones, or else adopt a
similar strategy as to version probing and force all clients to remain on
the old protocol until everyone in the group has been updated to use the
new tags. It's fine with me if you'd prefer to leave this out of scope for
the time being, as long as we design this to be forwards-compatible as best
we can. Have you considered adding a "ClientTagsVersion" to the
SubscriptionInfo so that we're set up to extend this feature in the future?
In my experience so far, any time we *don't* version a protocol like this
we end up regretting it later.
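A hypothetical illustration of that benefit (all field names here are invented for the sketch): a reader that finds a tags version newer than it supports can ignore the tags rather than decode them incorrectly, which leaves room to change the encoding later.

```python
SUPPORTED_TAGS_VERSION = 1  # hypothetical: highest tag encoding this client knows

def read_client_tags(subscription):
    # "client_tags_version" and "client_tags" are invented field names
    if subscription.get("client_tags_version", 1) > SUPPORTED_TAGS_VERSION:
        return {}  # unknown future encoding: treat the client as untagged
    return subscription.get("client_tags", {})

assert read_client_tags({"client_tags_version": 1,
                         "client_tags": {"zone": "eu-1"}}) == {"zone": "eu-1"}
assert read_client_tags({"client_tags_version": 2,
                         "client_tags": {"zone": "eu-1"}}) == {}
```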

Whatever you decide, it should be documented clearly -- in the KIP and in
the actual docs, ie upgrade guide and/or the config doc string -- so that
users know whether they can ever change the client tags on a running
application or not. (I think this is hinted at in the KIP, but not called
out explicitly)

By the way, I feel the "task.assignment.rack.awareness" config should have
the words "clients" and/or "tags" somewhere in it, otherwise it's a bit
unclear what it actually means. And maybe it should specify that it applies
to standby task placement only? Obviously we don't need to cover every
possible detail in the config name alone, but it could be a little more
specific. What about "standby.task.rack.aware.assignment.tags" or something
like that?

On Mon, Mar 15, 2021 at 2:12 PM Levani Kokhreidze 
wrote:


Hello all,

Bumping this thread as we are one binding vote short accepting this KIP.
Please let me know if you have any extra concerns and/or suggestions.

Regards,
Levani


On 12. Mar 2021, at 13:14, Levani Kokhreidze 

wrote:


Hi Guozhang,

Thanks for the feedback. I think it makes sense.
I updated the KIP with your proposal [1], it’s a nice optimisation.
I do agree that having the same configuration across Kafka Streams

instances is the reasonable requirement.


Best,
Levani

[1] -

https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams





On 12. Mar 2021, at 03:36, Guozhang Wang wrote:


Hello Levani,

Thanks for the great write-up! I think this proposal makes sense,

though I

have one minor suggestion regarding the protocol format change: note the
subscription info is part of the group metadata message that we need to
write into the internal topic, and hence it's always better if we can

save

on the number of bytes written there. For this, I'm wondering if we can
encode the key part instead of writing 

Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-16 Thread David Jacot
Congrats, Tom!



[jira] [Created] (KAFKA-12478) Consumer group may lose data for newly expanded partitions when add partitions for topic if the group is set to consume from the latest

2021-03-16 Thread hudeqi (Jira)
hudeqi created KAFKA-12478:
--

 Summary: Consumer group may lose data for newly expanded 
partitions when add partitions for topic if the group is set to consume from 
the latest
 Key: KAFKA-12478
 URL: https://issues.apache.org/jira/browse/KAFKA-12478
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.7.0
Reporter: hudeqi


  This problem surfaced in our production environment: a topic is used to 
produce monitoring data. *After expanding its partitions, the consuming side of 
the business reported that data was lost.*

  After a preliminary investigation, the lost data was all concentrated in the 
newly added partitions. The reason: when the topic is expanded on the server, 
the producer perceives the expansion first and writes some data into the new 
partitions. The consumer group perceives the expansion later; once the 
rebalance completes, the new partitions are consumed from the latest offset if 
the group is configured to consume from the latest. During that window, the 
data already written to the new partitions is skipped and lost by the consumer.

  On the other hand, configuring a high-throughput topic to consume from the 
earliest offset at startup would make the group aggressively consume historical 
data from the brokers, which affects broker performance to a certain extent.
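The race can be simulated without a broker; this toy model (not Kafka client code) only mimics the offset-reset semantics for a newly added partition:

```python
def reset_offset(partition_log, policy):
    # "latest" starts at the log-end offset; "earliest" starts at offset 0
    return len(partition_log) if policy == "latest" else 0

new_partition = ["m1", "m2", "m3"]  # producer writes before the group rebalances

start = reset_offset(new_partition, "latest")
assert new_partition[start:] == []  # m1..m3 are skipped -> perceived data loss

start = reset_offset(new_partition, "earliest")
assert new_partition[start:] == ["m1", "m2", "m3"]  # no loss, but full replay
```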



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New committer: Tom Bentley

2021-03-16 Thread Kamal Chandraprakash
Congrats, Tom!

>