Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1248

2022-09-21 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1247

2022-09-21 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-14236) ListGroups request produces too much Denied logs in authorizer

2022-09-21 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-14236.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> ListGroups request produces too much Denied logs in authorizer
> --
>
> Key: KAFKA-14236
> URL: https://issues.apache.org/jira/browse/KAFKA-14236
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.0.0
>Reporter: Alexandre GRIFFAUT
>Priority: Major
>  Labels: patch-available
> Fix For: 3.4.0
>
>
> Context
> On a multi-tenant secured cluster with many consumers, a call to the 
> ListGroups API will log an authorization error for each consumer group of 
> the other tenants.
> Reason
> The handleListGroupsRequest function first tries to authorize DESCRIBE 
> CLUSTER, and if that fails it then tries to authorize DESCRIBE GROUP on 
> each consumer group.
> Fix
> In that case neither the DESCRIBE CLUSTER nor the DESCRIBE GROUP on other 
> tenants' groups was intended, so these checks should be specified in the 
> Action using logIfDenied: false
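The idea behind the fix can be shown with a small, self-contained sketch. The real class is org.apache.kafka.server.authorizer.Action, which carries logIfAllowed/logIfDenied flags; the mini-authorizer below is hypothetical and only mirrors that behavior:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mini-authorizer illustrating the logIfDenied flag; the real
// class is org.apache.kafka.server.authorizer.Action.
public class AuthSketch {
    record Action(String operation, String resource, boolean logIfDenied) {}

    static final List<String> authLog = new ArrayList<>();

    // Deny everything; log the denial only when the action asks for it.
    static boolean authorize(Action action) {
        boolean allowed = false;
        if (!allowed && action.logIfDenied()) {
            authLog.add("Denied " + action.operation() + " on " + action.resource());
        }
        return allowed;
    }

    public static void main(String[] args) {
        // ListGroups-style check: the DESCRIBE CLUSTER probe is opportunistic,
        // so a denial is expected and should not be logged.
        authorize(new Action("DESCRIBE", "CLUSTER", false));
        // A check the caller actually requested should still log its denial.
        authorize(new Action("DESCRIBE", "GROUP:my-group", true));
        System.out.println(authLog); // only the GROUP denial appears
    }
}
```

With logIfDenied set to false on the opportunistic DESCRIBE CLUSTER probe, the authorizer log no longer fills with expected denials for other tenants' groups.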



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14254) Format timestamps in assignor logs as dates instead of integers

2022-09-21 Thread John Roesler (Jira)
John Roesler created KAFKA-14254:


 Summary: Format timestamps in assignor logs as dates instead of 
integers
 Key: KAFKA-14254
 URL: https://issues.apache.org/jira/browse/KAFKA-14254
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


This is a follow-on task from [https://github.com/apache/kafka/pull/12582]

There is another log line that prints the same timestamp: "Triggering the 
followup rebalance scheduled for ...", which should also be printed as a 
date/time in the same manner as PR 12582.

We should also search the codebase a little to see if we're printing timestamps 
in other log lines that would be better off as date/times.
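As a sketch of the proposed change (class and method names here are illustrative, not the actual Streams code), the raw epoch-millis value can be rendered with java.time.Instant:

```java
import java.time.Instant;

// Illustrative only: the difference between logging a raw epoch-millis
// timestamp and a human-readable ISO-8601 date/time.
public class TimestampLog {
    static String humanReadable(long epochMillis) {
        return Instant.ofEpochMilli(epochMillis).toString();
    }

    public static void main(String[] args) {
        long ts = 1663718400000L;
        // Before: an opaque integer in the log line.
        System.out.println("Triggering the followup rebalance scheduled for " + ts);
        // After: a readable UTC timestamp.
        System.out.println("Triggering the followup rebalance scheduled for " + humanReadable(ts));
        // prints "... scheduled for 2022-09-21T00:00:00Z"
    }
}
```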



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14253) StreamsPartitionAssignor should print the member count in assignment logs

2022-09-21 Thread John Roesler (Jira)
John Roesler created KAFKA-14253:


 Summary: StreamsPartitionAssignor should print the member count in 
assignment logs
 Key: KAFKA-14253
 URL: https://issues.apache.org/jira/browse/KAFKA-14253
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


Debugging rebalance and assignment issues is harder than it needs to be. One 
simple thing that can help is to print out information in the logs that users 
have to compute today.

For example, the StreamsPartitionAssignor prints two messages that contain the 
newline-delimited group membership:
{code:java}
[StreamsPartitionAssignor] [...-StreamThread-1] stream-thread 
[...-StreamThread-1-consumer] All members participating in this rebalance:

: []

: []

: []{code}
and
{code:java}
[StreamsPartitionAssignor] [...-StreamThread-1] stream-thread 
[...-StreamThread-1-consumer] Assigned tasks [...] including stateful [...] to 
clients as:

=[activeTasks: ([...]) standbyTasks: ([...])]

=[activeTasks: ([...]) standbyTasks: ([...])]

=[activeTasks: ([...]) standbyTasks: ([...])
{code}
 

In both of these cases, it would be nice to:
 # Include the number of members in the group (i.e., "15 members participating" 
and "to 15 clients as")
 # Sort the member ids (to help compare the membership and assignment across 
rebalances)
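Both suggestions can be sketched with a hypothetical helper (this is not the actual StreamsPartitionAssignor code; the membership map shape is an assumption):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Hypothetical helper showing both suggestions: print the member count and
// sort the member ids so membership can be diffed across rebalances.
public class AssignmentLog {
    static String describeMembers(Map<String, List<Integer>> membership) {
        // TreeMap orders entries by member id.
        Map<String, List<Integer>> sorted = new TreeMap<>(membership);
        return membership.size() + " members participating in this rebalance:\n"
                + sorted.entrySet().stream()
                        .map(e -> e.getKey() + ": " + e.getValue())
                        .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        // Map.of has no defined order; the output is sorted regardless.
        System.out.println(describeMembers(Map.of(
                "consumer-b", List.of(1),
                "consumer-a", List.of(0))));
    }
}
```

The count comes for free from the map size, and the sorted ids make two consecutive rebalance logs directly comparable with a plain diff.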



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1246

2022-09-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14252) Create background thread skeleton

2022-09-21 Thread Philip Nee (Jira)
Philip Nee created KAFKA-14252:
--

 Summary: Create background thread skeleton
 Key: KAFKA-14252
 URL: https://issues.apache.org/jira/browse/KAFKA-14252
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Philip Nee
Assignee: Philip Nee


The event handler internally instantiates a background thread to consume 
ApplicationEvents and produce BackgroundEvents.  In this ticket, we will create 
a skeleton of the background thread.  We will incrementally add implementation 
in the future.
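The shape described above can be sketched as follows. ApplicationEvent and BackgroundEvent are named in the ticket; the queue-based wiring is an assumption for illustration, not the actual implementation:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Skeleton sketch: a background thread consuming ApplicationEvents and
// producing BackgroundEvents until interrupted.
public class BackgroundThreadSkeleton {
    record ApplicationEvent(String name) {}
    record BackgroundEvent(String name) {}

    static class BackgroundThread implements Runnable {
        final BlockingQueue<ApplicationEvent> in;
        final BlockingQueue<BackgroundEvent> out;

        BackgroundThread(BlockingQueue<ApplicationEvent> in, BlockingQueue<BackgroundEvent> out) {
            this.in = in;
            this.out = out;
        }

        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    ApplicationEvent event = in.take();
                    out.put(new BackgroundEvent("handled:" + event.name()));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // graceful shutdown
            }
        }
    }

    // Helper that runs a single event through the background thread.
    static String handleOne(String eventName) {
        BlockingQueue<ApplicationEvent> in = new LinkedBlockingQueue<>();
        BlockingQueue<BackgroundEvent> out = new LinkedBlockingQueue<>();
        Thread t = new Thread(new BackgroundThread(in, out));
        t.start();
        try {
            in.put(new ApplicationEvent(eventName));
            String result = out.take().name();
            t.interrupt();
            t.join();
            return result;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(handleOne("commit")); // prints "handled:commit"
    }
}
```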



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1245

2022-09-21 Thread Apache Jenkins Server
See 




Re: Re: [DISCUSS] KIP-710: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2022-09-21 Thread Chris Egerton
Hi Daniel,

I'm a little hesitant to add support for REST extensions and the admin API
to dedicated MM2 nodes because they may restrict our options in the future
if/when we add a public-facing REST API.

Regarding REST extensions specifically, I understand their security value
for public-facing APIs, but for the internal API--which should only ever be
targeted by MM2 nodes, and never by humans, UIs, CLIs, etc.--I'm not sure
there's enough need there to add that API this time around. The internal
endpoints used by Kafka Connect should be secured by the session key
mechanism introduced in KIP-507 [1], and IIUC, with this KIP, users will
also be able to configure their cluster to use mTLS.

Regarding the admin API, I consider it part of the public-facing REST API
for Kafka Connect, so I was assuming we wouldn't want to add it to this KIP
since we're intentionally slimming down the REST API to just the bare
essentials (i.e., just the internal endpoints). I can see value for it, but
it's similar to the status endpoints in the Kafka Connect REST API; we
might choose to expose this sometime down the line, but IMO it'd be better
to do that in a separate KIP so that we can iron out the details of exactly
what kind of REST API would best suit dedicated MM2 clusters, and how that
would differ from the API provided by vanilla Kafka Connect.

Let me know what you think!

Cheers,

Chris

[1] -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-507%3A+Securing+Internal+Connect+REST+Endpoints
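Pulling together the property names mentioned in this thread, a dedicated MM2 node's configuration might look like the sketch below. The mm.enable.internal.rest name is only proposed in this discussion, and the listener and host values are example placeholders:

```properties
# Proposed in this thread (name not final): opt in to the internal REST API.
mm.enable.internal.rest = true

# Standard Kafka Connect REST properties, reused as-is:
listeners = https://0.0.0.0:8083
rest.advertised.host.name = mm2-node-1.example.com
rest.advertised.port = 8083
rest.advertised.listener = https
```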

On Wed, Sep 21, 2022 at 4:59 AM Dániel Urbán  wrote:

> Hi Chris,
>
> About the worker id: makes sense. I changed the wording, but kept it listed
> as it is a change compared to existing MM2 code. Updated the KIP
> accordingly.
>
> About the REST server configurations: again, I agree, it should be
> generalized. But I'm not sure about those exceptions you listed, as all of
> them make sense in MM2 as well. For example, security related rest
> extensions could work with MM2 as well. Admin listeners are also useful, as
> they would allow the same admin operations for MM2 nodes. Am I mistaken
> here? Updated the KIP without mentioning exceptions for now.
>
> I think yes, the lazy config resolution should be unconditional. Even if we
> don't consider the distributed mode of MM2, the eager resolution does not
> allow using sensitive configs in the mm2 properties, as they will be
> written by value into the config topic. I didn't really consider this as
> breaking change, but I might be wrong.
>
> Enable flag property name: also makes sense, updated the KIP.
>
> Thanks a lot!
> Daniel
>
> Chris Egerton  ezt írta (időpont: 2022. szept.
> 20., K, 20:29):
>
> > Hi Daniel,
> >
> > Looking pretty good! Regarding the worker ID generation--apologies, I was
> > unclear with my question. I was wondering if we had to adopt any special
> > logic at all for MM2, or if we could use the same logic that vanilla
> Kafka
> > Connect does in distributed mode, where the worker ID is its advertised
> URL
> > (e.g., "connect:8083" or "localhost:25565"). Unless I'm mistaken, this
> > should allow MM2 nodes to be identified unambiguously. Do you think it
> > makes sense to follow this strategy?
> >
> > Now that the details on the new REST interface have been fleshed out, I'm
> > also wondering if we want to add support for the "
> > rest.advertised.host.name",
> > "rest.advertised.port", and "rest.advertised.listener" properties, which
> > are vital for being able to run Kafka Connect in distributed mode from
> > within containers. In fact, I'm wondering if we can generalize some of
> the
> > specification in the KIP around which REST properties are accepted by
> > stating that all REST-related properties that are available with vanilla
> > Kafka Connect will be supported for dedicated MM2 nodes when
> > "mm.enable.rest" is set to "true", except for ones related to the
> > public-facing REST API like "rest.extension.classes", "admin.listeners",
> > and "admin.listeners.https.*".
> >
> > One other thought--is the lazy evaluation of config provider references
> > going to take place unconditionally, or only when the internal REST API
> is
> > enabled on a worker?
> >
> > Finally, do you think we could change "mm.enable.rest" to
> > "mm.enable.internal.rest"? That way, if we want to introduce a
> > public-facing REST API later on, we can use "mm.enable.rest" as the name
> > for that property.
> >
> > Cheers,
> >
> > Chris
> >
> > On Fri, Sep 16, 2022 at 9:28 AM Dániel Urbán 
> > wrote:
> >
> > > Hi Chris,
> > >
> > > I went through the KIP and updated it based on our discussion. I think
> > your
> > > suggestions simplified (and shortened) the KIP significantly.
> > >
> > > Thanks,
> > > Daniel
> > >
> > > Dániel Urbán  ezt írta (időpont: 2022.
> > szept.
> > > 16., P, 15:15):
> > >
> > > > Hi Chris,
> > > >
> > > > 1. For the REST-server-per-flow setup, it made sense to introduce
> some
> > > simplified configuration. With a single REST server, it doesn't make
> > > sense anymore, I'm removing it from the KIP.

[jira] [Created] (KAFKA-14251) Improve CPU usage of self-joins by sacrificing order

2022-09-21 Thread Vicky Papavasileiou (Jira)
Vicky Papavasileiou created KAFKA-14251:
---

 Summary: Improve CPU usage of self-joins by sacrificing order
 Key: KAFKA-14251
 URL: https://issues.apache.org/jira/browse/KAFKA-14251
 Project: Kafka
  Issue Type: Improvement
Reporter: Vicky Papavasileiou


The current self-join operator implementation ensures that records in the 
output follow the same order as if the join were implemented as an 
inner join. To achieve this, the self-join operator uses two loops, each 
doing a window store fetch, to simulate both the left-hand side and the 
right-hand side of the join probing each other.

As an optimization, if we don't care about the ordering of the join results, we 
can avoid the two loops and instead do a single one whose window store fetch 
uses the union of the left-side and right-side windows.
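The optimization can be sketched abstractly (this is not the Streams window-store API; the fetch below is a stand-in over a list of timestamps):

```java
import java.util.List;
import java.util.stream.Collectors;

// Abstract sketch of the optimization. For a self-join with a symmetric-ish
// window, the left-side probe scans [t - before, t + after] and the mirrored
// right-side probe scans [t - after, t + before]; one scan over the union of
// the two ranges visits every matching record exactly once.
public class SelfJoinSketch {
    static List<Long> fetch(List<Long> timestamps, long from, long to) {
        return timestamps.stream()
                .filter(ts -> ts >= from && ts <= to)
                .collect(Collectors.toList());
    }

    static List<Long> unionFetch(List<Long> timestamps, long t, long before, long after) {
        long half = Math.max(before, after);
        return fetch(timestamps, t - half, t + half); // one scan instead of two
    }

    public static void main(String[] args) {
        List<Long> store = List.of(5L, 10L, 15L, 20L);
        // Two-loop approach: one fetch per probing side.
        System.out.println(fetch(store, 10 - 2, 10 + 6)); // left-side window
        System.out.println(fetch(store, 10 - 6, 10 + 2)); // right-side window
        // Single-loop approach over the union of the two windows.
        System.out.println(unionFetch(store, 10, 2, 6));
    }
}
```

The trade-off from the ticket title is visible here: the single union scan returns matches in store order, not in the order the two separate probes would have produced them.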



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-862: Self-join optimization for stream-stream joins

2022-09-21 Thread Vasiliki Papavasileiou
Hello everyone,

The KIP-862 vote has passed with:

binding +1s (John, Guozhang, Bruno)
non-binding +1s (Jim)

Thank you everyone for reviewing the KIP and voting.

Best,
Vicky

On Fri, Sep 16, 2022 at 11:44 AM Bruno Cadonna  wrote:

> Hi Vicky,
>
> Thanks for the KIP!
>
> I think the KIP looks good!
> You described how the self-join is optimized when the names of the state
> stores are automatically generated by Streams. I think for completeness
> you should also mention what happens when users explicitly name the
> state stores of the self-join and give an example.
>
> For the rest, I am +1 (binding).
>
> Best,
> Bruno
>
>
> On 13.09.22 22:50, Jim Hughes wrote:
> > Hi Vicky,
> >
> > I'm +1 (non-binding); thanks for the KIP (and PR)!
> >
> > Cheers,
> >
> > Jim
> >
> > On Tue, Sep 13, 2022 at 12:05 PM Guozhang Wang 
> wrote:
> >
> >> Thank Vicky! I'm +1.
> >>
> >> Guozhang
> >>
> >> On Mon, Sep 12, 2022 at 7:02 PM John Roesler 
> wrote:
> >>
> >>> Thanks for the updates, Vicky!
> >>>
> >>> I've reviewed the KIP and your POC PR,
> >>> and I'm +1 (binding).
> >>>
> >>> Thanks!
> >>> -John
> >>>
> >>> On Mon, Sep 12, 2022, at 09:13, Vasiliki Papavasileiou wrote:
>  Hey Guozhang,
> 
>  Great suggestion, I made the change.
> 
>  Best,
>  Vicky
> 
>  On Fri, Sep 9, 2022 at 10:43 PM Guozhang Wang 
> >>> wrote:
> 
> > Thanks Vicky, that reads much clearer now.
> >
> > Just regarding the value string name itself: "self.join" may be
> >>> confusing
> > compared to other values that people would think before this config
> is
> > enabled, self-join are not allowed at all. Maybe we can rename it to
> > "single.store.self.join"?
> >
> > Guozhang
> >
> > On Fri, Sep 9, 2022 at 2:15 AM Vasiliki Papavasileiou
> >  wrote:
> >
> >> Hey Guozhang,
> >>
> >> Ah it seems my text was not very clear :)
> >> With "TOPOLOGY_OPTIMIZATION_CONFIG will be extended to accept a list
> >>> of
> >> optimization rule configs" I meant that it will accept the new value
> >> strings for each optimization rule. Let me rephrase that in the KIP
> >> to
> > make
> >> it clearer.
> >> Is it better now?
> >>
> >> Best,
> >> Vicky
> >>
> >> On Thu, Sep 8, 2022 at 9:07 PM Guozhang Wang 
> >>> wrote:
> >>
> >>> Thanks Vicky,
> >>>
> >>> I read through the KIP again and it looks good to me. Just a quick
> >> question
> >>> regarding the public config changes: you mentioned "No public
> > interfaces
> >>> will be impacted. The config TOPOLOGY_OPTIMIZATION_CONFIG will be
> >> extended
> >>> to accept a list of optimization rule configs in addition to the
> >>> global
> >>> values "all" and "none" . But there are no new value strings
> >>> mentioned
> > in
> >>> this KIP, so that means we will apply this optimization only when
> >>> `all`
> >> is
> >>> specified in the config right?
> >>>
> >>>
> >>> Guozhang
> >>>
> >>>
> >>> On Thu, Sep 8, 2022 at 12:02 PM Vasiliki Papavasileiou
> >>>  wrote:
> >>>
>  Hello everyone,
> 
>  I'd like to open the vote for KIP-862, which proposes to
> >> optimize
>  stream-stream self-joins by using a single state store for the
> >>> join.
> 
>  The proposal is here:
> 
> 
> >>>
> >>
> >
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-862%3A+Self-join+optimization+for+stream-stream+joins
> 
>  Thanks to all who reviewed the proposal, and thanks in advance
> >> for
> >> taking
>  the time to vote!
> 
>  Thank you,
>  Vicky
> 
> >>>
> >>>
> >>> --
> >>> -- Guozhang
> >>>
> >>
> >
> >
> > --
> > -- Guozhang
> >
> >>>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
> >
>


Re: [DISCUSS] KIP-868 Metadata Transactions (new thread)

2022-09-21 Thread David Arthur
Ziming, thanks for the feedback! Let me know your thoughts on #2 and #3

1. Good idea. I consolidated all the details of record visibility into
that section.

2. I'm not sure we can always know the number of records ahead of time
for a transaction. One future use case is likely for the ZK data
migration which will have an undetermined number of records. I would
be okay with some short textual fields like "name" for the Begin
record and "reason" for the Abort record. These could also be tagged
fields if we don't want to always include them in the records.

3. The metadata records end up in org.apache.kafka.common.metadata, so
maybe we can avoid Metadata in the name since it's kind of implicit.
I'd be okay with [Begin|End|Abort]TransactionRecord.
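In the JSON-schema convention used for records in org.apache.kafka.common.metadata, a Begin record with a tagged "name" field might look like the following sketch. The apiKey, versions, and field details here are illustrative, not taken from the KIP:

```json
{
  "apiKey": 23,
  "type": "metadata",
  "name": "BeginTransactionRecord",
  "validVersions": "0",
  "flexibleVersions": "0+",
  "fields": [
    { "name": "Name", "type": "string", "versions": "0+",
      "taggedVersions": "0+", "tag": 0,
      "about": "A short description of the transaction, for debugging." }
  ]
}
```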

-David

On Mon, Sep 19, 2022 at 10:58 PM deng ziming  wrote:
>
> Hello David,
> Thanks for the KIP, certainly it makes sense, I left some minor questions.
>
> 1. In “Record Visibility” section you declare visibility in the controller, 
> in “Broker Support” you mention visibility in the broker, we can put them 
> together, and I think we can also describe visibility in the MetadataShell 
> since it is also a public interface.
>
> 2. In “Public interfaces” section, I found that the “BeginMarkerRecord” has 
> no fields, should we include some auxiliary attributes to help parse the 
> transaction, for example, number of records in this transaction.
>
> 3. The record name seems vague, and we already have a `EndTransactionMarker` 
> class in `org.apache.kafka.common.record`, how about 
> `BeginMetadataTransactionRecord`?
>
> - -
> Best,
> Ziming
>
> > On Sep 10, 2022, at 1:13 AM, David Arthur  wrote:
> >
> > Starting a new thread to avoid issues with mail client threading.
> >
> > Original thread follows:
> >
> > Hey folks, I'd like to start a discussion on the idea of adding
> > transactions in the KRaft controller. This will allow us to overcome
> > the current limitation of atomic batch sizes in Raft which lets us do
> > things like create topics with a huge number of partitions.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-868+Metadata+Transactions
> >
> > Thanks!
> >
> > ---
> >
> > Colin McCabe said:
> >
> > Thanks for this KIP, David!
> >
> > In the "motivation" section, it might help to give a concrete example
> > of an operation we want to be atomic. My favorite one is probably
> > CreateTopics since it's easy to see that we want to create all of a
> > topic or none of it, and a topic could be a potentially unbounded
> > number of records (although hopefully people have reasonable create
> > topic policy classes in place...)
> >
> > In "broker support", it would be good to clarify that we will buffer
> > the records in the MetadataDelta and not publish a new MetadataImage
> > until the transaction is over. This is an implementation detail, but
> > it's a simple one and I think it will make it easier to understand how
> > this works.
> >
> > In the "Raft Transactions" section of "Rejected Alternatives," I'd add
> > that managing buffering in the Raft layer would be a lot less
> > efficient than doing it in the controller / broker layer. We would end
> > up accumulating big lists of records which would then have to be
> > applied when the transaction completed, rather than building up a
> > MetadataDelta (or updating the controller state) incrementally.
> >
> > Maybe we want to introduce the concept of "last stable offset" to be
> > the last committed offset that is NOT part of an ongoing transaction?
> > Just a nomenclature suggestion...
> >
> > best,
> > Colin
>


-- 
David Arthur


[jira] [Created] (KAFKA-14250) Exception during normal operation in MirrorSourceTask causes the task to fail instead of shutting down gracefully

2022-09-21 Thread Viktor Somogyi-Vass (Jira)
Viktor Somogyi-Vass created KAFKA-14250:
---

 Summary: Exception during normal operation in MirrorSourceTask 
causes the task to fail instead of shutting down gracefully
 Key: KAFKA-14250
 URL: https://issues.apache.org/jira/browse/KAFKA-14250
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, mirrormaker
Affects Versions: 3.3
Reporter: Viktor Somogyi-Vass
Assignee: Viktor Somogyi-Vass


In MirrorSourceTask we are loading offsets for the topic partitions. At this 
point, while we are fetching the partitions, it is possible for the offset 
reader to be stopped by a parallel thread. Stopping the reader causes a 
CancellationException to be thrown, due to KAFKA-9051.

Currently this exception is not caught in MirrorSourceTask, so it propagates 
up and causes the task to go into the FAILED state. We only need it to go to 
the STOPPED state so that it can be restarted later.

This can be achieved by catching the exception and stopping the task directly.
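The proposed fix can be sketched with a self-contained stand-in for the offset read (the real types live in Connect's runtime; the class, method, and state names here are illustrative):

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;

// Stand-in for the offset read that can be cancelled when a parallel thread
// stops the offset reader.
public class MirrorTaskSketch {
    enum State { RUNNING, STOPPED, FAILED }

    static State loadOffsets(CompletableFuture<Long> offsetFuture) {
        try {
            offsetFuture.join(); // throws CancellationException if the reader was stopped
            return State.RUNNING;
        } catch (CancellationException e) {
            // The reader was stopped during shutdown: stop the task gracefully
            // instead of letting the exception propagate and fail it.
            return State.STOPPED;
        }
    }

    public static void main(String[] args) {
        CompletableFuture<Long> future = new CompletableFuture<>();
        future.cancel(true); // simulates the parallel thread stopping the reader
        System.out.println(loadOffsets(future)); // prints "STOPPED"
    }
}
```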



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[DISCUSS] KIP-870: Retention policy based on record event time

2022-09-21 Thread Николай Ижиков
Hello.

I want to start a discussion of KIP-870 [1] [2].

The goal of this KIP is to provide a new retention policy based purely on 
record event time.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-870%3A+Retention+policy+based+on+record+event+time
[2] https://issues.apache.org/jira/browse/KAFKA-13866



[jira] [Created] (KAFKA-14249) Flaky test Tls13SelectorTest. testCloseOldestConnection

2022-09-21 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-14249:


 Summary: Flaky test Tls13SelectorTest. testCloseOldestConnection
 Key: KAFKA-14249
 URL: https://issues.apache.org/jira/browse/KAFKA-14249
 Project: Kafka
  Issue Type: Test
  Components: unit tests
Reporter: Divij Vaidya


Execution run: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-12590/10/testReport/org.apache.kafka.common.network/Tls13SelectorTest/Build___JDK_17_and_Scala_2_13___testCloseOldestConnection__/
 

Stack trace:


{noformat}
[2022-09-20 08:28:32,446] ERROR Failed to close release connections with type 
org.apache.kafka.common.network.Selector$$Lambda$1330/0x000801184590 
(org.apache.kafka.common.utils.Utils:1036)
java.lang.RuntimeException: you should fail
at 
org.apache.kafka.common.network.SelectorTest$2$1.close(SelectorTest.java:414)
at org.apache.kafka.common.network.Selector.doClose(Selector.java:956)
at org.apache.kafka.common.network.Selector.close(Selector.java:940)
at org.apache.kafka.common.network.Selector.close(Selector.java:886)
at 
org.apache.kafka.common.network.Selector.lambda$close$0(Selector.java:368)
at org.apache.kafka.common.utils.Utils.closeQuietly(Utils.java:1033)
at org.apache.kafka.common.utils.Utils.closeAllQuietly(Utils.java:1048)
at org.apache.kafka.common.network.Selector.close(Selector.java:367)
at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:53)
at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:35)
at org.junit.jupiter.api.Assertions.assertThrows(Assertions.java:3082)
at 
org.apache.kafka.common.network.SelectorTest.testCloseAllChannels(SelectorTest.java:424)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14248) Flaky test PlaintextAdminIntegrationTest.testCreateTopicsReturnsConfigs

2022-09-21 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-14248:


 Summary: Flaky test 
PlaintextAdminIntegrationTest.testCreateTopicsReturnsConfigs
 Key: KAFKA-14248
 URL: https://issues.apache.org/jira/browse/KAFKA-14248
 Project: Kafka
  Issue Type: Test
  Components: unit tests
Reporter: Divij Vaidya


Failure run: 
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-12590/10/tests

Stack trace
{noformat}
org.opentest4j.AssertionFailedError: expected:  but was: 
 at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
   at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
   at 
app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)  at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)  at 
app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)  at 
app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141) at 
app//kafka.api.PlaintextAdminIntegrationTest.testCreateTopicsReturnsConfigs(PlaintextAdminIntegrationTest.scala:2567)
at 
java.base@17.0.4.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)  at 
java.base@17.0.4.1/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at 
java.base@17.0.4.1/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base@17.0.4.1/java.lang.reflect.Method.invoke(Method.java:568)  at 
app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
  at 
app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
   at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
 at 
app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
 at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
  at 
app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94)
   at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
 at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
   at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at 
app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
  at 
app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
  at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
   at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
   at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
 at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
  at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
  at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
   at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
  at 
app//org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)  
 at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
  at 
app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
   at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
   at 
app//org.junit.platform.engine.support.hierarchical.NodeTestTask.exe

Re: Re: [DISCUSS] KIP-710: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2022-09-21 Thread Dániel Urbán
Hi Chris,

About the worker id: makes sense. I changed the wording, but kept it listed
as it is a change compared to existing MM2 code. Updated the KIP
accordingly.

About the REST server configurations: again, I agree, it should be
generalized. But I'm not sure about those exceptions you listed, as all of
them make sense in MM2 as well. For example, security related rest
extensions could work with MM2 as well. Admin listeners are also useful, as
they would allow the same admin operations for MM2 nodes. Am I mistaken
here? Updated the KIP without mentioning exceptions for now.

I think yes, the lazy config resolution should be unconditional. Even if we
don't consider the distributed mode of MM2, the eager resolution does not
allow using sensitive configs in the mm2 properties, as they will be
written by value into the config topic. I didn't really consider this as
breaking change, but I might be wrong.

Enable flag property name: also makes sense, updated the KIP.

Thanks a lot!
Daniel

Chris Egerton  ezt írta (időpont: 2022. szept.
20., K, 20:29):

> Hi Daniel,
>
> Looking pretty good! Regarding the worker ID generation--apologies, I was
> unclear with my question. I was wondering if we had to adopt any special
> logic at all for MM2, or if we could use the same logic that vanilla Kafka
> Connect does in distributed mode, where the worker ID is its advertised URL
> (e.g., "connect:8083" or "localhost:25565"). Unless I'm mistaken, this
> should allow MM2 nodes to be identified unambiguously. Do you think it
> makes sense to follow this strategy?
>
> Now that the details on the new REST interface have been fleshed out, I'm
> also wondering if we want to add support for the "
> rest.advertised.host.name",
> "rest.advertised.port", and "rest.advertised.listener" properties, which
> are vital for being able to run Kafka Connect in distributed mode from
> within containers. In fact, I'm wondering if we can generalize some of the
> specification in the KIP around which REST properties are accepted by
> stating that all REST-related properties that are available with vanilla
> Kafka Connect will be supported for dedicated MM2 nodes when
> "mm.enable.rest" is set to "true", except for ones related to the
> public-facing REST API like "rest.extension.classes", "admin.listeners",
> and "admin.listeners.https.*".
>
> One other thought--is the lazy evaluation of config provider references
> going to take place unconditionally, or only when the internal REST API is
> enabled on a worker?
>
> Finally, do you think we could change "mm.enable.rest" to
> "mm.enable.internal.rest"? That way, if we want to introduce a
> public-facing REST API later on, we can use "mm.enable.rest" as the name
> for that property.
>
> Cheers,
>
> Chris
>
> On Fri, Sep 16, 2022 at 9:28 AM Dániel Urbán 
> wrote:
>
> > Hi Chris,
> >
> > I went through the KIP and updated it based on our discussion. I think
> your
> > suggestions simplified (and shortened) the KIP significantly.
> >
> > Thanks,
> > Daniel
> >
> > Dániel Urbán  ezt írta (időpont: 2022.
> szept.
> > 16., P, 15:15):
> >
> > > Hi Chris,
> > >
> > > 1. For the REST-server-per-flow setup, it made sense to introduce some
> > > simplified configuration. With a single REST server, it doesn't make
> > sense
> > > anymore, I'm removing it from the KIP.
> > > 2. I think that changing the worker ID generation still makes sense,
> > > otherwise there is no way to differentiate between the MM2 instances.
> > >
> > > Thanks,
> > > Daniel
> > >
> > > On Wed, Aug 31, 2022 at 8:39 PM Chris Egerton  >
> > > wrote:
> > >
> > > > Hi Daniel,
> > > >
> > > > I've taken a look at the KIP in detail. Here are my complete thoughts
> > > > (minus the aforementioned sections that may be affected by changes to
> > an
> > > > internal-only REST API):
> > > >
> > > > 1. Why introduce new mm.host.name and mm.rest.protocol properties
> > > instead
> > > > of using the properties that are already used by Kafka Connect:
> > > listeners,
> > > > rest.advertised.host.name, rest.advertised.host.port, and
> > > > rest.advertised.listener? We used to have the rest.host.name and
> > > rest.port
> > > > properties in Connect but deprecated and eventually removed them in
> > favor
> > > > of the listeners property in KIP-208 [1]; I'm hoping we can keep
> things
> > > as
> > > > similar as possible between MM2 and Connect in order to make it
> easier
> > > for
> > > > users to work with both. I'm also hoping that we can allow users to
> > > > configure the port that their MM2 nodes listen on instead of
> hardcoding
> > > MM2
> > > > to bind to port 0.
> > > >
> > > > 2. Do we still need to change the worker IDs that get used in the
> > status
> > > > topic?
> > > >
> > > > Everything else looks good, or should change once the KIP is updated
> > with
> > > > the internal-only REST API alternative.
> > > >
> > > > Cheers,
> > > >
> > > > Chris
> > > >
> > > > [1] -
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence