Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1178

2022-08-25 Thread Apache Jenkins Server
See 




Request for permission to assign jiras to myself

2022-08-25 Thread Anupam Aggarwal
Hi,

I am interested in working on a couple of JIRA issues and noticed I would
need permissions to assign them to myself.
(They are currently unassigned.)

Could you please add me as a contributor? :)

My details are:
email - anupam.aggar...@gmail.com
username - anupamaggarwal

Thanks so much!
-Anupam


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #1177

2022-08-25 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-14177) Correctly support older kraft versions without FeatureLevelRecord

2022-08-25 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-14177.
-
Resolution: Fixed

> Correctly support older kraft versions without FeatureLevelRecord
> -
>
> Key: KAFKA-14177
> URL: https://issues.apache.org/jira/browse/KAFKA-14177
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Blocker
> Fix For: 3.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14142) Improve information returned about the cluster metadata partition

2022-08-25 Thread Jose Armando Garcia Sancio (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio resolved KAFKA-14142.

Resolution: Won't Fix

We discussed this and decided that the kafka-metadata-quorum tool already 
returns enough information to determine this.
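
For illustration, the same information is also exposed programmatically via
Admin#describeMetadataQuorum (KIP-836, available since 3.3); a minimal sketch,
with a placeholder bootstrap server and no error handling:

{code:java}
// Sketch: inspect metadata-quorum voter lag via the Admin API.
// The bootstrap server below is a placeholder.
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.QuorumInfo;

public class QuorumLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            QuorumInfo quorum = admin.describeMetadataQuorum().quorumInfo().get();
            long highWatermark = quorum.highWatermark();
            for (QuorumInfo.ReplicaState voter : quorum.voters()) {
                // A voter whose log-end offset is far behind the high
                // watermark has not caught up yet.
                System.out.printf("voter %d: logEndOffset=%d lag=%d%n",
                        voter.replicaId(), voter.logEndOffset(),
                        highWatermark - voter.logEndOffset());
            }
        }
    }
}
{code}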

> Improve information returned about the cluster metadata partition
> -
>
> Key: KAFKA-14142
> URL: https://issues.apache.org/jira/browse/KAFKA-14142
> Project: Kafka
>  Issue Type: Improvement
>  Components: kraft
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 3.3.0
>
>
> The Apache Kafka operator needs to know when it is safe to format and start a 
> KRaft controller that had a disk failure of the metadata log dir.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1176

2022-08-25 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 420408 lines...]
[2022-08-25T21:27:29.998Z] [INFO] --- maven-gpg-plugin:1.6:sign 
(sign-artifacts) @ streams-quickstart-java ---
[2022-08-25T21:27:29.998Z] [INFO] 
[2022-08-25T21:27:29.998Z] [INFO] --- maven-install-plugin:2.5.2:install 
(default-install) @ streams-quickstart-java ---
[2022-08-25T21:27:29.998Z] [INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_trunk_2/streams/quickstart/java/target/streams-quickstart-java-3.4.0-SNAPSHOT.jar
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.4.0-SNAPSHOT/streams-quickstart-java-3.4.0-SNAPSHOT.jar
[2022-08-25T21:27:29.998Z] [INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_trunk_2/streams/quickstart/java/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.4.0-SNAPSHOT/streams-quickstart-java-3.4.0-SNAPSHOT.pom
[2022-08-25T21:27:29.998Z] [INFO] 
[2022-08-25T21:27:29.998Z] [INFO] --- 
maven-archetype-plugin:2.2:update-local-catalog (default-update-local-catalog) 
@ streams-quickstart-java ---
[2022-08-25T21:27:29.998Z] [INFO] 

[2022-08-25T21:27:29.998Z] [INFO] Reactor Summary for Kafka Streams :: 
Quickstart 3.4.0-SNAPSHOT:
[2022-08-25T21:27:29.998Z] [INFO] 
[2022-08-25T21:27:29.998Z] [INFO] Kafka Streams :: Quickstart 
 SUCCESS [  2.063 s]
[2022-08-25T21:27:29.998Z] [INFO] streams-quickstart-java 
 SUCCESS [  0.870 s]
[2022-08-25T21:27:29.998Z] [INFO] 

[2022-08-25T21:27:29.998Z] [INFO] BUILD SUCCESS
[2022-08-25T21:27:29.998Z] [INFO] 

[2022-08-25T21:27:29.998Z] [INFO] Total time:  3.218 s
[2022-08-25T21:27:29.998Z] [INFO] Finished at: 2022-08-25T21:27:29Z
[2022-08-25T21:27:29.998Z] [INFO] 

[Pipeline] dir
[2022-08-25T21:27:29.999Z] Running in 
/home/jenkins/workspace/Kafka_kafka_trunk_2/streams/quickstart/test-streams-archetype
[Pipeline] {
[Pipeline] sh
[2022-08-25T21:27:30.285Z] 
[2022-08-25T21:27:30.285Z] IQv2IntegrationTest > shouldRejectNonRunningActive() 
PASSED
[2022-08-25T21:27:31.242Z] 
[2022-08-25T21:27:31.242Z] InternalTopicIntegrationTest > 
shouldCompactTopicsForKeyValueStoreChangelogs() STARTED
[2022-08-25T21:27:31.509Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2022-08-25T21:27:31.509Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2022-08-25T21:27:31.509Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2022-08-25T21:27:31.509Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2022-08-25T21:27:32.131Z] + echo Y
[2022-08-25T21:27:32.131Z] + mvn archetype:generate -DarchetypeCatalog=local 
-DarchetypeGroupId=org.apache.kafka 
-DarchetypeArtifactId=streams-quickstart-java -DarchetypeVersion=3.4.0-SNAPSHOT 
-DgroupId=streams.examples -DartifactId=streams.examples -Dversion=0.1 
-Dpackage=myapps
[2022-08-25T21:27:32.453Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2022-08-25T21:27:32.453Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2022-08-25T21:27:33.031Z] 
[2022-08-25T21:27:33.031Z] InternalTopicIntegrationTest > 
shouldCompactTopicsForKeyValueStoreChangelogs() PASSED
[2022-08-25T21:27:33.031Z] 
[2022-08-25T21:27:33.031Z] InternalTopicIntegrationTest > 
shouldGetToRunningWithWindowedTableInFKJ() STARTED
[2022-08-25T21:27:33.063Z] [INFO] Scanning for projects...
[2022-08-25T21:27:33.996Z] [INFO] 
[2022-08-25T21:27:33.996Z] [INFO] --< 
org.apache.maven:standalone-pom >---
[2022-08-25T21:27:33.996Z] [INFO] Building Maven Stub Project (No POM) 1
[2022-08-25T21:27:33.996Z] [INFO] [ pom 
]-
[2022-08-25T21:27:33.996Z] [INFO] 
[2022-08-25T21:27:33.996Z] [INFO] >>> maven-archetype-plugin:3.2.1:generate 
(default-cli) > generate-sources @ standalone-pom >>>
[2022-08-25T21:27:33.996Z] [INFO] 
[2022-08-25T21:27:33.996Z] [INFO] <<< maven-archetype-plugin:3.2.1:generate 
(default-cli) < generate-sources @ standalone-pom <<<
[2022-08-25T21:27:33.996Z] [INFO] 
[2022-08-25T21:27:33.996Z] [INFO] 
[2022-08-25T21:27:33.996Z] [INFO] --- maven-archetype-plugin:3.2.1:generate 
(default-cli) @ standalone-pom ---
[2022-08-25T21:27:34.927Z] [INFO] Generating project in Interactive mode
[2022-08-25T21:27:34.927Z] [WARNING] Archetype not found in any catalog. 
Falling back to central repository.
[2022-08-25T21:27:34.927Z] [WARNING] Add a repository with id 'archetype' in 
your settings.xml if archetype's repository is elsewhere.
[2022-08-25T21:27:34.927Z] [INFO] Using property: groupId = streams.examples
[2022-08-25T21:27:34.927Z] [INFO] Using property: artifactId = streams.examples
[2022-08-25T21:27:34.927Z] [INFO] Using property: version = 0.1
[2022-08-25T21:27:34.927Z]

[jira] [Resolved] (KAFKA-14149) Broken DynamicBrokerReconfigurationTest in 3.2 branch

2022-08-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14149.

Fix Version/s: 3.2.2
 Assignee: Mickael Maison
   Resolution: Fixed

> Broken DynamicBrokerReconfigurationTest in 3.2 branch
> -
>
> Key: KAFKA-14149
> URL: https://issues.apache.org/jira/browse/KAFKA-14149
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.2.2
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.2.2
>
>
> The backport of [https://github.com/apache/kafka/pull/12455] does not work in 
> 3.2. The following tests are failing:
> DynamicBrokerReconfigurationTest.testConfigDescribeUsingAdminClient(String).quorum=kraft
> DynamicBrokerReconfigurationTest.testConsecutiveConfigChange(String).quorum=kraft
> DynamicBrokerReconfigurationTest.testKeyStoreAlter(String).quorum=kraft
> DynamicBrokerReconfigurationTest.testLogCleanerConfig(String).quorum=kraft
> DynamicBrokerReconfigurationTest.testTrustStoreAlter(String).quorum=kraft
> DynamicBrokerReconfigurationTest.testUpdatesUsingConfigProvider(String).quorum=kraft
> Caused by:
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidRequestException: Invalid value 
> org.apache.kafka.common.config.ConfigException: Dynamic reconfiguration of 
> listeners is not yet supported when using a Raft-based metadata quorum for 
> configuration Invalid dynamic configuration



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1175

2022-08-25 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 420948 lines...]
[2022-08-25T19:08:16.344Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2022-08-25T19:08:16.344Z] > Task :connect:json:publishToMavenLocal
[2022-08-25T19:08:16.344Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2022-08-25T19:08:16.344Z] > Task :connect:api:testClasses UP-TO-DATE
[2022-08-25T19:08:16.344Z] > Task :connect:api:testJar
[2022-08-25T19:08:16.344Z] > Task :connect:api:testSrcJar
[2022-08-25T19:08:16.344Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2022-08-25T19:08:16.344Z] > Task :connect:api:publishToMavenLocal
[2022-08-25T19:08:18.121Z] 
[2022-08-25T19:08:18.121Z] > Task :streams:javadoc
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:854:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:84:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:136:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
[2022-08-25T19:08:18.121Z]  PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:109:
 warning - Tag @link: reference not found: this#getResult()
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureReason()
[2022-08-25T19:08:18.121Z] 
/home/jenkins/workspace/Kafka_kafka_trunk@2/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureMessage()
[2022-08-25T19:08:18.121Z]

[jira] [Created] (KAFKA-14183) Kraft bootstrap metadata file should use snapshot header/footer

2022-08-25 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-14183:
---

 Summary: Kraft bootstrap metadata file should use snapshot 
header/footer
 Key: KAFKA-14183
 URL: https://issues.apache.org/jira/browse/KAFKA-14183
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
 Fix For: 3.3.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Hosting Kafka Videos on ASF YouTube channel

2022-08-25 Thread John Roesler
Thanks all,

I’m also +1 on the Kafka Streams videos. 

Thanks,
John

On Tue, Aug 9, 2022, at 03:54, Mickael Maison wrote:
> Hi,
>
> I checked the four Streams videos
> (https://kafka.apache.org/32/documentation/streams/), they are good
> and don't mention any vendors.
> +1 (binding) for these four videos
>
> For the last video (https://kafka.apache.org/intro and
> https://kafka.apache.org/quickstart) we will have to wait till the
> intro is edited.
>
> Thanks,
> Mickael
>
>
> On Mon, Aug 8, 2022 at 11:12 PM Joe Brockmeier  wrote:
>>
>> Repurpose away. Thanks!
>>
>> On Mon, Aug 8, 2022 at 4:55 PM Bill Bejeck  wrote:
>> >
>> > Hi Joe,
>> >
>> > Thanks that works for me. As for you watching the videos, they are about 
>> > 10 minutes each, and you can watch them at 1.5 - 1.75 playback speed.
>> >
>> > If it's ok with you, I'm going to repurpose this thread as a voting thread 
>> > for the videos.
>> >
>> > I watched the Kafka Streams videos on 
>> > https://kafka.apache.org/32/documentation/streams/, and I can confirm they 
>> > are vendor-neutral.
>> > The other videos and logo that show up at the end are coming from YouTube, 
>> > so once we move the videos to the ASF channel, that should go away.
>> >
>> > +1(binding).
>> >
>> > Thanks,
>> > Bill
>> >
>> >
>> >
>> > On Mon, Aug 8, 2022 at 9:46 AM Joe Brockmeier  wrote:
>> >>
>> >> If we can get a +1 from the PMC on each video that they're happy that
>> >> the videos are vendor neutral I think we can do that. I'll also need
>> >> to view them as well. I hope they're not long videos. :-)
>> >>
>> >> On Tue, Aug 2, 2022 at 3:38 PM Bill Bejeck  wrote:
>> >> >
>> >> > Hi Joe,
>> >> >
>> >> > Yes, that is correct.  Sorry, I should have mentioned that in the 
>> >> > original email.  That is the only video where Tim says that.
>> >> > The Kafka Streams videos do not mention Confluent.
>> >> >
>> >> > We're currently pursuing editing the video to remove the "from 
>> >> > Confluent" part.
>> >> > Note that the site also uses the same video on the "quickstart" page, 
>> >> > so both places will be fixed when editing is completed.
>> >> >
>> >> > Can we pursue hosting the Kafka Streams videos for now, then revisit 
>> >> > the "What is Apache Kafka?" when the editing is done?
>> >> >
>> >> > Thanks,
>> >> > Bill
>> >> >
>> >> >
>> >> > On Tue, Aug 2, 2022 at 3:12 PM Joe Brockmeier  wrote:
>> >> >>
>> >> >> Hi Bill,
>> >> >>
>> >> >> I'm not sure changing hosting would quite solve the problem. The first
>> >> >> video I see on this page:
>> >> >>
>> >> >> https://kafka.apache.org/intro
>> >> >>
>> >> >> Starts with "Hi, I'm Tim Berglund from *Confluent*" rather than "Hi,
>> >> >> I'm Tim from Apache Kafka" -- so moving to the ASF YouTube channel
>> >> >> wouldn't completely solve the problem.
>> >> >>
>> >> >> On Tue, Aug 2, 2022 at 3:05 PM Bill Bejeck  wrote:
>> >> >> >
>> >> >> > Hi,
>> >> >> >
>> >> >> > I am an Apache Kafka® committer and PMC member, and I'm working on 
>> >> >> > our site to address some issues around our embedded videos and 
>> >> >> > branding.
>> >> >> >
>> >> >> > The Kafka site has six embedded videos:  
>> >> >> > https://kafka.apache.org/intro, https://kafka.apache.org/quickstart, 
>> >> >> > and four videos on 
>> >> >> > https://kafka.apache.org/32/documentation/streams/.
>> >> >> >
>> >> >> > The videos are hosted on the Confluent YouTube channel, so the 
>> >> >> > branding on the video is from Confluent.  Since it's coming from 
>> >> >> > YouTube, there's no way to change it.
>> >> >> >
>> >> >> > Would it be possible to upload these videos to the Apache Foundation 
>> >> >> > YouTube channel 
>> >> >> > (https://www.youtube.com/c/TheApacheFoundation/featured)?  Doing 
>> >> >> > this would automatically change the branding to Apache.
>> >> >> >
>> >> >> > Thanks, and I look forward to working with you on this matter.
>> >> >> >
>> >> >> > Bill Bejeck
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> Joe Brockmeier
>> >> >> Vice President Marketing & Publicity
>> >> >> j...@apache.org
>> >>
>> >>
>> >>
>> >> --
>> >> Joe Brockmeier
>> >> Vice President Marketing & Publicity
>> >> j...@apache.org
>>
>>
>>
>> --
>> Joe Brockmeier
>> Vice President Marketing & Publicity
>> j...@apache.org


Re: [DISCUSS] KIP-857: Streaming recursion in Kafka Streams

2022-08-25 Thread Nick Telford
Hi Sophie,

The reason I chose to add a new overload of "to", instead of creating a new
method, is simply because I felt that "to" was about sending records "to"
somewhere, and that "somewhere" just happens to currently be exclusively
topics. By re-using "to", we can send records *to other KStreams*,
including a KStream from an earlier point in the current KStream's
pipeline, which would facilitate recursion. Sending records to a completely
different KStream would essentially be a merge.

However, I'm happy to reduce the scope of this method to focus exclusively
on recursion: we'd simply need to add a check to the method that ensures
the target is an ancestor node of the current KStream node.

Which brings me to your first query...

My argument is simply that a 0-ary method isn't enough to facilitate
recursive streaming, because you need to be able to communicate which point
in the process graph you want to feed your records back into.

Consider my example from the KIP, but re-written with a 0-ary "recursively"
method:

updates
    .join(parents, (count, parent) -> KeyValue.pair(parent, count))
    .recursively()

Where does the join output get fed to?

   1. The "updates" (source) node?
   2. The "join" node itself?

It would probably be most intuitive if it simply caused the last step to be
recursive, but that won't always be what you want. Consider if we add some
more steps into the above:

updates
    // doesn't make sense in this algorithm, but let's pretend it does
    .map((parent, count) -> KeyValue.pair(parent, count + 1))
    .join(parents, (count, parent) -> KeyValue.pair(parent, count))
    .recursively()

If "recursively" just feeds records back into the "join", it misses out on
potentially important steps in our recursive algorithm. It also gets even
worse if the step you're making recursive doesn't contain your terminal
condition:

foo
    .filter((key, value) -> value <= 0) // <-- terminal condition
    .mapValues((value) -> value - 1)
    .recursively()

If "recursively" feeds records back to the "mapValues" stage in our
pipeline, and not in to "filter" or "foo", then the terminal condition in
"filter" won't be evaluated for any values with a starting value greater
than 0, *causing an infinite loop*.

There's an argument to be had for always feeding the values back to the first
ancestor "source node" in the process graph, but that might not be
particularly intuitive, and it is likely to limit some of the recursive
algorithms that some may want to implement. For example, in the previous
example, there's no guarantee that "foo" is a source node; it could be the
result of a "mapValues", for example.

Ultimately, the solution here is to make this method take a parameter,
explicitly specifying the KStream that records are fed back into, making
the above two examples:

updates
    .map((parent, count) -> KeyValue.pair(parent, count + 1))
    .join(parents, (count, parent) -> KeyValue.pair(parent, count))
    .recursively(updates)

and:

foo
    .filter((key, value) -> value <= 0)
    .mapValues((value) -> value - 1)
    .recursively(foo)

We could *also* support a 0-ary version of the method that defaults to
recursively executing the previous node, but I'm worried that users may not
fully understand the consequences of this, inadvertently creating infinite
loops that are difficult to debug.

Finally, I'm not convinced that "recursively" is the best name for the
method. Perhaps "recursivelyVia" or "recursivelyTo"? Ideas welcome!

If we want to prevent this method from being "abused" to merge different streams
together, it should be trivial to ensure that the provided argument is an
ancestor of the current node, by recursively traversing up the process
graph.
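
For illustration, that check could be a simple recursive walk over parent
nodes; a rough sketch, where GraphNode and parentNodes() are stand-ins for
Streams' internal process-graph types, not a real public API:

import java.util.Collection;

interface GraphNode {
    Collection<GraphNode> parentNodes(); // hypothetical accessor
}

class RecursionChecks {
    // Returns true if "candidate" is reachable by walking up from "current".
    static boolean isAncestor(GraphNode current, GraphNode candidate) {
        for (GraphNode parent : current.parentNodes()) {
            if (parent == candidate || isAncestor(parent, candidate)) {
                return true;
            }
        }
        return false;
    }
}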

I hope this clarifies things. It's clear that the KIP needs more
work to better elaborate on these points. I haven't had a chance to revise
it yet, due to more pressing issues with EOS stability that I've been
looking into.

Regards,

Nick

On Tue, 23 Aug 2022 at 23:50, Sophie Blee-Goldman
 wrote:

> Hey Nick,
>
> Sounds like an interesting KIP, and I agree the current way of achieving
> this in Streams
> seems wildly overcomplicated. So I'm definitely +1 on adding a smooth API
> that abstracts
> away a lot of the complexity and unnecessary topic management.
>
> That said, I've found much of the discussion so far on the API itself to be
> very confusing -- for example, I don't understand this point:
>
>  I actually considered a "recursion" API, something
> > like you suggested, however it won't work, because to do the recursion
> you
> > need to know both the end of the KStream that you want to recurse, AND
> the
> > beginning of the stream you want to feed it back into.
>
>
> As I see it, the internal implementation should be, and is, essentially
> independent from the
> design of the API itself -- in other words, why does calling this
> operator/method `recursion`
> not work, or have anything at all to do with what Streams "knows" or how it
> does

[jira] [Created] (KAFKA-14182) KRaft and ACL + GSSAPI

2022-08-25 Thread Svyatoslav (Jira)
Svyatoslav created KAFKA-14182:
--

 Summary: KRaft and ACL + GSSAPI
 Key: KAFKA-14182
 URL: https://issues.apache.org/jira/browse/KAFKA-14182
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.2.1
Reporter: Svyatoslav


In KRaft mode with GSSAPI and ACLs, whenever I add a new ACL, the log file 
always shows errors like this:
{code:java}

[2022-08-24 18:04:41,830] ERROR [StandardAuthorizer 1] addAcl error 
(org.apache.kafka.metadata.authorizer.StandardAuthorizerData)
java.lang.RuntimeException: An ACL with ID Gk-Hx0tvQIS8B1RT8R-odw already 
exists.
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizerData.addAcl(StandardAuthorizerData.java:169)
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizer.addAcl(StandardAuthorizer.java:83)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$19(BrokerMetadataPublisher.scala:234)
    at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18(BrokerMetadataPublisher.scala:232)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18$adapted(BrokerMetadataPublisher.scala:221)
    at scala.Option.foreach(Option.scala:437)
    at 
kafka.server.metadata.BrokerMetadataPublisher.publish(BrokerMetadataPublisher.scala:221)
    at 
kafka.server.metadata.BrokerMetadataListener.kafka$server$metadata$BrokerMetadataListener$$publish(BrokerMetadataListener.scala:258)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2(BrokerMetadataListener.scala:119)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2$adapted(BrokerMetadataListener.scala:119)
    at scala.Option.foreach(Option.scala:437)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.run(BrokerMetadataListener.scala:119)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:121)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:200)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:173)
    at java.lang.Thread.run(Thread.java:750)
[2022-08-24 18:04:41,858] ERROR [BrokerMetadataPublisher id=1] Error publishing 
broker metadata at OffsetAndEpoch(offset=500, epoch=4) 
(kafka.server.metadata.BrokerMetadataPublisher)
java.lang.RuntimeException: An ACL with ID Gk-Hx0tvQIS8B1RT8R-odw already 
exists.
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizerData.addAcl(StandardAuthorizerData.java:169)
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizer.addAcl(StandardAuthorizer.java:83)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$19(BrokerMetadataPublisher.scala:234)
    at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18(BrokerMetadataPublisher.scala:232)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18$adapted(BrokerMetadataPublisher.scala:221)
    at scala.Option.foreach(Option.scala:437)
    at 
kafka.server.metadata.BrokerMetadataPublisher.publish(BrokerMetadataPublisher.scala:221)
    at 
kafka.server.metadata.BrokerMetadataListener.kafka$server$metadata$BrokerMetadataListener$$publish(BrokerMetadataListener.scala:258)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2(BrokerMetadataListener.scala:119)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2$adapted(BrokerMetadataListener.scala:119)
    at scala.Option.foreach(Option.scala:437)
    at 
kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.run(BrokerMetadataListener.scala:119)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:121)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:200)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:173)
    at java.lang.Thread.run(Thread.java:750)
[2022-08-24 18:04:41,859] ERROR [BrokerMetadataListener id=1] Unexpected error 
handling HandleCommitsEvent (kafka.server.metadata.BrokerMetadataListener)
java.lang.RuntimeException: An ACL with ID Gk-Hx0tvQIS8B1RT8R-odw already 
exists.
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizerData.addAcl(StandardAuthorizerData.java:169)
    at 
org.apache.kafka.metadata.authorizer.StandardAuthorizer.addAcl(StandardAuthorizer.java:83)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$19(BrokerMetadataPublisher.scala:234)
    at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18(BrokerMetadataPublisher.scala:232)
    at 
ka

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1174

2022-08-25 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1173

2022-08-25 Thread Apache Jenkins Server
See 




Re: Re: [DISCUSS] KIP-710: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2022-08-25 Thread Dániel Urbán
Hi Chris,

Thanks for bringing this up again :)

1. I think that is reasonable, though I find the current state of MM2 to be
confusing. The current issue with the distributed mode is not documented
properly, but maybe the logging will help a bit.

2. Going for an internal-only Connect REST version would lock MM2 out of a
path where the REST API can be used to dynamically reconfigure
replications. For now, I agree, it would be easy to corrupt the state of
MM2 if someone wanted to use the properties and the REST at the same time,
but in the future, we might have a chance to introduce a different config
mechanism, where only the cluster connections have to be specified in the
properties file, and everything else can be configured through REST
(enabling replications, changing topic filters, etc.). Because of this, I'm
leaning towards a full Connect REST API. To avoid issues with conflicts
between the props file and the REST, we could document security best
practices (e.g. turn on basic auth or mTLS on the Connect REST to avoid
unwanted interactions).

3. That is a good point, and I agree, a big plus for motivation.

I have a working version of this in which all flows spin up a dedicated
Connect REST, but I can give other solutions a try, too.
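
For illustration, with a full Connect REST API in place, dynamically
reconfiguring a replication could be a plain HTTP call to the standard
connector-config endpoint; a sketch only, where the host, port, connector
name, and settings are all made up for the example:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Mm2RestReconfigure {
    public static void main(String[] args) throws Exception {
        // PUT /connectors/{name}/config creates or updates a connector config.
        String body = "{\"connector.class\":"
                + "\"org.apache.kafka.connect.mirror.MirrorSourceConnector\","
                + "\"source.cluster.alias\":\"primary\","
                + "\"target.cluster.alias\":\"backup\","
                + "\"topics\":\"orders.*\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://mm2-node:8083/connectors/MirrorSourceConnector/config"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}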

Thanks,
Daniel

Chris Egerton  wrote (on Wed, Aug 24, 2022, at 17:46):

> Hi Daniel,
>
> I'd like to resurface this KIP in case you're still interested in pursuing
> it. I know it's been a while since you published it, and it hasn't received
> much attention, but I'm hoping we can give it a try now and finally put
> this long-standing bug to rest. To that end, I have some thoughts about the
> proposal. This isn't a complete review, but I wanted to give enough to get
> the ball rolling:
>
> 1. Some environments with firewalls or strict security policies may not be
> able to bring up a REST server for each MM2 node. If we decide that we'd
> like to use the Connect REST API (or even just parts of it) to address this
> bug with MM2, it does make sense to eventually make the availability of the
> REST API a hard requirement for running MM2, but it might be a bit too
> abrupt to do that all in a single release. What do you think about making
> the REST API optional for now, but noting that it will become required in a
> later release (probably 4.0.0 or, if that's not enough time, 5.0.0)? We
> could choose not to bring the REST server for any node whose configuration
> doesn't explicitly opt into one, and maybe log a warning message on startup
> if none is configured. In effect, we'd be marking the current mode (no REST
> server) as deprecated.
>
> 2. I'm not sure that we should count out the "Creating an internal-only
> derivation of the Connect REST API" rejected alternative. Right now, the
> single source of truth for the configuration of a MM2 cluster (assuming
> it's being run in dedicated mode, and not as a connector in a vanilla
> Connect cluster) is the configuration file used for the process. By
> bringing up the REST API, we'd expose endpoints to modify connector
> configurations, which would not only add complexity to the operation of a
> MM2 cluster, but even qualify as an attack vector for malicious entities.
> Thanks to KIP-507 we have some amount of security around the internal-only
> endpoints used by the Connect framework, but for any public endpoints, the
> Connect REST API doesn't come with any security out of the box.
>
> 3. Small point, but with support for exactly-once source connectors coming
> out in 3.3.0, it's also worth noting that that's another feature that won't
> work properly with multi-node MM2 clusters without adding a REST server for
> each node (or some substitute that accomplishes the same goal). I don't
> think this will affect the direction of the design discussion too much, but
> it does help strengthen the motivation.
>
> Cheers,
>
> Chris
>
> On 2021/02/18 15:57:36 Dániel Urbán wrote:
> > Hello everyone,
> >
> > * Sorry, I meant KIP-710.
> >
> > Right now the MirrorMaker cluster is somewhat unreliable and does not
> > properly support running in a cluster. I'd say that fixing this would be
> > a nice addition.
> > Does anyone have some input on this?
> >
> > Thanks in advance
> > Daniel
> >
> > Dániel Urbán  wrote (on Tue, Jan 26, 2021, at 15:56):
> >
> > > Hello everyone,
> > >
> > > I would like to start a discussion on KIP-709, which addresses some
> > > missing features in MM2 dedicated mode.
> > >
> > >
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-710%3A+Full+support+for+distributed+mode+in+dedicated+MirrorMaker+2.0+clusters
> > >
> > > Currently, the dedicated mode of MM2 does not fully support running in a
> > > cluster. The core issue is that the Connect REST Server is not included in
> > > the dedicated mode, which makes follower->leader communication impossible.
> > > In some cases, this results in the cluster not being able to react to
> > > dynamic top

[jira] [Resolved] (KAFKA-13850) kafka-metadata-shell is missing some record types

2022-08-25 Thread dengziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dengziming resolved KAFKA-13850.

Fix Version/s: 3.3
   Resolution: Fixed

> kafka-metadata-shell is missing some record types
> -
>
> Key: KAFKA-13850
> URL: https://issues.apache.org/jira/browse/KAFKA-13850
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: David Arthur
>Assignee: dengziming
>Priority: Major
> Fix For: 3.3
>
>
> Noticed while working on feature flags in KRaft, the in-memory tree of the 
> metadata (MetadataNodeManager) is missing support for a few record types:
>  * DelegationTokenRecord
>  * UserScramCredentialRecord (should we include this?)
>  * FeatureLevelRecord
>  * AccessControlEntryRecord
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)