Re: [DISCUSS] Apache Kafka 3.4.1 release

2023-08-09 Thread Luke Chen
Hi José,

Thanks for the reminder.
Yes, I did miss that.
Already updated and pushed.

Thanks.
Luke

On Wed, Aug 9, 2023 at 8:08 AM José Armando García Sancio
 wrote:

> Hey Luke,
>
> Thanks for working on the release for 3.4.1. I was working on some
> cherry picks and I noticed that branch 3.4 doesn't contain the
> commit/tag for 3.4.1. I think we are supposed to merge the tag back to
> the 3.4 branch. E.g.:
>
> > Merge the last version change / rc tag into the release branch and bump
> the version to 0.10.0.1-SNAPSHOT
> >
> > git checkout 0.10.0
> > git merge 0.10.0.0-rc6
>
> from: https://cwiki.apache.org/confluence/display/KAFKA/Release+Process
>
> Did we forget to do that part?
>
> Thanks!
> --
> -José
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2082

2023-08-09 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-960: Support interactive queries (IQv2) for versioned state stores

2023-08-09 Thread Matthias J. Sax
Seems there was a lot of additional feedback. Looking forward to an 
updated version of the KIP.


I also agree to make the queries more composable. I was considering raising 
this originally, but held off because `RangeQuery` is also not designed to be 
very composable. But for versioned stores, we have many more combinations, so 
making it composable does make sense to me.


About iterator order: I would also propose to be pragmatic, and only add 
what is simple to implement for now. We can always extend it later. We 
just need to clearly document the order (or say: order is not defined -- 
also a valid option). Of course, if we limit what we add now, we should 
keep in mind how to extend the API in the future without the need to 
deprecate a lot of stuff (ideally, we would not need to deprecate 
anything but only extend what we have).


Btw: I am also happy to de-scope this KIP to only implement the two 
queries Victoria mentioned being easy to implement, and do follow up 
KIPs for range queries. There is no need to do everything with a single KIP.


About the original v-store KIP and `long` vs `Instant` -- I don't think 
we forgot it. If the store is used inside a `Processor`, using `long` is 
preferred because performance is important and we are on the hot code 
path. For IQ, on the other hand, it's not the hot code path, and the 
semantics exposed to the user are more important. -- At least, this is 
how we did it in the past.



One more thought.

The new `VersionedKeyQuery` seems to have two different query types 
merged into a single class: queries which return a single result, and 
queries that return multiple results. This does not seem ideal. For 
`withKeyLatestValue` and `withKeyWithTimestampBound` (should we rename 
this to `withKeyAsOfTimestamp`?) I would expect to get a single 
`VersionedRecord` back, not an iterator. Hence, we might need to 
split `VersionedKeyQuery` into two query types?
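
Just to sketch what such a split could look like (all class, factory-method, and 
type names below are made up for illustration and are not part of the KIP; 
`ValueIterator` is declared inline only because it does not exist yet):

import java.time.Instant;
import java.util.Iterator;

import org.apache.kafka.streams.query.Query;
import org.apache.kafka.streams.state.VersionedRecord;

// hypothetical iterator type from the KIP discussion; declared here only so the sketch compiles
interface ValueIterator<V> extends Iterator<V>, AutoCloseable {
    @Override
    void close();
}

// single-result lookups: the latest value, or the value as of a point in time
final class VersionedKeyQuery<K, V> implements Query<VersionedRecord<V>> {

    private final K key;
    private final Instant asOf; // null means "latest"

    private VersionedKeyQuery(final K key, final Instant asOf) {
        this.key = key;
        this.asOf = asOf;
    }

    static <K, V> VersionedKeyQuery<K, V> withKeyLatestValue(final K key) {
        return new VersionedKeyQuery<>(key, null);
    }

    static <K, V> VersionedKeyQuery<K, V> withKeyAsOf(final K key, final Instant asOf) {
        return new VersionedKeyQuery<>(key, asOf);
    }

    K key() { return key; }
    Instant asOf() { return asOf; }
}

// multi-result lookups: all versions of a key within a time range, returned as an iterator
final class VersionedKeyHistoryQuery<K, V> implements Query<ValueIterator<VersionedRecord<V>>> {

    private final K key;
    private final Instant from;
    private final Instant to;

    private VersionedKeyHistoryQuery(final K key, final Instant from, final Instant to) {
        this.key = key;
        this.from = from;
        this.to = to;
    }

    static <K, V> VersionedKeyHistoryQuery<K, V> withKeyAndTimeRange(final K key, final Instant from, final Instant to) {
        return new VersionedKeyHistoryQuery<>(key, from, to);
    }

    K key() { return key; }
    Instant from() { return from; }
    Instant to() { return to; }
}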



-Matthias




On 8/9/23 6:46 AM, Victoria Xia wrote:

Hey Alieh,

Thanks for the KIP!

It looks like the KIP proposes three different types of interactive queries for 
versioned stores, though they are grouped together into two classes: 
VersionedKeyQuery adds support for single-key, single-timestamp lookups, and 
also for single-key, multi-timestamp lookups, while VersionedRangeQuery 
additionally adds support for key-range queries.

The first type of query (single-key, single-timestamp lookups) is already 
supported by versioned stores (per the VersionedKeyValueStore interface) today, 
so exposing it via interactive queries requires low additional implementation 
effort and is a quick win for users. The other two types of queries will 
require more effort to add, and also come with more design decisions. I've 
sorted my thoughts accordingly.

Regarding single-key, multi-timestamp lookups:

1. If we add these, we should add a new method to the VersionedKeyValueStore 
interface to support this type of lookup. Otherwise, there is no easy/efficient 
way to compose methods from the existing interface in order to implement this 
type of lookup, and therefore the new interactive query type cannot be used on 
generic VersionedKeyValueStores.

2. I agree with Matthias's and Lucas's comments about being very explicit about what the timestamp range means. For consistency 
with single-key, single-timestamp lookups, I think the "upper timestamp bound" should really be an "as of 
timestamp bound" instead, so that it is inclusive. For the "lower timestamp bound"/start timestamp, we have a 
choice regarding whether to interpret it as the user saying "I want valid records for all timestamps in the range" in 
which case the query should return a record with timestamp earlier than the start timestamp, or to interpret it as the user 
saying "I want all records with timestamps in the range" in which case the query should not return any records with 
timestamp earlier than the start timestamp. My current preference is for the former, but it'd be good to hear other opinions.

3. The existing VersionedRecord interface contains only a value and validFrom 
timestamp, and does not allow null values. This presents a problem for introducing 
single-key, multi-timestamp lookups because if there is a tombstone contained within 
the timestamp range of the query, then there is no way to represent this as part of a 
ValueIterator return type. You'll either have to allow null 
values or add a validTo timestamp to the returned records.

4. Also +1 to Matthias's question about standardizing the order in which 
records are returned. Will they always be returned in forwards-timestamp order? 
Reverse-timestamp order? Will users get a choice? It'd be good to make this 
explicit in the KIP.

Regarding key-range queries (either single-timestamp or multi-timestamp):

5. Same comment about adding new methods for this type of lookup to the 
VersionedKeyValueStore interface.

6. Again +1 to Matthias's question about the order in which records are 

[GitHub] [kafka-site] lbradstreet commented on a diff in pull request #537: MINOR: Add Lucas Bradstreet to committers

2023-08-09 Thread via GitHub


lbradstreet commented on code in PR #537:
URL: https://github.com/apache/kafka-site/pull/537#discussion_r1289395022


##
committers.html:
##
@@ -552,6 +552,13 @@ The committers
   https://github.com/gharris1727/
   https://twitter.com/gharris1727 (@gharris1727)
 
+
+  Lucas Bradstreet

Review Comment:
   Oops, should have opened the html.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] jolshan commented on a diff in pull request #537: MINOR: Add Lucas Bradstreet to committers

2023-08-09 Thread via GitHub


jolshan commented on code in PR #537:
URL: https://github.com/apache/kafka-site/pull/537#discussion_r1289372410


##
committers.html:
##
@@ -552,6 +552,13 @@ The committers
   https://github.com/gharris1727/
   https://twitter.com/gharris1727 (@gharris1727)
 
+
+  Lucas Bradstreet

Review Comment:
   Did you leave out your picture here?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] lbradstreet opened a new pull request, #537: MINOR: Add Lucas Bradstreet to committers

2023-08-09 Thread via GitHub


lbradstreet opened a new pull request, #537:
URL: https://github.com/apache/kafka-site/pull/537

   I recently became a committer 
https://projects.apache.org/committee.html?kafka.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [DISCUSS] KIP-955: Add stream-table join on foreign key

2023-08-09 Thread Matthias J. Sax
Thanks for the details. And sorry for being a little bit picky. My goal 
is to really understand the use-case and the need for this KIP. It's a 
massive change and I just want to ensure we don't add (complex) things 
unnecessarily.



So you have a stream of "orderEvents" with key=orderId. You cannot 
represent them as a KTable, because `orderId` is not a PK, but just an 
identifier that a message belongs to a certain order. This part I understand.


You also have a KTable "orderItems", with orderId as a value-field.




 Relationship between parent and child messages is 1:1


If I understand correctly, you want to join on orderId. If the join is 
1:1, it means that there is only a single table-record for each unique 
orderId. Thus, orderId could be the PK of the table. If that's correct, 
you could use orderId as the key of "orderItems" and do a regular 
stream-table join. -- Or do I miss something?





and to send it only once to the target system as one ‘complete order’ message 
for each new ‘order event’ message.


This sounds like an aggregation to me, not a join? It seems that an order 
consists of multiple "orderEvent" messages, and you would want to 
aggregate them based on orderId (plus add some more order detail 
information from the table)? Only after all "orderEvent" messages are 
received and the order is "completed" do you want to send a result 
downstream (that is fine and would be a filter in the DSL to drop 
incomplete results).





Maybe there could be a flag to stream-table foreign key join that would
indicate if we want this join to aggregate children or not?


Wouldn't this muddy the waters between a join and an aggregation and imply 
that it's a "weird" hybrid operator, and we would also need to change 
the `join()` method to accept an additional `Aggregator` function?




From what I understand so far (correct me if I am wrong), you could do 
what you want to do as follows:


// accumulate all orderEvents per `orderId`
// cf last step to avoid unbounded growth of the result KTable
KStream orderEventStream = builder.stream("orderEventTopic");
// you might want to disable caching in the next step
KTable orderEvents = orderEventStream.groupByKey().aggregate(...);

// rekey your orderItems to use `orderId` as PK for the table
KStream orderItemStream = builder.stream("orderItemTopic");
KTable orderItems = orderItemStream.map(/* put orderId as key */).toTable();

// do the join
KStream enrichedOrders = orderEvents.toStream().join(orderItems, /* value joiner */ ...);

// drop incomplete orders
KStream completedOrders = enrichedOrders.filter((orderId, order) -> order.isCompleted());

// publish result
completedOrders.to("resultTopic");

// additional cleanup
completedOrders.map(/* craft a special "delete order message" */).to("orderEventTopic");



The last step is required to have a "cleanup" message to purge state 
from the `orderEvents` KTable that was computed via the aggregation. If 
such a cleanup message is processed by the `aggregate` step, you would 
return `null` as aggregation result to drop the record for the 
corresponding orderId that was completed, to avoid unbounded growth of 
the KTable. (There are other ways to do the same cleanup; it's just one 
example how it could be done.)
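
To make the cleanup step a bit more concrete, here is a minimal sketch of the 
`aggregate` call, assuming hypothetical `OrderEvent` / `OrderAccumulator` value 
types and serdes (all names are made up for illustration):

import java.util.ArrayList;
import java.util.List;

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

// hypothetical value types for this sketch
class OrderEvent {
    boolean cleanup; // the special "delete order" marker written back by the pipeline
    boolean isCleanup() { return cleanup; }
}

class OrderAccumulator {
    final List<OrderEvent> events = new ArrayList<>();
    OrderAccumulator add(final OrderEvent event) { events.add(event); return this; }
}

class OrderEventCleanupSketch {

    static KTable<String, OrderAccumulator> buildOrderEvents(final StreamsBuilder builder,
                                                             final Serde<OrderEvent> orderEventSerde,
                                                             final Serde<OrderAccumulator> accumulatorSerde) {
        final KStream<String, OrderEvent> orderEventStream = builder.stream("orderEventTopic");

        return orderEventStream
            .groupByKey(Grouped.with(Serdes.String(), orderEventSerde))
            .aggregate(
                OrderAccumulator::new,
                (orderId, event, agg) ->
                    // returning null deletes the row for this orderId, so completed
                    // orders do not accumulate in the KTable forever
                    event.isCleanup() ? null : agg.add(event),
                Materialized.with(Serdes.String(), accumulatorSerde));
    }
}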



If I got it wrong, can you explain what part I messed up?



-Matthias




On 8/7/23 10:15 AM, Igor Fomenko wrote:

Hi Matthias,



Thanks for your comments.



I would like to clarify the use case a little more to show why existing
table-table foreign key join will not work for the use case I am trying to
address.

Let’s consider the very simple use case with the parent messages in one
Kafka topic (‘order event’ messages that also contain some key order info)
and the child messages in another topic (‘order items’ messages with an
additional info for the order). Relationship between parent and child
messages is 1:1. Also ‘order items’ message has OrderID as one of its
fields (foreign key).



The requirement is to combine info of the parent ‘order event’ message with
child ‘order items’ message using foreign key and to send it only once to
the target system as one ‘complete order’ message for each new ‘order
event’ message.

Please note that the new messages which are related to order items (create,
update, delete) should not trigger the resulting ‘complete order’ message.



 From the above requirements we can state the following:

1. Order events are unique and never updated or deleted; they can only
be replayed if we need to recover the event stream. For our order example I
would use OrderID as an event key but if we use the KTable to represent
events then events with the same OrderID will overwrite each other. This
may or may not cause some issues, but using a stream to model them seems to be
a more correct approach, at least from a performance point of view.

2. We do not want updates from the “order items” table on the right
side of the join to generate an output since only events should be the
trigger for output messages 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #56

2023-08-09 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 189429 lines...]
> Task :connect:json:publishToMavenLocal
> Task :connect:api:javadocJar
> Task :connect:api:compileTestJava UP-TO-DATE
> Task :connect:api:testClasses UP-TO-DATE
> Task :connect:api:testJar
> Task :connect:api:testSrcJar
> Task :connect:api:publishMavenJavaPublicationToMavenLocal
> Task :connect:api:publishToMavenLocal
> Task :streams:javadoc
> Task :streams:javadocJar

> Task :clients:javadoc
/home/jenkins/workspace/Kafka_kafka_3.5/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."
/home/jenkins/workspace/Kafka_kafka_3.5/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
2 warnings

> Task :clients:javadocJar
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:compileScala
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava UP-TO-DATE
> Task :streams:testClasses UP-TO-DATE
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/8.0.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 4m 39s
89 actionable tasks: 33 executed, 56 up-to-date
[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in /home/jenkins/workspace/Kafka_kafka_3.5/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.5.2-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.5/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.5.2-SNAPSHOT/streams-quickstart-3.5.2-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.5.2-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart-java ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- resources:2.7:resources (default-resources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- resources:2.7:testResources (default-testResources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- archetype:2.2:jar (default-jar) @ streams-quickstart-java ---
[INFO] Building archetype jar: 
/home/jenkins/workspace/Kafka_kafka_3.5/streams/quickstart/java/target/streams-quickstart-java-3.5.2-SNAPSHOT
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart-java 

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2081

2023-08-09 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-962 Relax non-null key requirement in Kafka Streams

2023-08-09 Thread Matthias J. Sax

Thanks for the KIP.


left join KStream-GlobalTable: no longer drop left records with null-key and 
call KeyValueMapper with 'null' for left  key. The case where KeyValueMapper 
returns null is already handled in the current implementation.


Not sure if this is the right phrasing? In the end, even now, the stream 
input record key can be null (cf 
https://issues.apache.org/jira/browse/KAFKA-10277) -- a stream record is 
only dropped if the `KeyValueMapper` returns `null` (note that the 
key-extractor has no default implementation but is a required argument) 
-- this KIP would relax this case for left-join.
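
Just to make this concrete from the user's side, a rough sketch of such a left 
join using the `ValueJoinerWithKey` overload from KIP-149 (topic names, value 
types, and the produced strings are made up for illustration; the KIP defines 
the actual semantics):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

class NullKeyLeftJoinSketch {

    static KStream<String, String> build(final StreamsBuilder builder) {
        final KStream<String, String> clicks = builder.stream("clicks"); // stream key may be null
        final GlobalKTable<String, String> users = builder.globalTable("users");

        // Today, a left record is dropped if the key extractor returns null; under the
        // relaxed semantics discussed here, it would instead be kept and joined with a
        // null table value.
        return clicks.leftJoin(
            users,
            // key extractor: may return null, e.g. for anonymous clicks
            (clickKey, clickValue) -> clickKey,
            // ValueJoinerWithKey lets the joiner tell "null stream key" apart from
            // "key present but no table match" (user == null)
            (clickKey, clickValue, user) -> {
                if (clickKey == null) {
                    return clickValue + " (anonymous)";
                }
                return clickValue + " by " + (user != null ? user : "unknown user");
            });
    }
}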




In the pull request all relevant Javadocs will be updated with the information 
on how to keep the old behavior for a given operator / method.


I would remove this from the KIP -- I am also not sure if we should put 
it into the JavaDoc? -- I agree that it should go into the upgrade docs 
as well as "join section" in the docs: 
https://kafka.apache.org/35/documentation/streams/developer-guide/dsl-api.html#joining


We also have 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Join+Semantics 
that we should update.




I added a remark about the repartition of null-key records.


I think the way it's phrased is good. In the end, it's an optimization to 
drop records upstream (instead of piping them through the topic and dropping 
them downstream), and thus we don't have to cover it in the KIP in 
detail. In general, for aggregations we can still apply the 
optimization; however, we need to be careful as we could also have two 
downstream operators with a shared repartition topic: for this case, we 
can only drop upstream if all downstream operators would drop null-key 
records anyway. We can cover details on the PR.




-Matthias



On 8/9/23 5:39 AM, Florin Akermann wrote:

Hi All,

I added a remark about the repartition of null-key records.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams#KIP962:RelaxnonnullkeyrequirementinKafkaStreams-Repartitionofnull-keyrecords

Can we relax this constraint for any kind of repartitioning or should it
only be relaxed in the context of left stream-table and left/outer
stream-stream joins?

Florin

On Mon, 7 Aug 2023 at 13:23, Florin Akermann 
wrote:


Hi Lucas,

Thanks. I added the point about the upgrade guide as well.

Florin

On Mon, 7 Aug 2023 at 11:06, Lucas Brutschy 
wrote:


Hi Florin,

thanks for the KIP! This looks good to me. I agree that the precise
Java doc wording doesn't have to be discussed as part of the KIP.

I would also suggest to include an update to
https://kafka.apache.org/documentation/streams/upgrade-guide

Cheers,
Lucas

On Mon, Aug 7, 2023 at 10:51 AM Florin Akermann
 wrote:


Hi Both,

Thanks.
I added remarks to account for this.


https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams#KIP962:RelaxnonnullkeyrequirementinKafkaStreams-Remarks


In short, let's add a note in the Java docs? The exact wording of the

note

can be scrutinized in the pull request?

What do you think?


On Sun, 6 Aug 2023 at 19:41, Guozhang Wang 
wrote:


I'm just thinking we can try to encourage users to migrate from XX to
XXWithKey in the docs, giving this as one good example that the latter
can help you distinguish different scenarios whereas the former
cannot.

On Fri, Aug 4, 2023 at 6:32 PM Matthias J. Sax 

wrote:


Guozhang,

thanks for pointing out ValueJoinerWithKey. In the end, it's just a
documentation change, ie, point out that the passed in key could be
`null` and similar?

-Matthias


On 8/2/23 3:20 PM, Guozhang Wang wrote:

Thanks Florin for the writeup,

One quick thing I'd like to bring up is that in KIP-149
(



https://cwiki.apache.org/confluence/display/KAFKA/KIP-149%3A+Enabling+key+access+in+ValueTransformer%2C+ValueMapper%2C+and+ValueJoiner

)

we introduced ValueJoinerWithKey which is aimed to enhance
ValueJoiner. It would have a benefit for this KIP such that
implementers can distinguish "null-key" v.s. "not-null-key but
null-value" scenarios.

Hence I'd suggest we also include the semantic changes with
ValueJoinerWithKey, which can help distinguish these two

scenarios,

and also document that if users apply ValueJoiner only, they may

not

have this benefit, and hence we suggest users to use the former.


Guozhang

On Mon, Jul 31, 2023 at 12:11 PM Florin Akermann
 wrote:






https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams










[jira] [Resolved] (KAFKA-14831) Illegal state errors should be fatal in transactional producer

2023-08-09 Thread Kirk True (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True resolved KAFKA-14831.
---
Resolution: Fixed

> Illegal state errors should be fatal in transactional producer
> --
>
> Key: KAFKA-14831
> URL: https://issues.apache.org/jira/browse/KAFKA-14831
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.2.3
>Reporter: Jason Gustafson
>Assignee: Kirk True
>Priority: Major
> Fix For: 3.6.0
>
>
> In KAFKA-14830, the producer hit an illegal state error. The error was 
> propagated to the {{Sender}} thread and logged, but the producer otherwise 
> continued on. It would be better to make illegal state errors fatal since 
> continuing to write to transactions when the internal state is inconsistent 
> may cause incorrect and unpredictable behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-714: Client metrics and observability

2023-08-09 Thread Ryanne Dolan
-1, non-binding, for reasons previously stated.

Ryanne

On Fri, Aug 4, 2023, 3:46 AM Andrew Schofield <
andrew_schofield_j...@outlook.com> wrote:

> Hi,
> After almost 2 1/2 years in the making, I would like to call a vote for
> KIP-714 (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability
> ).
>
> This KIP aims to improve monitoring and troubleshooting of client
> performance by enabling clients to push metrics to brokers.
>
> I’d like to thank everyone that participated in the discussion, especially
> the librdkafka team since one of the aims of the KIP is to enable any
> client to participate, not just the Apache Kafka project’s Java clients.
>
> Thanks,
> Andrew


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2080

2023-08-09 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 384979 lines...]
Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
TaskMetadataIntegrationTest > shouldReportCorrectEndOffsetInformation STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
TaskMetadataIntegrationTest > shouldReportCorrectEndOffsetInformation PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
TaskMetadataIntegrationTest > shouldReportCorrectCommittedOffsetInformation 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
TaskMetadataIntegrationTest > shouldReportCorrectCommittedOffsetInformation 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
EmitOnChangeIntegrationTest > shouldEmitSameRecordAfterFailover() STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
EmitOnChangeIntegrationTest > shouldEmitSameRecordAfterFailover() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndInMemoryStores(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndInMemoryStores(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KStreamAggregationDedupIntegrationTest > shouldReduce(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KStreamAggregationDedupIntegrationTest > shouldReduce(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KStreamAggregationDedupIntegrationTest > shouldGroupByKey(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KStreamAggregationDedupIntegrationTest > shouldGroupByKey(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KStreamAggregationDedupIntegrationTest > shouldReduceWindowed(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KStreamAggregationDedupIntegrationTest > shouldReduceWindowed(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KStreamKStreamIntegrationTest > shouldOuterJoin() STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KStreamKStreamIntegrationTest > shouldOuterJoin() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableKTableForeignKeyInnerJoinMultiIntegrationTest > 
shouldInnerJoinMultiPartitionQueryable() STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableKTableForeignKeyInnerJoinMultiIntegrationTest > 
shouldInnerJoinMultiPartitionQueryable() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicNotWrittenToDuringRestoration() STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicNotWrittenToDuringRestoration() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosAlphaEnabled()
 STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosAlphaEnabled()
 PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled() 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled() 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosV2Enabled() 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosV2Enabled() 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 179 > 
LagFetchIntegrationTest > 

[jira] [Created] (KAFKA-15327) Async consumer should commit offsets on close

2023-08-09 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-15327:
--

 Summary: Async consumer should commit offsets on close
 Key: KAFKA-15327
 URL: https://issues.apache.org/jira/browse/KAFKA-15327
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer
Reporter: Lianet Magrans


In the current implementation of the KafkaConsumer, the ConsumerCoordinator 
commits offsets before the consumer is closed, with a call to 
maybeAutoCommitOffsetsSync(timer);
The async consumer should provide the same behaviour to commit offsets on 
close. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.4 #157

2023-08-09 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 439354 lines...]

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetDataAndStat() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testChrootExistsAndRootIsLocked() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testChrootExistsAndRootIsLocked() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testCreateTopLevelPaths() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testCreateTopLevelPaths() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetLogConfigs() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetLogConfigs() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testAclMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testAclMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testConditionalUpdatePath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testConditionalUpdatePath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testDeleteTopicZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testDeleteTopicZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testDeletePath() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testDeletePath() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetBrokerMethods() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetBrokerMethods() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testJuteMaxBufffer() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testJuteMaxBufffer() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 167 > 
KafkaZkClientTest > testChroot(boolean) > 

[jira] [Created] (KAFKA-15326) Decouple Processing Thread from Polling Thread

2023-08-09 Thread Lucas Brutschy (Jira)
Lucas Brutschy created KAFKA-15326:
--

 Summary: Decouple Processing Thread from Polling Thread
 Key: KAFKA-15326
 URL: https://issues.apache.org/jira/browse/KAFKA-15326
 Project: Kafka
  Issue Type: Task
  Components: streams
Reporter: Lucas Brutschy
Assignee: Lucas Brutschy


As part of an ongoing effort to implement a better threading architecture in 
Kafka streams, we decouple N stream threads into N polling threads and N 
processing threads. The effort to consolidate N polling thread into a single 
thread is follow-up after this ticket. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-714: Client metrics and observability

2023-08-09 Thread Kirk True
Hi Andrew,

+1 (non-binding)

This is a huge step in enabling end-to-end observability for users and 
hopefully will even help us get a better idea of where we can improve the client 
behavior.

And +100 re: librdkafka team involvement. 

Thanks!

> On Aug 8, 2023, at 4:00 AM, Milind Luthra  
> wrote:
> 
> Hi Andrew, thanks for working on the KIP.
> 
> +1 (non binding)
> 
> Thanks,
> Milind
> 
> On Fri, Aug 4, 2023 at 2:16 PM Andrew Schofield <
> andrew_schofield_j...@outlook.com> wrote:
> 
>> Hi,
>> After almost 2 1/2 years in the making, I would like to call a vote for
>> KIP-714 (
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability
>> ).
>> 
>> This KIP aims to improve monitoring and troubleshooting of client
>> performance by enabling clients to push metrics to brokers.
>> 
>> I’d like to thank everyone that participated in the discussion, especially
>> the librdkafka team since one of the aims of the KIP is to enable any
>> client to participate, not just the Apache Kafka project’s Java clients.
>> 
>> Thanks,
>> Andrew



Re: [VOTE] KIP-858: Handle JBOD broker disk failure in KRaft

2023-08-09 Thread Mickael Maison
Hi Igor,

Thanks for the KIP, adding JBOD support to KRaft is really important.

I see 2 different metric names mentioned
"QueuedReplicaToDirAssignments" and
"NumMismatchingReplicaToLogDirAssignments". From the descriptions it
seems it's the same metric, can you clarify which name you propose
using?

Apart from this small inconsistency, I'm +1 (binding)

Thanks,
Mickael

On Tue, Jul 25, 2023 at 11:06 PM Igor Soarez  wrote:
>
> Hi Ismael,
>
> I believe I have addressed all concerns.
> Please have a look, and consider a vote on this KIP.
>
> Thank you,
>
> --
> Igor


[jira] [Resolved] (KAFKA-15298) Disable DeleteRecords on Tiered Storage topics

2023-08-09 Thread Christo Lolov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christo Lolov resolved KAFKA-15298.
---
Resolution: Won't Fix

> Disable DeleteRecords on Tiered Storage topics
> --
>
> Key: KAFKA-15298
> URL: https://issues.apache.org/jira/browse/KAFKA-15298
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
>  Labels: tiered-storage
>
> Currently the DeleteRecords API does not work with Tiered Storage. We should 
> ensure that this is reflected in the responses that clients get when trying 
> to use the API with tiered topics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15100) Unsafe to call tryCompleteFetchResponse on request timeout

2023-08-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-15100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

José Armando García Sancio resolved KAFKA-15100.

Resolution: Fixed

> Unsafe to call tryCompleteFetchResponse on request timeout
> --
>
> Key: KAFKA-15100
> URL: https://issues.apache.org/jira/browse/KAFKA-15100
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: José Armando García Sancio
>Assignee: José Armando García Sancio
>Priority: Major
> Fix For: 3.6.0, 3.4.2, 3.5.2
>
>
> When the fetch request times out the future is completed from the 
> "raft-expiration-executor" SystemTimer thread. KafkaRaftClient assumes that 
> tryCompleteFetchResponse is always called from the same thread. This 
> invariant is violated in this case.
> {code:java}
>             return future.handle((completionTimeMs, exception) -> {
>                 if (exception != null) {
>                     Throwable cause = exception instanceof ExecutionException ?
>                         exception.getCause() : exception;
>                     // If the fetch timed out in purgatory, it means no new data is available,
>                     // and we will complete the fetch successfully. Otherwise, if there was
>                     // any other error, we need to return it.
>                     Errors error = Errors.forException(cause);
>                     if (error != Errors.REQUEST_TIMED_OUT) {
>                         logger.info("Failed to handle fetch from {} at {} due to {}",
>                             replicaId, fetchPartition.fetchOffset(), error);
>                         return buildEmptyFetchResponse(error, Optional.empty());
>                     }
>                 }
>                 // FIXME: `completionTimeMs`, which can be null
>                 logger.trace("Completing delayed fetch from {} starting at offset {} at {}",
>                     replicaId, fetchPartition.fetchOffset(), completionTimeMs);
>                 return tryCompleteFetchRequest(replicaId, fetchPartition, time.milliseconds());
>             });
> {code}
> One solution is to always build an empty response if the future was completed 
> exceptionally. This works because the ExpirationService completes the future 
> with a `TimeoutException`.
> A longer-term solution is to use a more flexible event executor service. This 
> would be a service that allows more kinds of event to get scheduled/submitted 
> to the KRaft thread.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


RE: Re: [DISCUSS] KIP-960: Support interactive queries (IQv2) for versioned state stores

2023-08-09 Thread Victoria Xia
Hey Alieh,

Thanks for the KIP! 

It looks like the KIP proposes three different types of interactive queries for 
versioned stores, though they are grouped together into two classes: 
VersionedKeyQuery adds support for single-key, single-timestamp lookups, and 
also for single-key, multi-timestamp lookups, while VersionedRangeQuery 
additionally adds support for key-range queries.

The first type of query (single-key, single-timestamp lookups) is already 
supported by versioned stores (per the VersionedKeyValueStore interface) today, 
so exposing it via interactive queries requires low additional implementation 
effort and is a quick win for users. The other two types of queries will 
require more effort to add, and also come with more design decisions. I've 
sorted my thoughts accordingly.

Regarding single-key, multi-timestamp lookups:

1. If we add these, we should add a new method to the VersionedKeyValueStore 
interface to support this type of lookup. Otherwise, there is no easy/efficient 
way to compose methods from the existing interface in order to implement this 
type of lookup, and therefore the new interactive query type cannot be used on 
generic VersionedKeyValueStores.

2. I agree with Matthias's and Lucas's comments about being very explicit about 
what the timestamp range means. For consistency with single-key, 
single-timestamp lookups, I think the "upper timestamp bound" should really be 
an "as of timestamp bound" instead, so that it is inclusive. For the "lower 
timestamp bound"/start timestamp, we have a choice regarding whether to 
interpret it as the user saying "I want valid records for all timestamps in the 
range" in which case the query should return a record with timestamp earlier 
than the start timestamp, or to interpret it as the user saying "I want all 
records with timestamps in the range" in which case the query should not return 
any records with timestamp earlier than the start timestamp. My current 
preference is for the former, but it'd be good to hear other opinions.

3. The existing VersionedRecord interface contains only a value and validFrom 
timestamp, and does not allow null values. This presents a problem for 
introducing single-key, multi-timestamp lookups because if there is a tombstone 
contained within the timestamp range of the query, then there is no way to 
represent this as part of a ValueIterator return type. You'll 
either have to allow null values or add a validTo timestamp to the returned 
records.

4. Also +1 to Matthias's question about standardizing the order in which 
records are returned. Will they always be returned in forwards-timestamp order? 
Reverse-timestamp order? Will users get a choice? It'd be good to make this 
explicit in the KIP.
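
Regarding the validTo option in point 3, a rough sketch purely for illustration 
(the class name, field types, and accessors below are made up, not an actual 
proposal):

import java.util.Objects;
import java.util.Optional;

// hypothetical shape of a versioned record that carries its validity interval;
// a tombstone inside the queried range can then be represented implicitly, as the
// end of the previous version's validity, without allowing null values
final class VersionedRecordWithValidTo<V> {

    private final V value;
    private final long validFrom;         // inclusive, epoch millis
    private final Optional<Long> validTo; // exclusive; empty means "still valid"

    VersionedRecordWithValidTo(final V value, final long validFrom, final Optional<Long> validTo) {
        this.value = Objects.requireNonNull(value);
        this.validFrom = validFrom;
        this.validTo = Objects.requireNonNull(validTo);
    }

    V value() { return value; }
    long validFrom() { return validFrom; }
    Optional<Long> validTo() { return validTo; }
}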

Regarding key-range queries (either single-timestamp or multi-timestamp):

5. Same comment about adding new methods for this type of lookup to the 
VersionedKeyValueStore interface.

6. Again +1 to Matthias's question about the order in which records are 
returned, for multi-key, multi-timestamp queries. Organizing first by key and 
then by timestamp makes the most sense to me, based on the layout of the 
existing store implementation. (Trying to sort by timestamp would require 
reading potentially all keys into memory first, which is not feasible.)

I think the complexity of introducing single-key, multi-timestamp lookups and 
especially multi-key, multi-timestamp lookups is significantly higher than for 
single-key, single-timestamp lookups, so it'd be good to think about/gauge what 
the use cases for these types of queries are before committing to the 
implementation, and also to stage the implementation to get single-key, 
single-timestamp lookups as a quick win first without blocking on the others. 
(Guessing you were already planning to do that, though :))

Also a separate question: 

7. What's the motivation for introducing new VersionedKeyQuery and 
VersionedRangeQuery types rather than re-using the existing KeyQuery and 
RangeQuery types, to add optional asOfTimestamp bounds? I can see pros and cons 
of each, just curious to hear your thoughts.

If you do choose to keep VersionedKeyQuery and VersionedRangeQuery separate 
from KeyQuery and RangeQuery, then you can remove the KeyQuery and RangeQuery 
placeholders in the versioned store implementation as part of implementing your 
KIP: 
https://github.com/apache/kafka/blob/f23394336a7741bf4eb23fcde951af0a23a69bd0/streams/src/main/java/org/apache/kafka/streams/state/internals/MeteredVersionedKeyValueStore.java#L142-L158

Best,
Victoria

On 2023/08/09 10:16:44 Bruno Cadonna wrote:
> Hi,
> 
> I will use the initials of the authors to distinguish the points.
> 
> LB 2.
> I like the idea of composable construction of queries. It would make the 
> API more readable. I think this is better than the VersionsQualifier, I 
> proposed in BC 3..
> 
> LB 4. and LB 5.
> Being explicit on the time bounds and key ranges is really important.
> 
> LB 6.
> I 

[jira] [Created] (KAFKA-15325) Integrate topicId in OffsetFetch and OffsetCommit async consumer calls

2023-08-09 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-15325:
--

 Summary: Integrate topicId in OffsetFetch and OffsetCommit async 
consumer calls
 Key: KAFKA-15325
 URL: https://issues.apache.org/jira/browse/KAFKA-15325
 Project: Kafka
  Issue Type: Task
  Components: clients, consumer
Reporter: Lianet Magrans
Assignee: Lianet Magrans


KIP-848 introduces support for topicIds in the OffsetFetch and OffsetCommit 
APIs. The consumer calls to those APIs should be updated to include topicIds 
when available.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-963: Upload and delete lag metrics in Tiered Storage

2023-08-09 Thread Christo Lolov
Heya Kamal,

Thank you for going through the KIP and for the question!

I have been thinking about this and, as an operator, I might find it most
useful to know all three of them, actually.

I would find knowing the size in bytes useful to determine how much disk I
might need to add temporarily to compensate for the slowdown.
I would find knowing the number of records useful, because using the
MessagesInPerSec metric I would be able to determine how old the records
which are facing problems are.
I would find knowing the number of segments useful because I would be able
to correlate this with whether I need to change
remote.log.manager.task.interval.ms to a lower or higher value.
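
As a back-of-the-envelope illustration of the point about the number of records 
(all numbers and variable names are made up):

class TierLagAgeEstimate {
    public static void main(final String[] args) {
        // hypothetical readings: the proposed record-based lag metric, and the
        // existing per-topic MessagesInPerSec rate
        final double remoteCopyLagRecords = 1_500_000.0;
        final double messagesInPerSec = 2_500.0;

        // roughly how much produce time the pending uploads correspond to
        final double approxLagSeconds = remoteCopyLagRecords / messagesInPerSec;
        System.out.printf("Tiering is roughly %.0f seconds behind the produce rate%n", approxLagSeconds);
    }
}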

What are your thoughts on the above? Would you find some of them more
useful than others?

Best,
Christo

On Tue, 8 Aug 2023 at 16:43, Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> Hi Christo,
>
> Thanks for the KIP!
>
> The proposed tiered storage metrics are useful. The unit mentioned in the
> KIP is the number of records.
> Each topic can have varying amounts of records in a segment depending on
> the record size.
>
> Do you think having the tier-lag by number of segments (or) size of
> segments in bytes will be useful
> to the operator?
>
> Thanks,
> Kamal
>
> On Tue, Aug 8, 2023 at 8:56 PM Christo Lolov 
> wrote:
>
> > Hello all!
> >
> > I would like to start a discussion for KIP-963: Upload and delete lag
> > metrics in Tiered Storage (https://cwiki.apache.org/confluence/x/sZGzDw
> ).
> >
> > The purpose of this KIP is to introduce a couple of metrics to track lag
> > with respect to remote storage from the point of view of Kafka.
> >
> > Thanks in advance for leaving a review!
> >
> > Best,
> > Christo
> >
>


Re: [DISCUSS] KIP-962 Relax non-null key requirement in Kafka Streams

2023-08-09 Thread Florin Akermann
Hi All,

I added a remark about the repartition of null-key records.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams#KIP962:RelaxnonnullkeyrequirementinKafkaStreams-Repartitionofnull-keyrecords

Can we relax this constraint for any kind of repartitioning or should it
only be relaxed in the context of left stream-table and left/outer
stream-stream joins?

Florin

On Mon, 7 Aug 2023 at 13:23, Florin Akermann 
wrote:

> Hi Lucas,
>
> Thanks. I added the point about the upgrade guide as well.
>
> Florin
>
> On Mon, 7 Aug 2023 at 11:06, Lucas Brutschy 
> wrote:
>
>> Hi Florin,
>>
>> thanks for the KIP! This looks good to me. I agree that the precise
>> Java doc wording doesn't have to be discussed as part of the KIP.
>>
>> I would also suggest to include an update to
>> https://kafka.apache.org/documentation/streams/upgrade-guide
>>
>> Cheers,
>> Lucas
>>
>> On Mon, Aug 7, 2023 at 10:51 AM Florin Akermann
>>  wrote:
>> >
>> > Hi Both,
>> >
>> > Thanks.
>> > I added remarks to account for this.
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams#KIP962:RelaxnonnullkeyrequirementinKafkaStreams-Remarks
>> >
>> > In short, let's add a note in the Java docs? The exact wording of the
>> note
>> > can be scrutinized in the pull request?
>> >
>> > What do you think?
>> >
>> >
>> > On Sun, 6 Aug 2023 at 19:41, Guozhang Wang 
>> > wrote:
>> >
>> > > I'm just thinking we can try to encourage users to migrate from XX to
>> > > XXWithKey in the docs, giving this as one good example that the latter
>> > > can help you distinguish different scenarios whereas the former
>> > > cannot.
>> > >
>> > > On Fri, Aug 4, 2023 at 6:32 PM Matthias J. Sax 
>> wrote:
>> > > >
>> > > > Guozhang,
>> > > >
>> > > > thanks for pointing out ValueJoinerWithKey. In the end, it's just a
>> > > > documentation change, ie, point out that the passed in key could be
>> > > > `null` and similar?
>> > > >
>> > > > -Matthias
>> > > >
>> > > >
>> > > > On 8/2/23 3:20 PM, Guozhang Wang wrote:
>> > > > > Thanks Florin for the writeup,
>> > > > >
>> > > > > One quick thing I'd like to bring up is that in KIP-149
>> > > > > (
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-149%3A+Enabling+key+access+in+ValueTransformer%2C+ValueMapper%2C+and+ValueJoiner
>> > > )
>> > > > > we introduced ValueJoinerWithKey which is aimed to enhance
>> > > > > ValueJoiner. It would have a benefit for this KIP such that
>> > > > > implementers can distinguish "null-key" v.s. "not-null-key but
>> > > > > null-value" scenarios.
>> > > > >
>> > > > > Hence I'd suggest we also include the semantic changes with
>> > > > > ValueJoinerWithKey, which can help distinguish these two
>> scenarios,
>> > > > > and also document that if users apply ValueJoiner only, they may
>> not
>> > > > > have this benefit, and hence we suggest users to use the former.
>> > > > >
>> > > > >
>> > > > > Guozhang
>> > > > >
>> > > > > On Mon, Jul 31, 2023 at 12:11 PM Florin Akermann
>> > > > >  wrote:
>> > > > >>
>> > > > >>
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams
>> > >
>>
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2079

2023-08-09 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.4 #156

2023-08-09 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-960: Support interactive queries (IQv2) for versioned state stores

2023-08-09 Thread Bruno Cadonna

Hi,

I will use the initials of the authors to distinguish the points.

LB 2.
I like the idea of composable construction of queries. It would make the 
API more readable. I think this is better than the VersionsQualifier I 
proposed in BC 3.
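
Just to visualize the composable style from Lucas's examples (quoted below); 
every name in this sketch is hypothetical and not an agreed API:

import java.time.Instant;

import org.apache.kafka.streams.query.Query;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.VersionedRecord;

// hypothetical composable range query over a versioned store
final class ComposableVersionedRangeQuery<K, V>
        implements Query<KeyValueIterator<K, VersionedRecord<V>>> {

    private final K lower;     // null = no lower key bound
    private final K upper;     // null = no upper key bound
    private Instant fromTime;  // null = no lower time bound
    private Instant untilTime; // null = latest / as of "now"

    private ComposableVersionedRangeQuery(final K lower, final K upper) {
        this.lower = lower;
        this.upper = upper;
    }

    static <K, V> ComposableVersionedRangeQuery<K, V> withBounds(final K lower, final K upper) {
        return new ComposableVersionedRangeQuery<>(lower, upper);
    }

    static <K, V> ComposableVersionedRangeQuery<K, V> withNoBounds() {
        return new ComposableVersionedRangeQuery<>(null, null);
    }

    ComposableVersionedRangeQuery<K, V> latest() {
        this.fromTime = null;
        this.untilTime = null;
        return this;
    }

    ComposableVersionedRangeQuery<K, V> fromTime(final Instant from) {
        this.fromTime = from;
        return this;
    }

    ComposableVersionedRangeQuery<K, V> untilTime(final Instant until) {
        this.untilTime = until;
        return this;
    }
}

// usage, mirroring the examples quoted below:
//   ComposableVersionedRangeQuery.withBounds("a", "f").latest()
//   ComposableVersionedRangeQuery.<String, Long>withNoBounds().fromTime(t1).untilTime(t2)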


LB 4. and LB 5.
Being explicit on the time bounds and key ranges is really important.

LB 6.
I think peekNextValue() comes from peekNextKey() in KeyValueIterator. I 
agree with Lucas that in ValueIterator peek() is clear enough.


BC 5.
I missed answering your question. I am sorry! I do not think we need 
system tests. However, you should add a test plan listing the tests you 
plan to write. I guess this list will comprise unit and integration tests.


BC 6. (new)
Could you please use asOf or asOfTime or asOfTimestamp instead of 
untilTimestamp. I do not want to query a key until a timestamp, but I 
want to query a key at a specific point in time. Maybe atTime might also 
work, but I think asOf is more popular.


BC 7. (new)
You should change long to Instant for timestamps. Just because we missed 
using Instant in other places does not mean we should keep long.



Best,
Bruno


On 8/8/23 10:26 PM, Lucas Brutschy wrote:

Hi Alieh,

thanks a lot for the KIP. IQ with time semantics is going to be
another great improvement towards having crystal clear streaming
semantics!

1. I agree with Bruno and Matthias, to remove the 'bound' term for the
timestamps. It's confusing that we have bounds for both timestamps and
keys. In particular, `withNoBoundWithTimestampBound` seems to
contradict itself.

2. I would personally prefer having composable construction of the
query, instead of defining a separate method for each combination. So
for example:
- `keyRangeLatestValue(l,u)` ->  `withBounds(l, u).latest()`
- `withNoBoundsWithTimestampRange(t1,t2)` ->
`withNoBounds().fromTime(t1).untilTime(t2)`
- etc.pp.
This would have the advantage, that the interface would be very
similar to `RangeQuery` and we'd need a lot fewer methods, so it will
make the API reference a much quicker read. We already use this style
to define `skipCache` in `KeyQuery`. I guess that diverges quite a bit
from the current proposal, but I'll leave it here anyways for you to
consider it (even if you decide to stick with the current model).

4. Please make sure to specify in every range-based method whether the
bounds are inclusive or exclusive. I see it being mentioned for some
methods, but for others, this is omitted. As I understand, 'until' is
usually used to mean exclusive, and 'from' is usually used to mean
inclusive, but it's better to specify this in the javadoc.

5. Similarly, as Matthias says, specify what happens if the "validity
range" of a value overlaps with the query range. So, to clarify his
remark, what happens if the value v1 is inserted at time 1 and value
v2 is inserted at time 3, and I query for the range `[2,4]` - will the
result include v1 or not? It's the valid value at time 2. For
inspiration, in `WindowRangeQuery`, this important semantic detail is
even clear from the method name `withWindowStartRange`.

6. For iterators, it is convention to call the method `peek` and this
convention followed by e.g. `AbstractIterator` in Kafka, but also
Guava, Apache Commons etc. So I would also call it `peek`, not
`peekNextValue` here. It's clear what we are peeking at.

Cheers,
Lucas

On Thu, Jul 27, 2023 at 3:07 PM Alieh Saeedi
 wrote:


Thanks, Bruno, for the feedback.


- I agree with both points 2 and 3. About 3: Having "VersionsQualifier"
reduces the number of methods and makes everything less confusing. At the
end, that will be easier to use for the developers.
- About point 4: I renamed all the properties and parameters from
"asOfTimestamp" to "fromTimestamp". That was my misunderstanding. So Now we
have these two timestamp bounds: "fromTimestamp" and "untilTimestamp".
- About point 5: Do we need system tests here? I assumed just
integration tests were enough.
    - Regarding long vs timestamp instance: I think yes, that's why I used 
Long as timestamp.

Bests,
Alieh






On Thu, Jul 27, 2023 at 2:28 PM Bruno Cadonna  wrote:


Hi Alieh,

Thanks for the KIP!


Here my feedback.

1.
You can remove the private fields and constructors from the KIP. Those
are implementation details.


2.
Some proposals for renamings

in VersionedKeyQuery

withKeyWithTimestampBound()
-> withKeyAndAsOf()

withKeyWithTimestampRange()
-> withKeyAndTimeRange()

in VersionedRangeQuery

keyRangeWithTimestampBound()
-> withKeyRangeAndAsOf()

withLowerBoundWithTimestampBound()
-> withLowerBoundAndAsOf()

withUpperBoundWithTimestampBound()
-> withUpperBoundAndAsOf()

withNoBoundWithTimestampBound()
-> withNoBoundsAndAsOf

keyRangeWithTimestampRange()
-> withKeyRangeAndTimeRange()

withLowerBoundWithTimestampRange()
-> withLowerBoundAndTimeRange()

withUpperBoundWithTimestampRange()
-> withUpperBoundAndTimeRange()

withNoBoundWithTimestampRange()
-> withNoBoundsAndTimeRange()

[jira] [Reopened] (KAFKA-14595) Move ReassignPartitionsCommand to tools

2023-08-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison reopened KAFKA-14595:


> Move ReassignPartitionsCommand to tools
> ---
>
> Key: KAFKA-14595
> URL: https://issues.apache.org/jira/browse/KAFKA-14595
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Nikolay Izhikov
>Priority: Major
> Fix For: 3.6.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #55

2023-08-09 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 281194 lines...]

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZkMigrationClientTest > testCreateNewTopic() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZkMigrationClientTest > testCreateNewTopic() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testZooKeeperSessionStateMetric() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testZooKeeperSessionStateMetric() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testExceptionInBeforeInitializingSession() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testExceptionInBeforeInitializingSession() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testGetChildrenExistingZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testGetChildrenExistingZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testConnection() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testConnection() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testZNodeChangeHandlerForCreation() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testZNodeChangeHandlerForCreation() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testGetAclExistingZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testGetAclExistingZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testSessionExpiryDuringClose() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testSessionExpiryDuringClose() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testReinitializeAfterAuthFailure() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testReinitializeAfterAuthFailure() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testSetAclNonExistentZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testSetAclNonExistentZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testConnectionLossRequestTermination() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testConnectionLossRequestTermination() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testExistsNonExistentZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testExistsNonExistentZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testGetDataNonExistentZNode() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testGetDataNonExistentZNode() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testConnectionTimeout() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testConnectionTimeout() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testBlockOnRequestCompletionFromStateChangeHandler() 
STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testBlockOnRequestCompletionFromStateChangeHandler() 
PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testUnresolvableConnectString() STARTED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 
ZooKeeperClientTest > testUnresolvableConnectString() PASSED

Gradle Test Run :core:integrationTest > Gradle Test Executor 174 > 

Re: [QUESTION] What is the difference between sequence and offset for a Record?

2023-08-09 Thread tison
Thanks for your reply!

I may have misused the term "normalization". What I wanted to refer to is:

appendInfo.setLastOffset(offset.value - 1)

which under the hood updates the base offset field (in the record batch) but not
the offset delta of each record.
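
To spell out the arithmetic I have in mind (just a sketch of the
bookkeeping, not the actual broker code):

// Sketch only: the broker moves the batch's base offset; the per-record
// offset deltas written by the producer stay untouched.
public final class OffsetAssignmentSketch {

    public static void main(final String[] args) {
        long baseOffset = 0L;                 // as serialized by the producer
        final int[] offsetDeltas = {0, 1, 2}; // one delta per record in the batch
        final int lastOffsetDelta = 2;

        // Suppose the broker decides the batch's last offset should be 41
        // (e.g. current log end offset + number of records - 1).
        final long assignedLastOffset = 41L;

        // Setting the last offset effectively rewrites the base offset so that
        // baseOffset + lastOffsetDelta == assignedLastOffset ...
        baseOffset = assignedLastOffset - lastOffsetDelta; // now 39

        // ... and each record's absolute offset falls out of the unchanged deltas.
        for (final int delta : offsetDeltas) {
            System.out.println("record offset = " + (baseOffset + delta)); // 39, 40, 41
        }
    }
}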

Best,
tison.


Justine Olshan  于2023年8月8日周二 00:43写道:

> The sequence summary looks right to me.
> For log normalization, are you referring to compaction? The segment's first
> and last offsets might change, but a batch keeps its offsets when
> compaction occurs.
>
> Hope that helps.
> Justine
>
> On Mon, Aug 7, 2023 at 8:59 AM Matthias J. Sax  wrote:
>
> > > but the base offset may change during log normalizing.
> >
> > Not sure what you mean by "normalization" but offsets are immutable, so
> > they don't change. (To be fair, I am not an expert on brokers, so not
> > sure how this work in detail when log compaction ticks in).
> >
> > > This field is given by the producer and the broker should only read it.
> >
> > Sounds right. The point being is, that the broker has an "expected"
> > value for it, and if the provided value does not match the expected one,
> > the write is rejected to begin with.
> >
> >
> > -Matthias
> >
> > On 8/7/23 6:35 AM, tison wrote:
> > > Hi Matthias and Justine,
> > >
> > > Thanks for your reply!
> > >
> > > I can summarize the answer as -
> > >
> > > Record offset = base offset + offset delta. This field is calculated by
> > the
> > > broker and the delta won't change but the base offset may change during
> > log
> > > normalizing.
> > > Record sequence = base sequence + (offset) delta. This field is given
> by
> > > the producer and the broker should only read it.
> > >
> > > Is it correct?
> > >
> > > I implement the manipulation part of base offset following this
> > > understanding at [1].
> > >
> > > Best,
> > > tison.
> > >
> > > [1]
> > >
> >
> https://github.com/tisonkun/kafka-api/blob/d080ab7e4b57c0ab0182e0b254333f400e616cd2/simplesrv/src/lib.rs#L391-L394
> > >
> > >
> > > Justine Olshan  于2023年8月2日周三 04:19写道:
> > >
> > >> For what it's worth -- the sequence number is not calculated
> > >> "baseOffset/baseSequence + offset delta" but rather by monotonically
> > >> increasing for a given epoch. If the epoch is bumped, we reset back to
> > >> zero.
> > >> This may mean that the offset and sequence may match, but do not
> > strictly
> > >> need to be the same. The sequence number will also always come from
> the
> > >> client and be in the produce records sent to the Kafka broker.
> > >>
> > >> As for offsets, there is some code in the log layer that maintains the
> > log
> > >> end offset and assigns offsets to the records. The produce handling on
> > the
> > >> leader should typically assign the offset.
> > >> I believe you can find that code here:
> > >>
> > >>
> >
> https://github.com/apache/kafka/blob/b9a45546a7918799b6fb3c0fe63b56f47d8fcba9/core/src/main/scala/kafka/log/UnifiedLog.scala#L766
> > >>
> > >> Justine
> > >>
> > >> On Tue, Aug 1, 2023 at 11:38 AM Matthias J. Sax 
> > wrote:
> > >>
> > >>> The _offset_ is the position of the record in the partition.
> > >>>
> > >>> The _sequence number_ is a unique ID that allows broker to
> de-duplicate
> > >>> messages. It requires the producer to implement the idempotency
> > protocol
> > >>> (part of Kafka transactions); thus, sequence numbers are optional and
> > as
> > >>> long as you don't want to support idempotent writes, you don't need
> to
> > >>> worry about them. (If you want to dig into details, checkout KIP-98
> > that
> > >>> is the original KIP about Kafka TX).
> > >>>
> > >>> HTH,
> > >>> -Matthias
> > >>>
> > >>> On 8/1/23 2:19 AM, tison wrote:
> >  Hi,
> > 
> >  I'm wringing a Kafka API Rust codec library[1] to understand how
> Kafka
> >  models its concepts and how the core business logic works.
> > 
> >  During implementing the codec for Records[2], I saw a twins of
> fields
> >  "sequence" and "offset". Both of them are calculated by
> >  baseOffset/baseSequence + offset delta. Then I'm a bit confused how
> to
> > >>> deal
> >  with them properly - what's the difference between these two
> concepts
> >  logically?
> > 
> >  Also, to understand how the core business logic works, I write a
> > simple
> >  server based on my codec library, and observe that the server may
> need
> > >> to
> >  update offset for records produced. How does Kafka set the correct
> > >> offset
> >  for each produced records? And how does Kafka maintain the
> calculation
> > >>> for
> >  offset and sequence during these modifications?
> > 
> >  I'll appreciate if anyone can answer the question or give some
> > insights
> > >>> :D
> > 
> >  Best,
> >  tison.
> > 
> >  [1] https://github.com/tisonkun/kafka-api
> >  [2] https://kafka.apache.org/documentation/#messageformat
> > 
> > >>>
> > >>
> > >
> >
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #185

2023-08-09 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 357112 lines...]
Please consult deprecation warnings for more details.

BUILD SUCCESSFUL in 29s
79 actionable tasks: 33 executed, 46 up-to-date
[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.3.3-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/quickstart/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.3.3-SNAPSHOT/streams-quickstart-3.3.3-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.3.3-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart-java ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- resources:2.7:resources (default-resources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- resources:2.7:testResources (default-testResources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- archetype:2.2:jar (default-jar) @ streams-quickstart-java ---
[INFO] Building archetype jar: 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/quickstart/java/target/streams-quickstart-java-3.3.3-SNAPSHOT
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- archetype:2.2:integration-test (default-integration-test) @ 
streams-quickstart-java ---
[WARNING]  Parameter 'skip' (user property 'archetype.test.skip') is read-only, 
must not be used in configuration
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart-java ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart-java ---
[INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/quickstart/java/target/streams-quickstart-java-3.3.3-SNAPSHOT.jar
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.3.3-SNAPSHOT/streams-quickstart-java-3.3.3-SNAPSHOT.jar
[INFO] Installing 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/quickstart/java/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.3.3-SNAPSHOT/streams-quickstart-java-3.3.3-SNAPSHOT.pom
[INFO] 
[INFO] --- archetype:2.2:update-local-catalog (default-update-local-catalog) @ 
streams-quickstart-java ---
[INFO] 
[INFO] Reactor Summary for Kafka Streams :: Quickstart 3.3.3-SNAPSHOT:
[INFO] 
[INFO] Kafka Streams :: Quickstart  SUCCESS [  2.928 s]
[INFO] streams-quickstart-java  SUCCESS [  0.982 s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time:  4.262 s
[INFO] Finished at: 2023-08-09T01:13:16Z
[INFO] 
[Pipeline] dir
Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.3/streams/quickstart/test-streams-archetype
[Pipeline] {
[Pipeline] sh
+ echo Y
+ mvn archetype:generate -DarchetypeCatalog=local 
-DarchetypeGroupId=org.apache.kafka 
-DarchetypeArtifactId=streams-quickstart-java -DarchetypeVersion=3.3.3-SNAPSHOT