Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.7 #102

2024-03-05 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 456542 lines...]
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldAddItselfToRecordingTriggerWhenFirstValueProvidersAreAddedAfterLastValueProvidersWereRemoved() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfValueProvidersToRemoveNotFound() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfValueProvidersToRemoveNotFound() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfCacheToAddIsSameAsOnlyOneOfMultipleCaches() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfCacheToAddIsSameAsOnlyOneOfMultipleCaches() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldNotSetStatsLevelToExceptDetailedTimersWhenValueProvidersWithoutStatisticsAreAdded() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldNotSetStatsLevelToExceptDetailedTimersWhenValueProvidersWithoutStatisticsAreAdded() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldNotRemoveItselfFromRecordingTriggerWhenAtLeastOneValueProviderIsPresent() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldNotRemoveItselfFromRecordingTriggerWhenAtLeastOneValueProviderIsPresent() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldRemoveItselfFromRecordingTriggerWhenAllValueProvidersAreRemoved() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldRemoveItselfFromRecordingTriggerWhenAllValueProvidersAreRemoved() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldAddItselfToRecordingTriggerWhenFirstValueProvidersAreAddedToNewlyCreatedRecorder() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldAddItselfToRecordingTriggerWhenFirstValueProvidersAreAddedToNewlyCreatedRecorder() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfValueProvidersForASegmentHasBeenAlreadyAdded() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfValueProvidersForASegmentHasBeenAlreadyAdded() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldCorrectlyHandleHitRatioRecordingsWithZeroHitsAndMisses() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldCorrectlyHandleHitRatioRecordingsWithZeroHitsAndMisses() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfCacheToAddIsNotNullButExistingCacheIsNull() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfCacheToAddIsNotNullButExistingCacheIsNull() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfCacheToAddIsNullButExistingCacheIsNotNull() STARTED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfCacheToAddIsNullButExistingCacheIsNotNull() PASSED
[2024-03-06T05:11:17.212Z] 
[2024-03-06T05:11:17.212Z] Gradle Test Run :streams:test > Gradle Test Executor 93 > RocksDBMetricsRecorderTest > shouldThrowIfStatisticsToAddIsNotNullButExistingStatisticsAreNull() STARTED
[2024

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2699

2024-03-05 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-1025: Optionally URL-encode clientID and clientSecret in authorization header

2024-03-05 Thread Nelson B.
Hi all,

I would like to start a discussion on KIP-1025, which would optionally
URL-encode clientID and clientSecret in the authorization header

https://cwiki.apache.org/confluence/display/KAFKA/KIP-1025%3A+Optionally+URL-encode+clientID+and+clientSecret+in+authorization+header

Best,
Nelson B.
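
For reference, a minimal sketch of the encoding that RFC 6749 section 2.3.1 requires (illustrative only; the class and method names below are not from the KIP): the clientID and clientSecret are form-urlencoded before being joined and base64-encoded into the HTTP Basic authorization header.

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class BasicAuthEncoding {
    // Form-urlencode each credential, then base64-encode the pair into the
    // value of the HTTP Basic authorization header.
    static String authorizationHeader(String clientId, String clientSecret) {
        String encodedId = URLEncoder.encode(clientId, StandardCharsets.UTF_8);
        String encodedSecret = URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);
        byte[] credentials = (encodedId + ":" + encodedSecret)
                .getBytes(StandardCharsets.UTF_8);
        return "Basic " + Base64.getEncoder().encodeToString(credentials);
    }
}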


Re: [VOTE] KIP-995: Allow users to specify initial offsets while creating connectors

2024-03-05 Thread Ashwin
Thanks Yash,

Yes, I think we can use @JsonInclude(JsonInclude.Include.NON_NULL) to
exclude "initial_offsets_response" from the create response if the initial
offset is not specified.

I'll close the voting this week if there are no further comments.

Thanks for voting, everyone!


Ashwin
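
For readers unfamiliar with the Jackson annotation mentioned above, a hypothetical sketch of how the entity could look (the class and field names come from this thread; the constructor shape and the field type are assumed):

import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import org.apache.kafka.connect.health.ConnectorType;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorInfo;
import org.apache.kafka.connect.util.ConnectorTaskId;

public class ConnectorInfoWithInitialOffsetsResponse extends ConnectorInfo {

    // With NON_NULL, this field is omitted from the create response
    // whenever no initial offsets were specified in the request.
    @JsonProperty("initial_offsets_response")
    @JsonInclude(JsonInclude.Include.NON_NULL)
    private final String initialOffsetsResponse;

    public ConnectorInfoWithInitialOffsetsResponse(String name,
                                                   Map<String, String> config,
                                                   List<ConnectorTaskId> tasks,
                                                   ConnectorType type,
                                                   String initialOffsetsResponse) {
        super(name, config, tasks, type);
        this.initialOffsetsResponse = initialOffsetsResponse;
    }
}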

On Tue, Mar 5, 2024 at 11:20 PM Yash Mayya  wrote:

> Hi Chris,
>
> I followed up with Ashwin offline and I believe he wanted to take a closer
> look at the `ConnectorInfoWithInitialOffsetsResponse` stuff he mentioned in
> the previous email and whether or not that'll be required (alternatively
> using some Jackson JSON tricks). However, that's an implementation detail
> and shouldn't hold up the KIP. Bikeshedding a little on the
> "initial_offsets_response" field - I'm wondering if something like
> "offsets_status" might be more appropriate, what do you think? I don't
> think the current name is terrible though, so I'm +1 (binding) if everyone
> else agrees that it's suitable.
>
> Thanks,
> Yash
>
> On Tue, Mar 5, 2024 at 9:51 PM Chris Egerton 
> wrote:
>
> > Hi all,
> >
> > Wanted to bump this and see if it looks good enough for a third vote.
> Yash,
> > any thoughts?
> >
> > Cheers,
> >
> > Chris
> >
> > On Mon, Jan 29, 2024 at 2:55 AM Ashwin 
> > wrote:
> >
> > > Thanks for reviewing this KIP,  Yash.
> > >
> > > Could you please elaborate on the cleanup steps? For instance, if we
> > > > encounter an error after wiping existing offsets but before writing
> the
> > > new
> > > > offsets, there's not really any good way to "revert" the wiped
> offsets.
> > > > It's definitely extremely unlikely that a user would expect the
> > previous
> > > > offsets for a connector to still be present (by creating a new
> > connector
> > > > with the same name but without initial offsets for instance) after
> > such a
> > > > failed operation, but it would still be good to call this out
> > > explicitly. I
> > > > presume that we'd want to wipe the newly written initial offsets if
> we
> > > fail
> > > > while writing the connector's config however?
> > >
> > >
> > > Agree - I have clarified the cleanup here -
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-995%3A+Allow+users+to+specify+initial+offsets+while+creating+connectors#KIP995:Allowuserstospecifyinitialoffsetswhilecreatingconnectors-ProposedChanges
> > > .
> > >
> > > The `PATCH /connectors/{connector}/offsets` and `DELETE
> > > > /connectors/{connector}/offsets` endpoints have two possible success
> > > > messages in the response depending on whether or not the connector
> > plugin
> > > > has implemented the `alterOffsets` connector method. Since we're
> > > proposing
> > > > to utilize the same offset validation during connector creation if
> > > initial
> > > > offsets are specified, I think it would be valuable to surface
> similar
> > > > information to users here as well
> > >
> > >
> > > Thanks for pointing this out. I have updated the response to include a
> > new
> > > field “initial_offsets_response” which will contain the response based
> on
> > > whether the connector implements alterOffsets or not. This also means
> > that
> > > if initial_offsets is set in the ConnectorCreate request, we will
> return
> > a
> > > new REST entity (ConnectorInfoWithInitialOffsetsResponse ?) which will
> > be a
> > > child class of ConnectorInfo.
> > >
> > > (
> > >
> > >
> >
> https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/entities/ConnectorInfo.java#L28-L28
> > > )
> > >
> > > Thanks,
> > > Ashwin
> > >
> > > On Wed, Jan 17, 2024 at 4:48 PM Yash Mayya 
> wrote:
> > >
> > > > Hi Ashwin,
> > > >
> > > > Thanks for the KIP.
> > > >
> > > > > If Connect runtime encounters an error in any of these steps,
> > > > > it will cleanup (if required) and return an error response
> > > >
> > > > Could you please elaborate on the cleanup steps? For instance, if we
> > > > encounter an error after wiping existing offsets but before writing
> the
> > > new
> > > > offsets, there's not really any good way to "revert" the wiped
> offsets.
> > > > It's definitely extremely unlikely that a user would expect the
> > previous
> > > > offsets for a connector to still be present (by creating a new
> > connector
> > > > with the same name but without initial offsets for instance) after
> > such a
> > > > failed operation, but it would still be good to call this out
> > > explicitly. I
> > > > presume that we'd want to wipe the newly written initial offsets if
> we
> > > fail
> > > > while writing the connector's config however?
> > > >
> > > > > Validate the offset using the same checks performed while
> > > > > altering connector offsets (PATCH /$connector/offsets ) as
> > > > > specified in KIP-875
> > > >
> > > > The `PATCH /connectors/{connector}/offsets` and `DELETE
> > > > /connectors/{connector}/offsets` endpoints have two possible success
> > > > messages in the response depending on whether or not the connector
> > p

[jira] [Created] (KAFKA-16346) Fix flaky MetricsTest.testMetrics

2024-03-05 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-16346:
--

 Summary: Fix flaky MetricsTest.testMetrics
 Key: KAFKA-16346
 URL: https://issues.apache.org/jira/browse/KAFKA-16346
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai


{code}
Gradle Test Run :core:test > Gradle Test Executor 1119 > MetricsTest > testMetrics(boolean) > testMetrics with systemRemoteStorageEnabled: false FAILED
org.opentest4j.AssertionFailedError: Broker metric not recorded correctly for kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request=Produce value 0.0 ==> expected: <true> but was: <false>
    at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
    at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
    at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
    at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
    at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
    at app//kafka.api.MetricsTest.verifyBrokerMessageConversionMetrics(MetricsTest.scala:314)
    at app//kafka.api.MetricsTest.testMetrics(MetricsTest.scala:110)
{code}

The value used to update the metric is calculated by Math.round, so it could be 
zero if you have a fast machine :)

We should verify the `count` instead of the `value`, since the count is 
cumulative and hence more stable.
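
A sketch of the proposed direction (the helper shape below is assumed for illustration, not taken from an actual patch; the metric is exposed as a Yammer Histogram in RequestMetrics):

{code}
import com.yammer.metrics.core.Histogram;
import static org.junit.jupiter.api.Assertions.assertTrue;

final class ConversionMetricAssertions {
    static void assertConversionsRecorded(Histogram conversionsTimeMs) {
        // Flaky: the rounded time value may legitimately round down to 0.0
        // on a fast machine, so asserting value > 0.0 can fail spuriously.
        // Stable: the cumulative event count is monotonic, so assert on it.
        assertTrue(conversionsTimeMs.count() > 0,
            "MessageConversionsTimeMs not recorded for Produce requests");
    }
}
{code}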



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15417) JoinWindow does not seem to work properly with a KStream - KStream - LeftJoin()

2024-03-05 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-15417.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> JoinWindow does not seem to work properly with a KStream - KStream - 
> LeftJoin()
> 
>
> Key: KAFKA-15417
> URL: https://issues.apache.org/jira/browse/KAFKA-15417
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
>Reporter: Victor van den Hoven
>Assignee: Victor van den Hoven
>Priority: Major
> Fix For: 3.8.0
>
> Attachments: Afbeelding 1-1.png, Afbeelding 1.png, 
> SimpleStreamTopology.java, SimpleStreamTopologyTest.java
>
>
> In Kafka Streams 3.4.0:
> According to the javadoc of JoinWindows:
> _There are three different window configurations supported:_
>  * _before = after = time-difference_
>  * _before = 0 and after = time-difference_
>  * _*before = time-difference and after = 0*_
>  
> However, if I use a JoinWindows with *before = time-difference and after = 0* 
> on a KStream-KStream leftJoin, the *after = 0* part does not seem to work.
> When using _stream1.leftJoin(stream2, joinWindow)_ with _joinWindow.after=0_ 
> and _joinWindow.before=30s_, any new message on stream1 that cannot be joined 
> with any message on stream2 should be joined with a null-record after the 
> _joinWindow.after_ interval has ended and a new message has arrived on stream1.
> It does not.
> Only if the new message arrives after the value of _joinWindow.before_ does 
> the previous message get joined with a null-record.
>  
> Attached you can find two files with a TopologyTestDriver unit test to 
> reproduce.
> topology: stream1.leftJoin(stream2, joiner, joinWindow)
> joinWindow has before=5000ms and after=0
> message1(key1) -> stream1
> after 4000ms message2(key2) -> stream1 -> NO null-record join was made, 
> although the after period had expired.
> after 4900ms message2(key2) -> stream1 -> NO null-record join was made, 
> although the after period had expired.
> after 5000ms message2(key2) -> stream1 -> a null-record join was made; the 
> before period had expired.
> after 6000ms message2(key2) -> stream1 -> a null-record join was made; the 
> before period had expired.
>  
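
A minimal sketch of the reported setup (topic names, types, and the joiner are illustrative; the JoinWindows calls are the Kafka Streams 3.x API):

{code}
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream1 = builder.stream("input-1");
KStream<String, String> stream2 = builder.stream("input-2");

// before = 5000 ms and after = 0: ofTimeDifference sets both sides to 5 s,
// then after(Duration.ZERO) narrows the future side to zero.
JoinWindows joinWindow = JoinWindows
        .ofTimeDifferenceWithNoGrace(Duration.ofSeconds(5))
        .after(Duration.ZERO);

// Per the report, the left (null-record) result for an unmatched stream1
// record is only emitted once stream time passes `before`, not `after`.
stream1.leftJoin(stream2,
                 (left, right) -> left + "/" + right, // right is null when unmatched
                 joinWindow)
       .to("joined-output");
{code}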



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16209) fetchSnapshot might return null if topic is created before v2.8

2024-03-05 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-16209.
---
Fix Version/s: 3.8.0
   3.7.1
   Resolution: Fixed

> fetchSnapshot might return null if topic is created before v2.8
> ---
>
> Key: KAFKA-16209
> URL: https://issues.apache.org/jira/browse/KAFKA-16209
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.1
>Reporter: Luke Chen
>Assignee: Arpit Goyal
>Priority: Major
>  Labels: newbie, newbie++
> Fix For: 3.8.0, 3.7.1
>
>
> The remote log manager fetches the snapshot via ProducerStateManager 
> [here|https://github.com/apache/kafka/blob/trunk/storage/src/main/java/org/apache/kafka/storage/internals/log/ProducerStateManager.java#L608],
>  but the snapshot map may contain nothing if the topic has no snapshot created, 
> e.g., topics created before v2.8. This needs to be fixed to avoid an NPE.
> old PR: https://github.com/apache/kafka/pull/14615/
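
A hypothetical guard illustrating the fix direction (the method and helper names here are assumed for illustration, not taken from the actual PR):

{code}
import java.io.File;
import java.util.Optional;

void uploadProducerSnapshotIfPresent(long endOffset) {
    Optional<File> snapshot = producerStateManager.fetchSnapshot(endOffset);
    if (snapshot.isPresent()) {
        copySnapshotToRemote(snapshot.get());
    } else {
        // Topics created before v2.8 may have no producer snapshot files,
        // so treat the absence as a normal case instead of hitting an NPE.
        log.debug("No producer snapshot at offset {}; skipping upload", endOffset);
    }
}
{code}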



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2698

2024-03-05 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-03-05 Thread Jun Rao
Hi, Jose,

Thanks for the reply.

30. So raft.version controls the version of Fetch among the voters. It
would be useful to document that.

36. Option 1 is fine. Could we document this in the section of
"Bootstrapping with multiple voters"?

37. We don't batch multiple topic partitions in AddVoter, RemoveVoter and
UpdateVoter requests while other requests like Vote and BeginQuorumEpoch
support batching. Should we make them consistent?

38. BeginQuorumEpochRequest: It seems that we need to replace the name
field with a nodeId field in LeaderEndpoints?

39. VoteRequest: Will the Voter ever be different from the Candidate? I
thought that in all the VoteRequests, the voter just votes for itself.

40. EndQuorumEpochRequest: Should we add a replicaUUID field to pair with
LeaderId?

41. Regarding including replica UUID to identify a voter: It adds a bit of
complexity. Could you explain whether it is truly needed? Before this KIP,
KRaft already supports replacing a disk on the voter node, right?

Jun

On Mon, Mar 4, 2024 at 2:55 PM José Armando García Sancio
 wrote:

> Hi Jun,
>
> Thanks for the feedback. See my comments below.
>
> On Fri, Mar 1, 2024 at 11:36 AM Jun Rao  wrote:
> > 30. Historically, we used MV to gate the version of Fetch request. Are
> you
> > saying that voters will ignore MV and only depend on raft.version when
> > choosing the version of Fetch request?
>
> Between Kafka servers/nodes (brokers and controllers) there are two
> implementations for the Fetch RPC.
>
> One, is the one traditionally used between brokers to replicate ISR
> based topic partitions. As you point out Kafka negotiates those
> versions using the IBP for ZK-based clusters and MV for KRaft-based
> clusters. This KIP doesn't change that. There have been offline
> conversations of potentially using the ApiVersions to negotiate those
> RPC versions but that is outside the scope of this KIP.
>
> Two, is the KRaft implementation. As of today only the controller
> listeners  (controller.listener.names) implement the request handlers
> for this version of the Fetch RPC. KafkaRaftClient implements the
> client side of this RPC. This version of the Fetch RPC is negotiated
> using ApiVersions.
>
> I hope that clarifies the two implementations. On a similar note,
> Jason and I did have a brief conversation regarding if KRaft should
> use a different RPC from Fetch to replicate the log of KRaft topic
> partition. This could be a long term option to make these two
> implementations clearer and allow them to diverge. I am not ready to
> tackle that problem in this KIP.
>
> > 35. Upgrading the controller listeners.
> > 35.1 So, the protocol is that each controller will pick the first
> listener
> > in controller.listener.names to initiate a connection?
>
> Yes. The negative of this solution is that it requires 3 rolls of
> voters (controllers) and 1 roll of observers (brokers) to replace a
> voter endpoint. In the future, we can have a solution that initiates
> the connection based on the state of the VotersRecord for voters RPCs.
> That solution can replace an endpoint with 2 rolls of voters and 1
> roll of observers.
>
> > 35.2 Should we include the new listeners in the section "Change the
> > controller listener in the brokers"?
>
> Yes. We need to. The observers (brokers) need to know what security
> protocol to use to connect to the endpoint(s) in
> controller.quorum.bootstrap.servers. This is also how connections to
> controller.quorum.voters work today.
>
> > 35.3 For every RPC that returns the controller leader, do we need to
> > return multiple endpoints?
>
> KRaft only needs to return the endpoint associated with the listener
> used to send the RPC request. This is similar to how the Metadata RPC
> works. The Brokers field in the Metadata response only returns the
> endpoints that match the listener used to receive the Metadata
> request.
>
> This is the main reason why KRaft needs to initiate connections using
> a security protocol (listener name) that is supported by all of the
> replicas. All of the clients (voters and observers) need to know
> (security protocol) how to connect to the redirection endpoint. All of
> the voters need to be listening on that listener name so that
> redirection works no matter the leader.
>
> > 35.4 The controller/observer can now get the endpoint from both records
> and
> > RPCs. Which one takes precedence? For example, suppose that a voter is
> down
> > for a while. It's started and gets the latest listener for the leader
> from
> > the initial fetch response. When fetching the records, it could see an
> > outdated listener. If it picks up this listener, it may not be able to
> > connect to the leader.
>
> Yeah. This is where connection and endpoint management gets tricky.
> This is my implementation strategy:
>
> 1. For the RPCs Vote, BeginQuorumEpoch and EndQuorumEpoch the replicas
> (votes) will always initiate connections using the endpoints described
> in the VotersRecord (or controller.quo
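
To make the listener discussion above concrete, an illustrative broker-side configuration (the listener names, hosts, and protocols below are placeholders, not values from the KIP):

controller.quorum.bootstrap.servers=controller-0.example.com:9093,controller-1.example.com:9093
controller.listener.names=CONTROLLER_NEW,CONTROLLER_OLD
listener.security.protocol.map=CONTROLLER_NEW:SSL,CONTROLLER_OLD:PLAINTEXT

Per the thread, voters initiate connections with the first name in controller.listener.names, and observers resolve the security protocol of the bootstrap endpoints through the listener name mapping, which is why every replica must listen on a commonly known listener name for redirection to work.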

Re: [VOTE] KIP-995: Allow users to specify initial offsets while creating connectors

2024-03-05 Thread Yash Mayya
Hi Chris,

I followed up with Ashwin offline and I believe he wanted to take a closer
look at the `ConnectorInfoWithInitialOffsetsResponse` stuff he mentioned in
the previous email and whether or not that'll be required (alternatively
using some Jackson JSON tricks). However, that's an implementation detail
and shouldn't hold up the KIP. Bikeshedding a little on the
"initial_offsets_response" field - I'm wondering if something like
"offsets_status" might be more appropriate, what do you think? I don't
think the current name is terrible though, so I'm +1 (binding) if everyone
else agrees that it's suitable.

Thanks,
Yash

On Tue, Mar 5, 2024 at 9:51 PM Chris Egerton 
wrote:

> Hi all,
>
> Wanted to bump this and see if it looks good enough for a third vote. Yash,
> any thoughts?
>
> Cheers,
>
> Chris
>
> On Mon, Jan 29, 2024 at 2:55 AM Ashwin 
> wrote:
>
> > Thanks for reviewing this KIP,  Yash.
> >
> > Could you please elaborate on the cleanup steps? For instance, if we
> > > encounter an error after wiping existing offsets but before writing the
> > new
> > > offsets, there's not really any good way to "revert" the wiped offsets.
> > > It's definitely extremely unlikely that a user would expect the
> previous
> > > offsets for a connector to still be present (by creating a new
> connector
> > > with the same name but without initial offsets for instance) after
> such a
> > > failed operation, but it would still be good to call this out
> > explicitly. I
> > > presume that we'd want to wipe the newly written initial offsets if we
> > fail
> > > while writing the connector's config however?
> >
> >
> > Agree - I have clarified the cleanup here -
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-995%3A+Allow+users+to+specify+initial+offsets+while+creating+connectors#KIP995:Allowuserstospecifyinitialoffsetswhilecreatingconnectors-ProposedChanges
> > .
> >
> > The `PATCH /connectors/{connector}/offsets` and `DELETE
> > > /connectors/{connector}/offsets` endpoints have two possible success
> > > messages in the response depending on whether or not the connector
> plugin
> > > has implemented the `alterOffsets` connector method. Since we're
> > proposing
> > > to utilize the same offset validation during connector creation if
> > initial
> > > offsets are specified, I think it would be valuable to surface similar
> > > information to users here as well
> >
> >
> > Thanks for pointing this out. I have updated the response to include a
> new
> > field “initial_offsets_response” which will contain the response based on
> > whether the connector implements alterOffsets or not. This also means
> that
> > if initial_offsets is set in the ConnectorCreate request, we will return
> a
> > new REST entity (ConnectorInfoWithInitialOffsetsResponse ?) which will
> be a
> > child class of ConnectorInfo.
> >
> > (
> >
> >
> https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/entities/ConnectorInfo.java#L28-L28
> > )
> >
> > Thanks,
> > Ashwin
> >
> > On Wed, Jan 17, 2024 at 4:48 PM Yash Mayya  wrote:
> >
> > > Hi Ashwin,
> > >
> > > Thanks for the KIP.
> > >
> > > > If Connect runtime encounters an error in any of these steps,
> > > > it will cleanup (if required) and return an error response
> > >
> > > Could you please elaborate on the cleanup steps? For instance, if we
> > > encounter an error after wiping existing offsets but before writing the
> > new
> > > offsets, there's not really any good way to "revert" the wiped offsets.
> > > It's definitely extremely unlikely that a user would expect the
> previous
> > > offsets for a connector to still be present (by creating a new
> connector
> > > with the same name but without initial offsets for instance) after
> such a
> > > failed operation, but it would still be good to call this out
> > explicitly. I
> > > presume that we'd want to wipe the newly written initial offsets if we
> > fail
> > > while writing the connector's config however?
> > >
> > > > Validate the offset using the same checks performed while
> > > > altering connector offsets (PATCH /$connector/offsets ) as
> > > > specified in KIP-875
> > >
> > > The `PATCH /connectors/{connector}/offsets` and `DELETE
> > > /connectors/{connector}/offsets` endpoints have two possible success
> > > messages in the response depending on whether or not the connector
> plugin
> > > has implemented the `alterOffsets` connector method. Since we're
> > proposing
> > > to utilize the same offset validation during connector creation if
> > initial
> > > offsets are specified, I think it would be valuable to surface similar
> > > information to users here as well. Thoughts?
> > >
> > > Thanks,
> > > Yash
> > >
> > > On Wed, Jan 17, 2024 at 3:31 PM Ashwin 
> > > wrote:
> > >
> > > > Hi All ,
> > > >
> > > > Can I please get one more binding vote, so that the KIP is approved ?
> > > > Thanks for the votes Chris and Mickael !
> > > >
> > > >
> > > > - Ashwin
> > >

Re: [DISCUSS] KIP-1021: Allow to get last stable offset (LSO) in kafka-get-offsets.sh

2024-03-05 Thread Ahmed Sobeh
I adjusted the KIP according to what we agreed on, let me know if you have
any comments!

Best,
Ahmed
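
For context, the syntax variants discussed in the quoted thread look roughly like this (a sketch only, not the final surface agreed in the KIP):

# Existing behavior: -1 returns the latest offset (read_uncommitted).
bin/kafka-get-offsets.sh --bootstrap-server localhost:9092 --topic my-topic --time -1

# Isolation-flag idea from the thread: keep --time -1 and select the
# isolation level separately, so the LSO needs no new magic number like -6.
bin/kafka-get-offsets.sh --bootstrap-server localhost:9092 --topic my-topic --time -1 --isolation committed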

On Thu, Feb 29, 2024 at 1:44 AM Luke Chen  wrote:

> Hi Ahmed,
>
> Thanks for the KIP!
>
> Comments:
> 1. If we all agree with the suggestion from Andrew, you could update the
> KIP.
>
> Otherwise, LGTM!
>
>
> Thanks.
> Luke
>
> On Thu, Feb 29, 2024 at 1:32 AM Andrew Schofield <
> andrew_schofield_j...@outlook.com> wrote:
>
> > Hi Ahmed,
> > Could do. Personally, I find the existing “--time -1” totally horrid
> > anyway, which was why
> > I suggested an alternative. I think your suggestion of a flag for
> > isolation level is much
> > better than -6.
> >
> > Maybe I should put in a KIP which adds:
> > --latest (as a synonym for --time -1)
> > --earliest (as a synonym for --time -2)
> > --max-timestamp (as a synonym for --time -3)
> >
> > That’s really what I would prefer. If the user has a timestamp, use
> > `--time`. If they want a
> > specific special offset, use a separate flag.
> >
> > Thanks,
> > Andrew
> >
> > > On 28 Feb 2024, at 09:22, Ahmed Sobeh 
> > wrote:
> > >
> > > Hi Andrew,
> > >
> > > Thanks for the hint! That sounds reasonable, do you think adding a
> > > conditional argument, something like "--time -1 --isolation -committed"
> > and
> > > "--time -1 --isolation -uncommitted" would make sense to keep the
> > > consistency of getting the offset by time? or do you think having a
> > special
> > > argument for this case is better?
> > >
> > > On Tue, Feb 27, 2024 at 2:19 PM Andrew Schofield <
> > > andrew_schofield_j...@outlook.com> wrote:
> > >
> > >> Hi Ahmed,
> > >> Thanks for the KIP.  It looks like a useful addition.
> > >>
> > >> The use of negative timestamps, and in particular letting the user use
> > >> `--time -1` or the equivalent `--time latest`
> > >> is a bit peculiar in this tool already. The negative timestamps come
> > from
> > >> org.apache.kafka.common.requests.ListOffsetsRequest,
> > >> but you’re not actually adding another value to that. As a result, I
> > >> really wouldn’t recommend using -6 for the new
> > >> flag. LSO is really a variant of -1 with read_committed isolation
> level.
> > >>
> > >> I think that perhaps it would be better to add `--last-stable` as an
> > >> alternative to `--time`. Then you’ll get the LSO with
> > >> cleaner syntax.
> > >>
> > >> Thanks,
> > >> Andrew
> > >>
> > >>
> > >>> On 27 Feb 2024, at 10:12, Ahmed Sobeh 
> > >> wrote:
> > >>>
> > >>> Hi all,
> > >>> I would like to start a discussion on KIP-1021, which would enable
> > >> getting
> > >>> LSO in kafka-get-offsets.sh:
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1021%3A+Allow+to+get+last+stable+offset+%28LSO%29+in+kafka-get-offsets.sh
> > >>>
> > >>> Best,
> > >>> Ahmed
> > >>
> > >>
> > >
> > > --
> > > *Ahmed Sobeh*
> > > Engineering Manager OSPO, *Aiven*
> > > ahmed.so...@aiven.io
> > > aiven.io
> > > *Aiven Deutschland GmbH*
> > > Immanuelkirchstraße 26, 10405 Berlin
> > > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > > Amtsgericht Charlottenburg, HRB 209739 B
> >
> >
> >
>


-- 
*Ahmed Sobeh*
Engineering Manager OSPO, *Aiven*
ahmed.so...@aiven.io
aiven.io
*Aiven Deutschland GmbH*
Immanuelkirchstraße 26, 10405 Berlin
Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
Amtsgericht Charlottenburg, HRB 209739 B


Re: [VOTE] KIP-995: Allow users to specify initial offsets while creating connectors

2024-03-05 Thread Chris Egerton
Hi all,

Wanted to bump this and see if it looks good enough for a third vote. Yash,
any thoughts?

Cheers,

Chris

On Mon, Jan 29, 2024 at 2:55 AM Ashwin  wrote:

> Thanks for reviewing this KIP,  Yash.
>
> Could you please elaborate on the cleanup steps? For instance, if we
> > encounter an error after wiping existing offsets but before writing the
> new
> > offsets, there's not really any good way to "revert" the wiped offsets.
> > It's definitely extremely unlikely that a user would expect the previous
> > offsets for a connector to still be present (by creating a new connector
> > with the same name but without initial offsets for instance) after such a
> > failed operation, but it would still be good to call this out
> explicitly. I
> > presume that we'd want to wipe the newly written initial offsets if we
> fail
> > while writing the connector's config however?
>
>
> Agree - I have clarified the cleanup here -
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-995%3A+Allow+users+to+specify+initial+offsets+while+creating+connectors#KIP995:Allowuserstospecifyinitialoffsetswhilecreatingconnectors-ProposedChanges
> .
>
> The `PATCH /connectors/{connector}/offsets` and `DELETE
> > /connectors/{connector}/offsets` endpoints have two possible success
> > messages in the response depending on whether or not the connector plugin
> > has implemented the `alterOffsets` connector method. Since we're
> proposing
> > to utilize the same offset validation during connector creation if
> initial
> > offsets are specified, I think it would be valuable to surface similar
> > information to users here as well
>
>
> Thanks for pointing this out. I have updated the response to include a new
> field “initial_offsets_response” which will contain the response based on
> whether the connector implements alterOffsets or not. This also means that
> if initial_offsets is set in the ConnectorCreate request, we will return a
> new REST entity (ConnectorInfoWithInitialOffsetsResponse ?) which will be a
> child class of ConnectorInfo.
>
> (
>
> https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/entities/ConnectorInfo.java#L28-L28
> )
>
> Thanks,
> Ashwin
>
> On Wed, Jan 17, 2024 at 4:48 PM Yash Mayya  wrote:
>
> > Hi Ashwin,
> >
> > Thanks for the KIP.
> >
> > > If Connect runtime encounters an error in any of these steps,
> > > it will cleanup (if required) and return an error response
> >
> > Could you please elaborate on the cleanup steps? For instance, if we
> > encounter an error after wiping existing offsets but before writing the
> new
> > offsets, there's not really any good way to "revert" the wiped offsets.
> > It's definitely extremely unlikely that a user would expect the previous
> > offsets for a connector to still be present (by creating a new connector
> > with the same name but without initial offsets for instance) after such a
> > failed operation, but it would still be good to call this out
> explicitly. I
> > presume that we'd want to wipe the newly written initial offsets if we
> fail
> > while writing the connector's config however?
> >
> > > Validate the offset using the same checks performed while
> > > altering connector offsets (PATCH /$connector/offsets ) as
> > > specified in KIP-875
> >
> > The `PATCH /connectors/{connector}/offsets` and `DELETE
> > /connectors/{connector}/offsets` endpoints have two possible success
> > messages in the response depending on whether or not the connector plugin
> > has implemented the `alterOffsets` connector method. Since we're
> proposing
> > to utilize the same offset validation during connector creation if
> initial
> > offsets are specified, I think it would be valuable to surface similar
> > information to users here as well. Thoughts?
> >
> > Thanks,
> > Yash
> >
> > On Wed, Jan 17, 2024 at 3:31 PM Ashwin 
> > wrote:
> >
> > > Hi All ,
> > >
> > > Can I please get one more binding vote, so that the KIP is approved ?
> > > Thanks for the votes Chris and Mickael !
> > >
> > >
> > > - Ashwin
> > >
> > >
> > > On Thu, Jan 11, 2024 at 3:55 PM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Ashwin,
> > > >
> > > > +1 (binding), thanks for the KIP
> > > >
> > > > Mickael
> > > >
> > > > On Tue, Jan 9, 2024 at 4:54 PM Chris Egerton  >
> > > > wrote:
> > > > >
> > > > > Thanks for the KIP! +1 (binding)
> > > > >
> > > > > On Mon, Jan 8, 2024 at 9:35 AM Ashwin  >
> > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > I would like to start  a vote on KIP-995.
> > > > > >
> > > > > >
> > > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-995%3A+Allow+users+to+specify+initial+offsets+while+creating+connectors
> > > > > >
> > > > > > Discussion thread -
> > > > > > https://lists.apache.org/thread/msorbr63scglf4484yq764v7klsj7c4j
> > > > > >
> > > > > > Thanks!
> > > > > >
> > > > > > Ashwin
> > > > > >
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-1010: Topic Partition Quota

2024-03-05 Thread Viktor Somogyi-Vass
Hi Afshin,

A couple observations:
1. The image you inserted doesn't get shown, please fix it
2. I'd like to clarify your proposal a bit. So far we have had the (user,
client), (user) or (client) combinations. You'd like to introduce
topic-partitions into this framework. Would it extend the current behavior,
so that the previous 4-item set becomes a 6-item set like this: (tp, user,
client), (tp, user), (tp, client), (tp), (user) or (client)? Or do these tp
quotas behave differently?
3. How would your implementation work when the aggregate of topic quotas
exceeds the available bandwidth? Do topics get fair access, or is it possible
that some partitions can't be consumed because others eat the bandwidth?
4. I'm a bit confused about the motivation section. Are you saying that if
you have a topic with 6 partitions and a quota set to 2MB/s, you expect a
4MB/s throughput when 2 of that topic's partition leaders are hosted on the
broker? Wouldn't that violate backward compatibility, because with a client
I can now produce at a 4MB/s rate?

Thanks,
Viktor
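
For readers less familiar with the existing quota entities referenced in point 2, today's (user, client) quotas are set like this (values illustrative); KIP-1010 would add a topic-partition dimension on top of these entity combinations:

bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
  --entity-type users --entity-name user1 \
  --entity-type clients --entity-name clientA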

On Wed, Feb 14, 2024 at 9:27 PM Afshin Moazami
 wrote:

> Thanks Viktor,
>
> Hi folks,
> I would like to propose a new feature to extend the quota management in
> Kafka to support topic-partition based quotas. The following is the link to
> the KIP
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1010%3A+Topic+Partition+Quota
>
>
> Best,
> Afshin Moazami
>
> On Wed, Feb 7, 2024 at 5:25 AM Viktor Somogyi-Vass
>  wrote:
>
> > Hi Afshin,
> >
> > We keep KIP discussions on dev@kafka.apache.org so please post this over
> > there too. I'll go over this later this week but devs usually monitor
> that
> > list more frequently and you'll have better chances of getting a reply
> > there.
> >
> > Regards,
> > Viktor
> >
> > On Wed, Jan 17, 2024 at 12:03 AM Afshin Moazami
> >  wrote:
> >
> > > Hi folks,
> > > I am not sure what is the KIP life-cycle and how we can get more
> > attention
> > > on them, so I just reply to this thread with the hope to get some
> > > discussion started.
> > >
> > > Thanks,
> > > Afshin
> > >
> > > On Mon, Dec 11, 2023 at 10:43 AM Afshin Moazami <
> amoaz...@salesforce.com
> > >
> > > wrote:
> > >
> > > > Hi folks,
> > > > I would like to propose a new feature to extend the quota management
> in
> > > > Kafka to support topic-partition based quotas. The following is the
> > link
> > > to
> > > > the KIP
> > > >
> > > >
> > >
> >
> https://urldefense.com/v3/__https://cwiki.apache.org/confluence/display/KAFKA/KIP-1010*3A*Topic*Partition*Quota__;JSsrKw!!DCbAVzZNrAf4!BK-888ZjIeh53cmPcRZ_ZIpA6-02xIk5LXsT4cl82ieHRjWN31a-xsi36sN9I3P3LOhhpYCJU2FpbYkfg2YpGX2RXtPFAIjsHv0$
> > > >
> > > >
> > > > Best,
> > > > Afshin Moazami
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-16345) Optionally allow urlencoding clientId and clientSecret in authorization header

2024-03-05 Thread Nelson B. (Jira)
Nelson B. created KAFKA-16345:
-

 Summary: Optionally allow urlencoding clientId and clientSecret in 
authorization header
 Key: KAFKA-16345
 URL: https://issues.apache.org/jira/browse/KAFKA-16345
 Project: Kafka
  Issue Type: Bug
Reporter: Nelson B.


When a client communicates with an OIDC provider to retrieve an access token, 
RFC 6749 says that the clientID and clientSecret must be urlencoded in the 
authorization header (see https://tools.ietf.org/html/rfc6749#section-2.3.1). 
However, it seems that in practice some OIDC providers do not enforce this, so 
I was thinking about introducing a new configuration parameter that would 
optionally urlencode the clientId & clientSecret in the authorization header.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16344) Internal topic mm2-offset-syncsinternal created with single partition is putting more load on the broker

2024-03-05 Thread Janardhana Gopalachar (Jira)
Janardhana Gopalachar created KAFKA-16344:
-

 Summary: Internal topic mm2-offset-syncsinternal 
created with single partition is putting more load on the broker
 Key: KAFKA-16344
 URL: https://issues.apache.org/jira/browse/KAFKA-16344
 Project: Kafka
  Issue Type: Bug
  Components: connect
Affects Versions: 3.5.1
Reporter: Janardhana Gopalachar


We are using Kafka 3.5.1. We see that the internal topic created by 
MirrorMaker, mm2-offset-syncsinternal, is created with a single partition, 
due to which the CPU load on the broker that is the leader for this partition 
is higher than on the other brokers. Can multiple partitions be created for 
the topic so that the CPU load gets distributed?

 

Topic: mm2-offset-syncscluster-ainternal    TopicId: XRvTDbogT8ytNhqX2YTyrA
PartitionCount: 1    ReplicationFactor: 3
Configs: min.insync.replicas=2,cleanup.policy=compact,message.format.version=3.0-IV1
    Topic: mm2-offset-syncscluster-ainternal    Partition: 0    Leader: 2    Replicas: 2,1,0    Isr: 2,1,0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2696

2024-03-05 Thread Apache Jenkins Server
See 




[VOTE] KIP-981: Manage Connect topics with custom implementation of Admin

2024-03-05 Thread Omnia Ibrahim
Hi everyone, I would like to start the vote on KIP-981: Manage Connect topics 
with custom implementation of Admin 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-981%3A+Manage+Connect+topics+with+custom+implementation+of+Admin
 

Thanks
Omnia