[jira] [Resolved] (KAFKA-15710) KRaft support in ServerShutdownTest

2023-11-08 Thread Sameer Tejani (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sameer Tejani resolved KAFKA-15710.
---
Resolution: Won't Fix

> KRaft support in ServerShutdownTest
> ---
>
> Key: KAFKA-15710
> URL: https://issues.apache.org/jira/browse/KAFKA-15710
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in ServerShutdownTest in 
> core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala need to be 
> updated to support KRaft
> 192 : def testCleanShutdownWithZkUnavailable(quorum: String): Unit = {
> 258 : def testControllerShutdownDuringSend(quorum: String): Unit = {
> Scanned 324 lines. Found 5 KRaft tests out of 7 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: tiered storage - remote data to topic binding

2023-11-08 Thread philipp lehmann
My bad! I missed that RemoteLogSegmentId also contains TopicIdPartition, which 
includes the required data. I wrongly assumed that it was only a UUID. Thank 
you for your quick response. The problem is solved.


From: Divij Vaidya 
Sent: Wednesday, November 8, 2023 21:07
To: dev@kafka.apache.org 
Subject: Re: tiered storage - remote data to topic binding

I am assuming that you are referring to the RSM.fetchLogSegment() API call.
The RemoteLogSegmentMetadata object passed to it contains a
RemoteLogSegmentId, which carries the topic and partition for this segment.
Isn't that information sufficient for your use case? If not,
RemoteLogSegmentMetadata also contains CustomMetadata, which is opaque to
the broker and is populated via the RemoteLogMetadataManager. You can choose
to feed any attributes you require into this custom metadata and read them
in the RSM.

Does this answer your question?

--
Divij Vaidya



On Wed, Nov 8, 2023 at 8:55 PM philipp lehmann <
philipp.lehm...@medionmail.com> wrote:

> Hello,
>
> If I understand tiered storage correctly, the RemoteStorageManager doesn't
> know to which topic the data it receives belongs. In some environments that
> offer access-pattern-based storage, this is disadvantageous. If the
> corresponding topics were part of the storage request, the
> RemoteStorageManager could use them to predict the access pattern, which in
> turn would enable selection of the best-matching storage. Let's say there's
> a topic that is rarely consumed: in this case, the RemoteStorageManager
> could use colder storage, resulting in lower storage costs. So my question
> is: is this within the scope of tiered storage? If it is, what is needed to
> get it changed?
>
> Regards,
> Philipp
>


Re: tiered storage - remote data to topic binding

2023-11-08 Thread Divij Vaidya
I am assuming that you are referring to the RSM.fetchLogSegment() API call.
The RemoteLogSegmentMetadata object passed to it contains a
RemoteLogSegmentId, which carries the topic and partition for this segment.
Isn't that information sufficient for your use case? If not,
RemoteLogSegmentMetadata also contains CustomMetadata, which is opaque to
the broker and is populated via the RemoteLogMetadataManager. You can choose
to feed any attributes you require into this custom metadata and read them
in the RSM.
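
For illustration, here is a minimal sketch of an RSM fragment that reads the
topic from the segment metadata. The metadata accessors are the real
remote-storage API; the tier-selection rule, class name, and helper are
hypothetical:

import java.io.InputStream;
import org.apache.kafka.common.TopicIdPartition;
import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata;
import org.apache.kafka.server.log.remote.storage.RemoteStorageException;

public class TierAwareRsmSketch {
    public InputStream fetchLogSegment(RemoteLogSegmentMetadata metadata,
                                       int startPosition) throws RemoteStorageException {
        // RemoteLogSegmentId -> TopicIdPartition carries the topic name and partition.
        TopicIdPartition tip = metadata.remoteLogSegmentId().topicIdPartition();
        // Hypothetical tier selection keyed on the topic name.
        String tier = tip.topic().endsWith(".archive") ? "cold" : "hot";
        return openFromTier(tier, tip, metadata, startPosition);
    }

    // Hypothetical storage-specific helper.
    private InputStream openFromTier(String tier, TopicIdPartition tip,
                                     RemoteLogSegmentMetadata metadata,
                                     int startPosition) {
        throw new UnsupportedOperationException("storage backend specific");
    }
}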

Does this answer your question?

--
Divij Vaidya



On Wed, Nov 8, 2023 at 8:55 PM philipp lehmann <
philipp.lehm...@medionmail.com> wrote:

> Hello,
>
> If I understand tiered storage correctly, the RemoteStorageManager doesn't
> know to which topic the data it receives belongs. In some environments that
> offer access-pattern-based storage, this is disadvantageous. If the
> corresponding topics were part of the storage request, the
> RemoteStorageManager could use them to predict the access pattern, which in
> turn would enable selection of the best-matching storage. Let's say there's
> a topic that is rarely consumed: in this case, the RemoteStorageManager
> could use colder storage, resulting in lower storage costs. So my question
> is: is this within the scope of tiered storage? If it is, what is needed to
> get it changed?
>
> Regards,
> Philipp
>


tiered storage - remote data to topic binding

2023-11-08 Thread philipp lehmann
Hello,

If I understand tiered storage correctly, the RemoteStorageManager doesn't know 
to which topic the data it receives belongs. In some environments that offer 
access-pattern-based storage, this is disadvantageous. If the corresponding 
topics were part of the storage request, the RemoteStorageManager could use 
them to predict the access pattern, which in turn would enable selection of the 
best-matching storage. Let's say there's a topic that is rarely consumed: in 
this case, the RemoteStorageManager could use colder storage, resulting in 
lower storage costs. So my question is: is this within the scope of tiered 
storage? If it is, what is needed to get it changed?

Regards,
Philipp


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2369

2023-11-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 422712 lines...]

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithOneElement PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > testValidateFormatNotMap 
STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > testValidateFormatNotMap 
PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyValidList STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyValidList PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotConvertBeforeGetOnFailedCompletion STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotConvertBeforeGetOnFailedCompletion PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilCancellation STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilCancellation PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertOnlyOnceBeforeGetOnSuccessfulCompletion STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertOnlyOnceBeforeGetOnSuccessfulCompletion PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilSuccessfulCompletion STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilSuccessfulCompletion PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertBeforeGetOnSuccessfulCompletion STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertBeforeGetOnSuccessfulCompletion PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotCancelIfMayNotCancelWhileRunning STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotCancelIfMayNotCancelWhileRunning PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldCancelBeforeGetIfMayCancelWhileRunning STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldCancelBeforeGetIfMayCancelWhileRunning PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldRecordOnlyFirstErrorBeforeGetOnFailedCompletion STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldRecordOnlyFirstErrorBeforeGetOnFailedCompletion PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilFailedCompletion STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilFailedCompletion PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.LoggingContextTest > 
shouldCreateConnectorLoggingContext STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.LoggingContextTest > 
shouldCreateConnectorLoggingContext PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.LoggingContextTest > 
shouldCreateTaskLoggingContext STARTED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.LoggingContextTest > 
shouldCreateTaskLoggingContext PASSED

Gradle Test Run :connect:runtime:test > Gradle Test Executor 42 > 
org.apache.kafka.connect.util.LoggingContextTest > 

Re: KIP-993: Allow restricting files accessed by File and Directory ConfigProviders

2023-11-08 Thread Greg Harris
Hey Tina,

Thanks for the KIP! Unrestricted file system access over a REST API is
an unfortunate anti-pattern, so I'm glad that you're trying to change
it. I had a few questions, mostly from the Connect perspective.

1. In the past, Connect removed the FileStream connectors in order to
prevent a REST API attacker from accessing the filesystem. Is this the
only remaining attack vector for reading the file system? That is, if
this feature is configured and all custom plugins are audited for
filesystem accesses, would someone with access to the REST API be
unable to access arbitrary files on disk?
2. Could you explain how this feature would prevent a path traversal
attack, and how we will verify that such attacks are not feasible? (See
the sketch after this list.)
3. This applies a single "allowed paths" to a whole worker, but I've
seen situations where preventing one connector from accessing
another's secrets may also be desirable. Is there any way to extend
this feature now or in the future to make that possible?
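
For discussion, a minimal sketch of the kind of check question 2 is getting
at (illustrative only, not the KIP's proposed implementation): normalizing
before comparing is what defeats ../ traversal; resolving symlinks would
additionally need toRealPath().

import java.nio.file.Path;
import java.util.List;

final class AllowedPathsSketch {
    // True iff the requested path falls under one of the allowed roots.
    // Path.startsWith compares whole path components, so /secrets-evil
    // does not match an allowed root of /secrets.
    static boolean isAllowed(Path requested, List<Path> allowedRoots) {
        Path normalized = requested.toAbsolutePath().normalize();
        return allowedRoots.stream()
                .map(root -> root.toAbsolutePath().normalize())
                .anyMatch(normalized::startsWith);
    }
}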

Thanks!
Greg

On Tue, Nov 7, 2023 at 7:06 AM Mickael Maison  wrote:
>
> Hi Tina,
>
> Thanks for the KIP.
> For clarity, it might make sense to mention that this feature will be
> useful when using a ConfigProvider with Kafka Connect, as providers are set
> in the runtime and can then be used by connectors. This feature has no use
> when using a ConfigProvider in server.properties or in clients.
>
> When trying to use a path that is not allowed, you propose returning an
> error. With Connect, does that mean the connector will be failed? The
> EnvVarConfigProvider returns an empty string when a user tries to access
> an environment variable that is not allowed. I wonder if we should follow
> the same pattern so the behavior is "consistent" across all built-in
> providers.
>
> Thanks,
> Mickael
>
> On Tue, Nov 7, 2023 at 1:52 PM Gantigmaa Selenge  wrote:
> >
> > Hi everyone,
> >
> > Please let me know if you have any comments on the KIP.
> >
> > I will leave it for a few more days. If there are still no comments, I will
> > start the vote on it.
> >
> > Regards,
> > Tina
> >
> > On Wed, Oct 25, 2023 at 8:31 AM Gantigmaa Selenge 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > I would like to start a discussion on KIP-993 that proposes restricting
> > > files accessed by File and Directory ConfigProviders.
> > >
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-993%3A+Allow+restricting+files+accessed+by+File+and+Directory+ConfigProviders
> > >
> > > Regards,
> > > Tina
> > >


Re: [VOTE] KIP-992 Proposal to introduce IQv2 Query Types: TimestampedKeyQuery and TimestampedRangeQuery

2023-11-08 Thread Hanyu (Peter) Zheng
Hi all,

This voting thread has been open for over 72 hours and has received enough
votes. Therefore, the vote will be closed now.

+3 binding votes
+1 (non-binding)

KIP-992 has PASSED.


Thanks all for your votes
Hanyu

On Fri, Nov 3, 2023 at 5:10 PM Matthias J. Sax  wrote:

> Thanks for the KIP.
>
> +1 (binding)
>
>
> -Matthias
>
> On 11/3/23 6:08 AM, Lucas Brutschy wrote:
> > Hi Hanyu,
> >
> > Thanks for the KIP!
> > +1 (binding)
> >
> > Cheers
> > Lucas
> >
> > On Thu, Nov 2, 2023 at 10:19 PM Hao Li  wrote:
> >>
> >> Hi Hanyu,
> >>
> >> Thanks for the KIP!
> >> +1 (non-binding)
> >>
> >> Hao
> >>
> >> On Thu, Nov 2, 2023 at 1:29 PM Bill Bejeck  wrote:
> >>
> >>> Hi Hanyu,
> >>>
> >>> Thanks for the KIP this LGTM.
> >>> +1 (binding)
> >>>
> >>> Thanks,
> >>> Bill
> >>>
> >>>
> >>>
> >>> On Wed, Nov 1, 2023 at 1:07 PM Hanyu (Peter) Zheng
> >>>  wrote:
> >>>
>  Hello everyone,
> 
>  I would like to start a vote for KIP-992: Proposal to introduce IQv2
> >>> Query
>  Types: TimestampedKeyQuery and TimestampedRangeQuery.
> 
>  Sincerely,
>  Hanyu
> 
>  On Wed, Nov 1, 2023 at 10:00 AM Hanyu (Peter) Zheng <
> pzh...@confluent.io
> 
>  wrote:
> 
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery
> >
>


-- 
Hanyu (Peter) Zheng he/him/his
Software Engineer Intern, Confluent


Re: [DISCUSS] KIP-997 Support fetch(fromKey, toKey, from, to) to WindowRangeQuery and unify WindowKeyQuery and WindowRangeQuery

2023-11-08 Thread Hanyu (Peter) Zheng
Hello everyone,

I would like to start the discussion for KIP-997: Support fetch(fromKey,
toKey, from, to) to WindowRangeQuery and unify WindowKeyQuery and
WindowRangeQuery.
The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-997%3A++Support+fetch%28fromKey%2C+toKey%2C+from%2C+to%29+to+WindowRangeQuery+and+unify+WindowKeyQuery+and+WindowRangeQuery

Any suggestions are more than welcome.

Many thanks,
Hanyu

On Wed, Nov 8, 2023 at 10:38 AM Hanyu (Peter) Zheng wrote:

>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-997%3A++Support+fetch%28fromKey%2C+toKey%2C+from%2C+to%29+to+WindowRangeQuery+and+unify+WindowKeyQuery+and+WindowRangeQuery
>


-- 
Hanyu (Peter) Zheng he/him/his
Software Engineer Intern, Confluent


[DISCUSS] KIP-997 Support fetch(fromKey, toKey, from, to) to WindowRangeQuery and unify WindowKeyQuery and WindowRangeQuery

2023-11-08 Thread Hanyu (Peter) Zheng
https://cwiki.apache.org/confluence/display/KAFKA/KIP-997%3A++Support+fetch%28fromKey%2C+toKey%2C+from%2C+to%29+to+WindowRangeQuery+and+unify+WindowKeyQuery+and+WindowRangeQuery

-- 
Hanyu (Peter) Zheng he/him/his
Software Engineer Intern, Confluent


[jira] [Created] (KAFKA-15800) Malformed connect source offsets corrupt other partitions with DataException

2023-11-08 Thread Greg Harris (Jira)
Greg Harris created KAFKA-15800:
---

 Summary: Malformed connect source offsets corrupt other partitions 
with DataException
 Key: KAFKA-15800
 URL: https://issues.apache.org/jira/browse/KAFKA-15800
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 3.5.1, 3.6.0, 3.5.0
Reporter: Greg Harris
Assignee: Greg Harris
 Fix For: 3.5.2, 3.7.0, 3.6.1


The KafkaOffsetBackingStore consumer callback was recently augmented with a 
call to OffsetUtils.processPartitionKey: 
[https://github.com/apache/kafka/blob/f1e58a35d7aebbe72844faf3e5019d9aa7a85e4a/connect/runtime/src/main/java/org/apache/kafka/connect/storage/KafkaOffsetBackingStore.java#L323]

This function deserializes the offset key, which may be malformed in the topic: 
[https://github.com/apache/kafka/blob/f1e58a35d7aebbe72844faf3e5019d9aa7a85e4a/connect/runtime/src/main/java/org/apache/kafka/connect/storage/OffsetUtils.java#L92]

When this happens, a DataException is thrown, and propagates to the 
KafkaBasedLog try-catch surrounding the batch processing of the records: 
[https://github.com/apache/kafka/blob/f1e58a35d7aebbe72844faf3e5019d9aa7a85e4a/connect/runtime/src/main/java/org/apache/kafka/connect/util/KafkaBasedLog.java#L445-L454]

For example:
{noformat}
ERROR Error polling: org.apache.kafka.connect.errors.DataException: Converting 
byte[] to Kafka Connect data failed due to serialization error:  
(org.apache.kafka.connect.util.KafkaBasedLog:453){noformat}
This means that one DataException for a malformed record may cause the 
remainder of the batch to be dropped, corrupting the in-memory state of the 
KafkaOffsetBackingStore. This prevents tasks using the KafkaOffsetBackingStore 
from seeing all of the offsets in the topics, and can cause duplicate records 
to be emitted.
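
For illustration, a sketch of the fix direction, with illustrative names
rather than the actual Connect classes: catching the exception per record
keeps one malformed key from dropping the rest of the batch.
{noformat}
import java.util.List;

final class PerRecordIsolationSketch {
    static void processBatch(List<byte[]> offsetKeys) {
        for (byte[] key : offsetKeys) {
            try {
                deserializeKey(key); // may throw on a malformed key
            } catch (RuntimeException e) {
                // Skip only the malformed record; the rest of the batch is
                // still applied to the in-memory offset state.
                System.err.println("Skipping malformed offset key: " + e.getMessage());
            }
        }
    }

    // Stand-in for the real key deserialization, which can throw DataException.
    private static void deserializeKey(byte[] key) {
        if (key == null || key.length == 0)
            throw new RuntimeException("Converting byte[] to Kafka Connect data failed");
    }
}
{noformat}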



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Requesting permissions to contribute to Apache Kafka

2023-11-08 Thread Mickael Maison
Hi,

I've granted you permissions in Jira and Confluence.

Thanks,
Mickael

On Wed, Nov 8, 2023 at 4:51 PM Sverre H. Huseby  wrote:
>
> Ref
> https://cwiki.apache.org/confluence/display/kafka/kafka+improvement+proposals
>
> Wiki ID: sverrehu
> Jira ID: sverrehu
>
>
> Thanks,
> Sverre.


[jira] [Created] (KAFKA-15799) ZK brokers incorrectly handle KRaft metadata snapshots

2023-11-08 Thread David Arthur (Jira)
David Arthur created KAFKA-15799:


 Summary: ZK brokers incorrectly handle KRaft metadata snapshots
 Key: KAFKA-15799
 URL: https://issues.apache.org/jira/browse/KAFKA-15799
 Project: Kafka
  Issue Type: Bug
Reporter: David Arthur
Assignee: David Arthur
 Fix For: 3.6.1


While working on the fix for KAFKA-15605, I noticed that ZK brokers are 
unconditionally merging data from UpdateMetadataRequest with their existing 
MetadataCache. This is not the correct behavior when handling a metadata 
snapshot from the KRaft controller. 

For example, if a topic was deleted in KRaft and not transmitted as part of a 
delta update (e.g., during a failover), then the ZK brokers will never remove 
the topic from their cache (until they restart and rebuild their cache).
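
For illustration, the difference between merging a delta and applying a
snapshot, with a plain map standing in for the broker's metadata cache
(hypothetical shape, not the actual MetadataCache API):
{noformat}
import java.util.Map;

final class SnapshotVsDeltaSketch {
    // Delta: merge the update in; topics absent from the update are kept.
    static void applyDelta(Map<String, Object> cache, Map<String, Object> update) {
        cache.putAll(update);
    }

    // Snapshot: the update is authoritative; topics absent from it
    // (e.g. deleted in KRaft) must be dropped from the cache.
    static void applySnapshot(Map<String, Object> cache, Map<String, Object> snapshot) {
        cache.clear();
        cache.putAll(snapshot);
    }
}
{noformat}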



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-968: Support single-key_multi-timestamp interactive queries (IQv2) for versioned state stores

2023-11-08 Thread Alieh Saeedi
Thank you, Bruno and Matthias, for keeping the discussion going and for
reviewing the PR.

Here are the KIP updates:

   - I removed the `peek()` from the `ValueIterator` interface since we do
   not need it.
   - Yes, Bruno, the `validTo` field in the `VersionedRecord` class is
   exclusive. I updated the javadocs for that.


Here are the critical open questions, listed in descending order of
priority:

   - I implemented the `get(key, fromTime, toTime)` method here: the problem
   is that this implementation does not guarantee consistency because
   processing might continue interleaved (no snapshot semantics is
   implemented). Moreover, it materializes all results in memory.
  - Solution 1: Use a lock and release it after retrieving all desired
  records from all segments.
 - positive point: snapshot semantics is implemented
 - negative points: 1) It is expensive since iterating over all
 segments may take a long time. 2) It still requires materializing
 results in memory.
  - Solution 2: use `RocksDbIterator` (see the iterator sketch after this
  list).
 - positive points: 1) It guarantees snapshot semantics. 2) It does
 not require materializing results in memory.
 - negative points: it is still expensive because we need to iterate
 over all (many) segments.

   Do you have any thoughts on this issue? (ref: Matthias's comment)

   - I added the field `validTo` in `VersionedRecord`. Its default value is
   MAX. But as Matthias mentioned, for the single-key single-ts query
   (`VersionedKeyQuery` in KIP-960), that may not always be true: if the
   returned record belongs to an old segment, it may not be valid any more,
   so MAX is not the correct value for `validTo`. Two solutions come to mind
   (see the interval sketch below):
  - Solution 1: make `validTo` an `Optional` and set it to `empty` for
  the returned result of `get(key, asOfTimestamp)`.
  - Solution 2: change the implementation of `get(key, asOfTimestamp)`
  so that it finds the correct `validTo` for the returned VersionedRecord.

  - In this KIP and the next one, even though the default ordering is
   ascending by timestamp, I added the method `withAscendingTimestamps()`
   for better readability (as Bruno suggested), while Hanyu only added the
   `withDescending...` methods (he did not need ascending because that's the
   default anyway). Matthias believes that we should not have
   inconsistencies (he actually hates them :D). Shall I change my KIP, or
   should Hanyu change his? Thoughts?
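
As an illustration of solution 2 above: a RocksDB iterator pins a consistent
view of the store at creation time, so concurrent writes are not observed and
results need not be materialized up front. A minimal sketch against the plain
RocksDB JNI API (outside of Streams; assumes an already-open RocksDB handle):

import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

final class SnapshotIterationSketch {
    static void scan(RocksDB db) {
        try (ReadOptions readOptions = new ReadOptions();
             RocksIterator iterator = db.newIterator(readOptions)) {
            // The iterator sees a consistent snapshot taken at creation time.
            for (iterator.seekToFirst(); iterator.isValid(); iterator.next()) {
                // process iterator.key() / iterator.value() one record at a time
            }
        }
    }
}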


It might also be helpful to look into the PR
for more clarity, and even review it ;-)
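
And to pin down the exclusive-validTo semantics in code (an illustrative
helper, not the KIP API): a version with validity interval
[validFrom, validTo) is the answer for query time t iff

static boolean coversQueryTime(long validFrom, long validTo, long t) {
    return validFrom <= t && t < validTo; // validTo itself is excluded
}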

Cheers,
Alieh

On Thu, Nov 2, 2023 at 7:13 PM Bruno Cadonna  wrote:

> Hi Alieh,
>
> First of all, I like the examples.
>
> Is validTo in VersionedRecord exclusive or inclusive?
> In the javadocs you write:
>
> "@param validTothe latest timestamp that value is valid"
>
> I think that is not true because the validity is defined by the start
> time of the new version. The new and the old version cannot both be
> valid at that same time.
>
> Theoretically, you could set validTo to the start time of the new
> version - 1. However, what is the unit of the 1? Is it nanoseconds?
> Milliseconds? Seconds? Sure we could agree on one, but I think it is
> more elegant to just make the validTo exclusive. Actually, you used it
> as exclusive in your examples.
>
>
> Thanks for the KIP!
>
> Best,
> Bruno
>
> On 11/1/23 9:01 PM, Alieh Saeedi wrote:
> > Hi all,
> > @Matthias: I think Victoria was right. I must add the method `get(key,
> > fromTime, toTime)` to the interface `VersionedKeyValueStore`. Right now,
> > having the method only in `RocksDBVersionedStore` forced me to use an
> > instance of `RocksDBVersionedStore` (instead of `VersionedKeyValueStore`)
> > in the `StoreQueryUtils.runMultiVersionedKeyQuery()` method. In the
> > future, we are going to use the same method for in-memory/SPDB/blaBla
> > versioned stores. Then either this method won't work any more, or we will
> > have to add code (if clauses) for each type of versioned store. What do
> > you think about that?
> >
> > Bests,
> > Alieh
> >
> > On Tue, Oct 24, 2023 at 10:01 PM Alieh Saeedi 
> wrote:
> >
> >> Thank you, Matthias, Bruno, and Guozhang for keeping the discussion
> >> going.
> >>
> >> Here is the list of changes I made:
> >>
> >> 1. I enriched the "Example" section as Bruno suggested. Could you
> >> please have a look at that section? I think I devised an interesting
> >> one ;-)
> >> 2. As Matthias and Guozhang suggested, I renamed variables and
> >> methods as follows:
> >>    - "fromTimestamp" -> "fromTime"
> >>    - "asOfTimestamp" -> "toTime"
> >>    - 

Requesting permissions to contribute to Apache Kafka

2023-11-08 Thread Sverre H. Huseby
Ref 
https://cwiki.apache.org/confluence/display/kafka/kafka+improvement+proposals


Wiki ID: sverrehu
Jira ID: sverrehu


Thanks,
Sverre.


Re: [DISCUSS] KIP-982: Access SslPrincipalMapper and kerberosShortNamer in Custom KafkaPrincipalBuilder

2023-11-08 Thread Mickael Maison
Hi Raghu,

Can you clarify why the currently proposed solution would have fewer
compatibility issues?
Also, if you don't want to pursue the AuthenticationContext
alternative, can you add it to the rejected alternatives section?
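
To make the AuthenticationContext alternative concrete, here is a sketch of
a custom builder against today's API. The sslPrincipalMapper() accessor in
the comment is the hypothetical getter under discussion; it does not exist
yet:

import java.security.Principal;
import javax.net.ssl.SSLPeerUnverifiedException;
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.SslAuthenticationContext;

public class CustomPrincipalBuilder implements KafkaPrincipalBuilder {
    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        if (context instanceof SslAuthenticationContext) {
            SslAuthenticationContext ssl = (SslAuthenticationContext) context;
            try {
                Principal peer = ssl.session().getPeerPrincipal();
                // With the proposed getter, the SSL mapping rules would apply here:
                // return new KafkaPrincipal(KafkaPrincipal.USER_TYPE,
                //         ssl.sslPrincipalMapper().getName(peer.getName()));
                return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, peer.getName());
            } catch (SSLPeerUnverifiedException e) {
                return KafkaPrincipal.ANONYMOUS;
            }
        }
        return KafkaPrincipal.ANONYMOUS;
    }
}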

Thanks,
Mickael

On Wed, Nov 8, 2023 at 1:42 AM Raghu B  wrote:
>
> Hi Mickael,
>
> Yes, it is a more elegant solution with minor code changes, but considering
> backward compatibility and the impact on existing custom
> KafkaPrincipalBuilder implementations, I thought the proposed solution was
> the better option.
>
> Thanks,
> Raghu
>
> On Thu, Nov 2, 2023 at 7:20 AM Mickael Maison 
> wrote:
>
> > Hi Raghu,
> >
> > Thanks for the KIP.
> > Have you considered retrieving these values using
> > AuthenticationContext? For example SslAuthenticationContext could have
> > a getter for SslPrincipalMapper. For kerberosShortNamer we could have
> > a new subclass of SaslAuthenticationContext, for example
> > GssapiAuthenticationContext.
> >
> > Thanks,
> > Mickael
> >
> > On Mon, Oct 16, 2023 at 8:15 PM Manikumar 
> > wrote:
> > >
> > > Hi Raghu,
> > >
> > > Thanks for the KIP. Proposed changes look good to me.
> > >
> > > Thanks,
> > > Manikumar
> > >
> > > On Fri, Sep 22, 2023 at 11:44 PM Raghu B  wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > I would like to start the discussion on the KIP-982 to Access
> > > > SslPrincipalMapper and kerberosShortNamer in Custom
> > KafkaPrincipalBuilder
> > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-982%3A+Access+SslPrincipalMapper+and+kerberosShortNamer+in+Custom+KafkaPrincipalBuilder
> > > >
> > > > Looking forward to your feedback!
> > > >
> > > > Thanks,
> > > > Raghu
> > > >
> >