Re: [VOTE] KIP-837 Allow MultiCasting a Result Record.

2022-08-17 Thread Sophie Blee-Goldman
Hey Sagar, thanks for the KIP!

Just some cosmetic points to make it absolutely clear what this KIP is
doing:
1) could you clarify up front in the Motivation section that this is
focused on Kafka Streams applications, and not the plain Producer client?
2) you included the entire implementation of the `#send` method to
demonstrate the change in logic, but can you either remove the parts of
the implementation that aren't being touched here or at least highlight in
some way the specific lines that have changed?
3) In general the implementation is, well, an implementation detail that
doesn't need to be included in the KIP, but it's ok -- always nice to get a
sense of how things will work internally. But what I think would be more
useful to show in the KIP is how things will work with the new public
interface -- ie, can you provide a brief example of how a user would go
about taking advantage of this new interface? Even better, include an
example of what it takes for a user to accomplish this behavior before this
KIP. It would help showcase the concrete benefit this KIP is bringing and
anchor the motivation section a bit better.
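
For illustration, roughly the kind of usage sketch being asked for above -- assuming
the new partitions() hook ends up looking something like
Optional<Set<Integer>> partitions(topic, key, value, numPartitions); the class and
topic names below are made up and the exact signature may still change as the KIP evolves:

import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;

public class MulticastSketch {

    // Hypothetical partitioner that fans every record out to all partitions of the sink topic.
    static class BroadcastPartitioner implements StreamPartitioner<String, String> {

        @Override
        public Integer partition(String topic, String key, String value, int numPartitions) {
            return null; // pre-KIP single-partition path: null falls back to default partitioning
        }

        // Assumed shape of the multicast hook proposed by KIP-837 (exact signature may differ).
        public Optional<Set<Integer>> partitions(String topic, String key, String value, int numPartitions) {
            return Optional.of(IntStream.range(0, numPartitions).boxed().collect(Collectors.toSet()));
        }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input-topic");
        // With this KIP, the custom partitioner can route a single result record to several partitions.
        stream.to("output-topic", Produced.streamPartitioner(new BroadcastPartitioner()));
        builder.build();
    }
}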

Also nit: in the 2nd sentence under Public Interfaces I think it should say
"it would invoke the partition()  method" -- ie this should be
"partition()" not "partition*s*()"

Other than that this looks good, but I'll wait until you've addressed the
above to cast a vote.

Thanks!
Sophie

On Fri, Aug 12, 2022 at 10:36 PM Sagar  wrote:

> Hey John,
>
> Thanks for the vote. I added the reason for the rejection of the
> alternatives. The first one is basically an option to broadcast to all
> partitions which I felt was restrictive. Instead the KIP allows
> multicasting to 0-N partitions based upon the partitioner implementation.
>
> Thanks!
> Sagar.
>
> On Sat, Aug 13, 2022 at 7:35 AM John Roesler  wrote:
>
> > Thanks, Sagar!
> >
> > I’m +1 (binding)
> >
> > Can you add a short explanation to each rejected alternative? I was
> > wondering why we wouldn’t provide an overloaded to()/addSink() (the first
> > rejected alternative), and I had to look back at the Streams code to see
> > that they both already accept the partitioner (I thought it was a
> config).
> >
> > Thanks!
> > -John
> >
> > On Tue, Aug 9, 2022, at 13:44, Walker Carlson wrote:
> > > +1 (non binding)
> > >
> > > Walker
> > >
> > > On Tue, May 31, 2022 at 4:44 AM Sagar 
> wrote:
> > >
> > >> Hi All,
> > >>
> > >> I would like to start a voting thread on
> > >>
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883356
> > >> .
> > >>
> > >> I am just starting this as the discussion thread has been open for 10+
> > >> days. In case there are some comments, we can always discuss them over
> > >> there.
> > >>
> > >> Thanks!
> > >> Sagar.
> > >>
> >
>


[jira] [Resolved] (KAFKA-14167) Unexpected UNKNOWN_SERVER_ERROR raised from kraft controller

2022-08-17 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-14167.
-
Resolution: Fixed

> Unexpected UNKNOWN_SERVER_ERROR raised from kraft controller
> 
>
> Key: KAFKA-14167
> URL: https://issues.apache.org/jira/browse/KAFKA-14167
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Blocker
> Fix For: 3.3.0
>
>
> In `ControllerApis`, we have callbacks such as the following after completion:
> {code:java}
>     controller.allocateProducerIds(context, allocatedProducerIdsRequest.data)
>       .handle[Unit] { (results, exception) =>
>         if (exception != null) {
>           requestHelper.handleError(request, exception)
>         } else {
>           requestHelper.sendResponseMaybeThrottle(request, requestThrottleMs => {
>             results.setThrottleTimeMs(requestThrottleMs)
>             new AllocateProducerIdsResponse(results)
>           })
>         }
>       }
> {code}
> What I see locally is that the underlying exception that gets passed to 
> `handle` always gets wrapped in a `CompletionException`. When passed to 
> `getErrorResponse`, this error will get converted to `UNKNOWN_SERVER_ERROR`. 
> For example, in this case, a `NOT_CONTROLLER` error returned from the 
> controller would be returned as `UNKNOWN_SERVER_ERROR`. It looks like there 
> are a few APIs that are potentially affected by this bug, such as 
> `DeleteTopics` and `UpdateFeatures`.
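
For context, the wrapping described above is standard CompletableFuture behaviour for
dependent stages. A small standalone Java sketch (illustration only, not the actual fix)
shows the wrapping and the usual unwrap-before-mapping pattern:
{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class CompletionExceptionDemo {

    // One common way to recover the real cause before mapping it to an error code
    // (a sketch of the general pattern, not necessarily the fix that was merged).
    static Throwable maybeUnwrap(Throwable t) {
        return (t instanceof CompletionException && t.getCause() != null) ? t.getCause() : t;
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> root = new CompletableFuture<>();
        // `handle` is attached to a dependent stage, mirroring the ControllerApis callback shape.
        CompletableFuture<Integer> dependent = root.thenApply(x -> x + 1);
        root.completeExceptionally(new IllegalStateException("simulated NOT_CONTROLLER"));

        dependent.handle((result, exception) -> {
            if (exception != null) {
                // `exception` is a CompletionException wrapping the IllegalStateException;
                // mapping it directly to an error code yields UNKNOWN_SERVER_ERROR.
                System.out.println("as delivered: " + exception.getClass().getSimpleName());
                System.out.println("actual cause: " + maybeUnwrap(exception).getClass().getSimpleName());
            }
            return null;
        }).join();
    }
}
{code}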



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13940) DescribeQuorum returns INVALID_REQUEST if not handled by leader

2022-08-17 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13940.
-
Resolution: Fixed

> DescribeQuorum returns INVALID_REQUEST if not handled by leader
> ---
>
> Key: KAFKA-13940
> URL: https://issues.apache.org/jira/browse/KAFKA-13940
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 3.3.0
>
>
> In `KafkaRaftClient.handleDescribeQuorum`, we currently return 
> INVALID_REQUEST if the node is not the current raft leader. This is 
> surprising and doesn't work with our general approach for retrying forwarded 
> APIs. In `BrokerToControllerChannelManager`, we only retry after 
> `NOT_CONTROLLER` errors. It would be more consistent with the other Raft APIs 
> if we returned NOT_LEADER_OR_FOLLOWER, but that also means we need additional 
> logic in `BrokerToControllerChannelManager` to handle that error and retry 
> correctly. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1149

2022-08-17 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 4038 lines...]
[2022-08-17T21:26:07.424Z] > Task :storage:testClasses
[2022-08-17T21:26:10.987Z] > Task :storage:checkstyleTest
[2022-08-17T21:26:14.549Z] > Task :connect:runtime:compileTestJava
[2022-08-17T21:26:14.549Z] > Task :connect:runtime:testClasses
[2022-08-17T21:26:16.294Z] > Task :connect:mirror:compileTestJava
[2022-08-17T21:26:16.294Z] > Task :connect:mirror:testClasses
[2022-08-17T21:26:18.042Z] > Task :connect:mirror:checkstyleTest
[2022-08-17T21:26:18.721Z] [Warn] 
/home/jenkins/workspace/Kafka_kafka_trunk/core/src/main/scala/kafka/utils/Implicits.scala:58:4:
 @nowarn annotation does not suppress any warnings
[2022-08-17T21:26:18.721Z] [Warn] 
/home/jenkins/workspace/Kafka_kafka_trunk/core/src/main/scala/kafka/raft/KafkaMetadataLog.scala:477:4:
 @nowarn annotation does not suppress any warnings
[2022-08-17T21:26:18.721Z] [Warn] 
/home/jenkins/workspace/Kafka_kafka_trunk/core/src/main/scala/kafka/security/authorizer/AclAuthorizer.scala:508:4:
 @nowarn annotation does not suppress any warnings
[2022-08-17T21:26:18.721Z] [Warn] 
/home/jenkins/workspace/Kafka_kafka_trunk/core/src/main/scala/kafka/utils/CoreUtils.scala:320:4:
 @nowarn annotation does not suppress any warnings
[2022-08-17T21:26:18.721Z] 8 warnings found
[2022-08-17T21:26:18.721Z] Unexpected javac output: warning: [options] 
bootstrap class path not set in conjunction with -source 8
[2022-08-17T21:26:18.721Z] 1 warning.
[2022-08-17T21:26:18.721Z] 
[2022-08-17T21:26:18.721Z] > Task :clients:spotbugsMain
[2022-08-17T21:26:18.721Z] [main] INFO edu.umd.cs.findbugs.ExitCodes - 
Calculating exit code...
[2022-08-17T21:26:18.721Z] 
[2022-08-17T21:26:18.721Z] > Task :core:classes
[2022-08-17T21:26:18.721Z] > Task :core:compileTestJava NO-SOURCE
[2022-08-17T21:26:18.721Z] > Task :examples:compileJava
[2022-08-17T21:26:18.721Z] > Task :examples:classes
[2022-08-17T21:26:18.721Z] > Task :examples:compileTestJava NO-SOURCE
[2022-08-17T21:26:18.721Z] > Task :shell:compileJava
[2022-08-17T21:26:18.721Z] > Task :shell:classes
[2022-08-17T21:26:18.721Z] > Task :examples:testClasses UP-TO-DATE
[2022-08-17T21:26:18.721Z] > Task :examples:checkstyleTest NO-SOURCE
[2022-08-17T21:26:18.721Z] > Task :examples:checkstyleMain
[2022-08-17T21:26:18.721Z] > Task :shell:compileTestJava
[2022-08-17T21:26:18.721Z] > Task :shell:testClasses
[2022-08-17T21:26:19.734Z] > Task :shell:checkstyleTest
[2022-08-17T21:26:19.734Z] > Task :shell:checkstyleMain
[2022-08-17T21:26:22.871Z] 
[2022-08-17T21:26:22.871Z] > Task :examples:spotbugsMain
[2022-08-17T21:26:22.871Z] WARNING: A terminally deprecated method in 
java.lang.System has been called
[2022-08-17T21:26:22.871Z] WARNING: System::setSecurityManager has been called 
by edu.umd.cs.findbugs.ba.jsr305.TypeQualifierValue 
(file:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.spotbugs/spotbugs/4.2.2/fef7e4208082fc1c6a1b7dfd84eb382add52dead/spotbugs-4.2.2.jar)
[2022-08-17T21:26:22.871Z] WARNING: Please consider reporting this to the 
maintainers of edu.umd.cs.findbugs.ba.jsr305.TypeQualifierValue
[2022-08-17T21:26:22.871Z] WARNING: System::setSecurityManager will be removed 
in a future release
[2022-08-17T21:26:28.024Z] > Task :connect:runtime:checkstyleTest
[2022-08-17T21:26:29.331Z] [main] INFO edu.umd.cs.findbugs.ExitCodes - 
Calculating exit code...
[2022-08-17T21:26:29.331Z] 
[2022-08-17T21:26:29.331Z] > Task :shell:spotbugsMain
[2022-08-17T21:26:29.331Z] WARNING: A terminally deprecated method in 
java.lang.System has been called
[2022-08-17T21:26:29.331Z] WARNING: System::setSecurityManager has been called 
by edu.umd.cs.findbugs.ba.jsr305.TypeQualifierValue 
(file:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.spotbugs/spotbugs/4.2.2/fef7e4208082fc1c6a1b7dfd84eb382add52dead/spotbugs-4.2.2.jar)
[2022-08-17T21:26:29.331Z] WARNING: Please consider reporting this to the 
maintainers of edu.umd.cs.findbugs.ba.jsr305.TypeQualifierValue
[2022-08-17T21:26:29.331Z] WARNING: System::setSecurityManager will be removed 
in a future release
[2022-08-17T21:26:32.370Z] [main] INFO edu.umd.cs.findbugs.ExitCodes - 
Calculating exit code...
[2022-08-17T21:26:32.370Z] 
[2022-08-17T21:26:32.370Z] > Task :core:compileTestScala
[2022-08-17T21:26:32.370Z] [Warn] 
/home/jenkins/workspace/Kafka_kafka_trunk/core/src/test/scala/integration/kafka/server/MultipleListenersWithSameSecurityProtocolBaseTest.scala:30:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server
[2022-08-17T21:26:36.411Z] > Task :core:spotbugsMain
[2022-08-17T21:26:42.015Z] [Warn] 
/home/jenkins/workspace/Kafka_kafka_trunk/core/src/test/scala/unit/kafka/server/AdvertiseBrokerTest.scala:22:21:
 imported `QuorumTestHarness` is permanently hidden by definition of object 
QuorumTestHarness in package server

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1148

2022-08-17 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 418826 lines...]
[2022-08-17T21:12:42.992Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeft[caching
 enabled = false] STARTED
[2022-08-17T21:12:55.191Z] 
[2022-08-17T21:12:55.191Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeft[caching
 enabled = false] PASSED
[2022-08-17T21:12:55.191Z] 
[2022-08-17T21:12:55.191Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerInner[caching
 enabled = false] STARTED
[2022-08-17T21:13:08.764Z] 
[2022-08-17T21:13:08.764Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerInner[caching
 enabled = false] PASSED
[2022-08-17T21:13:08.764Z] 
[2022-08-17T21:13:08.764Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerOuter[caching
 enabled = false] STARTED
[2022-08-17T21:13:26.578Z] 
[2022-08-17T21:13:26.578Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerOuter[caching
 enabled = false] PASSED
[2022-08-17T21:13:26.578Z] 
[2022-08-17T21:13:26.578Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerLeft[caching
 enabled = false] STARTED
[2022-08-17T21:13:39.667Z] 
[2022-08-17T21:13:39.667Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInnerLeft[caching
 enabled = false] PASSED
[2022-08-17T21:13:39.667Z] 
[2022-08-17T21:13:39.667Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterInner[caching
 enabled = false] STARTED
[2022-08-17T21:13:51.136Z] 
[2022-08-17T21:13:51.136Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterInner[caching
 enabled = false] PASSED
[2022-08-17T21:13:51.136Z] 
[2022-08-17T21:13:51.136Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterOuter[caching
 enabled = false] STARTED
[2022-08-17T21:14:03.418Z] 
[2022-08-17T21:14:03.418Z] TableTableJoinIntegrationTest > [caching enabled = 
false] > 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterOuter[caching
 enabled = false] PASSED
[2022-08-17T21:14:03.418Z] 
[2022-08-17T21:14:03.418Z] TaskAssignorIntegrationTest > 
shouldProperlyConfigureTheAssignor STARTED
[2022-08-17T21:14:03.418Z] 
[2022-08-17T21:14:03.418Z] TaskAssignorIntegrationTest > 
shouldProperlyConfigureTheAssignor PASSED
[2022-08-17T21:14:06.355Z] 
[2022-08-17T21:14:06.355Z] TaskMetadataIntegrationTest > 
shouldReportCorrectEndOffsetInformation STARTED
[2022-08-17T21:14:07.269Z] 
[2022-08-17T21:14:07.269Z] TaskMetadataIntegrationTest > 
shouldReportCorrectEndOffsetInformation PASSED
[2022-08-17T21:14:07.269Z] 
[2022-08-17T21:14:07.269Z] TaskMetadataIntegrationTest > 
shouldReportCorrectCommittedOffsetInformation STARTED
[2022-08-17T21:14:10.060Z] 
[2022-08-17T21:14:10.060Z] TaskMetadataIntegrationTest > 
shouldReportCorrectCommittedOffsetInformation PASSED
[2022-08-17T21:14:10.974Z] 
[2022-08-17T21:14:10.975Z] HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted STARTED
[2022-08-17T21:14:18.397Z] 
[2022-08-17T21:14:18.397Z] HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted PASSED
[2022-08-17T21:14:21.666Z] 
[2022-08-17T21:14:21.666Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() STARTED
[2022-08-17T21:14:23.750Z] 
[2022-08-17T21:14:23.750Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() PASSED
[2022-08-17T21:14:23.750Z] 
[2022-08-17T21:14:23.750Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() STARTED
[2022-08-17T21:14:30.031Z] 
[2022-08-17T21:14:30.031Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() PASSED
[2022-08-17T21:14:30.031Z] 
[2022-08-17T21:14:30.031Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() STARTED
[2022-08-17T21:14:36.589Z] 
[2022-08-17T21:14:36.589Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() PASSED
[2022-08-17T21:14:36.589Z] 
[2022-08-17T21:14:36.589Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() STARTED
[2022-08-17T21:14:38.711Z] 

[jira] [Created] (KAFKA-14170) KRaft Controller: Possible NPE when we remove topics with any offline partitions in the cluster

2022-08-17 Thread Akhilesh Chaganti (Jira)
Akhilesh Chaganti created KAFKA-14170:
-

 Summary: KRaft Controller: Possible NPE when we remove topics with 
any offline partitions in the cluster
 Key: KAFKA-14170
 URL: https://issues.apache.org/jira/browse/KAFKA-14170
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.3
Reporter: Akhilesh Chaganti
Assignee: Akhilesh Chaganti
 Fix For: 3.3


When we remove a topic, it goes through the following function in KRaft 
Controller replay method for RemoveTopicRecord:
{code:java}
void removeTopicEntryForBroker(Uuid topicId, int brokerId) {
    Map<Uuid, int[]> topicMap = isrMembers.get(brokerId);
    if (topicMap != null) {
        if (brokerId == NO_LEADER) {
            offlinePartitionCount.set(offlinePartitionCount.get() - topicMap.get(topicId).length);
        }
        topicMap.remove(topicId);
    }
}
{code}

If the broker has any offline partitions but doesn't have offline partitions
for the topic we're deleting, the above code will run into an NPE because we
directly access `topicMap.get(topicId).length` without a null check.
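
A hedged sketch of one possible null-safe rewrite (illustration only -- not necessarily
the patch that was merged; it assumes the same surrounding class and fields, i.e.
isrMembers, offlinePartitionCount and NO_LEADER, and the topicId -> partition-array map
shown above):
{code:java}
void removeTopicEntryForBroker(Uuid topicId, int brokerId) {
    Map<Uuid, int[]> topicMap = isrMembers.get(brokerId);
    if (topicMap != null) {
        int[] partitions = topicMap.remove(topicId);
        // Only adjust the offline-partition count if this broker actually had entries for the topic.
        if (partitions != null && brokerId == NO_LEADER) {
            offlinePartitionCount.set(offlinePartitionCount.get() - partitions.length);
        }
    }
}
{code}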



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-854 Separate configuration for producer ID expiry

2022-08-17 Thread Ismael Juma
Why does this have to be an external config? We can provide an internal
mechanism to configure this, no?

Ismael

On Wed, Aug 17, 2022 at 9:22 AM Justine Olshan 
wrote:

> Hey all,
> Quick update to this KIP. While working on the PR and tests I realized that
> we have a hardcoded value for how often we clean up producer IDs. Currently
> this value is 10 minutes and is defined in LogManager.
> I thought for better testing and ease of use of the new configuration, we
> should also be able to configure the cleanup interval.
>
> Here is the new configuration I'm hoping to add. I also added it to the
> KIP.
> name: producer.id.expiration.check.interval.ms
> description: The interval at which to remove producer IDs that have expired
> due to producer.id.expiration.ms passing
> default: 600000 (10 minutes)
> valid values: [1,...]
> priority: low
> update-mode: read-only
>
> I left the default as the current hardcoded value to avoid disruptions. If
> there are any issues with this change let me know.
> Thanks,
> Justine
>
> On Fri, Aug 5, 2022 at 1:40 PM Justine Olshan 
> wrote:
>
> > Awesome. Thanks Tom!
> >
> > I plan to open this KIP vote at the start of next week.
> > Thanks all for the discussion! Let me know if there is anything else. :)
> >
> > Justine
> >
> > On Wed, Aug 3, 2022 at 11:32 AM Tom Bentley  wrote:
> >
> >> Hi Justine,
> >>
> >> That all seems reasonable to me, thanks!
> >>
> >> On Wed, 3 Aug 2022 at 19:14, Justine Olshan
>  >> >
> >> wrote:
> >>
> >> > Hi Tom and Ismael,
> >> >
> >> > 1. Yes, there are definitely many ways to improve this issue and I
> plan
> >> to
> >> > write followup KIPs to address some of the larger changes.
> >> > Just wanted to get this simple fix in as a short term measure to
> prevent
> >> > issues with too many producer IDs in the cache. Stay tuned :)
> >> >
> >> > 2. I did have some offline discussion about informing the client. I
> >> think
> >> > for this specific KIP the default behavior in practice should not
> change
> >> > enough to require this information to go back to the client. In other
> >> > words, a reasonable configuration should not regress behavior.
> However,
> >> > with the further changes I mention in 1, perhaps this is something we
> >> want
> >> > to do. And yes -- unfortunately the current state of Kafka is no
> longer
> >> > totally consistent with KIP-98. This is something we probably want to
> >> > clarify in the future.
> >> >
> >> > 3. I will update the config to mention it is not dynamic. I think
> since
> >> the
> >> > transactional id configuration is read-only, this should be too.
> >> >
> >> > 4. I can update this wording.
> >> >
> >> > 5. I think there are definitely benefits to the name `
> >> > idempotent.pid.expiration.ms` but there are other ways this could
> cause
> >> > confusion. And to be clear -- the configuration can expire a producer
> ID
> >> > for a transactional producer as long as there isn't an ongoing
> >> transaction.
> >> >
> >> > Let me know if you have any questions and thanks for taking a look!
> >> >
> >> > Justine
> >> >
> >> > On Wed, Aug 3, 2022 at 9:30 AM Ismael Juma  wrote:
> >> >
> >> > > Regarding 1, more can certainly be done, but I think it would be
> >> > > complementary. As such, I think this KIP stands on its own and
> >> additional
> >> > > improvements can be handled via future KIPs (unless Justine wants to
> >> > > combine things, of course).
> >> > >
> >> > > Ismael
> >> > >
> >> > > On Wed, Aug 3, 2022 at 9:12 AM Tom Bentley 
> >> wrote:
> >> > >
> >> > > > Hi Justine,
> >> > > >
> >> > > > Thanks for the KIP! I can see that this is a pragmatic attempt to
> >> > > address a
> >> > > > nasty problem. I have a few questions:
> >> > > >
> >> > > > 1. The KIP makes the problem significantly harder to trigger, but
> >> > doesn't
> >> > > > eliminate it entirely. How confident are you that it will be
> >> sufficient
> >> > > in
> >> > > > practice? We can point to applications which are creating
> idempotent
> >> > > > producers at a high rate and say they're broken, but that doesn't
> do
> >> > > > anything to defend the broker from an interaction pattern that
> >> differs
> >> > > only
> >> > > > in rate from a "good application". Did you consider a new quota to
> >> > limit
> >> > > > the rate at which a (principal, clientId) can allocate new PIDs?
> >> > > >
> >> > > > 2. The KIP contains this sentence: "when an idempotent producer’s
> ID
> >> > > > expires, it silently loses its idempotency guarantees." That's at
> >> odds
> >> > > with
> >> > > > my reading of "PID expiration" in the KIP-98 design[1], but it
> does
> >> > seem
> >> > > > consistent with a (brief!) look at the code. I accept that the
> risk
> >> > > should
> >> > > > be minimal so long as the expiration time is > the producer's
> >> delivery
> >> > > > timeout, but it would still be nice if we could detect this
> >> situation
> >> > and
> >> > > > return an error to the client. Is there a reason for the apparent
> >> > > 

Re: [DISCUSS] KIP-854 Separate configuration for producer ID expiry

2022-08-17 Thread Justine Olshan
Hey all,
Quick update to this KIP. While working on the PR and tests I realized that
we have a hardcoded value for how often we clean up producer IDs. Currently
this value is 10 minutes and is defined in LogManager.
I thought for better testing and ease of use of the new configuration, we
should also be able to configure the cleanup interval.

Here is the new configuration I'm hoping to add. I also added it to the KIP.
name: producer.id.expiration.check.interval.ms
description: The interval at which to remove producer IDs that have expired
due to producer.id.expiration.ms passing
default: 600000 (10 minutes)
valid values: [1,...]
priority: low
update-mode: read-only

I left the default as the current hardcoded value to avoid disruptions. If
there are any issues with this change let me know.
Thanks,
Justine

On Fri, Aug 5, 2022 at 1:40 PM Justine Olshan  wrote:

> Awesome. Thanks Tom!
>
> I plan to open this KIP vote at the start of next week.
> Thanks all for the discussion! Let me know if there is anything else. :)
>
> Justine
>
> On Wed, Aug 3, 2022 at 11:32 AM Tom Bentley  wrote:
>
>> Hi Justine,
>>
>> That all seems reasonable to me, thanks!
>>
>> On Wed, 3 Aug 2022 at 19:14, Justine Olshan > >
>> wrote:
>>
>> > Hi Tom and Ismael,
>> >
>> > 1. Yes, there are definitely many ways to improve this issue and I plan
>> to
>> > write followup KIPs to address some of the larger changes.
>> > Just wanted to get this simple fix in as a short term measure to prevent
>> > issues with too many producer IDs in the cache. Stay tuned :)
>> >
>> > 2. I did have some offline discussion about informing the client. I
>> think
>> > for this specific KIP the default behavior in practice should not change
>> > enough to require this information to go back to the client. In other
>> > words, a reasonable configuration should not regress behavior. However,
>> > with the further changes I mention in 1, perhaps this is something we
>> want
>> > to do. And yes -- unfortunately the current state of Kafka is no longer
>> > totally consistent with KIP-98. This is something we probably want to
>> > clarify in the future.
>> >
>> > 3. I will update the config to mention it is not dynamic. I think since
>> the
>> > transactional id configuration is read-only, this should be too.
>> >
>> > 4. I can update this wording.
>> >
>> > 5. I think there are definitely benefits to the name `
>> > idempotent.pid.expiration.ms` but there are other ways this could cause
>> > confusion. And to be clear -- the configuration can expire a producer ID
>> > for a transactional producer as long as there isn't an ongoing
>> transaction.
>> >
>> > Let me know if you have any questions and thanks for taking a look!
>> >
>> > Justine
>> >
>> > On Wed, Aug 3, 2022 at 9:30 AM Ismael Juma  wrote:
>> >
>> > > Regarding 1, more can certainly be done, but I think it would be
>> > > complementary. As such, I think this KIP stands on its own and
>> additional
>> > > improvements can be handled via future KIPs (unless Justine wants to
>> > > combine things, of course).
>> > >
>> > > Ismael
>> > >
>> > > On Wed, Aug 3, 2022 at 9:12 AM Tom Bentley 
>> wrote:
>> > >
>> > > > Hi Justine,
>> > > >
>> > > > Thanks for the KIP! I can see that this is a pragmatic attempt to
>> > > address a
>> > > > nasty problem. I have a few questions:
>> > > >
>> > > > 1. The KIP makes the problem significantly harder to trigger, but
>> > doesn't
>> > > > eliminate it entirely. How confident are you that it will be
>> sufficient
>> > > in
>> > > > practice? We can point to applications which are creating idempotent
>> > > > producers at a high rate and say they're broken, but that doesn't do
>> > > > anything to defend the broker from an interaction pattern that
>> differs
>> > > only
>> > > > in rate from a "good application". Did you consider a new quota to
>> > limit
>> > > > the rate at which a (principal, clientId) can allocate new PIDs?
>> > > >
>> > > > 2. The KIP contains this sentence: "when an idempotent producer’s ID
>> > > > expires, it silently loses its idempotency guarantees." That's at
>> odds
>> > > with
>> > > > my reading of "PID expiration" in the KIP-98 design[1], but it does
>> > seem
>> > > > consistent with a (brief!) look at the code. I accept that the risk
>> > > should
>> > > > be minimal so long as the expiration time is > the producer's
>> delivery
>> > > > timeout, but it would still be nice if we could detect this
>> situation
>> > and
>> > > > return an error to the client. Is there a reason for the apparent
>> > > deviation
>> > > > from KIP-98 (or am I misreading the code?)
>> > > >
>> > > > 3. Could the KIP be explicit on whether the new config will be
>> > > dynamically
>> > > > changeable?
>> > > >
>> > > > 4. The description of producer.id.expiration.ms mentions the
>> > > > ProducerStateManager, which will mean nothing to a normal user. We
>> > could
>> > > > probably change it to "a topic partition leader" without loss of
>> > meaning.
>> > > >
>> 

Re: [DISCUSS] KIP-844: Transactional State Stores

2022-08-17 Thread Sagar
Hi Alex,

Thanks for your response. Yeah, I kind of mixed up the EOS behaviour with state
store behaviour. Thanks for clarifying.

I think what you are suggesting wrt querying secondary stores makes sense.
What I had imagined was trying to leverage the fact that the keys are
sorted in a known order. So, before querying we should be able to figure
out which store to hit. But that seems to be a case of over optimisation
upfront. Either way, bloom filters should be able to help us out as you
pointed out.

Thanks!
Sagar.


On Wed, Aug 17, 2022 at 6:05 PM Alexander Sorokoumov
 wrote:

> Hey Sagar,
>
> I'll start from the end.
>
> if EOS is enabled, it would return only committed data.
>
> I think you might refer to Kafka's consumer isolation levels. To my
> knowledge, they only work for consuming data from a topic. For example,
> READ_COMMITTED prevents reading data from an ongoing Kafka transaction.
> Because of these isolation levels, we can ensure EOS for stateful tasks by
> wiping local state stores and replaying only committed entries from the
> changelog topic. However, we do have to wipe the local stores now because
> there is no way to distinguish between committed and uncommitted entries;
> therefore, these isolation levels do not affect reads from the state stores
> during regular operation. This is why currently reads from RocksDB or
> in-memory stores always return the latest write. By deprecating
> StateStore#flush and introducing StateStore#commit/StateStore#recover, this
> proposal adds a way to adopt these isolation levels in the future, say, for
> interactive queries.
>
>
> I don't know if it's a performance bottleneck, but is it possible to figure
> > out beforehand and query only the relevant store?
>
>
> You are correct that the secondary store implementation introduces
> performance overhead of an additional read from the uncommitted store. My
> reasoning about it is the following. The worst case for performance here is
> when a key is available only in the main store because we'd have to check
> the uncommitted store first. If the key is present in the uncommitted
> store, we can return it right away without checking the main store. There
> are 2 situations to consider for the worst case scenario where we do the
> unnecessary read from the uncommitted store:
>
>1. If the uncommitted data set fits into memory (this should be the most
>common case in practice), then the extra read from the uncommitted
> store is
>as cheap as copying key bytes off JVM plus RocksDB in-memory read.
>2. If the uncommitted data set does not fit into memory, RocksDB already
>implements Bloom Filters that filter out SSTs that definitely do not
>contain the key.
>
> In principle, we can implement an additional Bloom Filter on top of the
> uncommitted RocksDB, but it will only save us the JNI call overhead. I
> might do that if the performance overhead of that extra read turns out to
> be significant.
>
> The range queries follow similar logic. I implemented the merge by creating
> range iterators for both stores and peeking at the next available key after
> consuming the previous one. In that case, the unnecessary overhead that
> comes from the secondary store is a JNI call to the uncommitted store for a
> non-existing range that should be cheap relative to the overall cost of a
> range query.
>
> Best,
> Alex
>
> On Wed, Aug 17, 2022 at 12:37 PM Sagar  wrote:
>
> > Hi Alex,
> >
> > I went through the KIP again and it looks good to me. I just had a
> question
> > on the secondary state stores:
> >
> > *All writes and deletes go to the temporary store. Reads query the
> > temporary store; if the data is missing, query the regular store. Range
> > reads query both stores and return a KeyValueIterator that merges the
> > results. On crash
> > failure, ProcessorStateManager calls StateStore#recover(offset) that
> > truncates the temporary store.*
> >
> > I guess the reads go to the temp store today as well, the state stores
> read
> > can fetch uncommitted data? Also, if the query is for a recent write (what
> > is "recent" is also debatable TBH), then it makes sense to go to the
> > secondary store. But, if not, i.e. if it's a read of a key which is
> > committed, it would always be a miss and we would end up making 2 reads.
> I
> > don't know if it's a performance bottleneck, but is it possible to figure
> > out beforehand and query only the relevant store? If you think it's a
> case
> > of over optimisation, then you can ignore it.
> >
> > I think we can do something similar for range queries as well and look to
> > merge the results from secondary and primary stores only if there is an
> > overlap.
> >
> > Also, staying on this, I still wanted to ask that in Kafka Streams, from
> my
> > limited knowledge, if EOS is enabled, it would return only committed
> data.
> > So, I'm still curious about the choice of going to the secondary store
> > first. Maybe there's something fundamental that I am missing.
> >
> 

Re: [DISCUSS] KIP-844: Transactional State Stores

2022-08-17 Thread Alexander Sorokoumov
Hey Sagar,

I'll start from the end.

if EOS is enabled, it would return only committed data.

I think you might refer to Kafka's consumer isolation levels. To my
knowledge, they only work for consuming data from a topic. For example,
READ_COMMITTED prevents reading data from an ongoing Kafka transaction.
Because of these isolation levels, we can ensure EOS for stateful tasks by
wiping local state stores and replaying only committed entries from the
changelog topic. However, we do have to wipe the local stores now because
there is no way to distinguish between committed and uncommitted entries;
therefore, these isolation levels do not affect reads from the state stores
during regular operation. This is why currently reads from RocksDB or
in-memory stores always return the latest write. By deprecating
StateStore#flush and introducing StateStore#commit/StateStore#recover, this
proposal adds a way to adopt these isolation levels in the future, say, for
interactive queries.
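
For reference, the consumer-side isolation level referred to above is an existing
consumer config; a minimal sketch (broker address, group id and topic name are
placeholders) -- and, as noted, it only affects reads from topics, not reads from
local state stores:

import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadCommittedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Only records from committed transactions are returned to this consumer.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("some-topic"));
            // poll(...) as usual; records from ongoing or aborted transactions are filtered out.
        }
    }
}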


I don't know if it's a performance bottleneck, but is it possible to figure
> out beforehand and query only the relevant store?


You are correct that the secondary store implementation introduces
performance overhead of an additional read from the uncommitted store. My
reasoning about it is the following. The worst case for performance here is
when a key is available only in the main store because we'd have to check
the uncommitted store first. If the key is present in the uncommitted
store, we can return it right away without checking the main store. There
are 2 situations to consider for the worst case scenario where we do the
unnecessary read from the uncommitted store:

   1. If the uncommitted data set fits into memory (this should be the most
   common case in practice), then the extra read from the uncommitted store is
   as cheap as copying key bytes off JVM plus RocksDB in-memory read.
   2. If the uncommitted data set does not fit into memory, RocksDB already
   implements Bloom Filters that filter out SSTs that definitely do not
   contain the key.

In principle, we can implement an additional Bloom Filter on top of the
uncommitted RocksDB, but it will only save us the JNI call overhead. I
might do that if the performance overhead of that extra read turns out to
be significant.

The range queries follow similar logic. I implemented the merge by creating
range iterators for both stores and peeking at the next available key after
consuming the previous one. In that case, the unnecessary overhead that
comes from the secondary store is a JNI call to the uncommitted store for a
non-existing range that should be cheap relative to the overall cost of a
range query.
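
To make the read path concrete, here is a toy Java sketch of the write/read/commit/recover
flow described above -- plain in-memory maps stand in for the two RocksDB instances, all
names are made up, and delete/tombstone handling is omitted:

import java.util.HashMap;
import java.util.Map;

public class TransactionalStoreSketch {

    private final Map<String, String> uncommitted = new HashMap<>(); // temporary store
    private final Map<String, String> committed = new HashMap<>();   // regular store

    void put(String key, String value) {
        uncommitted.put(key, value); // all writes go to the temporary store
    }

    String get(String key) {
        // Query the temporary store first; fall back to the regular store on a miss.
        String fromUncommitted = uncommitted.get(key);
        return fromUncommitted != null ? fromUncommitted : committed.get(key);
    }

    void commit() {
        committed.putAll(uncommitted); // on commit, changes move into the regular store
        uncommitted.clear();
    }

    void recover() {
        uncommitted.clear(); // on crash failure, the temporary store is truncated
    }

    public static void main(String[] args) {
        TransactionalStoreSketch store = new TransactionalStoreSketch();
        store.put("k", "v1");
        store.commit();
        store.put("k", "v2");               // uncommitted overwrite
        System.out.println(store.get("k")); // v2 -- served from the temporary store
        store.recover();
        System.out.println(store.get("k")); // v1 -- uncommitted write discarded
    }
}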

Best,
Alex

On Wed, Aug 17, 2022 at 12:37 PM Sagar  wrote:

> Hi Alex,
>
> I went through the KIP again and it looks good to me. I just had a question
> on the secondary state stores:
>
> *All writes and deletes go to the temporary store. Reads query the
> temporary store; if the data is missing, query the regular store. Range
> reads query both stores and return a KeyValueIterator that merges the
> results. On crash
> failure, ProcessorStateManager calls StateStore#recover(offset) that
> truncates the temporary store.*
>
> I guess the reads go to the temp store today as well, the state stores read
> can fetch uncommitted data? Also, if the query is for a recent write (what is
> "recent" is also debatable TBH), then it makes sense to go to the
> secondary store. But, if not, i.e. if it's a read of a key which is
> committed, it would always be a miss and we would end up making 2 reads. I
> don't know if it's a performance bottleneck, but is it possible to figure
> out beforehand and query only the relevant store? If you think it's a case
> of over optimisation, then you can ignore it.
>
> I think we can do something similar for range queries as well and look to
> merge the results from secondary and primary stores only if there is an
> overlap.
>
> Also, staying on this, I still wanted to ask that in Kafka Streams, from my
> limited knowledge, if EOS is enabled, it would return only committed data.
> So, I'm still curious about the choice of going to the secondary store
> first. Maybe there's something fundamental that I am missing.
>
> Thanks for the KIP again!
>
> Thanks!
> Sagar.
>
>
>
> On Mon, Aug 15, 2022 at 8:25 PM Alexander Sorokoumov
>  wrote:
>
> > Hey Guozhang,
> >
> > Thank you for elaborating! I like your idea to introduce a StreamsConfig
> > specifically for the default store APIs. You mentioned Materialized, but
> I
> > think changes in StreamJoined follow the same logic.
> >
> > I updated the KIP and the prototype according to your suggestions:
> > * Add a new StoreType and a StreamsConfig for transactional RocksDB.
> > * Decide whether Materialized/StreamJoined are transactional based on the
> > configured StoreType.
> > * Move RocksDBTransactionalMechanism to
> > org.apache.kafka.streams.state.internals to remove it from the proposal
> > scope.
> > * Add a 

Re: [Discuss] KIP-581: Value of optional null field which has default value

2022-08-17 Thread Mickael Maison
Hi Cheng Pan,

Thanks for the updates. I have a few more questions:

1) Since this KIP is targeting JsonConverter, I think we should add
this new config to JsonConverterConfig instead of ConverterConfig.

2) What about renaming the new field to something like
"use.null.for.optional.fields"?

Thanks,
Mickael

On Thu, May 5, 2022 at 3:19 PM Cheng Pan  wrote:
>
> Update PR links
>
> [1] https://github.com/apache/kafka/pull/12126
>
> On 2022/05/05 13:16:59 Cheng Pan wrote:
> > Hi Mickael,
> >
> > Thanks for ping me.
> >
> > I updated the KIP-581 description to narrow the change scope to address 
> > this specific issue, and raised a WIP PR[1] to implement it.
> >
> > Please let me know if you have any concerns.
> >
> > [1] https://github.com/apache/kafka/pull/12126
> >
> > Thanks,
> > Cheng Pan
> >
> > On 2022/05/02 12:57:25 Mickael Maison wrote:
> > > Hi Cheng Pan,
> > >
> > > Thanks for raising this KIP, this would be a useful improvement!
> > >
> > > You never started a VOTE thread for this KIP. Are you interested in
> > > finishing up this work?
> > > If so, based on the discussion, I think you can open a VOTE thread as
> > > described in 
> > > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals#KafkaImprovementProposals-Process
> > > If not, it's not a problem, someone else can volunteer to pick it up.
> > >
> > > Please let us know.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Sat, Aug 8, 2020 at 11:05 AM Ruslan Gibaiev  
> > > wrote:
> > > >
> > > > Hello guys.
> > > > Proposed PR seems to be fixing the issue in a backward-compatible way.
> > > > Let's please move forward with it. Would be great to see it included 
> > > > into next Kafka release
> > > > Thank you
> > > >
> > > > On 2020/07/29 02:49:07, "379377...@qq.com" <379377...@qq.com> wrote:
> > > > > Hi Chris,
> > > > >
> > > > > Thanks for your good suggestion, the KIP document and draft PR has 
> > > > > been updated, please review again.
> > > > >
> > > > > And I found due to my misoperation, the mail thread has been broken, 
> > > > > no idea how to fix it.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > Thanks
> > > > > Cheng Pan
> > > > >
> > > > > From: Christopher Egerton
> > > > > Date: 2020-05-04 10:53
> > > > > To: dev
> > > > > Subject: Re: [Discuss] KIP-581: Value of optional null field which 
> > > > > has default value
> > > > > Hi Cheng,
> > > > >
> > > > > I think refactoring that method should be fine (if maybe a little 
> > > > > painful);
> > > > > the method itself is private and all places that it's invoked 
> > > > > directly are
> > > > > either package-private or non-static, so it shouldn't affect any of 
> > > > > the
> > > > > public methods of the JSON converter to change "convertToConnect" to 
> > > > > be
> > > > > non-static. Even if it did, the only parts of the JSON converter that 
> > > > > are
> > > > > public API (and therefore officially subject to concerns about
> > > > > compatibility) are the methods it implements that satisfy the 
> > > > > "Converter"
> > > > > and "HeaderConverter" interfaces.
> > > > >
> > > > > Would you mind explicitly specifying in the KIP that the new property 
> > > > > will
> > > > > be added for the JSON converter only, and that it will affect both
> > > > > serialization and deserialization?
> > > > >
> > > > > Cheers,
> > > > >
> > > > > Chris
> > > > >
> > > > > On Tue, Apr 28, 2020 at 10:52 AM 379377944 <379377...@qq.com> wrote:
> > > > >
> > > > > > Hi Chris,
> > > > > >
> > > > > >
> > > > > > Thanks for your reminder, the original implement is deprecated, I 
> > > > > > just
> > > > > > update the JIRA with the new
> > > > > > PR link:  https://github.com/apache/kafka/pull/8575
> > > > > >
> > > > > >
> > > > > > As question 2), I agree with you that we should consider both
> > > > > > serialization and deserialization, and as you said, I only 
> > > > > > implement the
> > > > > > serialization now. This is  because the original serde implement is 
> > > > > > not
> > > > > > symmetrical, the convertToConnect is a static method and can’t 
> > > > > > access the
> > > > > > field in JsonConverter
> > > > > > instance, maybe I should do some refactoring to implement the
> > > > > > deserialization.
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Cheng Pan
> > > > > >  Original Message
> > > > > > Sender: Christopher Egerton
> > > > > > Recipient: dev
> > > > > > Date: Wednesday, Apr 15, 2020 02:28
> > > > > > Subject: Re: [Discuss] KIP-581: Value of optional null field which 
> > > > > > has
> > > > > > default value
> > > > > >
> > > > > >
> > > > > > Hi Cheng, Thanks for the KIP! I really appreciate the care that was 
> > > > > > taken
> > > > > > to ensure backwards compatibility for existing users, and the 
> > > > > > minimal
> > > > > > changes to public interface that are suggested to address this. I 
> > > > > > have two
> > > > > > quick requests for clarification: 1) Where is the proposed
> > > > > > "accept.optional.null" 

Re: [DISCUSS] KIP-844: Transactional State Stores

2022-08-17 Thread Sagar
Hi Alex,

I went through the KIP again and it looks good to me. I just had a question
on the secondary state stores:

*All writes and deletes go to the temporary store. Reads query the
temporary store; if the data is missing, query the regular store. Range
reads query both stores and return a KeyValueIterator that merges the
results. On crash
failure, ProcessorStateManager calls StateStore#recover(offset) that
truncates the temporary store.*

I guess the reads go to the temp store today as well, the state stores read
can fetch uncommitted data? Also, if the query is for a recent write (what is
"recent" is also debatable TBH), then it makes sense to go to the
secondary store. But, if not, i.e. if it's a read of a key which is
committed, it would always be a miss and we would end up making 2 reads. I
don't know if it's a performance bottleneck, but is it possible to figure
out beforehand and query only the relevant store? If you think it's a case
of over optimisation, then you can ignore it.

I think we can do something similar for range queries as well and look to
merge the results from secondary and primary stores only if there is an
overlap.

Also, staying on this, I still wanted to ask that in Kafka Streams, from my
limited knowledge, if EOS is enabled, it would return only committed data.
So, I'm still curious about the choice of going to the secondary store
first. Maybe there's something fundamental that I am missing.

Thanks for the KIP again!

Thanks!
Sagar.



On Mon, Aug 15, 2022 at 8:25 PM Alexander Sorokoumov
 wrote:

> Hey Guozhang,
>
> Thank you for elaborating! I like your idea to introduce a StreamsConfig
> specifically for the default store APIs. You mentioned Materialized, but I
> think changes in StreamJoined follow the same logic.
>
> I updated the KIP and the prototype according to your suggestions:
> * Add a new StoreType and a StreamsConfig for transactional RocksDB.
> * Decide whether Materialized/StreamJoined are transactional based on the
> configured StoreType.
> * Move RocksDBTransactionalMechanism to
> org.apache.kafka.streams.state.internals to remove it from the proposal
> scope.
> * Add a flag in new Stores methods to configure a state store as
> transactional. Transactional state stores use the default transactional
> mechanism.
> * The changes above allowed to remove all changes to the StoreSupplier
> interface.
>
> I am not sure about marking StateStore#transactional() as evolving. As long
> as we allow custom user implementations of that interface, we should
> probably either keep that flag to distinguish between transactional and
> non-transactional implementations or change the contract behind the
> interface. What do you think?
>
> Best,
> Alex
>
> On Thu, Aug 11, 2022 at 1:00 AM Guozhang Wang  wrote:
>
> > Hello Alex,
> >
> > Thanks for the replies. Regarding the global config v.s. per-store spec,
> I
> > agree with John's early comments to some degrees, but I think we may well
> > distinguish a couple scenarios here. In sum we are discussing about the
> > following levels of per-store spec:
> >
> > * Materialized#transactional()
> > * StoreSupplier#transactional()
> > * StateStore#transactional()
> > * Stores.persistentTransactionalKeyValueStore()...
> >
> > And my thoughts are the following:
> >
> > * In the current proposal users could specify transactional as either
> > "Materialized.as("storeName").withTransantionsEnabled()" or
> > "Materialized.as(Stores.persistentTransactionalKeyValueStore(..))", which
> > seems not necessary to me. In general, the more options the library
> > provides, the messier for users to learn the new APIs.
> >
> > * When using built-in stores, users would usually go with
> > Materialized.as("storeName"). In such cases I feel it's not very
> meaningful
> > to specify "some of the built-in stores to be transactional, while others
> > be non transactional": as long as one of your stores are
> non-transactional,
> > you'd still pay for large restoration cost upon unclean failure. People
> > may, indeed, want to specify if different transactional mechanisms to be
> > used across stores; but for whether or not the stores should be
> > transactional, I feel it's really an "all or none" answer, and our
> built-in
> > form (rocksDB) should support transactionality for all store types.
> >
> > * When using customized stores, users would usually go with
> > Materialized.as(StoreSupplier). And it's possible if users would choose
> > some to be transactional while others non-transactional (e.g. if their
> > customized store only supports transactional for some store types, but
> not
> > others).
> >
> > * At a per-store level, the library do not really care, or need to know
> > whether that store is transactional or not at runtime, except for
> > compatibility reasons today we want to make sure the written checkpoint
> > files do not include those non-transactional stores. But this check would
> > eventually go away as one day we would always 

Re: [DISCUSS] KIP-855: Add schema.namespace parameter to SetSchemaMetadata SMT in Kafka Connect

2022-08-17 Thread Mickael Maison
Hi Michael,

Thanks for the KIP! Sorry for the delay, I finally took some time to
take a look.

In both the "Public Interfaces" and "Compatibility, Deprecation, and
Migration Plan" sections it mentions the new config is
"transforms.transformschema.schema.name". However if I understand
correctly the config you propose adding is actually
"transforms.transformschema.schema.namespace". Is this a typo or am I
missing something?

Thanks,
Mickael

On Fri, Jul 22, 2022 at 9:57 AM Michael Negodaev  wrote:
>
> Hi all,
>
> I would like to start the discussion on my design to add "schema.namespace"
> parameter in SetSchemaMetadata Single Message Transform in Kafka Connect.
>
> KIP URL: https://cwiki.apache.org/confluence/x/CiT1D
>
> Thanks!
> -Michael


Kafka/Azul swag

2022-08-17 Thread Geertjan Wielenga
Hi Apache Kafka devs,

I'm on the Apache NetBeans PMC but, aside from that, in my day job I work
at Azul (azul.com), a Java vendor.

For upcoming Kafka conferences, we at Azul would like to make a hat as swag
that has both our Azul logo as well as Kafka's logo (with the message that
that is a good combination, faster Kafka throughput with Azul's JDKs etc).

Of course, we don't want to create the impression that this swag that we're
planning to produce is in any way created by Kafka, i.e., this is really
Azul swag, and we'd be very careful about that and would like your input,
however, since we'd be using Kafka's logo or at least name, we'd like to
check with you (and are also checking with Apache Trademarks group) about
how you'd feel about this and any concerns you might have.

Thanks, and if this is the wrong mailing list, please let me know.

Geertjan


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #37

2022-08-17 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 570450 lines...]
[2022-08-16T17:13:15.451Z] > Task :connect:api:testSrcJar
[2022-08-16T17:13:15.451Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2022-08-16T17:13:15.451Z] > Task :connect:json:publishToMavenLocal
[2022-08-16T17:13:15.451Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2022-08-16T17:13:15.451Z] > Task :connect:api:publishToMavenLocal
[2022-08-16T17:13:15.451Z] 
[2022-08-16T17:13:15.451Z] > Task :streams:javadoc
[2022-08-16T17:13:15.451Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:84:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-16T17:13:15.451Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:136:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-16T17:13:15.451Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-16T17:13:15.451Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-16T17:13:15.451Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-08-16T17:13:15.451Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
[2022-08-16T17:13:15.451Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
[2022-08-16T17:13:16.375Z]  PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:44:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:36:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:57:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:74:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:110:
 warning - Tag @link: reference not found: this#getResult()
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:117:
 warning - Tag @link: reference not found: this#getFailureReason()
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:117:
 warning - Tag @link: reference not found: this#getFailureMessage()
[2022-08-16T17:13:16.375Z] 
/home/jenkins/h49-shared/712657a4/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:155:
 warning - Tag @link: reference not found: this#isSuccess()
[2022-08-16T17:13:16.375Z]