Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #382

2021-08-01 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 488749 lines...]
[2021-08-02T02:36:19.519Z] 
[2021-08-02T02:36:19.519Z] ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestWithUnsupportedVersion()[1]
 PASSED
[2021-08-02T02:36:19.519Z] 
[2021-08-02T02:36:19.520Z] ApiVersionsRequestTest > 
testApiVersionsRequestValidationV0() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestValidationV0()[1] 
STARTED
[2021-08-02T02:36:21.269Z] 
[2021-08-02T02:36:21.269Z] ApiVersionsRequestTest > 
testApiVersionsRequestValidationV0() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestValidationV0()[1] 
PASSED
[2021-08-02T02:36:21.269Z] 
[2021-08-02T02:36:21.269Z] ApiVersionsRequestTest > 
testApiVersionsRequestValidationV3() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestValidationV3()[1] 
STARTED
[2021-08-02T02:36:24.840Z] 
[2021-08-02T02:36:24.840Z] ApiVersionsRequestTest > 
testApiVersionsRequestValidationV3() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestValidationV3()[1] 
PASSED
[2021-08-02T02:36:24.840Z] 
[2021-08-02T02:36:24.840Z] ApiVersionsRequestTest > 
testApiVersionsRequestThroughControlPlaneListener() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestThroughControlPlaneListener()[1]
 STARTED
[2021-08-02T02:36:26.758Z] 
[2021-08-02T02:36:26.758Z] ApiVersionsRequestTest > 
testApiVersionsRequestThroughControlPlaneListener() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestThroughControlPlaneListener()[1]
 PASSED
[2021-08-02T02:36:26.758Z] 
[2021-08-02T02:36:26.758Z] ApiVersionsRequestTest > testApiVersionsRequest() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequest()[1] STARTED
[2021-08-02T02:36:29.384Z] 
[2021-08-02T02:36:29.384Z] ApiVersionsRequestTest > testApiVersionsRequest() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequest()[1] PASSED
[2021-08-02T02:36:29.384Z] 
[2021-08-02T02:36:29.384Z] ApiVersionsRequestTest > 
testApiVersionsRequestValidationV0ThroughControlPlaneListener() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestValidationV0ThroughControlPlaneListener()[1]
 STARTED
[2021-08-02T02:36:31.134Z] 
[2021-08-02T02:36:31.134Z] ApiVersionsRequestTest > 
testApiVersionsRequestValidationV0ThroughControlPlaneListener() > 
kafka.server.ApiVersionsRequestTest.testApiVersionsRequestValidationV0ThroughControlPlaneListener()[1]
 PASSED
[2021-08-02T02:36:31.134Z] 
[2021-08-02T02:36:31.134Z] LogDirFailureTest > testIOExceptionDuringLogRoll() 
STARTED
[2021-08-02T02:36:39.545Z] 
[2021-08-02T02:36:39.545Z] LogDirFailureTest > testIOExceptionDuringLogRoll() 
PASSED
[2021-08-02T02:36:39.545Z] 
[2021-08-02T02:36:39.545Z] LogDirFailureTest > 
testIOExceptionDuringCheckpoint() STARTED
[2021-08-02T02:36:46.747Z] 
[2021-08-02T02:36:46.747Z] LogDirFailureTest > 
testIOExceptionDuringCheckpoint() PASSED
[2021-08-02T02:36:46.747Z] 
[2021-08-02T02:36:46.747Z] LogDirFailureTest > 
testProduceErrorFromFailureOnCheckpoint() STARTED
[2021-08-02T02:36:51.526Z] 
[2021-08-02T02:36:51.526Z] LogDirFailureTest > 
testProduceErrorFromFailureOnCheckpoint() PASSED
[2021-08-02T02:36:51.526Z] 
[2021-08-02T02:36:51.526Z] LogDirFailureTest > 
brokerWithOldInterBrokerProtocolShouldHaltOnLogDirFailure() STARTED
[2021-08-02T02:36:59.991Z] 
[2021-08-02T02:36:59.991Z] LogDirFailureTest > 
brokerWithOldInterBrokerProtocolShouldHaltOnLogDirFailure() PASSED
[2021-08-02T02:36:59.991Z] 
[2021-08-02T02:36:59.991Z] LogDirFailureTest > 
testReplicaFetcherThreadAfterLogDirFailureOnFollower() STARTED
[2021-08-02T02:37:06.993Z] 
[2021-08-02T02:37:06.993Z] LogDirFailureTest > 
testReplicaFetcherThreadAfterLogDirFailureOnFollower() PASSED
[2021-08-02T02:37:06.993Z] 
[2021-08-02T02:37:06.993Z] LogDirFailureTest > 
testProduceErrorFromFailureOnLogRoll() STARTED
[2021-08-02T02:37:10.565Z] 
[2021-08-02T02:37:10.565Z] LogDirFailureTest > 
testProduceErrorFromFailureOnLogRoll() PASSED
[2021-08-02T02:37:10.565Z] 
[2021-08-02T02:37:10.565Z] LogOffsetTest > 
testFetchOffsetsBeforeWithChangingSegmentSize() STARTED
[2021-08-02T02:37:14.136Z] 
[2021-08-02T02:37:14.136Z] LogOffsetTest > 
testFetchOffsetsBeforeWithChangingSegmentSize() PASSED
[2021-08-02T02:37:14.136Z] 
[2021-08-02T02:37:14.136Z] LogOffsetTest > testGetOffsetsBeforeEarliestTime() 
STARTED
[2021-08-02T02:37:18.866Z] 
[2021-08-02T02:37:18.866Z] LogOffsetTest > testGetOffsetsBeforeEarliestTime() 
PASSED
[2021-08-02T02:37:18.866Z] 
[2021-08-02T02:37:18.866Z] LogOffsetTest > 
testFetchOffsetByTimestampForMaxTimestampAfterTruncate() STARTED
[2021-08-02T02:37:22.450Z] 
[2021-08-02T02:37:22.450Z] LogOffsetTest > 
testFetchOffsetByTimestampForMaxTimestampAfterTruncate() PASSED
[2021-08-02T02:37:22.450Z] 
[2021-08-02T02:37:22.450Z] LogOffsetTest > 
testFetchOffsetByTimestampForMaxTimestampWithUnorderedTimestamps() STARTED

Re: [VOTE] KIP-690: Add additional configuration to control MirrorMaker 2 internal topics naming convention

2021-08-01 Thread Gwen Shapira
+1 (binding). Thank you for your patience and clear explanations, Omnia.

On Mon, Jul 26, 2021 at 3:39 PM Omnia Ibrahim 
wrote:

> Bumping up this voting thread.
>
> On Fri, Jul 16, 2021 at 1:57 PM Omnia Ibrahim 
> wrote:
>
> > Hi,
> > Can I get 2 more +1 binding for this KIP?
> > Thanks
> >
> > On Fri, Jul 2, 2021 at 5:14 PM Omnia Ibrahim 
> > wrote:
> >
> >> Hi All,
> >>
> >> Just thought of bumping this voting thread again to see if we can form a
> >> consensus around this.
> >>
> >> Thanks
> >>
> >> On Thu, Jun 24, 2021 at 5:55 PM Mickael Maison <
> mickael.mai...@gmail.com>
> >> wrote:
> >>
> >>> +1 (binding)
> >>> Thanks for the KIP!
> >>>
> >>> On Tue, May 4, 2021 at 3:23 PM Igor Soarez 
> >>> wrote:
> >>> >
> >>> > Another +1 here, also non-binding.
> >>> >
> >>> > Thank you Omnia!
> >>> >
> >>> > --
> >>> > Igor
> >>> >
> >>> >
> >>> > On Fri, Apr 30, 2021, at 3:15 PM, Ryanne Dolan wrote:
> >>> > > +1 (non-binding), thanks!
> >>> > >
> >>> > > On Thu, Jan 21, 2021, 4:31 AM Omnia Ibrahim <
> o.g.h.ibra...@gmail.com>
> >>> wrote:
> >>> > >
> >>> > >> Hi
> >>> > >> Can I get a vote on this, please?
> >>> > >>
> >>> > >> Best
> >>> > >> Omnia
> >>> > >>
> >>> > >> On Tue, Dec 15, 2020 at 12:16 PM Omnia Ibrahim <
> >>> o.g.h.ibra...@gmail.com>
> >>> > >> wrote:
> >>> > >>
> >>> > >>> If anyone is interested in reading the discussion, you can find
> >>> > >>> it here:
> >>> > >>> https://www.mail-archive.com/dev@kafka.apache.org/msg113373.html
> >>> > >>>
> >>> > >>> On Tue, Dec 8, 2020 at 4:01 PM Omnia Ibrahim <
> >>> o.g.h.ibra...@gmail.com>
> >>> > >>> wrote:
> >>> > >>>
> >>> >  Hi everyone,
> >>> >  I’m proposing a new KIP for MirrorMaker 2 to add the ability to
> >>> >  control the internal topics' naming convention. The proposal details
> >>> >  are here
> >>> > 
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention
> >>> > 
> >>> >  Please vote in this thread.
> >>> >  Thanks
> >>> >  Omnia
> >>> > 
> >>> > >>>
> >>> > >
> >>>
> >>
>


-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #381

2021-08-01 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #76

2021-08-01 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 412952 lines...]
[2021-08-02T00:35:45.717Z] 
[2021-08-02T00:35:45.717Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicOnly() PASSED
[2021-08-02T00:35:45.717Z] 
[2021-08-02T00:35:45.717Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition() STARTED
[2021-08-02T00:35:48.853Z] 
[2021-08-02T00:35:48.853Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition() PASSED
[2021-08-02T00:35:48.853Z] 
[2021-08-02T00:35:48.853Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition() STARTED
[2021-08-02T00:35:52.146Z] 
[2021-08-02T00:35:52.146Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition() PASSED
[2021-08-02T00:35:52.146Z] 
[2021-08-02T00:35:52.146Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition() STARTED
[2021-08-02T00:35:55.269Z] 
[2021-08-02T00:35:55.269Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition() PASSED
[2021-08-02T00:35:55.269Z] 
[2021-08-02T00:35:55.269Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly() STARTED
[2021-08-02T00:35:58.395Z] 
[2021-08-02T00:35:58.395Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly() PASSED
[2021-08-02T00:35:58.395Z] 
[2021-08-02T00:35:58.395Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicPartition() STARTED
[2021-08-02T00:36:01.690Z] 
[2021-08-02T00:36:01.690Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithTopicPartition() PASSED
[2021-08-02T00:36:01.690Z] 
[2021-08-02T00:36:01.690Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsNonExistingGroup() STARTED
[2021-08-02T00:36:03.768Z] 
[2021-08-02T00:36:03.768Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsNonExistingGroup() PASSED
[2021-08-02T00:36:03.768Z] 
[2021-08-02T00:36:03.768Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly() STARTED
[2021-08-02T00:36:07.065Z] 
[2021-08-02T00:36:07.065Z] DeleteOffsetsConsumerGroupCommandIntegrationTest > 
testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly() PASSED
[2021-08-02T00:36:07.065Z] 
[2021-08-02T00:36:07.065Z] TopicCommandIntegrationTest > 
testAlterPartitionCount() STARTED
[2021-08-02T00:36:11.313Z] 
[2021-08-02T00:36:11.313Z] TopicCommandIntegrationTest > 
testAlterPartitionCount() PASSED
[2021-08-02T00:36:11.313Z] 
[2021-08-02T00:36:11.313Z] TopicCommandIntegrationTest > 
testCreatePartitionsDoesNotRetryThrottlingQuotaExceededException() STARTED
[2021-08-02T00:36:15.561Z] 
[2021-08-02T00:36:15.561Z] TopicCommandIntegrationTest > 
testCreatePartitionsDoesNotRetryThrottlingQuotaExceededException() PASSED
[2021-08-02T00:36:15.561Z] 
[2021-08-02T00:36:15.561Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExistWithIfExists() STARTED
[2021-08-02T00:36:21.025Z] 
[2021-08-02T00:36:21.025Z] TopicCommandIntegrationTest > 
testAlterWhenTopicDoesntExistWithIfExists() PASSED
[2021-08-02T00:36:21.025Z] 
[2021-08-02T00:36:21.025Z] TopicCommandIntegrationTest > 
testCreateWithDefaultReplication() STARTED
[2021-08-02T00:36:25.362Z] 
[2021-08-02T00:36:25.362Z] TopicCommandIntegrationTest > 
testCreateWithDefaultReplication() PASSED
[2021-08-02T00:36:25.362Z] 
[2021-08-02T00:36:25.362Z] TopicCommandIntegrationTest > 
testDescribeAtMinIsrPartitions() STARTED
[2021-08-02T00:36:33.581Z] 
[2021-08-02T00:36:33.581Z] TopicCommandIntegrationTest > 
testDescribeAtMinIsrPartitions() PASSED
[2021-08-02T00:36:33.581Z] 
[2021-08-02T00:36:33.581Z] TopicCommandIntegrationTest > 
testCreateWithNegativeReplicationFactor() STARTED
[2021-08-02T00:36:36.716Z] 
[2021-08-02T00:36:36.716Z] TopicCommandIntegrationTest > 
testCreateWithNegativeReplicationFactor() PASSED
[2021-08-02T00:36:36.716Z] 
[2021-08-02T00:36:36.716Z] TopicCommandIntegrationTest > 
testCreateWithInvalidReplicationFactor() STARTED
[2021-08-02T00:36:40.011Z] 
[2021-08-02T00:36:40.011Z] TopicCommandIntegrationTest > 
testCreateWithInvalidReplicationFactor() PASSED
[2021-08-02T00:36:40.011Z] 
[2021-08-02T00:36:40.011Z] TopicCommandIntegrationTest > 
testDeleteTopicDoesNotRetryThrottlingQuotaExceededException() STARTED
[2021-08-02T00:36:44.346Z] 
[2021-08-02T00:36:44.346Z] TopicCommandIntegrationTest > 
testDeleteTopicDoesNotRetryThrottlingQuotaExceededException() PASSED
[2021-08-02T00:36:44.346Z] 
[2021-08-02T00:36:44.346Z] 

[VOTE] KIP-764 Configurable backlog size for creating Acceptor

2021-08-01 Thread Haruki Okada
Hi, Kafka.

I would like to start a vote on the KIP that makes the SocketServer acceptor's
backlog size configurable.

KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-764%3A+Configurable+backlog+size+for+creating+Acceptor

Discussion thread:
https://lists.apache.org/thread.html/rd77469b7de0190d601dd37bd6894e1352a674d08038bcfe7ff68a1e0%40%3Cdev.kafka.apache.org%3E
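
For context, the backlog in question is the second argument to Java's
ServerSocketChannel.bind(); it bounds the OS queue of completed connections
waiting to be accepted. Below is a minimal, illustrative Java sketch (not
Kafka source; the variable name and value are made up) of what the KIP would
make configurable:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.ServerSocketChannel;

    // Illustrative only: the backlog argument below is what KIP-764 proposes
    // to expose as a broker configuration instead of relying on the JDK default.
    public class BacklogSketch {
        public static void main(String[] args) throws IOException {
            int backlog = 50; // hypothetical value read from broker config
            try (ServerSocketChannel server = ServerSocketChannel.open()) {
                server.bind(new InetSocketAddress(9092), backlog);
                System.out.println("Listening with accept backlog " + backlog);
            }
        }
    }

A larger backlog lets a broker absorb bursts of new connections while the
acceptor is busy, instead of having the OS drop or refuse them.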

Thanks,

-- 

Okada Haruki
ocadar...@gmail.com



Re: [VOTE] KIP-707: The future of KafkaFuture

2021-08-01 Thread Colin McCabe
Hi Tom,

We don't want to break source code compatibility, so I think we should avoid 
removing this exception.

best,
Colin
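
To make the source-compatibility concern in the quoted message below concrete,
here is a hedged sketch (class and variable names are mine, not from the
thread): caller code like this compiles today, but would stop compiling if the
`throws InterruptedException` clause were removed from KafkaFuture#getNow(),
because Java rejects catching a checked exception that the try block can no
longer throw.

    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.common.KafkaFuture;

    public class GetNowCompatibilitySketch {
        public static void main(String[] args) {
            KafkaFuture<String> future = KafkaFuture.completedFuture("value");
            try {
                // Returns the result if the future is complete, else the fallback.
                System.out.println(future.getNow("fallback"));
            } catch (InterruptedException e) {
                // With the throws clause removed, this catch would become a compile
                // error: the exception is never thrown in the corresponding try block.
                Thread.currentThread().interrupt();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
    }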


On Mon, Jul 5, 2021, at 06:00, Tom Bentley wrote:
> During code review Chia-Ping noticed that the `throws InterruptedException`
> clause on the declaration of KafkaFuture#getNow() is not needed. We propose
> to remove it. Note that removing it is not a source compatible change since
> existing code which caught InterruptedException would get a compile error
> about the exception not being thrown in the try block.
> 
> Please reply here if you want to discuss further, otherwise we'll assume
> removing it is acceptable.
> 
> Kind regards,
> 
> Tom
> 
> 
> 
> On Tue, Apr 6, 2021 at 4:08 PM Tom Bentley  wrote:
> 
> > Hi,
> >
> > The vote passes with 4 binding +1s (Ismael, David, Chia-Ping and Colin),
> > and 1 non-binding +1 (Ryanne).
> >
> > Many thanks to those who commented and/or voted.
> >
> > Tom
> >
> > On Thu, Apr 1, 2021 at 8:21 PM Colin McCabe  wrote:
> >
> >> +1 (binding).  Thanks for the KIP.
> >>
> >> Colin
> >>
> >>
> >> On Tue, Mar 30, 2021, at 20:36, Chia-Ping Tsai wrote:
> >> > Thanks for this KIP. +1 (binding)
> >> >
> >> > On 2021/03/29 15:34:55, Tom Bentley  wrote:
> >> > > Hi,
> >> > >
> >> > > I'd like to start a vote on KIP-707, which proposes to add
> >> > > KafkaFuture.toCompletionStage(), deprecate KafkaFuture.Function and
> >> make a
> >> > > couple of other minor cosmetic changes.
> >> > >
> >> > >
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-707%3A+The+future+of+KafkaFuture
> >> > >
> >> > > Many thanks,
> >> > >
> >> > > Tom
> >> > >
> >> >
> >>
> >>
> 


[jira] [Resolved] (KAFKA-13114) Unregister listener during renounce when the in-memory snapshot is missing

2021-08-01 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13114.
-
Resolution: Fixed

> Unregister listener during renounce when the in-memory snapshot is missing
> --
>
> Key: KAFKA-13114
> URL: https://issues.apache.org/jira/browse/KAFKA-13114
> Project: Kafka
>  Issue Type: Sub-task
>  Components: controller
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Blocker
>  Labels: kip-500
> Fix For: 3.0.0
>
>
> Need to improve the renounce logic to do the following when the last 
> committed offset's in-memory snapshot is missing:
>  # Reset the snapshot registry
>  # Unregister the listener from the RaftClient
>  # Re-register the listener with the RaftClient



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-655: Windowed "Distinct" Operation for KStream

2021-08-01 Thread Ivan Ponomarev

Hi Bruno,

I'm sorry for the delay in answering. Unfortunately, your messages were put 
into my spam folder, which is why I didn't answer them right away.


Concerning your question about comparing serialized values vs. using 
equals(): I think it should be clear now thanks to John's explanations. 
Distinct is a stateful operation, so we will need to use serialization. 
(Although, AFAICS, an in-memory store might be a good practical solution 
in many cases.)
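
To illustrate what deduplicating on the serialized form means in practice,
here is a hand-rolled sketch (not the proposed API and not the eventual
implementation): the per-window state is a set keyed by the wrapped serialized
bytes, so two records count as duplicates exactly when their byte
representations match.

    import java.nio.charset.StandardCharsets;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.kafka.common.utils.Bytes;

    // Sketch of byte-level deduplication within a single window. This is NOT
    // the KIP-655 API; it only shows why the operation is stateful and why the
    // comparison happens on serialized bytes rather than equals().
    public class WindowDedupSketch {
        private final Set<Bytes> seenInWindow = new HashSet<>();

        /** Returns true only for the first occurrence of this serialized value. */
        public boolean isFirstOccurrence(byte[] serializedValue) {
            // Bytes.wrap provides content-based equals/hashCode, which a raw
            // byte[] used as a HashSet element would not.
            return seenInWindow.add(Bytes.wrap(serializedValue));
        }

        public static void main(String[] args) {
            WindowDedupSketch dedup = new WindowDedupSketch();
            byte[] first = "val".getBytes(StandardCharsets.UTF_8);
            byte[] second = "val".getBytes(StandardCharsets.UTF_8);
            System.out.println(dedup.isFirstOccurrence(first));  // true  -> forward
            System.out.println(dedup.isFirstOccurrence(second)); // false -> filter out
        }
    }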


> I do currently not see why it should not make sense in hopping 
windows... I do not understand  the following sentence: "...one record 
can be multiplied instead of deduplication."


Ok, let me explain.

As it's written in the KIP, "The distinct operation returns only a first 
record that falls into a new window, and filters out all the other 
records that fall into an already existing window."


Also, it's worth remembering that the result of `distinct` is 
KTable<Windowed<K>, V>, not a KStream.


If we have, say, hopping time windows [0, 40], [10, 50], [20, 60] and a 
record (key, val) with timestamp 25 arrives, it will be forwarded three 
times ('multiplied'), since it falls into the intersection of all three 
windows. The output will be


(key@[0/40],  val)
(key@[10/50], val)
(key@[20/60], val)
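
To see the 'multiplication' concretely, here is a small sketch (mine, not part
of the KIP) that uses the existing TimeWindows API to enumerate the hopping
windows a record with timestamp 25 falls into:

    import java.time.Duration;
    import org.apache.kafka.streams.kstream.TimeWindows;

    // Enumerates the hopping windows (size 40 ms, advance 10 ms) containing
    // timestamp 25 -- the same three windows as in the example above.
    public class HoppingWindowsSketch {
        public static void main(String[] args) {
            TimeWindows hopping = TimeWindows.ofSizeWithNoGrace(Duration.ofMillis(40))
                                             .advanceBy(Duration.ofMillis(10));
            hopping.windowsFor(25L).forEach((start, window) ->
                    System.out.println("[" + window.start() + ", " + window.end() + ")"));
            // Prints [0, 40), [10, 50), [20, 60): a windowed aggregation forwards
            // one result per window, hence three outputs for a single record.
        }
    }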

You can reason about the `distinct` operation just like you reason about 
`sum` or `count`. When a record arrives that falls into a window, we 
update the aggregation for this window. For `distinct`, when extra 
records arrive in the same window, we also perform some sort of 
aggregation (we may even count them internally!), but, unlike sum or 
count, we forward nothing because the counter is already strictly greater 
than zero.


You may refer to the 'Usage Examples' section of the KIP 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-655:+Windowed+Distinct+Operation+for+Kafka+Streams+API#KIP655:WindowedDistinctOperationforKafkaStreamsAPI-UsageExamples) 
to get a clearer idea of how it works.


> As I said earlier, I do not think that SQL and the Java Stream API 
are good arguments to not use a verb


This is an important matter. As we all know, naming is hard.

However, the `distinct` name is not used just in SQL and the Java Stream 
API. It is a standard operation found in nearly all data processing 
frameworks; see the hyperlinked examples in the 'Motivation' section of 
the KIP 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-655:+Windowed+Distinct+Operation+for+Kafka+Streams+API#KIP655:WindowedDistinctOperationforKafkaStreamsAPI-Motivation)


Please take a look and let me know what you think.

Regards,

Ivan

On 29.07.2021 4:49, John Roesler wrote:

Hi Bruno,

I had previously been thinking to use equals(), since I
thought that this might be a stateless operation. Comparing
the serialized form requires a serde and a fairly expensive
serialization operation, so while byte equality is superior
to equals(), we shouldn't use it in operations unless they
already require serialization.

I changed my mind when I later realized I had been mistaken,
and this operation is of course stateful.

I hope this helps clarify it.

Thanks,
-John

On Fri, 2021-07-23 at 09:53 +0200, Bruno Cadonna wrote:

Hi Ivan and John,

1. John, could you clarify why comparing serialized values now seems to be
the way to go?

2. Ivan, could you please answer the questions that I posted earlier? I
will repost them here:
Ivan, could you please make this matter a bit clearer in the KIP?
Actually, thinking about it again, I do currently not see why it should
not make sense in hopping windows. Regarding this, I do not understand
the following sentence:

"hopping and sliding windows do not make much sense for distinct()
because they produce multiple intersected windows, so that one record
can be multiplied instead of deduplication."

Ivan, what do you mean with "multiplied"?

3. As I said earlier, I do not think that SQL and the Java Stream API
are good arguments to not use a verb. However, if everybody else is fine
with it, I can get behind it.

John, good catch about the missing overloads!
BTW, the overload with Named should be there regardless of stateful or
stateless.

Best,
Bruno

On 22.07.21 20:58, John Roesler wrote:

Hi Ivan,

Thanks for the reply.

1. I think I might have gotten myself confused. I was
thinking of this operation as stateless, but now I'm not
sure what I was thinking... This operator has to be
stateful, right? In that case, I agree that comparing
serialized values seems to be the way to do it.

2. Thanks for the confirmation

3. I continue to be satisfied to let you all hash it out.

Thanks,
-John

On Tue, 2021-07-20 at 11:42 +0300, Ivan Ponomarev wrote:

Hi all,

1. Actually I always thought about the serialized byte array only -- at
least this is what local stores depend upon, and what Kafka itself
depends upon when doing log compaction.

I can imagine a case where two different byte arrays deserialize to
objects which are `equals` to each other. But I think we can ignore this
for now because IMO the