[jira] [Created] (KAFKA-12627) Unify MemoryPool and BufferSupplier

2021-04-06 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12627:
--

 Summary: Unify MemoryPool and BufferSupplier
 Key: KAFKA-12627
 URL: https://issues.apache.org/jira/browse/KAFKA-12627
 Project: Kafka
  Issue Type: Task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


Both the I/O thread and the network thread need memory management, but we 
currently give them different interfaces (MemoryPool vs. BufferSupplier). That is 
inconsistent, so we should consider unifying them to eliminate the duplicated code. 
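A unified abstraction could look roughly like the following sketch. All names here are hypothetical — this is not an actual KAFKA-12627 design, just an illustration of how a single interface could cover both call sites:

```java
import java.nio.ByteBuffer;

public class UnifiedBufferPoolDemo {

    /**
     * Hypothetical single abstraction that could serve both the network
     * thread (today's MemoryPool) and the I/O thread (today's BufferSupplier).
     */
    interface UnifiedBufferPool {
        ByteBuffer acquire(int size);    // cf. MemoryPool#tryAllocate, BufferSupplier#get
        void release(ByteBuffer buffer); // return the buffer for potential reuse
    }

    /** Trivial non-pooling implementation, analogous to BufferSupplier.NO_CACHING. */
    static final UnifiedBufferPool NO_CACHING = new UnifiedBufferPool() {
        @Override
        public ByteBuffer acquire(int size) {
            return ByteBuffer.allocate(size);
        }

        @Override
        public void release(ByteBuffer buffer) {
            // no-op: nothing is cached, the buffer is left for the GC
        }
    };

    public static void main(String[] args) {
        ByteBuffer buf = NO_CACHING.acquire(1024);
        System.out.println(buf.capacity()); // prints 1024
        NO_CACHING.release(buf);
    }
}
```

A pooling implementation would differ only behind the interface, which is the point of the unification: callers on either thread would no longer care which memory-management strategy backs them.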



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Re: [DISCUSS] KIP-706: Add method "Producer#produce" to return CompletionStage instead of Future

2021-04-06 Thread Chia-Ping Tsai
> > 3. Did we consider the possibility of introducing a new interface which
> > extended both CompletionStage and Future? That would make it easier for
> > people to update their existing code, since the handling of the result (in
> > the case they weren't using the Callback version) would be source
> > compatible. Not that I particularly want to introduce a new
> > KafkaFuture-like type, but just thought it worthwhile to float the idea.
> > 4. What about the possibility of users doing
> > `send().toCompletableFuture().complete(...)`. In KIP-707 I explicitly
> > prevented that, and I can't think of any use cases why we'd want to allow
> > it here. It's easier to start off preventing that kind of accidental misuse
> > and later allowing it when people turn up with valid use cases. KIP-707
> > would provide an internal KafkaCompletableFuture which should make doing
> > this relatively simple, I think.

As we don't deprecate the other two send methods, I feel returning CompletionStage 
is more acceptable. Users who want to call a blocking method can use the other send 
methods. Also, returning an interface (CompletionStage) leaves room for us to 
implement a more powerful custom CompletionStage.
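The misuse Tom describes in point 4 can be prevented by a guarded future. The sketch below is hypothetical (modeled on the KafkaCompletableFuture idea from KIP-707, whose actual implementation is internal): the future handed to users refuses external completion, so `send(...).toCompletableFuture().complete(...)` cannot corrupt the producer's state, while the producer completes it through its own path:

```java
import java.util.concurrent.CompletableFuture;

public class GuardedCompletableFuture<T> extends CompletableFuture<T> {

    @Override
    public boolean complete(T value) {
        throw new UnsupportedOperationException("User code may not complete this future");
    }

    @Override
    public boolean completeExceptionally(Throwable ex) {
        throw new UnsupportedOperationException("User code may not complete this future");
    }

    // Hypothetical internal completion path, callable only by the producer.
    boolean kafkaComplete(T value) {
        return super.complete(value);
    }

    public static void main(String[] args) {
        GuardedCompletableFuture<String> f = new GuardedCompletableFuture<>();
        try {
            f.complete("oops"); // accidental misuse by user code
        } catch (UnsupportedOperationException e) {
            System.out.println("external complete() rejected");
        }
        f.kafkaComplete("record-metadata"); // internal completion still works
        System.out.println(f.join()); // prints record-metadata
    }
}
```

Starting out restrictive like this keeps the door open: allowing external completion later is compatible, while taking it away would not be.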


On 2021/04/02 01:38:12, Chia-Ping Tsai  wrote: 
> hi Tom,
> 
> thanks for all your suggestions!
> 
> > 2. I'm not sure that having separate Builder.topic() and .partition()
> > methods is better than forcing people to set the target via a single method
> > call. For example, `Builder.target(String topic)`, `Builder.target(String
> > topic, int partition)`, `Builder.target(TopicPartition)` and
> > `Builder.target(Uuid)` (or higher level equivalent) forces the targeting to
> > a single place. It's also compatible with the SendTarget idea being added
> > later on.
> 
> nice one. will update KIP.
> 
> > 3. Did we consider the possibility of introducing a new interface which
> > extended both CompletionStage and Future? That would make it easier for
> > people to update their existing code, since the handling of the result (in
> > the case they weren't using the Callback version) would be source
> > compatible. Not that I particularly want to introduce a new
> > KafkaFuture-like type, but just thought it worthwhile to float the idea.
> > 4. What about the possibility of users doing
> > `send().toCompletableFuture().complete(...)`. In KIP-707 I explicitly
> > prevented that, and I can't think of any use cases why we'd want to allow
> > it here. It's easier to start off preventing that kind of accidental misuse
> > and later allowing it when people turn up with valid use cases. KIP-707
> > would provide an internal KafkaCompletableFuture which should make doing
> > this relatively simple, I think.
> 
> Yep, returning CompletableFuture would require fewer changes from users. Also, I can 
> file a follow-up to replace CompletableFuture with KafkaCompletableFuture if 
> KafkaCompletableFuture can prevent such accidents.
> 
> > 2. I'm not sure that having separate Builder.topic() and .partition()
> > methods is better than forcing people to set the target via a single method
> > call. For example, `Builder.target(String topic)`, `Builder.target(String
> > topic, int partition)`, `Builder.target(TopicPartition)` and
> > `Builder.target(Uuid)` (or higher level equivalent) forces the targeting to
> > a single place. It's also compatible with the SendTarget idea being added
> > later on.
> > If we're intending to add a send-to-topic-id feature it would affect
> > ProducerRecord.topic(). Although not currently documented to never return
> > null, it currently has that semantic, and the methods of ProducerRecord
> > which can return null are explicitly documented to do so. Making it return
> > null would be a backwards incompatible change, so perhaps we should change
> > its contract now to allow us to support topic ids in 3.x? If we went with
> > the SendTarget idea ProducerRecord would presumably gain a target() method
> > (and we could deprecate topic()), and if not I suppose ProducerRecord would
> > gain a topicId() method, and exactly one of topic() and topicId() would
> > return null, which isn't terribly nice.
> 
> SendTarget seems to be a good way to deal with both topic name and topic id. 
> I will re-think the interface mentioned by Jason (send(SendTarget target, 
> Record record)) and your suggestions. As the new interface `Record` doesn't 
> carry the topic name/id, we don't need to make ProducerRecord extend the new 
> interface, so users would have to call the new `send` if they want to send 
> data by topic id.
> 
> On 2021/03/31 14:10:35, Tom Bentley  wrote: 
> > Hi,
> > 
> > Starting with the KIP as written:
> > 
> > 1. I think the Builder.key() and Builder.value() methods in the KIP have
> > the wrong parameter type: Should be K and V I think.
> > 
> > 2. I'm not sure that having separate Builder.topic() and .partition()
> > methods is better than forcing people to set the target via a single method
> > call. For example, `Builder.target(String 

[jira] [Resolved] (KAFKA-10769) Remove JoinGroupRequest#containsValidPattern as it is duplicate to Topic#containsValidPattern

2021-04-06 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10769.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Remove JoinGroupRequest#containsValidPattern as it is duplicate to 
> Topic#containsValidPattern
> -
>
> Key: KAFKA-10769
> URL: https://issues.apache.org/jira/browse/KAFKA-10769
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: highluck
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
>
> as title. Remove the redundant code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12384) Flaky Test ListOffsetsRequestTest.testResponseIncludesLeaderEpoch

2021-04-06 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12384.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Flaky Test ListOffsetsRequestTest.testResponseIncludesLeaderEpoch
> -
>
> Key: KAFKA-12384
> URL: https://issues.apache.org/jira/browse/KAFKA-12384
> Project: Kafka
>  Issue Type: Test
>  Components: core, unit tests
>Reporter: Matthias J. Sax
>Assignee: Chia-Ping Tsai
>Priority: Critical
>  Labels: flaky-test
> Fix For: 3.0.0
>
>
> {quote}org.opentest4j.AssertionFailedError: expected: <(0,0)> but was: 
> <(-1,-1)> at 
> org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55) at 
> org.junit.jupiter.api.AssertionUtils.failNotEqual(AssertionUtils.java:62) at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182) at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177) at 
> org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1124) at 
> kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch(ListOffsetsRequestTest.scala:172){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-730: Producer ID generation in KRaft mode

2021-04-06 Thread Ismael Juma
Great, thanks. Instead of calling it "bridge release", can we say 3.0?

Ismael

On Tue, Apr 6, 2021 at 7:48 PM David Arthur  wrote:

> Thanks for the feedback, Ismael. Renaming the RPC and using start+len
> instead of start+end sounds fine.
>
> And yes, the controller will allocate the IDs in ZK mode for the bridge
> release.
>
> I'll update the KIP to reflect these points.
>
> Thanks!
>
> On Tue, Apr 6, 2021 at 7:30 PM Ismael Juma  wrote:
>
> > Sorry, one more question: the allocation of ids will be done by the
> > controller even in ZK mode, right?
> >
> > Ismael
> >
> > On Tue, Apr 6, 2021 at 4:26 PM Ismael Juma  wrote:
> >
> > > One additional comment: if you return the number of ids instead of the
> > end
> > > range, you can use an int32.
> > >
> > > Ismael
> > >
> > > On Tue, Apr 6, 2021 at 4:25 PM Ismael Juma  wrote:
> > >
> > >> Thanks for the KIP, David. Any reason not to rename
> > >> AllocateProducerIdBlockRequest to AllocateProducerIdsRequest?
> > >>
> > >> Ismael
> > >>
> > >> On Tue, Apr 6, 2021 at 3:51 PM David Arthur  wrote:
> > >>
> > >>> Hello everyone,
> > >>>
> > >>> I'd like to start the discussion for KIP-730
> > >>>
> > >>>
> > >>>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
> > >>>
> > >>> This KIP proposes a new RPC for generating blocks of IDs for
> > >>> transactional
> > >>> and idempotent producers.
> > >>>
> > >>> Cheers,
> > >>> David Arthur
> > >>>
> > >>
> >
>
>
> --
> David Arthur
>


Re: [DISCUSS] KIP-730: Producer ID generation in KRaft mode

2021-04-06 Thread David Arthur
Thanks for the feedback, Ismael. Renaming the RPC and using start+len
instead of start+end sounds fine.

And yes, the controller will allocate the IDs in ZK mode for the bridge
release.

I'll update the KIP to reflect these points.

Thanks!

On Tue, Apr 6, 2021 at 7:30 PM Ismael Juma  wrote:

> Sorry, one more question: the allocation of ids will be done by the
> controller even in ZK mode, right?
>
> Ismael
>
> On Tue, Apr 6, 2021 at 4:26 PM Ismael Juma  wrote:
>
> > One additional comment: if you return the number of ids instead of the
> end
> > range, you can use an int32.
> >
> > Ismael
> >
> > On Tue, Apr 6, 2021 at 4:25 PM Ismael Juma  wrote:
> >
> >> Thanks for the KIP, David. Any reason not to rename
> >> AllocateProducerIdBlockRequest to AllocateProducerIdsRequest?
> >>
> >> Ismael
> >>
> >> On Tue, Apr 6, 2021 at 3:51 PM David Arthur  wrote:
> >>
> >>> Hello everyone,
> >>>
> >>> I'd like to start the discussion for KIP-730
> >>>
> >>>
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
> >>>
> >>> This KIP proposes a new RPC for generating blocks of IDs for
> >>> transactional
> >>> and idempotent producers.
> >>>
> >>> Cheers,
> >>> David Arthur
> >>>
> >>
>


-- 
David Arthur


[jira] [Created] (KAFKA-12626) RaftClusterTest and ClusterTestExtensionTest failures

2021-04-06 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-12626:
--

 Summary: RaftClusterTest and ClusterTestExtensionTest failures
 Key: KAFKA-12626
 URL: https://issues.apache.org/jira/browse/KAFKA-12626
 Project: Kafka
  Issue Type: Bug
Reporter: Justine Olshan


RaftClusterTest and ClusterTestExtensionsTest.[Quorum 2] Name=cluster-tests-2, 
security=PLAINTEXT are failing due to
{noformat}
java.util.concurrent.ExecutionException: java.lang.ClassNotFoundException: 
org.apache.kafka.controller.NoOpSnapshotWriterBuilder{noformat}
I think it is related to the changes from 
[https://github.com/apache/kafka/commit/7bc84d6ced71056dbb4cecdc9abbdbd7d8a5aa10#diff-77dc2adb187fd078084644613cff2b53021c8a5fbcdcfa116515734609d1332aR210]
 specifically this part of the code 
[https://github.com/apache/kafka/blob/33d0445b8408289800352de7822340028782a154/metadata/src/main/java/org/apache/kafka/controller/QuorumController.java#L210]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-718: Make KTable Join on Foreign key unopinionated

2021-04-06 Thread Matthias J. Sax
Just catching up here.

I agree that we have two issues, and the first (aligning the subscription
store with the main store) can be done as a bug fix.

For the KIP (which addresses the second), I tend to agree that reusing
`Materialized` might be better, as it would keep the API surface area
smaller.


-Matthias

On 4/6/21 8:48 AM, John Roesler wrote:
> Hi Marco,
> 
> Just a quick clarification: I just reviewed the Materialized class. It looks 
> like the only undesirable members are:
> 1. Retention
> 2. Key/Value serdes 
> 
> The underlying store type would be “KeyValueStore”, for which 
> the withRetention javadoc already says it’s ignored.
> 
> Perhaps we could just stick with Materialized by adding a note to the 
> Key/Value serdes setters that they are ignored for FKJoin subscription stores?
> 
> Not as elegant as a new config class, but these config classes actually bring 
> a fair amount of complexity, so it might be nice to avoid a new one.
> 
> Thanks,
> John
> 
> On Tue, Apr 6, 2021, at 10:28, Marco Aurélio Lotz wrote:
>> Hi John / Guozhang,
>>
>> If I correctly understood John's message, he agrees on having the two
>> scenarios (piggy-back and api extension). In my view, these two scenarios
>> are separate tasks - the first one is a bug-fix and the second is an
>> improvement on the current API.
>>
>> - bug-fix: On the current API, we change its implementation to piggyback
>> on the materialization method provided via the materialized parameter. This
>> way it will no longer be opinionated and will not force RocksDB
>> persistence for the subscription store. Thus an in-memory materialized
>> parameter would imply an in-memory subscription store, for example. From my
>> understanding, the original implementation tried to be as unopinionated
>> towards storage methods as possible - and the current implementation does not
>> allow that. Was that the case? We would still need to add this
>> modification to the upgrade notes, since it may affect some deployments.
>>
>> - improvement: We extend the API to allow a user to fine tune different
>> materialization methods for subscription and join store. This is done by
>> adding a new parameter to the associated methods.
>>
>> Does it sound reasonable to you Guozhang?
>> On your question: does it make sense for a user to decide retention
>> policies (the withRetention method) or caching for subscription stores? I can
>> see why one would fine-tune logging, for example, but at first glance not these
>> other behaviours. That's why I am unsure about using the Materialized class.
>>
>> @John, I will update the KIP with your points as soon as we clarify this.
>>
>> Cheers,
>> Marco
>>
>> On Tue, Apr 6, 2021 at 1:17 AM Guozhang Wang  wrote:
>>
>>> Thanks Marco / John,
>>>
>>> I think the arguments for not piggy-backing on the existing Materialized
>>> makes sense; on the other hand, if we go this route should we just use a
>>> separate Materialized than using an extended /
>>> narrowed-scoped MaterializedSubscription since it seems we want to allow
>>> users to fully customize this store?
>>>
>>> Guozhang
>>>
>>> On Thu, Apr 1, 2021 at 5:28 PM John Roesler  wrote:
>>>
 Thanks Marco,

 Sorry if I caused any trouble!

 I don’t remember what I was thinking before, but reasoning about it now,
 you might need the fine-grained choice if:

 1. The number or size of records in each partition of both tables is
 small(ish), but the cardinality of the join is very high. Then you might
 want an in-memory table store, but an on-disk subscription store.

 2. The number or size of records is very large, but the join cardinality
 is low. Then you might need an on-disk table store, but an in-memory
 subscription store.

 3. You might want a different kind (or differently configured) store for
 the subscription store, since its access pattern is so different.

 If you buy these, it might be good to put the justification into the KIP.

 I’m in favor of the default you’ve proposed.

 Thanks,
 John

 On Mon, Mar 29, 2021, at 04:24, Marco Aurélio Lotz wrote:
> Hi Guozhang,
>
> Apologies for the late answer. Originally that was my proposal - to
> piggyback on the provided materialisation method (
> https://issues.apache.org/jira/browse/KAFKA-10383).
> John Roesler suggested to us to provide even further fine tuning on API
> level parameters. Maybe we could see this as two sides of the same
>>> coin:
>
> - On the current API, we change it to piggy back on the materialization
> method provided to the join store.
> - We extend the API to allow a user to fine tune different
 materialization
> methods for subscription and join store.
>
> What do you think?
>
> Cheers,
> Marco
>
> On Thu, Mar 4, 2021 at 8:04 PM Guozhang Wang 
>>> wrote:
>
>> Thanks Marco,
>>
>> Just a quick thought: what if we reuse the existing 

Re: [DISCUSS] KIP-633: Drop 24 hour default of grace period in Streams

2021-04-06 Thread Matthias J. Sax
Thanks for the KIP, Sophie. It makes total sense to get rid of the default
grace period of 24h.


Some questions/comments:

(1) Is there any particular reason why we want to remove
`grace(Duration)` method?


(2) About `SlidingWindows#withTimeDifferenceAndGrace` -- personally I
think it's worth cleaning it up right now -- given that sliding windows
are rather new, the "splash radius" should be small.




(3) Some nits on wording:

> This config determines how long after a window closes any new data will still 
> be processed

Should be "after a window ends" -- a window is closed after the grace period
has passed.


> one which indicates to use no grace period and not handle out-of-order data

Seems strictly not correct -- if there is a window from 0 to 100 and you
get records with ts 99, 98, 97, ..., 0, all but the first of those records are
out-of-order, but they are still processed even with a grace period of zero.

Maybe better: "one which indicates to use no grace period and close the
window immediately when the window ends."


> and make a conscious decision to skip the grace period and drop out-of-order 
> records,

Maybe better: "and make a conscious decision to skip the grace period
and close a window immediately"



-Matthias
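The windowing semantics discussed above can be sketched as a tiny acceptance check. This is illustrative only — the boundary conventions are simplified and the names are not the Streams implementation — but it captures the distinction between a window *ending* and *closing*:

```java
public class GracePeriodDemo {

    /**
     * A window [start, start+size) *ends* at start+size, but only *closes*
     * once stream time reaches start+size+grace. An out-of-order record is
     * accepted as long as its window has not yet closed.
     * (Exact boundary handling here is illustrative, not Streams' actual code.)
     */
    static boolean accepted(long windowStart, long windowSize, long grace,
                            long recordTs, long streamTime) {
        boolean inWindow = recordTs >= windowStart && recordTs < windowStart + windowSize;
        boolean windowStillOpen = streamTime < windowStart + windowSize + grace;
        return inWindow && windowStillOpen;
    }

    public static void main(String[] args) {
        // Window [0, 100), grace 0, stream time 99: a record with ts 50 is
        // out-of-order yet accepted, because the window has not ended.
        System.out.println(accepted(0, 100, 0, 50, 99));  // prints true
        // At stream time 100 the window has closed (grace 0): ts 50 is dropped.
        System.out.println(accepted(0, 100, 0, 50, 100)); // prints false
    }
}
```

This is why "grace period zero" does not mean "drop all out-of-order data": it only means the window closes the moment it ends.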




On 3/31/21 5:02 PM, Guozhang Wang wrote:
> Hello Sophie,
> 
> I agree that the old 24-hour grace period should be updated, and I also
> think now it is a better idea to make the grace period "mandatory" from the
> API names since it is a very important concept and hence worth emphasizing
> to users up front.
> 
> Guozhang
> 
> On Wed, Mar 31, 2021 at 1:58 PM John Roesler  wrote:
> 
>> Thanks for bringing this up, Sophie!
>>
>> This has indeed been a pain point for a lot of people.
>>
>> It's a really thorny issue with no obvious "right" solution.
>> I think your proposal is a good one.
>>
>> Thanks,
>> -John
>>
>> On Wed, 2021-03-31 at 13:28 -0700, Sophie Blee-Goldman
>> wrote:
>>> Hey all,
>>>
>>> It's finally time to reconsider the default grace period in Kafka
>> Streams,
>>> and hopefully save a lot of suppression users from the pain of figuring
>> out
>>> why their results don't show up until 24 hours later. Please check out
>> the
>>> proposal and let me know what you think.
>>>
>>> KIP:
>>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24+hour+default+of+grace+period+in+Streams
>>> <
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24hr+default+grace+period
>>>
>>>
>>> JIRA: https://issues.apache.org/jira/browse/KAFKA-8613
>>>
>>> Cheers,
>>> Sophie
>>
>>
>>
> 


[jira] [Resolved] (KAFKA-5146) Kafka Streams: remove compile dependency on connect-json

2021-04-06 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-5146.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Kafka Streams: remove compile dependency on connect-json
> 
>
> Key: KAFKA-5146
> URL: https://issues.apache.org/jira/browse/KAFKA-5146
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.2.0, 0.10.2.1
>Reporter: Michael G. Noll
>Assignee: Marco Lotz
>Priority: Minor
> Fix For: 3.0.0
>
>
> We currently have a compile-dependency on `connect-json`:
> {code}
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>connect-json</artifactId>
>   <version>0.10.2.0</version>
>   <scope>compile</scope>
> </dependency>
> {code}
> The snippet above is from the generated POM of Kafka Streams as of 0.10.2.0 
> release.
> AFAICT the only reason for that is because the Kafka Streams *examples* 
> showcase some JSON processing, but that’s it.
> First and foremost, we should remove the connect-json dependency, and also 
> figure out a way to set up / structure the examples so that we can 
> continue showcasing JSON support.  Alternatively, we could consider removing 
> the JSON example (but I don't like that, personally).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Permission to Assign Jiras

2021-04-06 Thread Konstantine Karantasis
Hi Ryan.

I found your first and last name on jira and added you to the list of
contributors for the Apache Kafka project.
You should be able to assign tickets to yourself now.

Welcome!
Konstantine

On Tue, Apr 6, 2021 at 2:20 PM Ryan Dielhenn 
wrote:

> Hello,
>
> May I have permission to assign Jiras to myself? I just got this done
> https://issues.apache.org/jira/browse/KAFKA-12265 but don't have access to
> reassign it.
>
> Best,
> Ryan Dielhenn
>


Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-04-06 Thread Colin McCabe
Hi Soumyajit,

The difficult thing is deciding which fields to share and how to share them.  
Key and value are probably the minimum we need to make this useful.  If we do 
choose to go with byte buffer, it is not necessary to also pass the size, since 
ByteBuffer maintains that internally.

ApiRecordError is also an internal class, so it can't be used in a public API.  
I think most likely if we were going to do this, we would just catch an 
exception and use the exception text as the validation error.

best,
Colin
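The exception-based variant Colin describes can be sketched as below. All names here are hypothetical (ApiRecordError is internal, so the sketch maps the exception message to a plain String instead); note also that no explicit sizes are passed, since ByteBuffer.remaining() already carries them:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Optional;

public class ValidationSketch {

    /** Hypothetical exception a validator plugin would throw to reject a record. */
    static class RecordValidationException extends RuntimeException {
        RecordValidationException(String message) { super(message); }
    }

    /** Hypothetical plugin interface: throw to reject, return to accept. */
    interface BrokerRecordValidator {
        void validateRecord(ByteBuffer key, ByteBuffer value);
    }

    /** Broker-side wrapper: turn the exception text into the validation error. */
    static Optional<String> validate(BrokerRecordValidator validator,
                                     ByteBuffer key, ByteBuffer value) {
        try {
            validator.validateRecord(key, value);
            return Optional.empty();
        } catch (RecordValidationException e) {
            return Optional.of(e.getMessage());
        }
    }

    public static void main(String[] args) {
        // Example plugin: reject keys larger than 16 bytes.
        BrokerRecordValidator maxKeySize = (key, value) -> {
            if (key != null && key.remaining() > 16) {
                throw new RecordValidationException("key larger than 16 bytes");
            }
        };
        ByteBuffer smallKey = ByteBuffer.wrap("id-1".getBytes(StandardCharsets.UTF_8));
        System.out.println(validate(maxKeySize, smallKey, null)); // prints Optional.empty
    }
}
```

Sharing only primitive/NIO types this way avoids exposing internal classes such as Record or ApiRecordError in a public API.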


On Tue, Apr 6, 2021, at 15:57, Soumyajit Sahu wrote:
> Hi Tom,
> 
> Makes sense. Thanks for the explanation. I get what Colin had meant earlier.
> 
> Would a different signature for the interface work? Example below, but
> please feel free to suggest alternatives if there are any possibilities of
> such.
> 
> If needed, then deprecating this and introducing a new signature would be
> straight-forward as both (old and new) calls could be made serially in the
> LogValidator allowing a coexistence for a transition period.
> 
> interface BrokerRecordValidator {
> /**
>  * Validate the record for a given topic-partition.
>  */
> Optional<ApiRecordError> validateRecord(TopicPartition topicPartition,
> int keySize, ByteBuffer key, int valueSize, ByteBuffer value, Header[]
> headers);
> }
> 
> 
> On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley  wrote:
> 
> > Hi Soumyajit,
> >
> > Although that class does indeed have public access at the Java level, it
> > does so only because it needs to be used by internal Kafka code which lives
> > in other packages (there isn't any more restrictive access modifier which
> > would work). What the project considers public Java API is determined by
> > what's included in the published Javadocs:
> > https://kafka.apache.org/27/javadoc/index.html, which doesn't include the
> > org.apache.kafka.common.record package.
> >
> > One of the problems with making these internal classes public is it ties
> > the project into supporting them as APIs, which can make changing them much
> > harder and in the long run that can slow, or even prevent, innovation in
> > the rest of Kafka.
> >
> > Kind regards,
> >
> > Tom
> >
> >
> >
> > On Sun, Apr 4, 2021 at 7:31 PM Soumyajit Sahu 
> > wrote:
> >
> > > Hi Colin,
> > > I see that both the interface "Record" and the implementation
> > > "DefaultRecord" being used in LogValidator.java are public
> > > interfaces/classes.
> > >
> > >
> > >
> > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/Records.java
> > > and
> > >
> > >
> > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
> > >
> > > So, it should be ok to use them. Let me know what you think.
> > >
> > > Thanks,
> > > Soumyajit
> > >
> > >
> > > On Fri, Apr 2, 2021 at 8:51 AM Colin McCabe  wrote:
> > >
> > > > Hi Soumyajit,
> > > >
> > > > I believe we've had discussions about proposals similar to this before,
> > > > although I'm having trouble finding one right now.  The issue here is
> > > that
> > > > Record is a private class -- it is not part of any public API, and may
> > > > change at any time.  So we can't expose it in public APIs.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Thu, Apr 1, 2021, at 14:18, Soumyajit Sahu wrote:
> > > > > Hello All,
> > > > > I would like to start a discussion on the KIP-729.
> > > > >
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-729%3A+Custom+validation+of+records+on+the+broker+prior+to+log+append
> > > > >
> > > > > Thanks!
> > > > > Soumyajit
> > > > >
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-730: Producer ID generation in KRaft mode

2021-04-06 Thread Ismael Juma
Sorry, one more question: the allocation of ids will be done by the
controller even in ZK mode, right?

Ismael

On Tue, Apr 6, 2021 at 4:26 PM Ismael Juma  wrote:

> One additional comment: if you return the number of ids instead of the end
> range, you can use an int32.
>
> Ismael
>
> On Tue, Apr 6, 2021 at 4:25 PM Ismael Juma  wrote:
>
>> Thanks for the KIP, David. Any reason not to rename
>> AllocateProducerIdBlockRequest to AllocateProducerIdsRequest?
>>
>> Ismael
>>
>> On Tue, Apr 6, 2021 at 3:51 PM David Arthur  wrote:
>>
>>> Hello everyone,
>>>
>>> I'd like to start the discussion for KIP-730
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
>>>
>>> This KIP proposes a new RPC for generating blocks of IDs for
>>> transactional
>>> and idempotent producers.
>>>
>>> Cheers,
>>> David Arthur
>>>
>>


Re: [DISCUSS] KIP-730: Producer ID generation in KRaft mode

2021-04-06 Thread Ismael Juma
One additional comment: if you return the number of ids instead of the end
range, you can use an int32.

Ismael

On Tue, Apr 6, 2021 at 4:25 PM Ismael Juma  wrote:

> Thanks for the KIP, David. Any reason not to rename
> AllocateProducerIdBlockRequest to AllocateProducerIdsRequest?
>
> Ismael
>
> On Tue, Apr 6, 2021 at 3:51 PM David Arthur  wrote:
>
>> Hello everyone,
>>
>> I'd like to start the discussion for KIP-730
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
>>
>> This KIP proposes a new RPC for generating blocks of IDs for transactional
>> and idempotent producers.
>>
>> Cheers,
>> David Arthur
>>
>


Re: [DISCUSS] KIP-730: Producer ID generation in KRaft mode

2021-04-06 Thread Ismael Juma
Thanks for the KIP, David. Any reason not to rename
AllocateProducerIdBlockRequest to AllocateProducerIdsRequest?

Ismael

On Tue, Apr 6, 2021 at 3:51 PM David Arthur  wrote:

> Hello everyone,
>
> I'd like to start the discussion for KIP-730
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode
>
> This KIP proposes a new RPC for generating blocks of IDs for transactional
> and idempotent producers.
>
> Cheers,
> David Arthur
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #11

2021-04-06 Thread Apache Jenkins Server




Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-04-06 Thread Soumyajit Sahu
Hi Tom,

Makes sense. Thanks for the explanation. I get what Colin had meant earlier.

Would a different signature for the interface work? Example below, but
please feel free to suggest alternatives if there are any possibilities of
such.

If needed, then deprecating this and introducing a new signature would be
straight-forward as both (old and new) calls could be made serially in the
LogValidator allowing a coexistence for a transition period.

interface BrokerRecordValidator {
/**
 * Validate the record for a given topic-partition.
 */
Optional<ApiRecordError> validateRecord(TopicPartition topicPartition,
int keySize, ByteBuffer key, int valueSize, ByteBuffer value, Header[]
headers);
}


On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley  wrote:

> Hi Soumyajit,
>
> Although that class does indeed have public access at the Java level, it
> does so only because it needs to be used by internal Kafka code which lives
> in other packages (there isn't any more restrictive access modifier which
> would work). What the project considers public Java API is determined by
> what's included in the published Javadocs:
> https://kafka.apache.org/27/javadoc/index.html, which doesn't include the
> org.apache.kafka.common.record package.
>
> One of the problems with making these internal classes public is it ties
> the project into supporting them as APIs, which can make changing them much
> harder and in the long run that can slow, or even prevent, innovation in
> the rest of Kafka.
>
> Kind regards,
>
> Tom
>
>
>
> On Sun, Apr 4, 2021 at 7:31 PM Soumyajit Sahu 
> wrote:
>
> > Hi Colin,
> > I see that both the interface "Record" and the implementation
> > "DefaultRecord" being used in LogValidator.java are public
> > interfaces/classes.
> >
> >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/Records.java
> > and
> >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
> >
> > So, it should be ok to use them. Let me know what you think.
> >
> > Thanks,
> > Soumyajit
> >
> >
> > On Fri, Apr 2, 2021 at 8:51 AM Colin McCabe  wrote:
> >
> > > Hi Soumyajit,
> > >
> > > I believe we've had discussions about proposals similar to this before,
> > > although I'm having trouble finding one right now.  The issue here is
> > that
> > > Record is a private class -- it is not part of any public API, and may
> > > change at any time.  So we can't expose it in public APIs.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Thu, Apr 1, 2021, at 14:18, Soumyajit Sahu wrote:
> > > > Hello All,
> > > > I would like to start a discussion on the KIP-729.
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-729%3A+Custom+validation+of+records+on+the+broker+prior+to+log+append
> > > >
> > > > Thanks!
> > > > Soumyajit
> > > >
> > >
> >
>


[DISCUSS] KIP-730: Producer ID generation in KRaft mode

2021-04-06 Thread David Arthur
Hello everyone,

I'd like to start the discussion for KIP-730

https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode

This KIP proposes a new RPC for generating blocks of IDs for transactional
and idempotent producers.

Cheers,
David Arthur
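The start+length encoding agreed on later in this thread comes down to simple arithmetic. A minimal sketch (field names hypothetical, not the final KIP-730 schema) of why returning the count instead of the end of the range lets the second field be an int32:

```java
public class ProducerIdBlock {

    final long firstProducerId; // int64 in the hypothetical RPC response
    final int blockLength;      // int32 is plenty for a block size, e.g. 1000 ids

    ProducerIdBlock(long firstProducerId, int blockLength) {
        this.firstProducerId = firstProducerId;
        this.blockLength = blockLength;
    }

    /** The broker derives the end of the range locally. */
    long lastProducerId() {
        return firstProducerId + blockLength - 1;
    }

    public static void main(String[] args) {
        ProducerIdBlock block = new ProducerIdBlock(5000, 1000);
        System.out.println(block.lastProducerId()); // prints 5999
    }
}
```

Only the start needs the full 64-bit producer-id space; the length is bounded by the block size, so an int64 end field would be redundant.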


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #4

2021-04-06 Thread Apache Jenkins Server




Re: [VOTE] 2.8.0 RC0

2021-04-06 Thread John Roesler
Hello again, all,

I am closing this vote in favor of the 2.8.0 RC1 thread.

Thank you,
John

On Tue, 2021-03-30 at 17:11 -0500, John Roesler wrote:
> Hello again, all,
> 
> I just wanted to mention that I am aware of Justin's
> concerns in the 2.6.2 thread:
> https://lists.apache.org/thread.html/r2df54c11c10d3d38443054998bc7dd92d34362641733c2fb7c579b50%40%3Cdev.kafka.apache.org%3E
> 
> I plan to make sure we address these concerns before the
> actual 2.8.0 release, but wanted to get RC0 out asap for
> testing.
> 
> Thank you,
> John
> 
> On Tue, 2021-03-30 at 16:37 -0500, John Roesler wrote:
> > Hello Kafka users, developers and client-developers,
> > 
> > This is the first candidate for release of Apache Kafka
> > 2.8.0. This is a major release that includes many new
> > features, including:
> > 
> > * Early-access release of replacing Zookeeper with a self-
> > managed quorum
> > * Add Describe Cluster API
> > * Support mutual TLS authentication on SASL_SSL listeners
> > * Ergonomic improvements to Streams TopologyTestDriver
> > * Logger API improvement to respect the hierarchy
> > * Request/response trace logs are now JSON-formatted
> > * New API to add and remove Streams threads while running
> > * New REST API to expose Connect task configurations
> > * Fixed the TimeWindowDeserializer to be able to deserialize
> > keys outside of Streams (such as in the console consumer)
> > * Streams resilient improvement: new uncaught exception
> > handler
> > * Streams resilience improvement: automatically recover from
> > transient timeout exceptions
> > 
> > 
> > 
> > Release notes for the 2.8.0 release:
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/RELEASE_NOTES.html
> > 
> > *** Please download, test and vote by 6 April 2021 ***
> > 
> > Kafka's KEYS file containing PGP keys we use to sign the
> > release:
> > https://kafka.apache.org/KEYS
> > 
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/
> > 
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > 
> > * Javadoc:
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/javadoc/
> > 
> > * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.8.0-rc0
> > 
> > * Documentation:
> > https://kafka.apache.org/28/documentation.html
> > 
> > * Protocol:
> > https://kafka.apache.org/28/protocol.html
> > 
> > 
> > /**
> > 
> > Thanks,
> > John
> > 
> > 
> 
> 




[VOTE] 2.8.0 RC1

2021-04-06 Thread John Roesler
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka
2.8.0. This is a major release that includes many new
features, including:

* Early-access release of replacing Zookeeper with a self-
managed quorum
* Add Describe Cluster API
* Support mutual TLS authentication on SASL_SSL listeners
* Ergonomic improvements to Streams TopologyTestDriver
* Logger API improvement to respect the hierarchy
* Request/response trace logs are now JSON-formatted
* New API to add and remove Streams threads while running
* New REST API to expose Connect task configurations
* Fixed the TimeWindowDeserializer to be able to deserialize
keys outside of Streams (such as in the console consumer)
* Streams resilient improvement: new uncaught exception
handler
* Streams resilience improvement: automatically recover from
transient timeout exceptions




Release notes for the 2.8.0 release:
https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/RELEASE_NOTES.html


*** Please download, test and vote by 6 April 2021 ***

Kafka's KEYS file containing PGP keys we use to sign the
release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):

https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:

https://home.apache.org/~vvcephei/kafka-2.8.0-rc1/javadoc/

* Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:

https://github.com/apache/kafka/releases/tag/2.8.0-rc1

* Documentation:
https://kafka.apache.org/28/documentation.html

* Protocol:
https://kafka.apache.org/28/protocol.html


/**

Thanks,
John





Permission to Assign Jiras

2021-04-06 Thread Ryan Dielhenn
Hello,

May I have permission to assign Jiras to myself? I just got this done
https://issues.apache.org/jira/browse/KAFKA-12265 but don't have access to
reassign it.

Best,
Ryan Dielhenn


[jira] [Resolved] (KAFKA-12602) The LICENSE and NOTICE files don't list everything they should

2021-04-06 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-12602.
--
Resolution: Fixed

> The LICENSE and NOTICE files don't list everything they should
> --
>
> Key: KAFKA-12602
> URL: https://issues.apache.org/jira/browse/KAFKA-12602
> Project: Kafka
>  Issue Type: Bug
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.8.0
>
>
> [~jmclean] raised this on the mailing list: 
> [https://lists.apache.org/thread.html/r2df54c11c10d3d38443054998bc7dd92d34362641733c2fb7c579b50%40%3Cdev.kafka.apache.org%3E]
>  
> We need to make the license file match what we are actually shipping in
> source and binary distributions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12625) Fix the NOTICE file

2021-04-06 Thread John Roesler (Jira)
John Roesler created KAFKA-12625:


 Summary: Fix the NOTICE file
 Key: KAFKA-12625
 URL: https://issues.apache.org/jira/browse/KAFKA-12625
 Project: Kafka
  Issue Type: Task
Reporter: John Roesler
 Fix For: 3.0.0, 2.8.1


In https://issues.apache.org/jira/browse/KAFKA-12602, we fixed the license 
file, and in the comments, Justin noted that we really should fix the NOTICE 
file as well.

Basically, we need to look though each of the packaged dependencies and 
transmit each of their NOTICEs (for Apache2 deps) or otherwise, any copyright 
notices they assert.

It would be good to consider automating a check for this as well (see 
https://issues.apache.org/jira/browse/KAFKA-12622)





Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #10

2021-04-06 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12624) Fix LICENSE in 2.6

2021-04-06 Thread John Roesler (Jira)
John Roesler created KAFKA-12624:


 Summary: Fix LICENSE in 2.6
 Key: KAFKA-12624
 URL: https://issues.apache.org/jira/browse/KAFKA-12624
 Project: Kafka
  Issue Type: Sub-task
Reporter: John Roesler
Assignee: A. Sophie Blee-Goldman
 Fix For: 2.6.2


Just splitting this out as a sub-task.

I've fixed the parent ticket on trunk and 2.8.

You'll need to cherry-pick the fix from 2.8 (see 
[https://github.com/apache/kafka/pull/10474)]

Then, you can follow the manual verification steps I detailed here: 
https://issues.apache.org/jira/browse/KAFKA-12622





[jira] [Created] (KAFKA-12623) Fix LICENSE in 2.7

2021-04-06 Thread John Roesler (Jira)
John Roesler created KAFKA-12623:


 Summary: Fix LICENSE in 2.7
 Key: KAFKA-12623
 URL: https://issues.apache.org/jira/browse/KAFKA-12623
 Project: Kafka
  Issue Type: Sub-task
Reporter: John Roesler
Assignee: Mickael Maison
 Fix For: 2.7.1


Just splitting this out as a sub-task.

I've fixed the parent ticket on trunk and 2.8.

You'll need to cherry-pick the fix from 2.8 (see 
[https://github.com/apache/kafka/pull/10474)]

Then, you can follow the manual verification steps I detailed here: 
https://issues.apache.org/jira/browse/KAFKA-12622





[jira] [Created] (KAFKA-12622) Automate LICENSE file validation

2021-04-06 Thread John Roesler (Jira)
John Roesler created KAFKA-12622:


 Summary: Automate LICENSE file validation
 Key: KAFKA-12622
 URL: https://issues.apache.org/jira/browse/KAFKA-12622
 Project: Kafka
  Issue Type: Task
Reporter: John Roesler
 Fix For: 3.0.0, 2.8.1


In https://issues.apache.org/jira/browse/KAFKA-12602, we manually constructed a 
correct license file for 2.8.0. This file will certainly become wrong again in 
later releases, so we need to write some kind of script to automate a check.

It crossed my mind to automate the generation of the file, but it seems to be 
an intractable problem, considering that each dependency may change licenses, 
may package license files, link to them from their poms, link to them from 
their repos, etc. I've also found multiple URLs listed with various delimiters, 
broken links that I have to chase down, etc.

Therefore, it seems like the solution to aim for is simply: list all the jars 
that we package, and print out a report of each jar that's extra or missing vs. 
the ones in our `LICENSE-binary` file.

Here's how I do this manually right now:
{code:java}
// build the binary artifacts
$ ./gradlewAll releaseTarGz

// unpack the binary artifact
$ cd core/build/distributions/
$ tar xf kafka_2.13-X.Y.Z.tgz
$ cd kafka_2.13-X.Y.Z

// list the packaged jars 
// (you can ignore the jars for our own modules, like kafka, kafka-clients, 
etc.)
$ ls libs/

// cross check the jars with the packaged LICENSE
// make sure all dependencies are listed with the right versions
$ cat LICENSE

// also double check all the mentioned license files are present
$ ls licenses
{code}
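The manual steps above could be automated roughly as follows. This is only a sketch of the cross-check idea: the `libs/` layout and the assumption that `LICENSE-binary` mentions dependencies as `name-version` tokens come from the description above, not from any actual Kafka tooling:

```python
import os
import re


def jar_names(libs_dir):
    """Return the name-version stems of packaged jars, e.g. 'zstd-jni-1.4.9'."""
    return {f[:-4] for f in os.listdir(libs_dir) if f.endswith(".jar")}


def licensed_names(license_file):
    """Collect 'name-version' tokens mentioned in the LICENSE file.

    Assumption: dependencies appear somewhere in the file as name-1.2.3.
    """
    with open(license_file) as fh:
        text = fh.read()
    return set(re.findall(r"[A-Za-z0-9_.\-]+-\d+(?:\.\d+)+(?:[-.][A-Za-z0-9]+)*", text))


def report(libs_dir, license_file, own_prefixes=("kafka",)):
    """Report jars missing from the LICENSE and LICENSE entries with no jar."""
    jars = {j for j in jar_names(libs_dir)
            if not j.startswith(own_prefixes)}  # ignore our own modules
    listed = licensed_names(license_file)
    return {"missing_from_license": sorted(jars - listed),
            "extra_in_license": sorted(listed - jars)}
```

Run against an unpacked release, a non-empty report would flag exactly the drift the ticket describes: a newly added jar that never made it into `LICENSE-binary`, or a stale entry for a dependency that was removed.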





[jira] [Created] (KAFKA-12621) Kafka setup with Zookeeper specifying an alternate znode as root fails

2021-04-06 Thread Jibitesh Prasad (Jira)
Jibitesh Prasad created KAFKA-12621:
---

 Summary: Kafka setup with Zookeeper specifying an alternate znode 
as root fails
 Key: KAFKA-12621
 URL: https://issues.apache.org/jira/browse/KAFKA-12621
 Project: Kafka
  Issue Type: Bug
  Components: config
Affects Versions: 2.6.1
 Environment: Linux
OS: 16.04.1-Ubuntu SMP 
Architecture: x86_64
Kernel Version: 4.15.0-1108-azure
Reporter: Jibitesh Prasad


While configuring Kafka with a znode other than "/", the configuration is 
created in the wrong znode. For example, I have the following entry in my 
server.properties

_zookeeper.connect=10.114.103.207:2181/kafka_secondary_cluster,10.114.103.206:2181/kafka_secondary_cluster,10.114.103.205:2181/kafka_secondary_cluster_

The IPs are the IP addresses of the nodes of zookeeper cluster. I expect the 
kafka server to use _kafka_secondary_cluster_ as the znode in the zookeeper 
nodes. But, the znode which is created is actually

_/kafka_secondary_cluster,10.114.103.206:2181/kafka_secondary_cluster,10.114.103.205:2181/kafka_secondary_cluster_

Executing ls on the above path shows me the necessary znodes being created in 
that path _[zk: localhost:2181(CONNECTED) 1] ls 
/kafka_secondary_cluster,10.114.103.206:2181/kafka_secondary_cluster,10.114.103.205:2181/kafka_secondary_cluster_

Output:
_[admin, brokers, cluster, config, consumers, controller, controller_epoch, 
isr_change_notification, latest_producer_id_block, log_dir_event_notification]_

Shouldn't these configurations be created under _/kafka_secondary_cluster_? It 
seems the comma-separated values are not being split correctly. Or am I doing 
something wrong?
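For reference, Kafka's documented `zookeeper.connect` syntax places the chroot exactly once, after the last host:port pair; repeating it per host (as in the config above) makes everything after the first comma part of the chroot path. A small helper (the function name is my own, for illustration) that builds the string in the documented form:

```python
def zookeeper_connect(hosts, chroot=""):
    """Build a zookeeper.connect value: host1:port1,...,hostN:portN[/chroot].

    Per Kafka's documented format, the optional chroot appears exactly once,
    after the final host:port pair -- never repeated per host.
    """
    value = ",".join(hosts)
    if chroot:
        value += "/" + chroot.strip("/")
    return value


hosts = ["10.114.103.207:2181", "10.114.103.206:2181", "10.114.103.205:2181"]
print(zookeeper_connect(hosts, "kafka_secondary_cluster"))
# 10.114.103.207:2181,10.114.103.206:2181,10.114.103.205:2181/kafka_secondary_cluster
```

With the chroot appended only once, the broker creates `/kafka_secondary_cluster/brokers`, `/kafka_secondary_cluster/config`, etc., as expected.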





[jira] [Created] (KAFKA-12620) Producer IDs generated by the controller

2021-04-06 Thread David Arthur (Jira)
David Arthur created KAFKA-12620:


 Summary: Producer IDs generated by the controller
 Key: KAFKA-12620
 URL: https://issues.apache.org/jira/browse/KAFKA-12620
 Project: Kafka
  Issue Type: New Feature
Reporter: David Arthur
Assignee: David Arthur


This is to track the implementation of 
[KIP-730|https://cwiki.apache.org/confluence/display/KAFKA/KIP-730%3A+Producer+ID+generation+in+KRaft+mode]





Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #9

2021-04-06 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-633: Drop 24 hour default of grace period in Streams

2021-04-06 Thread Lotz Utfpr
Makes sense to me! +1

Apologies for being brief. This email was sent from my mobile phone.

> On 6 Apr 2021, at 18:45, Walker Carlson  wrote:
> 
> This makes sense to me +1!
> 
> Walker
> 
>> On Tue, Apr 6, 2021 at 11:08 AM Guozhang Wang  wrote:
>> 
>> +1. Thanks!
>> 
>> On Tue, Apr 6, 2021 at 7:00 AM Leah Thomas 
>> wrote:
>> 
>>> Thanks for picking this up, Sophie. +1 from me, non-binding.
>>> 
>>> Leah
>>> 
 On Mon, Apr 5, 2021 at 9:42 PM John Roesler  wrote:
>>> 
 Thanks, Sophie,
 
 I’m +1 (binding)
 
 -John
 
 On Mon, Apr 5, 2021, at 21:34, Sophie Blee-Goldman wrote:
> Hey all,
> 
> I'd like to start the voting on KIP-633, to drop the awkward 24 hour
 grace
> period and improve the API to raise visibility on an important
>> concept
>>> in
> Kafka Streams: grace period and out-of-order data handling.
> 
> Here's the KIP:
> 
 
>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24+hour+default+of+grace+period+in+Streams
> <
 
>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24hr+default+grace+period
> 
> 
> Cheers,
> Sophie
> 
 
>>> 
>> 
>> 
>> --
>> -- Guozhang
>> 


Re: [VOTE] KIP-633: Drop 24 hour default of grace period in Streams

2021-04-06 Thread Walker Carlson
This makes sense to me +1!

Walker

On Tue, Apr 6, 2021 at 11:08 AM Guozhang Wang  wrote:

> +1. Thanks!
>
> On Tue, Apr 6, 2021 at 7:00 AM Leah Thomas 
> wrote:
>
> > Thanks for picking this up, Sophie. +1 from me, non-binding.
> >
> > Leah
> >
> > On Mon, Apr 5, 2021 at 9:42 PM John Roesler  wrote:
> >
> > > Thanks, Sophie,
> > >
> > > I’m +1 (binding)
> > >
> > > -John
> > >
> > > On Mon, Apr 5, 2021, at 21:34, Sophie Blee-Goldman wrote:
> > > > Hey all,
> > > >
> > > > I'd like to start the voting on KIP-633, to drop the awkward 24 hour
> > > grace
> > > > period and improve the API to raise visibility on an important
> concept
> > in
> > > > Kafka Streams: grace period and out-of-order data handling.
> > > >
> > > > Here's the KIP:
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24+hour+default+of+grace+period+in+Streams
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24hr+default+grace+period
> > > >
> > > >
> > > > Cheers,
> > > > Sophie
> > > >
> > >
> >
>
>
> --
> -- Guozhang
>


Re: [VOTE] KIP-725: Streamlining configurations for TimeWindowedDeserializer.

2021-04-06 Thread Guozhang Wang
+1. Thanks!

On Tue, Apr 6, 2021 at 7:01 AM Leah Thomas 
wrote:

> Hi Sagar, +1 non-binding. Thanks again for doing this.
>
> Leah
>
> On Mon, Apr 5, 2021 at 9:40 PM John Roesler  wrote:
>
> > Thanks, Sagar!
> >
> > I’m +1 (binding)
> >
> > -John
> >
> > On Mon, Apr 5, 2021, at 21:35, Sophie Blee-Goldman wrote:
> > > Thanks for the KIP! +1 (binding) from me
> > >
> > > Cheers,
> > > Sophie
> > >
> > > On Mon, Apr 5, 2021 at 7:13 PM Sagar 
> wrote:
> > >
> > > > Hi All,
> > > >
> > > > I would like to start voting on the following KIP:
> > > >
> > > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930
> > > >
> > > > Thanks!
> > > > Sagar.
> > > >
> > >
> >
>


-- 
-- Guozhang


Re: [VOTE] KIP-633: Drop 24 hour default of grace period in Streams

2021-04-06 Thread Guozhang Wang
+1. Thanks!

On Tue, Apr 6, 2021 at 7:00 AM Leah Thomas 
wrote:

> Thanks for picking this up, Sophie. +1 from me, non-binding.
>
> Leah
>
> On Mon, Apr 5, 2021 at 9:42 PM John Roesler  wrote:
>
> > Thanks, Sophie,
> >
> > I’m +1 (binding)
> >
> > -John
> >
> > On Mon, Apr 5, 2021, at 21:34, Sophie Blee-Goldman wrote:
> > > Hey all,
> > >
> > > I'd like to start the voting on KIP-633, to drop the awkward 24 hour
> > grace
> > > period and improve the API to raise visibility on an important concept
> in
> > > Kafka Streams: grace period and out-of-order data handling.
> > >
> > > Here's the KIP:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24+hour+default+of+grace+period+in+Streams
> > > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24hr+default+grace+period
> > >
> > >
> > > Cheers,
> > > Sophie
> > >
> >
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-718: Make KTable Join on Foreign key unopinionated

2021-04-06 Thread John Roesler
Hi Marco,

Just a quick clarification: I just reviewed the Materialized class. It looks 
like the only undesirable members are:
1. Retention
2. Key/Value serdes 

The underlying store type would be “KeyValueStore”, in which 
case the withRetention javadoc already says it’s ignored.

Perhaps we could just stick with Materialized by adding a note to the Key/Value 
serdes setters that they are ignored for FKJoin subscription stores?

Not as elegant as a new config class, but these config classes actually bring a 
fair amount of complexity, so it might be nice to avoid a new one.

Thanks,
John

On Tue, Apr 6, 2021, at 10:28, Marco Aurélio Lotz wrote:
> Hi John / Guozhang,
> 
> If I correctly understood John's message, he agrees on having the two
> scenarios (piggy-back and api extension). In my view, these two scenarios
> are separate tasks - the first one is a bug-fix and the second is an
> improvement on the current API.
> 
> - bug-fix: On the current API, we change its implementation to piggy back
> on the materialization method provided to the materialized parameter. This
> way it will not be opinionated anymore and will not force RocksDb
> persistence for subscription store. Thus an in-memory materialized
> parameter would imply an in-memory subscription store, for example. From my
> > > understanding, the original implementation tried to be as unopinionated
> towards storage methods as possible - and the current implementation is not
> allowing that. Was that the case? We would still need to add this
> modification to the update notes, since it may affect some deployments.
> 
> - improvement: We extend the API to allow a user to fine tune different
> materialization methods for subscription and join store. This is done by
> adding a new parameter to the associated methods.
> 
> Does it sound reasonable to you Guozhang?
> On your question, does it make sense for a user to decide retention
> policies (the withRetention method) or caching for subscription stores? I
> can see why one might fine-tune logging, for example, but initially not
> these other behaviours. That's why I am unsure about using the Materialized
> class.
> 
> @John, I will update the KIP with your points as soon as we clarify this.
> 
> Cheers,
> Marco
> 
> On Tue, Apr 6, 2021 at 1:17 AM Guozhang Wang  wrote:
> 
> > Thanks Marco / John,
> >
> > I think the arguments for not piggy-backing on the existing Materialized
> > makes sense; on the other hand, if we go this route should we just use a
> > separate Materialized than using an extended /
> > narrowed-scoped MaterializedSubscription since it seems we want to allow
> > users to fully customize this store?
> >
> > Guozhang
> >
> > On Thu, Apr 1, 2021 at 5:28 PM John Roesler  wrote:
> >
> > > Thanks Marco,
> > >
> > > Sorry if I caused any trouble!
> > >
> > > I don’t remember what I was thinking before, but reasoning about it now,
> > > you might need the fine-grained choice if:
> > >
> > > 1. The number or size of records in each partition of both tables is
> > > small(ish), but the cardinality of the join is very high. Then you might
> > > want an in-memory table store, but an on-disk subscription store.
> > >
> > > 2. The number or size of records is very large, but the join cardinality
> > > is low. Then you might need an on-disk table store, but an in-memory
> > > subscription store.
> > >
> > > 3. You might want a different kind (or differently configured) store for
> > > the subscription store, since it’s access pattern is so different.
> > >
> > > If you buy these, it might be good to put the justification into the KIP.
> > >
> > > I’m in favor of the default you’ve proposed.
> > >
> > > Thanks,
> > > John
> > >
> > > On Mon, Mar 29, 2021, at 04:24, Marco Aurélio Lotz wrote:
> > > > Hi Guozhang,
> > > >
> > > > Apologies for the late answer. Originally that was my proposal - to
> > > > piggyback on the provided materialisation method (
> > > > https://issues.apache.org/jira/browse/KAFKA-10383).
> > > > John Roesler suggested to us to provide even further fine tuning on API
> > > > level parameters. Maybe we could see this as two sides of the same
> > coin:
> > > >
> > > > - On the current API, we change it to piggy back on the materialization
> > > > method provided to the join store.
> > > > - We extend the API to allow a user to fine tune different
> > > materialization
> > > > methods for subscription and join store.
> > > >
> > > > What do you think?
> > > >
> > > > Cheers,
> > > > Marco
> > > >
> > > > On Thu, Mar 4, 2021 at 8:04 PM Guozhang Wang 
> > wrote:
> > > >
> > > > > Thanks Marco,
> > > > >
> > > > > Just a quick thought: what if we reuse the existing Materialized
> > > object for
> > > > > both subscription and join stores, instead of introducing a new
> > param /
> > > > > class?
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Tue, Mar 2, 2021 at 1:07 AM Marco Aurélio Lotz <
> > > cont...@marcolotz.com>
> > > > > wrote:
> > > > >
> > > > > > Hi folks,
> > > > > >
> > > > > 

Re: [DISCUSS] KIP-718: Make KTable Join on Foreign key unopinionated

2021-04-06 Thread Marco Aurélio Lotz
Hi John / Guozhang,

If I correctly understood John's message, he agrees on having the two
scenarios (piggy-back and api extension). In my view, these two scenarios
are separate tasks - the first one is a bug-fix and the second is an
improvement on the current API.

- bug-fix: On the current API, we change its implementation to piggy back
on the materialization method provided to the materialized parameter. This
way it will not be opinionated anymore and will not force RocksDb
persistence for subscription store. Thus an in-memory materialized
parameter would imply an in-memory subscription store, for example. From my
understanding, the original implementation tried to be as unopinionated
towards storage methods as possible - and the current implementation is not
allowing that. Was that the case? We would still need to add this
modification to the update notes, since it may affect some deployments.

- improvement: We extend the API to allow a user to fine tune different
materialization methods for subscription and join store. This is done by
adding a new parameter to the associated methods.

Does it sound reasonable to you Guozhang?
On your question, does it make sense for a user to decide retention
policies (the withRetention method) or caching for subscription stores? I
can see why one might fine-tune logging, for example, but initially not
these other behaviours. That's why I am unsure about using the Materialized
class.

@John, I will update the KIP with your points as soon as we clarify this.

Cheers,
Marco

On Tue, Apr 6, 2021 at 1:17 AM Guozhang Wang  wrote:

> Thanks Marco / John,
>
> I think the arguments for not piggy-backing on the existing Materialized
> makes sense; on the other hand, if we go this route should we just use a
> separate Materialized than using an extended /
> narrowed-scoped MaterializedSubscription since it seems we want to allow
> users to fully customize this store?
>
> Guozhang
>
> On Thu, Apr 1, 2021 at 5:28 PM John Roesler  wrote:
>
> > Thanks Marco,
> >
> > Sorry if I caused any trouble!
> >
> > I don’t remember what I was thinking before, but reasoning about it now,
> > you might need the fine-grained choice if:
> >
> > 1. The number or size of records in each partition of both tables is
> > small(ish), but the cardinality of the join is very high. Then you might
> > want an in-memory table store, but an on-disk subscription store.
> >
> > 2. The number or size of records is very large, but the join cardinality
> > is low. Then you might need an on-disk table store, but an in-memory
> > subscription store.
> >
> > 3. You might want a different kind (or differently configured) store for
> > the subscription store, since it’s access pattern is so different.
> >
> > If you buy these, it might be good to put the justification into the KIP.
> >
> > I’m in favor of the default you’ve proposed.
> >
> > Thanks,
> > John
> >
> > On Mon, Mar 29, 2021, at 04:24, Marco Aurélio Lotz wrote:
> > > Hi Guozhang,
> > >
> > > Apologies for the late answer. Originally that was my proposal - to
> > > piggyback on the provided materialisation method (
> > > https://issues.apache.org/jira/browse/KAFKA-10383).
> > > John Roesler suggested to us to provide even further fine tuning on API
> > > level parameters. Maybe we could see this as two sides of the same
> coin:
> > >
> > > - On the current API, we change it to piggy back on the materialization
> > > method provided to the join store.
> > > - We extend the API to allow a user to fine tune different
> > materialization
> > > methods for subscription and join store.
> > >
> > > What do you think?
> > >
> > > Cheers,
> > > Marco
> > >
> > > On Thu, Mar 4, 2021 at 8:04 PM Guozhang Wang 
> wrote:
> > >
> > > > Thanks Marco,
> > > >
> > > > Just a quick thought: what if we reuse the existing Materialized
> > object for
> > > > both subscription and join stores, instead of introducing a new
> param /
> > > > class?
> > > >
> > > > Guozhang
> > > >
> > > > On Tue, Mar 2, 2021 at 1:07 AM Marco Aurélio Lotz <
> > cont...@marcolotz.com>
> > > > wrote:
> > > >
> > > > > Hi folks,
> > > > >
> > > > > I would like to invite everyone to discuss further KIP-718:
> > > > >
> > > > >
> > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-718%3A+Make+KTable+Join+on+Foreign+key+unopinionated
> > > > >
> > > > > I welcome all feedback on it.
> > > > >
> > > > > Kind Regards,
> > > > > Marco Lotz
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
>
>
> --
> -- Guozhang
>


Re: [VOTE] KIP-707: The future of KafkaFuture

2021-04-06 Thread Tom Bentley
Hi,

The vote passes with 4 binding +1s (Ismael, David Chia-Ping and Colin), and
1 non-binding +1 (Ryanne).

Many thanks to those who commented and/or voted.

Tom

On Thu, Apr 1, 2021 at 8:21 PM Colin McCabe  wrote:

> +1 (binding).  Thanks for the KIP.
>
> Colin
>
>
> On Tue, Mar 30, 2021, at 20:36, Chia-Ping Tsai wrote:
> > Thanks for this KIP. +1 (binding)
> >
> > On 2021/03/29 15:34:55, Tom Bentley  wrote:
> > > Hi,
> > >
> > > I'd like to start a vote on KIP-707, which proposes to add
> > > KafkaFuture.toCompletionStage(), deprecate KafkaFuture.Function and
> make a
> > > couple of other minor cosmetic changes.
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-707%3A+The+future+of+KafkaFuture
> > >
> > > Many thanks,
> > >
> > > Tom
> > >
> >
>
>


Jenkins build is unstable: Kafka » Kafka Branch Builder » 2.8 #3

2021-04-06 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-633: Drop 24 hour default of grace period in Streams

2021-04-06 Thread Leah Thomas
Thanks for picking this up, Sophie. +1 from me, non-binding.

Leah

On Mon, Apr 5, 2021 at 9:42 PM John Roesler  wrote:

> Thanks, Sophie,
>
> I’m +1 (binding)
>
> -John
>
> On Mon, Apr 5, 2021, at 21:34, Sophie Blee-Goldman wrote:
> > Hey all,
> >
> > I'd like to start the voting on KIP-633, to drop the awkward 24 hour
> grace
> > period and improve the API to raise visibility on an important concept in
> > Kafka Streams: grace period and out-of-order data handling.
> >
> > Here's the KIP:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24+hour+default+of+grace+period+in+Streams
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24hr+default+grace+period
> >
> >
> > Cheers,
> > Sophie
> >
>


Re: [VOTE] KIP-725: Streamlining configurations for TimeWindowedDeserializer.

2021-04-06 Thread Leah Thomas
Hi Sagar, +1 non-binding. Thanks again for doing this.

Leah

On Mon, Apr 5, 2021 at 9:40 PM John Roesler  wrote:

> Thanks, Sagar!
>
> I’m +1 (binding)
>
> -John
>
> On Mon, Apr 5, 2021, at 21:35, Sophie Blee-Goldman wrote:
> > Thanks for the KIP! +1 (binding) from me
> >
> > Cheers,
> > Sophie
> >
> > On Mon, Apr 5, 2021 at 7:13 PM Sagar  wrote:
> >
> > > Hi All,
> > >
> > > I would like to start voting on the following KIP:
> > >
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930
> > >
> > > Thanks!
> > > Sagar.
> > >
> >
>


help to review patch

2021-04-06 Thread lamber-ken
Hi Kafka community,

In ProducerPerformance, the "random" payload is always the same, which has a 
large impact on results when the compression.type option is used. Here is my 
patch [1]; please review it when you get a chance. Thanks.


best,


[1] https://github.com/apache/kafka/pull/10469
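The effect described above is easy to demonstrate: if every message carries the same payload, a compressing producer achieves an unrealistically high compression ratio compared with genuinely varied payloads. A standalone illustration using zlib (standing in for Kafka's compression codecs):

```python
import os
import zlib

# A batch worth of data: 100 messages of 1 KiB each.
repeated = b"x" * 1024 * 100            # the same payload reused every time
random_batch = os.urandom(1024 * 100)   # fresh random bytes instead

ratio_repeated = len(zlib.compress(repeated)) / len(repeated)
ratio_random = len(zlib.compress(random_batch)) / len(random_batch)

# Repeated payloads compress almost to nothing, while random data does not
# shrink at all -- so a benchmark that reuses one payload wildly overstates
# compressed throughput.
print(f"repeated: {ratio_repeated:.3f}, random: {ratio_random:.3f}")
```

This is why a performance tool should generate a distinct payload per record (or at least per batch) when a compression codec is configured.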

Re: New Jenkins job for master and release branches

2021-04-06 Thread Ismael Juma
On Sun, Apr 4, 2021 at 2:22 PM Ismael Juma  wrote:

> There is currently an open PR  to
> extend the Jenkinsfile with functionality desired for branch builds. Once
> that is merged and has been shown to work correctly, I will delete legacy
> Jenkins jobs like:
>
>- https://ci-builds.apache.org/job/Kafka/job/kafka-2.8-jdk8/
>- https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk11/
>
This is now done.

Ismael


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #8

2021-04-06 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 404224 lines...]
[2021-04-06T09:59:48.276Z] 
[2021-04-06T09:59:48.276Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() PASSED
[2021-04-06T09:59:48.276Z] 
[2021-04-06T09:59:48.276Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls() STARTED
[2021-04-06T09:59:58.260Z] 
[2021-04-06T09:59:58.260Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls() PASSED
[2021-04-06T09:59:58.260Z] 
[2021-04-06T09:59:58.260Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe() STARTED
[2021-04-06T10:00:07.039Z] 
[2021-04-06T10:00:07.039Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe() PASSED
[2021-04-06T10:00:07.039Z] 
[2021-04-06T10:00:07.039Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign() STARTED
[2021-04-06T10:00:16.711Z] 
[2021-04-06T10:00:16.711Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign() PASSED
[2021-04-06T10:00:16.711Z] 
[2021-04-06T10:00:16.711Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoGroupAcl() STARTED
[2021-04-06T10:00:25.050Z] 
[2021-04-06T10:00:25.050Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoGroupAcl() PASSED
[2021-04-06T10:00:25.050Z] 
[2021-04-06T10:00:25.050Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl() STARTED
[2021-04-06T10:00:32.399Z] 
[2021-04-06T10:00:32.399Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl() PASSED
[2021-04-06T10:00:32.399Z] 
[2021-04-06T10:00:32.399Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() STARTED
[2021-04-06T10:00:44.861Z] 
[2021-04-06T10:00:44.861Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() PASSED
[2021-04-06T10:00:44.861Z] 
[2021-04-06T10:00:44.861Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe() STARTED
[2021-04-06T10:00:54.489Z] 
[2021-04-06T10:00:54.489Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe() PASSED
[2021-04-06T10:00:54.489Z] 
[2021-04-06T10:00:54.489Z] SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials() STARTED
[2021-04-06T10:01:02.964Z] 
[2021-04-06T10:01:02.964Z] SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials() PASSED
[2021-04-06T10:01:02.964Z] 
[2021-04-06T10:01:02.964Z] SaslPlainSslEndToEndAuthorizationTest > testAcls() 
STARTED
[2021-04-06T10:01:11.629Z] 
[2021-04-06T10:01:11.630Z] SaslPlainSslEndToEndAuthorizationTest > testAcls() 
PASSED
[2021-04-06T10:01:11.630Z] 
[2021-04-06T10:01:11.630Z] SaslSslAdminIntegrationTest > 
testCreateDeleteTopics() STARTED
[2021-04-06T10:01:46.069Z] 
[2021-04-06T10:01:46.069Z] SaslSslAdminIntegrationTest > 
testCreateDeleteTopics() PASSED
[2021-04-06T10:01:46.069Z] 
[2021-04-06T10:01:46.069Z] SaslSslAdminIntegrationTest > 
testAuthorizedOperations() STARTED
[2021-04-06T10:02:11.869Z] 
[2021-04-06T10:02:11.869Z] SaslSslAdminIntegrationTest > 
testAuthorizedOperations() PASSED
[2021-04-06T10:02:11.869Z] 
[2021-04-06T10:02:11.869Z] SaslSslAdminIntegrationTest > testAclDescribe() 
STARTED
[2021-04-06T10:02:36.693Z] 
[2021-04-06T10:02:36.693Z] SaslSslAdminIntegrationTest > testAclDescribe() 
PASSED
[2021-04-06T10:02:36.693Z] 
[2021-04-06T10:02:36.693Z] SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed() STARTED
[2021-04-06T10:03:06.265Z] 
[2021-04-06T10:03:06.265Z] SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed() PASSED
[2021-04-06T10:03:06.265Z] 
[2021-04-06T10:03:06.265Z] SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig() STARTED
[2021-04-06T10:03:31.725Z] 
[2021-04-06T10:03:31.725Z] SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig() PASSED
[2021-04-06T10:03:31.725Z] 
[2021-04-06T10:03:31.725Z] SaslSslAdminIntegrationTest > 
testAttemptToCreateInvalidAcls() STARTED
[2021-04-06T10:03:57.783Z] 
[2021-04-06T10:03:57.783Z] SaslSslAdminIntegrationTest > 
testAttemptToCreateInvalidAcls() PASSED
[2021-04-06T10:03:57.783Z] 
[2021-04-06T10:03:57.783Z] SaslSslAdminIntegrationTest > 
testAclAuthorizationDenied() STARTED
[2021-04-06T10:04:23.348Z] 
[2021-04-06T10:04:23.348Z] SaslSslAdminIntegrationTest > 
testAclAuthorizationDenied() PASSED
[2021-04-06T10:04:23.348Z] 
[2021-04-06T10:04:23.348Z] SaslSslAdminIntegrationTest > testAclOperations() 
STARTED
[2021-04-06T10:04:52.727Z] 
[2021-04-06T10:04:52.727Z] SaslSslAdminIntegrationTest > testAclOperations() 
PASSED
[2021-04-06T10:04:52.727Z] 
[2021-04-06T10:04:52.727Z] SaslSslAdminIntegrationTest > testAclOperations2() 
STARTED
[2021-04-06T10:05:22.791Z] 
[2021-04-06T10:05:22.791Z] 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #673

2021-04-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: update GroupMetadataManager#getMagic docs (#10442)


--
[...truncated 3.73 MB...]

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() STARTED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() PASSED

LiteralAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

LiteralAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

LiteralAclStoreTest > shouldHaveCorrectPatternType() STARTED

LiteralAclStoreTest > shouldHaveCorrectPatternType() PASSED

LiteralAclStoreTest > shouldDecodeResourceUsingTwoPartLogic() STARTED

LiteralAclStoreTest > shouldDecodeResourceUsingTwoPartLogic() PASSED

ReassignPartitionsZNodeTest > testDecodeInvalidJson() STARTED

ReassignPartitionsZNodeTest > testDecodeInvalidJson() PASSED

ReassignPartitionsZNodeTest > testEncode() STARTED

ReassignPartitionsZNodeTest > testEncode() PASSED

ReassignPartitionsZNodeTest > 

Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-04-06 Thread Tom Bentley
Hi Soumyajit,

Although that class does indeed have public access at the Java level, it
does so only because it needs to be used by internal Kafka code which lives
in other packages (there isn't any more restrictive access modifier which
would work). What the project considers public Java API is determined by
what's included in the published Javadocs:
https://kafka.apache.org/27/javadoc/index.html, which doesn't include the
org.apache.kafka.common.record package.

One of the problems with making these internal classes public is it ties
the project into supporting them as APIs, which can make changing them much
harder and in the long run that can slow, or even prevent, innovation in
the rest of Kafka.

Kind regards,

Tom



On Sun, Apr 4, 2021 at 7:31 PM Soumyajit Sahu 
wrote:

> Hi Colin,
> I see that both the interface "Record" and the implementation
> "DefaultRecord" being used in LogValidator.java are public
> interfaces/classes.
>
>
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/Records.java
> and
>
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
>
> So, it should be ok to use them. Let me know what you think.
>
> Thanks,
> Soumyajit
>
>
> On Fri, Apr 2, 2021 at 8:51 AM Colin McCabe  wrote:
>
> > Hi Soumyajit,
> >
> > I believe we've had discussions about proposals similar to this before,
> > although I'm having trouble finding one right now.  The issue here is
> that
> > Record is a private class -- it is not part of any public API, and may
> > change at any time.  So we can't expose it in public APIs.
> >
> > best,
> > Colin
> >
> >
> > On Thu, Apr 1, 2021, at 14:18, Soumyajit Sahu wrote:
> > > Hello All,
> > > I would like to start a discussion on the KIP-729.
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-729%3A+Custom+validation+of+records+on+the+broker+prior+to+log+append
> > >
> > > Thanks!
> > > Soumyajit
> > >
> >
>
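
For readers unfamiliar with KIP-729's goal, here is a minimal hypothetical sketch of what a broker-side record validation hook could look like. The interface name, method signature, and example policy are all invented for illustration and are not part of any Kafka API — which, as the discussion above notes, is exactly the open question, since the KIP would need a supported public type to expose record contents.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch only: BrokerRecordValidator is an invented name, not a
// Kafka interface. It illustrates the idea of rejecting records before the
// broker appends them to the log.
public class RecordValidatorSketch {

    // A validator inspects each record before append; returning false would
    // cause the broker to reject it (e.g. fail the produce request).
    interface BrokerRecordValidator {
        boolean validate(String topic, byte[] key, byte[] value);
    }

    // Example policy: reject null or empty values.
    static final BrokerRecordValidator NON_EMPTY = (topic, key, value) ->
            value != null && value.length > 0;

    public static void main(String[] args) {
        byte[] ok = "payload".getBytes(StandardCharsets.UTF_8);
        System.out.println(NON_EMPTY.validate("orders", null, ok));          // true
        System.out.println(NON_EMPTY.validate("orders", null, new byte[0])); // false
    }
}
```

A real hook would also have to decide whether rejection fails the single record or the whole batch, which is part of what the KIP discussion needs to settle.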


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #7

2021-04-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #672

2021-04-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Jenkinsfile's `post` needs `agent` to be set (#10479)


--
[...truncated 3.70 MB...]

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenRecordListTooLargeError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenRecordListTooLargeError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenInvalidProducerEpoch() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenInvalidProducerEpoch() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotLeaderOrFollowerError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotLeaderOrFollowerError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenInvalidRequiredAcksError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenInvalidRequiredAcksError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldReEnqueuePartitionsWhenBrokerDisconnected() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldReEnqueuePartitionsWhenBrokerDisconnected() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotEnoughReplicasError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotEnoughReplicasError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenNoErrors() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenNoErrors() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenCorruptMessageError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenCorruptMessageError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenCoordinatorLoading() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenCoordinatorLoading() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWheCoordinatorEpochFenced() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWheCoordinatorEpochFenced() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenUnknownError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenUnknownError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenNotCoordinator() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenNotCoordinator() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenCoordinatorEpochChanged() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenCoordinatorEpochChanged() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenUnknownTopicOrPartitionError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenUnknownTopicOrPartitionError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotEnoughReplicasAfterAppendError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotEnoughReplicasAfterAppendError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenMessageTooLargeError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenMessageTooLargeError() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionIfErrorCodeNotAvailableForPid() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionIfErrorCodeNotAvailableForPid() PASSED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenKafkaStorageError() STARTED

TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenKafkaStorageError() PASSED

RequestChannelTest > testNonAlterRequestsNotTransformed() STARTED

RequestChannelTest > testNonAlterRequestsNotTransformed() PASSED

RequestChannelTest > testAlterRequests() STARTED

RequestChannelTest > testAlterRequests() PASSED

RequestChannelTest > testJsonRequests() STARTED

RequestChannelTest > testJsonRequests() PASSED

RequestChannelTest > testIncrementalAlterRequests() STARTED

RequestChannelTest > testIncrementalAlterRequests() PASSED

RequestConvertToJsonTest > testAllResponseTypesHandled() STARTED

RequestConvertToJsonTest > testAllResponseTypesHandled() PASSED

RequestConvertToJsonTest > testAllRequestTypesHandled() STARTED

RequestConvertToJsonTest > 

Re: contributor permission

2021-04-06 Thread Tom Bentley
Hi Amuthan,

That's now done. Thanks for your interest in Apache Kafka.

Kind regards,

Tom

On Mon, Apr 5, 2021 at 7:52 PM Amuthan  wrote:

> Hi
>
> I would like to contribute to
> https://issues.apache.org/jira/browse/KAFKA-12559, Could you please give
> me
> contributor permission, here is my jira id: *simplyamuthan*
>
> Regards
> Amuthan.
>


[GitHub] [kafka-site] ableegoldman commented on pull request #345: Fix the formatting of example RocksDBConfigSetter

2021-04-06 Thread GitBox


ableegoldman commented on pull request #345:
URL: https://github.com/apache/kafka-site/pull/345#issuecomment-813896409


   >  Also how can I know if it will be picked?
   
   I'll let you know :) 
   
   But you can follow along with the 2.8 release progress by subscribing to 
the dev mailing list. Look out for an email with a subject along the lines of 
`[VOTE] 2.8 RC1`; this means the RC for 2.8 has been cut, and after that the 
release manager may proceed with the copy of docs from kafka/docs to 
kafka-site/28/docs. If your [PR](https://github.com/apache/kafka/pull/10486) 
hasn't been cherrypicked back to the 2.8 branch by then, you'll need to 
reopen or submit a new PR against kafka-site. And of course I'll let you know 
on the PR once it's been merged, cherrypicked to 2.8, and so on.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] cc13ny closed pull request #345: Fix the formatting of example RocksDBConfigSetter

2021-04-06 Thread GitBox


cc13ny closed pull request #345:
URL: https://github.com/apache/kafka-site/pull/345


   






[GitHub] [kafka-site] cc13ny commented on pull request #345: Fix the formatting of example RocksDBConfigSetter

2021-04-06 Thread GitBox


cc13ny commented on pull request #345:
URL: https://github.com/apache/kafka-site/pull/345#issuecomment-813862721


   Thanks. I created https://github.com/apache/kafka/pull/10486. I will close 
it now. But will reopen if it's not picked before the 2.8 release. Also how can 
I know if it will be picked?






Re: Re: [DISCUSS] KIP-706: Add method "Producer#produce" to return CompletionStage instead of Future

2021-04-06 Thread Chia-Ping Tsai
hi Ismael,

In order to minimize the pain (and code changes), I removed all deprecations. 
The main purpose of this KIP is to introduce the new API, so I plan to keep 
using ProducerRecord in the internal process. The new interfaces (SendRecord 
and SendTarget) are used by the new API only, and they are converted to 
ProducerRecord internally. The benefit is that we can focus on the new API 
rather than making many changes to the code base (for example, to 
ProducerInterceptor and other classes which accept ProducerRecord).
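
To make the proposed shape concrete, here is a minimal self-contained sketch of a CompletionStage-based send with a SendRecord stand-in. All type names besides CompletionStage/CompletableFuture are placeholders for the real Kafka types, and the immediate completion is only to illustrate the non-blocking contract, not how the producer actually delivers records.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Illustrative sketch only: SendRecord and Metadata are stand-ins for the
// Kafka types discussed in KIP-706, not the real producer API.
public class SendApiSketch {

    // Stand-in for the proposed SendRecord interface.
    interface SendRecord {
        String topic();
        byte[] value();
    }

    // Stand-in for the metadata the stage completes with.
    static class Metadata {
        final String topic;
        final long offset;
        Metadata(String topic, long offset) { this.topic = topic; this.offset = offset; }
    }

    // New-style send: returns CompletionStage, so callers compose callbacks
    // instead of blocking; blocking callers would keep the old Future-based send.
    static CompletionStage<Metadata> send(SendRecord record) {
        // A real producer would convert the SendRecord to a ProducerRecord and
        // complete the stage from the I/O thread; here we complete immediately.
        return CompletableFuture.completedFuture(new Metadata(record.topic(), 0L));
    }

    static SendRecord sampleRecord() {
        return new SendRecord() {
            public String topic() { return "demo"; }
            public byte[] value() { return "hello".getBytes(); }
        };
    }

    public static void main(String[] args) {
        // Non-blocking composition; a caller who truly needs to block could
        // still do send(r).toCompletableFuture().join().
        send(sampleRecord()).thenAccept(m ->
                System.out.println(m.topic + "@" + m.offset)); // prints demo@0
    }
}
```

Returning the CompletionStage interface (rather than CompletableFuture) also keeps the door open to a custom implementation that forbids external completion, as raised earlier in the thread.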

On 2021/04/02 02:14:40, Ismael Juma  wrote: 
> To avoid a separate KIP, maybe we can agree on the grace period as part of
> this KIP. Maybe 3 releases (~1 year) is a good target?
> 
> Ismael
> 
> On Thu, Apr 1, 2021, 6:39 PM Chia-Ping Tsai  wrote:
> 
> > > Deprecating `send` is going to be extremely disruptive to all existing
> > > users (if you use -Werror, it will require updating every call site).
> > Have
> > > we considered encouraging the usage of the new method while not
> > deprecating
> > > the old methods? We could consider deprecation down the line. The
> > existing
> > > methods work fine for many people, it doesn't seem like a good idea to
> > > penalize them.
> > >
> > > Instead, we can make the new method available for people who benefit from
> > > it. After a grace period (3 releases), we can consider deprecating.
> > > Thoughts?
> >
> > Fair enough. will remove deprecation.
> >
> > On 2021/03/31 14:41:22, Ismael Juma  wrote:
> > > Hi Chia-Ping,
> > >
> > > Deprecating `send` is going to be extremely disruptive to all existing
> > > users (if you use -Werror, it will require updating every call site).
> > Have
> > > we considered encouraging the usage of the new method while not
> > deprecating
> > > the old methods? We could consider deprecation down the line. The
> > existing
> > > methods work fine for many people, it doesn't seem like a good idea to
> > > penalize them.
> > >
> > > Instead, we can make the new method available for people who benefit from
> > > it. After a grace period (3 releases), we can consider deprecating.
> > > Thoughts?
> > >
> > > Ismael
> > >
> > > On Tue, Mar 30, 2021 at 8:50 PM Chia-Ping Tsai 
> > wrote:
> > >
> > > > hi,
> > > >
> > > > I have updated KIP according to my latest response. I will start vote
> > > > thread next week if there is no more comments :)
> > > >
> > > > Best Regards,
> > > > Chia-Ping
> > > >
> > > > On 2021/01/31 05:39:17, Chia-Ping Tsai  wrote:
> > > > > It seems to me changing the input type might complicate the
> > > > > migration from the deprecated send method to the new API.
> > > > >
> > > > > Personally, I prefer to introduce an interface called “SendRecord” to
> > > > > replace ProducerRecord. Hence, the new API/classes are shown below.
> > > > >
> > > > > 1. CompletionStage send(SendRecord)
> > > > > 2. class ProducerRecord implements SendRecord
> > > > > 3. Introduce a builder pattern for SendRecord
> > > > >
> > > > > That includes the following benefits.
> > > > >
> > > > > 1. Kafka users who use neither the return type nor the callback do not
> > > > > need to change code even though we remove the deprecated send methods.
> > > > > (Of course, they still need to compile their code with the new Kafka.)
> > > > >
> > > > > 2. Kafka users who need a Future can easily migrate to the new API by
> > > > > regex replacement (cast ProducerRecord to SendRecord and add
> > > > > toCompletableFuture).
> > > > >
> > > > > 3. It is easy to support topic ids in the future. We can add new
> > > > > methods to the SendRecord builder. For example:
> > > > >
> > > > > Builder topicName(String)
> > > > > Builder topicId(UUID)
> > > > >
> > > > > 4. The builder pattern can make code more readable, especially since
> > > > > ProducerRecord has a lot of fields which can be defined by users.
> > > > > —
> > > > > Chia-Ping
> > > > >
> > > > > On 2021/01/30 22:50:36 Ismael Juma wrote:
> > > > > > Another thing to think about: the consumer api currently has
> > > > > > `subscribe(String|Pattern)` and a number of methods that accept
> > > > > > `TopicPartition`. A similar approach could be used for the
> > Consumer to
> > > > work
> > > > > > with topic ids or topic names. The consumer side also has to
> > support
> > > > > > regexes so it probably makes sense to have a separate interface.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Sat, Jan 30, 2021 at 2:40 PM Ismael Juma 
> > wrote:
> > > > > >
> > > > > > > I think this is a promising idea. I'd personally avoid the
> > overload
> > > > and
> > > > > > > simply have a `Topic` type that implements `SendTarget`. It's a
> > mix
> > > > of both
> > > > > > > proposals: strongly typed, no overloads and general class names
> > that
> > > > > > > implement `SendTarget`.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Sat, Jan 30, 2021 at 2:22 PM Jason Gustafson <
> > ja...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Giving this a little more thought, I imagine sending to a topic
> > is
> > > > the
> > > > > > >> most
> >