Re: Spotting stale KIPs with a better "Under Discussion" table

2022-03-02 Thread Luke Chen
Hi Tom,

Thanks for the nice work!
This is very helpful for understanding the status of each KIP!
And thanks to this tool, it reminded me of a KIP that needs one more vote :)

I agree we can automatically change the state of "black" KIPs, and move
them into the "Discarded KIPs" or "Dormant/inactive KIPs" table with
notification to the KIP authors.
And of course, the KIP authors can always revive those KIPs anytime.

However, I found a small bug where it treats "draft" KIPs as inactive.
I have filed an issue on the GitHub issue page.

Thank you.
Luke





On Thu, Mar 3, 2022 at 2:28 AM Thomas Cooper  wrote:

> Hi All,
>
> I am hoping to get more involved in the upstream Kafka community. To that
> end, I was trying to keep up with the KIPs that were currently under
> discussion. However, I found it hard to keep track of what was and wasn't
> being discussed and the progress they were making. Some KIPs appeared
> abandoned but were still classed as "Under Discussion".
>
> So, during a very rainy week on holiday, I created a tool (which I called
> KIPper [[1](https://github.com/tomncooper/kipper)]) to parse the dev
> mailing list archive and extract all KIP mentions. I paired this with
> information parsed from the confluence (wiki) API to create an enriched
> table of the KIPs Under Discussion [[2](
> https://tomncooper.github.io/kipper/)].
>
> The table shows a "Status" for each KIP, which is based on the last time
> the KIP was mentioned in the subject line of an email on the dev mailing
> list. Green for within the last month, yellow for the last 3 months and red
> for within the last year. If the status is black then it hasn't been
> mentioned in over a year.
>
> I also added vote information, but this is only indicative as it is based
> on parsing the non-reply lines (without ">" in) of the email bodies so
> could hold false positives.
>
> In the spirit of the discussion on closing stale PRs [[3](
> https://lists.apache.org/thread/66yj9m6tcyz8zqb3lqlbnr386bqwsopt)], it
> might be a good idea to introduce a new KIP "state" alongside "Under
> Discussion", "Accepted" and "Rejected" (and their numerous variants [[4](
> https://github.com/tomncooper/kipper/blob/0bbb5595e79a9e075b0d2dc907c84693734d7846/kipper/wiki.py#L54)]).
> Maybe a KIP with a black status and no votes could be moved to a "Stale" or
> "Rejected" state?
>
> The kipper page is statically generated at the moment, so it could be updated
> every day with a cron job. The data used to create the page could also be
> used to drive automation, perhaps emailing the KIP's author once a KIP hits
> "Red" status and then automatically setting the state to stale once it
> turns "Black"?
>
> Anyway, I learned a lot making the tool and I now feel I have a better
> handle on the state of various KIPs. I hope others find it useful. There is
> loads of information to be harvested from the mailing list and wiki APIs, so
> if anyone has any feature requests please post issues on the GH page. I
> had one suggestion of performing sentiment analysis on the email bodies
> related to each KIP, to get a feel for how the KIP was being received. But
> maybe that is a step too far...
>
> Cheers,
>
> [1] https://github.com/tomncooper/kipper
> [2] https://tomncooper.github.io/kipper/
> [3] https://lists.apache.org/thread/66yj9m6tcyz8zqb3lqlbnr386bqwsopt
> [4]
> https://github.com/tomncooper/kipper/blob/0bbb5595e79a9e075b0d2dc907c84693734d7846/kipper/wiki.py#L54
>
> Tom Cooper
>
> [@tomncooper](https://twitter.com/tomncooper) | https://tomcooper.dev


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #732

2022-03-02 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-792: Add "generation" field into consumer protocol

2022-03-02 Thread Luke Chen
Hi David,

If you don't have other comments, would you vote for the KIP?

Thank you.
Luke

On Tue, Feb 22, 2022 at 3:13 PM David Jacot  wrote:

> Thanks, Luke!
>
> Le mar. 22 févr. 2022 à 08:02, Luke Chen  a écrit :
>
> > Hi David,
> >
> > Thanks for the comment.
> > I've updated the KIP to add the method that will be added to the
> > `Subscription` class:
> >
> > // newly added generationId getter
> > public int generationId() {
> >     return generationId;
> > }
> >
> > Thank you.
> > Luke
> >
> > On Mon, Feb 21, 2022 at 5:24 PM David Jacot  >
> > wrote:
> >
> > > Hi Luke,
> > >
> > > I apologize for my late reply. I was out for a while.
> > >
> > > Coming back to my previous point, could you also
> > > spell out the new method(s) that we need to add to
> > > the Subscription class?
> > >
> > > Thanks,
> > > David
> > >
> > > On Mon, Feb 14, 2022 at 6:28 PM Guozhang Wang 
> > wrote:
> > > >
> > > > Thanks Luke, no more comments from me, nice work!
> > > >
> > > > On Mon, Feb 14, 2022 at 5:22 AM Luke Chen  wrote:
> > > >
> > > > > Hi Guozhang,
> > > > >
> > > > > Thanks for your comments. I've updated the KIP.
> > > > > Here's what I've updated:
> > > > >
> > > > > * In the motivation section, I've added this paragraph after
> > > > > cooperativeStickyAssignor like this:
> > > > >
> > > > > *On the other hand, `StickyAssignor` is also adding the "generation"
> > > > > field plus the "ownedPartitions" into the subscription userData bytes.
> > > > > The difference is that the `StickyAssignor`'s user bytes also encode
> > > > > the prev-owned partitions, while the `CooperativeStickyAssignor`
> > > > > relies on the prev-owned partitions in the subscription protocol
> > > > > directly.*
> > > > >
> > > > > * In the proposed change section, I've updated the paragraph as:
> > > > >
> > > > >
> > > > > *For the built-in CooperativeStickyAssignor, if some consumers are on
> > > > > old bytecode and some on the new bytecode, it's totally fine, because
> > > > > the subscription data from old consumers will contain \[empty
> > > > > ownedPartitions + default generation(-1)] in V0, or \[current
> > > > > ownedPartitions + default generation(-1)] in V1. The V0 case is quite
> > > > > simple, because we'll just ignore the info since it is empty. In the
> > > > > V1 case, we'll get the "ownedPartitions" data, and then decode the
> > > > > "generation" info in the subscription userData bytes, so that we can
> > > > > continue to do assignment with this information.*
> > > > > * Also, after the CooperativeStickyAssignor paragraph, I've
> > > > > mentioned StickyAssignor as well:
> > > > >
> > > > >
> > > > > *For the built-in StickyAssignor, if some consumers are on old
> > > > > bytecode and some on the new bytecode, it's also fine, because the
> > > > > subscription data from old consumers will contain \[empty
> > > > > ownedPartitions + default generation(-1)] in V0, or \[current
> > > > > ownedPartitions + default generation(-1)] in V1. For both the V0 and
> > > > > V1 cases, we'll directly use the ownedPartitions and generation info
> > > > > in the subscription userData bytes.*
> > > > >
> > > > > Please let me know if you have other comments.
> > > > >
> > > > > Thank you.
> > > > > Luke
> > > > >
> > > > > On Wed, Feb 9, 2022 at 2:57 PM Guozhang Wang 
> > > wrote:
> > > > >
> > > > > > Hello Luke,
> > > > > >
> > > > > > Thanks for the updated KIP, I've taken a look at it and still
> LGTM.
> > > Just
> > > > > a
> > > > > > couple minor comments in the wiki:
> > > > > >
> > > > > > * For both `StickyAssignor` and `CooperativeStickyAssignor`, the
> > > > > > generation is already encoded in the user-data bytes; the
> > > > > > difference is that the `StickyAssignor`'s user bytes also encode
> > > > > > the prev-owned partitions, while the `CooperativeStickyAssignor`
> > > > > > relies on the prev-owned partitions in the subscription protocol
> > > > > > directly. So we can add the `StickyAssignor` to your paragraph
> > > > > > talking about `CooperativeStickyAssignor` as well.
> > > > > >
> > > > > > * This sentence: "otherwise, we'll take the ownedPartitions as
> > > > > > default generation(-1)." does not read right to me; maybe it needs
> > > > > > rephrasing a bit?
> > > > > >
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Mon, Feb 7, 2022 at 7:36 PM Luke Chen 
> > wrote:
> > > > > >
> > > > > > > Hi David,
> > > > > > >
> > > > > > > Thanks for your comments.
> > > > > > > I've updated the KIP to add changes in Subscription class.
> > > > > > >
> > > > > > > Thank you.
> > > > > > > Luke
> > > > > > >
> > > > > > > On Fri, Feb 4, 2022 at 11:43 PM David Jacot
> > > > >  > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Luke,
> > > > > > > >
> > > > > > > > Thanks for updating the KIP. I just have a minor request.
> > > > > > > > Could you fully describe the changes to the Subscription
> > > > > > > > public class in 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #731

2022-03-02 Thread Apache Jenkins Server
See 




Re: Question about the Log Compaction

2022-03-02 Thread Jun Rao
Hi, Liang,

Currently, we store the MD5 of the record key in OffsetMap. Since it has a
large domain (16 bytes), we assume there is no collision there.

Thanks,

Jun
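The scheme Jun describes can be sketched as follows. This is a hypothetical, simplified illustration (not Kafka's actual cleaner code; class and method names are mine): the map keys on the full 16-byte MD5 digest of the record key, so two distinct keys only clash if their MD5 digests collide, which the cleaner assumes never happens in practice.

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an MD5-keyed offset map.
public class Md5OffsetMap {
    private final Map<ByteBuffer, Long> offsets = new HashMap<>();

    private static ByteBuffer md5(byte[] key) {
        try {
            // 16-byte digest; ByteBuffer gives us value-based equals/hashCode
            return ByteBuffer.wrap(MessageDigest.getInstance("MD5").digest(key));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    public void put(byte[] key, long offset) {
        // keep the highest (latest) offset seen for this key's digest
        offsets.merge(md5(key), offset, Math::max);
    }

    public Long get(byte[] key) {
        return offsets.get(md5(key));
    }
}
```

Because key1 and key2 hash to different 16-byte digests, Record1 is not overwritten by Record2, which is why the collision scenario in the question does not arise under this design.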

On Wed, Mar 2, 2022 at 1:20 AM 阮良  wrote:

> Hi all
>
> I am confused about the Log Compaction logic, which uses OffsetMap
> to deduplicate the log. In my opinion, when there is a hash conflict,
> data may be lost.
> Eg: Record1(key1, offset1), Record2(key2, offset2)
> Condition: hash(key1) == hash(key2) && (offset1 < offset2)
> Result: Record1 will be removed by mistake
>
> Did I misunderstand the implementation logic? Please give me some
> guidance, thank you very much.
>
> 1: The OffsetMap put logic does not deal with hash collisions; if
> hash(key1) == hash(key2), key1 will be overwritten.
>
> 2: the logic of retaining records
>


Spotting stale KIPs with a better "Under Discussion" table

2022-03-02 Thread Thomas Cooper
Hi All,

I am hoping to get more involved in the upstream Kafka community. To that end, 
I was trying to keep up with the KIPs that were currently under discussion. 
However, I found it hard to keep track of what was and wasn't being discussed 
and the progress they were making. Some KIPs appeared abandoned but were still 
classed as "Under Discussion".

So, during a very rainy week on holiday, I created a tool (which I called 
KIPper [[1](https://github.com/tomncooper/kipper)]) to parse the dev mailing 
list archive and extract all KIP mentions. I paired this with information 
parsed from the confluence (wiki) API to create an enriched table of the KIPs 
Under Discussion [[2](https://tomncooper.github.io/kipper/)].

The table shows a "Status" for each KIP, which is based on the last time the 
KIP was mentioned in the subject line of an email on the dev mailing list. 
Green for within the last month, yellow for the last 3 months and red for 
within the last year. If the status is black then it hasn't been mentioned in 
over a year.
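The colour rule described above can be sketched as follows. This is a minimal illustration under stated assumptions; the class and method names are mine, not kipper's actual code, and the thresholds are taken as 30/90/365 days.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hypothetical sketch of the status rule: green within the last month,
// yellow within the last 3 months, red within the last year, black beyond.
public class KipStatus {
    public static String statusFor(LocalDate lastMention, LocalDate today) {
        long days = ChronoUnit.DAYS.between(lastMention, today);
        if (days <= 30) return "green";
        if (days <= 90) return "yellow";
        if (days <= 365) return "red";
        return "black";
    }

    // Exercises each band relative to a fixed "today" (2022-03-02).
    public static boolean selfCheck() {
        LocalDate today = LocalDate.of(2022, 3, 2);
        return statusFor(LocalDate.of(2022, 2, 20), today).equals("green")
            && statusFor(LocalDate.of(2021, 12, 20), today).equals("yellow")
            && statusFor(LocalDate.of(2021, 6, 1), today).equals("red")
            && statusFor(LocalDate.of(2020, 1, 1), today).equals("black");
    }
}
```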

I also added vote information, but this is only indicative as it is based on 
parsing the non-reply lines (without ">" in) of the email bodies so could hold 
false positives.

In the spirit of the discussion on closing stale PRs 
[[3](https://lists.apache.org/thread/66yj9m6tcyz8zqb3lqlbnr386bqwsopt)], it 
might be a good idea to introduce a new KIP "state" alongside "Under 
Discussion", "Accepted" and "Rejected" (and their numerous variants 
[[4](https://github.com/tomncooper/kipper/blob/0bbb5595e79a9e075b0d2dc907c84693734d7846/kipper/wiki.py#L54)]).
 Maybe a KIP with a black status and no votes could be moved to a "Stale" or 
"Rejected" state?

The kipper page is statically generated at the moment, so it could be updated 
every day with a cron job. The data used to create the page could also be used 
to drive automation, perhaps emailing the KIP's author once a KIP hits "Red" 
status and then automatically setting the state to stale once it turns "Black"?

Anyway, I learned a lot making the tool and I now feel I have a better handle 
on the state of various KIPs. I hope others find it useful. There is loads of 
information to be harvested from the mailing list and wiki APIs, so if anyone 
has any feature requests please post issues on the GH page. I had one 
suggestion of performing sentiment analysis on the email bodies related to each 
KIP, to get a feel for how the KIP was being received. But maybe that is a step 
too far...

Cheers,

[1] https://github.com/tomncooper/kipper
[2] https://tomncooper.github.io/kipper/
[3] https://lists.apache.org/thread/66yj9m6tcyz8zqb3lqlbnr386bqwsopt
[4] 
https://github.com/tomncooper/kipper/blob/0bbb5595e79a9e075b0d2dc907c84693734d7846/kipper/wiki.py#L54

Tom Cooper

[@tomncooper](https://twitter.com/tomncooper) | https://tomcooper.dev

[jira] [Resolved] (KAFKA-13704) Include TopicId in kafka-topics describe output

2022-03-02 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13704.
-
Resolution: Duplicate

> Include TopicId in kafka-topics describe output
> ---
>
> Key: KAFKA-13704
> URL: https://issues.apache.org/jira/browse/KAFKA-13704
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Priority: Major
>
> It would be helpful if `kafka-topics --describe` displayed the TopicId when 
> we have it available.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13704) Include TopicId in kafka-topics describe output

2022-03-02 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-13704:
---

 Summary: Include TopicId in kafka-topics describe output
 Key: KAFKA-13704
 URL: https://issues.apache.org/jira/browse/KAFKA-13704
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


It would be helpful if `kafka-topics --describe` displayed the TopicId when we 
have it available.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13703) OAUTHBEARER client will not use defined truststore

2022-03-02 Thread Adam Long (Jira)
Adam Long created KAFKA-13703:
-

 Summary: OAUTHBEARER client will not use defined truststore
 Key: KAFKA-13703
 URL: https://issues.apache.org/jira/browse/KAFKA-13703
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.1.0
Reporter: Adam Long


I am developing a Kafka client that uses OAUTHBEARER and SSL to connect.  I'm 
attempting to test against a server using a key from a custom CA.  I added the 
trust-chain for the server to a Truststore JKS file, and referenced it in the 
configuration.  However, I continually get PKIX errors.  After some code 
tracing, I believe the OAUTHBEARER client code ignores defined truststores.

Here is an example based on my configuration:

{code:java}
application.id=my-kafka-client
client.id=my-kafka-client
group.id=my-kafka-client

# OAuth/SSL listener
bootstrap.servers=:9096
security.protocol=SASL_SSL

# OAuth Configuration
sasl.mechanism=OAUTHBEARER
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
sasl.login.connect.timeout.ms=15000
sasl.oauthbearer.token.endpoint.url=https:///auth/realms//protocol/openid-connect/token
ssl.truststore.location=\kafka.truststore.jks
#ssl.truststore.password=changeit

sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
 required \
clientId="my-kafka-client" \
clientSecret="my-kafka-client-secret";

{code}

Note: my truststore does not have a password (I initially tried setting one to 
see if that would solve the problem).

I'm using the following example test code:


{code:java}
package example;

import java.io.IOException;
import java.net.URISyntaxException;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class Main {

    public static void main(final String[] args) throws IOException, URISyntaxException {
        Properties config = new Properties();
        config.load(Main.class.getClassLoader().getResourceAsStream("client.conf"));

        // Serializer/deserializer settings for producer and consumer
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config);
    }
}
{code}

The issue seems to be in the 
{{org.apache.kafka.common.security.oauthbearer.secured}} package, in 
particular the {{AccessTokenRetrieverFactory.create()}} method, as it creates 
an SSLContext but does not include the configured truststore from the Kafka 
configuration.

As such, it appears that unless you alter the JVM-default truststore, you 
cannot connect to a server running a custom trust-chain.
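A hedged sketch of the behaviour the reporter expects: the token retriever's HTTP client would build its SSLContext from the configured truststore rather than the JVM default. The class and method names below are illustrative only, not Kafka's actual code.

```java
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

// Hypothetical sketch: derive an SSLContext from a caller-supplied truststore
// (e.g. one loaded from ssl.truststore.location) instead of the JVM default.
public class TruststoreSslContext {
    public static SSLContext fromTruststore(KeyStore truststore) throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(truststore); // trust only the CAs in the configured store
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }

    // Exercises the wiring with an empty in-memory JKS truststore.
    public static boolean selfCheck() {
        try {
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(null, null); // empty store, no file or password needed
            SSLContext ctx = fromTruststore(ks);
            return ctx != null && "TLS".equals(ctx.getProtocol());
        } catch (Exception e) {
            return false;
        }
    }
}
```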



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: Mirror Maker 2 - High Throughput Identity Mirroring

2022-03-02 Thread Antón Rodríguez Yuste
Hi Ryanne,

Is there a PR or code I could take a look at or just the KIP? "shallow
mirroring" seems very interesting for our use cases, and I would like to
evaluate it internally.

Thanks,

Antón

On Thu, Jul 29, 2021 at 7:02 PM Ryanne Dolan  wrote:

> Jamie, this would depend on KIP-712 (or similar) aka "shallow mirroring".
> This is a work in progress, but I'm optimistic it'll happen at some point.
>
> ftr, "IdentityReplicationPolicy" has landed for the upcoming release, tho
> "identity" in that context just means that topics aren't renamed.
>
> Ryanne
>
> On Thu, Jul 29, 2021, 11:37 AM Jamie  wrote:
>
> > Hi All,
> > This blog post:
> > https://blog.cloudera.com/a-look-inside-kafka-mirrormaker-2/ mentions
> > that "High Throughput Identity Mirroring" (when the compression is the
> same
> > in both the source and destination cluster) will soon be coming to MM2
> > which would avoid the MM2 consumer decompressing the data only for the
> MM2
> > producer to then re-compress it again.
> > Has this feature been implemented yet in MM2?
> > Many Thanks,
> > Jamie
>


Re: Mirror Maker 2 - High Throughput Identity Mirroring

2022-03-02 Thread Ryanne Dolan
Henry Cai has a PR somewhere.

Ryanne

On Wed, Mar 2, 2022, 3:36 AM Antón Rodríguez Yuste 
wrote:

> Hi Ryanne,
>
> Is there a PR or code I could take a look at or just the KIP? "shallow
> mirroring" seems very interesting for our use cases, and I would like to
> evaluate it internally.
>
> Thanks,
>
> Antón
>
> On Thu, Jul 29, 2021 at 7:02 PM Ryanne Dolan 
> wrote:
>
> > Jamie, this would depend on KIP-712 (or similar) aka "shallow mirroring".
> > This is a work in progress, but I'm optimistic it'll happen at some
> point.
> >
> > ftr, "IdentityReplicationPolicy" has landed for the upcoming release, tho
> > "identity" in that context just means that topics aren't renamed.
> >
> > Ryanne
> >
> > On Thu, Jul 29, 2021, 11:37 AM Jamie 
> wrote:
> >
> > > Hi All,
> > > This blog post:
> > > https://blog.cloudera.com/a-look-inside-kafka-mirrormaker-2/ mentions
> > > that "High Throughput Identity Mirroring" (when the compression is the
> > same
> > > in both the source and destination cluster) will soon be coming to MM2
> > > which would avoid the MM2 consumer decompressing the data only for the
> > MM2
> > > producer to then re-compress it again.
> > > Has this feature been implemented yet in MM2?
> > > Many Thanks,
> > > Jamie
> >
>


RE: Kafka Connect - AvroConverter - Unions with Enums Problem

2022-03-02 Thread Jaakko Puurunen
On 2020/05/02 17:22:43 Nagendra Korrapati wrote:
> Hello
>
> Sorry to write this to this mailing list; this is more specific to my
> problem using Kafka Connect.
>
> I have a union with an enum in it. ToConnectSchema converted it into a
> Schema.STRUCT type with fields, and the enum field as STRING.
>
> The SpecificRecord has that enum field typed as the enum class.
>
> But in the AvroData class, the method
> isInstanceOfAvroSchemaTypeForSimpleSchema checks whether the value is a
> CharSequence instance.
>
> In the SpecificRecord the value type is Enum, so it fails. As the end
> result, toConnectData throws an exception with the message
>
> "Did not find matching union field for data : enumstring"
>
> Could someone please comment?
>
> thanks
> Nagendra

Hi Nagendra,

replying to an almost two-year-old message, but we had the same problem today 
and resolved it by upgrading Kafka Connect (Confluent Platform from 6.2.2 to 
7.0.1, with various Connect components updated to recent versions).

Regards,

Jaakko


[jira] [Resolved] (KAFKA-13658) Upgrade vulnerable dependencies jan 2022

2022-03-02 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-13658.
---
Resolution: Fixed

> Upgrade vulnerable dependencies jan 2022
> 
>
> Key: KAFKA-13658
> URL: https://issues.apache.org/jira/browse/KAFKA-13658
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Shivakumar
>Priority: Major
>  Labels: secutiry
> Fix For: 3.0.1, 3.2.0, 3.1.1
>
>
> |Packages|Package Version|CVSS|Fix Status|
> |com.fasterxml.jackson.core_jackson-databind| 2.10.5.1| 7.5| fixed in 2.14, 
> 2.13.1, 2.12.6|
> Our security scan detected the above vulnerabilities.
> Please upgrade to the fixed versions to address them.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] Apache Kafka 3.2.0 release

2022-03-02 Thread Bruno Cadonna

Hi Ziming Deng,

Thank you for the ping!

I added KIP-815 to the release plan!

Best,
Bruno

On 02.03.22 01:14, deng ziming wrote:

Hey Bruno,

Can we add KIP-815 to the plan? 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-815%3A++Support+max-timestamp+in+GetOffsetShell
 

The vote has passed just 2 days ago.

—
Thanks,
Ziming Deng


On Mar 2, 2022, at 12:41 AM, Bruno Cadonna  wrote:

Hi all,

A quick reminder that KIP freeze for Apache Kafka 3.2.0 is tomorrow. Please make 
sure to close your votes if you want to add a KIP to the release plan.

Best,
Bruno

On 15.02.22 12:37, Bruno Cadonna wrote:

Hi all,
I published a release plan for the Apache Kafka 3.2.0 release here:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.2.0
KIP Freeze: 2 March 2022
Feature Freeze: 16 March 2022
Code Freeze: 30 March 2022
At least two weeks of stabilization will follow Code Freeze.
Please let me know if I should add or remove KIPs from the plan or if you have 
any other objections.
Best,
Bruno
On 04.02.22 16:03, Bruno Cadonna wrote:

Hi,

I'd like to volunteer to be the release manager for our next
feature release, 3.2.0. If there are no objections, I'll send
out the release plan soon.

Best,
Bruno





Question about the Log Compaction

2022-03-02 Thread 阮良
Hi all 


I am confused about the Log Compaction logic, which uses OffsetMap to 
deduplicate the log. In my opinion, when there is a hash conflict, data may 
be lost.
Eg: Record1(key1, offset1), Record2(key2, offset2)
Condition: hash(key1) == hash(key2) && (offset1 < offset2)
Result: Record1 will be removed by mistake

Did I misunderstand the implementation logic? Please give me some guidance, 
thank you very much.

1: The OffsetMap put logic does not deal with hash collisions; if hash(key1) 
== hash(key2), key1 will be overwritten.

2: the logic of retaining records



Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #730

2022-03-02 Thread Apache Jenkins Server
See