Re: Kafka consumer group crashing and not able to consume once service is up

2024-02-07 Thread Philip Nee
Hi Santhosh,

Your problem statement confuses me a bit (apologies). You mentioned "if one
of the kafka consumer(Service)" - do you have a single-member consumer
group? Could you elaborate on the setup a bit? Did you also mean that after
restarting the "service", the service was not able to resume consuming
messages?

Logs would be very helpful to see why these consumers stopped consuming
messages. If you include the logs (that would be really helpful), could you
also include the client version?

P

On Wed, Feb 7, 2024 at 9:45 PM Marigowda, Santhosh Aditya
 wrote:

> Hi Dev team,
>
> Could you please let me know if there are any updates on my query.
>
>
>
> Thanks,
>
> Santhosh Aditya
>
>
>
> *From:* Marigowda, Santhosh Aditya
> *Sent:* Wednesday, January 31, 2024 8:50 AM
> *To:* dev@kafka.apache.org
> *Cc:* Jain, Ankit ; Namboodiri, Vishnu <
> vishnu.nambood...@in.unisys.com>; Mudlapur, Rajesh <
> rajesh.mudla...@au.unisys.com>; Reddy, Thanmai 
> *Subject:* RE: Kafka consumer group crashing and not able to consume once
> service is up
>
>
>
> Thanks, Matthias, for your response. Sorry, my bad; please find the updated
> image.
>
> - We have 1 topic and 4 partitions. Each consumer is pointing to the
> topic. There is no partition assignment specific to the consumer group in
> our software code.
>
> - Please let me know if you need Kafka server logs or consumer service
> logs. After our service restarts, no logs are printed in the onMessage method.
>
> - We are using consumer.subscribe(topic).
>
>
>
>
>
>
>
> Thanks,
>
> Santhosh Aditya
>
>
>
> *From:* Matthias J. Sax 
> *Sent:* Wednesday, January 31, 2024 12:13 AM
> *To:* dev@kafka.apache.org
> *Subject:* Re: Kafka consumer group crashing and not able to consume once
> service is up
>
>
>
> I am not sure if I can follow completely.
>
>
>
>  From the figures you show, you have a topic with 4 partitions, and 4
>
> consumer groups. Thus, each consumer group should read all 4 partitions,
>
> but the figure indicates that each group would read a single partition only?
>
>
>
> Can you clarify? Are you using `consumer.subscribe` or `consumer.assign`?
>
>
>
> In general, it might be good to collect some INFO (or DEBUG) level logs
>
> for the crashing service after restart to see what it's doing.
>
>
>
>
>
> -Matthias
>
>
>
> On 1/30/24 7:17 AM, Marigowda, Santhosh Aditya wrote:
>
> > Hi Kafka Dev Team,
>
> >
>
> > Could you please help us with our problem.
>
> >
>
> > In our POC, if one of the Kafka consumers (services) shuts down or crashes,
>
> > then after the service restarts, none of the messages are consumed
>
> > by the crashed service.
>
> >
>
> > Other services are consuming without any issues.
>
> >
>
> > One of the services crashes/shuts down.
>
> >
>
> > If we rename the Kafka consumer group and start the service, then
>
> > messages start getting consumed.
>
> >
>
> > Consumer configuration :
>
> >
>
> > {
>
> >
>
> >  delay:1000
>
> >
>
> >  timeout:0
>
> >
>
> >  topic-name= test
>
> >
>
> >  handlers=["Listener"]
>
> >
>
> >  source="kafka-consumer"
>
> >
>
> >  enable_auto_commit="false"
>
> >
>
> >  group="Consumer-Group-1"
>
> >
>
> >  }
>
> >
>
> > local-kafka-consumer = {
>
> >
>
> >  server=
>
> > "{kafka-hostname}:{kafka-port}"
>
> >
>
> >
>
> > deserializer="org.apache.kafka.common.serialization.StringDeserializer"
>
> >
>
> > auto_offset_reset="latest"
>
> >
>
> > enable_auto_commit="true"
>
> >
>
> > maxRequestSize=20971520
>
> >
>
> >  }
>
> >
>
> > Thanks,
>
> >
>
> > Santhosh Aditya
>
> >
>
>
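For reference, a minimal subscribe()-based consumer sketch in Java reflecting the settings quoted in the thread above (group "Consumer-Group-1", topic "test", manual commits, auto.offset.reset=latest). The bootstrap host and the mapping from the application's config file to Kafka client properties are illustrative assumptions, not taken from the reporter's actual code; the point is that subscribe() with a shared group id lets the coordinator spread the 4 partitions across members, and a restarted member of the same group resumes from its last committed offsets.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupSubscribeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-hostname:9092"); // assumed host:port
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "Consumer-Group-1");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // "latest" only applies when the group has no committed offset for a partition;
        // after a restart, a member of the same group resumes from the last committed offset.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() lets the group coordinator assign the 4 partitions across the group's members.
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                if (!records.isEmpty()) {
                    consumer.commitSync(); // commit after processing, since auto-commit is disabled
                }
            }
        }
    }
}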


RE: Kafka consumer group crashing and not able to consume once service is up

2024-02-07 Thread Marigowda, Santhosh Aditya
Hi Dev team,
Could you please let me know if there are any updates on my query.

Thanks,
Santhosh Aditya

From: Marigowda, Santhosh Aditya
Sent: Wednesday, January 31, 2024 8:50 AM
To: dev@kafka.apache.org
Cc: Jain, Ankit ; Namboodiri, Vishnu 
; Mudlapur, Rajesh 
; Reddy, Thanmai 
Subject: RE: Kafka consumer group crashing and not able to consume once service 
is up


Thanks, Matthias, for your response. Sorry, my bad; please find the updated image.

- We have 1 topic and 4 partitions. Each consumer is pointing to the
topic. There is no partition assignment specific to the consumer group in our
software code.

- Please let me know if you need Kafka server logs or consumer service
logs. After our service restarts, no logs are printed in the onMessage method.

- We are using consumer.subscribe(topic).


[inline image: topic / partition / consumer group diagram; attachment not preserved in the archive]


Thanks,
Santhosh Aditya

From: Matthias J. Sax <mj...@apache.org>
Sent: Wednesday, January 31, 2024 12:13 AM
To: dev@kafka.apache.org
Subject: Re: Kafka consumer group crashing and not able to consume once service 
is up


I am not sure if I can follow completely.



 From the figures you show, you have a topic with 4 partitions, and 4

consumer groups. Thus, each consumer group should read all 4 partitions,

but the figure indicates that each group would read a single partition only?



Can you clarify? Are you using `consumer.subscribe` or `consumer.assign`?



In general, it might be good to collect some INFO (or DEBUG) level logs

for the crashing service after restart to see what it's doing.





-Matthias



On 1/30/24 7:17 AM, Marigowda, Santhosh Aditya wrote:

> Hi Kafka Dev Team,

>

> Could you please help us with our problem.

>

> In our POC, if one of the Kafka consumers (services) shuts down or crashes,

> then after the service restarts, none of the messages are consumed

> by the crashed service.

>

> Other services are consuming without any issues.

>

> One of the services crashes/shuts down.

>

> If we rename the Kafka consumer group and start the service, then

> messages start getting consumed.

>

> Consumer configuration :

>

> {

>

>  delay:1000

>

>  timeout:0

>

>  topic-name= test

>

>  handlers=["Listener"]

>

>  source="kafka-consumer"

>

>  enable_auto_commit="false"

>

>  group="Consumer-Group-1"

>

>  }

>

> local-kafka-consumer = {

>

>  server=

> "{kafka-hostname}:{kafka-port}"

>

>

> deserializer="org.apache.kafka.common.serialization.StringDeserializer"

>

> auto_offset_reset="latest"

>

> enable_auto_commit="true"

>

> maxRequestSize=20971520

>

>  }

>

> Thanks,

>

> Santhosh Aditya

>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2628

2024-02-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 231730 lines...]
[2024-02-08T04:37:05.611Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ReassignPartitionsZNodeTest > testDecodeValidJson() PASSED
[2024-02-08T04:37:05.611Z] 
[2024-02-08T04:37:05.612Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-02-08T04:37:39.070Z] 
[2024-02-08T04:37:39.070Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT PASSED
[2024-02-08T04:37:39.070Z] 
[2024-02-08T04:37:39.070Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [2] Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT STARTED
[2024-02-08T04:38:17.069Z] 
[2024-02-08T04:38:17.070Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [2] Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT SKIPPED
[2024-02-08T04:38:17.070Z] 
[2024-02-08T04:38:17.070Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [3] Type=ZK, MetadataVersion=3.6-IV2, 
Security=PLAINTEXT STARTED
[2024-02-08T04:38:46.115Z] 
[2024-02-08T04:38:46.115Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [3] Type=ZK, MetadataVersion=3.6-IV2, 
Security=PLAINTEXT SKIPPED
[2024-02-08T04:38:46.115Z] 
[2024-02-08T04:38:46.115Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [4] Type=ZK, MetadataVersion=3.7-IV0, 
Security=PLAINTEXT STARTED
[2024-02-08T04:39:20.092Z] 
[2024-02-08T04:39:20.092Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [4] Type=ZK, MetadataVersion=3.7-IV0, 
Security=PLAINTEXT SKIPPED
[2024-02-08T04:39:20.092Z] 
[2024-02-08T04:39:20.092Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [5] Type=ZK, MetadataVersion=3.7-IV1, 
Security=PLAINTEXT STARTED
[2024-02-08T04:39:45.102Z] 
[2024-02-08T04:39:45.102Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [5] Type=ZK, MetadataVersion=3.7-IV1, 
Security=PLAINTEXT SKIPPED
[2024-02-08T04:39:45.102Z] 
[2024-02-08T04:39:45.102Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [6] Type=ZK, MetadataVersion=3.7-IV2, 
Security=PLAINTEXT STARTED
[2024-02-08T04:40:14.457Z] 
[2024-02-08T04:40:14.457Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [6] Type=ZK, MetadataVersion=3.7-IV2, 
Security=PLAINTEXT SKIPPED
[2024-02-08T04:40:14.457Z] 
[2024-02-08T04:40:14.457Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [7] Type=ZK, MetadataVersion=3.7-IV4, 
Security=PLAINTEXT STARTED
[2024-02-08T04:40:43.527Z] 
[2024-02-08T04:40:43.527Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [7] Type=ZK, MetadataVersion=3.7-IV4, 
Security=PLAINTEXT SKIPPED
[2024-02-08T04:40:43.527Z] 
[2024-02-08T04:40:43.527Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [8] Type=ZK, MetadataVersion=3.8-IV0, 
Security=PLAINTEXT STARTED
[2024-02-08T04:41:12.792Z] 
[2024-02-08T04:41:12.792Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [8] Type=ZK, MetadataVersion=3.8-IV0, 
Security=PLAINTEXT SKIPPED
[2024-02-08T04:41:12.792Z] 
[2024-02-08T04:41:12.792Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testPartitionReassignmentInHybridMode(ClusterInstance) > 

Re: [DISCUSS] KIP-996: Pre-Vote

2024-02-07 Thread ziming deng
Hi Alyssa,

I have a minor question about the description in motivation section

> Pre-Vote (as originally detailed in the Raft paper and in KIP-650)

It seems Pre-Vote is not mentioned in the Raft paper; can you check it again
and rectify it? That would be helpful, thank you!

- 
Thanks,
Ziming


> On Dec 8, 2023, at 16:13, Luke Chen  wrote:
> 
> Hi Alyssa,
> 
> Thanks for the update.
> LGTM now.
> 
> Luke
> 
> On Fri, Dec 8, 2023 at 10:03 AM José Armando García Sancio
>  wrote:
> 
>> Hi Alyssa,
>> 
>> Thanks for the answers and the updates to the KIP. I took a look at
>> the latest version and it looks good to me.
>> 
>> --
>> -José
>> 



Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.6 #142

2024-02-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 312328 lines...]
[2024-02-08T01:54:12.817Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testGetTopicsAndPartitions() PASSED
[2024-02-08T01:54:12.817Z] 
[2024-02-08T01:54:12.817Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testChroot(boolean) > [1] createChrootIfNecessary=true 
STARTED
[2024-02-08T01:54:12.817Z] 
[2024-02-08T01:54:12.817Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testChroot(boolean) > [1] createChrootIfNecessary=true 
PASSED
[2024-02-08T01:54:12.817Z] 
[2024-02-08T01:54:12.817Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testChroot(boolean) > [2] createChrootIfNecessary=false 
STARTED
[2024-02-08T01:54:14.174Z] 
[2024-02-08T01:54:14.174Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testChroot(boolean) > [2] createChrootIfNecessary=false 
PASSED
[2024-02-08T01:54:14.174Z] 
[2024-02-08T01:54:14.174Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testRegisterBrokerInfo() STARTED
[2024-02-08T01:54:14.174Z] 
[2024-02-08T01:54:14.174Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testRegisterBrokerInfo() PASSED
[2024-02-08T01:54:14.174Z] 
[2024-02-08T01:54:14.174Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED
[2024-02-08T01:54:14.174Z] 
[2024-02-08T01:54:14.174Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED
[2024-02-08T01:54:14.174Z] 
[2024-02-08T01:54:14.174Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testConsumerOffsetPath() STARTED
[2024-02-08T01:54:15.530Z] 
[2024-02-08T01:54:15.530Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testConsumerOffsetPath() PASSED
[2024-02-08T01:54:15.530Z] 
[2024-02-08T01:54:15.530Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() 
STARTED
[2024-02-08T01:54:15.530Z] 
[2024-02-08T01:54:15.530Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() 
PASSED
[2024-02-08T01:54:15.530Z] 
[2024-02-08T01:54:15.530Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testTopicAssignments() STARTED
[2024-02-08T01:54:15.530Z] 
[2024-02-08T01:54:15.530Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testTopicAssignments() PASSED
[2024-02-08T01:54:15.530Z] 
[2024-02-08T01:54:15.530Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testControllerManagementMethods() STARTED
[2024-02-08T01:54:15.530Z] 
[2024-02-08T01:54:15.530Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testControllerManagementMethods() PASSED
[2024-02-08T01:54:15.530Z] 
[2024-02-08T01:54:15.530Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testTopicAssignmentMethods() STARTED
[2024-02-08T01:54:16.886Z] 
[2024-02-08T01:54:16.886Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testTopicAssignmentMethods() PASSED
[2024-02-08T01:54:16.886Z] 
[2024-02-08T01:54:16.886Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testConnectionViaNettyClient() STARTED
[2024-02-08T01:54:16.886Z] 
[2024-02-08T01:54:16.886Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testConnectionViaNettyClient() PASSED
[2024-02-08T01:54:16.886Z] 
[2024-02-08T01:54:16.886Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testPropagateIsrChanges() STARTED
[2024-02-08T01:54:18.411Z] 
[2024-02-08T01:54:18.411Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testPropagateIsrChanges() PASSED
[2024-02-08T01:54:18.411Z] 
[2024-02-08T01:54:18.411Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testControllerEpochMethods() STARTED
[2024-02-08T01:54:18.411Z] 
[2024-02-08T01:54:18.411Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testControllerEpochMethods() PASSED
[2024-02-08T01:54:18.411Z] 
[2024-02-08T01:54:18.411Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testDeleteRecursive() STARTED
[2024-02-08T01:54:18.411Z] 
[2024-02-08T01:54:18.411Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testDeleteRecursive() PASSED
[2024-02-08T01:54:18.411Z] 
[2024-02-08T01:54:18.411Z] Gradle Test Run :core:test > Gradle Test Executor 93 
> KafkaZkClientTest > testGetTopicPartitionStates() STARTED
[2024-02-08T01:54:18.411Z] 

Re: Kafka-Streams-Scala for Scala 3

2024-02-07 Thread Matthias Berndt
Hi Matthias J., Hi Lucas, Hi Josep,

Thank you for your encouraging responses regarding a Scala 3 port of
Kafka-Streams-Scala, and apologies for the late response from my side.
I have now created a PR to port Kafka-Streams-Scala to Scala 3 (while
retaining support for 2.13 and 2.12). Almost no changes to the code
were required and the tests also pass. Please take a look and let me
know what you think :-)
https://github.com/apache/kafka/pull/15338

All the best
Matthias

On Thu, Feb 1, 2024 at 16:35, Josep Prat
 wrote:
>
> Hi,
>
> For reference, prior work on this:
> https://github.com/apache/kafka/pull/11350
> https://github.com/apache/kafka/pull/11432
>
> Best,
>
> On Thu, Feb 1, 2024, 15:55 Lucas Brutschy 
> wrote:
>
> > Hi Matthiases,
> >
> > I know Scala 2 fairly well, so I'd be happy to review changes that add
> > Scala 3 support. However, as Matthias S. said, it has to be driven by
> > people who use Scala day-to-day, since I believe most Kafka Streams
> > committers are working with Java.
> >
> > Rewriting the tests to not use EmbeddedKafkaCluster seems like a large
> > undertaking, so option 1 is the first thing we should explore.
> >
> > I don't have any experience with Scala 3 migration topics, but on the
> > Scala website it says
> > > The first piece of good news is that the Scala 3 compiler is able to
> > read the Scala 2.13 Pickle format and thus it can type check code that
> > depends on modules or libraries compiled with Scala 2.13.
> > > One notable example is the Scala 2.13 library. We have indeed decided
> > that the Scala 2.13 library is the official standard library for Scala 3.
> > So wouldn't that mean that we are safe in terms of standard library
> > upgrades if we use core_2.13 in the tests?
> >
> > Cheers,
> > Lucas
> >
> >
> > On Wed, Jan 31, 2024 at 9:20 PM Matthias J. Sax  wrote:
> > >
> > > Thanks for raising this. The `kafka-streams-scala` module seems to be an
> > > important feature for Kafka Streams and I am generally in favor of your
> > > proposal to add Scala 3 support. However, I am personally no Scala
> > > person and it sounds like quite some overhead.
> > >
> > > If you are willing to drive and own this initiative, I am happy to support you
> > > to the extent I can.
> > >
> > > About the concrete proposal: my understanding is that :core will move
> > > off Scala long-term (not 100% sure what the timeline is, but new modules
> > > are written in Java only). Thus, down the road the compatibility issue
> > > would go away naturally, but it's unclear when.
> > >
> > > Thus, if we can test kafka-streams-scala_3 with core_2.13, it seems we
> > > could add support for Scala 3 now, taking the risk that it might break in
> > > the future if the migration off Scala in core is not fast
> > enough.
> > >
> > > For proposal (2), I don't think that it would be easily possible for
> > > unit/integration tests. We could fall back to system tests though, but
> > > they would be much more heavyweight of course.
> > >
> > > Might be good to hear from others. We might actually also want to do a
> > > KIP for this?
> > >
> > >
> > > -Matthias
> > >
> > > On 1/20/24 10:34 AM, Matthias Berndt wrote:
> > > > Hey there,
> > > >
> > > > I'd like to discuss a Scala 3 port of the kafka-streams-scala library.
> > > > Currently, the build system is set up such that kafka-streams-scala
> > > > and core (i. e. kafka itself) are compiled with the same Scala
> > > > compiler versions. This is not an optimal situation because it means
> > > > that a Scala 3 release of kafka-streams-scala cannot happen
> > > > independently of kafka itself. I think this should be changed
> > > >
> > > > The production codebase of kafka-streams-scala actually compiles just
> > > > fine on Scala 3.3.1 with two lines of trivial syntax changes. The
> > > > problem is with the tests. These use the `EmbeddedKafkaCluster` class,
> > > > which means that kafka is pulled into the classpath, potentially
> > > > leading to binary compatibility issues.
> > > > I can see several approaches to fixing this:
> > > >
> > > > 1. Run the kafka-streams-scala tests using the compatible version of
> > > > :core if one is available. Currently, this means that everything can
> > > > be tested (test kafka-streams-scala_2.12 using core_2.12,
> > > > kafka-streams-scala_2.13 using core_2.13 and kafka-streams-scala_3
> > > > using core_2.13, as these should be compatible), but when a new
> > > > scala-library version is released that is no longer compatible with
> > > > 2.13, we won't be able to test that.
> > > > 2. Rewrite the tests to run without EmbeddedKafkaCluster, instead
> > > > running the test cluster in a separate JVM or perhaps even a
> > > > container.
> > > >
> > > > I'd be willing to get my hands dirty working on this, but before I
> > > > start I'd like to get some feedback from the Kafka team regarding the
> > > > approaches outlined above.
> > > >
> > > > All the best
> > > > Matthias Berndt
> >
> Josep Prat
> Open 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.7 #91

2024-02-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 343083 lines...]
[2024-02-08T00:38:40.875Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [3] Type=ZK, MetadataVersion=3.6-IV2, 
Security=PLAINTEXT STARTED
[2024-02-08T00:39:06.326Z] 
[2024-02-08T00:39:06.326Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [3] Type=ZK, MetadataVersion=3.6-IV2, 
Security=PLAINTEXT SKIPPED
[2024-02-08T00:39:06.326Z] 
[2024-02-08T00:39:06.326Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [4] Type=ZK, MetadataVersion=3.7-IV0, 
Security=PLAINTEXT STARTED
[2024-02-08T00:39:35.277Z] 
[2024-02-08T00:39:35.277Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [4] Type=ZK, MetadataVersion=3.7-IV0, 
Security=PLAINTEXT SKIPPED
[2024-02-08T00:39:35.277Z] 
[2024-02-08T00:39:35.277Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [5] Type=ZK, MetadataVersion=3.7-IV1, 
Security=PLAINTEXT STARTED
[2024-02-08T00:40:00.288Z] 
[2024-02-08T00:40:00.288Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [5] Type=ZK, MetadataVersion=3.7-IV1, 
Security=PLAINTEXT SKIPPED
[2024-02-08T00:40:00.288Z] 
[2024-02-08T00:40:00.288Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [6] Type=ZK, MetadataVersion=3.7-IV2, 
Security=PLAINTEXT STARTED
[2024-02-08T00:40:25.629Z] 
[2024-02-08T00:40:25.629Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [6] Type=ZK, MetadataVersion=3.7-IV2, 
Security=PLAINTEXT SKIPPED
[2024-02-08T00:40:25.629Z] 
[2024-02-08T00:40:25.629Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [7] Type=ZK, MetadataVersion=3.7-IV4, 
Security=PLAINTEXT STARTED
[2024-02-08T00:40:50.807Z] 
[2024-02-08T00:40:50.807Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [7] Type=ZK, MetadataVersion=3.7-IV4, 
Security=PLAINTEXT SKIPPED
[2024-02-08T00:40:50.807Z] 
[2024-02-08T00:40:50.807Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [8] Type=ZK, MetadataVersion=3.8-IV0, 
Security=PLAINTEXT STARTED
[2024-02-08T00:41:15.991Z] 
[2024-02-08T00:41:15.991Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [8] Type=ZK, MetadataVersion=3.8-IV0, 
Security=PLAINTEXT SKIPPED
[2024-02-08T00:41:15.991Z] 
[2024-02-08T00:41:15.991Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testPartitionReassignmentInHybridMode(ClusterInstance) > 
testPartitionReassignmentInHybridMode [1] Type=ZK, MetadataVersion=3.7-IV0, 
Security=PLAINTEXT STARTED
[2024-02-08T00:41:28.045Z] 
[2024-02-08T00:41:28.045Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testPartitionReassignmentInHybridMode(ClusterInstance) > 
testPartitionReassignmentInHybridMode [1] Type=ZK, MetadataVersion=3.7-IV0, 
Security=PLAINTEXT PASSED
[2024-02-08T00:41:28.045Z] 
[2024-02-08T00:41:28.045Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testDualWriteScram(ClusterInstance) > 
testDualWriteScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT 
STARTED
[2024-02-08T00:41:38.504Z] 
[2024-02-08T00:41:38.504Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > testDualWriteScram(ClusterInstance) > 
testDualWriteScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT 
PASSED
[2024-02-08T00:41:38.504Z] 
[2024-02-08T00:41:38.504Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationIntegrationTest > 
testNewAndChangedTopicsInDualWrite(ClusterInstance) > 
testNewAndChangedTopicsInDualWrite [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-02-08T00:41:52.720Z] 
[2024-02-08T00:41:52.720Z] Gradle Test Run :core:test > Gradle 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2627

2024-02-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 345849 lines...]
[2024-02-08T00:28:22.433Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testControllerManagementMethods() STARTED
[2024-02-08T00:28:22.433Z] 
[2024-02-08T00:28:22.433Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testControllerManagementMethods() PASSED
[2024-02-08T00:28:22.433Z] 
[2024-02-08T00:28:22.433Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testTopicAssignmentMethods() STARTED
[2024-02-08T00:28:22.433Z] 
[2024-02-08T00:28:22.433Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testTopicAssignmentMethods() PASSED
[2024-02-08T00:28:22.433Z] 
[2024-02-08T00:28:22.433Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testConnectionViaNettyClient() STARTED
[2024-02-08T00:28:23.551Z] 
[2024-02-08T00:28:23.551Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testConnectionViaNettyClient() PASSED
[2024-02-08T00:28:23.551Z] 
[2024-02-08T00:28:23.551Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testPropagateIsrChanges() STARTED
[2024-02-08T00:28:23.551Z] 
[2024-02-08T00:28:23.551Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testPropagateIsrChanges() PASSED
[2024-02-08T00:28:23.551Z] 
[2024-02-08T00:28:23.551Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testControllerEpochMethods() STARTED
[2024-02-08T00:28:24.620Z] 
[2024-02-08T00:28:24.620Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testControllerEpochMethods() PASSED
[2024-02-08T00:28:24.620Z] 
[2024-02-08T00:28:24.620Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testDeleteRecursive() STARTED
[2024-02-08T00:28:24.620Z] 
[2024-02-08T00:28:24.620Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testDeleteRecursive() PASSED
[2024-02-08T00:28:24.620Z] 
[2024-02-08T00:28:24.620Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testGetTopicPartitionStates() STARTED
[2024-02-08T00:28:24.620Z] 
[2024-02-08T00:28:24.620Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testGetTopicPartitionStates() PASSED
[2024-02-08T00:28:24.620Z] 
[2024-02-08T00:28:24.620Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testCreateConfigChangeNotification() STARTED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testCreateConfigChangeNotification() PASSED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testDelegationTokenMethods() STARTED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> KafkaZkClientTest > testDelegationTokenMethods() PASSED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ReassignPartitionsZNodeTest > testDecodeInvalidJson() STARTED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ReassignPartitionsZNodeTest > testDecodeInvalidJson() PASSED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ReassignPartitionsZNodeTest > testEncode() STARTED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ReassignPartitionsZNodeTest > testEncode() PASSED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ReassignPartitionsZNodeTest > testDecodeValidJson() STARTED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ReassignPartitionsZNodeTest > testDecodeValidJson() PASSED
[2024-02-08T00:28:25.738Z] 
[2024-02-08T00:28:25.738Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-02-08T00:28:50.977Z] 
[2024-02-08T00:28:50.977Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT SKIPPED
[2024-02-08T00:28:50.977Z] 
[2024-02-08T00:28:50.977Z] Gradle Test Run :core:test > Gradle Test Executor 75 
> ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [2] Type=ZK, MetadataVersion=3.5-IV2, 

[jira] [Created] (KAFKA-16235) auto commit still causes delays due to retriable UNKNOWN_TOPIC_OR_PARTITION

2024-02-07 Thread Ryan Leslie (Jira)
Ryan Leslie created KAFKA-16235:
---

 Summary: auto commit still causes delays due to retriable 
UNKNOWN_TOPIC_OR_PARTITION
 Key: KAFKA-16235
 URL: https://issues.apache.org/jira/browse/KAFKA-16235
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 3.2.1, 3.3.0
Reporter: Ryan Leslie


In KAFKA-12256 an issue was described where deleted topics can cause 
auto-commit to get stuck looping on UNKNOWN_TOPIC_OR_PARTITION, resulting in 
message delays. This had also been noted in KAFKA-13310 and a fix was made 
which was included in Kafka 3.2.0: [https://github.com/apache/kafka/pull/11340]

Unfortunately, that commit contributed to another more urgent issue, 
KAFKA-14024, and after subsequent code changes in 
https://github.com/apache/kafka/pull/12349, KAFKA-12256 was no longer fixed, 
and has been an issue again since 3.2.1+

This ticket is primarily for more visibility around this, since KAFKA-12256 has 
been marked resolved for a long time now even though the issue still exists. Ideally this 
behavior could once again be corrected in the existing consumer, but at this 
point most development effort appears to be focused on the next-gen consumer 
(KIP-848). I do see that for the next-gen consumer at least, these problems are 
being newly resurfaced and tracked in KAFKA-16233 and KAFKA-16224.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2626

2024-02-07 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 403827 lines...]
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyNotList STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyNotList PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonPrimitiveKeys STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatMapWithNonPrimitiveKeys PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithOneElement STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyListWithOneElement PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatNotMap STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testValidateFormatNotMap PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyWithUnknownSerialization STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyWithUnknownSerialization PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyValidList STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.storage.OffsetUtilsTest > 
testProcessPartitionKeyValidList PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotConvertBeforeGetOnFailedCompletion STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotConvertBeforeGetOnFailedCompletion PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilCancellation STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilCancellation PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertOnlyOnceBeforeGetOnSuccessfulCompletion STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertOnlyOnceBeforeGetOnSuccessfulCompletion PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilSuccessfulCompletion STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilSuccessfulCompletion PASSED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertBeforeGetOnSuccessfulCompletion STARTED
[2024-02-07T18:20:15.633Z] 
[2024-02-07T18:20:15.633Z] Gradle Test Run :connect:runtime:test > Gradle Test 
Executor 50 > 

Re: [DISCUSS] KIP-939: Support Participation in 2PC

2024-02-07 Thread Jun Rao
Hi, Artem,

Thanks for the reply.

20. So to abort a prepared transaction after producer start, we could use
either
  producer.initTransactions(false)
or
  producer.initTransactions(true)
  producer.abortTransaction
Could we just always use the latter API? If we do this, we could
potentially eliminate the keepPreparedTxn flag in initTransactions(). After
the initTransactions() call, the outstanding txn is always preserved if 2pc
is enabled and aborted if 2pc is disabled. The use case mentioned for
keepPreparedTxn=true without 2PC doesn't seem very important. If we could
do that, it seems that we have (1) less redundant and simpler APIs; (2)
more symmetric syntax for aborting/committing a prepared txn after producer
restart.

32.
kafka.server:type=transaction-coordinator-metrics,name=active-transaction-open-time-max
Is this a Yammer or kafka metric? The former uses the camel case for name
and type. The latter uses the hyphen notation, but doesn't have the type
attribute.

33. "If the value is 'true' then the corresponding field is set in the
InitProducerIdRequest and the KafkaProducer object is set into a state
which only allows calling .commitTransaction or .abortTransaction."
We should also allow .completeTransaction, right?

Jun


On Tue, Feb 6, 2024 at 3:29 PM Artem Livshits
 wrote:

> Hi Jun,
>
> > 20. For Flink usage, it seems that the APIs used to abort and commit a
> prepared txn are not symmetric.
>
> For Flink, it is expected that Flink would call .commitTransaction or
> .abortTransaction directly; it wouldn't need to deal with PreparedTxnState.
> The outcome is actually determined by Flink's job manager, not by
> comparison of PreparedTxnState.  So for Flink, if the Kafka sink crashes
> and restarts, there are 2 cases:
>
> 1. Transaction is not prepared.  In that case just call
> producer.initTransactions(false) and then can start transactions as needed.
> 2. Transaction is prepared.  In that case call
> producer.initTransactions(true) and wait for the decision from the job
> manager.  Note that it's not given that the transaction will get committed,
> the decision could also be an abort.
>
>  > 21. transaction.max.timeout.ms could in theory be MAX_INT. Perhaps we
> could use a negative timeout in the record to indicate 2PC?
>
> -1 sounds good, updated.
>
> > 30. The KIP has two different APIs to abort an ongoing txn. Do we need
> both?
>
> I think of producer.initTransactions() to be an implementation for
> adminClient.forceTerminateTransaction(transactionalId).
>
> > 31. "This would flush all the pending messages and transition the
> producer
>
> Updated the KIP to clarify that IllegalStateException will be thrown.
>
> -Artem
>
>
> On Mon, Feb 5, 2024 at 2:22 PM Jun Rao  wrote:
>
> > Hi, Artem,
> >
> > Thanks for the reply.
> >
> > 20. For Flink usage, it seems that the APIs used to abort and commit a
> > prepared txn are not symmetric.
> > To abort, the app will just call
> >   producer.initTransactions(false)
> >
> > To commit, the app needs to call
> >   producer.initTransactions(true)
> >   producer.completeTransaction(preparedTxnState)
> >
> > Will this be a concern? For the dual-writer usage, both abort/commit use
> > the same API.
> >
> > 21. transaction.max.timeout.ms could in theory be MAX_INT. Perhaps we
> > could
> > use a negative timeout in the record to indicate 2PC?
> >
> > 30. The KIP has two different APIs to abort an ongoing txn. Do we need
> > both?
> >   producer.initTransactions(false)
> >   adminClient.forceTerminateTransaction(transactionalId)
> >
> > 31. "This would flush all the pending messages and transition the
> producer
> > into a mode where only .commitTransaction, .abortTransaction, or
> > .completeTransaction could be called.  If the call is successful (all
> > messages successfully got flushed to all partitions) the transaction is
> > prepared."
> >  If the producer calls send() in that state, what exception will the
> caller
> > receive?
> >
> > Jun
> >
> >
> > On Fri, Feb 2, 2024 at 3:34 PM Artem Livshits
> >  wrote:
> >
> > > Hi Jun,
> > >
> > > >  Then, should we change the following in the example to use
> > > InitProducerId(true) instead?
> > >
> > > We could. I just thought that it's good to make the example
> > self-contained
> > > by starting from a clean state.
> > >
> > > > Also, could Flink just follow the dual-write recipe?
> > >
> > > I think it would bring some unnecessary logic to Flink (or any other
> > system
> > > that already has a transaction coordinator and just wants to drive
> Kafka
> > to
> > > the desired state).  We could discuss it with Flink folks, the current
> > > proposal was developed in collaboration with them.
> > >
> > > > 21. Could a non 2pc user explicitly set the TransactionTimeoutMs to
> > > Integer.MAX_VALUE?
> > >
> > > The server would reject this for regular transactions; it only accepts
> > > values that are <= transaction.max.timeout.ms
> > > (a broker config).
> > >
> > > > 
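For orientation, a sketch of the two restart paths summarized by Jun above, written against the producer API proposed in KIP-939: initTransactions(boolean keepPreparedTxn), completeTransaction(PreparedTxnState), plus the existing commitTransaction/abortTransaction. These proposed methods and the PreparedTxnState type are not part of any released Kafka client, and decideCommit() is a hypothetical stand-in for the external transaction coordinator's decision (e.g. Flink's job manager or a dual-write state comparison).

// Sketch only: initTransactions(boolean), completeTransaction(PreparedTxnState) and
// PreparedTxnState follow the KIP-939 proposal, not a released API; decideCommit()
// is a hypothetical stand-in for the external coordinator's commit/abort decision.
void recoverAfterRestart(KafkaProducer<byte[], byte[]> producer,
                         boolean txnWasPrepared,
                         PreparedTxnState stateFromExternalStore) {
    if (!txnWasPrepared) {
        // No prepared transaction: abort anything outstanding and start clean.
        producer.initTransactions(false);
    } else {
        // A transaction was prepared before the crash: keep it and follow the external decision.
        producer.initTransactions(true);
        if (decideCommit(stateFromExternalStore)) {
            producer.completeTransaction(stateFromExternalStore); // commit path from Jun's summary
        } else {
            producer.abortTransaction();                          // abort path
        }
    }
}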

[jira] [Created] (KAFKA-16234) Log directory failure re-creates partitions in another logdir automatically

2024-02-07 Thread Gaurav Narula (Jira)
Gaurav Narula created KAFKA-16234:
-

 Summary: Log directory failure re-creates partitions in another 
logdir automatically
 Key: KAFKA-16234
 URL: https://issues.apache.org/jira/browse/KAFKA-16234
 Project: Kafka
  Issue Type: Bug
  Components: jbod
Affects Versions: 3.7.0
Reporter: Gaurav Narula


With [KAFKA-16157|https://github.com/apache/kafka/pull/15263] we made changes 
in {{HostedPartition.Offline}} enum variant to embed {{Partition}} object. 
Further, {{ReplicaManager::getOrCreatePartition}} tries to compare the old and 
new topicIds to decide if it needs to create a new log.

The getter for `Partition::topicId` relies on retrieving the topicId from the 
{{log}} field or {{logManager.currentLogs}}. The former is set to {{None}} 
when a partition is marked offline, and the key for the partition is removed 
from the latter by {{LogManager::handleLogDirFailure}}. Therefore, topicId 
for a partition marked offline always returns {{None}}, and new logs for all 
partitions in a failed log directory are always created on another disk.

The broker will fail to restart after the failed disk is repaired because the same 
partitions will occur in two different directories. The error does, however, 
inform the operator to remove the partitions from the disk that failed, which 
should help with broker startup.

We can avoid this with 
[KAFKA-16212|https://issues.apache.org/jira/browse/KAFKA-16212], but in the 
short term, an immediate solution can be to have the {{Partition}} object accept 
{{Option[TopicId]}} in its constructor and have it fall back to {{log}} or 
{{logManager}} if it's unset.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-390: Support Compression Level (rebooted)

2024-02-07 Thread Mickael Maison
Hi Divij,

Thanks for bringing that point. After reading KIP-984, I don't think
it supersedes KIP-390/KIP-780. Being able to tune the built-in codecs
would directly benefit many users. It may also cover some scenarios
that motivated KIP-984 without requiring users to write a custom
codec.
I've not commented in the KIP-984 thread yet but at the moment it
seems very light on details (no proposed API for codecs, no
explanations of error scenarios if clients or brokers don't have
compatible codecs), including the motivation which is important when
exposing new APIs. On the other hand, KIP-390/KIP-780 have much more
details with benchmarks to support the motivation.

In my opinion starting with the compression level (KIP-390) is a good
first step and I think we should focus on that and deliver it. I
believe one of the reasons KIP-780 wasn't voted is because we never
delivered KIP-390 and nobody was keen on building a KIP on top of
another undelivered KIP.

Thanks,
Mickael


On Wed, Feb 7, 2024 at 12:27 PM Divij Vaidya  wrote:
>
> Hey Mickael
>
> Since this KIP was written, we have a new proposal to make the compression
> completely pluggable
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-984%3A+Add+pluggable+compression+interface+to+Kafka.
> If we implement that KIP, would it supersede the need for adding fine grain
> compression controls in Kafka?
>
> It might be beneficial to have a joint proposal of these two KIPs which may
> satisfy both use cases.
>
> --
> Divij Vaidya
>
>
>
> On Wed, Feb 7, 2024 at 11:14 AM Mickael Maison 
> wrote:
>
> > Hi,
> >
> > I'm resurrecting this old thread as this KIP would be a nice
> > improvement and almost 3 years later the PR for this KIP has still not
> > been merged!
> >
> > The reason is that during reviews we noticed the proposed
> > configuration, compression.level, was not easy to use as each codec
> > has its own valid range of levels [0].
> >
> > As proposed by Jun in the PR [1], I updated the KIP to use
> compression.<codec>.level configurations instead of a single
> > compression.level setting. This syntax would also line up with the
> > proposal to add per-codec configuration options from KIP-780 [2]
> > (still to be voted). I moved the original proposal to the rejected
> > section.
> >
> > I've put the original voters and KIP author on CC. Let me know if you
> > have any feedback.
> >
> > 0: https://github.com/apache/kafka/pull/10826
> > 1: https://github.com/apache/kafka/pull/10826#issuecomment-1795952612
> > 2:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-780%3A+Support+fine-grained+compression+options
> >
> > Thanks,
> > Mickael
> >
> >
> > On Fri, Jun 11, 2021 at 10:00 AM Dongjin Lee  wrote:
> > >
> > > This KIP is now passed with:
> > >
> > > - binding: +3 (Ismael, Tom, Konstantine)
> > > - non-binding: +1 (Ryanne)
> > >
> > > Thanks again to all the supporters. I also updated the KIP by moving the
> > > compression buffer option into the 'Future Works' section, as Ismael
> > > proposed.
> > >
> > > Best,
> > > Dongjin
> > >
> > >
> > >
> > > On Fri, Jun 11, 2021 at 3:03 AM Konstantine Karantasis
> > >  wrote:
> > >
> > > > Makes sense. Looks like a good improvement. Thanks for including the
> > > > evaluation in the proposal Dongjin.
> > > >
> > > > +1 (binding)
> > > >
> > > > Konstantine
> > > >
> > > > On Wed, Jun 9, 2021 at 6:59 PM Dongjin Lee  wrote:
> > > >
> > > > > Thanks Ismel, Tom and Ryanne,
> > > > >
> > > > > I am now updating the KIP about the further works. Sure, You won't be
> > > > > disappointed.
> > > > >
> > > > > As of Present:
> > > > >
> > > > > - binding: +2 (Ismael, Tom)
> > > > > - non-binding: +1 (Ryanne)
> > > > >
> > > > > Anyone else?
> > > > >
> > > > > Best,
> > > > > Dongjin
> > > > >
> > > > > On Thu, Jun 10, 2021 at 2:03 AM Tom Bentley 
> > wrote:
> > > > >
> > > > > > Hi Dongjin,
> > > > > >
> > > > > > Thanks for the KIP, +1 (binding).
> > > > > >
> > > > > > Kind regards,
> > > > > >
> > > > > > Tom
> > > > > >
> > > > > > On Wed, Jun 9, 2021 at 5:16 PM Ismael Juma 
> > wrote:
> > > > > >
> > > > > > > I'm +1 on the proposed change. As I stated in the discuss
> > thread, I
> > > > > don't
> > > > > > > think we should rule out the buffer size config, but we could
> > list
> > > > that
> > > > > > as
> > > > > > > future work vs rejected alternatives.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Sat, Jun 5, 2021 at 2:37 PM Dongjin Lee 
> > > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I'd like to open a voting thread for KIP-390: Support
> > Compression
> > > > > Level
> > > > > > > > (rebooted):
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Support+Compression+Level
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Dongjin
> > > > > > > >
> > > > > > > > --
> > > > > > > > *Dongjin Lee*
> > > > > > > >
> > > > > > > > *A hitchhiker in 
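For context, a short sketch of how the per-codec level settings discussed in the thread above would look on a producer. compression.type is an existing producer config; the codec-specific level key shown here (compression.gzip.level) follows the per-codec naming direction of the updated KIP and is an assumption, not a config available in clients released at the time of this thread.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressionLevelSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-hostname:9092"); // assumed host:port
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
        // Proposed per-codec level key (assumed name); per-codec keys were chosen because
        // each codec has its own valid range of levels.
        props.put("compression.gzip.level", "6");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... send records as usual; the level only affects how batches are compressed.
        }
    }
}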

[jira] [Created] (KAFKA-16233) Review auto-commit continuously committing when no progress

2024-02-07 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-16233:
--

 Summary: Review auto-commit continuously committing when no 
progress 
 Key: KAFKA-16233
 URL: https://issues.apache.org/jira/browse/KAFKA-16233
 Project: Kafka
  Issue Type: Task
  Components: clients, consumer
Reporter: Lianet Magrans


When auto-commit is enabled, the consumer (legacy and new) will continuously 
send commit requests with the current positions, even if no progress is made 
and positions remain unchanged. We could consider whether this is really needed for 
some reason, or whether we could improve it and just send the auto-commit on the 
interval only if positions have moved, avoiding repeatedly sending the same commit 
request.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2625

2024-02-07 Thread Apache Jenkins Server
See 




Re: KRaft controller number of replicas

2024-02-07 Thread Daniel Saiz
Thanks a lot folks, this is really helpful.

> I believe the limitation that this documentation is hinting at is the
> motivation for KIP-996

I'll make sure to check out KIP-996 and the references linked there.
Thanks for the summary as well, I really appreciate it.


Cheers,

Dani

On Tue, Feb 6, 2024 at 6:52 PM Michael K. Edwards  wrote:
>
> A 5-node quorum doesn't make a lot of sense in a setting where those nodes
> are also Kafka brokers.  When they're ZooKeeper voters, a quorum* of 5
> makes a lot of sense, because you can take an unscheduled voter failure
> during a rolling-reboot scheduled maintenance without significant service
> impact.  You can also spread the ZK quorum across multiple AZs (or your
> cloud's equivalent), which I would rarely recommend doing with Kafka.
>
> The trend in Kafka development and deployment is towards KRaft, and there
> is probably no percentage in bucking that trend.  Just don't expect it to
> cover every "worst realistic case" scenario that a ZK-based deployment can.
>
> Scheduled maintenance on an (N+2 for read integrity, N+1 to stay writable)
> system adds vulnerability, and that's just something you have to build into
> your risk model.  N+1 is good enough for finely partitioned data in any use
> case that Kafka fits, because resilvering after a maintenance or a full
> broker loss is highly parallel.  N+1 is also acceptable for consumer group
> coordinator metadata, as long as you tune for aggressive compaction; I
> haven't looked at whether the coordinator code does a good job of
> parallelizing metadata replay, but if it doesn't, there's no real
> difficulty in fixing that.  For global metadata that needs globally
> serialized replay, which is what the controller metadata is, I was a lot
> happier with N+2 to stay writable.  But that's water under the bridge, and
> I'm just a spectator.
>
> Regards,
> - Michael
>
>
> * I hate this misuse of the word "quorum", but what can one do?
>
>
> On Tue, Feb 6, 2024, 8:51 AM Greg Harris 
> wrote:
>
> > Hi Dani,
> >
> > I believe the limitation that this documentation is hinting at is the
> > motivation for KIP-996 [1], and the notice in the documentation would
> > be removed once KIP-996 lands.
> > You can read the KIP for a brief explanation and link to a more
> > in-depth explanation of the failure scenario.
> >
> > While a 3-node quorum would typically be less reliable or available
> > than a 5-node quorum, it happens to be resistant to this failure mode
> > which makes the additional controllers liabilities instead of assets.
> > In the judgement of the maintainers at least, the risk of a network
> > partition which could trigger unavailability in a 5-node quorum is
> > higher than the risk of a 2-controller failure in a 3-node quorum, so
> > 3-node quorums are recommended.
> > You could do your own analysis and practical testing to make this
> > tradeoff yourself in your network context.
> >
> > I hope this helps!
> > Greg
> >
> > [1] https://cwiki.apache.org/confluence/display/KAFKA/KIP-996%3A+Pre-Vote
> >
> > On Tue, Feb 6, 2024 at 4:25 AM Daniel Saiz
> >  wrote:
> > >
> > > Hello,
> > >
> > > I would like to clarify a statement I found in the KRaft documentation,
> > in
> > > the deployment section [1]:
> > >
> > > > More than 3 controllers is not recommended in critical environments. In
> > > the rare case of a partial network failure it is possible for the cluster
> > > metadata quorum to become unavailable. This limitation will be addressed
> > in
> > > a future release of Kafka.
> > >
> > > I would like to clarify what it's meant by that sentence, as intuitively
> > I
> > > don't see why 3 replicas would be better than 5 (or more) for fault
> > > tolerance.
> > > What is the current limitation this is referring to?
> > >
> > > Thanks a lot.
> > >
> > >
> > > Cheers,
> > >
> > > Dani
> > >
> > > [1] https://kafka.apache.org/36/documentation.html#kraft_deployment
> >


Re: [VOTE] KIP-390: Support Compression Level (rebooted)

2024-02-07 Thread Divij Vaidya
Hey Mickael

Since this KIP was written, we have a new proposal to make the compression
completely pluggable
https://cwiki.apache.org/confluence/display/KAFKA/KIP-984%3A+Add+pluggable+compression+interface+to+Kafka.
If we implement that KIP, would it supersede the need for adding fine grain
compression controls in Kafka?

It might be beneficial to have a joint proposal of these two KIPs which may
satisfy both use cases.

--
Divij Vaidya



On Wed, Feb 7, 2024 at 11:14 AM Mickael Maison 
wrote:

> Hi,
>
> I'm resurrecting this old thread as this KIP would be a nice
> improvement and almost 3 years later the PR for this KIP has still not
> been merged!
>
> The reason is that during reviews we noticed the proposed
> configuration, compression.level, was not easy to use as each codec
> has its own valid range of levels [0].
>
> As proposed by Jun in the PR [1], I updated the KIP to use
> compression.<type>.level configurations instead of a single
> compression.level setting. This syntax would also line up with the
> proposal to add per-codec configuration options from KIP-780 [2]
> (still to be voted). I moved the original proposal to the rejected
> section.
>
> I've put the original voters and KIP author on CC. Let me know if you
> have any feedback.
>
> 0: https://github.com/apache/kafka/pull/10826
> 1: https://github.com/apache/kafka/pull/10826#issuecomment-1795952612
> 2:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-780%3A+Support+fine-grained+compression+options
>
> Thanks,
> Mickael
>
>
> On Fri, Jun 11, 2021 at 10:00 AM Dongjin Lee  wrote:
> >
> > This KIP has now passed with:
> >
> > - binding: +3 (Ismael, Tom, Konstantine)
> > - non-binding: +1 (Ryanne)
> >
> > Thanks again to all the supporters. I also updated the KIP by moving the
> > compression buffer option into the 'Future Works' section, as Ismael
> > proposed.
> >
> > Best,
> > Dongjin
> >
> >
> >
> > On Fri, Jun 11, 2021 at 3:03 AM Konstantine Karantasis
> >  wrote:
> >
> > > Makes sense. Looks like a good improvement. Thanks for including the
> > > evaluation in the proposal Dongjin.
> > >
> > > +1 (binding)
> > >
> > > Konstantine
> > >
> > > On Wed, Jun 9, 2021 at 6:59 PM Dongjin Lee  wrote:
> > >
> > > > Thanks Ismael, Tom and Ryanne,
> > > >
> > > > I am now updating the KIP with the further work. Sure, you won't be
> > > > disappointed.
> > > >
> > > > As of now:
> > > >
> > > > - binding: +2 (Ismael, Tom)
> > > > - non-binding: +1 (Ryanne)
> > > >
> > > > Anyone else?
> > > >
> > > > Best,
> > > > Dongjin
> > > >
> > > > On Thu, Jun 10, 2021 at 2:03 AM Tom Bentley 
> wrote:
> > > >
> > > > > Hi Dongjin,
> > > > >
> > > > > Thanks for the KIP, +1 (binding).
> > > > >
> > > > > Kind regards,
> > > > >
> > > > > Tom
> > > > >
> > > > > On Wed, Jun 9, 2021 at 5:16 PM Ismael Juma 
> wrote:
> > > > >
> > > > > > I'm +1 on the proposed change. As I stated in the discuss thread, I
> > > > > > don't think we should rule out the buffer size config, but we could
> > > > > > list that as future work vs rejected alternatives.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Sat, Jun 5, 2021 at 2:37 PM Dongjin Lee 
> > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I'd like to open a voting thread for KIP-390: Support Compression
> > > > > > > Level (rebooted):
> > > > > > >
> > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Support+Compression+Level
> > > > > > >
> > > > > > > Best,
> > > > > > > Dongjin
> > > > > > >
> > > > > > > --
> > > > > > > Dongjin Lee
> > > > > > >
> > > > > > > A hitchhiker in the mathematical world.
> > > > > > >
> > > > > > > github: github.com/dongjinleekr
> > > > > > > keybase: https://keybase.io/dongjinleekr
> > > > > > > linkedin: kr.linkedin.com/in/dongjinleekr
> > > > > > > speakerdeck: speakerdeck.com/dongjin
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Dongjin Lee
> > > >
> > > > A hitchhiker in the mathematical world.
> > > >
> > > > github: github.com/dongjinleekr
> > > > keybase: https://keybase.io/dongjinleekr
> > > > linkedin: kr.linkedin.com/in/dongjinleekr
> > > > speakerdeck: speakerdeck.com/dongjin
> > > >
> > >
> >
> >
> > --
> > Dongjin Lee
> >
> > A hitchhiker in the mathematical world.
> >
> > github: github.com/dongjinleekr
> > keybase: https://keybase.io/dongjinleekr
> >

Re: [VOTE] KIP-390: Support Compression Level (rebooted)

2024-02-07 Thread Mickael Maison
Hi,

I'm resurrecting this old thread as this KIP would be a nice
improvement and almost 3 years later the PR for this KIP has still not
been merged!

The reason is that during reviews we noticed the proposed
configuration, compression.level, was not easy to use as each codec
has its own valid range of levels [0].

As proposed by Jun in the PR [1], I updated the KIP to use
compression.<type>.level configurations instead of a single
compression.level setting. This syntax would also line up with the
proposal to add per-codec configuration options from KIP-780 [2]
(still to be voted). I moved the original proposal to the rejected
section.
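
As a rough sketch of what the per-codec form could look like on a producer
or broker, assuming the naming in the updated KIP (exact keys and valid
ranges are subject to the KIP and this vote):

  compression.type=zstd
  compression.zstd.level=10
  # or, for other codecs:
  # compression.gzip.level=6    # gzip levels are typically 1-9
  # compression.lz4.level=9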

I've put the original voters and KIP author on CC. Let me know if you
have any feedback.

0: https://github.com/apache/kafka/pull/10826
1: https://github.com/apache/kafka/pull/10826#issuecomment-1795952612
2: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-780%3A+Support+fine-grained+compression+options

Thanks,
Mickael


On Fri, Jun 11, 2021 at 10:00 AM Dongjin Lee  wrote:
>
> This KIP has now passed with:
>
> - binding: +3 (Ismael, Tom, Konstantine)
> - non-binding: +1 (Ryanne)
>
> Thanks again to all the supporters. I also updated the KIP by moving the
> compression buffer option into the 'Future Works' section, as Ismael
> proposed.
>
> Best,
> Dongjin
>
>
>
> On Fri, Jun 11, 2021 at 3:03 AM Konstantine Karantasis
>  wrote:
>
> > Makes sense. Looks like a good improvement. Thanks for including the
> > evaluation in the proposal Dongjin.
> >
> > +1 (binding)
> >
> > Konstantine
> >
> > On Wed, Jun 9, 2021 at 6:59 PM Dongjin Lee  wrote:
> >
> > > Thanks Ismael, Tom and Ryanne,
> > >
> > > I am now updating the KIP with the further work. Sure, you won't be
> > > disappointed.
> > >
> > > As of now:
> > >
> > > - binding: +2 (Ismael, Tom)
> > > - non-binding: +1 (Ryanne)
> > >
> > > Anyone else?
> > >
> > > Best,
> > > Dongjin
> > >
> > > On Thu, Jun 10, 2021 at 2:03 AM Tom Bentley  wrote:
> > >
> > > > Hi Dongjin,
> > > >
> > > > Thanks for the KIP, +1 (binding).
> > > >
> > > > Kind regards,
> > > >
> > > > Tom
> > > >
> > > > On Wed, Jun 9, 2021 at 5:16 PM Ismael Juma  wrote:
> > > >
> > > > > I'm +1 on the proposed change. As I stated in the discuss thread, I
> > > > > don't think we should rule out the buffer size config, but we could
> > > > > list that as future work vs rejected alternatives.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Sat, Jun 5, 2021 at 2:37 PM Dongjin Lee 
> > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to open a voting thread for KIP-390: Support Compression
> > > > > > Level (rebooted):
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Support+Compression+Level
> > > > > >
> > > > > > Best,
> > > > > > Dongjin
> > > > > >
> > > > > > --
> > > > > > Dongjin Lee
> > > > > >
> > > > > > A hitchhiker in the mathematical world.
> > > > > >
> > > > > > github: github.com/dongjinleekr
> > > > > > keybase: https://keybase.io/dongjinleekr
> > > > > > linkedin: kr.linkedin.com/in/dongjinleekr
> > > > > > speakerdeck: speakerdeck.com/dongjin
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > Dongjin Lee
> > >
> > > A hitchhiker in the mathematical world.
> > >
> > > github: github.com/dongjinleekr
> > > keybase: https://keybase.io/dongjinleekr
> > > linkedin: kr.linkedin.com/in/dongjinleekr
> > > speakerdeck: speakerdeck.com/dongjin
> > >
> >
>
>
> --
> Dongjin Lee
>
> A hitchhiker in the mathematical world.
>
> github: github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin