Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2189

2023-09-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-892: Transactional Semantics for StateStores

2023-09-10 Thread Colt McNealy
Howdy folks,

First, I wanted to say fantastic work, and thank you, Nick. I built your
branch (https://github.com/nicktelford/kafka/tree/KIP-892-3.5.0) and tested
our Streams app against three configurations: stock Kafka 3.5.0, your
`kip-892-3.5.0` branch, and your `kip-892-3.5.0` branch built with Speedb
OSS 2.3.0.1. And it worked! Including the global store (we don't have any
segmented stores, unfortunately).

The test involved running 3,000 workflows with 100 tasks each, with roughly
650 MB of state in total.

With Streams 3.5.0, I verified that an unclean shutdown indeed caused a
fresh restore from scratch. I also benchmarked my application:
- Running the benchmark took 211 seconds
- 1,421 tasks per second on one partition
- 8 seconds to restore the state (650 MB or so)

With KIP-892, I verified that an unclean shutdown does not cause a fresh
restore. I got the following benchmark results:
- Benchmark took 216 seconds
- 1,401 tasks per second on one partition
- 11 seconds to restore the state

I ran the restorations many times to rule out rounding error or noise; the
results were remarkably consistent. Additionally, I ran the restorations
with KIP-892 built with Speedb OSS. The restoration time consistently came
out at 10 seconds, an improvement over the 11 seconds observed with
RocksDB + KIP-892.

My application is bottlenecked mostly by serialization and deserialization,
so improving the performance of the state store doesn't really move our
throughput that much, and the processing performance (benchmark time,
tasks/second) is pretty close between KIP-892 and Streams 3.5.0. However, at
larger state store sizes, RocksDB performance begins to degrade, so that
might not hold once we pass 20 GB per partition.

-- QUESTION: Because we observed a significant (30% or so) and reproducible
slowdown during restoration, it seems that KIP-892 applies its checkpointing
behavior during restoration as well? If so, I would argue that this might
not be necessary: everything written during restoration is already
committed, so we don't need to change the behavior for restoration or
standby tasks. Perhaps we could simply write the offsets to RocksDB on every
batch (or even every 5 seconds or so).
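
To sketch what I mean (a rough illustration only -- none of these class or
method names come from the KIP-892 branch or the Streams restore API; it
just shows "apply each batch directly and checkpoint the offset on a timer"):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Collection;

    // Hypothetical restore-side handler: since restored data is already
    // committed, write it straight to the store and periodically persist the
    // changelog offset instead of going through the transactional commit path.
    public class OffsetCheckpointingRestoreHandler {
        private static final Duration CHECKPOINT_INTERVAL = Duration.ofSeconds(5);
        private Instant lastCheckpoint = Instant.now();

        // Called once per restored changelog batch.
        public void restoreBatch(Collection<byte[]> records, long batchEndOffset) {
            writeBatchToStore(records);               // apply records directly
            Instant now = Instant.now();
            if (Duration.between(lastCheckpoint, now).compareTo(CHECKPOINT_INTERVAL) >= 0) {
                writeChangelogOffset(batchEndOffset); // persist offset alongside the data
                lastCheckpoint = now;
            }
        }

        private void writeBatchToStore(Collection<byte[]> records) { /* write to RocksDB */ }

        private void writeChangelogOffset(long offset) { /* e.g. a reserved key */ }
    }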

-- Note: This was a very small-scale test, with <1GB of state (as I didn't
have time to spend hours building up state). In the past I have noted that
RocksDB performance degrades significantly after 25GB of state in one
store. Future work involves determining the performance impact of KIP-892
relative to trunk at larger scale, since it's possible that the relative
behaviors are far different (i.e. relative to trunk, 892's processing and
restoration throughput might be much better or much worse).

-- Note: For those who want to replicate the tests, you can find the branch
of our Streams app here:
https://github.com/littlehorse-enterprises/littlehorse/tree/minor/testing-streams-forks
The example I ran was `examples/hundred-tasks`, and I ran the server with
`./local-dev/do-server.sh one-partition`. The `STREAMS_TESTS.md` file has a
detailed breakdown of the testing.

Anyway, I'm super excited about this KIP, and if further testing goes well,
we plan to ship our product with a build of KIP-892, Speedb OSS, and
potentially a few other minor tweaks that we are thinking about.

Thanks Nick!

Ride well,
Colt McNealy

*Founder, LittleHorse.dev*


On Thu, Aug 24, 2023 at 3:19 AM Nick Telford  wrote:

> Hi Bruno,
>
> Thanks for taking the time to review the KIP. I'm back from leave now and
> intend to move this forwards as quickly as I can.
>
> Addressing your points:
>
> 1.
> Because flush() is part of the StateStore API, it's exposed to custom
> Processors, which might be making calls to flush(). This was actually the
> case in a few integration tests.
> To maintain as much compatibility as possible, I'd prefer not to make this
> an UnsupportedOperationException, as it will cause previously working
> Processors to start throwing exceptions at runtime.
> I agree that it doesn't make sense for it to proxy commit(), though, as
> that would cause it to violate the "StateStores commit only when the Task
> commits" rule.
> Instead, I think we should make this a no-op. That way, existing user
> Processors will continue to work as-before, without violation of store
> consistency that would be caused by premature flush/commit of StateStore
> data to disk.
> What do you think?
>
> 2.
> As stated in the JavaDoc, when a StateStore implementation is
> transactional, but is unable to estimate the uncommitted memory usage, the
> method will return -1.
> The intention here is to permit third-party implementations that may not be
> able to estimate memory usage.
>
> Yes, it will be 0 when nothing has been written to the store yet. I thought
> that was implied by "This method will return an approximation of the memory
> would be freed by the next call to {@link #commit(Map)}" and "@return The
> approximate size of all records awaiting {@link 

Re: KIP-976: Cluster-wide dynamic log adjustment for Kafka Connect

2023-09-10 Thread Sagar
Thanks Chris,

The changes look good to me.

1) Regarding the suggestion to reduce the key sizes, the only intent was to
make them easier to read. But I had missed the fact that
"org.apache.kafka.connect" isn't always going to be the prefix for these
keys. We can live with what we have.

2) Hmm, I think it just felt like a useful extension to the current
mechanism of changing log levels per worker. One place where it might come
in handy, and which can't be handled by any of the options listed in the
Future Work section, is when somebody wants to observe rebalance-related
activity on a subset of workers using finer-grained logs. I am not sure
it's a strong enough motivation, but as I said, it just felt like a useful
extension. I will leave it to you whether to add it or not (I am OK either
way).
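
For context, what I had in mind for targeting a subset of workers is really
just a loop over the existing per-worker endpoint. A rough sketch below (the
worker URLs are made up and the logger name is only an example; the
PUT /admin/loggers/{logger} call is the existing per-worker API):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class WorkerSubsetLogLevels {
        public static void main(String[] args) throws Exception {
            // Hypothetical subset of workers to target; adjust to your cluster.
            List<String> workers = List.of("http://worker1:8083", "http://worker2:8083");
            String logger = "org.apache.kafka.connect.runtime.distributed.DistributedHerder";
            HttpClient client = HttpClient.newHttpClient();
            for (String worker : workers) {
                // PUT /admin/loggers/{logger} adjusts the level on that worker only.
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(worker + "/admin/loggers/" + logger))
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString("{\"level\": \"TRACE\"}"))
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(worker + " -> " + response.statusCode());
            }
        }
    }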

Thanks!
Sagar.

On Thu, Sep 7, 2023 at 9:26 PM Chris Egerton 
wrote:

> Hi all,
>
> Thanks again for the reviews!
>
>
> Sagar:
>
> > The updated definition of last_modified looks good to me. As a
> > continuation to point number 2, could we also mention that this could be
> > used to get insights into the propagation of the cluster wide log level
> > updates. It is implicit but probably better to add it I feel.
>
> Sure, done. Added to the end of the "Config topic records" section: "There
> may be some delay between when a REST request with scope=cluster is
> received and when all workers have read the corresponding record from the
> config topic. The last modified timestamp (details above) can serve as a
> rudimentary tool to provide insight into the propagation of a cluster-wide
> log level adjustment."
>
> > Yeah I would lean on the side of calling it out explicitly. Since the
> > behaviour is similar to the current dynamically set log levels (i.e.
> > resetting to the log4j config file's levels), we can call it out stating
> > that similarity just for completeness' sake. It would be useful info for
> > new/medium level users reading the KIP, considering worker restarts are
> > not uncommon.
>
> Alright, did this too. Added near the end of the "Config topic records"
> section: "Restarting a worker will cause it to discard all cluster-wide
> dynamic log level adjustments, and revert to the levels specified in its
> Log4j configuration. This mirrors the current behavior with per-worker
> dynamic log level adjustments."
>
> > I had a nit-level suggestion, but not sure if it makes sense, but would
> > still call it out. The entire namespace name in the config record keys
> > (along with the logger-cluster prefix) seems to be a bit too verbose. My
> > first thought was to not have the prefix org.apache.kafka.connect in the
> > keys, considering it is the root namespace. But since logging can be
> > enabled at a root level, can we just use initials like (o.a.k.c), which
> > is also a standard practice? Let me know what you think?
>
> Considering these records aren't meant to be user-visible, there doesn't
> seem to be a pressing need to reduce their key sizes (though I'll happily
> admit that to human eyes, the format is a bit ugly). IMO the increase in
> implementation complexity isn't quite worth it, especially considering
> there are plenty of logging namespaces that won't begin with
> "org.apache.kafka.connect" (likely including all third-party connector
> code), like Yash mentions. Is there a motivation for this suggestion that
> I'm missing?
>
> > Lastly, I was also thinking if we could introduce a new parameter which
> > takes a subset of worker ids and enables logging for them in one go. But
> > this is already achievable by invoking the scope=worker endpoint n times
> > for n workers, so maybe not a necessary change. But this could be useful
> > on a large cluster. Do you think this is worth listing in the Future Work
> > section? It's not important so it can be ignored as well.
>
> Hmmm... I think I'd rather leave this out for now because I'm just not
> certain enough it'd be useful. The one advantage I can think of is
> targeting specific workers that are behind a load balancer, but being able
> to identify those workers would be a challenge in that scenario anyways.
> Besides that, are there any cases that couldn't be addressed more
> holistically by targeting based on connector/connector type, like Yash
> asks?
>
>
> Ashwin:
>
> Glad we're on the same page RE request forwarding and integration vs.
> system tests! Let me know if anything else comes up that you'd like to
> discuss.
>
>
> Yash:
>
> Glad that it makes sense to keep these changes ephemeral. I'm not quite
> confident enough to put persistent updates in the "Future work" section but
> have a sneaking suspicion that this isn't the last we'll see of that
> request. Time will tell...
>
>
> Thanks again, all!
>
> Cheers,
>
> Chris
>
> On Wed, Sep 6, 2023 at 8:36 AM Yash Mayya  wrote:
>
> > Hi Chris,
> >
> > Thanks for the clarification on the last modified timestamp tracking here
> > and on the KIP, things look good to me now.
> >
> > On the persistence front, I hadn't 

Re: Complete Kafka replication protocol description

2023-09-10 Thread Haruki Okada
Hi Jack,

Thank you for the great work, not only on the spec but also on the
comprehensive documentation of the replication protocol.
Actually, I wrote a TLA+ spec to verify unclean leader election behavior
before, so I will double-check my understanding against your complete spec :)


Thanks,

2023年9月10日(日) 21:42 David Jacot :

> Hi Jack,
>
> This is great! Thanks for doing it. I will look into it when I have a bit
> of time, likely after Current.
>
> Would you be interested in contributing it to the main repository? That
> would be a great contribution to the project. Having it there would allow
> the community to maintain it while changes to the protocol are made. That
> would also pave the way for having other specs in the future (e.g. new
> consumer group protocol).
>
> Best,
> David
>
> Le dim. 10 sept. 2023 à 12:45, Jack Vanlightly  a
> écrit :
>
> > Hi all,
> >
> > As part of my work on formally verifying different parts of Apache Kafka
> > and working on KIP-966 I have built up a lot of knowledge about how the
> > replication protocol works. Currently it is mostly documented across
> > various KIPs and in the code itself. I have written a complete protocol
> > description (with KIP-966 changes applied) which is inspired by the
> > precise but accessible style and language of the Raft paper. The idea is
> > that it makes it easier for contributors and anyone else interested in the
> > protocol to learn how it works, the fundamental properties it has and how
> > those
> > properties are supported by the various behaviors and conditions.
> >
> > It currently resides next to the TLA+ specification itself in my
> > kafka-tlaplus repository. I'd be interested to receive feedback from the
> > community.
> >
> >
> >
> > https://github.com/Vanlightly/kafka-tlaplus/blob/main/kafka_data_replication/kraft/kip-966/description/0_kafka_replication_protocol.md
> >
> > Thanks
> > Jack
> >
>


-- 

Okada Haruki
ocadar...@gmail.com



[jira] [Reopened] (KAFKA-15140) Improve TopicCommandIntegrationTest to be less flaky

2023-09-10 Thread Deng Ziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deng Ziming reopened KAFKA-15140:
-

This is still flaky; it seems the internal topic is also among the
under-min-ISR partitions.

https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13562/23/testReport/junit/kafka.admin/TopicCommandIntegrationTest/Build___JDK_8_and_Scala_2_12___testDescribeUnderMinIsrPartitionsMixed_String__quorum_kraft/

> Improve TopicCommandIntegrationTest to be less flaky
> 
>
> Key: KAFKA-15140
> URL: https://issues.apache.org/jira/browse/KAFKA-15140
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Reporter: Divij Vaidya
>Assignee: Lan Ding
>Priority: Minor
>  Labels: newbie
> Fix For: 3.6.0, 3.5.1
>
>
> *This is a good Jira for folks who are new to contributing to Kafka.*
> Tests in TopicCommandIntegrationTest get flaky from time to time. The 
> objective of the task is to make them more robust by doing the following:
> 1. Replace the usage of createAndWaitTopic() / adminClient.createTopics()
> and other places where we are creating a topic (without waiting) with
> TestUtils.createTopicWithAdmin(). The latter method already contains the
> functionality to create a topic and wait for its metadata to sync up.
> 2. Replace the number 6 in places such as
> "adminClient.createTopics(
> Collections.singletonList(new NewTopic("foo_bar", 1, 6.toShort)))" with a
> meaningful constant.
> 3. Add logs if an assertion fails. For example, lines such as
> "assertTrue(rows(0).startsWith(s"\tTopic: $testTopicName"), output)" should
> have a message argument which prints the actual output, so that we can see
> in the test logs what the output was when the assertion failed.
> 4. Replace occurrences of "\n" with System.lineSeparator(), which is
> platform independent.
> 5. We should wait for reassignment to complete whenever we are re-assigning
> partitions using alterconfig, before we call describe to validate it. We
> could use TestUtils.waitForAllReassignmentsToComplete().
> *Motivation of this task*
> Try to fix the flaky test behaviour such as observed in 
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-13924/5/testReport/junit/kafka.admin/TopicCommandIntegrationTest/Build___JDK_11_and_Scala_2_13___testDescribeUnderMinIsrPartitionsMixed_String__quorum_zk/]
>  
> {noformat}
> org.opentest4j.AssertionFailedError: expected:  but was: 
>   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>   at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>   at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)
>   at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:180)
>   at 
> app//kafka.admin.TopicCommandIntegrationTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandIntegrationTest.scala:794){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15286) Migrate ApiVersion related code to kraft

2023-09-10 Thread Deng Ziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deng Ziming resolved KAFKA-15286.
-
Resolution: Fixed

> Migrate ApiVersion related code to kraft
> 
>
> Key: KAFKA-15286
> URL: https://issues.apache.org/jira/browse/KAFKA-15286
> Project: Kafka
>  Issue Type: Task
>Reporter: Deng Ziming
>Assignee: Deng Ziming
>Priority: Major
>
> In many places involving ApiVersion, we only support ZK; we should move it
> forward to KRaft.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Apache Kafka 3.6.0 release

2023-09-10 Thread Ismael Juma
Hi Satish,

That sounds great. I think we should aim to only allow blockers
(regressions, impactful security issues, etc.) on the 3.6 branch until
3.6.0 is out.

Ismael


On Sat, Sep 9, 2023, 12:20 AM Satish Duggana 
wrote:

> Hi Ismael,
> It looks like we will publish RC0 by 14th Sep.
>
> Thanks,
> Satish.
>
> On Fri, 8 Sept 2023 at 19:23, Ismael Juma  wrote:
> >
> > Hi Satish,
> >
> > Do you have a sense of when we'll publish RC0?
> >
> > Thanks,
> > Ismael
> >
> > On Fri, Sep 8, 2023 at 6:27 AM David Arthur
> >  wrote:
> >
> > > Quick update on my two blockers: KAFKA-15435 is merged to trunk and
> > > cherry-picked to 3.6. I have a PR open for KAFKA-15441 and will
> > > hopefully get it merged today.
> > >
> > > -David
> > >
> > > On Fri, Sep 8, 2023 at 5:26 AM Ivan Yurchenko  wrote:
> > >
> > > > Hi Satish and all,
> > > >
> > > > I wonder if https://issues.apache.org/jira/browse/KAFKA-14993 should
> > > > be included in the 3.6 release plan. I'm thinking that, when
> > > > implemented, it would be a small change, but still a change in the RSM
> > > > contract: throw an exception instead of returning an empty InputStream.
> > > > Maybe it should be included right away to save a migration later? What
> > > > do you think?
> > > >
> > > > Best,
> > > > Ivan
> > > >
> > > > On Fri, Sep 8, 2023, at 02:52, Satish Duggana wrote:
> > > > > Hi Jose,
> > > > > Thanks for looking into this issue and resolving it with a quick
> > > > > fix.
> > > > >
> > > > > ~Satish.
> > > > >
> > > > > On Thu, 7 Sept 2023 at 21:40, José Armando García Sancio
> > > > >  wrote:
> > > > > >
> > > > > > Hi Satish,
> > > > > >
> > > > > > On Wed, Sep 6, 2023 at 4:58 PM Satish Duggana <
> > > > satish.dugg...@gmail.com> wrote:
> > > > > > >
> > > > > > > Hi Greg,
> > > > > > > It seems https://issues.apache.org/jira/browse/KAFKA-14273 has
> > > > > > > been there in 3.5.x too.
> > > > > >
> > > > > > I also agree that it should be a blocker for 3.6.0. It should have
> > > > > > been a blocker for those previous releases. I didn't fix it
> > > > > > because, unfortunately, I wasn't aware of the issue and jira.
> > > > > > I'll create a PR with a fix in case the original author doesn't
> > > > > > respond in time.
> > > > > >
> > > > > > Satish, do you agree?
> > > > > >
> > > > > > Thanks!
> > > > > > --
> > > > > > -José
> > > > >
> > > >
> > >
> > >
> > > --
> > > -David
> > >
>


Re: [VOTE] KIP-858: Handle JBOD broker disk failure in KRaft

2023-09-10 Thread Ron Dagostino
Hi Igor.  Thanks for all your work here.  Before I can vote, I have
the following questions/comments about the KIP:

> When multiple log.dirs are configured, a new property will be included
> in meta.properties — directory.id — which will identify each log directory
> with a UUID. The UUID is randomly generated for each log directory.

Since every log directory gets a random UUID assigned, even if just
one log dir is configured in the Broker, I think the above should not
be qualified with the phrase “When multiple log.dirs are configured".
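
(To make sure we're on the same page, I'd expect the meta.properties in every
configured log directory, even a lone one, to carry the new property. Something
along these lines, with made-up values:)

    version=1
    cluster.id=IpJwYGaqRCmvx9hA0u_NdQ
    node.id=1
    directory.id=b8tRS7h4TJ2Vt43Dp85v2A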


Similarly:

> When multiple log.dirs are configured, a new property — directory.id — will be
> expected in the meta.properties file in each log directory configured under 
> log.dirs.

I think the qualification "When multiple log.dirs are configured"
should be removed.


Colin had mentioned that in PartitionRecord he would prefer a new
(tagged) array for replica UUIDs, rather than creating the
ReplicaAssignment array.  While adding to an RPC is arguably less
intrusive than replacing, I am inclined to disagree with Colin's
suggestion for the following reason.  We have agreed to not specify
the log dir uuid in the case where a replica only has one registered
log dir.  If we were to add a new tagged array, it would become
necessary to specify all replica log dir uuids for all replicas in the
PartitionRecord if any one such replica had more than one log dir
configured.  By creating the ReplicaAssignment array we can just
specify the uuid -- or not -- for each replica based on whether that
replica itself has multiple registered log dirs or not.


The documentation of the new config "log.dir.failure.timeout.ms"
should indicate that shutdown will only occur if the broker is the
leader for at least one replica in the failed log directory.  The text
in the KIP says this later on, so I think the config documentation
should say it as well.


> a logical update to all partitions in that broker takes place,
> assigning the replica's directory to the single directory previously 
> registered –
> i.e. it is assumed that all replicas are still in the same directory, and 
> this transition
> to JBOD avoids creating partition change records.

I would like to confirm that upon writing a snapshot each
PartitionRecord will now contain an explicit directory for such
replicas since there is no longer just 1 possibility.  It might be
good to state it in the KIP for clarity assuming this statement is
correct.


The “Metadata caching” section states that replicas will also be
considered offline if the replica references a log directory UUID that
is not present in the hosting Broker's latest registration EXCEPT FOR
the case when there is just one registered directory and the presence
of any offline log dirs is not indicated (i.e. OfflineLogDirs is
false).  I wonder about the corner case where a broker that previously
had multiple log dirs is restarted with a new config that specifies
just a single log directory.  What would happen here?  If the broker
were not the leader then perhaps it would replicate the data into the
single log directory.  What would happen if it were the leader of a
partition that had been marked as offline?  Would we have data loss
even if other replicas still had data?

Ron

On Tue, Sep 5, 2023 at 7:46 AM ziming deng  wrote:
>
> Hi, Igor
> I’m +1(binding) for this, looking forward the PR.
>
> --
> Best,
> Ziming
>
> > On Jul 26, 2023, at 01:13, Igor Soarez  wrote:
> >
> > Hi everyone,
> >
> > Following a face-to-face discussion with Ron and Colin,
> > I have just made further improvements to this KIP:
> >
> >
> > 1. Every log directory gets a random UUID assigned, even if just one
> >   log dir is configured in the Broker.
> >
> > 2. All online log directories are registered, even if just one is
> > configured.
> >
> > 3. Partition-to-directory assignments are only performed if more than
> >   one log directory is configured/registered.
> >
> > 4. A successful reply from the Controller to a AssignReplicasToDirsRequest
> >   is taken as a guarantee that the metadata changes are
> >   successfully persisted.
> >
> > 5. Replica assignments that refer to log directories pending a failure
> >   notification are prioritized to guarantee the Controller and Broker
> >   agree on the assignments before acting on the failure notification.
> >
> > 6. The transition from one log directory to multiple log directories
> >   relies on a logical update to efficiently update directory assignments
> >   to the previously registered single log directory when that's possible.
> >
> > I have also introduced a configuration for the maximum time the broker
> > will keep trying to send a log directory notification before shutting down.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft
> >
> > Best,
> >
> > --
> > Igor
> >
>


Re: Complete Kafka replication protocol description

2023-09-10 Thread David Jacot
Hi Jack,

This is great! Thanks for doing it. I will look into it when I have a bit
of time, likely after Current.

Would you be interested in contributing it to the main repository? That
would be a great contribution to the project. Having it there would allow
the community to maintain it while changes to the protocol are made. That
would also pave the way for having other specs in the future (e.g. new
consumer group protocol).

Best,
David

Le dim. 10 sept. 2023 à 12:45, Jack Vanlightly  a
écrit :

> Hi all,
>
> As part of my work on formally verifying different parts of Apache Kafka
> and working on KIP-966 I have built up a lot of knowledge about how the
> replication protocol works. Currently it is mostly documented across
> various KIPs and in the code itself. I have written a complete protocol
> description (with KIP-966 changes applied) which is inspired by the precise
> but accessible style and language of the Raft paper. The idea is that it
> makes it easier for contributors and anyone else interested in the protocol
> to learn how it works, the fundamental properties it has and how those
> properties are supported by the various behaviors and conditions.
>
> It currently resides next to the TLA+ specification itself in my
> kafka-tlaplus repository. I'd be interested to receive feedback from the
> community.
>
>
> https://github.com/Vanlightly/kafka-tlaplus/blob/main/kafka_data_replication/kraft/kip-966/description/0_kafka_replication_protocol.md
>
> Thanks
> Jack
>


Complete Kafka replication protocol description

2023-09-10 Thread Jack Vanlightly
Hi all,

As part of my work on formally verifying different parts of Apache Kafka
and working on KIP-966 I have built up a lot of knowledge about how the
replication protocol works. Currently it is mostly documented across
various KIPs and in the code itself. I have written a complete protocol
description (with KIP-966 changes applied) which is inspired by the precise
but accessible style and language of the Raft paper. The idea is that it
makes it easier for contributors and anyone else interested in the protocol
to learn how it works, the fundamental properties it has and how those
properties are supported by the various behaviors and conditions.

It currently resides next to the TLA+ specification itself in my
kafka-tlaplus repository. I'd be interested to receive feedback from the
community.

https://github.com/Vanlightly/kafka-tlaplus/blob/main/kafka_data_replication/kraft/kip-966/description/0_kafka_replication_protocol.md

Thanks
Jack


[GitHub] [kafka-site] showuon commented on pull request #540: Added public key to KEYS.

2023-09-10 Thread via GitHub


showuon commented on PR #540:
URL: https://github.com/apache/kafka-site/pull/540#issuecomment-1712738051

   Let me know if you face any problem. Good luck!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] satishd commented on pull request #540: Added public key to KEYS.

2023-09-10 Thread via GitHub


satishd commented on PR #540:
URL: https://github.com/apache/kafka-site/pull/540#issuecomment-1712732279

   Thanks @showuon , merging it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] satishd merged pull request #540: Added public key to KEYS.

2023-09-10 Thread via GitHub


satishd merged PR #540:
URL: https://github.com/apache/kafka-site/pull/540


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org