Re: Logging in Kafka

2024-01-10 Thread Luke Chen
Hi Mickael,

I agree it's good to make clear what we're going to adopt for the
logging library.
Between keeping reload4j and adopting log4j2, I don't have any preference TBH.
But if Viktor is willing to drive log4j2 tasks, then we don't have any
reason not to adopt log4j2. (Thanks Viktor!)

Thanks
Luke


On Thu, Jan 11, 2024 at 4:02 AM Mickael Maison 
wrote:

> Hi,
>
> I think the only thing that would need to be done in 3.8 is the
> deprecation of the log4j appender (KIP-719). This was a pre-req for
> migrating to log4j2 due to conflicts when having both log4j and log4j2
> in the classpath. I don't know if that's still the case with reload4j
> but I think we should take the opportunity of deprecating it before
> 4.0 regardless to avoid relying on multiple logging libraries.
>
> Thanks,
> Mickael
>
> On Wed, Jan 10, 2024 at 7:58 PM Colin McCabe  wrote:
> >
> > Hi Mickael,
> >
> > Thanks for bringing this up.
> >
> > If we move to log4j2 in 4.0, is there any work that needs to be done in
> 3.8? That's probably what we should focus on.
> >
> > P.S. My assumption is that if the log4j2 work misses the train, we'll
> stick with reload4j in 4.0. Hopefully this won't happen.
> >
> > best,
> > Colin
> >
> >
> > On Wed, Jan 10, 2024, at 09:13, Ismael Juma wrote:
> > > Hi Viktor,
> > >
> > > A logging library that requires Java 17 is a deal breaker since we
> need to
> > > log from modules that will only require Java 11 in Apache Kafka 4.0.
> > >
> > > Ismael
> > >
> > > On Wed, Jan 10, 2024 at 6:43 PM Viktor Somogyi-Vass
> > >  wrote:
> > >
> > >> Hi Mickael,
> > >>
> > >> Reacting to your points:
> > >> 1. I think it's somewhat unfortunate that we provide an appender tied
> to a
> > >> chosen logger implementation. I think that this shouldn't be part of
> the
> > >> project in its current form. However, there is the slf4j2 Fluent API
> which
> > >> may solve our problem and turn KafkaLog4jAppender into a generic
> > >> implementation that doesn't depend on a specific library given that
> we can
> > >> upgrade to slf4j2. That is worth considering.
> > >> 2. Since KIP-1013 we'd move to Java 17 anyway by 4.0, so I don't feel it's
> > >> a problem if there's a specific dependency that has Java 17 as the minimum
> > >> supported version. As I read in your email thread with the log4j2
> > >> folks, it'll be supported for years to come and log4j3 isn't yet stable.
> > >> Since we already use log4j2 in our fork, I'm happy to contribute to
> this,
> > >> review PRs or drive it if needed.
> > >>
> > >> Thanks,
> > >> Viktor
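Viktor's first point refers to the SLF4J 2.x fluent API as a way to make the appender backend-agnostic. A stdlib-only sketch of the idea behind a fluent logging API — the call site builds an event and hands it to a pluggable sink, so no concrete backend leaks into the caller. All names here are illustrative, not the actual SLF4J API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class FluentLogSketch {
    record LogEvent(String level, String message, List<Object> args) {}

    static class EventBuilder {
        private final String level;
        private final Consumer<LogEvent> sink;
        private String message;
        private final List<Object> args = new ArrayList<>();

        EventBuilder(String level, Consumer<LogEvent> sink) { this.level = level; this.sink = sink; }
        EventBuilder setMessage(String msg) { this.message = msg; return this; }
        EventBuilder addArgument(Object arg) { this.args.add(arg); return this; }
        void log() { sink.accept(new LogEvent(level, message, args)); }
    }

    static class Logger {
        private final Consumer<LogEvent> sink;  // the only backend-specific piece
        Logger(Consumer<LogEvent> sink) { this.sink = sink; }
        EventBuilder atInfo() { return new EventBuilder("INFO", sink); }
    }

    public static void main(String[] args) {
        List<LogEvent> captured = new ArrayList<>();
        // Swap in any backend here; the call sites below never change.
        Logger log = new Logger(captured::add);
        log.atInfo().setMessage("produced {} records").addArgument(42).log();
        if (!captured.get(0).message().equals("produced {} records")) throw new AssertionError();
        System.out.println(captured.get(0).level() + ": " + captured.get(0).message() + " " + captured.get(0).args());
    }
}
```

The point is that only the sink is backend-specific, which is roughly why a fluent-style API could let the appender drop its hard dependency on one logging implementation.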
> > >>
> > >> On Wed, Jan 10, 2024 at 3:58 PM Mickael Maison <
> mickael.mai...@gmail.com>
> > >> wrote:
> > >>
> > >> > I asked for details about the future of log4j2 on the logging user
> list:
> > >> > https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl
> > >> >
> > >> > Let's see what they say.
> > >> >
> > >> > Thanks,
> > >> > Mickael
> > >> >
> > >> > On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma 
> wrote:
> > >> > >
> > >> > > Hi Mickael,
> > >> > >
> > >> > > Thanks for starting the discussion and for summarizing the state
> of
> > >> > play. I
> > >> > > agree with you that it would be important to understand how long
> log4j2
> > >> > > will be supported for. An alternative would be slf4j 2.x and
> > >> > > logback.
> > >> > >
> > >> > > Ismael
> > >> > >
> > >> > > On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison <
> > >> mickael.mai...@gmail.com
> > >> > >
> > >> > > wrote:
> > >> > >
> > >> > > > Hi,
> > >> > > >
> > >> > > > Starting a new thread to discuss the current logging situation
> in
> > >> > > > Kafka. I'll restate everything we know but see the [DISCUSS]
> Road to
> > >> > > > Kafka 4.0 if you are interested in what has already been said.
> [0]
> > >> > > >
> > >> > > > Currently Kafka uses SLF4J and reload4j as the logging backend. We
> > >> > > > had to adopt reload4j in 3.2.0 as log4j was end of life and had a
> > >> > > > few security issues.
> > >> > > >
> > >> > > > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> > >> > > > incompatibilities in the configuration mechanism with log4j/reload4j
> > >> > > > we decided to delay the upgrade to the next major release, Kafka 4.0.
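The configuration incompatibility mentioned above is easy to see side by side. A minimal console-logger fragment for illustration (not Kafka's actual shipped config files; only the pattern string mirrors the one Kafka uses):

```properties
# log4j 1.x / reload4j style (log4j.properties)
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# log4j2 style (log4j2.properties) -- same behaviour, entirely different keys
appender.stdout.type=Console
appender.stdout.name=STDOUT
appender.stdout.layout.type=PatternLayout
appender.stdout.layout.pattern=[%d] %p %m (%c)%n
rootLogger.level=INFO
rootLogger.appenderRef.stdout.ref=STDOUT
```

Because the key namespaces don't overlap at all, existing user config files cannot be read by log4j2, which is why the switch was deferred to a major release.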
> > >> > > >
> > >> > > > Kafka also currently provides a log4j appender. In 2022, we
> adopted
> > >> > > > KIP-719 to deprecate it since we wanted to switch to log4j2. At
> the
> > >> > > > time Apache Logging also had a Kafka appender that worked with
> > >> log4j2.
> > >> > > > They since deprecated that appender in log4j2 and it is not
> part of
> > >> > > > log4j3. [1]
> > >> > > >
> > >> > > > Log4j3 is also nearing release but it seems it will require
> Java 17.
> > >> > > > The website states Java 11 [2] but the artifacts from the latest
> > >> 3.0.0
> > >> > > > beta are built for Java 17. I was not able to find a clear
> > >> > > > maintenance statement about log4j2 once log4j3 gets released.
> > >> > > >
> > >> > > > The question is where do we go from here?

[jira] [Reopened] (KAFKA-15538) Client support for java regex based subscription

2024-01-10 Thread Phuc Hong Tran (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phuc Hong Tran reopened KAFKA-15538:


> Client support for java regex based subscription
> 
>
> Key: KAFKA-15538
> URL: https://issues.apache.org/jira/browse/KAFKA-15538
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Phuc Hong Tran
>Priority: Major
>  Labels: kip-848, kip-848-client-support
> Fix For: 3.8.0
>
>
> When using subscribe with a java regex (Pattern), we need to resolve it on 
> the client side to send the broker a list of topic names to subscribe to.
> Context:
> The new consumer group protocol uses [Google 
> RE2/J|https://github.com/google/re2j] for regular expressions and introduces 
> new methods in the consumer API to subscribe using a `SubscriptionPattern`. The 
> subscribe using a java `Pattern` will still be supported for a while but 
> eventually removed.
>  * When the subscribe with SubscriptionPattern is used, the client should 
> just send the regex to the broker and it will be resolved on the server side.
>  * In the case of the subscribe with Pattern, the regex should be resolved on 
> the client side.
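The client-side resolution described above amounts to filtering the topic names the client already knows from metadata through the java Pattern, then sending the broker the resulting explicit list. A self-contained sketch of that step (hypothetical helper, not the actual consumer internals):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class PatternResolutionSketch {
    // Resolve a java Pattern against topic names known from metadata on the
    // client side, producing the explicit topic list sent to the broker.
    static List<String> resolve(Pattern pattern, List<String> topicsFromMetadata) {
        return topicsFromMetadata.stream()
                .filter(t -> pattern.matcher(t).matches())
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> topics = List.of("orders", "orders-dlq", "payments", "audit");
        List<String> resolved = resolve(Pattern.compile("orders.*"), topics);
        if (!resolved.equals(List.of("orders", "orders-dlq"))) throw new AssertionError(resolved);
        System.out.println(resolved);  // [orders, orders-dlq]
    }
}
```

By contrast, with the new `SubscriptionPattern` the regex string itself (RE2/J syntax) is sent to the broker and resolution happens server-side, so no such client-side filtering is needed.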



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15538) Client support for java regex based subscription

2024-01-10 Thread Phuc Hong Tran (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phuc Hong Tran resolved KAFKA-15538.

Resolution: Fixed

> Client support for java regex based subscription
> 
>
> Key: KAFKA-15538
> URL: https://issues.apache.org/jira/browse/KAFKA-15538
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Phuc Hong Tran
>Priority: Major
>  Labels: kip-848, kip-848-client-support
> Fix For: 3.8.0
>
>
> When using subscribe with a java regex (Pattern), we need to resolve it on 
> the client side to send the broker a list of topic names to subscribe to.
> Context:
> The new consumer group protocol uses [Google 
> RE2/J|https://github.com/google/re2j] for regular expressions and introduces 
> new methods in the consumer API to subscribe using a `SubscriptionPattern`. The 
> subscribe using a java `Pattern` will still be supported for a while but 
> eventually removed.
>  * When the subscribe with SubscriptionPattern is used, the client should 
> just send the regex to the broker and it will be resolved on the server side.
>  * In the case of the subscribe with Pattern, the regex should be resolved on 
> the client side.





[jira] [Created] (KAFKA-16114) Fix partition not retention after cancel alter intra broker log dir task

2024-01-10 Thread wangliucheng (Jira)
wangliucheng created KAFKA-16114:


 Summary: Fix partition not retention after cancel alter intra 
broker log dir task 
 Key: KAFKA-16114
 URL: https://issues.apache.org/jira/browse/KAFKA-16114
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 3.6.1, 3.3.2
Reporter: wangliucheng








Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2560

2024-01-10 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2559

2024-01-10 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16113) AsyncKafkaConsumer: Add missing offset commit metrics

2024-01-10 Thread Philip Nee (Jira)
Philip Nee created KAFKA-16113:
--

 Summary: AsyncKafkaConsumer: Add missing offset commit metrics
 Key: KAFKA-16113
 URL: https://issues.apache.org/jira/browse/KAFKA-16113
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Philip Nee
Assignee: Philip Nee


The following metrics are missing from the AsyncKafkaConsumer:

commit-latency-avg
commit-latency-max
commit-rate
commit-total
committed-time-ns-total
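For context, the listed commit metrics follow the standard sensor shapes (average, max, rate, cumulative total). A stdlib-only sketch of what each value records — the names mirror the missing metrics, but the math here is generic, not Kafka's Metrics API:

```java
import java.util.ArrayList;
import java.util.List;

public class CommitMetricsSketch {
    // Records commit latencies and derives avg/max/total/rate from them.
    static class Sensor {
        private final List<Double> latenciesMs = new ArrayList<>();

        void record(double latencyMs) { latenciesMs.add(latencyMs); }

        double commitLatencyAvg() {
            return latenciesMs.stream().mapToDouble(d -> d).average().orElse(Double.NaN);
        }
        double commitLatencyMax() {
            return latenciesMs.stream().mapToDouble(d -> d).max().orElse(Double.NaN);
        }
        long commitTotal() { return latenciesMs.size(); }
        double commitRate(double windowSeconds) { return latenciesMs.size() / windowSeconds; }
    }

    public static void main(String[] args) {
        Sensor s = new Sensor();
        s.record(10.0); s.record(30.0); s.record(20.0);
        if (s.commitLatencyAvg() != 20.0) throw new AssertionError();
        if (s.commitLatencyMax() != 30.0) throw new AssertionError();
        if (s.commitTotal() != 3) throw new AssertionError();
        System.out.printf("avg=%.1f max=%.1f total=%d rate=%.2f/s%n",
                s.commitLatencyAvg(), s.commitLatencyMax(), s.commitTotal(), s.commitRate(60.0));
    }
}
```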





Re: Apache Kafka 3.7.0 Release

2024-01-10 Thread Stanislav Kozlovski
Thanks Colin,

With that, I believe we are out of blockers. I was traveling today and
couldn't build an RC - expect one to be published tomorrow (barring any
problems).

In the meantime, here is a PR for the 3.7 blog post:
https://github.com/apache/kafka-site/pull/578

Best,
Stan

On Wed, Jan 10, 2024 at 12:06 AM Colin McCabe  wrote:

> KAFKA-16094 has been fixed and backported to 3.7.
>
> Colin
>
>
> On Mon, Jan 8, 2024, at 14:52, Colin McCabe wrote:
> > On an unrelated note, I found a blocker bug related to upgrades from
> > 3.6 (and earlier) to 3.7.
> >
> > The JIRA is here:
> >   https://issues.apache.org/jira/browse/KAFKA-16094
> >
> > Fix here:
> >   https://github.com/apache/kafka/pull/15153
> >
> > best,
> > Colin
> >
> >
> > On Mon, Jan 8, 2024, at 14:47, Colin McCabe wrote:
> >> Hi Ismael,
> >>
> >> I wasn't aware of that. If we are required to publish all modules, then
> >> this is working as intended.
> >>
> >> I am a bit curious if we've discussed why we need to publish the server
> >> modules to Sonatype. Is there a discussion about the pros and cons of
> >> this somewhere?
> >>
> >> regards,
> >> Colin
> >>
> >> On Mon, Jan 8, 2024, at 14:09, Ismael Juma wrote:
> >>> All modules are published to Sonatype - that's a requirement. You may
> be
> >>> missing the fact that `core` is published as `kafka_2.13` and
> `kafka_2.12`.
> >>>
> >>> Ismael
> >>>
> >>> On Tue, Jan 9, 2024 at 12:00 AM Colin McCabe 
> wrote:
> >>>
>  Hi Ismael,
> 
>  It seems like both the metadata gradle module and the server-common
> module
>  are getting published to Sonatype as separate artifacts, unless I'm
>  misunderstanding something. Example:
> 
>  https://central.sonatype.com/search?q=kafka-server-common
> 
>  I don't see kafka-core getting published, but maybe other private
>  server-side gradle modules are getting published.
> 
>  This seems bad. Is there a reason to publish modules that are only
> used by
>  the server on Sonatype?
> 
>  best,
>  Colin
> 
> 
>  On Mon, Jan 8, 2024, at 12:50, Ismael Juma wrote:
>  > Hi Colin,
>  >
>  > I think you may have misunderstood what they mean by gradle
> metadata -
>  it's
>  > not the Kafka metadata module.
>  >
>  > Ismael
>  >
>  > On Mon, Jan 8, 2024 at 9:45 PM Colin McCabe 
> wrote:
>  >
>  >> Oops, hit send too soon. I see that #15127 was already merged. So
> we
>  >> should no longer be publishing :metadata as part of the clients
>  artifacts,
>  >> right?
>  >>
>  >> thanks,
>  >> Colin
>  >>
>  >>
>  >> On Mon, Jan 8, 2024, at 11:42, Colin McCabe wrote:
>  >> > Hi Apporv,
>  >> >
>  >> > Please remove the metadata module from any artifacts published
> for
>  >> > clients. It is only used by the server.
>  >> >
>  >> > best,
>  >> > Colin
>  >> >
>  >> >
>  >> > On Sun, Jan 7, 2024, at 03:04, Apoorv Mittal wrote:
>  >> >> Hi Colin,
>  >> >> Thanks for the response. The only reason for asking the
> question of
>  >> >> publishing the metadata is because that's present in previous
> client
>  >> >> releases. For more context, the description of PR
>  >> >>  holds the details
> and
>  >> waiting
>  >> >> for the confirmation there prior to the merge.
>  >> >>
>  >> >> Regards,
>  >> >> Apoorv Mittal
>  >> >> +44 7721681581
>  >> >>
>  >> >>
>  >> >> On Fri, Jan 5, 2024 at 10:22 PM Colin McCabe <
> cmcc...@apache.org>
>  >> wrote:
>  >> >>
>  >> >>> metadata is an internal gradle module. It is not used by
> clients.
>  So I
>  >> >>> don't see why you would want to publish it (unless I'm
>  misunderstanding
>  >> >>> something).
>  >> >>>
>  >> >>> best,
>  >> >>> Colin
>  >> >>>
>  >> >>>
>  >> >>> On Fri, Jan 5, 2024, at 10:05, Stanislav Kozlovski wrote:
>  >> >>> > Thanks for reporting the blockers, folks. Good job finding.
>  >> >>> >
>  >> >>> > I have one ask - can anybody with Gradle expertise help
> review
>  this
>  >> small
>  >> >>> > PR? https://github.com/apache/kafka/pull/15127 (+1, -1)
>  >> >>> > In particular, we are wondering whether we need to publish
> module
>  >> >>> metadata
>  >> >>> > as part of the gradle publishing process.
>  >> >>> >
>  >> >>> >
>  >> >>> > On Fri, Jan 5, 2024 at 3:56 PM Proven Provenzano
>  >> >>> >  wrote:
>  >> >>> >
>  >> >>> >> We have potentially one more blocker
>  >> >>> >> https://issues.apache.org/jira/browse/KAFKA-16082 which
> might
>  >> cause a
>  >> >>> data
>  >> >>> >> loss scenario with JBOD in KRaft.
>  >> >>> >> Initial analysis thought this is a problem and further
> review
>  looks
>  >> >>> like it
>  >> >>> >> isn't but we are continuing to dig into the issue to 

Re: [PR] 3.7: Add documentation for Kafka 3.7 [kafka-site]

2024-01-10 Thread via GitHub


stanislavkozlovski commented on code in PR #578:
URL: https://github.com/apache/kafka-site/pull/578#discussion_r1448042147


##
blog.html:
##
@@ -22,6 +22,119 @@
 
 
 Blog
+
+
+
+Apache Kafka 3.7.0 Release Announcement
+
+TODO: January 2024 - Stanislav Kozlovski (https://twitter.com/0xeed, @BdKozlovski)

Review Comment:
   I assume we have to edit & merge this on the day of the announcement, hence 
left a "TODO:" here



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: Kafka 3.0 support Java 8

2024-01-10 Thread Josep Prat
Hi,
We attempt to support the last 3 non-patch versions. This means we
would try to backport security fixes to a 3.x release (probably 3.8) for
6 to 9 months after the last release.

Best,

---
Josep Prat
Open Source Engineering Director, Aiven | josep.p...@aiven.io |
+491715557497 | aiven.io
Aiven Deutschland GmbH
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
Amtsgericht Charlottenburg, HRB 209739 B

On Wed, Jan 10, 2024, 21:43 Devinder Saggu 
wrote:

> Thanks.
>
> And how long Kafka 3.x will be supported.
>
> Thanks
>
> On Wed, Jan 10, 2024 at 3:40 PM Divij Vaidya 
> wrote:
>
> > All versions in the 3.x series of Kafka will support Java 8.
> >
> > Starting Kafka 4.0, we will drop support for Java 8. Clients will support
> > >= JDK 11 and other packages will support >= JDK 17. More details about
> > Java in Kafka 4.0 can be found here:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510
> >
> > Does this answer your question?
> >
> > --
> > Divij Vaidya
> >
> >
> >
> > On Wed, Jan 10, 2024 at 9:37 PM Devinder Saggu <
> saggusinghsu...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I wonder how long Kafka 3.0 can support Java 8.
> > >
> > > Thanks  & Regards,
> > >
> > > *Devinder Singh*
> > > P *Please consider the environment before printing this email*
> > >
> >
>
>
> --
> Thanks  & Regards,
>
> *Devinder Singh*
> P *Please consider the environment before printing this email*
>


Re: Kafka 3.0 support Java 8

2024-01-10 Thread Devinder Saggu
Thanks.

And how long Kafka 3.x will be supported.

Thanks

On Wed, Jan 10, 2024 at 3:40 PM Divij Vaidya 
wrote:

> All versions in the 3.x series of Kafka will support Java 8.
>
> Starting Kafka 4.0, we will drop support for Java 8. Clients will support
> >= JDK 11 and other packages will support >= JDK 17. More details about
> Java in Kafka 4.0 can be found here:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510
>
> Does this answer your question?
>
> --
> Divij Vaidya
>
>
>
> On Wed, Jan 10, 2024 at 9:37 PM Devinder Saggu 
> wrote:
>
> > Hi,
> >
> > I wonder how long Kafka 3.0 can support Java 8.
> >
> > Thanks  & Regards,
> >
> > *Devinder Singh*
> > P *Please consider the environment before printing this email*
> >
>


-- 
Thanks  & Regards,

*Devinder Singh*
P *Please consider the environment before printing this email*


Re: Kafka 3.0 support Java 8

2024-01-10 Thread Divij Vaidya
All versions in the 3.x series of Kafka will support Java 8.

Starting Kafka 4.0, we will drop support for Java 8. Clients will support
>= JDK 11 and other packages will support >= JDK 17. More details about
Java in Kafka 4.0 can be found here:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510

Does this answer your question?

--
Divij Vaidya



On Wed, Jan 10, 2024 at 9:37 PM Devinder Saggu 
wrote:

> Hi,
>
> I wonder how long Kafka 3.0 can support Java 8.
>
> Thanks  & Regards,
>
> *Devinder Singh*
> P *Please consider the environment before printing this email*
>


Kafka 3.0 support Java 8

2024-01-10 Thread Devinder Saggu
Hi,

I wonder how long Kafka 3.0 can support Java 8.

Thanks  & Regards,

*Devinder Singh*
P *Please consider the environment before printing this email*


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2558

2024-01-10 Thread Apache Jenkins Server
See 




Re: Logging in Kafka

2024-01-10 Thread Mickael Maison
Hi,

I think the only thing that would need to be done in 3.8 is the
deprecation of the log4j appender (KIP-719). This was a pre-req for
migrating to log4j2 due to conflicts when having both log4j and log4j2
in the classpath. I don't know if that's still the case with reload4j
but I think we should take the opportunity of deprecating it before
4.0 regardless to avoid relying on multiple logging libraries.

Thanks,
Mickael

On Wed, Jan 10, 2024 at 7:58 PM Colin McCabe  wrote:
>
> Hi Mickael,
>
> Thanks for bringing this up.
>
> If we move to log4j2 in 4.0, is there any work that needs to be done in 3.8? 
> That's probably what we should focus on.
>
> P.S. My assumption is that if the log4j2 work misses the train, we'll stick 
> with reload4j in 4.0. Hopefully this won't happen.
>
> best,
> Colin
>
>
> On Wed, Jan 10, 2024, at 09:13, Ismael Juma wrote:
> > Hi Viktor,
> >
> > A logging library that requires Java 17 is a deal breaker since we need to
> > log from modules that will only require Java 11 in Apache Kafka 4.0.
> >
> > Ismael
> >
> > On Wed, Jan 10, 2024 at 6:43 PM Viktor Somogyi-Vass
> >  wrote:
> >
> >> Hi Mickael,
> >>
> >> Reacting to your points:
> >> 1. I think it's somewhat unfortunate that we provide an appender tied to a
> >> chosen logger implementation. I think that this shouldn't be part of the
> >> project in its current form. However, there is the slf4j2 Fluent API which
> >> may solve our problem and turn KafkaLog4jAppender into a generic
> >> implementation that doesn't depend on a specific library given that we can
> >> upgrade to slf4j2. That is worth considering.
> >> 2. Since KIP-1013 we'd move to Java 17 anyway by 4.0, so I don't feel it's
> >> a problem if there's a specific dependency that has Java 17 as the minimum
> >> supported version. As I read in your email thread with the log4j2
> >> folks, it'll be supported for years to come and log4j3 isn't yet stable.
> >> Since we already use log4j2 in our fork, I'm happy to contribute to this,
> >> review PRs or drive it if needed.
> >>
> >> Thanks,
> >> Viktor
> >>
> >> On Wed, Jan 10, 2024 at 3:58 PM Mickael Maison 
> >> wrote:
> >>
> >> > I asked for details about the future of log4j2 on the logging user list:
> >> > https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl
> >> >
> >> > Let's see what they say.
> >> >
> >> > Thanks,
> >> > Mickael
> >> >
> >> > On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
> >> > >
> >> > > Hi Mickael,
> >> > >
> >> > > Thanks for starting the discussion and for summarizing the state of
> >> > play. I
> >> > > agree with you that it would be important to understand how long log4j2
> >> > > will be supported for. An alternative would be slf4j 2.x and logback.
> >> > >
> >> > > Ismael
> >> > >
> >> > > On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison <
> >> mickael.mai...@gmail.com
> >> > >
> >> > > wrote:
> >> > >
> >> > > > Hi,
> >> > > >
> >> > > > Starting a new thread to discuss the current logging situation in
> >> > > > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> >> > > > Kafka 4.0 if you are interested in what has already been said. [0]
> >> > > >
> >> > > > Currently Kafka uses SLF4J and reload4j as the logging backend. We
> >> had
> >> > > > to adopt reload4j in 3.2.0 as log4j was end of life and had a few
> >> > > > security issues.
> >> > > >
> >> > > > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> >> > > > incompatibilities in the configuration mechanism with log4j/reload4j
> >> > > > we decided to delay the upgrade to the next major release, Kafka 4.0.
> >> > > >
> >> > > > Kafka also currently provides a log4j appender. In 2022, we adopted
> >> > > > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> >> > > > time Apache Logging also had a Kafka appender that worked with
> >> log4j2.
> >> > > > They since deprecated that appender in log4j2 and it is not part of
> >> > > > log4j3. [1]
> >> > > >
> >> > > > Log4j3 is also nearing release but it seems it will require Java 17.
> >> > > > The website states Java 11 [2] but the artifacts from the latest
> >> 3.0.0
> >> > > > beta are built for Java 17. I was not able to find a clear
> >> > > > maintenance statement about log4j2 once log4j3 gets released.
> >> > > >
> >> > > > The question is where do we go from here?
> >> > > > We can stick with our plans:
> >> > > > 1. Deprecate the appender in the next 3.x release and plan to remove
> >> > it in
> >> > > > 4.0
> >> > > > 2. Do the necessary work to switch to log4j2 in 4.0
> >> > > > If so we need people to drive these work items. We have PRs for these
> >> > > > with hopefully the bulk of the code but they need
> >> > > > rebasing/completing/reviewing.
> >> > > >
> >> > > > Otherwise we can reconsider KIP-653 and/or KIP-719.
> >> > > >
> >> > > > Assuming log4j2 does not go end of life in the near future (We can
> >> > > > reach out to Apache Logging to clarify that point.), I think it still
> >> > > > makes sense to adopt it. I would also go ahead and deprecate our
> >> > > > appender.

[jira] [Created] (KAFKA-16112) Review JMX metrics in Async Consumer and determine the missing ones

2024-01-10 Thread Kirk True (Jira)
Kirk True created KAFKA-16112:
-

 Summary: Review JMX metrics in Async Consumer and determine the 
missing ones
 Key: KAFKA-16112
 URL: https://issues.apache.org/jira/browse/KAFKA-16112
 Project: Kafka
  Issue Type: Task
  Components: clients, consumer
Reporter: Kirk True
Assignee: Philip Nee
 Fix For: 3.8.0








[jira] [Created] (KAFKA-16111) Implement tests for tricky rebalance callbacks scenarios

2024-01-10 Thread Kirk True (Jira)
Kirk True created KAFKA-16111:
-

 Summary: Implement tests for tricky rebalance callbacks scenarios
 Key: KAFKA-16111
 URL: https://issues.apache.org/jira/browse/KAFKA-16111
 Project: Kafka
  Issue Type: Test
  Components: clients, consumer
Reporter: Kirk True
Assignee: Kirk True
 Fix For: 3.8.0








[jira] [Created] (KAFKA-16110) Implement consumer performance tests

2024-01-10 Thread Kirk True (Jira)
Kirk True created KAFKA-16110:
-

 Summary: Implement consumer performance tests
 Key: KAFKA-16110
 URL: https://issues.apache.org/jira/browse/KAFKA-16110
 Project: Kafka
  Issue Type: New Feature
  Components: clients, consumer
Reporter: Kirk True
Assignee: Kirk True
 Fix For: 3.8.0








[jira] [Created] (KAFKA-16109) Ensure system tests cover the "simple consumer + commit" use case

2024-01-10 Thread Kirk True (Jira)
Kirk True created KAFKA-16109:
-

 Summary: Ensure system tests cover the "simple consumer + commit" 
use case
 Key: KAFKA-16109
 URL: https://issues.apache.org/jira/browse/KAFKA-16109
 Project: Kafka
  Issue Type: Improvement
  Components: clients, consumer, system tests
Reporter: Kirk True
Assignee: Kirk True
 Fix For: 3.8.0








Re: Logging in Kafka

2024-01-10 Thread Colin McCabe
Hi Mickael,

Thanks for bringing this up.

If we move to log4j2 in 4.0, is there any work that needs to be done in 3.8? 
That's probably what we should focus on.

P.S. My assumption is that if the log4j2 work misses the train, we'll stick 
with reload4j in 4.0. Hopefully this won't happen.

best,
Colin


On Wed, Jan 10, 2024, at 09:13, Ismael Juma wrote:
> Hi Viktor,
>
> A logging library that requires Java 17 is a deal breaker since we need to
> log from modules that will only require Java 11 in Apache Kafka 4.0.
>
> Ismael
>
> On Wed, Jan 10, 2024 at 6:43 PM Viktor Somogyi-Vass
>  wrote:
>
>> Hi Mickael,
>>
>> Reacting to your points:
>> 1. I think it's somewhat unfortunate that we provide an appender tied to a
>> chosen logger implementation. I think that this shouldn't be part of the
>> project in its current form. However, there is the slf4j2 Fluent API which
>> may solve our problem and turn KafkaLog4jAppender into a generic
>> implementation that doesn't depend on a specific library given that we can
>> upgrade to slf4j2. That is worth considering.
>> 2. Since KIP-1013 we'd move to Java 17 anyway by 4.0, so I don't feel it's
>> a problem if there's a specific dependency that has Java 17 as the minimum
>> supported version. As I read in your email thread with the log4j2
>> folks, it'll be supported for years to come and log4j3 isn't yet stable.
>> Since we already use log4j2 in our fork, I'm happy to contribute to this,
>> review PRs or drive it if needed.
>>
>> Thanks,
>> Viktor
>>
>> On Wed, Jan 10, 2024 at 3:58 PM Mickael Maison 
>> wrote:
>>
>> > I asked for details about the future of log4j2 on the logging user list:
>> > https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl
>> >
>> > Let's see what they say.
>> >
>> > Thanks,
>> > Mickael
>> >
>> > On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
>> > >
>> > > Hi Mickael,
>> > >
>> > > Thanks for starting the discussion and for summarizing the state of
>> > play. I
>> > > agree with you that it would be important to understand how long log4j2
>> > > will be supported for. An alternative would be slf4j 2.x and logback.
>> > >
>> > > Ismael
>> > >
>> > > On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison <
>> mickael.mai...@gmail.com
>> > >
>> > > wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > Starting a new thread to discuss the current logging situation in
>> > > > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
>> > > > Kafka 4.0 if you are interested in what has already been said. [0]
>> > > >
>> > > > Currently Kafka uses SLF4J and reload4j as the logging backend. We
>> had
>> > > > to adopt reload4j in 3.2.0 as log4j was end of life and had a few
>> > > > security issues.
>> > > >
>> > > > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
>> > > > incompatibilities in the configuration mechanism with log4j/reload4j
>> > > > we decided to delay the upgrade to the next major release, Kafka 4.0.
>> > > >
>> > > > Kafka also currently provides a log4j appender. In 2022, we adopted
>> > > > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
>> > > > time Apache Logging also had a Kafka appender that worked with
>> log4j2.
>> > > > They since deprecated that appender in log4j2 and it is not part of
>> > > > log4j3. [1]
>> > > >
>> > > > Log4j3 is also nearing release but it seems it will require Java 17.
>> > > > The website states Java 11 [2] but the artifacts from the latest
>> 3.0.0
>> > > > beta are built for Java 17. I was not able to find a clear
>> > > > maintenance statement about log4j2 once log4j3 gets released.
>> > > >
>> > > > The question is where do we go from here?
>> > > > We can stick with our plans:
>> > > > 1. Deprecate the appender in the next 3.x release and plan to remove
>> > it in
>> > > > 4.0
>> > > > 2. Do the necessary work to switch to log4j2 in 4.0
>> > > > If so we need people to drive these work items. We have PRs for these
>> > > > with hopefully the bulk of the code but they need
>> > > > rebasing/completing/reviewing.
>> > > >
>> > > > Otherwise we can reconsider KIP-653 and/or KIP-719.
>> > > >
>> > > > Assuming log4j2 does not go end of life in the near future (We can
>> > > > reach out to Apache Logging to clarify that point.), I think it still
>> > > > makes sense to adopt it. I would also go ahead and deprecate our
>> > > > appender.
>> > > >
>> > > > Thanks,
>> > > > Mickael
>> > > >
>> > > > 0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
>> > > > 1: https://github.com/apache/logging-log4j2/issues/1951
>> > > > 2: https://logging.apache.org/log4j/3.x/#requirements
>> > > >
>> >
>>


Re: [DISCUSS] KIP-1014: Managing Unstable Metadata Versions in Apache Kafka

2024-01-10 Thread Colin McCabe
On Wed, Jan 10, 2024, at 09:16, Justine Olshan wrote:
> Hmm it seems like Colin and Proven are disagreeing about whether we can swap
> unstable metadata versions.
>
>>  When we reorder, we are always allocating a new MV and we are never
> reusing an existing MV even if it was also unstable.
>
>> Given that this is true, there's no reason to have special rules about
> what we can and can't do with unstable MVs. We can do anything
>
> I don't have a strong preference either way, but I think we should agree on
> one approach.
> The benefit of reordering and reusing is that we can release features that
> are ready earlier and we have more flexibility. With the approach where we
> always create a new MV, I am concerned with having many "empty" MVs. This
> would encourage waiting until the release before we decide an incomplete
> feature is not ready and moving its MV into the future. (The
> abandoning comment I made earlier -- that is consistent with Proven's
> approach)
>
> I think the only potential issue with reordering is that it could be a bit
> confusing and *potentially *prone to errors. Note I say potentially because
> I think it depends on folks' understanding with this new unstable metadata
> version concept. I echo Federico's comments about making sure the risks are
> highlighted.
>

I agree that the risks should be highlighted. That's why I mentioned that ERROR 
messages should be logged, and so forth.

When we say you can't use this in production, we really really mean it. You 
will not be able to upgrade. Unstable metadata versions are for developers only.

best,
Colin
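The reordering rule being debated — when a feature becomes production ready, unstable features at or below its MV are moved to freshly allocated ordinals, and an ordinal is never reused — can be sketched as a small model (illustrative only, not Kafka's actual MetadataVersion code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UnstableMvSketch {
    // Each feature maps to an MV ordinal; stable ordinals are frozen, unstable
    // ones may be reassigned, but only ever to a fresh, higher ordinal.
    private final Map<String, Integer> featureToMv = new LinkedHashMap<>();
    private int nextMv;
    private int highestStable;

    UnstableMvSketch(int highestStable) {
        this.highestStable = highestStable;
        this.nextMv = highestStable + 1;
    }

    int assignUnstable(String feature) {
        int mv = nextMv++;
        featureToMv.put(feature, mv);
        return mv;
    }

    // Promote a feature to stable; every other unstable feature at or below
    // its MV is pushed to a newly allocated ordinal (never reused).
    void stabilize(String feature) {
        int promoted = featureToMv.get(feature);
        highestStable = promoted;
        for (Map.Entry<String, Integer> e : featureToMv.entrySet()) {
            if (!e.getKey().equals(feature) && e.getValue() <= promoted) {
                e.setValue(nextMv++);
            }
        }
    }

    public static void main(String[] args) {
        UnstableMvSketch mvs = new UnstableMvSketch(10);
        mvs.assignUnstable("featureA");         // MV 11, unstable
        int b = mvs.assignUnstable("featureB"); // MV 12, unstable
        mvs.stabilize("featureB");              // featureB becomes production ready first
        // featureA moved past featureB to a fresh ordinal; 11 is never reused
        if (mvs.featureToMv.get("featureA") <= b) throw new AssertionError();
        System.out.println(mvs.featureToMv);    // {featureA=13, featureB=12}
    }
}
```

This matches Proven's description: the feature set at a given unstable ordinal can only shrink under reordering, so a developer on an unstable MV may see functionality disappear but never magically appear.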

>
> Thanks,
>
> Justine
>
> On Wed, Jan 10, 2024 at 1:16 AM Federico Valeri 
> wrote:
>
>> Hi folks,
>>
>> > If you use an unstable MV, you probably won't be able to upgrade your
>> software. Because whenever something changes, you'll probably get
>> serialization exceptions being thrown inside the controller. Fatal ones.
>>
>> Thanks for this clarification. I think this concrete risk should be
>> highlighted in the KIP and in the "unstable.metadata.versions.enable"
>> documentation.
>>
>> In the test plan, should we also have one system test checking that
>> "features with a stable MV will never have that MV changed"?
>>
>> On Wed, Jan 10, 2024 at 8:16 AM Colin McCabe  wrote:
>> >
>> > On Tue, Jan 9, 2024, at 18:56, Proven Provenzano wrote:
>> > > Hi folks,
>> > >
>> > > Thank you for the questions.
>> > >
>> > > Let me clarify about reorder first. The reorder of unstable metadata
>> > > versions should be infrequent.
>> >
>> > Why does it need to be infrequent? We should be able to reorder unstable
>> metadata versions as often as we like. There are no guarantees about
>> unstable MVs.
>> >
>> > > The time you reorder is when a feature that
>> > > requires a higher metadata version to enable becomes "production
>> ready" and
>> > > the features with unstable metadata versions less than the new stable
>> one
>> > > are moved to metadata versions greater than the new stable feature.
>> When we
>> > > reorder, we are always allocating a new MV and we are never reusing an
>> > > existing MV even if it was also unstable. This way a developer
>> upgrading
>> > > their environment with a specific unstable MV might see existing
>> > > functionality stop working but they won't see new MV dependent
>> > > functionality magically appear. The feature set for a given unstable MV
>> > > version can only decrease with reordering.
>> >
>> > If you use an unstable MV, you probably won't be able to upgrade your
>> software. Because whenever something changes, you'll probably get
>> serialization exceptions being thrown inside the controller. Fatal ones.
>> >
>> > Given that this is true, there's no reason to have special rules about
>> what we can and can't do with unstable MVs. We can do anything.
>> >
>> > >
>> > > How do we define "production ready" and when should we bump
>> > > LATEST_PRODUCTION? I would like to define it to be the point where the
>> > > feature is code complete with tests and the KIP for it is approved.
>> However
>> > > even with this definition if the feature later develops a major issue
>> it
>> > > could still block future features until the issue is fixed which is
>> what we
>> > > are trying to avoid here. We could be much more formal about this and
>> let
>> > > the release manager for a release define what is stable for a given
>> release
> > > and then do the bump on the branch just after the branch is created.
>> When
>> > > an RC candidate is accepted, the bump would be backported. I would
>> like to
>> > > hear other ideas here.
>> > >
>> >
>> > Yeah, it's an interesting question. Overall, I think developers should
>> define when a feature is production ready.
>> >
>> > The question to ask is, "are you ready to take this feature to
>> production in your workplace?" I think most developers do have a sense of
>> this. Obviously bugs and mistakes can happen, but I think this standard
>> would avoid most of the 

Re: [PROPOSAL] Add commercial support page on website

2024-01-10 Thread Kenneth Eversole
I agree with Divij here and, to be more pointed, I worry that if we go down
the path of adding vendors to a list it comes off as endorsing their
product, not to mention it could be a huge security risk for novice users. I
would rather this be a callout to other purely open source tooling, such as
Cruise Control.

Divij brings up a good question:
1. What value does the addition of this page bring to the users of Apache
Kafka?

I think the community would be better served by a more synchronous
line of communication such as Slack/Discord, and we could call that out here. It
would be more in line with other major open source projects.

---
Kenneth Eversole

On Wed, Jan 10, 2024 at 10:30 AM Divij Vaidya 
wrote:

> I don't see a need for this. What additional information does this provide
> over what can be found via a quick google search?
>
> My primary concern is that we are getting into the business of listing
> vendors on the project site, which brings its own complications without
> adding much additional value for users. In the spirit of being vendor
> neutral, I would try to avoid this as much as possible.
>
> So, my question to you is:
> 1. What value does the addition of this page bring to the users of Apache
> Kafka?
> 2. When a new PR is submitted to add a vendor, what criteria do we have to
> decide whether to add them or not? If we keep a blanket criterion of
> accepting all PRs, then we may end up in a situation where the link
> redirects to a phishing page or nefarious website. Hence, we might have to
> at least perform some basic due diligence, which adds overhead to the
> resources of the community.
>
> --
> Divij Vaidya
>
>
>
> On Wed, Jan 10, 2024 at 5:00 PM fpapon  wrote:
>
> > Hi,
> >
> > After starting a first thread on this topic (
> > https://lists.apache.org/thread/kkox33rhtjcdr5zztq3lzj7c5s7k9wsr), I
> > would like to propose a PR:
> >
> > https://github.com/apache/kafka-site/pull/577
> >
> > The purpose of this proposal is to help users find support for SLAs,
> > training, consulting, and whatever else is not provided by the community
> > since, as in many ASF projects, no commercial support is
> > provided by the foundation. I think it could help with the adoption and
> > the
> > growth of the project because users
> > need commercial support for production issues.
> >
> > If the community agrees with this idea and wants to move forward: I have
> > added one company in the PR, but anyone can add more by providing a new
> > PR
> > to complete the list. If people want me to add others, you can reply to
> > this
> > thread, because it will be better to have several companies at the first
> > publication of the page.
> >
> > Just provide the company name and a short description of the services
> > offered
> > around Apache Kafka. The information must be factual and informational in
> > nature and not be a marketing statement.
> >
> > regards,
> >
> > François
> >
> >
> >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #134

2024-01-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-1014: Managing Unstable Metadata Versions in Apache Kafka

2024-01-10 Thread Justine Olshan
Hmm it seems like Colin and Proven are disagreeing about whether we can swap
unstable metadata versions.

>  When we reorder, we are always allocating a new MV and we are never
reusing an existing MV even if it was also unstable.

> Given that this is true, there's no reason to have special rules about
what we can and can't do with unstable MVs. We can do anything

I don't have a strong preference either way, but I think we should agree on
one approach.
The benefit of reordering and reusing is that we can release features that
are ready earlier and we have more flexibility. With the approach where we
always create a new MV, I am concerned with having many "empty" MVs. This
would encourage waiting until the release before we decide an incomplete
feature is not ready and moving its MV into the future. (The
abandoning comment I made earlier -- that is consistent with Proven's
approach)

I think the only potential issue with reordering is that it could be a bit
confusing and *potentially* prone to errors. Note I say potentially because
I think it depends on folks' understanding of this new unstable metadata
version concept. I echo Federico's comments about making sure the risks are
highlighted.

Thanks,

Justine

On Wed, Jan 10, 2024 at 1:16 AM Federico Valeri 
wrote:

> Hi folks,
>
> > If you use an unstable MV, you probably won't be able to upgrade your
> software. Because whenever something changes, you'll probably get
> serialization exceptions being thrown inside the controller. Fatal ones.
>
> Thanks for this clarification. I think this concrete risk should be
> highlighted in the KIP and in the "unstable.metadata.versions.enable"
> documentation.
>
> In the test plan, should we also have one system test checking that
> "features with a stable MV will never have that MV changed"?
>
> On Wed, Jan 10, 2024 at 8:16 AM Colin McCabe  wrote:
> >
> > On Tue, Jan 9, 2024, at 18:56, Proven Provenzano wrote:
> > > Hi folks,
> > >
> > > Thank you for the questions.
> > >
> > > Let me clarify about reorder first. The reorder of unstable metadata
> > > versions should be infrequent.
> >
> > Why does it need to be infrequent? We should be able to reorder unstable
> metadata versions as often as we like. There are no guarantees about
> unstable MVs.
> >
> > > The time you reorder is when a feature that
> > > requires a higher metadata version to enable becomes "production
> ready" and
> > > the features with unstable metadata versions less than the new stable
> one
> > > are moved to metadata versions greater than the new stable feature.
> When we
> > > reorder, we are always allocating a new MV and we are never reusing an
> > > existing MV even if it was also unstable. This way a developer
> upgrading
> > > their environment with a specific unstable MV might see existing
> > > functionality stop working but they won't see new MV dependent
> > > functionality magically appear. The feature set for a given unstable MV
> > > version can only decrease with reordering.
> >
> > If you use an unstable MV, you probably won't be able to upgrade your
> software. Because whenever something changes, you'll probably get
> serialization exceptions being thrown inside the controller. Fatal ones.
> >
> > Given that this is true, there's no reason to have special rules about
> what we can and can't do with unstable MVs. We can do anything.
> >
> > >
> > > How do we define "production ready" and when should we bump
> > > LATEST_PRODUCTION? I would like to define it to be the point where the
> > > feature is code complete with tests and the KIP for it is approved.
> However
> > > even with this definition if the feature later develops a major issue
> it
> > > could still block future features until the issue is fixed which is
> what we
> > > are trying to avoid here. We could be much more formal about this and
> let
> > > the release manager for a release define what is stable for a given
> release
> > > and then do the bump on the branch just after the branch is created.
> When
> > > an RC candidate is accepted, the bump would be backported. I would
> like to
> > > hear other ideas here.
> > >
> >
> > Yeah, it's an interesting question. Overall, I think developers should
> define when a feature is production ready.
> >
> > The question to ask is, "are you ready to take this feature to
> production in your workplace?" I think most developers do have a sense of
> this. Obviously bugs and mistakes can happen, but I think this standard
> would avoid most of the issues that we're trying to avoid by having
> unstable MVs in the first place.
> >
> > ELR is a good example. Nobody would have said that it was production
> ready in 3.7 ... hence it belonged (and still belongs) in an unstable MV,
> until that changes (hopefully soon :) )
> >
> > best,
> > Colin
> >
> > > --Proven
> > >
> > > On Tue, Jan 9, 2024 at 3:26 PM Colin McCabe 
> wrote:
> > >
> > >> Hi Justine,
> > >>
> > >> Yes, this is an important point to clarify. Proven can comment 

Re: Logging in Kafka

2024-01-10 Thread Ismael Juma
Hi Viktor,

A logging library that requires Java 17 is a deal breaker since we need to
log from modules that will only require Java 11 in Apache Kafka 4.0.

Ismael

On Wed, Jan 10, 2024 at 6:43 PM Viktor Somogyi-Vass
 wrote:

> Hi Mickael,
>
> Reacting to your points:
> 1. I think it's somewhat unfortunate that we provide an appender tied to a
> chosen logger implementation. I think that this shouldn't be part of the
> project in its current form. However, there is the slf4j2 Fluent API, which
> may solve our problem and turn KafkaLog4jAppender into a generic
> implementation that doesn't depend on a specific library, given that we can
> upgrade to slf4j2. That is worth considering.
> 2. Since KIP-1013 we'd move to Java 17 anyway by 4.0, so I don't feel it's
> a problem if there's a specific dependency that has Java 17 as the minimum
> supported version. As I read from your email thread with the log4j2 folks,
> though, it'll be supported for years to come and log4j3 isn't yet stable.
> Since we already use log4j2 in our fork, I'm happy to contribute to this,
> review PRs, or drive it if needed.
>
> Thanks,
> Viktor
>
> On Wed, Jan 10, 2024 at 3:58 PM Mickael Maison 
> wrote:
>
> > I asked for details about the future of log4j2 on the logging user list:
> > https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl
> >
> > Let's see what they say.
> >
> > Thanks,
> > Mickael
> >
> > On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
> > >
> > > Hi Mickael,
> > >
> > > Thanks for starting the discussion and for summarizing the state of
> > play. I
> > > agree with you that it would be important to understand how long log4j2
> > > will be supported for. An alternative would be slf4j 2.x and logback.
> > >
> > > Ismael
> > >
> > > On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Starting a new thread to discuss the current logging situation in
> > > > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> > > > Kafka 4.0 if you are interested in what has already been said. [0]
> > > >
> > > > Currently Kafka uses SLF4J and reload4j as the logging backend. We
> had
> > > > to adopt reload4j in 3.2.0 as log4j was end of life and had a few
> > > > security issues.
> > > >
> > > > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> > > > incompatibilities in the configuration mechanism with log4j/reload4j
> > > > we decided to delay the upgrade to the next major release, Kafka 4.0.
> > > >
> > > > Kafka also currently provides a log4j appender. In 2022, we adopted
> > > > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> > > > time Apache Logging also had a Kafka appender that worked with
> log4j2.
> > > > They since deprecated that appender in log4j2 and it is not part of
> > > > log4j3. [1]
> > > >
> > > > Log4j3 is also nearing release but it seems it will require Java 17.
> > > > The website states Java 11 [2] but the artifacts from the latest
> 3.0.0
> > > > beta are built for Java 17. I was not able to find a clear maintenance
> > > > statement about log4j2 once log4j3 gets released.
> > > >
> > > > The question is where do we go from here?
> > > > We can stick with our plans:
> > > > 1. Deprecate the appender in the next 3.x release and plan to remove
> > it in
> > > > 4.0
> > > > 2. Do the necessary work to switch to log4j2 in 4.0
> > > > If so we need people to drive these work items. We have PRs for these
> > > > with hopefully the bulk of the code but they need
> > > > rebasing/completing/reviewing.
> > > >
> > > > Otherwise we can reconsider KIP-653 and/or KIP-719.
> > > >
> > > > Assuming log4j2 does not go end of life in the near future (We can
> > > > reach out to Apache Logging to clarify that point.), I think it still
> > > > makes sense to adopt it. I would also go ahead and deprecate our
> > > > appender.
> > > >
> > > > Thanks,
> > > > Mickael
> > > >
> > > > 0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
> > > > 1: https://github.com/apache/logging-log4j2/issues/1951
> > > > 2: https://logging.apache.org/log4j/3.x/#requirements
> > > >
> >
>


Re: [VOTE] KIP-877: Mechanism for plugins and connectors to register metrics

2024-01-10 Thread Mickael Maison
Bumping this thread since I've not seen any feedback.

Thanks,
Mickael

On Tue, Dec 19, 2023 at 10:03 AM Mickael Maison
 wrote:
>
> Hi,
>
> I'd like to start a vote on KIP-877:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-877%3A+Mechanism+for+plugins+and+connectors+to+register+metrics
>
> Let me know if you have any feedback.
>
> Thanks,
> Mickael


Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-01-10 Thread Jason Gustafson
Hey Jose,

One additional thought. It would be helpful to have an example to justify
the need for this:

> Wait for the fetch offset of the replica (ID, UUID) to catch up to the
log end offset of the leader.

It would also be helpful to explain how this affects the AddVoter RPC. Do we
wait indefinitely, or do we give up and return a timeout error if the new
voter cannot catch up? Probably the latter makes the most sense.
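For illustration, the give-up-with-a-timeout option could be sketched as follows. The method names, the REQUEST_TIMED_OUT result, and the polling approach are assumptions made for this example, not the actual KIP-853 design.

```java
import java.util.function.LongSupplier;

// Hypothetical sketch of an AddVoter-style catch-up wait that gives up with a
// timeout error instead of blocking indefinitely. Names are illustrative.
public class AddVoterCatchUp {
    enum Result { NONE, REQUEST_TIMED_OUT }

    static Result awaitCatchUp(LongSupplier replicaFetchOffset,
                               LongSupplier leaderLogEndOffset,
                               long timeoutMs, long pollMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (replicaFetchOffset.getAsLong() < leaderLogEndOffset.getAsLong()) {
            if (System.currentTimeMillis() >= deadline) {
                return Result.REQUEST_TIMED_OUT;  // new voter never caught up
            }
            Thread.sleep(pollMs);  // poll again until caught up or timed out
        }
        return Result.NONE;  // caught up: safe to add the voter to the set
    }

    public static void main(String[] args) throws InterruptedException {
        // A replica already at the leader's log end offset succeeds immediately.
        System.out.println(awaitCatchUp(() -> 100L, () -> 100L, 50, 5));
        // A replica stuck behind the leader's log end offset times out.
        System.out.println(awaitCatchUp(() -> 10L, () -> 100L, 50, 5));
    }
}
```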

Thanks,
Jason

On Tue, Jan 9, 2024 at 11:42 PM Colin McCabe  wrote:

> On Tue, Jan 9, 2024, at 17:07, Jason Gustafson wrote:
> > Hi Jose,
> >
> > Thanks for the KIP! A few initial questions below:
> >
> > 1. In the user experience section, the user is expected to supply
> > the UUID for each voter. I'm assuming this is the directory.id coming
> from
> > KIP-858. I thought it was generated by the format command automatically?
> It
> > seems like we will need a way to specify it explicitly.
>
> Thanks for highlighting this, Jason. Leaving aside the bootstrapping
> paradox you just mentioned, I think it's extremely unfriendly to ask people
> to paste auto-generated directory IDs into the kafka-storage format
> command. We need a better solution for this -- one that doesn't require
> huge amounts of manual work for people setting up clusters.
>
> I think this is closely related to the problem of taking existing clusters
> (which effectively don't have directory IDs for controllers) and converting
> them to the new world. I think we can agree that a software upgrade process
> that requires manually configuring 3 UUIDs (or whatever) using painstaking
> manual commands on each controller node is a nonstarter.
>
> Overall, I do not think we should add any new flags to "kafka-storage.sh
> format".
>
> One approach might be to support both DIRID-less and DIRID-ful modes of
> operation of the quorum. Empty logs would be considered DIRID-less, and
> would remain so until the active controller decided to write out the record
> establishing the IDs. (And if the MV was low enough, it would just never do
> that). Given that we have to support DIRID-less mode anyway, this seems
> viable.
>
> > 2. Do we need the AddVoters and RemoveVoters control records? Perhaps the
> > VotersRecord is sufficient since changes to the voter set will be rare.
>
> Agreed. Voter changes should be extremely rare. Listing the full set makes
> debugging easier as well.
>
> > 4. Should ReplicaUuid in FetchRequest be a tagged field? It seems like a
> > lot of overhead for all consumer fetches.
>
> +1
>
> best,
> Colin
>
> >
> > On Mon, Jan 8, 2024 at 10:13 AM José Armando García Sancio
> >  wrote:
> >
> >> Hi all,
> >>
> >> KIP-853: KRaft Controller Membership Changes is ready for another
> >> round of discussion.
> >>
> >> There was a previous discussion thread at
> >> https://lists.apache.org/thread/zb5l1fsqw9vj25zkmtnrk6xm7q3dkm1v
> >>
> >> I have changed the KIP quite a bit since that discussion. The core
> >> idea is still the same. I changed some of the details to be consistent
> >> with some of the protocol changes to Kafka since the original KIP. I
> >> also added a section that better describes the feature's UX.
> >>
> >> KIP: https://cwiki.apache.org/confluence/x/nyH1D
> >>
> >> Thanks. Your feedback is greatly appreciated!
> >> --
> >> -José
> >>
>


[jira] [Resolved] (KAFKA-16098) State updater may attempt to resume a task that is not assigned anymore

2024-01-10 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-16098.
---
Resolution: Fixed

> State updater may attempt to resume a task that is not assigned anymore
> ---
>
> Key: KAFKA-16098
> URL: https://issues.apache.org/jira/browse/KAFKA-16098
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Lucas Brutschy
>Assignee: Bruno Cadonna
>Priority: Major
> Attachments: streams.log.gz
>
>
> A long-running soak test brought to light this `IllegalStateException`:
> {code:java}
> [2024-01-07 08:54:13,688] ERROR [i-0637ca8609f50425f-StreamThread-1] Thread 
> encountered an error processing soak test 
> (org.apache.kafka.streams.StreamsSoakTest)
> java.lang.IllegalStateException: No current assignment for partition 
> network-id-repartition-1
>     at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:367)
>     at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.resume(SubscriptionState.java:753)
>     at 
> org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.resume(LegacyKafkaConsumer.java:963)
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.resume(KafkaConsumer.java:1524)
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.transitRestoredTaskToRunning(TaskManager.java:857)
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.handleRestoredTasksFromStateUpdater(TaskManager.java:979)
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.checkStateUpdater(TaskManager.java:791)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.checkStateUpdater(StreamThread.java:1141)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnceWithoutProcessingThreads(StreamThread.java:949)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:686)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:645)
> [2024-01-07 08:54:13,688] ERROR [i-0637ca8609f50425f-StreamThread-1] 
> stream-client [i-0637ca8609f50425f] Encountered the following exception 
> during processing and sent shutdown request for the entire application. 
> (org.apache.kafka.streams.KafkaStreams)
> org.apache.kafka.streams.errors.StreamsException: 
> java.lang.IllegalStateException: No current assignment for partition 
> network-id-repartition-1
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:729)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:645)
> Caused by: java.lang.IllegalStateException: No current assignment for 
> partition network-id-repartition-1
>     at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:367)
>     at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.resume(SubscriptionState.java:753)
>     at 
> org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.resume(LegacyKafkaConsumer.java:963)
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.resume(KafkaConsumer.java:1524)
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.transitRestoredTaskToRunning(TaskManager.java:857)
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.handleRestoredTasksFromStateUpdater(TaskManager.java:979)
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.checkStateUpdater(TaskManager.java:791)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.checkStateUpdater(StreamThread.java:1141)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnceWithoutProcessingThreads(StreamThread.java:949)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:686)
>     ... 1 more {code}
> Log (with some common messages filtered) attached.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15747) KRaft support in DynamicConnectionQuotaTest

2024-01-10 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15747.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DynamicConnectionQuotaTest
> ---
>
> Key: KAFKA-15747
> URL: https://issues.apache.org/jira/browse/KAFKA-15747
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DynamicConnectionQuotaTest in 
> core/src/test/scala/integration/kafka/network/DynamicConnectionQuotaTest.scala
>  need to be updated to support KRaft
> 77 : def testDynamicConnectionQuota(): Unit = {
> 104 : def testDynamicListenerConnectionQuota(): Unit = {
> 175 : def testDynamicListenerConnectionCreationRateQuota(): Unit = {
> 237 : def testDynamicIpConnectionRateQuota(): Unit = {
> Scanned 416 lines. Found 0 KRaft tests out of 4 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Logging in Kafka

2024-01-10 Thread Viktor Somogyi-Vass
Hi Mickael,

Reacting to your points:
1. I think it's somewhat unfortunate that we provide an appender tied to a
chosen logger implementation. I think that this shouldn't be part of the
project in its current form. However, there is the slf4j2 Fluent API, which
may solve our problem and turn KafkaLog4jAppender into a generic
implementation that doesn't depend on a specific library, given that we can
upgrade to slf4j2. That is worth considering.
2. Since KIP-1013 we'd move to Java 17 anyway by 4.0, so I don't feel it's
a problem if there's a specific dependency that has Java 17 as the minimum
supported version. As I read from your email thread with the log4j2 folks,
though, it'll be supported for years to come and log4j3 isn't yet stable.
Since we already use log4j2 in our fork, I'm happy to contribute to this,
review PRs, or drive it if needed.
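As a rough illustration of point 1: a log call written against the slf4j 2.x fluent builders stays backend-agnostic, so an appender-style component would not need to compile against log4j directly. This is a sketch assuming slf4j-api 2.x on the classpath; the message and key/value pairs are made up for the example.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: a backend-agnostic log call via the slf4j 2.x fluent
// API. With no backend bound, slf4j falls back to a no-op logger, so this
// compiles and runs against slf4j-api alone.
public class FluentApiSketch {
    private static final Logger log = LoggerFactory.getLogger(FluentApiSketch.class);

    public static void main(String[] args) {
        log.atInfo()
           .setMessage("Produced record to {}")
           .addArgument("my-topic")      // positional argument for the {} placeholder
           .addKeyValue("partition", 0)  // structured key/value; the backend decides rendering
           .log();
    }
}
```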

Thanks,
Viktor

On Wed, Jan 10, 2024 at 3:58 PM Mickael Maison 
wrote:

> I asked for details about the future of log4j2 on the logging user list:
> https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl
>
> Let's see what they say.
>
> Thanks,
> Mickael
>
> On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
> >
> > Hi Mickael,
> >
> > Thanks for starting the discussion and for summarizing the state of
> play. I
> > agree with you that it would be important to understand how long log4j2
> > will be supported for. An alternative would be slf4j 2.x and logback.
> >
> > Ismael
> >
> > On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison  >
> > wrote:
> >
> > > Hi,
> > >
> > > Starting a new thread to discuss the current logging situation in
> > > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> > > Kafka 4.0 if you are interested in what has already been said. [0]
> > >
> > > Currently Kafka uses SLF4J and reload4j as the logging backend. We had
> > > to adopt reload4j in 3.2.0 as log4j was end of life and had a few
> > > security issues.
> > >
> > > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> > > incompatibilities in the configuration mechanism with log4j/reload4j
> > > we decided to delay the upgrade to the next major release, Kafka 4.0.
> > >
> > > Kafka also currently provides a log4j appender. In 2022, we adopted
> > > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> > > time Apache Logging also had a Kafka appender that worked with log4j2.
> > > They since deprecated that appender in log4j2 and it is not part of
> > > log4j3. [1]
> > >
> > > Log4j3 is also nearing release but it seems it will require Java 17.
> > > The website states Java 11 [2] but the artifacts from the latest 3.0.0
> > > beta are built for Java 17. I was not able to find a clear maintenance
> > > statement about log4j2 once log4j3 gets released.
> > >
> > > The question is where do we go from here?
> > > We can stick with our plans:
> > > 1. Deprecate the appender in the next 3.x release and plan to remove
> it in
> > > 4.0
> > > 2. Do the necessary work to switch to log4j2 in 4.0
> > > If so we need people to drive these work items. We have PRs for these
> > > with hopefully the bulk of the code but they need
> > > rebasing/completing/reviewing.
> > >
> > > Otherwise we can reconsider KIP-653 and/or KIP-719.
> > >
> > > Assuming log4j2 does not go end of life in the near future (We can
> > > reach out to Apache Logging to clarify that point.), I think it still
> > > makes sense to adopt it. I would also go ahead and deprecate our
> > > appender.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > 0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
> > > 1: https://github.com/apache/logging-log4j2/issues/1951
> > > 2: https://logging.apache.org/log4j/3.x/#requirements
> > >
>


Re: Logging in Kafka

2024-01-10 Thread Mickael Maison
Hi,

A couple of PMC members from Apache Logging replied and they said they
plan to keep supporting log4j2 for several years.
https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl

Thanks,
Mickael

On Wed, Jan 10, 2024 at 3:57 PM Mickael Maison  wrote:
>
> I asked for details about the future of log4j2 on the logging user list:
> https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl
>
> Let's see what they say.
>
> Thanks,
> Mickael
>
> On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
> >
> > Hi Mickael,
> >
> > Thanks for starting the discussion and for summarizing the state of play. I
> > agree with you that it would be important to understand how long log4j2
> > will be supported for. An alternative would be slf4j 2.x and logback.
> >
> > Ismael
> >
> > On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison 
> > wrote:
> >
> > > Hi,
> > >
> > > Starting a new thread to discuss the current logging situation in
> > > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> > > Kafka 4.0 if you are interested in what has already been said. [0]
> > >
> > > Currently Kafka uses SLF4J and reload4j as the logging backend. We had
> > > to adopt reload4j in 3.2.0 as log4j was end of life and had a few
> > > security issues.
> > >
> > > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> > > incompatibilities in the configuration mechanism with log4j/reload4j
> > > we decided to delay the upgrade to the next major release, Kafka 4.0.
> > >
> > > Kafka also currently provides a log4j appender. In 2022, we adopted
> > > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> > > time Apache Logging also had a Kafka appender that worked with log4j2.
> > > They since deprecated that appender in log4j2 and it is not part of
> > > log4j3. [1]
> > >
> > > Log4j3 is also nearing release but it seems it will require Java 17.
> > > The website states Java 11 [2] but the artifacts from the latest 3.0.0
> > > beta are built for Java 17. I was not able to find a clear maintenance
> > > statement about log4j2 once log4j3 gets released.
> > >
> > > The question is where do we go from here?
> > > We can stick with our plans:
> > > 1. Deprecate the appender in the next 3.x release and plan to remove it in
> > > 4.0
> > > 2. Do the necessary work to switch to log4j2 in 4.0
> > > If so we need people to drive these work items. We have PRs for these
> > > with hopefully the bulk of the code but they need
> > > rebasing/completing/reviewing.
> > >
> > > Otherwise we can reconsider KIP-653 and/or KIP-719.
> > >
> > > Assuming log4j2 does not go end of life in the near future (We can
> > > reach out to Apache Logging to clarify that point.), I think it still
> > > makes sense to adopt it. I would also go ahead and deprecate our
> > > appender.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > 0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
> > > 1: https://github.com/apache/logging-log4j2/issues/1951
> > > 2: https://logging.apache.org/log4j/3.x/#requirements
> > >
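For reference, the configuration incompatibility mentioned in the quoted thread is easy to see side by side. The snippets below are illustrative equivalents of a minimal console logger, not Kafka's actual shipped configuration.

```properties
# log4j 1.x / reload4j style (log4j.properties)
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# the same logger in log4j2's properties format (log4j2.properties);
# a different key scheme, which is why the upgrade is not drop-in
appender.stdout.type=Console
appender.stdout.name=STDOUT
appender.stdout.layout.type=PatternLayout
appender.stdout.layout.pattern=[%d] %p %m (%c)%n
rootLogger.level=INFO
rootLogger.appenderRef.stdout.ref=STDOUT
```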


Re: [PROPOSAL] Add commercial support page on website

2024-01-10 Thread Divij Vaidya
I don't see a need for this. What additional information does this provide
over what can be found via a quick google search?

My primary concern is that we are getting into the business of listing
vendors on the project site, which brings its own complications without
adding much additional value for users. In the spirit of being vendor
neutral, I would try to avoid this as much as possible.

So, my questions to you are:
1. What value does the addition of this page bring to the users of Apache
Kafka?
2. When a new PR is submitted to add a vendor, what criteria do we have to
decide whether to add them or not? If we keep a blanket policy of
accepting all PRs, then we may end up in a situation where the link
redirects to a phishing page or nefarious website. Hence, we might have to
at least perform some basic due diligence, which adds overhead to the
resources of the community.

--
Divij Vaidya



On Wed, Jan 10, 2024 at 5:00 PM fpapon  wrote:

> Hi,
>
> After starting a first thread on this topic (
> https://lists.apache.org/thread/kkox33rhtjcdr5zztq3lzj7c5s7k9wsr), I
> would like to propose a PR:
>
> https://github.com/apache/kafka-site/pull/577
>
> The purpose of this proposal is to help users find support for SLAs,
> training, consulting... whatever is not provided by the community since,
> as we can already see in many ASF projects, no commercial support is
> provided by the foundation. I think it could help with the adoption and the
> growth of the project because users
> need commercial support for production issues.
>
> If the community agrees with this idea and wants to move forward: I just
> added one company in the PR, but everybody can add more by providing a new PR
> to complete the list. If people want me to add others, you can reply to this
> thread because it will be better to have several companies at the first
> publication of the page.
>
> Just provide the company name and a short description of the service
> offering around Apache Kafka. The information must be factual and
> informational in nature and not be a marketing statement.
>
> regards,
>
> François
>
>
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2557

2024-01-10 Thread Apache Jenkins Server
See 




[PROPOSAL] Add commercial support page on website

2024-01-10 Thread fpapon

Hi,

After starting a first thread on this topic 
(https://lists.apache.org/thread/kkox33rhtjcdr5zztq3lzj7c5s7k9wsr), I would 
like to propose a PR:

https://github.com/apache/kafka-site/pull/577

The purpose of this proposal is to help users find support for SLAs, 
training, consulting... whatever is not provided by the community since, as 
we can already see in many ASF projects, no commercial support is provided by 
the foundation. I think it could help with the adoption and the growth of the 
project because users
need commercial support for production issues.

If the community agrees with this idea and wants to move forward: I just added 
one company in the PR, but everybody can add more by providing a new PR to 
complete the list. If people want me to add others, you can reply to this thread 
because it will be better to have several companies at the first publication of 
the page.

Just provide the company name and a short description of the service offering 
around Apache Kafka. The information must be factual and informational in 
nature and not be a marketing statement.

regards,

François




Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


fpapon commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1885101241

   @ableegoldman I pushed some changes, feel free to review/comment :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #57

2024-01-10 Thread Apache Jenkins Server
See 




Re: Kafka trunk test & build stability

2024-01-10 Thread Divij Vaidya
Hey folks

We seem to have a handle on the OOM issues with the multiple fixes
community members made. In https://issues.apache.org/jira/browse/KAFKA-16052,
you can see the "before" profile in the description and the "after" profile
in the latest comment to see the difference. To prevent future recurrence,
we have an ongoing solution at https://github.com/apache/kafka/pull/15101
and after that we will start another one to get rid of Mockito mocks at
the end of every test suite using a similar extension. Note that this
doesn't solve the flaky test problems in trunk, but it removes the
aspect of build failures due to OOM (one of the many problems).

To fix the flaky test problem, we probably need to run our tests in a
separate CI environment (like Apache Beam does) instead of sharing the 3
hosts that run our CI with many other Apache projects. This assumption
is based on the fact that the tests are less flaky when running on laptops
/ powerful EC2 machines. One of the avenues to get funding for these
Kafka-only hosts is
https://aws.amazon.com/blogs/opensource/aws-promotional-credits-open-source-projects/
. I will start the conversation on this one with AWS & Apache Infra in the
next 1-2 months.

--
Divij Vaidya



On Tue, Jan 9, 2024 at 9:21 PM Colin McCabe  wrote:

> Sorry, but to put it bluntly, the current build setup isn't good enough at
> partial rebuilds for build caching to make sense. All Kafka devs have
> had the experience of needing to clean the build directory in order to get
> a valid build. The Scala code especially seems to have this issue.
>
> regards,
> Colin
>
>
> On Tue, Jan 2, 2024, at 07:00, Nick Telford wrote:
> > Addendum: I've opened a PR with what I believe are the changes necessary
> to
> > enable Remote Build Caching, if you choose to go that route:
> > https://github.com/apache/kafka/pull/15109
> >
> > On Tue, 2 Jan 2024 at 14:31, Nick Telford 
> wrote:
> >
> >> Hi everyone,
> >>
> >> Regarding building a "dependency graph"... Gradle already has this
> >> information, albeit fairly coarse-grained. You might be able to get some
> >> considerable improvement by configuring the Gradle Remote Build Cache.
> It
> >> looks like it's currently disabled explicitly:
> >> https://github.com/apache/kafka/blob/trunk/settings.gradle#L46
> >>
> >> The trick is to have trunk builds write to the cache, and PR builds only
> >> read from it. This way, any PR based on trunk should be able to cache
> not
> >> only the compilation, but also the tests from dependent modules that
> >> haven't changed (e.g. for a PR that only touches the connect/streams
> >> modules).
> >>
> >> This would probably be preferable to having to hand-maintain some
> >> rules/dependency graph in the CI configuration, and it's quite
> >> straight-forward to configure.
> >>
> >> Bonus points if the Remote Build Cache is readable publicly, enabling
> >> contributors to benefit from it locally.
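[The trunk-write / PR-read setup Nick describes could be sketched in settings.gradle roughly as follows; the environment-variable checks for detecting CI and branch are assumptions, not the project's actual setup:]

```groovy
// settings.gradle sketch -- the CI/branch detection below is hypothetical
boolean isCi = System.getenv('JENKINS_URL') != null
boolean isTrunk = System.getenv('BRANCH_NAME') == 'trunk'

buildCache {
    remote(HttpBuildCache) {
        url = 'https://builds.example.org/cache/'  // placeholder endpoint
        // trunk CI builds populate the cache; PR builds only read from it
        push = isCi && isTrunk
    }
}
```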
> >>
> >> Regards,
> >> Nick
> >>
> >> On Tue, 2 Jan 2024 at 13:00, Lucas Brutschy  .invalid>
> >> wrote:
> >>
> >>> Thanks for all the work that has already been done on this in the past
> >>> days!
> >>>
> >>> Have we considered running our test suite with
> >>> -XX:+HeapDumpOnOutOfMemoryError and uploading the heap dumps as
> >>> Jenkins build artifacts? This could speed up debugging. Even if we
> >>> store them only for a day and do it only for trunk, I think it could
> >>> be worth it. The heap dumps shouldn't contain any secrets, and I
> >>> checked with the ASF infra team, and they are not concerned about the
> >>> additional disk usage.
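[As a sketch, the flag could be wired into the Gradle test tasks like this; the dump path is a placeholder:]

```groovy
// build.gradle sketch -- the dump path is a placeholder
tasks.withType(Test).configureEach {
    jvmArgs '-XX:+HeapDumpOnOutOfMemoryError',
            '-XX:HeapDumpPath=build/heap-dumps'
}
```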
> >>>
> >>> Cheers,
> >>> Lucas
> >>>
> >>> On Wed, Dec 27, 2023 at 2:25 PM Divij Vaidya 
> >>> wrote:
> >>> >
> >>> > I have started to perform an analysis of the OOM at
> >>> > https://issues.apache.org/jira/browse/KAFKA-16052. Please feel free
> to
> >>> > contribute to the investigation.
> >>> >
> >>> > --
> >>> > Divij Vaidya
> >>> >
> >>> >
> >>> >
> >>> > On Wed, Dec 27, 2023 at 1:23 AM Justine Olshan
> >>> 
> >>> > wrote:
> >>> >
> >>> > > I am still seeing quite a few OOM errors in the builds and I was
> >>> curious if
> >>> > > folks had any ideas on how to identify the cause and fix the
> issue. I
> >>> was
> >>> > > looking in gradle enterprise and found some info about memory
> usage,
> >>> but
> >>> > > nothing detailed enough to help figure the issue out.
> >>> > >
> >>> > > OOMs sometimes fail the build immediately and in other cases I see
> it
> >>> get
> >>> > > stuck for 8 hours. (See
> >>> > >
> >>> > >
> >>>
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/trunk/2508/pipeline/12
> >>> > > )
> >>> > >
> >>> > > I appreciate all the work folks are doing here and I will continue
> to
> >>> try
> >>> > > to help as best as I can.
> >>> > >
> >>> > > Justine
> >>> > >
> >>> > > On Tue, Dec 26, 2023 at 1:04 PM David Arthur
> >>> > >  wrote:
> >>> > >
> >>> > > > S2. We’ve looked into this before, and it wasn’t possible at the
> >>> time
> >>> 

Re: Logging in Kafka

2024-01-10 Thread Mickael Maison
I asked for details about the future of log4j2 on the logging user list:
https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl

Let's see what they say.

Thanks,
Mickael

On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
>
> Hi Mickael,
>
> Thanks for starting the discussion and for summarizing the state of play. I
> agree with you that it would be important to understand how long log4j2
> will be supported for. An alternative would be SLF4J 2.x and logback.
>
> Ismael
>
> On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison 
> wrote:
>
> > Hi,
> >
> > Starting a new thread to discuss the current logging situation in
> > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> > Kafka 4.0 if you are interested in what has already been said. [0]
> >
> > Currently Kafka uses SLF4J and reload4j as the logging backend. We had
> > to adopt reload4j in 3.2.0 as log4j was end of life and had a few
> > security issues.
> >
> > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> > incompatibilities in the configuration mechanism with log4j/reload4j
> > we decided to delay the upgrade to the next major release, Kafka 4.0.
> >
> > Kafka also currently provides a log4j appender. In 2022, we adopted
> > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> > time Apache Logging also had a Kafka appender that worked with log4j2.
> > They since deprecated that appender in log4j2 and it is not part of
> > log4j3. [1]
> >
> > Log4j3 is also nearing release but it seems it will require Java 17.
> > The website states Java 11 [2] but the artifacts from the latest 3.0.0
> > beta are built for Java 17. I was not able to find a clear maintenance
> > statement about log4j2 once log4j3 gets released.
> >
> > The question is where do we go from here?
> > We can stick with our plans:
> > 1. Deprecate the appender in the next 3.x release and plan to remove it in
> > 4.0
> > 2. Do the necessary work to switch to log4j2 in 4.0
> > If so we need people to drive these work items. We have PRs for these
> > with hopefully the bulk of the code but they need
> > rebasing/completing/reviewing.
> >
> > Otherwise we can reconsider KIP-653 and/or KIP-719.
> >
> > Assuming log4j2 does not go end of life in the near future (We can
> > reach out to Apache Logging to clarify that point.), I think it still
> > makes sense to adopt it. I would also go ahead and deprecate our
> > appender.
> >
> > Thanks,
> > Mickael
> >
> > 0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
> > 1: https://github.com/apache/logging-log4j2/issues/1951
> > 2: https://logging.apache.org/log4j/3.x/#requirements
> >


[jira] [Resolved] (KAFKA-15866) Refactor OffsetFetchRequestState Error handling to be more consistent with OffsetCommitRequestState

2024-01-10 Thread Lianet Magrans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lianet Magrans resolved KAFKA-15866.

Fix Version/s: 3.7.0
   (was: 3.8.0)
 Assignee: (was: Lan Ding)
   Resolution: Fixed

> Refactor OffsetFetchRequestState Error handling to be more consistent with 
> OffsetCommitRequestState
> ---
>
> Key: KAFKA-15866
> URL: https://issues.apache.org/jira/browse/KAFKA-15866
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, consumer
>Reporter: Philip Nee
>Priority: Minor
>  Labels: consumer-threading-refactor
> Fix For: 3.7.0
>
>
> The current OffsetFetchRequestState error handling uses nested if-else, which 
> is stylistically quite different from the OffsetCommitRequestState, which uses a 
> switch statement. The latter is a bit more readable, so we should refactor the 
> error handling to use the same style and improve readability.
>  
> A minor point: some of the error handling seems inconsistent with the commit 
> path. The logic comes from the current implementation, so we should also review 
> all the error handling. For example, the current logic somehow doesn't mark the 
> coordinator unavailable when receiving COORDINATOR_NOT_AVAILABLE.
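[As an illustration of the style change the ticket suggests, a self-contained sketch — the enum and return values are hypothetical, not the actual client code:]

```java
// Hypothetical sketch of switch-style error handling; the error codes and
// resulting actions are illustrative, not the real consumer internals.
public class ErrorHandlingSketch {

    enum ApiError { NONE, COORDINATOR_NOT_AVAILABLE, NOT_COORDINATOR, UNKNOWN }

    // Switch-based handling in the style of OffsetCommitRequestState: every
    // error maps explicitly to one action, so gaps are easy to spot.
    static String handle(ApiError error) {
        switch (error) {
            case NONE:
                return "complete";
            case COORDINATOR_NOT_AVAILABLE:
            case NOT_COORDINATOR:
                // mark the coordinator unknown, then retry the request
                return "mark-coordinator-unknown-and-retry";
            default:
                return "fail";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(ApiError.COORDINATOR_NOT_AVAILABLE));
        // prints "mark-coordinator-unknown-and-retry"
    }
}
```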



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


fpapon commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884961276

   @mimaison ok, thanks for your feedback. I will update the PR according to 
people's comments and then resend an email to the mailing list for the 
proposal and see how the community reacts to it.





Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2024-01-10 Thread Mickael Maison
Hi Elxan,

Thanks for the KIP, it looks like a useful addition.

Can you add to the KIP the default value you propose for
replication.lag.metric.refresh.interval? In MirrorMaker most interval
configs can be set to -1 to disable them, will it be the case for this
new feature or will this setting only accept positive values?
I also wonder if replication-lag, or record-lag would be clearer names
instead of replication-offset-lag, WDYT?

Thanks,
Mickael

On Wed, Jan 3, 2024 at 6:15 PM Elxan Eminov  wrote:
>
> Hi all,
> Here is the vote thread:
> https://lists.apache.org/thread/ftlnolcrh858dry89sjg06mdcdj9mrqv
>
> Cheers!
>
> On Wed, 27 Dec 2023 at 11:23, Elxan Eminov  wrote:
>
> > Hi all,
> > I've updated the KIP with the details we discussed in this thread.
> > I'll call in a vote after the holidays if everything looks good.
> > Thanks!
> >
> > On Sat, 26 Aug 2023 at 15:49, Elxan Eminov 
> > wrote:
> >
> >> Relatively minor change with a new metric for MM2
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-971%3A+Expose+replication-offset-lag+MirrorMaker2+metric
> >>
> >


[jira] [Created] (KAFKA-16108) Backport fix for KAFKA-16093 to 3.7

2024-01-10 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-16108:
-

 Summary: Backport fix for KAFKA-16093 to 3.7
 Key: KAFKA-16108
 URL: https://issues.apache.org/jira/browse/KAFKA-16108
 Project: Kafka
  Issue Type: Improvement
  Components: connect
Reporter: Chris Egerton
Assignee: Chris Egerton
 Fix For: 3.7.1


A fix for KAFKA-16093 is present on the branches trunk (the version for which 
is currently 3.8.0-SNAPSHOT) and 3.6. We are in code freeze for the 3.7.0 
release, and this issue is not a blocker, so it cannot be backported right now.

We should backport the fix once 3.7.0 has been released and before 3.7.1 is 
released.





[jira] [Created] (KAFKA-16107) Ensure consumer does not start fetching from added partitions until onPartitionsAssgined completes

2024-01-10 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-16107:
--

 Summary: Ensure consumer does not start fetching from added 
partitions until onPartitionsAssgined completes
 Key: KAFKA-16107
 URL: https://issues.apache.org/jira/browse/KAFKA-16107
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Lianet Magrans


In the new consumer implementation, when new partitions are assigned, the 
subscription state is updated and then #onPartitionsAssigned is triggered. 
This sequence seems sensible, but we need to ensure that no data is fetched 
until onPartitionsAssigned completes (where the user could be setting the 
committed offsets it wants to start fetching from).
We should pause the newly added partitions until 
onPartitionsAssigned completes, similar to how it's done on revocation, to avoid 
positions getting ahead of the committed offsets.
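[The intended ordering can be sketched with plain Java — hypothetical types and partition names, not the actual consumer internals:]

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the ordering this ticket asks for: newly assigned partitions are
// paused (not fetchable) until the onPartitionsAssigned callback completes.
public class AssignmentSketch {
    final Set<String> assigned = new HashSet<>();
    final Set<String> paused = new HashSet<>();

    void onAssignment(List<String> newPartitions, Runnable onPartitionsAssigned) {
        assigned.addAll(newPartitions);
        paused.addAll(newPartitions);    // block fetching first
        onPartitionsAssigned.run();      // user callback may seek to committed offsets
        paused.removeAll(newPartitions); // only now allow fetching
    }

    boolean fetchable(String partition) {
        return assigned.contains(partition) && !paused.contains(partition);
    }

    public static void main(String[] args) {
        AssignmentSketch s = new AssignmentSketch();
        s.onAssignment(List.of("topic-0"),
                () -> System.out.println("in callback, fetchable=" + s.fetchable("topic-0")));
        System.out.println("after callback, fetchable=" + s.fetchable("topic-0"));
        // prints "in callback, fetchable=false" then "after callback, fetchable=true"
    }
}
```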





Re: Logging in Kafka

2024-01-10 Thread Ismael Juma
Hi Mickael,

Thanks for starting the discussion and for summarizing the state of play. I
agree with you that it would be important to understand how long log4j2
will be supported for. An alternative would be SLF4J 2.x and logback.

Ismael

On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison 
wrote:

> Hi,
>
> Starting a new thread to discuss the current logging situation in
> Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> Kafka 4.0 if you are interested in what has already been said. [0]
>
> Currently Kafka uses SLF4J and reload4j as the logging backend. We had
> to adopt reload4j in 3.2.0 as log4j was end of life and had a few
> security issues.
>
> In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> incompatibilities in the configuration mechanism with log4j/reload4j
> we decided to delay the upgrade to the next major release, Kafka 4.0.
>
> Kafka also currently provides a log4j appender. In 2022, we adopted
> KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> time Apache Logging also had a Kafka appender that worked with log4j2.
> They since deprecated that appender in log4j2 and it is not part of
> log4j3. [1]
>
> Log4j3 is also nearing release but it seems it will require Java 17.
> The website states Java 11 [2] but the artifacts from the latest 3.0.0
> beta are built for Java 17. I was not able to find a clear maintenance
> statement about log4j2 once log4j3 gets released.
>
> The question is where do we go from here?
> We can stick with our plans:
> 1. Deprecate the appender in the next 3.x release and plan to remove it in
> 4.0
> 2. Do the necessary work to switch to log4j2 in 4.0
> If so we need people to drive these work items. We have PRs for these
> with hopefully the bulk of the code but they need
> rebasing/completing/reviewing.
>
> Otherwise we can reconsider KIP-653 and/or KIP-719.
>
> Assuming log4j2 does not go end of life in the near future (We can
> reach out to Apache Logging to clarify that point.), I think it still
> makes sense to adopt it. I would also go ahead and deprecate our
> appender.
>
> Thanks,
> Mickael
>
> 0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
> 1: https://github.com/apache/logging-log4j2/issues/1951
> 2: https://logging.apache.org/log4j/3.x/#requirements
>


Re: [VOTE] KIP-971: Expose replication-offset-lag MirrorMaker2 metric

2024-01-10 Thread Viktor Somogyi-Vass
Hi Elxan,

+1 (binding).

Thanks,
Viktor

On Mon, Jan 8, 2024 at 5:57 PM Dániel Urbán  wrote:

> Hi Elxan,
> +1 (non-binding)
> Thanks for the KIP, this will be a very useful metric for MM!
> Daniel
>
> Elxan Eminov  ezt írta (időpont: 2024. jan. 7.,
> V,
> 2:17):
>
> > Hi all,
> > Bumping this for visibility
> >
> > On Wed, 3 Jan 2024 at 18:13, Elxan Eminov 
> wrote:
> >
> > > Hi All,
> > > I'd like to initiate a vote for KIP-971.
> > > This KIP is about adding a new metric to the MirrorSourceTask that
> tracks
> > > the offset lag between a source and a target partition.
> > >
> > > KIP link:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-971%3A+Expose+replication-offset-lag+MirrorMaker2+metric
> > >
> > > Discussion thread:
> > > https://lists.apache.org/thread/gwq9jd75dnm8htmpqkn17bnks6h3wqwp
> > >
> > > Thanks!
> > >
> >
>


[jira] [Resolved] (KAFKA-16097) State updater removes task without pending action in EOSv2

2024-01-10 Thread Lucas Brutschy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucas Brutschy resolved KAFKA-16097.

Resolution: Fixed

> State updater removes task without pending action in EOSv2
> --
>
> Key: KAFKA-16097
> URL: https://issues.apache.org/jira/browse/KAFKA-16097
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.8.0
>Reporter: Lucas Brutschy
>Priority: Major
>
> A long-running soak encountered the following exception:
>  
> {code:java}
> [2024-01-08 03:06:00,586] ERROR [i-081c089d2ed054443-StreamThread-3] Thread 
> encountered an error processing soak test 
> (org.apache.kafka.streams.StreamsSoakTest)
> java.lang.IllegalStateException: Got a removed task 1_0 from the state 
> updater that is not for recycle, closing, or updating input partitions; this 
> should not happen
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.handleRemovedTasksFromStateUpdater(TaskManager.java:939)
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.checkStateUpdater(TaskManager.java:788)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.checkStateUpdater(StreamThread.java:1141)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnceWithoutProcessingThreads(StreamThread.java:949)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:686)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:645)
> [2024-01-08 03:06:00,587] ERROR [i-081c089d2ed054443-StreamThread-3] 
> stream-client [i-081c089d2ed054443] Encountered the following exception 
> during processing and sent shutdown request for the entire application. 
> (org.apache.kafka.streams.KafkaStreams)
> org.apache.kafka.streams.errors.StreamsException: 
> java.lang.IllegalStateException: Got a removed task 1_0 from the state 
> updater that is not for recycle, closing, or updating input partitions; this 
> should not happen
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:729)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:645)
> Caused by: java.lang.IllegalStateException: Got a removed task 1_0 from the 
> state updater that is not for recycle, closing, or updating input partitions; 
> this should not happen
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.handleRemovedTasksFromStateUpdater(TaskManager.java:939)
>     at 
> org.apache.kafka.streams.processor.internals.TaskManager.checkStateUpdater(TaskManager.java:788)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.checkStateUpdater(StreamThread.java:1141)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnceWithoutProcessingThreads(StreamThread.java:949)
>     at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:686)
>     ... 1 more{code}
>  
>  





Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


mimaison commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884814599

   No worries, there's no need to blame anybody for such a small issue. The 
initiative to add this page is, in my opinion, good, and contributions are 
always welcome. It's just the initial content and quick approvals that looked 
concerning.
   
   @fpapon As pointed out by several committers, there are a few issues with the 
current content, but we should be able to agree on what to do and update this PR 
accordingly. I wonder if the first step would be to decide whether we want this 
page on the website. It's probably best to email the dev list to get some 
agreement, or at least allow people to raise objections, and then start 
gathering content.





Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


jbonofre commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884745111

   @mimaison I take the blame: I made mistakes while looking at the bright side of 
things. 
   1. I thought this kind of change was approved by the community/PMC
   2. As the same page exists in several ASF projects, and it was a copy from 
Camel, I thought it was OK to go straight ahead without checking the diff in 
detail.
   
   I share your points, but I would not be too harsh. @fpapon is just trying to 
help. I would add more companies/providers to this page as part of this PR. I 
think we can say that it's for the good of the Kafka community and diversity.
   
   I'm happy to chat with you about that on Slack or on a call if you want.





Logging in Kafka

2024-01-10 Thread Mickael Maison
Hi,

Starting a new thread to discuss the current logging situation in
Kafka. I'll restate everything we know but see the [DISCUSS] Road to
Kafka 4.0 if you are interested in what has already been said. [0]

Currently Kafka uses SLF4J and reload4j as the logging backend. We had
to adopt reload4j in 3.2.0 as log4j was end of life and had a few
security issues.

In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
incompatibilities in the configuration mechanism with log4j/reload4j
we decided to delay the upgrade to the next major release, Kafka 4.0.
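For illustration, the same console appender expressed in both syntaxes (a minimal sketch, not Kafka's actual config files):

```properties
# log4j 1.x / reload4j syntax (log4j.properties)
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# log4j2 syntax (log4j2.properties) -- same behaviour, incompatible keys
appender.stdout.type=Console
appender.stdout.name=STDOUT
appender.stdout.layout.type=PatternLayout
appender.stdout.layout.pattern=[%d] %p %m (%c)%n
rootLogger.level=INFO
rootLogger.appenderRef.stdout.ref=STDOUT
```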

Kafka also currently provides a log4j appender. In 2022, we adopted
KIP-719 to deprecate it since we wanted to switch to log4j2. At the
time Apache Logging also had a Kafka appender that worked with log4j2.
They since deprecated that appender in log4j2 and it is not part of
log4j3. [1]

Log4j3 is also nearing release but it seems it will require Java 17.
The website states Java 11 [2] but the artifacts from the latest 3.0.0
beta are built for Java 17. I was not able to find a clear maintenance
statement about log4j2 once log4j3 gets released.

The question is where do we go from here?
We can stick with our plans:
1. Deprecate the appender in the next 3.x release and plan to remove it in 4.0
2. Do the necessary work to switch to log4j2 in 4.0
If so we need people to drive these work items. We have PRs for these
with hopefully the bulk of the code but they need
rebasing/completing/reviewing.

Otherwise we can reconsider KIP-653 and/or KIP-719.

Assuming log4j2 does not go end of life in the near future (We can
reach out to Apache Logging to clarify that point.), I think it still
makes sense to adopt it. I would also go ahead and deprecate our
appender.

Thanks,
Mickael

0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
1: https://github.com/apache/logging-log4j2/issues/1951
2: https://logging.apache.org/log4j/3.x/#requirements


Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


fpapon commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884696661

   @mimaison I replied to the original thread on the mailing list to revive 
the discussion.





Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2556

2024-01-10 Thread Apache Jenkins Server
See 




Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


fpapon commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884677270

   > You have to admit this PR raised quite a few red flags!
   > 
   > First it's directly copied from Camel without replacing mentions to Kafka. 
Then within minutes it's approved by 2 other ASF members clearly without 
looking at the diff. Finally it's adding your company as the sole provider for 
commercial support.
   > 
   > A more open approach would have been to reach out the Kafka community on 
the dev or users lists and discuss a nice way to build such a page. I tend to 
agree that a page listing commercial offerings is helpful but I'm sure you 
understand why we can't merge the PR as is.
   
   Hi @mimaison,
   
   I admit that I made a mistake with the copy/paste from the Camel website, and I 
fixed it. About adding my company: as I explained in other comments, it's just 
a starting point to list companies. I cannot add the others for trademark 
reasons because I don't own them, but everybody can add their own company as 
explained in the text, or I'm open to doing it if people ask me in this PR.
   
   About the approach, I already start a thread on the mailing list about this 
topic before doing a proposal, here the link to thread:
   https://lists.apache.org/thread/kkox33rhtjcdr5zztq3lzj7c5s7k9wsr
   
   Sorry, it was last year, so maybe I should resend it?





Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


mimaison commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884671597

   As I said, I think a page listing commercial offerings is useful. I was just 
commenting on the approach you took.
   
   I don't think we necessarily need a committer to build this page (obviously 
a committer will have to review the PR before merging). If you want you can 
start the discussion.





Re: [DISCUSS] Road to Kafka 4.0

2024-01-10 Thread Ismael Juma
It may be worth starting a new thread with regards to the logging situation.

Ismael

On Wed, Jan 10, 2024 at 12:00 PM Mickael Maison 
wrote:

> Hi Colin,
>
> Regarding KIP-719, I think we need it to land in 3.8 if we want to remove
> the appender in 4.0. I also just noticed that log4j's KafkaAppender is
> being deprecated in log4j2 and will not be part of log4j3.
>
> For KIP-653, as I said, my point was to gauge interest in getting it
> done. While it may not be a "must-do" to keep Kafka working, we can
> only do this type of change in major releases. So if we don't do it
> now, it won't happen for a few more years.
>
> Regarding log4j3, even though the website states it requires Java 11
> [1], it seems the latest beta release requires Java 17 so it's not
> something we'll be able to adopt now.
>
> 0: https://github.com/apache/logging-log4j2/issues/1951
> 1: https://logging.apache.org/log4j/3.x/#requirements
>
> Thanks,
> Mickael
>
> On Fri, Jan 5, 2024 at 12:18 AM Colin McCabe  wrote:
> >
> > Hi Mickael,
> >
> > Thanks for bringing this up.
> >
> > The main motivation given in KIP-653 for moving to log4j 2.x is that
> log4j 1.x is no longer supported. But since we moved to reload4j, which is
> still supported, that isn't a concern any longer.
> >
> > To be clear, I'm not saying we shouldn't upgrade, but I'm just trying to
> explain why I think there hasn't been as much interest in this lately. I
> see this as a "cool feature" rather than as a must-do.
> >
> > If we still want to do this for 4.0, it would be good to understand
> whether there's any work that has to land in 3.8. Do we have to get KIP-719
> into 3.8 so that we have a reasonable deprecation period?
> >
> > Also, if we do upgrade, I agree with Ismael that we should consider
> going to log4j3. Assuming they have a non-beta release by the time 4.0 is
> ready.
> >
> > best,
> > Colin
> >
> > On Thu, Jan 4, 2024, at 03:08, Mickael Maison wrote:
> > > Hi Ismael,
> > >
> > > Yes both KIPs have been voted.
> > > My point, which admittedly wasn't clear, was to gauge the interest in
> > > getting them done and, if so, identifying people to drive these tasks.
> > >
> > > KIP-719 shouldn't require too much more work to complete. There's a PR
> > > [0] which is relatively straightforward. I pinged Lee Dongjin.
> > > KIP-653 is more involved and depends on KIP-719. There's also a PR [1]
> > > which is pretty large.
> > >
> > > Yes log4j3 was on my mind as it's expected to be compatible with
> > > log4j2 and bring significant improvements.
> > >
> > > 0: https://github.com/apache/kafka/pull/10244
> > > 1: https://github.com/apache/kafka/pull/7898
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Thu, Jan 4, 2024 at 11:34 AM Ismael Juma  wrote:
> > >>
> > >> Hi Mickael,
> > >>
> > >> Given that KIP-653 was accepted, the current position is that we
> would move
> > >> to log4j2 - provided that someone is available to drive that. It's
> also
> > >> worth noting that log4j3 is now a thing (but not yet final):
> > >>
> > >> https://logging.apache.org/log4j/3.x/
> > >>
> > >> Ismael
> > >>
> > >> On Thu, Jan 4, 2024 at 2:15 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > >> wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > I've not seen replies about log4j2.
> > >> > The plan was to deprecate the appender (KIP-719) and switch to log4j2
> > >> > (KIP-653).
> > >> >
> > >> > While reload4j works well, I'd still be in favor of switching to
> > >> > log4j2 in Kafka 4.0.
> > >> >
> > >> > Thanks,
> > >> > Mickael
> > >> >
> > >> > On Fri, Dec 29, 2023 at 2:19 AM Colin McCabe 
> wrote:
> > >> > >
> > >> > > Hi all,
> > >> > >
> > >> > > Let's continue this discussion on the "[DISCUSS] KIP-1012: The need for
> > >> > > a Kafka 3.8.x release" email thread.
> > >> > >
> > >> > > Colin
> > >> > >
> > >> > >
> > >> > > On Tue, Dec 26, 2023, at 12:50, José Armando García Sancio wrote:
> > >> > > > Hi Divij,
> > >> > > >
> > >> > > > Thanks for the feedback. I agree that having a 3.8 release is
> > >> > > > beneficial but some of the comments in this message are
> inaccurate and
> > >> > > > could mislead the community and users.
> > >> > > >
> > >> > > > On Thu, Dec 21, 2023 at 7:00 AM Divij Vaidya <
> divijvaidy...@gmail.com>
> > >> > wrote:
> > >> > > >> 1\ Durability/availability bugs in kraft - Even though kraft
> has been
> > >> > > >> around for a while, we keep finding bugs that impact
> availability and
> > >> > data
> > >> > > >> durability in it almost with every release [1] [2]. It's a
> complex
> > >> > feature
> > >> > > >> and such bugs are expected during the stabilization phase. But
> we
> > >> > can't
> > >> > > >> remove the alternative until we see stabilization in kraft
> i.e. no new
> > >> > > >> stability/durability bugs for at least 2 releases.
> > >> > > >
> > >> > > > I took a look at both of these issues and neither of them are
> bugs
> > >> > > > that affect KRaft's durability and availability.
> > >> > > >
> > >> > > >> [1] 

Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


rmannibucau commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884622256

   @mimaison well I wouldn't say red flags, but I fully understand the surprise 
on the Kafka side if it was never discussed on/offline first - we had the same 
debate on the TomEE side years ago.
   But it is not uncommon at Apache to have such a page, even with a single 
company, or even with no company and a request for companies:
   
   * https://hop.apache.org/community/commercial/ (0)
   * https://struts.apache.org/commercial-support.html (1)
   * https://directory.apache.org/commercial-support.html (1)
   * https://superset.apache.org/community (direct link to the company behind - 
this one is more surprising to me since, instead of encouraging the 
"registration" of vendors, it somewhat hides it, but I guess it is a small 
clumsiness)
   * https://tomee.apache.org/commercial-support.html (2)
   * https://plc4x.apache.org/users/commercial-support.html (2)
   * https://camel.apache.org/community/support/
   * https://openmeetings.apache.org/commercial-support.html
   * https://guacamole.apache.org/support/
   * 
https://cwiki.apache.org/confluence/display/HADOOP2/Distributions+and+Commercial+Support
   * https://activemq.apache.org/support
   * https://netbeans.apache.org/front/main/help/commercial-support/
   * https://royale.apache.org/royale-commercial-support/
   * (way more but guess you got the idea)
   
   To give an example of the "rationale behind", please have a look at the 
https://github.com/apache/superset/issues/8852 issue.
   
   An important point to take into consideration is that several products - and 
I think Kafka is exactly there - wouldn't live without external support (it is 
true for most brokers or servers), so this kind of page sustains the community 
and helps the user base - and trust me, ~5 years ago I didn't understand that 
as well as I do today, so I say it very humbly.
   
   So overall, due to Kafka's adoption it can only be good to get such a page 
IMHO. But I agree that if you feel it is not straightforward and it must be 
discussed more deeply because Kafka wants to write its own content, then a 
thread sounds like the way to move it forward; a committer should probably take 
the lead on that "doc" track, with this PR marked as pending while that work is 
done.





Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


mimaison commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884592204

   You have to admit this PR raised quite a few red flags!
   
   First, it's directly copied from Camel without even updating the mentions to 
Kafka. Then within minutes it's approved by 2 other ASF members, clearly without 
looking at the diff. Finally, it's adding your company as the sole provider of 
commercial support.
   
   A more open approach would have been to reach out to the Kafka community on 
the dev or users lists and discuss a nice way to build such a page. I tend to 
agree that a page listing commercial offerings is helpful, but I'm sure you 
understand why we can't merge the PR as is.
   





Re: [DISCUSS] Road to Kafka 4.0

2024-01-10 Thread Mickael Maison
Hi Colin,

Regarding KIP-719, I think we need it to land in 3.8 if we want to remove
the appender in 4.0. I also just noticed that log4j's KafkaAppender is
being deprecated in log4j2 and will not be part of log4j3.
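
As an aside on what deprecating the appender means for users: applications that 
relied on KafkaLog4jAppender to ship log records to a Kafka topic can achieve 
the same thing with any logging backend plus a plain producer. A minimal, 
hypothetical sketch of that decoupling (all names here are invented for 
illustration; the `send` callable stands in for a real Kafka producer's 
`send(topic, value)`):

```python
import logging

class KafkaLogHandler(logging.Handler):
    """Hypothetical handler: formats each log record and hands it to a
    send callable (in real use, a Kafka producer's send(topic, value))."""

    def __init__(self, topic, send):
        super().__init__()
        self.topic = topic
        self.send = send

    def emit(self, record):
        # Format with whatever Formatter is attached (default: the bare message)
        self.send(self.topic, self.format(record).encode("utf-8"))

# Demo with an in-memory stand-in for the producer, so no broker is needed.
sent = []
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(
    KafkaLogHandler("app-logs", lambda topic, value: sent.append((topic, value)))
)
logger.info("hello")
print(sent)  # -> [('app-logs', b'hello')]
```

The point is only that the "logs into a topic" use case does not depend on a 
log4j-1.x-specific appender living in the Kafka codebase.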

For KIP-653, as I said, my point was to gauge interest in getting it
done. While it may not be a "must-do" to keep Kafka working, we can
only do this type of change in major releases. So if we don't do it
now, it won't happen for a few more years.
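
For a sense of the shape of the change users would see, here is an illustrative 
(not exhaustive) sketch of a log4j 1.x console-logging config next to its 
log4j2 properties-format equivalent; the pattern string matches what Kafka's 
default config uses, the rest is a generic example:

```properties
# log4j 1.x / reload4j style
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# log4j2 properties-format equivalent
appender.stdout.type=Console
appender.stdout.name=STDOUT
appender.stdout.layout.type=PatternLayout
appender.stdout.layout.pattern=[%d] %p %m (%c)%n
rootLogger.level=INFO
rootLogger.appenderRef.stdout.ref=STDOUT
```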

Regarding log4j3, even though the website states it requires Java 11
[1], it seems the latest beta release requires Java 17 [0], so it's not
something we'll be able to adopt now.

0: https://github.com/apache/logging-log4j2/issues/1951
1: https://logging.apache.org/log4j/3.x/#requirements

Thanks,
Mickael

On Fri, Jan 5, 2024 at 12:18 AM Colin McCabe  wrote:
>
> Hi Mickael,
>
> Thanks for bringing this up.
>
> The main motivation given in KIP-653 for moving to log4j 2.x is that log4j 
> 1.x is no longer supported. But since we moved to reload4j, which is still 
> supported, that isn't a concern any longer.
>
> To be clear, I'm not saying we shouldn't upgrade, but I'm just trying to 
> explain why I think there hasn't been as much interest in this lately. I see 
> this as a "cool feature" rather than as a must-do.
>
> If we still want to do this for 4.0, it would be good to understand whether 
> there's any work that has to land in 3.8. Do we have to get KIP-719 into 3.8 
> so that we have a reasonable deprecation period?
>
> Also, if we do upgrade, I agree with Ismael that we should consider going to 
> log4j3. Assuming they have a non-beta release by the time 4.0 is ready.
>
> best,
> Colin
>
> On Thu, Jan 4, 2024, at 03:08, Mickael Maison wrote:
> > Hi Ismael,
> >
> > Yes both KIPs have been voted.
> > My point, which admittedly wasn't clear, was to gauge the interest in
> > getting them done and, if so, identifying people to drive these tasks.
> >
> > KIP-719 shouldn't require too much more work to complete. There's a PR
> > [0] which is relatively straightforward. I pinged Lee Dongjin.
> > KIP-653 is more involved and depends on KIP-719. There's also a PR [1]
> > which is pretty large.
> >
> > Yes log4j3 was on my mind as it's expected to be compatible with
> > log4j2 and bring significant improvements.
> >
> > 0: https://github.com/apache/kafka/pull/10244
> > 1: https://github.com/apache/kafka/pull/7898
> >
> > Thanks,
> > Mickael
> >
> > On Thu, Jan 4, 2024 at 11:34 AM Ismael Juma  wrote:
> >>
> >> Hi Mickael,
> >>
> >> Given that KIP-653 was accepted, the current position is that we would move
> >> to log4j2 - provided that someone is available to drive that. It's also
> >> worth noting that log4j3 is now a thing (but not yet final):
> >>
> >> https://logging.apache.org/log4j/3.x/
> >>
> >> Ismael
> >>
> >> On Thu, Jan 4, 2024 at 2:15 AM Mickael Maison 
> >> wrote:
> >>
> >> > Hi,
> >> >
> >> > I've not seen replies about log4j2.
> >> > The plan was to deprecate the appender (KIP-719) and switch to log4j2
> >> > (KIP-653).
> >> >
> >> > While reload4j works well, I'd still be in favor of switching to
> >> > log4j2 in Kafka 4.0.
> >> >
> >> > Thanks,
> >> > Mickael
> >> >
> >> > On Fri, Dec 29, 2023 at 2:19 AM Colin McCabe  wrote:
> >> > >
> >> > > Hi all,
> >> > >
> >> > > Let's continue this discussion on the "[DISCUSS] KIP-1012: The need for
> >> > a Kafka 3.8.x release" email thread.
> >> > >
> >> > > Colin
> >> > >
> >> > >
> >> > > On Tue, Dec 26, 2023, at 12:50, José Armando García Sancio wrote:
> >> > > > Hi Divij,
> >> > > >
> >> > > > Thanks for the feedback. I agree that having a 3.8 release is
> >> > > > beneficial but some of the comments in this message are inaccurate 
> >> > > > and
> >> > > > could mislead the community and users.
> >> > > >
> >> > > > On Thu, Dec 21, 2023 at 7:00 AM Divij Vaidya 
> >> > > > 
> >> > wrote:
> >> > > >> 1\ Durability/availability bugs in kraft - Even though kraft has 
> >> > > >> been
> >> > > >> around for a while, we keep finding bugs that impact availability 
> >> > > >> and
> >> > data
> >> > > >> durability in it almost with every release [1] [2]. It's a complex
> >> > feature
> >> > > >> and such bugs are expected during the stabilization phase. But we
> >> > can't
> >> > > >> remove the alternative until we see stabilization in kraft i.e. no 
> >> > > >> new
> >> > > >> stability/durability bugs for at least 2 releases.
> >> > > >
> >> > > > I took a look at both of these issues and neither of them are bugs
> >> > > > that affect KRaft's durability and availability.
> >> > > >
> >> > > >> [1] https://issues.apache.org/jira/browse/KAFKA-15495
> >> > > >
> >> > > > This issue is not specific to KRaft and has been an issue in Apache
> >> > > > Kafka since the ISR leader election and replication algorithm was
> >> > > > added to Apache Kafka. I acknowledge that this misunderstanding is
> >> > > > partially due to the Jira description which insinuates that this only
> >> > > > applies to KRaft which is 

Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


fpapon commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884502000

   > @fpapon, thanks for the PR. Could you explain the motivation for adding a 
`Get Support` page? Why is the `Contact Us` page not enough? Thanks.
   
   - This page explains in more detail how to get support. @ableegoldman didn't 
say that it's weird to have a commercial support list, but that it's weird to 
have only one company listed, which I agree with, and we can ask on the mailing 
list if other companies want to be listed.
   
   - Agreed on the vendor-neutral Slack channel, that's why I proposed to list 
the existing official ASF Kafka Slack channel. About StackOverflow, as there is 
a lot of content on that channel, it can be used as a FAQ, but users can also 
ask on the mailing list. I think users can search content more easily on 
StackOverflow than in the mailing list thread history.
   





Re: [DISCUSS] KIP-1014: Managing Unstable Metadata Versions in Apache Kafka

2024-01-10 Thread Federico Valeri
Hi folks,

> If you use an unstable MV, you probably won't be able to upgrade your 
> software. Because whenever something changes, you'll probably get 
> serialization exceptions being thrown inside the controller. Fatal ones.

Thanks for this clarification. I think this concrete risk should be
highlighted in the KIP and in the "unstable.metadata.versions.enable"
documentation.

In the test plan, should we also have one system test checking that
"features with a stable MV will never have that MV changed"?
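
The invariant such a system test would pin down can be sketched like this (a 
toy model; the feature names, MV numbers and mapping format are invented for 
illustration, not Kafka's actual feature table):

```python
STABLE, UNSTABLE = "stable", "unstable"

def stable_mv_violations(old_mapping, new_mapping):
    """Each mapping is feature -> (metadata_version, stability).
    Returns the features whose *stable* MV changed between releases
    (per KIP-1014 this list must always be empty); unstable MVs may
    be reordered, dropped, or renumbered freely."""
    violations = []
    for feature, (old_mv, old_state) in old_mapping.items():
        if old_state != STABLE:
            continue  # no guarantees for unstable MVs
        if feature in new_mapping and new_mapping[feature][0] != old_mv:
            violations.append(feature)
    return violations

# ELR-style example: an unstable feature's MV may move; a stable one may not.
old = {"elr": (19, UNSTABLE), "kraft.version": (7, STABLE)}
ok_new = {"elr": (21, UNSTABLE), "kraft.version": (7, STABLE)}
bad_new = {"elr": (21, UNSTABLE), "kraft.version": (8, STABLE)}

print(stable_mv_violations(old, ok_new))   # -> []
print(stable_mv_violations(old, bad_new))  # -> ['kraft.version']
```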

On Wed, Jan 10, 2024 at 8:16 AM Colin McCabe  wrote:
>
> On Tue, Jan 9, 2024, at 18:56, Proven Provenzano wrote:
> > Hi folks,
> >
> > Thank you for the questions.
> >
> > Let me clarify about reorder first. The reorder of unstable metadata
> > versions should be infrequent.
>
> Why does it need to be infrequent? We should be able to reorder unstable 
> metadata versions as often as we like. There are no guarantees about unstable 
> MVs.
>
> > The time you reorder is when a feature that
> > requires a higher metadata version to enable becomes "production ready" and
> > the features with unstable metadata versions less than the new stable one
> > are moved to metadata versions greater than the new stable feature. When we
> > reorder, we are always allocating a new MV and we are never reusing an
> > existing MV even if it was also unstable. This way a developer upgrading
> > their environment with a specific unstable MV might see existing
> > functionality stop working but they won't see new MV dependent
> > functionality magically appear. The feature set for a given unstable MV
> > version can only decrease with reordering.
>
> If you use an unstable MV, you probably won't be able to upgrade your 
> software. Because whenever something changes, you'll probably get 
> serialization exceptions being thrown inside the controller. Fatal ones.
>
> Given that this is true, there's no reason to have special rules about what 
> we can and can't do with unstable MVs. We can do anything.
>
> >
> > How do we define "production ready" and when should we bump
> > LATEST_PRODUCTION? I would like to define it to be the point where the
> > feature is code complete with tests and the KIP for it is approved. However
> > even with this definition if the feature later develops a major issue it
> > could still block future features until the issue is fixed which is what we
> > are trying to avoid here. We could be much more formal about this and let
> > the release manager for a release define what is stable for a given release
> > and then do the bump on the branch just after the branch is created. When
> > an RC candidate is accepted, the bump would be backported. I would like to
> > hear other ideas here.
> >
>
> Yeah, it's an interesting question. Overall, I think developers should define 
> when a feature is production ready.
>
> The question to ask is, "are you ready to take this feature to production in 
> your workplace?" I think most developers do have a sense of this. Obviously 
> bugs and mistakes can happen, but I think this standard would avoid most of 
> the issues that we're trying to avoid by having unstable MVs in the first 
> place.
>
> ELR is a good example. Nobody would have said that it was production ready in 
> 3.7 ... hence it belonged (and still belongs) in an unstable MV, until that 
> changes (hopefully soon :) )
>
> best,
> Colin
>
> > --Proven
> >
> > On Tue, Jan 9, 2024 at 3:26 PM Colin McCabe  wrote:
> >
> >> Hi Justine,
> >>
> >> Yes, this is an important point to clarify. Proven can comment more, but
> >> my understanding is that we can do anything to unstable metadata versions.
> >> Reorder them, delete them, change them in any other way. There are no
> >> stability guarantees. If the current text is unclear let's add more
> >> examples of what we can do (which is anything) :)
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Mon, Jan 8, 2024, at 14:18, Justine Olshan wrote:
> >> > Hey Colin,
> >> >
> >> > I had some offline discussions with Proven previously and it seems like
> >> he
> >> > said something different so I'm glad I brought it up here.
> >> >
> >> > Let's clarify if we are ok with reordering unstable metadata versions :)
> >> >
> >> > Justine
> >> >
> >> > On Mon, Jan 8, 2024 at 1:56 PM Colin McCabe  wrote:
> >> >
> >> >> On Mon, Jan 8, 2024, at 13:19, Justine Olshan wrote:
> >> >> > Hey all,
> >> >> >
> >> >> > I was wondering how often we plan to update LATEST_PRODUCTION metadata
> >> >> > version. Is this something we should do as soon as the feature is
> >> >> complete
> >> >> > or something we do when we are releasing kafka. When is the time we
> >> >> abandon
> >> >> > a MV so that other features can be unblocked?
> >> >>
> >> >> Hi Justine,
> >> >>
> >> >> Thanks for reviewing.
> >> >>
> >> >> The idea is that you should bump LATEST_PRODUCTION when you want to
> >> take a
> >> >> feature to production. That could mean deploying it internally
> >> somewhere to
> >> >> 

Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


fpapon commented on PR #577:
URL: https://github.com/apache/kafka-site/pull/577#issuecomment-1884372269

   @showuon There are a lot of ASF projects that provide a page with community 
support; it helps the adoption and growth of the project because users need 
commercial support for production (SLAs), consulting or training. So if 
companies are listed on the project website, it can be very useful for users to 
find support. You can check the Camel, ActiveMQ and Karaf websites for example.





Re: [PR] Add get support page [kafka-site]

2024-01-10 Thread via GitHub


rmannibucau commented on code in PR #577:
URL: https://github.com/apache/kafka-site/pull/577#discussion_r1447016147


##
support.html:
##
@@ -0,0 +1,54 @@
+
+
+
+
+   
+   
+   Support
+   Community support
+
+   
+   This is an open source project so the amount of time community members have available to help resolve your issue
+   is often limited as help is provided on a volunteer basis from
+   <a href="https://www.apache.org/foundation/how-it-works/#hats" target="_blank">individuals</a>.
+   However, it is free, and you are often able to discuss issues with the developers who actually wrote the code
+   you are running.
+   
+   
+   If you want community support then please <a href="https://kafka.apache.org/contact">contact us</a>.
+   If you’re fairly certain you’re hitting a bug please report it via one of our
+   <a href="https://kafka.apache.org/contributing">issue trackers</a>. Be sure to review
+   these pages carefully as they contain tips and tricks about working with Apache communities in general and
+   Apache Kafka in particular.
+   
+
+   Commercial support
+   
+   Apache Kafka is a widely used project. As such, several companies have built products and services around Kafka.
+   This page is dedicated to providing descriptions of those offerings and links to more information.
+   Companies are definitely encouraged to update this page directly or send a mail to the Kafka PMC with a description
+   of your offerings, and we can update the page. The products and services listed on this page are provided for
+   information use only to our users. The Kafka PMC does not endorse or recommend any of the products or services
+   on this page. See below for information about what is appropriate to add to the page.
+   
+
+   
+   <a href="https://www.yupiik.com" target="_blank">Yupiik</a> contributes and commits to many Apache projects. Provides consulting, training and support

Review Comment:
   For one rationale (coming from other ASF projects): asking for help always 
requires some sort of registration, and when a company just wants support 
(often SLA or expertise related) this is perceived as negative. So several ASF 
projects (mostly the ones able to run standalone, i.e. not libraries) started 
to add such a page, or at least to point to a page where the info can be found.
   
   Hope it helps.


