Re: [ANNOUNCE] New committer: Justine Olshan

2023-01-10 Thread Bruno Cadonna

Better late than never!

Congrats!

Best,
Bruno

On 04.01.23 20:25, Kirk True wrote:

Congratulations!

On Tue, Jan 3, 2023, at 7:34 PM, John Roesler wrote:

Congrats, Justine!
-John

On Tue, Jan 3, 2023, at 13:03, Matthias J. Sax wrote:

Congrats!

On 12/29/22 6:47 PM, ziming deng wrote:

Congratulations Justine!
—
Best,
Ziming


On Dec 30, 2022, at 10:06, Luke Chen  wrote:

Congratulations, Justine!
Well deserved!

Luke

On Fri, Dec 30, 2022 at 9:15 AM Ron Dagostino  wrote:


Congratulations, Justine! Well deserved, and I’m very happy for you.

Ron


On Dec 29, 2022, at 6:13 PM, Israel Ekpo  wrote:

Congratulations Justine!



On Thu, Dec 29, 2022 at 5:05 PM Greg Harris wrote:

Congratulations Justine!


On Thu, Dec 29, 2022 at 1:37 PM Bill Bejeck  wrote:

Congratulations Justine!


-Bill


On Thu, Dec 29, 2022 at 4:36 PM Philip Nee wrote:



wow congrats!

On Thu, Dec 29, 2022 at 1:05 PM Chris Egerton <fearthecel...@gmail.com> wrote:


Congrats, Justine!

On Thu, Dec 29, 2022, 15:58 David Jacot  wrote:


Hi all,

The PMC of Apache Kafka is pleased to announce a new Kafka committer,
Justine Olshan.

Justine has been contributing to Kafka since June 2019. She contributed
53 PRs including the following KIPs.

KIP-480: Sticky Partitioner
KIP-516: Topic Identifiers & Topic Deletion State Improvements
KIP-854: Separate configuration for producer ID expiry
KIP-890: Transactions Server-Side Defense (in progress)

Congratulations, Justine!

Thanks,

David (on behalf of the Apache Kafka PMC)

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1503

2023-01-10 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14613) Move BrokerReconfigurable/KafkaConfig to server-common module.

2023-01-10 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-14613:
--

 Summary: Move BrokerReconfigurable/KafkaConfig to server-common 
module.
 Key: KAFKA-14613
 URL: https://issues.apache.org/jira/browse/KAFKA-14613
 Project: Kafka
  Issue Type: Sub-task
Reporter: Satish Duggana






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-890 Server Side Defense

2023-01-10 Thread Artem Livshits
There are some workflows in the client that are implied by protocol
changes, e.g.:

- for new clients, the epoch changes with every transaction and can overflow.
In old clients this condition was handled transparently, because the epoch was
bumped in InitProducerId and it would return a new producer id if the epoch
overflowed; the new clients would need to implement some workflow to refresh
the producer id
- how to handle fenced producers: for new clients the epoch changes with every
transaction, so in the presence of failures during commits / aborts the
producer could easily get fenced. Old clients would pretty much only get
fenced when a new incarnation of the producer was initialized with
InitProducerId, so it was ok to treat fencing as a fatal error; the new
clients would need to implement some workflow to handle that error, otherwise
they could fence themselves
- in particular (as a subset of the previous issue), what would the client
do if it got a timeout during commit? The commit could've succeeded or failed

Not sure if this has to be defined in the KIP as implementing those
probably wouldn't require protocol changes, but we have multiple
implementations of Kafka clients, so probably would be good to have some
client implementation guidance.  Could also be done as a separate doc.

-Artem
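The epoch-overflow workflow described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Kafka client code: `ProducerState`, `initProducerId`, and the overflow handling are simplified stand-ins for the real producer internals.

```java
// Hypothetical sketch of a client-side epoch refresh workflow: bump the epoch
// for each transaction, and when the epoch would overflow, fall back to a
// full InitProducerId round trip that yields a fresh producer id.
public class EpochOverflowSketch {
    static final short MAX_EPOCH = Short.MAX_VALUE;

    static class ProducerState {
        final long producerId;
        final short epoch;
        ProducerState(long producerId, short epoch) {
            this.producerId = producerId;
            this.epoch = epoch;
        }
    }

    // Stand-in for an InitProducerId round trip returning a fresh producer id.
    static ProducerState initProducerId(long nextId) {
        return new ProducerState(nextId, (short) 0);
    }

    // Bump the epoch for the next transaction; on overflow, fall back to a
    // full InitProducerId to obtain a new producer id with epoch 0.
    static ProducerState bumpEpoch(ProducerState s) {
        if (s.epoch == MAX_EPOCH) {
            return initProducerId(s.producerId + 1);
        }
        return new ProducerState(s.producerId, (short) (s.epoch + 1));
    }
}
```

The point of the sketch is only the control flow: old clients got this refresh for free inside InitProducerId, while new clients that bump the epoch per transaction must detect the overflow themselves.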

On Mon, Jan 9, 2023 at 3:38 PM Justine Olshan 
wrote:

> Hey all, I've updated the KIP to incorporate Jason's suggestions.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense
>
>
> 1. Use AddPartitionsToTxn + verify flag to check on old clients
> 2. Updated AddPartitionsToTxn API to support transaction batching
> 3. Mention IBP bump
> 4. Mention auth change on new AddPartitionsToTxn version.
>
> I'm planning on opening a vote soon.
> Thanks,
> Justine
>
> On Fri, Jan 6, 2023 at 3:32 PM Justine Olshan 
> wrote:
>
> > Thanks Jason. Those changes make sense to me. I will update the KIP.
> >
> >
> >
> > On Fri, Jan 6, 2023 at 3:31 PM Jason Gustafson
> 
> > wrote:
> >
> >> Hey Justine,
> >>
> >> > I was wondering about compatibility here. When we send requests
> >> between brokers, we want to ensure that the receiving broker understands
> >> the request (specifically the new fields). Typically this is done via
> >> IBP/metadata version.
> >> I'm trying to think if there is a way around it but I'm not sure there
> is.
> >>
> >> Yes. I think we would gate usage of this behind an IBP bump. Does that
> >> seem
> >> reasonable?
> >>
> >> > As for the improvements -- can you clarify how the multiple
> >> transactional
> >> IDs would help here? Were you thinking of a case where we wait/batch
> >> multiple produce requests together? My understanding for now was 1
> >> transactional ID and one validation per 1 produce request.
> >>
> >> Each call to `AddPartitionsToTxn` is essentially a write to the
> >> transaction
> >> log and must block on replication. The more we can fit into a single
> >> request, the more writes we can do in parallel. The alternative is to
> make
> >> use of more connections, but usually we prefer batching since the
> network
> >> stack is not really optimized for high connection/request loads.
> >>
> >> > Finally with respect to the authorizations, I think it makes sense to
> >> skip
> >> topic authorizations, but I'm a bit confused by the "leader ID" field.
> >> Wouldn't we just want to flag the request as from a broker (does it
> matter
> >> which one?).
> >>
> >> We could also make it version-based. For the next version, we could
> >> require
> >> CLUSTER auth. So clients would not be able to use the API anymore, which
> >> is
> >> probably what we want.
> >>
> >> -Jason
> >>
> >> On Fri, Jan 6, 2023 at 10:43 AM Justine Olshan
> >> 
> >> wrote:
> >>
> >> > As a follow up, I was just thinking about the batching a bit more.
> >> > I suppose if we have one request in flight and we queue up the other
> >> > produce requests in some sort of purgatory, we could send information
> >> out
> >> > for all of them rather than one by one. So that would be a benefit of
> >> > batching partitions to add per transaction.
> >> >
> >> > I'll need to think a bit more on the design of this part of the KIP,
> and
> >> > will update the KIP in the next few days.
> >> >
> >> > Thanks,
> >> > Justine
> >> >
> >> > On Fri, Jan 6, 2023 at 10:22 AM Justine Olshan 
> >> > wrote:
> >> >
> >> > > Hey Jason -- thanks for the input -- I was just digging a bit deeper
> >> into
> >> > > the design + implementation of the validation calls here and what
> you
> >> say
> >> > > makes sense.
> >> > >
> >> > > I was wondering about compatibility here. When we send requests
> >> > > between brokers, we want to ensure that the receiving broker
> >> understands
> >> > > the request (specifically the new fields). Typically this is done
> via
> >> > > IBP/metadata version.
> >> > > I'm trying to think if there is a way around it but I'm not sure
> there
> >> > is.
> >> > >
> >> > > As for the improvements -- can you 

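The batching idea discussed in this thread (queueing pending produce requests and sending the partition additions for all of them together, so each transactional id needs at most one transaction-log write per batch) could look roughly like this toy sketch. `TxnBatchSketch` and `batch` are invented names for illustration only, not the real broker code.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy sketch: group queued (transactionalId, partition) additions so that one
// batched AddPartitionsToTxn-style request covers many pending produces,
// amortizing the replication round trip per transactional id.
public class TxnBatchSketch {
    static Map<String, Set<String>> batch(List<String[]> pending) {
        Map<String, Set<String>> byTxnId = new LinkedHashMap<>();
        for (String[] p : pending) {
            // p[0] = transactional id, p[1] = topic-partition
            byTxnId.computeIfAbsent(p[0], k -> new LinkedHashSet<>()).add(p[1]);
        }
        return byTxnId;
    }
}
```

As Jason notes above, the win is that one write to the transaction log can cover many partitions, instead of one blocking replicated write per call.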
Re: [VOTE] KIP-890: Transactions Server Side Defense

2023-01-10 Thread Colt McNealy
(non-binding) +1. Thank you for the KIP, Justine! I've read it; it makes
sense to me and I am excited for the implementation.

Colt McNealy
*Founder, LittleHorse.io*


On Tue, Jan 10, 2023 at 10:46 AM Justine Olshan
 wrote:

> Hi everyone,
>
> I would like to start a vote on KIP-890 which aims to prevent some of the
> common causes of hanging transactions and make other general improvements
> to transactions in Kafka.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense
>
> Please take a look if you haven't already and vote!
>
> Justine
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1502

2023-01-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-896: Remove old client protocol API versions in Kafka 4.0

2023-01-10 Thread Ismael Juma
Hi Jose,

The KIP describes a couple of existing metrics that can be used to answer
that question:

The following metrics are used to determine both questions:

> - Client name and version:
>   kafka.server:clientSoftwareName=(client-software-name),clientSoftwareVersion=(client-software-version),listener=(listener),networkProcessor=(processor-index),type=(type)
> - Request name and version:
>   kafka.network:type=RequestMetrics,name=RequestsPerSec,request=(api-name),version=(api-version)
Are you suggesting that this is too complicated and hence we should add a
metric that tracks AK 4.0 support explicitly?

Ismael

On Tue, Jan 10, 2023 at 10:33 AM José Armando García Sancio
 wrote:

> Hi Ismael,
>
> Thanks for the improvement.
>
> I haven't been following the discussion in detail so it is possible
> that this was already discussed.
>
> If a user upgrades to Apache Kafka 4.0 it is possible for some of
> their clients to stop working because the request's version would not
> be a version that Kafka 4.0 supports. Should we add metrics or some
> other mechanism that the user can monitor to determine if it is safe
> to upgrade Kafka to 4.0? For example, the metrics could report if a
> Kafka broker received a request or response in the past 7 days that
> would not be supported by Kafka 4.0.
>
> Thanks
> --
> -José
>
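As a rough illustration of how the per-version request metrics could answer José's question, a monitoring script might parse the RequestsPerSec MBean names and flag request versions below a chosen minimum. The MBean pattern comes from the KIP text quoted above; the helper class, its method names, and the minimum-version parameter are purely illustrative assumptions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: extract the api-version from a RequestsPerSec MBean
// name and flag versions below a caller-supplied minimum (e.g. the lowest
// version a 4.0 broker would still accept for that API).
public class RequestVersionCheck {
    private static final Pattern MBEAN = Pattern.compile(
        "kafka\\.network:type=RequestMetrics,name=RequestsPerSec," +
        "request=([^,]+),version=(\\d+)");

    // Returns the api-version encoded in the MBean name, or -1 if no match.
    static int versionOf(String mbeanName) {
        Matcher m = MBEAN.matcher(mbeanName);
        return m.matches() ? Integer.parseInt(m.group(2)) : -1;
    }

    // True if the observed request version is below the given minimum.
    static boolean belowMinimum(String mbeanName, int minSupportedVersion) {
        int v = versionOf(mbeanName);
        return v >= 0 && v < minSupportedVersion;
    }
}
```

An operator could run such a check over a JMX dump collected for a week, which is essentially the "did we see any soon-to-be-unsupported versions recently" signal José asks about.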


[jira] [Created] (KAFKA-14612) Topic config records written to log even when topic creation fails

2023-01-10 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-14612:
---

 Summary: Topic config records written to log even when topic 
creation fails
 Key: KAFKA-14612
 URL: https://issues.apache.org/jira/browse/KAFKA-14612
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Reporter: Jason Gustafson


Config records are added when handling a `CreateTopics` request here: 
[https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java#L549.]
 If the subsequent validations fail and the topic is not created, these records 
will still be written to the log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14557) Missing .lock file when using metadata.log.dir

2023-01-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

José Armando García Sancio resolved KAFKA-14557.

Resolution: Fixed

> Missing .lock file when using metadata.log.dir
> --
>
> Key: KAFKA-14557
> URL: https://issues.apache.org/jira/browse/KAFKA-14557
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: José Armando García Sancio
>Assignee: José Armando García Sancio
>Priority: Major
> Fix For: 3.4.0, 3.3.3
>
>
> If the Kafka node is configured to use a metadata.log.dir that is different 
> from the one specified in log.dir or log.dirs, Kafka doesn't create and grab 
> a file lock for the metadata.log.dir. The log dir lock file is named .lock.
> This makes it possible for multiple Kafka nodes to use the same metadata log 
> dir at the same time. This is not supported and should not be allowed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14611) ZK broker should not send epoch during registration

2023-01-10 Thread David Arthur (Jira)
David Arthur created KAFKA-14611:


 Summary: ZK broker should not send epoch during registration
 Key: KAFKA-14611
 URL: https://issues.apache.org/jira/browse/KAFKA-14611
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Arthur
Assignee: David Arthur
 Fix For: 3.4.0


We need to remove the integer field from the protocol for 
"migratingZkBrokerEpoch" and replace it with a simple boolean.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New committer: Satish Duggana

2023-01-10 Thread Rajini Sivaram
Congratulations, Satish!

Regards,

Rajini

On Tue, Jan 10, 2023 at 5:12 PM Bruno Cadonna  wrote:

> Congrats!
>
> Best,
> Bruno
>
> On 24.12.22 12:44, Manikumar wrote:
> > Congrats, Satish!  Well deserved.
> >
> > On Sat, Dec 24, 2022, 5:10 PM Tom Bentley  wrote:
> >
> >> Congratulations!
> >>
> >> On Sat, 24 Dec 2022 at 05:05, Luke Chen  wrote:
> >>
> >>> Congratulations, Satish!
> >>>
> >>> On Sat, Dec 24, 2022 at 4:12 AM Federico Valeri 
> >>> wrote:
> >>>
>  Hi Satish, congrats!
> 
>  On Fri, Dec 23, 2022, 8:46 PM Viktor Somogyi-Vass
>   wrote:
> 
> > Congrats Satish!
> >
> > On Fri, Dec 23, 2022, 19:38 Mickael Maison  >>>
> > wrote:
> >
> >> Congratulations Satish!
> >>
> >> On Fri, Dec 23, 2022 at 7:36 PM Divij Vaidya <
> >>> divijvaidy...@gmail.com>
> >> wrote:
> >>>
> >>> Congratulations Satish! 
> >>>
> >>> On Fri 23. Dec 2022 at 19:32, Josep Prat
> >>>  >
> >>> wrote:
> >>>
>  Congrats Satish!
> 
>  ———
>  Josep Prat
> 
>  Aiven Deutschland GmbH
> 
>  Immanuelkirchstraße 26, 10405 Berlin
>  <
> >>
> >
> 
> >>>
> >>
> https://www.google.com/maps/search/Immanuelkirchstra%C3%9Fe+26,+10405+Berlin?entry=gmail=g
> >>>
> 
>  Amtsgericht Charlottenburg, HRB 209739 B
> 
>  Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> 
>  m: +491715557497
> 
>  w: aiven.io
> 
>  e: josep.p...@aiven.io
> 
>  On Fri, Dec 23, 2022, 19:23 Chris Egerton <
> >>> fearthecel...@gmail.com
> >
> >> wrote:
> 
> > Congrats, Satish!
> >
> > On Fri, Dec 23, 2022, 13:19 Arun Raju 
> > wrote:
> >
> >> Congratulations 
> >>
> >> On Fri, Dec 23, 2022, 1:08 PM Jun Rao
> >>>  >
>  wrote:
> >>
> >>> Hi, Everyone,
> >>>
> >>> The PMC of Apache Kafka is pleased to announce a new Kafka committer
> >>> Satish Duggana.
> >>>
> >>> Satish has been a long time Kafka contributor since 2017. He is the
> >>> main driver behind KIP-405 that integrates Kafka with remote storage,
> >>> a significant and much anticipated feature in Kafka.
> >>>
> >>> Congratulations, Satish!
> >>>
> >>> Thanks,
> >>>
> >>> Jun (on behalf of the Apache Kafka PMC)
> >>> --
> >>> Divij Vaidya


[VOTE] KIP-890: Transactions Server Side Defense

2023-01-10 Thread Justine Olshan
Hi everyone,

I would like to start a vote on KIP-890 which aims to prevent some of the
common causes of hanging transactions and make other general improvements
to transactions in Kafka.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense

Please take a look if you haven't already and vote!

Justine


Re: [DISCUSS] KIP-896: Remove old client protocol API versions in Kafka 4.0

2023-01-10 Thread José Armando García Sancio
Hi Ismael,

Thanks for the improvement.

I haven't been following the discussion in detail so it is possible
that this was already discussed.

If a user upgrades to Apache Kafka 4.0 it is possible for some of
their clients to stop working because the request's version would not
be a version that Kafka 4.0 supports. Should we add metrics or some
other mechanism that the user can monitor to determine if it is safe
to upgrade Kafka to 4.0? For example, the metrics could report if a
Kafka broker received a request or response in the past 7 days that
would not be supported by Kafka 4.0.

Thanks
-- 
-José


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1501

2023-01-10 Thread Apache Jenkins Server
See 




Re: [VOTE] 3.3.2 RC1

2023-01-10 Thread José Armando García Sancio
Hey Chris,

Here are the results:
http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1673314598--apache--HEAD--b66af662e6/2023-01-09--001./2023-01-09--001./report.html

It looks like all of the failures are when trying to upgrade to
3.3.2-SNAPSHOT. I saw a similar error in my PR here but I am not sure
if it is related: https://github.com/apache/kafka/pull/13077

Maybe someone familiar with Kafka Streams can help.

Thanks,
-- 
-José


Re: [DISCUSS] KIP-877: Mechanism for plugins and connectors to register metrics

2023-01-10 Thread Mickael Maison
Hi Chris/Yash,

Thanks for taking a look and providing feedback.

1) Yes you're right, when using an incompatible version, metrics() would
trigger a NoSuchMethodError. I thought using the context to pass the
Metrics object would be more idiomatic for Connect but maybe
implementing Monitorable would be simpler. It would also allow other
Connect plugins (transformations, converters, etc) to register
metrics. So I'll make that change.

2) As mentioned in the rejected alternatives, I considered having a
PluginMetrics class/interface with a limited API. But since Metrics is
part of the public API, I thought it would be simpler to reuse it.
That said you bring interesting points so I took another look today.
It's true that the Metrics API is pretty complex and most methods are
useless for plugin authors. I'd expect most use cases to only need an
addMetric and a sensor method. Rather than subclassing Metrics, I
think a delegate/forwarding pattern might work well here. A
PluginMetric class would forward its method to the Metrics instance
and could perform some basic validations such as only letting plugins
delete metrics they created, or automatically injecting tags with the
class name for example.

3) Between the clients, brokers, streams and connect, Kafka has quite
a lot of plugin types! In practice I think registering metrics should be beneficial
for all plugins, I think the only exception would be metrics reporters
(which are instantiated before the Metrics object). I'll try to build
a list of all plugin types and add that to the KIP.

Thanks,
Mickael
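The delegate/forwarding pattern described above could be sketched like this. The class and method names are illustrative only, not the KIP's actual API, and a plain map stands in for the real Metrics registry; the point is the two validations Mickael mentions: plugins may only delete metrics they created, and the plugin class name is injected as a tag automatically.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative forwarding wrapper: a narrow per-plugin facade over a shared
// metrics registry that tracks ownership and auto-tags with the plugin class.
public class PluginMetricsSketch {
    private final Map<String, Double> registry;   // stand-in for the shared Metrics instance
    private final Set<String> owned = new HashSet<>();
    private final String pluginClass;

    public PluginMetricsSketch(Map<String, Double> registry, String pluginClass) {
        this.registry = registry;
        this.pluginClass = pluginClass;
    }

    // Forward registration, automatically injecting the plugin class as a tag.
    public String addMetric(String name, double value) {
        String tagged = name + ",plugin=" + pluginClass;
        registry.put(tagged, value);
        owned.add(tagged);
        return tagged;
    }

    // Basic validation: a plugin may only delete metrics it created itself.
    public void removeMetric(String taggedName) {
        if (!owned.remove(taggedName)) {
            throw new IllegalArgumentException("not owned by this plugin: " + taggedName);
        }
        registry.remove(taggedName);
    }
}
```

This keeps the surface area small (the concern raised earlier in the thread) without subclassing Metrics.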



On Tue, Dec 27, 2022 at 4:54 PM Chris Egerton  wrote:
>
> Hi Yash,
>
> Yes, a default no-op is exactly what I had in mind should the Connector and
> Task classes implement the Monitorable interface.
>
> Cheers,
>
> Chris
>
> On Tue, Dec 20, 2022 at 2:46 AM Yash Mayya  wrote:
>
> > Hi Mickael,
> >
> > Thanks for creating this KIP, this will be a super useful feature to
> > enhance existing connectors in the Kafka Connect ecosystem.
> >
> > I have some similar concerns to the ones that Chris has outlined above,
> > especially with regard to directly exposing Connect's Metrics object to
> > plugins. I believe it would be a lot friendlier to developers if we instead
> > exposed wrapper methods in the context classes - such as one for
> > registering a new metric, one for recording metric values and so on. This
> > would also have the added benefit of minimizing the surface area for
> > potential misuse by custom plugins.
> >
> > > for connectors and tasks they should handle the
> > > metrics() method returning null when deployed on
> > > an older runtime.
> >
> > I believe this won't be the case, and instead they'll need to handle a
> > `NoSuchMethodError` right? This is similar to previous KIPs that added
> > methods to connector context classes and will arise due to an
> > incompatibility between the `connect-api` dependency that a plugin will be
> > compiled against versus what it will actually get at runtime.
> >
> > Hi Chris,
> >
> > > WDYT about having the Connector and Task classes
> > > implement the Monitorable interface, both for
> > > consistency's sake, and to prevent classloading
> > > headaches?
> >
> > Are you suggesting that the framework should configure connectors / tasks
> > with a Metrics instance during their startup rather than the connector /
> > task asking the framework to provide one? In this case, I'm guessing you're
> > envisioning a default no-op implementation for the metrics configuration
> > method rather than the framework having to handle the case where the
> > connector was compiled against an older version of Connect right?
> >
> > Thanks,
> > Yash
> >
> > On Wed, Nov 30, 2022 at 1:38 AM Chris Egerton 
> > wrote:
> >
> > > Hi Mickael,
> > >
> > > Thanks for the KIP! This seems especially useful to reduce the
> > > implementation cost and divergence in behavior for connectors that choose
> > > to publish their own metrics.
> > >
> > > My initial thoughts:
> > >
> > > 1. Are you certain that the default implementation of the "metrics"
> > method
> > > for the various connector/task context classes will be used on older
> > > versions of the Connect runtime? My understanding was that a
> > > NoSuchMethodError (or some similar classloading exception) would be
> > thrown
> > > in that case. If that turns out to be true, WDYT about having the
> > Connector
> > > and Task classes implement the Monitorable interface, both for
> > > consistency's sake, and to prevent classloading headaches?
> > >
> > > 2. Although I agree that administrators should be careful about which
> > > plugins they run on their clients, Connect clusters, etc., I wonder if
> > > there might still be value in wrapping the Metrics class behind a new
> > > interface, for a few reasons:
> > >
> > >   a. Developers and administrators may still make mistakes, and if we can
> > > reduce the blast radius by preventing plugins from, e.g., closing the
> > > Metrics instance we give them, it 

[GitHub] [kafka-site] bbejeck commented on pull request #476: Add CloudScale to powered-by page

2023-01-10 Thread GitBox


bbejeck commented on PR #476:
URL: https://github.com/apache/kafka-site/pull/476#issuecomment-1377622904

   merged #476 into asf-site


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] bbejeck merged pull request #476: Add CloudScale to powered-by page

2023-01-10 Thread GitBox


bbejeck merged PR #476:
URL: https://github.com/apache/kafka-site/pull/476


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] bbejeck commented on pull request #476: Add CloudScale to powered-by page

2023-01-10 Thread GitBox


bbejeck commented on PR #476:
URL: https://github.com/apache/kafka-site/pull/476#issuecomment-1377622368

   @nandita-cloudscaleinc thanks for the PR


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [ANNOUNCE] New committer: Satish Duggana

2023-01-10 Thread Bruno Cadonna

Congrats!

Best,
Bruno

On 24.12.22 12:44, Manikumar wrote:

Congrats, Satish!  Well deserved.

On Sat, Dec 24, 2022, 5:10 PM Tom Bentley  wrote:


Congratulations!

On Sat, 24 Dec 2022 at 05:05, Luke Chen  wrote:


Congratulations, Satish!

On Sat, Dec 24, 2022 at 4:12 AM Federico Valeri 
wrote:


Hi Satish, congrats!

On Fri, Dec 23, 2022, 8:46 PM Viktor Somogyi-Vass wrote:


Congrats Satish!

On Fri, Dec 23, 2022, 19:38 Mickael Maison wrote:


Congratulations Satish!

On Fri, Dec 23, 2022 at 7:36 PM Divij Vaidya <divijvaidy...@gmail.com> wrote:


Congratulations Satish! 

On Fri 23. Dec 2022 at 19:32, Josep Prat wrote:


Congrats Satish!

———
Josep Prat

Aiven Deutschland GmbH

Immanuelkirchstraße 26, 10405 Berlin
<https://www.google.com/maps/search/Immanuelkirchstra%C3%9Fe+26,+10405+Berlin?entry=gmail=g>




Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen

m: +491715557497

w: aiven.io

e: josep.p...@aiven.io

On Fri, Dec 23, 2022, 19:23 Chris Egerton <fearthecel...@gmail.com> wrote:



Congrats, Satish!

On Fri, Dec 23, 2022, 13:19 Arun Raju wrote:



Congratulations 

On Fri, Dec 23, 2022, 1:08 PM Jun Rao wrote:



Hi, Everyone,

The PMC of Apache Kafka is pleased to announce a new Kafka committer
Satish Duggana.

Satish has been a long time Kafka contributor since 2017. He is the
main driver behind KIP-405 that integrates Kafka with remote storage,
a significant and much anticipated feature in Kafka.

Congratulations, Satish!

Thanks,

Jun (on behalf of the Apache Kafka PMC)


--
Divij Vaidya
Re: [ANNOUNCE] New committer: Edoardo Comar

2023-01-10 Thread Bruno Cadonna

Congrats!

Best,
Bruno

On 10.01.23 11:00, Edoardo Comar wrote:

Many thanks everyone !

On Mon, 9 Jan 2023 at 19:40, Rajini Sivaram  wrote:


Congratulations, Edo!

Regards,

Rajini

On Mon, Jan 9, 2023 at 10:16 AM Tom Bentley  wrote:


Congratulations!

On Sun, 8 Jan 2023 at 01:14, Satish Duggana wrote:


Congratulations, Edoardo!

On Sun, 8 Jan 2023 at 00:15, Viktor Somogyi-Vass wrote:


Congrats Edoardo!

On Sat, Jan 7, 2023, 18:15 Bill Bejeck  wrote:


Congratulations, Edoardo!

-Bill

On Sat, Jan 7, 2023 at 12:11 PM John Roesler wrote:



Congrats, Edoardo!
-John

On Fri, Jan 6, 2023, at 20:47, Matthias J. Sax wrote:

Congrats!

On 1/6/23 5:15 PM, Luke Chen wrote:

Congratulations, Edoardo!

Luke

On Sat, Jan 7, 2023 at 7:58 AM Mickael Maison <mickael.mai...@gmail.com> wrote:


Congratulations Edo!


On Sat, Jan 7, 2023 at 12:05 AM Jun Rao wrote:


Hi, Everyone,

The PMC of Apache Kafka is pleased to announce a new Kafka committer
Edoardo Comar.

Edoardo has been a long time Kafka contributor since 2016. His major
contributions are the following.

KIP-302: Enable Kafka clients to use all DNS resolved IP addresses
KIP-277: Fine Grained ACL for CreateTopics API
KIP-136: Add Listener name to SelectorMetrics tags

Congratulations, Edoardo!

Thanks,

Jun (on behalf of the Apache Kafka PMC)

[GitHub] [kafka-site] nandita-cloudscaleinc opened a new pull request, #476: Add CloudScale to powered-by page

2023-01-10 Thread GitBox


nandita-cloudscaleinc opened a new pull request, #476:
URL: https://github.com/apache/kafka-site/pull/476

   On behalf of the Cloud Scale® Inc team, I would like to add it to the 
powered-by page.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.4 #35

2023-01-10 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 468700 lines...]
[2023-01-10T11:09:24.605Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-01-10T11:09:24.605Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-01-10T11:09:24.605Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-01-10T11:09:24.605Z] 25 warnings
[2023-01-10T11:09:24.605Z] 
[2023-01-10T11:09:24.605Z] > Task :clients:javadoc
[2023-01-10T11:09:24.605Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/OAuthBearerLoginCallbackHandler.java:151:
 warning - Tag @link: reference not found: 
[2023-01-10T11:09:25.535Z] 
[2023-01-10T11:09:25.535Z] > Task :streams:javadocJar
[2023-01-10T11:09:25.535Z] > Task :streams:processTestResources UP-TO-DATE
[2023-01-10T11:09:30.450Z] 
[2023-01-10T11:09:30.450Z] > Task :clients:javadoc
[2023-01-10T11:09:30.450Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
[2023-01-10T11:09:30.450Z] 
/home/jenkins/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
[2023-01-10T11:09:30.450Z] 3 warnings
[2023-01-10T11:09:31.633Z] 
[2023-01-10T11:09:31.633Z] > Task :clients:javadocJar
[2023-01-10T11:09:33.663Z] 
[2023-01-10T11:09:33.664Z] > Task :clients:srcJar
[2023-01-10T11:09:33.664Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2023-01-10T11:09:33.664Z]   - Gradle detected a problem with the following 
location: '/home/jenkins/workspace/Kafka_kafka_3.4/clients/src/generated/java'. 
Reason: Task ':clients:srcJar' uses this output of task 
':clients:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.6/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2023-01-10T11:09:33.664Z] 
[2023-01-10T11:09:33.664Z] > Task :clients:testJar
[2023-01-10T11:09:34.848Z] > Task :clients:testSrcJar
[2023-01-10T11:09:34.848Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2023-01-10T11:09:34.848Z] > Task :clients:publishToMavenLocal
[2023-01-10T11:09:43.741Z] > Task :core:compileScala
[2023-01-10T11:11:53.703Z] > Task :core:classes
[2023-01-10T11:11:53.703Z] > Task :core:compileTestJava NO-SOURCE
[2023-01-10T11:12:15.823Z] > Task :core:compileTestScala
[2023-01-10T11:13:53.402Z] > Task :core:testClasses
[2023-01-10T11:14:09.887Z] > Task :streams:compileTestJava
[2023-01-10T11:14:09.887Z] > Task :streams:testClasses
[2023-01-10T11:14:09.887Z] > Task :streams:testJar
[2023-01-10T11:14:10.992Z] > Task :streams:testSrcJar
[2023-01-10T11:14:10.992Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2023-01-10T11:14:10.992Z] > Task :streams:publishToMavenLocal
[2023-01-10T11:14:10.992Z] 
[2023-01-10T11:14:10.992Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2023-01-10T11:14:10.992Z] 
[2023-01-10T11:14:10.992Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2023-01-10T11:14:10.992Z] 
[2023-01-10T11:14:10.992Z] See 
https://docs.gradle.org/7.6/userguide/command_line_interface.html#sec:command_line_warnings
[2023-01-10T11:14:10.992Z] 
[2023-01-10T11:14:10.992Z] Execution optimizations have been disabled for 2 
invalid unit(s) of work during this build to ensure correctness.
[2023-01-10T11:14:10.992Z] Please consult deprecation warnings for more details.
[2023-01-10T11:14:10.992Z] 
[2023-01-10T11:14:10.992Z] BUILD SUCCESSFUL in 5m 26s
[2023-01-10T11:14:10.992Z] 81 actionable tasks: 36 executed, 45 up-to-date
[Pipeline] sh
[2023-01-10T11:14:14.247Z] + grep ^version= gradle.properties
[2023-01-10T11:14:14.247Z] + cut -d= -f 2
[Pipeline] dir
[2023-01-10T11:14:15.274Z] Running in 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/quickstart
[Pipeline] {
[Pipeline] sh
[2023-01-10T11:14:17.847Z] + mvn clean install -Dgpg.skip
[2023-01-10T11:14:19.900Z] [INFO] Scanning for projects...
[2023-01-10T11:14:19.900Z] [INFO] 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1500

2023-01-10 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-12558) MM2 may not sync partition offsets correctly

2023-01-10 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-12558.
---
Fix Version/s: 3.5.0
   Resolution: Fixed

> MM2 may not sync partition offsets correctly
> 
>
> Key: KAFKA-12558
> URL: https://issues.apache.org/jira/browse/KAFKA-12558
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Alan Ning
>Priority: Major
> Fix For: 3.5.0
>
>
> There is a race condition in {{MirrorSourceTask}} where certain partition 
> offsets may never be sent. The bug occurs when the [outstandingOffsetSync 
> semaphore is 
> full|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L207].
>  In this case, the sendOffsetSync [will silently 
> fail|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L207].
> This failure is normally acceptable since offset sync will retry frequently. 
> However, {{maybeSyncOffsets}} has a bug where it will [mutate the partition 
> state|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L199]
>  prior to confirming the result of {{sendOffsetSync}}. The end result is that 
> the partition state is mutated prematurely, which prevents future offset syncs 
> from recovering.
> Since {{MAX_OUTSTANDING_OFFSET_SYNCS}} is 10, this bug happens when you 
> assign more than 10 partitions to each task.
> In my test cases where I had over 100 partitions per task, the majority of 
> the offsets were wrong. Here's an example of such a failure. 
> https://issues.apache.org/jira/browse/KAFKA-12468?focusedCommentId=17308308&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17308308
> During my troubleshooting, I customized the {{MirrorSourceTask}} to confirm 
> that all partitions that have the wrong offset were failing to acquire the 
> initial semaphore. The condition [can be trapped 
> here|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceTask.java#L208].
> *Possible Fix:*
> A possible fix is to create a {{shouldUpdate}} method in {{PartitionState}}. 
> This method should be read-only and return true if {{sendOffsetSync}} is 
> needed. Once {{sendOffsetSync}} is successful, only then {{update}} should be 
> called.
> Here's some pseudocode
> {code:java}
> private void maybeSyncOffsets(TopicPartition topicPartition, long upstreamOffset,
>         long downstreamOffset) {
>     PartitionState partitionState =
>         partitionStates.computeIfAbsent(topicPartition, x -> new PartitionState(maxOffsetLag));
>     // Read-only check first; mutate the partition state only once the sync
>     // has actually been sent, so a dropped sync remains eligible for retry.
>     if (partitionState.shouldUpdate(upstreamOffset, downstreamOffset)) {
>         if (sendOffsetSync(topicPartition, upstreamOffset, downstreamOffset)) {
>             partitionState.update(upstreamOffset, downstreamOffset);
>         }
>     }
> }
> {code}
>  
> *Workaround:*
> For those who are experiencing this issue, the workaround is to make sure you 
> have less than or equal to 10 partitions per task. Set your `tasks.max` value 
> accordingly.
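
The failure mode described above can be sketched with a minimal semaphore example (illustrative Java only; the class and method names below are invented for the sketch and are not MirrorMaker's actual code):

```java
import java.util.concurrent.Semaphore;

// Sketch of the race described above: a bounded semaphore guards in-flight
// offset syncs. tryAcquire() returns false without blocking when no permit
// is available, so a caller that mutates its partition state *before*
// checking the result marks the partition as synced even though nothing
// was sent -- and never retries it.
public class OffsetSyncSketch {
    static final Semaphore outstandingOffsetSyncs = new Semaphore(1);

    // Mimics sendOffsetSync: silently fails (returns false) when the
    // semaphore is exhausted.
    static boolean sendOffsetSync() {
        if (!outstandingOffsetSyncs.tryAcquire()) {
            return false; // dropped; caller must NOT mark state as synced
        }
        // ...would enqueue the sync record here, releasing the permit
        // once the produce completes.
        return true;
    }

    public static void main(String[] args) {
        // Simulate a saturated semaphore by draining its only permit.
        outstandingOffsetSyncs.tryAcquire();

        boolean sent = sendOffsetSync();
        System.out.println("sent=" + sent);
        // Correct ordering: update partition state only when sent == true.
    }
}
```

Checking the boolean before updating state is exactly what the `shouldUpdate`/`update` split in the pseudocode above achieves.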



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-858: Handle JBOD broker disk failure in KRaft

2023-01-10 Thread Tom Bentley
Hi Igor,

20. The description of the changes to meta.properties says "If any
meta.properties file is missing directory.id, a new UUID is generated and
assigned to that log directory by updating the file", and the
upgrade/migration section says "As the upgraded brokers come up, the
existing meta.properties files in each broker are updated with a generated
directory.id and directory.ids." Currently MetaProperties#parse() checks
that the version is 1, so would the described behaviour prevent downgrade
of a broker to an older version of the software?
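
For context, a meta.properties file after the proposed change might look something like the following (an illustrative sketch only — the exact field set, version number, and encoding are assumptions, not taken from the KIP):

```
# meta.properties (hypothetical, post-KIP-858)
version=1
cluster.id=vZ5D0qxbT2eG7q7P1Yt9Sw
node.id=1
directory.id=Mj3CSvOzQBqN0l6rbJ2hYA
```

The downgrade concern in point 20 is that an older broker parsing this file may reject the unknown directory.id key or an incremented version.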

21. "If the indicated log directory UUID is not a registered log directory
then the call fails with an error" can you specify which error (is it a new
error code)?

22. "If multiple log directories are registered the broker will remain
fenced until the controller learns of all the partition to log directory
placements in that broker - i.e. no remaining replicas assigned to Uuid.ZERO.
Is an error code used in the BrokerHeartbeatResponse to indicate this
state? (Or is the only way to diagnose the reason for a broker remaining
fenced for this reason to look at the controller logs?)

23. Will there be a system test to cover the upgrade of a ZK+JBOD cluster
to KRaft+JBOD cluster?

24. In the rejected alternatives: "However the broker is in a better
position to make a choice of log directory than the broker". I think that
should be "...than the controller", right?

25. I wonder about the inconsistency of the RPC names: We have the existing
AlterReplicaLogDirs (and log.dirs broker config), but the new RPC is
AssignReplicasToDirectories.

Many thanks!

Tom

On Tue, 3 Jan 2023 at 18:05, Igor Soarez  wrote:

> Hi Jun,
>
> Thank you for having another look.
>
> 11. That is correct. I have updated the KIP in an attempt to make this
> clearer.
> I think the goal should be to try to minimize the chance that a log
> directory failure may happen while the metadata is incorrect about the
> log directory assignment, but also to have a fallback safety mechanism
> to indicate to the controller that some replica was missed in case of a
> bad race.
>
> 13. Ok, I think I have misunderstood this. Thank you for correcting me.
> In this case the broker can update the existing meta.properties and create
> new meta.properties in the new log directories.
> This also means that the update-directories subcommand in kafka-storage.sh
> is not necessary.
> I have updated the KIP to reflect this.
>
> Please have another look.
>
>
> Thank you,
>
> --
> Igor
>
>
> > On 22 Dec 2022, at 00:25, Jun Rao  wrote:
> >
> > Hi, Igor,
> >
> > Thanks for the reply.
> >
> > 11. Yes, your proposal could work. Once the broker receives confirmation
> of
> > the metadata change, I guess it needs to briefly block appends to the old
> > replica, make sure the future log fully catches up and then make the
> switch?
> >
> > 13 (b). The kafka-storage.sh is only required in KIP-631 for a brand new
> > KRaft cluster. If the cluster already exists and one just wants to add a
> > log dir, it seems inconvenient to have to run the kafka-storage.sh tool
> > again.
> >
> > Thanks,
> >
> > Jun
>
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1499

2023-01-10 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-710: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2023-01-10 Thread Mickael Maison
Hi Daniel,

Can you confirm that, following this KIP, MM in dedicated mode will be
able to run with exactly once enabled?
(Once the PR [0] to add KIP-618 support to MM is merged)

0: https://github.com/apache/kafka/pull/12366
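
For reference, exactly-once support for source connectors (KIP-618) is gated by a worker-level setting in distributed mode; whether a dedicated MM2 cluster honours it after this KIP is precisely what the question above asks. An illustrative worker config fragment:

```
# Kafka Connect distributed worker config (illustrative)
# Valid values per KIP-618: disabled, preparing, enabled
exactly.once.source.support=enabled
```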

Thanks,
Mickael

On Tue, Jan 10, 2023 at 11:36 AM Viktor Somogyi-Vass
 wrote:
>
> Ok, then +1 (binding) :)
>
> On Mon, Jan 9, 2023 at 3:44 PM John Roesler  wrote:
>
> > Yes, you are!
> >
> > Congrats again :)
> > -John
> >
> > On Mon, Jan 9, 2023, at 08:25, Viktor Somogyi-Vass wrote:
> > > Hey all,
> > >
> > > Now that I'm a committer am I allowed to change my non-binding vote to
> > > binding to pass the KIP? :)
> > >
> > > On Thu, Nov 10, 2022 at 6:13 PM Greg Harris  > >
> > > wrote:
> > >
> > >> +1 (non-binding)
> > >>
> > >> Thanks for the KIP, this is an important improvement.
> > >>
> > >> Greg
> > >>
> > >> On Thu, Nov 10, 2022 at 7:21 AM John Roesler 
> > wrote:
> > >>
> > >> > Thanks for the KIP, Daniel!
> > >> >
> > >> > I'm no MM expert, but I've read over the KIP and discussion, and it
> > seems
> > >> > reasonable to me.
> > >> >
> > >> > I'm +1 (binding).
> > >> >
> > >> > Thanks,
> > >> > -John
> > >> >
> > >> > On 2022/10/22 07:38:38 Urbán Dániel wrote:
> > >> > > Hi everyone,
> > >> > >
> > >> > > I would like to start a vote on KIP-710 which aims to support
> > running a
> > >> > > dedicated MM2 cluster in distributed mode:
> > >> > >
> > >> > >
> > >> >
> > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-710%3A+Full+support+for+distributed+mode+in+dedicated+MirrorMaker+2.0+clusters
> > >> > >
> > >> > > Regards,
> > >> > > Daniel
> > >> > >
> > >> > >
> > >> > > --
> > >> > > This e-mail was scanned by Avast AntiVirus software.
> > >> > > www.avast.com
> > >> > >
> > >> >
> > >>
> >


Re: [VOTE] KIP-710: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2023-01-10 Thread Viktor Somogyi-Vass
Ok, then +1 (binding) :)

On Mon, Jan 9, 2023 at 3:44 PM John Roesler  wrote:

> Yes, you are!
>
> Congrats again :)
> -John
>
> On Mon, Jan 9, 2023, at 08:25, Viktor Somogyi-Vass wrote:
> > Hey all,
> >
> > Now that I'm a committer am I allowed to change my non-binding vote to
> > binding to pass the KIP? :)
> >
> > On Thu, Nov 10, 2022 at 6:13 PM Greg Harris  >
> > wrote:
> >
> >> +1 (non-binding)
> >>
> >> Thanks for the KIP, this is an important improvement.
> >>
> >> Greg
> >>
> >> On Thu, Nov 10, 2022 at 7:21 AM John Roesler 
> wrote:
> >>
> >> > Thanks for the KIP, Daniel!
> >> >
> >> > I'm no MM expert, but I've read over the KIP and discussion, and it
> seems
> >> > reasonable to me.
> >> >
> >> > I'm +1 (binding).
> >> >
> >> > Thanks,
> >> > -John
> >> >
> >> > On 2022/10/22 07:38:38 Urbán Dániel wrote:
> >> > > Hi everyone,
> >> > >
> >> > > I would like to start a vote on KIP-710 which aims to support
> running a
> >> > > dedicated MM2 cluster in distributed mode:
> >> > >
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-710%3A+Full+support+for+distributed+mode+in+dedicated+MirrorMaker+2.0+clusters
> >> > >
> >> > > Regards,
> >> > > Daniel
> >> > >
> >> > >
> >> > > --
> >> > > This e-mail was scanned by Avast AntiVirus software.
> >> > > www.avast.com
> >> > >
> >> >
> >>
>


Re: [ANNOUNCE] New committer: Edoardo Comar

2023-01-10 Thread Edoardo Comar
Many thanks everyone !

On Mon, 9 Jan 2023 at 19:40, Rajini Sivaram  wrote:

> Congratulations, Edo!
>
> Regards,
>
> Rajini
>
> On Mon, Jan 9, 2023 at 10:16 AM Tom Bentley  wrote:
>
> > Congratulations!
> >
> > On Sun, 8 Jan 2023 at 01:14, Satish Duggana 
> > wrote:
> >
> > > Congratulations, Edoardo!
> > >
> > > On Sun, 8 Jan 2023 at 00:15, Viktor Somogyi-Vass
> > >  wrote:
> > > >
> > > > Congrats Edoardo!
> > > >
> > > > On Sat, Jan 7, 2023, 18:15 Bill Bejeck  wrote:
> > > >
> > > > > Congratulations, Edoardo!
> > > > >
> > > > > -Bill
> > > > >
> > > > > On Sat, Jan 7, 2023 at 12:11 PM John Roesler 
> > > wrote:
> > > > >
> > > > > > Congrats, Edoardo!
> > > > > > -John
> > > > > >
> > > > > > On Fri, Jan 6, 2023, at 20:47, Matthias J. Sax wrote:
> > > > > > > Congrats!
> > > > > > >
> > > > > > > On 1/6/23 5:15 PM, Luke Chen wrote:
> > > > > > >> Congratulations, Edoardo!
> > > > > > >>
> > > > > > >> Luke
> > > > > > >>
> > > > > > >> On Sat, Jan 7, 2023 at 7:58 AM Mickael Maison <
> > > > > mickael.mai...@gmail.com
> > > > > > >
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >>> Congratulations Edo!
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> On Sat, Jan 7, 2023 at 12:05 AM Jun Rao
> >  > > >
> > > > > > wrote:
> > > > > > 
> > > > > >  Hi, Everyone,
> > > > > > 
> > > > > >  The PMC of Apache Kafka is pleased to announce a new Kafka
> > > committer
> > > > > > >>> Edoardo
> > > > > >  Comar.
> > > > > > 
> > > > > >  Edoardo has been a long time Kafka contributor since 2016.
> His
> > > major
> > > > > >  contributions are the following.
> > > > > > 
> > > > > >  KIP-302: Enable Kafka clients to use all DNS resolved IP
> > > addresses
> > > > > >  KIP-277: Fine Grained ACL for CreateTopics API
> > > > > >  KIP-136: Add Listener name to SelectorMetrics tags
> > > > > > 
> > > > > >  Congratulations, Edoardo!
> > > > > > 
> > > > > >  Thanks,
> > > > > > 
> > > > > >  Jun (on behalf of the Apache Kafka PMC)
> > > > > > >>>
> > > > > > >>
> > > > > >
> > > > >
> > >
> > >
> >
>