[jira] [Resolved] (KAFKA-14568) Move FetchDataInfo and related to storage module

2023-01-12 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-14568.
-
Resolution: Fixed

> Move FetchDataInfo and related to storage module
> 
>
> Key: KAFKA-14568
> URL: https://issues.apache.org/jira/browse/KAFKA-14568
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Federico Valeri
>Assignee: Federico Valeri
>Priority: Major
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.4 #39

2023-01-12 Thread Apache Jenkins Server
See 




Jenkins build is back to stable : Kafka » Kafka Branch Builder » trunk #1510

2023-01-12 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-14612) Topic config records written to log even when topic creation fails

2023-01-12 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-14612.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Topic config records written to log even when topic creation fails
> --
>
> Key: KAFKA-14612
> URL: https://issues.apache.org/jira/browse/KAFKA-14612
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Reporter: Jason Gustafson
>Assignee: Andrew Grant
>Priority: Major
> Fix For: 3.4.0
>
>
> Config records are added when handling a `CreateTopics` request here: 
> [https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java#L549.]
>  If the subsequent validations fail and the topic is not created, these 
> records will still be written to the log.
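>
> A minimal illustrative sketch of the fix direction (not the actual 
> ReplicationControlManager change; the helper below is hypothetical and uses 
> only JDK types), where config records are only staged once every validation 
> for the topic has passed, so a rejected creation leaves nothing in the log:
>
> // Hypothetical sketch, not real controller code (java.util imports assumed).
> static List<String> recordsForNewTopic(String topic,
>                                        Map<String, String> configs,
>                                        Optional<String> validationError) {
>     if (validationError.isPresent()) {
>         return List.of();                 // creation failed: append no records
>     }
>     List<String> records = new ArrayList<>();
>     records.add("TopicRecord:" + topic);  // placeholder for the real records
>     configs.forEach((k, v) ->
>         records.add("ConfigRecord:" + topic + ":" + k + "=" + v));
>     return records;                       // written to the log as one batch
> }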



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1509

2023-01-12 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14620) Add a type for SnapshotId

2023-01-12 Thread José Armando García Sancio (Jira)
José Armando García Sancio created KAFKA-14620:
--

 Summary: Add a type for SnapshotId
 Key: KAFKA-14620
 URL: https://issues.apache.org/jira/browse/KAFKA-14620
 Project: Kafka
  Issue Type: Improvement
  Components: kraft
Reporter: José Armando García Sancio
Assignee: José Armando García Sancio


We have seen issues where the state machine assumes that the offset in the snapshot 
id is inclusive. I think adding a type that makes this clear would help 
developers and reviewers catch such issues.
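
For illustration only (the name and fields below are hypothetical, not an 
agreed design), a small value type that makes the exclusive end-offset 
semantics explicit could look like this:

{code:java}
// Hypothetical sketch: encode "end offset is exclusive" in the type itself so
// callers cannot silently confuse inclusive and exclusive offsets.
public record SnapshotId(long endOffsetExclusive, int epoch) {
    public SnapshotId {
        if (endOffsetExclusive < 0 || epoch < 0) {
            throw new IllegalArgumentException(
                "Invalid snapshot id: offset=" + endOffsetExclusive + ", epoch=" + epoch);
        }
    }

    // Last offset actually contained in the snapshot, for callers that need it.
    public long lastContainedOffset() {
        return endOffsetExclusive - 1;
    }
}
{code}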



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14619) KRaft: validate snapshot ids are at batch boundaries

2023-01-12 Thread José Armando García Sancio (Jira)
José Armando García Sancio created KAFKA-14619:
--

 Summary: KRaft: validate snapshot ids are at batch boundaries
 Key: KAFKA-14619
 URL: https://issues.apache.org/jira/browse/KAFKA-14619
 Project: Kafka
  Issue Type: Improvement
  Components: kraft
Reporter: José Armando García Sancio
Assignee: José Armando García Sancio


When the state machine creates a snapshot, KRaft should validate that the 
provided offset lands at a record batch boundary. This is required because 
the current log layer and replication protocol do not handle the case where the 
snapshot id points to the middle of a record batch.
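
As a rough illustration only (the helper below is hypothetical, not the real 
KRaft log API), the check amounts to requiring that the snapshot id's end 
offset falls exactly on a boundary between two record batches:

{code:java}
// Hypothetical sketch: each long[] holds {baseOffset, lastOffset} of one record
// batch, in offset order. The snapshot end offset must land on a batch boundary.
static boolean isAtBatchBoundary(long snapshotEndOffset, Iterable<long[]> batches) {
    for (long[] batch : batches) {
        long baseOffset = batch[0];
        long lastOffset = batch[1];
        if (snapshotEndOffset == baseOffset || snapshotEndOffset == lastOffset + 1) {
            return true;                  // lands exactly on a batch boundary
        }
        if (snapshotEndOffset <= lastOffset) {
            return false;                 // falls in the middle of this batch
        }
    }
    return false;                         // beyond the end of the log
}
{code}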



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14618) Off by one error in generated snapshot IDs causes misaligned fetching

2023-01-12 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-14618:
---

 Summary: Off by one error in generated snapshot IDs causes 
misaligned fetching
 Key: KAFKA-14618
 URL: https://issues.apache.org/jira/browse/KAFKA-14618
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 3.4.0


We implemented new snapshot generation logic here: 
[https://github.com/apache/kafka/pull/12983.] A few days prior to this patch 
getting merged, we had changed the `RaftClient` API to pass the _exclusive_ 
offset when generating snapshots instead of the inclusive offset: 
[https://github.com/apache/kafka/pull/12981.] Unfortunately, the new snapshot 
generation logic was not updated accordingly. The consequence of this is that 
the state on replicas can get out of sync. In the best case, the followers fail 
replication because the offset after loading a snapshot is no longer aligned on 
a batch boundary.
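
To make the off-by-one concrete (a hedged illustration, not code from either 
pull request): if the snapshotted state includes offsets up to and including 
`lastIncludedOffset`, the snapshot id must carry the exclusive end offset, and 
passing the inclusive offset through unchanged shifts every follower fetch by 
one record:

{code:java}
// Hedged illustration of the inclusive vs. exclusive end-offset convention.
public class SnapshotOffsetExample {
    public static void main(String[] args) {
        long lastIncludedOffset = 499;                  // last offset applied to the state machine

        long correctEndOffset = lastIncludedOffset + 1; // exclusive end offset: 500
        long buggyEndOffset   = lastIncludedOffset;     // off by one: 499

        // A follower that loads the snapshot resumes fetching at the snapshot's
        // end offset. With the buggy value it fetches from 499 again, which can
        // fall in the middle of a record batch and break batch-aligned fetching.
        System.out.println("correct resume offset: " + correctEndOffset);
        System.out.println("buggy resume offset:   " + buggyEndOffset);
    }
}
{code}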



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14617) Replicas with stale broker epoch should not be allowed to join the ISR

2023-01-12 Thread Calvin Liu (Jira)
Calvin Liu created KAFKA-14617:
--

 Summary: Replicas with stale broker epoch should not be allowed to 
join the ISR
 Key: KAFKA-14617
 URL: https://issues.apache.org/jira/browse/KAFKA-14617
 Project: Kafka
  Issue Type: Improvement
Reporter: Calvin Liu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1508

2023-01-12 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Add "Security Implications" section to KIP template

2023-01-12 Thread Chris Egerton
Hi Luke and Bruno,

Thanks for taking a look! Happy to provide some examples here to clarify
the points, and if they seem useful enough, we can also add them to the
template.

> Does it make Kafka or any of its components (brokers, clients, Kafka
Connect, Kafka Streams, Mirror Maker 2, etc.) less secure when run with
default settings?

Examples include allowing unauthenticated users to access the file system
of, or execute code on, the machine running Kafka or one of its components, or
to create or configure Kafka clients with arbitrary settings.

> Does it give users new access to configure clients, brokers, topics, etc.
in situations where they did not have this access before? Keep in mind that
the ability to arbitrarily configure a Kafka client can add to the attack
surface of a project and may be safer to disable by default.

With examples provided, this point is likely made redundant by the
first and third points.

> Does it make Kafka or any of its components more difficult to run in a
fully-secured fashion?

Examples include requiring new ACLs to run existing components (e.g.,
requiring write permission for a specific transactional ID in order to
start Kafka Connect), or adding new APIs that, if left unsecured, would
leave the component vulnerable to malicious users (e.g., adding a REST
server to Kafka Streams that allows topologies to be dynamically
manipulated).

I hope this helps; let me know what you think.

Cheers,

Chris
-

On Thu, Jan 12, 2023 at 3:51 AM Bruno Cadonna  wrote:

> Hi Chris,
>
> Thank you for the proposal!
>
> Could you add some examples to each of your points?
> I think that would make it easier to discuss them.
>
> Best,
> Bruno
>
> On 12.01.23 03:15, Luke Chen wrote:
> > Hi Chris,
> >
> > I like this idea.
> > Thanks for raising this!
> >
> > One question to the template bullet:
> > • Does it make Kafka or any of its components more difficult to run in a
> > fully-secured fashion?
> >
> > I don't quite understand what it means. Could you elaborate on it?
> >
> > Thank you.
> > Luke
> >
> > On Wed, Jan 11, 2023 at 11:59 PM Chris Egerton 
> > wrote:
> >
> >> Hi all,
> >>
> >> I'd like to propose augmenting the KIP template with a "Security
> >> Implications" section. Similar to the recently-added "test plan"
> section,
> >> the purpose here is to draw explicit attention to the security impact of
> >> the changes in the KIP during the design and discussion phase. On top of
> >> that, it should provide a common framework for how to reason about
> security
> >> so that everyone from new contributors to seasoned committers/PMC
> members
> >> can use the same standards when evaluating the security implications of
> a
> >> proposal.
> >>
> >> Here's the draft wording I've come up with so far for the template:
> >>
> >> How does this impact the security of the project?
> >> • Does it make Kafka or any of its components (brokers, clients, Kafka
> >> Connect, Kafka Streams, Mirror Maker 2, etc.) less secure when run with
> >> default settings?
> >> • Does it give users new access to configure clients, brokers, topics,
> etc.
> >> in situations where they did not have this access before? Keep in mind
> that
> >> the ability to arbitrarily configure a Kafka client can add to the
> attack
> >> surface of a project and may be safer to disable by default.
> >> • Does it make Kafka or any of its components more difficult to run in a
> >> fully-secured fashion?
> >>
> >> Let me know your thoughts. My tentative plan is to add this (with any
> >> modifications after discussion) to the KIP template after at least one
> week
> >> has elapsed, there has been approval from at least a couple seasoned
> >> contributors, and there are no unaddressed objections.
> >>
> >> Cheers,
> >>
> >> Chris
> >>
> >
>


Re: [DISCUSS] KIP-890 Server Side Defense

2023-01-12 Thread Justine Olshan
Thanks for the discussion Artem.

With respect to the handling of fenced producers, we have some behavior
already in place. As of KIP-588:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-588%3A+Allow+producers+to+recover+gracefully+from+transaction+timeouts,
we handle timeouts more gracefully. The producer can recover.

Produce requests can also recover from epoch fencing by aborting the
transaction and starting over.

What other cases were you considering that would cause us to have a fenced
epoch but we'd want to recover?

The first point about handling epoch overflows is fair. I think there is
some logic we'd need to consider. (i.e., if we are one away from the max
epoch, we need to reset the producer ID.) I'm still wondering if there is a
way to direct this from the response, or if everything should be done on
the client side. Let me know if you have any thoughts here.
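
For illustration only (a hypothetical client-side sketch, not settled design; 
the producer epoch is a 16-bit value, so Short.MAX_VALUE is the ceiling):

    // Hypothetical sketch of client-side epoch-exhaustion handling.
    if (currentEpoch >= Short.MAX_VALUE - 1) {
        // One away from the max epoch: obtain a fresh producer id (e.g. via
        // InitProducerId) instead of bumping the epoch past its range.
        requestNewProducerId();   // hypothetical call
    } else {
        currentEpoch = (short) (currentEpoch + 1);
    }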

Thanks,
Justine

On Tue, Jan 10, 2023 at 4:06 PM Artem Livshits
 wrote:

> There are some workflows in the client that are implied by protocol
> changes, e.g.:
>
> - for new clients, epoch changes with every transaction and can overflow,
> in old clients this condition was handled transparently, because epoch was
> bumped in InitProducerId and it would return a new producer id if epoch
> overflows, the new clients would need to implement some workflow to refresh
> producer id
> - how to handle fenced producers, for new clients epoch changes with every
> transaction, so in presence of failures during commits / aborts, the
> producer could get easily fenced, old clients would pretty much get
> fenced when a new incarnation of the producer was initialized with
> InitProducerId so it's ok to treat as a fatal error, the new clients would
> need to implement some workflow to handle that error, otherwise they could
> get fenced by themselves
> - in particular (as a subset of the previous issue), what would the client
> do if it got a timeout during commit?  commit could've succeeded or failed
>
> Not sure if this has to be defined in the KIP as implementing those
> probably wouldn't require protocol changes, but we have multiple
> implementations of Kafka clients, so probably would be good to have some
> client implementation guidance.  Could also be done as a separate doc.
>
> -Artem
>
> On Mon, Jan 9, 2023 at 3:38 PM Justine Olshan  >
> wrote:
>
> > Hey all, I've updated the KIP to incorporate Jason's suggestions.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense
> >
> >
> > 1. Use AddPartitionsToTxn + verify flag to check on old clients
> > 2. Updated AddPartitionsToTxn API to support transaction batching
> > 3. Mention IBP bump
> > 4. Mention auth change on new AddPartitionsToTxn version.
> >
> > I'm planning on opening a vote soon.
> > Thanks,
> > Justine
> >
> > On Fri, Jan 6, 2023 at 3:32 PM Justine Olshan 
> > wrote:
> >
> > > Thanks Jason. Those changes make sense to me. I will update the KIP.
> > >
> > >
> > >
> > > On Fri, Jan 6, 2023 at 3:31 PM Jason Gustafson
> > 
> > > wrote:
> > >
> > >> Hey Justine,
> > >>
> > >> > I was wondering about compatibility here. When we send requests
> > >> between brokers, we want to ensure that the receiving broker
> understands
> > >> the request (specifically the new fields). Typically this is done via
> > >> IBP/metadata version.
> > >> I'm trying to think if there is a way around it but I'm not sure there
> > is.
> > >>
> > >> Yes. I think we would gate usage of this behind an IBP bump. Does that
> > >> seem
> > >> reasonable?
> > >>
> > >> > As for the improvements -- can you clarify how the multiple
> > >> transactional
> > >> IDs would help here? Were you thinking of a case where we wait/batch
> > >> multiple produce requests together? My understanding for now was 1
> > >> transactional ID and one validation per 1 produce request.
> > >>
> > >> Each call to `AddPartitionsToTxn` is essentially a write to the
> > >> transaction
> > >> log and must block on replication. The more we can fit into a single
> > >> request, the more writes we can do in parallel. The alternative is to
> > make
> > >> use of more connections, but usually we prefer batching since the
> > network
> > >> stack is not really optimized for high connection/request loads.
> > >>
> > >> > Finally with respect to the authorizations, I think it makes sense
> to
> > >> skip
> > >> topic authorizations, but I'm a bit confused by the "leader ID" field.
> > >> Wouldn't we just want to flag the request as from a broker (does it
> > matter
> > >> which one?).
> > >>
> > >> We could also make it version-based. For the next version, we could
> > >> require
> > >> CLUSTER auth. So clients would not be able to use the API anymore,
> which
> > >> is
> > >> probably what we want.
> > >>
> > >> -Jason
> > >>
> > >> On Fri, Jan 6, 2023 at 10:43 AM Justine Olshan
> > >> 
> > >> wrote:
> > >>
> > >> > As a follow up, I was just thinking about the batching a bit more.
> > >> > 

[jira] [Resolved] (KAFKA-14611) ZK broker should not send epoch during registration

2023-01-12 Thread David Arthur (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Arthur resolved KAFKA-14611.
--
Resolution: Fixed

> ZK broker should not send epoch during registration
> ---
>
> Key: KAFKA-14611
> URL: https://issues.apache.org/jira/browse/KAFKA-14611
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Blocker
> Fix For: 3.4.0, 3.5.0
>
>
> We need to remove the integer field from the protocol for 
> "migratingZkBrokerEpoch" and replace it with a simple boolean.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Creating Kafka Jira account

2023-01-12 Thread Mickael Maison
Hi Kamalesh,

Yes that's correct. I've replied to your other email.

Thanks,
Mickael

On Thu, Jan 12, 2023 at 4:23 PM kamalesh palanisamy
 wrote:
>
> Hi,
> I wanted to create a Jira account for the Apache Kafka project. I sent an
> email regarding this to priv...@kafka.apache.org and I wanted to check if
> it is the correct email to contact regarding this. Thank you.


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1507

2023-01-12 Thread Apache Jenkins Server
See 




Creating Kafka Jira account

2023-01-12 Thread kamalesh palanisamy
Hi,
I wanted to create a Jira account for the Apache Kafka project. I sent an
email regarding this to priv...@kafka.apache.org and I wanted to check if
it is the correct email to contact regarding this. Thank you.


[jira] [Created] (KAFKA-14616) Topic recreation with offline broker causes permanent URPs

2023-01-12 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-14616:
-

 Summary: Topic recreation with offline broker causes permanent URPs
 Key: KAFKA-14616
 URL: https://issues.apache.org/jira/browse/KAFKA-14616
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.3.1
Reporter: Omnia Ibrahim


We are facing an odd situation when we delete and recreate a topic while a broker 
is offline in KRaft mode. 
Here’s what we saw, step by step:
 # Created topic {{foo.test}} with 10 partitions and 4 replicas — Topic 
{{foo.test}} was created with topic ID {{MfuZbwdmSMaiSa0g6__TPg}}
 # Took broker 4 offline — which held replicas for partitions {{0, 3, 4, 5, 
7, 8, 9}}
 # Deleted topic {{foo.test}} — The deletion process was successful, despite 
the fact that broker 4 still held replicas for partitions {{0, 3, 4, 5, 7, 8, 
9}} on local disk.
 # Recreated topic {{foo.test}} with 10 partitions and 4 replicas. — Topic 
{{foo.test}} was created with topic ID {{RzalpqQ9Q7ub2M2afHxY4Q}} and 
partitions {{0, 1, 2, 7, 8, 9}} got assigned to broker 4 (which was still 
offline). Notice here that partitions {{0, 7, 8, 9}} are common between the 
assignment of the deleted topic ({{topic_id: MfuZbwdmSMaiSa0g6__TPg}}) and 
the recreated topic ({{topic_id: RzalpqQ9Q7ub2M2afHxY4Q}}).
 # Brought broker 4 back online.
 # Broker started to create new partition replicas for the recreated topic 
{{foo.test}} ({{topic_id: RzalpqQ9Q7ub2M2afHxY4Q}}).
 # The broker hit the following error: {{Tried to assign topic ID 
RzalpqQ9Q7ub2M2afHxY4Q to log for topic partition foo.test-9, but log already 
contained topic ID MfuZbwdmSMaiSa0g6__TPg}}. As a result of this error the 
broker decided to rename the log dir for partitions {{0, 3, 4, 5, 7, 8, 9}} to 
{{-.-delete}}.
 # Ran {{ls }}

{code:java}
foo.test-0.658f87fb9a2e42a590b5d7dcc28862b5-delete/
foo.test-1/
foo.test-2/
foo.test-3.a68f05d05bcc4e579087551b539af311-delete/
foo.test-4.79ce30a5310d4950ad1b28f226f74895-delete/
foo.test-5.76ed04da75bf46c3a63342be1eb44450-delete/
foo.test-6/
foo.test-7.c2d33db3bf844e9ebbcd9ef22f5270da-delete/
foo.test-8.33836969ac714b41b69b5334a5068ce0-delete/
foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/{code}
 # Waited until the deletion of the old topic was done and ran {{ls 
}} again. Now we were expecting to see log dirs for partitions 
{{0, 1, 2, 7, 8, 9}}; however, the result is:

{code:java}
foo.test-1/
foo.test-2/
foo.test-6/{code}
 # Ran {{kafka-topics.sh --command-config cmd.properties --bootstrap-server 
 --describe --topic foo.test}}

{code:java}
Topic: foo.test TopicId: RzalpqQ9Q7ub2M2afHxY4Q PartitionCount: 10 
ReplicationFactor: 4 Configs: 
min.insync.replicas=2,segment.bytes=1073741824,max.message.bytes=3145728,unclean.leader.election.enable=false,retention.bytes=10
Topic: foo.test Partition: 0 Leader: 2 Replicas: 2,3,4,5 Isr: 2,3,5
Topic: foo.test Partition: 1 Leader: 3 Replicas: 3,4,5,6 Isr: 3,5,6,4
Topic: foo.test Partition: 2 Leader: 5 Replicas: 5,4,6,1 Isr: 5,6,1,4
Topic: foo.test Partition: 3 Leader: 5 Replicas: 5,6,1,2 Isr: 5,6,1,2
Topic: foo.test Partition: 4 Leader: 6 Replicas: 6,1,2,3 Isr: 6,1,2,3
Topic: foo.test Partition: 5 Leader: 1 Replicas: 1,6,2,5 Isr: 1,6,2,5
Topic: foo.test Partition: 6 Leader: 6 Replicas: 6,2,5,4 Isr: 6,2,5,4
Topic: foo.test Partition: 7 Leader: 2 Replicas: 2,5,4,3 Isr: 2,5,3
Topic: foo.test Partition: 8 Leader: 5 Replicas: 5,4,3,1 Isr: 5,3,1
Topic: foo.test Partition: 9 Leader: 3 Replicas: 3,4,1,6 Isr: 3,1,6{code}
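
One way to see the id mismatch directly (a hedged sketch; it assumes log dirs 
contain a partition.metadata file with a "topic_id: ..." line, which may vary 
by version) is to print the topic id recorded in each on-disk partition 
directory and compare it with the TopicId reported by --describe:

{code:java}
// Hedged sketch: print the topic_id stored in each foo.test-* partition dir.
import java.io.IOException;
import java.nio.file.*;

public class PrintTopicIds {
    public static void main(String[] args) throws IOException {
        Path logDir = Paths.get(args[0]);                    // e.g. /kafka/d1/data
        try (DirectoryStream<Path> dirs =
                 Files.newDirectoryStream(logDir, "foo.test-*")) {
            for (Path dir : dirs) {
                Path meta = dir.resolve("partition.metadata");
                if (!Files.exists(meta)) {
                    continue;                                // skip dirs without metadata
                }
                for (String line : Files.readAllLines(meta)) {
                    if (line.startsWith("topic_id")) {
                        System.out.println(dir.getFileName() + " -> " + line);
                    }
                }
            }
        }
    }
}
{code}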
Here’s a sample of broker logs

 
{code:java}
{"timestamp":"2023-01-11T15:19:53,620Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
 log for partition foo.test-9 in 
/kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete.","logger":"kafka.log.LogManager"}
{"timestamp":"2023-01-11T15:19:53,617Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
 time index 
/kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/.timeindex.deleted.","logger":"kafka.log.LogSegment"}
{"timestamp":"2023-01-11T15:19:53,617Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
 offset index 
/kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/.index.deleted.","logger":"kafka.log.LogSegment"}
{"timestamp":"2023-01-11T15:19:53,615Z","level":"INFO","thread":"kafka-scheduler-8","message":"Deleted
 log 
/kafka/d1/data/foo.test-9.48e1494f4fac48c8aec009bf77d5e4ee-delete/.log.deleted.","logger":"kafka.log.LogSegment"}
{"timestamp":"2023-01-11T15:19:53,614Z","level":"INFO","thread":"kafka-scheduler-8","message":"[LocalLog
 partition=foo.test-9, dir=/kafka/d1/data] Deleting segment files 
LogSegment(baseOffset=0, size=0, lastModifiedTime=1673439574661, 
largestRecordTimestamp=None)","logger":"kafka.log.LocalLog$"}
{"timestamp":"2023-01-11T15:19:53,612Z","level":"INFO","thread":"

[jira] [Resolved] (KAFKA-14199) Installed kafka in ubuntu and not able to access in browser. org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 10

2023-01-12 Thread Christo Lolov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christo Lolov resolved KAFKA-14199.
---
Resolution: Fixed

> Installed kafka in ubuntu and not able to access in browser.  
> org.apache.kafka.common.network.InvalidReceiveException: Invalid receive 
> (size = 1195725856 larger than 104857600)
> 
>
> Key: KAFKA-14199
> URL: https://issues.apache.org/jira/browse/KAFKA-14199
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: Gops
>Priority: Blocker
>
> I am new to Kafka. I have installed ZooKeeper and Kafka on my local 
> Ubuntu machine. When I try to access Kafka in my browser at 
> [http://ip:9092|http://ip:9092/] I am facing this error.
> +++
> [SocketServer listenerType=ZK_BROKER, nodeId=0] Unexpected error from 
> /127.0.0.1; closing connection (org.apache.kafka.common.network.Selector)
> org.apache.kafka.common.network.InvalidReceiveException: Invalid receive 
> (size = 1195725856 larger than 104857600)
>     at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:105)
>     at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
>     at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
>     at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
>     at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
>     at kafka.network.Processor.poll(SocketServer.scala:989)
>     at kafka.network.Processor.run(SocketServer.scala:892)
>     at java.base/java.lang.Thread.run(Thread.java:829)
> +++
> Also I have checked by updating socket.request.max.bytes=5 in the 
> ~/kafka/config/server.properties file, but I am still getting the same error.
>  
> Please figure it out. Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13805) Upgrade vulnerable dependencies march 2022

2023-01-12 Thread Christo Lolov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christo Lolov resolved KAFKA-13805.
---
Resolution: Fixed

> Upgrade vulnerable dependencies march 2022
> --
>
> Key: KAFKA-13805
> URL: https://issues.apache.org/jira/browse/KAFKA-13805
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.1, 3.0.1
>Reporter: Shivakumar
>Priority: Blocker
>  Labels: security
>
> https://nvd.nist.gov/vuln/detail/CVE-2020-36518
> |Packages|Package Version|CVSS|Fix Status|
> |com.fasterxml.jackson.core_jackson-databind| 2.10.5.1| 7.5|fixed in 2.13.2.1|
> |com.fasterxml.jackson.core_jackson-databind|2.13.1|7.5|fixed in 2.13.2.1|
> Our security scan detected the above vulnerabilities.
> Please upgrade to the fixed versions to address them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-896: Remove old client protocol API versions in Kafka 4.0

2023-01-12 Thread Ismael Juma
Hi Jose,

I think it's reasonable to add more user-friendly metrics as you described.
I'll update the KIP soon with that. I'll try to define them in a way where
they track deprecated protocols for the next major release. That way, they
can be useful even after AK 4.0 is released.

Ismael

On Wed, Jan 11, 2023 at 12:34 PM José Armando García Sancio
 wrote:

> Thanks Ismael.
>
> > The following metrics are used to determine both questions:
> > >
> > >- Client name and version:
> > >
> kafka.server:clientSoftwareName=(client-software-name),clientSoftwareVersion=(client-software-version),listener=(listener),networkProcessor=(processor-index),type=(type)
> > >- Request name and version:
> > >
> kafka.network:type=RequestMetrics,name=RequestsPerSec,request=(api-name),version=(api-version)}
> > >
> > >
> > Are you suggesting that this is too complicated and hence we should add a
> > metric that tracks AK 4.0 support explicitly?
>
> Correct. It doesn't look trivial for the users to implement this check
> against the RequestMetrics. I was wondering if it is worth it for
> Kafka to implement this for them and expose a simple metric that they
> can check.
>
> --
> -José
>
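
(For reference, a minimal sketch of the RequestMetrics check discussed above
over plain JMX; the object-name pattern is the one quoted earlier, the rest is
illustrative and assumes remote JMX is enabled on the broker.)

    // Hedged sketch: list RequestsPerSec MBeans per API name and version.
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ListRequestVersions {
        public static void main(String[] args) throws Exception {
            // e.g. service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi
            JMXServiceURL url = new JMXServiceURL(args[0]);
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                ObjectName pattern = new ObjectName(
                    "kafka.network:type=RequestMetrics,name=RequestsPerSec,*");
                for (ObjectName name : conn.queryNames(pattern, null)) {
                    System.out.println(name.getKeyProperty("request")
                            + " v" + name.getKeyProperty("version"));
                }
            }
        }
    }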


Re: [DISCUSS] Add "Security Implications" section to KIP template

2023-01-12 Thread Bruno Cadonna

Hi Chris,

Thank you for the proposal!

Could you add some examples to each of your points?
I think that would make it easier to discuss them.

Best,
Bruno

On 12.01.23 03:15, Luke Chen wrote:

Hi Chris,

I like this idea.
Thanks for raising this!

One question to the template bullet:
• Does it make Kafka or any of its components more difficult to run in a
fully-secured fashion?

I don't quite understand what it means. Could you elaborate on it?

Thank you.
Luke

On Wed, Jan 11, 2023 at 11:59 PM Chris Egerton 
wrote:


Hi all,

I'd like to propose augmenting the KIP template with a "Security
Implications" section. Similar to the recently-added "test plan" section,
the purpose here is to draw explicit attention to the security impact of
the changes in the KIP during the design and discussion phase. On top of
that, it should provide a common framework for how to reason about security
so that everyone from new contributors to seasoned committers/PMC members
can use the same standards when evaluating the security implications of a
proposal.

Here's the draft wording I've come up with so far for the template:

How does this impact the security of the project?
• Does it make Kafka or any of its components (brokers, clients, Kafka
Connect, Kafka Streams, Mirror Maker 2, etc.) less secure when run with
default settings?
• Does it give users new access to configure clients, brokers, topics, etc.
in situations where they did not have this access before? Keep in mind that
the ability to arbitrarily configure a Kafka client can add to the attack
surface of a project and may be safer to disable by default.
• Does it make Kafka or any of its components more difficult to run in a
fully-secured fashion?

Let me know your thoughts. My tentative plan is to add this (with any
modifications after discussion) to the KIP template after at least one week
has elapsed, there has been approval from at least a couple seasoned
contributors, and there are no unaddressed objections.

Cheers,

Chris