[jira] [Resolved] (KAFKA-14435) Kraft: StandardAuthorizer allowing a non-authorized user when `allow.everyone.if.no.acl.found` is enabled

2023-02-20 Thread Purshotam Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Purshotam Chauhan resolved KAFKA-14435.
---
Fix Version/s: 3.3.2
   3.4.0
   Resolution: Fixed

> Kraft: StandardAuthorizer allowing a non-authorized user when 
> `allow.everyone.if.no.acl.found` is enabled
> -
>
> Key: KAFKA-14435
> URL: https://issues.apache.org/jira/browse/KAFKA-14435
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.2.0, 3.3.0, 3.2.1, 3.2.2, 3.2.3, 3.3.1
>Reporter: Purshotam Chauhan
>Assignee: Purshotam Chauhan
>Priority: Critical
> Fix For: 3.3.2, 3.4.0
>
>
> When `allow.everyone.if.no.acl.found` is enabled, the authorizer should allow 
> everyone only if there is no ACL present for a particular resource. But if 
> there are ACLs present for the resource, it shouldn't allow everyone.
> StandardAuthorizer allows principals for which no ACLs are defined even 
> when the resource has other ACLs.
>  
> This behavior can be validated with the following test case:
>  
> {code:java}
> @Test
> public void testAllowEveryoneConfig() throws Exception {
>     StandardAuthorizer authorizer = new StandardAuthorizer();
>     HashMap<String, Object> configs = new HashMap<>();
>     configs.put(SUPER_USERS_CONFIG, "User:alice;User:chris");
>     configs.put(ALLOW_EVERYONE_IF_NO_ACL_IS_FOUND_CONFIG, "true");
>     authorizer.configure(configs);
>     authorizer.start(new AuthorizerTestServerInfo(Collections.singletonList(PLAINTEXT)));
>     authorizer.completeInitialLoad();
>     // Allow User:Alice to read topic "foobar"
>     List<StandardAclWithId> acls = asList(
>         withId(new StandardAcl(TOPIC, "foobar", LITERAL, "User:Alice", WILDCARD, READ, ALLOW))
>     );
>     acls.forEach(acl -> authorizer.addAcl(acl.id(), acl.acl()));
>     // User:Bob shouldn't be allowed to read topic "foobar"
>     assertEquals(singletonList(DENIED),
>         authorizer.authorize(new MockAuthorizableRequestContext.Builder().
>             setPrincipal(new KafkaPrincipal(USER_TYPE, "Bob")).build(),
>             singletonList(newAction(READ, TOPIC, "foobar"))));
> }
>  {code}
>  
> In the above test, `User:Bob` should be DENIED, but the test case fails.
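The intended semantics can be sketched as follows (a hypothetical illustration, not the actual StandardAuthorizer code): the allow-everyone fallback should apply only when the resource has no ACLs at all.

```java
import java.util.Set;

public class AllowEveryoneSketch {
    public enum Result { ALLOWED, DENIED }

    // principalsWithAcls: principals holding an ALLOW ACL on the resource.
    public static Result authorize(String principal,
                                   Set<String> principalsWithAcls,
                                   boolean allowEveryoneIfNoAclFound) {
        if (principalsWithAcls.isEmpty()) {
            // No ACLs for this resource at all: the config may allow everyone.
            return allowEveryoneIfNoAclFound ? Result.ALLOWED : Result.DENIED;
        }
        // ACLs exist: only explicitly allowed principals pass.
        return principalsWithAcls.contains(principal) ? Result.ALLOWED : Result.DENIED;
    }

    public static void main(String[] args) {
        Set<String> acls = Set.of("User:Alice"); // topic "foobar" has one ACL
        System.out.println(authorize("User:Bob", acls, true));     // DENIED
        System.out.println(authorize("User:Alice", acls, true));   // ALLOWED
        System.out.println(authorize("User:Bob", Set.of(), true)); // ALLOWED
    }
}
```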



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1600

2023-02-20 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14736) Kafka Connect REST API: POST/PUT/DELETE requests are not working

2023-02-20 Thread lingsbigm (Jira)
lingsbigm created KAFKA-14736:
-

 Summary: Kafka Connect REST API: POST/PUT/DELETE requests are not 
working
 Key: KAFKA-14736
 URL: https://issues.apache.org/jira/browse/KAFKA-14736
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 3.1.1
 Environment: development
Reporter: lingsbigm


Hi,
  We are now using Debezium 1.8.1.Final with Kafka Connect in distributed mode. 
But suddenly one day we found that we could not add a new connector, and found 
nothing in the log when we tried to delete a connector or update the 
configuration of the connector; those operations did not work either.
  Besides, I found the connect-configs topic had no messages before the first 
operation; it does get some messages when updating or deleting the connector, 
but the connector remains unchanged.
  Has anyone run into the same problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[REVIEW REQUEST] Adjust Kafka log format with ZK

2023-02-20 Thread Николай Ижиков
Hello.

I have been investigating a production failure lately and found that the current 
log format for the workaround for ZOOKEEPER-2985 [1] is not aligned with the ZK 
internal log format.
Kafka prints the ZK session id as decimal, while ZooKeeper outputs it in hex format.

For example:
```
[2023-02-01 00:42:17,590] INFO Session establishment complete on server 
some.server.name/[some_server_ip]:[server_port], sessionid = 0x4002429158c0005, 
negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2023-02-01 00:42:17,650] ERROR Error while creating ephemeral at 
/brokers/ids/2, node already exists and owner '144155473589174273' does not 
match current session '288270135025467397' 
(kafka.zk.KafkaZkClient$CheckedEphemeral)
[2023-02-01 00:42:22,823] WARN Client session timed out, have not heard from 
server in 4743ms for sessionid 0x100241c44620009 
(org.apache.zookeeper.ClientCnxn)
```

Please, note that 
```
scala> java.lang.Long.toHexString(288270135025467397L)
val res1: String = 4002429158c0005

scala> java.lang.Long.toHexString(144155473589174273L)
val res0: String = 20024a3b3b60001
```

So «288270135025467397» from the CheckedEphemeral log actually points to the 
newly created ZK session from the previous line, «0x4002429158c0005».
It seems more convenient to print the session id in hex format.
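A minimal sketch of the change (hypothetical helper name, not the actual PR code): format the session id with `Long.toHexString` and the `0x` prefix that ZooKeeper's own log lines use.

```java
public class SessionIdFormat {
    // Render a ZK session id the way ZooKeeper's own log lines do.
    public static String hexSessionId(long sessionId) {
        return "0x" + Long.toHexString(sessionId);
    }

    public static void main(String[] args) {
        // The decimal ids from the CheckedEphemeral log line above:
        System.out.println(hexSessionId(288270135025467397L)); // 0x4002429158c0005
        System.out.println(hexSessionId(144155473589174273L)); // 0x20024a3b3b60001
    }
}
```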

I prepared PR [2] to fix this.
Please review.

[1] https://issues.apache.org/jira/browse/ZOOKEEPER-2985
[2] https://github.com/apache/kafka/pull/13281



[jira] [Created] (KAFKA-14735) Improve KRaft metadata image change performance at high topic counts

2023-02-20 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-14735:
-

 Summary: Improve KRaft metadata image change performance at high 
topic counts
 Key: KAFKA-14735
 URL: https://issues.apache.org/jira/browse/KAFKA-14735
 Project: Kafka
  Issue Type: Improvement
  Components: kraft
Reporter: Ron Dagostino
Assignee: Ron Dagostino
 Fix For: 3.5.0


Performance of KRaft metadata image changes is currently O(<# of topics in 
cluster>).  This means the amount of time it takes to create just a *single* 
topic scales linearly with the number of topics in the entire cluster.  This 
impacts both controllers and brokers because both use the metadata image to 
represent the KRaft metadata log.  The performance of these changes should 
scale with the number of topics being changed -- so creating a single topic 
should perform similarly regardless of the number of topics in the cluster.
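To illustrate the cost (a toy example, not Kafka's actual image classes): rebuilding an immutable snapshot by copying a map makes every change O(total topics); the improvement is to share unchanged entries so the cost scales with the size of the delta instead.

```java
import java.util.HashMap;
import java.util.Map;

public class ImageCopySketch {
    // Naive immutable "image": applying one change copies every existing
    // entry, so each change costs O(number of topics in the cluster).
    public static Map<String, Integer> withTopic(Map<String, Integer> image,
                                                 String topic, int partitions) {
        Map<String, Integer> next = new HashMap<>(image); // O(n) copy
        next.put(topic, partitions);
        return next;
    }

    public static void main(String[] args) {
        Map<String, Integer> image = new HashMap<>();
        for (int i = 0; i < 3; i++) {
            // Each call re-copies every prior topic before adding one.
            image = withTopic(image, "topic-" + i, 1);
        }
        System.out.println(image.size()); // 3
    }
}
```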



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-641 A new java interface to replace 'kafka.common.MessageReader'

2023-02-20 Thread Alexandre Dupriez
Hi Chia-Ping,

Thank you for the KIP and apologies for missing it earlier.

A minor comment. The SPI is currently used exclusively for the
ConsoleProducer. However, it exposes high-level methods which hint at
it being a generic component. What is the actual scope of the SPI
inside the Kafka codebase? Is it planned to be re-used in other tools?
Or is this interface used (not implemented) outside of the
ConsoleProducer?

Thanks,
Alexandre

On Sat, Feb 18, 2023 at 19:02, Chia-Ping Tsai wrote:
>
>
>
> On 2023/02/18 08:44:05 Tom Bentley wrote:
> > Hi Chia-Ping,
> >
> > To be honest the stateful version, setting an input stream once using the
> > `readFrom(InputStream)` method and then repeatedly asking for the next
> > record using a parameterless `readRecord()`, seems a bit more natural to me
> > than `readRecord(InputStream inputStream)` being called repeatedly with (I
> > assume) the same input stream. I think the contract is simpler to describe
> > and understand.
>
> I prefer readRecord() also. It is a trade-off between having a `Configurable` 
> interface and having a parameterless readRecord(). If `Configurable` is 
> not required, I'd like to revert to readRecord(). WDYT?
>
> >
> > It's worth thinking about how implementers might have to read bytes from
> > the stream to discover the end of one record and the start of the next.
> > Unless we've guaranteed that the input stream supports mark and reset then
> > they have to buffer the initial bytes of the next record that they've just
> > read from the stream so that they can use them when called next time. So I
> > think RecordReaders are (in general) inherently stateful and therefore it
> > seems harmless for them to also have the input stream itself as some of
> > that state.
>
> You are right. As the input stream may be keyboard input, it would be hard to 
> predict the number of bytes in one record.
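The stateful shape under discussion can be sketched like this (hypothetical names, not the final KIP-641 interface): the reader binds its InputStream once, keeps any look-ahead bytes as internal state, and callers just ask for the next record. The toy implementation reads one record per line and returns it as a String.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class RecordReaderSketch {
    public interface LineRecordReader {
        void readFrom(InputStream in);           // bind the stream once
        String readRecord() throws IOException;  // next record, null when exhausted
    }

    // Toy implementation: one record per line; buffering/look-ahead
    // lives inside the reader, which is why the reader is stateful.
    public static class SimpleReader implements LineRecordReader {
        private java.io.BufferedReader reader;

        public void readFrom(InputStream in) {
            reader = new java.io.BufferedReader(new java.io.InputStreamReader(in));
        }

        public String readRecord() throws IOException {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        LineRecordReader r = new SimpleReader();
        r.readFrom(new ByteArrayInputStream("a\nb\n".getBytes()));
        System.out.println(r.readRecord()); // a
        System.out.println(r.readRecord()); // b
        System.out.println(r.readRecord()); // null
    }
}
```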
>


[jira] [Created] (KAFKA-14734) Use CommandDefaultOptions in StreamsResetter

2023-02-20 Thread Sagar Rao (Jira)
Sagar Rao created KAFKA-14734:
-

 Summary: Use CommandDefaultOptions in StreamsResetter 
 Key: KAFKA-14734
 URL: https://issues.apache.org/jira/browse/KAFKA-14734
 Project: Kafka
  Issue Type: Sub-task
Reporter: Sagar Rao


This came up as a suggestion here: 
[https://github.com/apache/kafka/pull/13127#issuecomment-1433155607] .



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-8752) Ensure plugin classes are instantiable when discovering plugins

2023-02-20 Thread Alexandre Dupriez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Dupriez resolved KAFKA-8752.
--
Resolution: Not A Problem

> Ensure plugin classes are instantiable when discovering plugins
> ---
>
> Key: KAFKA-8752
> URL: https://issues.apache.org/jira/browse/KAFKA-8752
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Alexandre Dupriez
>Assignee: Alexandre Dupriez
>Priority: Minor
> Attachments: stacktrace.log
>
>
> While running integration tests from the IntelliJ IDE, it appears plugins 
> fail to load in {{DelegatingClassLoader.scanUrlsAndAddPlugins}}. The reason 
> was, in this case, that the class 
> {{org.apache.kafka.connect.connector.ConnectorReconfigurationTest$TestConnector}}
>  could not be instantiated - which it is not intended to be.
> The problem does not appear when running integration tests with Gradle, as the 
> runtime closure is different from IntelliJ's - which includes test sources from 
> module dependencies on the classpath.
> While debugging this minor inconvenience, I could see that 
> {{DelegatingClassLoader}} performs a sanity check on the plugin class to 
> instantiate - as of now, it verifies the class is concrete. A quick fix for 
> the problem highlighted above could be to add an extra condition on the Java 
> modifiers of the class to ensure it is instantiable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Request permissions to contribute to Kafka

2023-02-20 Thread ze zhang
Hi,
I'd like to request permissions to contribute to Kafka to propose a KIP

wiki iD : bo bo
email: zhangze1...@gmail.com

Thanks


Re: [DISCUSS] KIP-894: Use incrementalAlterConfigs API for syncing topic configurations

2023-02-20 Thread Gantigmaa Selenge
Hi Chris,

I have fixed the nits you mentioned and tried to improve the flow a little
bit.

Please let me know what you think :) .

Thank you.
Regards,
Tina

On Fri, Feb 17, 2023 at 6:15 PM Chris Egerton 
wrote:

> Hi Tina,
>
> This is looking great. I have a few nits remaining but apart from those I'm
> happy with the KIP and ready to vote.
>
> 1. In the description for how MM2 will behave when configured with
> "use.incremental.alter.configs" set to "requested", the KIP states that "If
> the first request receives an error from an incompatible broker, it will
> fallback to the deprecated AlterConfigs API for the subsequent calls". I
> think this should be "If any request receives an error" instead of "If the
> first request receives an error" since the first request may fail with a
> different error (temporarily unreachable broker, for example), but
> subsequent requests may reveal that the targeted cluster does not support
> the incremental API.
>
> 2. I've realized that I was imagining that
> ConfigPropertyFilter::shouldReplicateSourceDefault would accept a single
> string parameter (the name of the property to replicate) and return a
> boolean value, but this isn't actually laid out in the KIP anywhere. Can
> you include a Java snippet of the interface definition for the new method?
> It might look something like this if the behavior matches what I had in
> mind:
>
> public interface ConfigPropertyFilter extends Configurable, AutoCloseable {
>   boolean shouldReplicateSourceDefault(String prop); // New method
> }
>
> 3. In the "Compatibility, Deprecation, and Migration Plan" section it's
> stated that the default value for "use.incremental.alter.configs" will be
> "required". I believe this should instead be "requested".
>
> Cheers,
>
> Chris
>
> On Fri, Feb 17, 2023 at 6:26 AM Gantigmaa Selenge 
> wrote:
>
> > Hi Chris,
> >
> > > - The incremental API is used
> > - ConfigPropertyFilter::shouldReplicateConfigProperty returns true
> > - ConfigPropertyFilter::shouldReplicateSourceDefault returns false
> >
> > This sounds good to me. So just to clarify this in my head, when
> > incremental API is used, the MM2 will check shouldReplicateSourceDefault
> > first, which is false by default. When false, as you said it will
> manually
> > delete the default configs in the target cluster. When set to true, it
> will
> > include all the configs including the defaults for syncing.
> >
> > It would then check shouldReplicateConfigProperty for each config, it
> will
> > return true unless the config is specified in "config.properties.exclude"
> > property of DefaultConfigPropertyFilter.
> >
> > > I also think that extending the DefaultConfigPropertyFilter to allow
> > users
> > to control which defaults (source or target) get applied on the target
> > cluster is worth adding to the KIP. This could be as simple as adding a
> > property "use.defaults.from" with accepted values "source" and "target",
> or
> > it could be something more granular like
> > "config.properties.source.default.exclude" which, similar to the
> > "config.properties.exclude" property, could take a list of regular
> > expressions of properties whose default values should not be propagated
> > from the source cluster to the target cluster (with a default value of
> > ".*", to preserve existing behavior). I'm leaning toward keeping things
> > simple for now but both seem like viable options. And of course, if you
> > believe we should refrain from doing this, it's at least worth adding to
> > the rejected alternatives section.
> >
> > I agree with extending DefaultConfigPropertyFilter, and I would go with
> the
> > first option that adds  "use.defaults.from". The user can then use the
> > existing "config.properties.exclude" property to exclude certain
> > configurations from the replication.
> >
> > I have addressed these now in the KIP.
> >
> > Regards,
> > Tina
> >
> > On Wed, Feb 15, 2023 at 5:36 PM Chris Egerton 
> > wrote:
> >
> > > Hi Tina,
> > >
> > > It's looking better! A few thoughts:
> > >
> > > I think we should clarify in the KIP that under these conditions, MM2
> > will
> > > explicitly wipe properties from topic configs on the target cluster
> (via
> > a
> > > DELETE operation):
> > > - The incremental API is used
> > > - ConfigPropertyFilter::shouldReplicateConfigProperty returns true
> > > - ConfigPropertyFilter::shouldReplicateSourceDefault returns false
> > >
> > > I also think that extending the DefaultConfigPropertyFilter to allow
> > users
> > > to control which defaults (source or target) get applied on the target
> > > cluster is worth adding to the KIP. This could be as simple as adding a
> > > property "use.defaults.from" with accepted values "source" and
> "target",
> > or
> > > it could be something more granular like
> > > "config.properties.source.default.exclude" which, similar to the
> > > "config.properties.exclude" property, could take a list of regular
> > > expressions of properties whose default values should 

[jira] [Created] (KAFKA-14733) Update AclAuthorizerTest to run tests for both zk and kraft mode

2023-02-20 Thread Purshotam Chauhan (Jira)
Purshotam Chauhan created KAFKA-14733:
-

 Summary: Update AclAuthorizerTest to run tests for both zk and 
kraft mode
 Key: KAFKA-14733
 URL: https://issues.apache.org/jira/browse/KAFKA-14733
 Project: Kafka
  Issue Type: Improvement
Reporter: Purshotam Chauhan
Assignee: Purshotam Chauhan


Currently, we have two test classes, AclAuthorizerTest and 
StandardAuthorizerTest, that are used exclusively for zk and kraft mode respectively.

But AclAuthorizerTest has a lot of tests covering various scenarios. We should 
change AclAuthorizerTest to run for both zk and kraft modes so as to keep 
parity between both modes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1599

2023-02-20 Thread Apache Jenkins Server
See