Hi Luke,
1. I'm going to assume that migration from ZK to KRaft will sync the info
from ZK into the metadata log.
There will be no need to bootstrap SCRAM for KRaft in the migration case. I
will need to test this.
2. Done! I've also added a comment to each case so people understand the
change.
Hi Proven,
Thanks for the KIP.
Some questions:
1. During the migration from ZK to KRaft, should we format the KRaft
controller with the SCRAM info, or will it sync from ZK?
2. In the KIP, you provided a link to a doc from Confluent (i.e. this line: See
Configuring SCRAM
Hi Proven,
On Tue, Feb 21, 2023 at 1:37 PM Proven Provenzano
wrote:
>
> Hi Jose,
>
> 1. The SCRAM in SCRAM-SHA-256 is required as the mechanism name is
> SCRAM-SHA-256.
> I do realize there is a bit of redundancy here.
>
> 2. I'll add documentation for all the possible values. They are
>
Hi Jose,
1. The "SCRAM" in SCRAM-SHA-256 is required because the mechanism name
itself is SCRAM-SHA-256.
I do realize there is a bit of redundancy here.
2. I'll add documentation for all the possible values. They are
SCRAM-SHA-256 and SCRAM-SHA-512.
3. I'd like to keep it with a capital letter as it is a
Hi Proven,
Thanks for the changes to KIP-900. It looks good to me in general.
Here are some suggestions and questions.
1. In the KIP you give the following example:
--add-scram SCRAM-SHA-256=[user=alice,password=alice-secret]
Is "SCRAM-" required as a prefix? The flag already has the suffix
The original idea was to pass in the JSON string representing the
UserScramCredentialsRecord directly, to make this simple and to require no
parsing at all. Here is an example of the JSON object:
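The JSON example itself is truncated in this archive. Purely as an illustration, a record of roughly this shape could be built as follows; the field names are assumptions modeled on Kafka's ScramCredential (salt, salted password, iterations), not verbatim from KIP-900, and the salted password follows RFC 5802's PBKDF2 derivation:

```python
import base64
import hashlib

# Hypothetical sketch only: field names are assumed, not taken from KIP-900.
# Per RFC 5802, SaltedPassword = PBKDF2-HMAC-SHA-256(password, salt, iterations).
salt = b"a-random-salt"
iterations = 4096
salted_password = hashlib.pbkdf2_hmac("sha256", b"alice-secret", salt, iterations)

record = {
    "name": "alice",
    "mechanism": "SCRAM-SHA-256",
    "salt": base64.b64encode(salt).decode(),
    "saltedPassword": base64.b64encode(salted_password).decode(),
    "iterations": iterations,
}
print(record)
```

The point under discussion is that such a record could be passed to kafka-storage as an opaque JSON string, so the tool itself would need no argument parsing.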
Comments below.
On Wed, Feb 1, 2023 at 2:24 PM Proven Provenzano
wrote:
> The following are also acceptable from your example. I changed the ordering
> because it doesn't matter.
>
> --add-config entity-type brokers entity-name 0 foo=bar
> --add-config default-entity entity-type broker baaz=quux
Comments below.
On Mon, Feb 13, 2023 at 11:44 AM Proven Provenzano
wrote:
>
> Hi Jose
>
> I want to clarify that the file parsing that Argparse4j provides is just a
> mechanism for taking command line args and putting them in a file. It
> doesn't actually change what the
> command line args
Hi Jose
I want to clarify that the file parsing that Argparse4j provides is just a
mechanism for taking command line args and putting them in a file. It doesn't
actually change what the command line args are when processing the file. So I
can add any kafka-storage command line arg into the file
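For readers unfamiliar with the feature being described: Argparse4j's @-file handling mirrors Python argparse's fromfile_prefix_chars, where "@file" expands to the arguments stored in that file, one per line, without changing what the arguments mean. A minimal Python sketch of that behavior (illustrative only; the flag name echoes the KIP, everything else here is hypothetical):

```python
import argparse
import os
import tempfile

# "@file" expands to the args stored in that file, one argument per line;
# the parser and the meaning of each argument are unchanged.
parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
parser.add_argument("--add-scram", action="append")

with tempfile.NamedTemporaryFile("w", suffix=".args", delete=False) as f:
    f.write("--add-scram\nSCRAM-SHA-256=[user=alice,password=alice-secret]\n")
    path = f.name

args = parser.parse_args(["@" + path])
os.unlink(path)
print(args.add_scram)  # -> ['SCRAM-SHA-256=[user=alice,password=alice-secret]']
```

This is why any kafka-storage argument can go into the file: the file is just an alternate source for the same argument vector.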
Thanks for the discussion Colin and Proven.
CLIs can be difficult to design when there are complicated use cases.
Did we consider having CLI flags only for the common case? I would
think that the common case is SCRAM for one user.
For the complicated and less common cases they have to provide a
Hi Colin,
I agree that the --alter flag is not useful. I just wanted to gauge how
close to the original command line we want.
I also think that positional args are not the way to go. Parsing can be
smarter than that.
Using --add-config as the lead for setting dynamic config is fine. I
think
On Fri, Jan 27, 2023, at 11:25, Proven Provenzano wrote:
> Hi Colin,
>
> So I think we are converging where we put the adding of records in
> kafka-storage, allow for salt and iterations to be optional and now we are
> getting to how to parse the individual arguments.
>
> My previous approach is
Hi Colin,
So I think we are converging where we put the adding of records in
kafka-storage, allow for salt and iterations to be optional and now we are
getting to how to parse the individual arguments.
My previous approach is to keep '--' as a delimiter for primary
kafka-storage arguments but to
On Fri, Jan 20, 2023, at 13:02, Proven Provenzano wrote:
> Hi Colin,
> Thanks for the response.
>
> I chose raw records, thinking it might be useful for future additions of
> records that customers might want to add before the first start of the
> cluster. I do see that it is at best an engineer
Hi Colin,
Thanks for the response.
I chose raw records, thinking it might be useful for future additions of
records that customers might want to add before the first start of the
cluster. I do see that it is at best an engineer-friendly interface.
I do think kafka-storage is the correct place to
Hi Proven,
Thanks for putting this together.
We always intended to have a way to bootstrap an all-SCRAM cluster from
scratch.
I have two big comments here. First, I think we need a better interface than
raw records. And second, I'm not sure that kafka-storage.sh is the right place
I have written a KIP describing the API additions needed to kafka-storage
to store SCRAM
credentials at bootstrap time. Please take a look at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-900%3A+KRaft+kafka-storage.sh+API+additions+to+support+SCRAM+for+Kafka+Brokers
--
--Proven