Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2021-06-02 Thread Rajini Sivaram
Hi Igor,

If we want to support migration of server-side credentials using Admin API,
we would need to get all of the data that is in the stored credential
(salt, number of iterations and salted password). That is sufficient for a
dictionary attack and to impersonate a server. Even though the salt may be
random, returning the combination in a public API would be unsafe. Until
now, we have refrained from returning sensitive values using Admin API
(e.g. any password config), even though it is possible to use SSL plus
authentication plus ACLs to protect usage of the API. We have relied on
this restriction to prevent leakage of sensitive configs through the APIs
and in logs.
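
For concreteness, a minimal sketch of why that combination is enough for an
offline dictionary attack (illustrative code, not anything in Kafka; it assumes
SCRAM-SHA-256, whose Hi() is PBKDF2 with HMAC, available in the JDK):

    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import java.security.MessageDigest;

    public class OfflineDictionaryAttackSketch {

        // RFC 5802: SaltedPassword := Hi(Normalize(password), salt, i);
        // Hi() is PBKDF2 with HMAC-SHA-256 as the PRF for SCRAM-SHA-256.
        static byte[] hi(char[] password, byte[] salt, int iterations) throws Exception {
            PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
        }

        // Given a leaked (salt, iterations, saltedPassword) triple, every
        // dictionary word can be tested offline. The salt only defeats
        // precomputed tables; it does not stop a targeted search.
        static String crack(String[] dictionary, byte[] salt, int iterations,
                            byte[] saltedPassword) throws Exception {
            for (String candidate : dictionary) {
                byte[] guess = hi(candidate.toCharArray(), salt, iterations);
                if (MessageDigest.isEqual(guess, saltedPassword)) {
                    return candidate;
                }
            }
            return null;
        }
    }

(With the salted password in hand, a holder can also derive ServerKey and
compute ServerSignature per RFC 5802, which is what makes server impersonation
possible.)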

Agree that we should find a way to support migration use cases where
credentials need to be transferred across clusters. Not sure what the
safest approach is, when ZK is removed. But a KIP that discusses the
options sounds like a good idea.

Regards,

Rajini


On Wed, Jun 2, 2021 at 11:17 AM Igor Soarez wrote:

> Hi all,
>
> First of all, apologies for digging up this year-old thread.
>
> I believe that without further changes we will be losing support for a
> couple of important SCRAM management scenarios after the transition to a
> Zookeeper-less Kafka.
>
> One of the scenarios is a migration of a cluster. Topics and their
> configuration can be read and re-created in a new cluster, ACLs can be
> copied over as well, and even messages can be mirrored. The SCRAM credentials
> could also be copied from one Zookeeper/chroot to another, but without
> Zookeeper this will no longer be true as we don't have Admin client
> operations for reading and setting the hashed/encrypted credentials.
>
> Another scenario is one where a federated group of clusters allows for
> clients to all use the same set of credentials in different clusters. The
> hashed/encrypted credentials can be pre-computed and then added to each
> cluster's ZK/chroot, or even just copied over from the first cluster to
> others. With access to Zookeeper this could be done without having to store
> the actual password anywhere, only the hashed/encrypted credentials are
> moved around. But because the Upsert operation requires the actual
> password, this will no longer be possible.
>
> I think we could maintain support for both of these scenarios if we expand
> the broker-side API slightly with support for these two operations:
>
> - Fetching the encrypted credentials for a given SCRAM user
> - Creating a SCRAM user with already encrypted credentials instead of with
> a password
>
> Does this make sense? Should we have another KIP?
>
> It seems the first operation at least was originally part of this KIP, but
> Rajini flagged it as a concern:
>
> > With AdminClient, we have been more conservative because we are now giving
> > access over the network. You cannot retrieve any sensitive broker configs,
> > even in encrypted form. I think it would be better to follow the same model
> > for SCRAM credentials. It is not easy to decode the encoded SCRAM
> > credentials, but it is not impossible. In particular, it is prone to
> > dictionary attacks. I think the only information we need to return from
> > `listScramUsers` is the SCRAM mechanism that is supported for that user.
>
> Surely not impossible, but the mechanisms make use of a salt and an
> arbitrary number of iterations, which I believe makes dictionary attacks
> impractical. Besides, calls to broker APIs can be authenticated, which
> keeps access to the encrypted credentials limited.
>
> What do you think?
>
> Best,
>
> --
> Igor
>
> On Tue, Jun 30, 2020, at 10:45 PM, Colin McCabe wrote:
> > Hi Rajini,
> >
> > OK.  Let's remove the encrypted credentials from ListScramUsersResponse
> > and the associated API.  I have updated the KIP-- take a look when you
> > get a chance.
> >
> > best,
> > Colin
> >
> >
> > On Fri, May 15, 2020, at 06:54, Rajini Sivaram wrote:
> >> Hi Colin,
> >>
> >> We have used different approaches for kafka-configs using ZooKeeper and
> >> using brokers until now. This is based on the fact that whatever you can
> >> access using kafka-configs with ZooKeeper, you can also access directly
> >> using ZooKeeper shell. For example, you can retrieve any config stored in
> >> ZooKeeper including sensitive configs. They are encrypted, so you will need
> >> the secret for decoding it, but you can see all encrypted values. Similarly
> >> for SCRAM credentials, you can retrieve the encoded credentials. We allow
> >> this because if you have physical access to ZK, you could have obtained it
> >> from ZK anyway. Our recommendation is to use ZK for SCRAM only if ZK is
> >> secure.
> >>
> >> With AdminClient, we have been more conservative because we are now giving
> >> access over the network. You cannot retrieve any sensitive broker configs,
> >> even in encrypted form. I think it would be better to follow the same model
> >> for SCRAM credentials. It is not easy to decode the encoded SCRAM
> >> credentials, but it is not impossible. In particular, it is prone to
> >> dictionary attacks. I think the only information we need to return from
> >> `listScramUsers` is the SCRAM mechanism that is supported for that user.

Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2021-06-02 Thread Igor Soarez
Hi all,

First of all, apologies for digging up this year-old thread.

I believe that without further changes we will be losing support for a couple 
of important SCRAM management scenarios after the transition to a 
Zookeeper-less Kafka.

One of the scenarios is a migration of a cluster. Topics and their 
configuration can be read and re-created in a new cluster, ACLs can be copied 
over as well, and even messages can be mirrored. The SCRAM credentials could also 
be copied from one Zookeeper/chroot to another, but without Zookeeper this will 
no longer be true as we don't have Admin client operations for reading and 
setting the hashed/encrypted credentials.

Another scenario is one where a federated group of clusters allows for clients 
to all use the same set of credentials in different clusters. The 
hashed/encrypted credentials can be pre-computed and then added to each 
cluster's ZK/chroot, or even just copied over from the first cluster to others. 
With access to Zookeeper this could be done without having to store the actual 
password anywhere, only the hashed/encrypted credentials are moved around. But 
because the Upsert operation requires the actual password, this will no longer 
be possible.

I think we could maintain support for both of these scenarios if we expand the 
broker-side API slightly with support for these two operations:

- Fetching the encrypted credentials for a given SCRAM user
- Creating a SCRAM user with already encrypted credentials instead of with a 
password
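
To make the shape of that concrete, a purely hypothetical sketch (none of these
names exist in the Admin API; salt, iterations, StoredKey and ServerKey are the
stored credential fields from RFC 5802):

    import java.util.concurrent.CompletionStage;

    // Hypothetical sketch only -- illustrates the two operations proposed above.
    interface ScramMigrationAdminSketch {

        enum ScramMechanism { SCRAM_SHA_256, SCRAM_SHA_512 }

        // The stored form of a credential: enough for a broker to verify a
        // client, but not the plaintext password itself.
        record StoredScramCredential(byte[] salt, int iterations,
                                     byte[] storedKey, byte[] serverKey) {}

        // 1) Fetch the stored credential for a user, e.g. to copy it to
        //    another cluster during a migration.
        CompletionStage<StoredScramCredential> describeStoredCredential(
                String user, ScramMechanism mechanism);

        // 2) Create or update a user from an already-derived credential, so
        //    the migrating tool never needs to know the plaintext password.
        CompletionStage<Void> upsertStoredCredential(
                String user, ScramMechanism mechanism, StoredScramCredential credential);
    }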

Does this make sense? Should we have another KIP?

It seems the first operation at least was originally part of this KIP, but 
Rajini flagged it as a concern:

> With AdminClient, we have been more conservative because we are now giving
> access over the network. You cannot retrieve any sensitive broker configs,
> even in encrypted form. I think it would be better to follow the same model
> for SCRAM credentials. It is not easy to decode the encoded SCRAM
> credentials, but it is not impossible. In particular, it is prone to
> dictionary attacks. I think the only information we need to return from
> `listScramUsers` is the SCRAM mechanism that is supported for that user.

Surely not impossible, but the mechanisms make use of a salt and an arbitrary 
number of iterations, which I believe makes dictionary attacks impractical. 
Besides, calls to broker APIs can be authenticated, which keeps access to the 
encrypted credentials limited.

What do you think?

Best,

--
Igor

On Tue, Jun 30, 2020, at 10:45 PM, Colin McCabe wrote:
> Hi Rajini,
> 
> OK.  Let's remove the encrypted credentials from ListScramUsersResponse 
> and the associated API.  I have updated the KIP-- take a look when you 
> get a chance.
> 
> best,
> Colin
> 
> 
> On Fri, May 15, 2020, at 06:54, Rajini Sivaram wrote:
>> Hi Colin,
>> 
>> We have used different approaches for kafka-configs using ZooKeeper and
>> using brokers until now. This is based on the fact that whatever you can
>> access using kafka-configs with ZooKeeper, you can also access directly
>> using ZooKeeper shell. For example, you can retrieve any config stored in
>> ZooKeeper including sensitive configs. They are encrypted, so you will need
>> the secret for decoding it, but you can see all encrypted values. Similarly
>> for SCRAM credentials, you can retrieve the encoded credentials. We allow
>> this because if you have physical access to ZK, you could have obtained it
>> from ZK anyway. Our recommendation is to use ZK for SCRAM only if ZK is
>> secure.
>> 
>> With AdminClient, we have been more conservative because we are now giving
>> access over the network. You cannot retrieve any sensitive broker configs,
>> even in encrypted form. I think it would be better to follow the same model
>> for SCRAM credentials. It is not easy to decode the encoded SCRAM
>> credentials, but it is not impossible. In particular, it is prone to
>> dictionary attacks. I think the only information we need to return from
>> `listScramUsers` is the SCRAM mechanism that is supported for that user.
>> 
>> Regards,
>> 
>> Rajini
>> 
>> 
>> On Fri, May 15, 2020 at 9:25 AM Tom Bentley  wrote:
>> 
>>> Hi Colin,
>>> 
>>> > The AdminClient should do the hashing, right?  I don't see any advantage to
>>> > doing it externally.
>>> 
>>> 
>>> I'm happy so long as the AdminClient interface doesn't require users to do
>>> the hashing themselves.
>>> 
>>> > I do think we should support setting the salt explicitly, but really only
>>> > for testing purposes.  Normally, it should be randomized.
>>> >
>>> 
>>> >
>>> > > I also wonder a little about consistency with the other APIs which have
>>> > > separate create/alter/delete methods. I imagine you considered exposing
>>> > > separate methods in the Java API,  implementing them using the same RPC,
>>> > > but can you share your rationale?
>>> >
>>> > I wanted this to match up with the command-line API, which doesn't
>>> > distinguish between create and alter.
>>> >
>>> 
>>> OK, makes sense.

Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-06-30 Thread Colin McCabe
Hi Rajini,

OK.  Let's remove the encrypted credentials from ListScramUsersResponse and the 
associated API.  I have updated the KIP-- take a look when you get a chance.

best,
Colin


On Fri, May 15, 2020, at 06:54, Rajini Sivaram wrote:
> Hi Colin,
> 
> We have used different approaches for kafka-configs using ZooKeeper and
> using brokers until now. This is based on the fact that whatever you can
> access using kafka-configs with ZooKeeper, you can also access directly
> using ZooKeeper shell. For example, you can retrieve any config stored in
> ZooKeeper including sensitive configs. They are encrypted, so you will need
> the secret for decoding it, but you can see all encrypted values. Similarly
> for SCRAM credentials, you can retrieve the encoded credentials. We allow
> this because if you have physical access to ZK, you could have obtained it
> from ZK anyway. Our recommendation is to use ZK for SCRAM only if ZK is
> secure.
> 
> With AdminClient, we have been more conservative because we are now giving
> access over the network. You cannot retrieve any sensitive broker configs,
> even in encrypted form. I think it would be better to follow the same model
> for SCRAM credentials. It is not easy to decode the encoded SCRAM
> credentials, but it is not impossible. In particular, it is prone to
> dictionary attacks. I think the only information we need to return from
> `listScramUsers` is the SCRAM mechanism that is supported for that user.
> 
> Regards,
> 
> Rajini
> 
> 
> On Fri, May 15, 2020 at 9:25 AM Tom Bentley  wrote:
> 
> > Hi Colin,
> >
> > > The AdminClient should do the hashing, right?  I don't see any advantage to
> > > doing it externally.
> >
> >
> > I'm happy so long as the AdminClient interface doesn't require users to do
> > the hashing themselves.
> >
> > > I do think we should support setting the salt explicitly, but really only
> > > for testing purposes.  Normally, it should be randomized.
> > >
> >
> > >
> > > > I also wonder a little about consistency with the other APIs which have
> > > > separate create/alter/delete methods. I imagine you considered exposing
> > > > separate methods in the Java API,  implementing them using the same RPC,
> > > > but can you share your rationale?
> > >
> > > I wanted this to match up with the command-line API, which doesn't
> > > distinguish between create and alter.
> > >
> >
> > OK, makes sense.
> >
> > Cheers,
> >
> > Tom
> >
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-15 Thread Rajini Sivaram
Hi Colin,

We have used different approaches for kafka-configs using ZooKeeper and
using brokers until now. This is based on the fact that whatever you can
access using kafka-configs with ZooKeeper, you can also access directly
using ZooKeeper shell. For example, you can retrieve any config stored in
ZooKeeper including sensitive configs. They are encrypted, so you will need
the secret for decoding it, but you can see all encrypted values. Similarly
for SCRAM credentials, you can retrieve the encoded credentials. We allow
this because if you have physical access to ZK, you could have obtained it
from ZK anyway. Our recommendation is to use ZK for SCRAM only if ZK is
secure.

With AdminClient, we have been more conservative because we are now giving
access over the network. You cannot retrieve any sensitive broker configs,
even in encrypted form. I think it would be better to follow the same model
for SCRAM credentials. It is not easy to decode the encoded SCRAM
credentials, but it is not impossible. In particular, it is prone to
dictionary attacks. I think the only information we need to return from
`listScramUsers` is the SCRAM mechanism that is supported for that user.

Regards,

Rajini


On Fri, May 15, 2020 at 9:25 AM Tom Bentley  wrote:

> Hi Colin,
>
> > The AdminClient should do the hashing, right?  I don't see any advantage to
> > doing it externally.
>
>
> I'm happy so long as the AdminClient interface doesn't require users to do
> the hashing themselves.
>
> > I do think we should support setting the salt explicitly, but really only
> > for testing purposes.  Normally, it should be randomized.
> >
>
> >
> > > I also wonder a little about consistency with the other APIs which have
> > > separate create/alter/delete methods. I imagine you considered exposing
> > > separate methods in the Java API,  implementing them using the same RPC,
> > > but can you share your rationale?
> >
> > I wanted this to match up with the command-line API, which doesn't
> > distinguish between create and alter.
> >
>
> OK, makes sense.
>
> Cheers,
>
> Tom
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-15 Thread Tom Bentley
Hi Colin,

> The AdminClient should do the hashing, right?  I don't see any advantage to
> doing it externally.


I'm happy so long as the AdminClient interface doesn't require users to do
the hashing themselves.

> I do think we should support setting the salt explicitly, but really only
> for testing purposes.  Normally, it should be randomized.
>

>
> > I also wonder a little about consistency with the other APIs which have
> > separate create/alter/delete methods. I imagine you considered exposing
> > separate methods in the Java API,  implementing them using the same RPC,
> > but can you share your rationale?
>
> I wanted this to match up with the command-line API, which doesn't
> distinguish between create and alter.
>

OK, makes sense.

Cheers,

Tom


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-14 Thread Colin McCabe
Hi Cheng,

Good point.  I updated the KIP to include the same information that is 
currently returned.

best,
Colin


On Sun, May 10, 2020, at 22:40, Cheng Tan wrote:
> Hi Colin,
> 
> 
> If I understood correctly, in your design, listScramUsers will return 
> the mechanism and iteration. Let’s use the field naming of RFC 5802 for 
> this discussion:
> 
>  SaltedPassword  := Hi(Normalize(password), salt, i)
>  ClientKey   := HMAC(SaltedPassword, "Client Key")
>  StoredKey   := H(ClientKey)
>  AuthMessage := client-first-message-bare + "," +
> server-first-message + "," +
> client-final-message-without-proof
>  ClientSignature := HMAC(StoredKey, AuthMessage)
>  ClientProof := ClientKey XOR ClientSignature
>  ServerKey   := HMAC(SaltedPassword, "Server Key")
>  ServerSignature := HMAC(ServerKey, AuthMessage)
> 
> I think it’s also safe and useful for listScramUsers to return salt and 
> ServerKey. The current practice of --describe with --zookeeper is 
> returning these two fields (KIP-84)
> 
> bin/kafka-configs.sh --zookeeper localhost:2181 --describe 
> --entity-type users --entity-name alice
> Configs for user-principal 'alice' are 
> SCRAM-SHA-512=[salt=djR5dXdtZGNqamVpeml6NGhiZmMwY3hrbg==,stored_key=sb5jkqStV9RwPVTGxG1ZJHxF89bqjsD1jT4SFDK4An2goSnWpbNdY0nkq0fNV8xFcZqb7MVMJ1tyEgif5OXKDQ==,
>  
> server_key=3EfuHB4LPOcjDH0O5AysSSPiLskQfM5K9+mOzGmkixasmWEGJWZv7svtgkP+acO2Q9ms9WQQ9EndAJCvKHmjjg==,iterations=4096],SCRAM-SHA-256=[salt=10ibs0z7xzlu6w5ns0n188sis5,stored_key=+Acl/wi1vLZ95Uqj8rRHVcSp6qrdfQIwZbaZBwM0yvo=,server_key=nN+fZauE6vG0hmFAEj/49+2yk0803y67WSXMYkgh77k=,iterations=4096]
> 
> 
> Please let me know what you think.
> 
> Best, - Cheng Tan
> 
> > On Apr 30, 2020, at 11:16 PM, Colin McCabe  wrote:
> > 
> > 
> 
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-14 Thread Colin McCabe
On Tue, May 12, 2020, at 06:43, Tom Bentley wrote:
> Hi Colin,
> 
> It's not clear whether users of the Java API would need to supply the salt
> and salted password directly, or whether the constructor of ScramCredential
> would take the password and perform the hashing itself.
> 

Hi Tom,

The AdminClient should do the hashing, right?  I don't see any advantage to 
doing it externally.  I do think we should support setting the salt explicitly, 
but really only for testing purposes.  Normally, it should be randomized.

> I also wonder a little about consistency with the other APIs which have
> separate create/alter/delete methods. I imagine you considered exposing
> separate methods in the Java API,  implementing them using the same RPC,
> but can you share your rationale?

I wanted this to match up with the command-line API, which doesn't distinguish 
between create and alter.

best,
Colin

> 
> Kind regards,
> 
> Tom
> 
> On Mon, May 11, 2020 at 6:48 AM Cheng Tan  wrote:
> 
> > Hi Colin,
> >
> >
> > If I understood correctly, in your design, listScramUsers will return the
> > mechanism and iteration. Let’s use the field naming of RFC 5802 for this
> > discussion:
> >
> >  SaltedPassword  := Hi(Normalize(password), salt, i)
> >  ClientKey   := HMAC(SaltedPassword, "Client Key")
> >  StoredKey   := H(ClientKey)
> >  AuthMessage := client-first-message-bare + "," +
> > server-first-message + "," +
> > client-final-message-without-proof
> >  ClientSignature := HMAC(StoredKey, AuthMessage)
> >  ClientProof := ClientKey XOR ClientSignature
> >  ServerKey   := HMAC(SaltedPassword, "Server Key")
> >  ServerSignature := HMAC(ServerKey, AuthMessage)
> >
> > I think it’s also safe and useful for listScramUsers to return salt and
> > ServerKey. The current practice of --describe with --zookeeper is returning
> > these two fields (KIP-84)
> >
> > bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type
> > users --entity-name alice
> > Configs for user-principal 'alice' are
> > SCRAM-SHA-512=[salt=djR5dXdtZGNqamVpeml6NGhiZmMwY3hrbg==,stored_key=sb5jkqStV9RwPVTGxG1ZJHxF89bqjsD1jT4SFDK4An2goSnWpbNdY0nkq0fNV8xFcZqb7MVMJ1tyEgif5OXKDQ==,
> > server_key=3EfuHB4LPOcjDH0O5AysSSPiLskQfM5K9+mOzGmkixasmWEGJWZv7svtgkP+acO2Q9ms9WQQ9EndAJCvKHmjjg==,iterations=4096],SCRAM-SHA-256=[salt=10ibs0z7xzlu6w5ns0n188sis5,stored_key=+Acl/wi1vLZ95Uqj8rRHVcSp6qrdfQIwZbaZBwM0yvo=,server_key=nN+fZauE6vG0hmFAEj/49+2yk0803y67WSXMYkgh77k=,iterations=4096]
> >
> >
> > Please let me know what you think.
> >
> > Best, - Cheng Tan
> >
> > > On Apr 30, 2020, at 11:16 PM, Colin McCabe  wrote:
> > >
> > >
> >
> >
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-12 Thread Tom Bentley
Hi Colin,

It's not clear whether users of the Java API would need to supply the salt
and salted password directly, or whether the constructor of ScramCredential
would take the password and perform the hashing itself.

I also wonder a little about consistency with the other APIs which have
separate create/alter/delete methods. I imagine you considered exposing
separate methods in the Java API,  implementing them using the same RPC,
but can you share your rationale?

Kind regards,

Tom

On Mon, May 11, 2020 at 6:48 AM Cheng Tan  wrote:

> Hi Colin,
>
>
> If I understood correctly, in your design, listScramUsers will return the
> mechanism and iteration. Let’s use the field naming of RFC 5802 for this
> discussion:
>
>  SaltedPassword  := Hi(Normalize(password), salt, i)
>  ClientKey   := HMAC(SaltedPassword, "Client Key")
>  StoredKey   := H(ClientKey)
>  AuthMessage := client-first-message-bare + "," +
> server-first-message + "," +
> client-final-message-without-proof
>  ClientSignature := HMAC(StoredKey, AuthMessage)
>  ClientProof := ClientKey XOR ClientSignature
>  ServerKey   := HMAC(SaltedPassword, "Server Key")
>  ServerSignature := HMAC(ServerKey, AuthMessage)
>
> I think it’s also safe and useful for listScramUsers to return salt and
> ServerKey. The current practice of --describe with --zookeeper is returning
> these two fields (KIP-84)
>
> bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type
> users --entity-name alice
> Configs for user-principal 'alice' are
> SCRAM-SHA-512=[salt=djR5dXdtZGNqamVpeml6NGhiZmMwY3hrbg==,stored_key=sb5jkqStV9RwPVTGxG1ZJHxF89bqjsD1jT4SFDK4An2goSnWpbNdY0nkq0fNV8xFcZqb7MVMJ1tyEgif5OXKDQ==,
> server_key=3EfuHB4LPOcjDH0O5AysSSPiLskQfM5K9+mOzGmkixasmWEGJWZv7svtgkP+acO2Q9ms9WQQ9EndAJCvKHmjjg==,iterations=4096],SCRAM-SHA-256=[salt=10ibs0z7xzlu6w5ns0n188sis5,stored_key=+Acl/wi1vLZ95Uqj8rRHVcSp6qrdfQIwZbaZBwM0yvo=,server_key=nN+fZauE6vG0hmFAEj/49+2yk0803y67WSXMYkgh77k=,iterations=4096]
>
>
> Please let me know what you think.
>
> Best, - Cheng Tan
>
> > On Apr 30, 2020, at 11:16 PM, Colin McCabe  wrote:
> >
> >
>
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-10 Thread Cheng Tan
Hi Colin,


If I understood correctly, in your design, listScramUsers will return the 
mechanism and iteration. Let’s use the field naming of RFC 5802 for this 
discussion:

 SaltedPassword  := Hi(Normalize(password), salt, i)
 ClientKey   := HMAC(SaltedPassword, "Client Key")
 StoredKey   := H(ClientKey)
 AuthMessage := client-first-message-bare + "," +
server-first-message + "," +
client-final-message-without-proof
 ClientSignature := HMAC(StoredKey, AuthMessage)
 ClientProof := ClientKey XOR ClientSignature
 ServerKey   := HMAC(SaltedPassword, "Server Key")
 ServerSignature := HMAC(ServerKey, AuthMessage)
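
As a concrete illustration (standard JDK crypto, assuming SCRAM-SHA-256; not
Kafka code), everything the server stores follows mechanically from
SaltedPassword:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class Rfc5802KeysSketch {

        static byte[] hmacSha256(byte[] key, String message) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws Exception {
            // Placeholder; in reality SaltedPassword = Hi(password, salt, i).
            byte[] saltedPassword = new byte[32];
            byte[] clientKey = hmacSha256(saltedPassword, "Client Key");
            byte[] storedKey = MessageDigest.getInstance("SHA-256").digest(clientKey);
            byte[] serverKey = hmacSha256(saltedPassword, "Server Key");
            // salt, iterations, StoredKey and ServerKey are the four values the
            // --describe output below prints. Note that ServerKey alone lets its
            // holder compute ServerSignature, i.e. pose as the server.
        }
    }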

I think it’s also safe and useful for listScramUsers to return salt and 
ServerKey. The current practice of --describe with --zookeeper is returning these 
two fields (KIP-84)

bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users 
--entity-name alice
Configs for user-principal 'alice' are 
SCRAM-SHA-512=[salt=djR5dXdtZGNqamVpeml6NGhiZmMwY3hrbg==,stored_key=sb5jkqStV9RwPVTGxG1ZJHxF89bqjsD1jT4SFDK4An2goSnWpbNdY0nkq0fNV8xFcZqb7MVMJ1tyEgif5OXKDQ==,
 
server_key=3EfuHB4LPOcjDH0O5AysSSPiLskQfM5K9+mOzGmkixasmWEGJWZv7svtgkP+acO2Q9ms9WQQ9EndAJCvKHmjjg==,iterations=4096],SCRAM-SHA-256=[salt=10ibs0z7xzlu6w5ns0n188sis5,stored_key=+Acl/wi1vLZ95Uqj8rRHVcSp6qrdfQIwZbaZBwM0yvo=,server_key=nN+fZauE6vG0hmFAEj/49+2yk0803y67WSXMYkgh77k=,iterations=4096]


Please let me know what you think.

Best, - Cheng Tan

> On Apr 30, 2020, at 11:16 PM, Colin McCabe  wrote:
> 
> 



Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-08 Thread Colin McCabe
Hi Jakub,

This supersedes KIP-506.

However, I think maybe some of the ideas from KIP-506 might be worth exploring 
in separate KIPs down the road (like having secure endpoints where certain 
admin operations were disabled).  But we want to stay focused on getting SCRAM 
to work without the ZK dependency here.

best,
Colin


On Thu, May 7, 2020, at 05:35, Jakub Scholz wrote:
> Hi Colin,
> 
> Could you clarify how this fits with KIP-506 which seems to deal with the
> same?
> 
> Thanks & Regards
> Jakub
> 
> On Fri, May 1, 2020 at 8:18 AM Colin McCabe  wrote:
> 
> > Hi all,
> >
> > I posted a KIP about adding a new SCRAM configuration API on the broker.
> > Check it out here if you get a chance:
> > https://cwiki.apache.org/confluence/x/ihERCQ
> >
> > cheers,
> > Colin
> >
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-08 Thread Colin McCabe
On Thu, May 7, 2020, at 04:27, Rajini Sivaram wrote:
> Hi Colin,
> 
> Thanks for the KIP. A couple of comments below:
> 
> 1) SCRAM password is never sent over the wire today, not by clients, not by
> tools. A salted-hashed version of it stored in ZooKeeper is sent over the
> wire to ZK and read by brokers from ZK. Another randomly-salted-hashed
> version is sent by clients during authentication. The transformation of the
> password to salted version is performed by kafka-configs tool. I think we
> should continue to do the same.

Thanks, Rajini.  I didn't realize we used a randomly generated salt here.  That 
is indeed a very good idea, and something we should continue doing.  I changed 
the RPC to add a salt field.

I do still feel that plaintext is inherently insecure.  But if we can easily 
add a little more security then we should do it.  I feel a bit silly now :)

> We should still treat this credential as a
> `password` config to ensure we don't log it anywhere. One of the biggest
> advantages of SCRAM is that the broker (or ZooKeeper) is never in possession of
> the client password, it has the ability to verify the client password, but
> not impersonate the user with that password. The proposed API breaks that
> and hence we should perform transformation on the tool, not the broker.

Agreed.

> 
> 2) The naming in the API seems a bit confusing. Scram mechanism is a thing
> in SASL. So ScramMechanism would be SCRAM-SHA-256 or SCRAM-SHA-512. These
> are standard names (but we use underscore instead of hyphen for the enums).
> The underlying algorithms are internal and don't need to be in the public
> API. We are using ScramMechanism in the new API to refer to a
> ScramCredential. And ScramMechanismType to use strings that are not the
> actual SCRAM mechanism. Perhaps these could just be `ScramMechanism` and
> `ScramCredential` like they are currently in the Kafka codebase, but just
> refactored to separate out internals from the public API?
> 

Good point.  I changed the API so that ScramMechanism is the enum, 
ScramMechanismInfo is the enum + the number of iterations, and ScramCredential 
is the enum, iterations, salt, and password data.

I also changed the salt and password to be bytes fields instead of strings to 
reflect the fact that they are binary data.
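
A rough sketch of the three shapes just described (illustrative only; the
exact names and fields are whatever the KIP specifies):

    // Not the actual KIP classes; just the structure described above.
    public final class ScramTypesSketch {

        // "ScramMechanism is the enum"
        enum ScramMechanism { SCRAM_SHA_256, SCRAM_SHA_512 }

        // "ScramMechanismInfo is the enum + the number of iterations"
        record ScramMechanismInfo(ScramMechanism mechanism, int iterations) {}

        // "ScramCredential is the enum, iterations, salt, and password data",
        // with salt and password as bytes since they are binary data.
        record ScramCredential(ScramMechanism mechanism, int iterations,
                               byte[] salt, byte[] saltedPassword) {}
    }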

best,
Colin

>
> Regards,
> 
> Rajini
> 
> 
> On Thu, May 7, 2020 at 5:48 AM Colin McCabe  wrote:
> 
> > On Tue, May 5, 2020, at 08:12, Tom Bentley wrote:
> > > Hi Colin,
> > >
> > > SCRAM is better than SASL PLAIN because it doesn't send the password over
> > > the wire in the clear. Presumably this property is important for some users
> > > who have chosen to use SCRAM. This proposal does send the password in the
> > > clear when setting the password. That doesn't mean it can't be used
> > > securely (e.g. connect over TLS–if available–when setting or changing a
> > > password, or connect to the broker from the same machine over localhost),
> > > but won't this just result in some CVE against Kafka? It's a tricky problem
> > > to solve in a cluster without TLS (you basically just end up reinventing
> > > TLS).
> > >
> >
> > Hi Tom,
> >
> > Thanks for the thoughtful reply.
> >
> > If you don't set up SSL, we currently do send passwords in the clear over
> > the wire.  There's just no other option-- as you yourself said, we're not
> > going to reinvent TLS from first principles.  So this KIP isn't changing
> > our policy about this.
> >
> > One example of this is if you have a zookeeper connection and it is not
> > encrypted, your SCRAM password currently goes over the wire in the clear
> > when you run the kafka-configs.sh command.  Another example is if you have
> > one plaintext Kafka endpoint and one SSL Kafka endpoint, you can send the
> > keystore password for the SSL endpoint in cleartext over the plaintext
> > endpoint.
> >
> > >
> > > I know you're not a fan of the ever-growing list of configs, but when
> > > I wrote KIP-506 I suggested some configs which could have been used to at
> > > least make it secure by default.
> > >
> >
> > I think if we want to add a configuration like that, it should be done in
> > a separate KIP, because it affects more than just SCRAM.  We would also
> > have to disallow setting any "sensitive" configuration over
> > IncrementalAlterConfigs / AlterConfigs.
> >
> > Although I haven't thought about it that much, I doubt that such a KIP
> > would be successful.  Think about who still uses plaintext mode.
> > Developers use it for testing things locally.  They don't want additional
> > restrictions on what they can do.  Sysadmins who are really convinced that
> > their network is secure (I know, I know...) or who are setting up a
> > proof-of-concept might use plaintext mode.  They don't want restrictions
> > either.
> >
> > If the network is insecure and you're using plaintext, then we shouldn't
> > allow you to send or receive messages either, since they could contain
> > sensitive data.  So I think it's impossible to follow this logic very far
> > before you arrive at plaintext delenda est.  And indeed, there have been
> > people who have said we should remove the option to use plaintext mode from
> > Kafka.  But so far, we're not ready to do that.

Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-07 Thread Jakub Scholz
Hi Colin,

Could you clarify how this fits with KIP-506 which seems to deal with the
same?

Thanks & Regards
Jakub

On Fri, May 1, 2020 at 8:18 AM Colin McCabe  wrote:

> Hi all,
>
> I posted a KIP about adding a new SCRAM configuration API on the broker.
> Check it out here if you get a chance:
> https://cwiki.apache.org/confluence/x/ihERCQ
>
> cheers,
> Colin
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-07 Thread Rajini Sivaram
Hi Colin,

Thanks for the KIP. A couple of comments below:

1) SCRAM password is never sent over the wire today, not by clients, not by
tools. A salted-hashed version of it stored in ZooKeeper is sent over the
wire to ZK and read by brokers from ZK. Another randomly-salted-hashed
version is sent by clients during authentication. The transformation of the
password to salted version is performed by kafka-configs tool. I think we
should continue to do the same. We should still treat this credential as a
`password` config to ensure we don't log it anywhere. One of the biggest
advantages of SCRAM is that the broker (or ZooKeeper) is never in possession of
the client password, it has the ability to verify the client password, but
not impersonate the user with that password. The proposed API breaks that
and hence we should perform transformation on the tool, not the broker.
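
A minimal sketch of that tool-side transformation with a randomized salt
(illustrative code assuming SCRAM-SHA-256; Hi() is PBKDF2 with HMAC per
RFC 5802):

    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import java.security.SecureRandom;

    public class ToolSideHashingSketch {
        public static void main(String[] args) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);   // fresh random salt per credential
            int iterations = 4096;
            byte[] saltedPassword = SecretKeyFactory
                    .getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(new PBEKeySpec(
                            "alice-secret".toCharArray(), salt, iterations, 256))
                    .getEncoded();
            // The request then carries (mechanism, iterations, salt, salted
            // credential); the plaintext password never leaves the tool.
        }
    }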

2) The naming in the API seems a bit confusing. Scram mechanism is a thing
in SASL. So ScramMechanism would be SCRAM-SHA-256 or SCRAM-SHA-512. These
are standard names (but we use underscore instead of hyphen for the enums).
The underlying algorithms are internal and don't need to be in the public
API. We are using ScramMechanism in the new API to refer to a
ScramCredential. And ScramMechanismType to use strings that are not the
actual SCRAM mechanism. Perhaps these could just be `ScramMechanism` and
`ScramCredential` like they are currently in the Kafka codebase, but just
refactored to separate out internals from the public API?

Regards,

Rajini


On Thu, May 7, 2020 at 5:48 AM Colin McCabe  wrote:

> On Tue, May 5, 2020, at 08:12, Tom Bentley wrote:
> > Hi Colin,
> >
> > SCRAM is better than SASL PLAIN because it doesn't send the password over
> > the wire in the clear. Presumably this property is important for some users
> > who have chosen to use SCRAM. This proposal does send the password in the
> > clear when setting the password. That doesn't mean it can't be used
> > securely (e.g. connect over TLS–if available–when setting or changing a
> > password, or connect to the broker from the same machine over localhost),
> > but won't this just result in some CVE against Kafka? It's a tricky problem
> > to solve in a cluster without TLS (you basically just end up reinventing
> > TLS).
> >
>
> Hi Tom,
>
> Thanks for the thoughtful reply.
>
> If you don't set up SSL, we currently do send passwords in the clear over
> the wire.  There's just no other option-- as you yourself said, we're not
> going to reinvent TLS from first principles.  So this KIP isn't changing
> our policy about this.
>
> One example of this is if you have a zookeeper connection and it is not
> encrypted, your SCRAM password currently goes over the wire in the clear
> when you run the kafka-configs.sh command.  Another example is if you have
> one plaintext Kafka endpoint and one SSL Kafka endpoint, you can send the
> keystore password for the SSL endpoint in cleartext over the plaintext
> endpoint.
>
> >
> > I know you're not a fan of the ever-growing list of configs, but when
> > I wrote KIP-506 I suggested some configs which could have been used to at
> > least make it secure by default.
> >
>
> I think if we want to add a configuration like that, it should be done in
> a separate KIP, because it affects more than just SCRAM.  We would also
> have to disallow setting any "sensitive" configuration over
> IncrementalAlterConfigs / AlterConfigs.
>
> Although I haven't thought about it that much, I doubt that such a KIP
> would be successful.  Think about who still uses plaintext mode.
> Developers use it for testing things locally.  They don't want additional
> restrictions on what they can do.  Sysadmins who are really convinced that
> their network is secure (I know, I know...) or who are setting up a
> proof-of-concept might use plaintext mode.  They don't want restrictions
> either.
>
> If the network is insecure and you're using plaintext, then we shouldn't
> allow you to send or receive messages either, since they could contain
> sensitive data.  So I think it's impossible to follow this logic very far
> before you arrive at plaintext delenda est.  And indeed, there have been
> people who have said we should remove the option to use plaintext mode from
> Kafka.  But so far, we're not ready to do that.
>
> >
> > You mentioned on the discussion for KIP-595 that there's a bootstrapping
> > problem to be solved in this area. Maybe KIP-595 is the better place for
> > that, but I wondered if you had any thoughts about it. I thought about
> > using a broker CLI option to read a password from stdin (`--scram-user tom`
> > would prompt for the password for user 'tom' on boot), that way the
> > password doesn't have to be on the command line arguments or in a file. In
> > fact this could be a solution to both the bootstrap problem and plaintext
> > password problem in the absence of TLS.
> >
>
> Yeah, I think this would be a good improvement.  The ability to read a
> password from stdin without echoing it to the terminal would be nice.  But it
> also deserves its own separate KIP, and should also apply to the other stuff
> you can do with kafka-configs.sh (SSL passwords, etc.)

Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-06 Thread Colin McCabe
On Tue, May 5, 2020, at 08:12, Tom Bentley wrote:
> Hi Colin,
> 
> SCRAM is better than SASL PLAIN because it doesn't send the password over
> the wire in the clear. Presumably this property is important for some users
> who have chosen to use SCRAM. This proposal does send the password in the
> clear when setting the password. That doesn't mean it can't be used
> securely (e.g. connect over TLS–if available–when setting or changing a
> password, or connect to the broker from the same machine over localhost),
> but won't this just result in some CVE against Kafka? It's a tricky problem
> to solve in a cluster without TLS (you basically just end up reinventing
> TLS).
>

Hi Tom,

Thanks for the thoughtful reply.

If you don't set up SSL, we currently do send passwords in the clear over the 
wire.  There's just no other option-- as you yourself said, we're not going to 
reinvent TLS from first principles.  So this KIP isn't changing our policy 
about this.

One example of this is if you have a zookeeper connection and it is not 
encrypted, your SCRAM password currently goes over the wire in the clear when 
you run the kafka-configs.sh command.  Another example is if you have one 
plaintext Kafka endpoint and one SSL Kafka endpoint, you can send the keystore 
password for the SSL endpoint in cleartext over the plaintext endpoint.  

>
> I know you're not a fan of the ever-growing list of configs, but when
> I wrote KIP-506 I suggested some configs which could have been used to at
> least make it secure by default.
>

I think if we want to add a configuration like that, it should be done in a 
separate KIP, because it affects more than just SCRAM.  We would also have to 
disallow setting any "sensitive" configuration over IncrementalAlterConfigs / 
AlterConfigs.

Although I haven't thought about it that much, I doubt that such a KIP would be 
successful.  Think about who still uses plaintext mode.  Developers use it 
for testing things locally.  They don't want additional restrictions on what 
they can do.  Sysadmins who are really convinced that their network is secure 
(I know, I know...) or who are setting up a proof-of-concept might use 
plaintext mode.  They don't want restrictions either.

If the network is insecure and you're using plaintext, then we shouldn't allow 
you to send or receive messages either, since they could contain sensitive 
data.  So I think it's impossible to follow this logic very far before you 
arrive at plaintext delenda est.  And indeed, there have been people who have 
said we should remove the option to use plaintext mode from Kafka.  But so far, 
we're not ready to do that.

> 
> You mentioned on the discussion for KIP-595 that there's a bootstrapping
> problem to be solved in this area. Maybe KIP-595 is the better place for
> that, but I wondered if you had any thoughts about it. I thought about
> using a broker CLI option to read a password from stdin (`--scram-user tom`
> would prompt for the password for user 'tom' on boot), that way the
> password doesn't have to be on the command line arguments or in a file. In
> fact this could be a solution to both the bootstrap problem and plaintext
> password problem in the absence of TLS.
> 

Yeah, I think this would be a good improvement.  The ability to read a password 
from stdin without echoing it to the terminal would be nice.  But it also 
deserves its own separate KIP, and should also apply to the other stuff you can 
do with kafka-configs.sh (SSL passwords, etc.)

best,
Colin

>
> Kind regards,
> 
> Tom
> 
> On Tue, May 5, 2020 at 12:52 AM Guozhang Wang  wrote:
> 
> > Cool, that makes sense.
> >
> > Guozhang
> >
> >
> > On Mon, May 4, 2020 at 2:50 PM Colin McCabe  wrote:
> >
> > > I think once something becomes more complex than just key = value it's
> > > time to consider an official Kafka API, rather than trying to fit it into
> > > AlterConfigs.  For example, for client quotas, we have KIP-546.
> > >
> > > There are just so many reasons.  Real Kafka APIs have well-defined
> > > compatibility policies, Java types defined that make them easy to use, and
> > > APIs that can return partial results rather than needing to do the
> > > filtering on the client side.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Mon, May 4, 2020, at 14:30, Guozhang Wang wrote:
> > > > Got it.
> > > >
> > > > Besides SCRAM, are there other scenarios that we may have such
> > > > "hierarchical" (I know the term may not be very accurate here :P)
> > configs
> > > > such as "config1=[key1=value1, key2=value2]" compared with most common
> > > > pattern of "config1=value1" or "config1=value1,config2=value2"? For
> > > example
> > > > I know that quotas may be specified in the former pattern as well. If
> > we
> > > > believe that such hierarchical configuration may be more common in the
> > > > future, I'm wondering should we just consider support it more natively
> > in
> > > > alter/describe config patterns.
> > > >

Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-05 Thread Tom Bentley
Hi Colin,

SCRAM is better than SASL PLAIN because it doesn't send the password over
the wire in the clear. Presumably this property is important for some users
who have chosen to use SCRAM. This proposal does send the password in the
clear when setting the password. That doesn't mean it can't be used
securely (e.g. connect over TLS–if available–when setting or changing a
password, or connect to the broker from the same machine over localhost),
but won't this just result in some CVE against Kafka? It's a tricky problem
to solve in a cluster without TLS (you basically just end up reinventing
TLS). I know you're not a fan of the ever-growing list of configs, but when
I wrote KIP-506 I suggested some configs which could have been used to at
least make it secure by default.

You mentioned on the discussion for KIP-595 that there's a bootstrapping
problem to be solved in this area. Maybe KIP-595 is the better place for
that, but I wondered if you had any thoughts about it. I thought about
using a broker CLI option to read a password from stdin (`--scram-user tom`
would prompt for the password for user 'tom' on boot), that way the
password doesn't have to be on the command line arguments or in a file. In
fact this could be a solution to both the bootstrap problem and plaintext
password problem in the absence of TLS.
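
A minimal sketch of the prompt side of that idea (the --scram-user flag
handling is hypothetical; java.io.Console is standard JDK):

    import java.io.Console;
    import java.util.Arrays;

    public class ScramBootstrapPromptSketch {
        public static void main(String[] args) {
            String user = args.length > 0 ? args[0] : "tom";
            Console console = System.console();
            if (console == null) {
                System.err.println("No interactive console; cannot prompt for a password");
                System.exit(1);
            }
            // readPassword() disables echo, so the secret never appears on
            // screen, in argv, or in a config file.
            char[] password = console.readPassword("SCRAM password for user '%s': ", user);
            try {
                // ... derive the salted credential from `password` and register it ...
            } finally {
                Arrays.fill(password, '\0');   // wipe the plaintext buffer afterwards
            }
        }
    }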

Kind regards,

Tom


On Tue, May 5, 2020 at 12:52 AM Guozhang Wang  wrote:

> Cool, that makes sense.
>
> Guozhang
>
>
> On Mon, May 4, 2020 at 2:50 PM Colin McCabe  wrote:
>
> > I think once something becomes more complex than just key = value it's
> > time to consider an official Kafka API, rather than trying to fit it into
> > AlterConfigs.  For example, for client quotas, we have KIP-546.
> >
> > There are just so many reasons.  Real Kafka APIs have well-defined
> > compatibility policies, Java types defined that make them easy to use,
> and
> > APIs that can return partial results rather than needing to do the
> > filtering on the client side.
> >
> > best,
> > Colin
> >
> >
> > On Mon, May 4, 2020, at 14:30, Guozhang Wang wrote:
> > > Got it.
> > >
> > > Besides SCRAM, are there other scenarios that we may have such
> > > "hierarchical" (I know the term may not be very accurate here :P)
> configs
> > > such as "config1=[key1=value1, key2=value2]" compared with most common
> > > pattern of "config1=value1" or "config1=value1,config2=value2"? For example
> > > I know that quotas may be specified in the former pattern as well. If we
> > > believe that such hierarchical configuration may be more common in the
> > > future, I'm wondering whether we should consider supporting it more
> > > natively in alter/describe config patterns.
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Mon, May 4, 2020 at 1:32 PM Colin McCabe  wrote:
> > >
> > > > If we use AlterConfigs then we end up parsing strings like
> > > > 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]'
> > > > on the broker into the same information that's currently in
> > > > ScramUserAlteration.  This doesn't change the complexity of the
> > > > command-line tool, since it does that parsing anyway.  But it does mean
> > > > that other programs that wanted to interact with SCRAM via the API would
> > > > not really have datatypes to describe what they were doing, just lumps of
> > > > text.
> > > >
> > > > Another question is how would we even list SCRAM users if we were to
> > > > re-purpose AlterConfigs / DescribeConfigs for this.  I suppose if we wanted
> > > > to go down this path we could define a new resource and use DescribeConfigs
> > > > to describe its keys.  But its values would always have to be returned as
> > > > null by DescribeConfigs, since they would be considered "sensitive."
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Sun, May 3, 2020, at 17:30, Guozhang Wang wrote:
> > > > > Hello Colin,
> > > > >
> > > > > Thanks for the KIP. The proposal itself looks good to me; but could you
> > > > > elaborate a bit more on the rejected alternative of reusing
> > > > > IncrementalAlterConfigs? What do you mean by complex string manipulation,
> > > > > as well as error conditions?
> > > > >
> > > > > Guozhang
> > > > >
> > > > >
> > > > > On Fri, May 1, 2020 at 5:12 PM Colin McCabe  wrote:
> > > > >
> > > > > > On Fri, May 1, 2020, at 08:35, Aneel Nazareth wrote:
> > > > > > > Hi Colin,
> > > > > > >
> > > > > > > Thanks for the KIP. Is it also in scope to add support for the new API
> > > > > > > to the Admin interface and the implementation in KafkaAdminClient?
> > > > > > >
> > > > > >
> > > > > > Hi Aneel,
> > > > > >
> > > > > > Yes, we will have a Java API.  The new Admin API is described in the KIP.
> > > > > >
> > > > > > best,
> > > > > > Colin
> > > > > >
> > > > > >
> > > > > > > On Fri, May 1, 2020 at 1:18 AM Colin McCabe <cmcc...@apache.org> wrote:
> > > > > > > >

Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-04 Thread Guozhang Wang
Cool, that makes sense.

Guozhang


On Mon, May 4, 2020 at 2:50 PM Colin McCabe  wrote:

> I think once something becomes more complex than just key = value it's
> time to consider an official Kafka API, rather than trying to fit it into
> AlterConfigs.  For example, for client quotas, we have KIP-546.
>
> There are just so many reasons.  Real Kafka APIs have well-defined
> compatibility policies, Java types defined that make them easy to use, and
> APIs that can return partial results rather than needing to do the
> filtering on the client side.
>
> best,
> Colin
>
>
> On Mon, May 4, 2020, at 14:30, Guozhang Wang wrote:
> > Got it.
> >
> > Besides SCRAM, are there other scenarios that we may have such
> > "hierarchical" (I know the term may not be very accurate here :P) configs
> > such as "config1=[key1=value1, key2=value2]" compared with most common
> > pattern of "config1=value1" or "config1=value1,config2=value2"? For example
> > I know that quotas may be specified in the former pattern as well. If we
> > believe that such hierarchical configuration may be more common in the
> > future, I'm wondering whether we should consider supporting it more natively in
> > alter/describe config patterns.
> >
> >
> > Guozhang
> >
> >
> > On Mon, May 4, 2020 at 1:32 PM Colin McCabe  wrote:
> >
> > > If we use AlterConfigs then we end up parsing strings like
> > > 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]'
> > > on the broker into the same information that's currently in
> > > ScramUserAlteration.  This doesn't change the complexity of the
> > > command-line tool, since it does that parsing anyway.  But it does mean
> > > that other programs that wanted to interact with SCRAM via the API would
> > > not really have datatypes to describe what they were doing, just lumps of
> > > text.
> > >
> > > Another question is how would we even list SCRAM users if we were to
> > > re-purpose AlterConfigs / DescribeConfigs for this.  I suppose if we wanted
> > > to go down this path we could define a new resource and use DescribeConfigs
> > > to describe its keys.  But its values would always have to be returned as
> > > null by DescribeConfigs, since they would be considered "sensitive."
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Sun, May 3, 2020, at 17:30, Guozhang Wang wrote:
> > > > Hello Colin,
> > > >
> > > > Thanks for the KIP. The proposal itself looks good to me; but could you
> > > > elaborate a bit more on the rejected alternative of reusing
> > > > IncrementalAlterConfigs? What do you mean by complex string manipulation,
> > > > as well as error conditions?
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Fri, May 1, 2020 at 5:12 PM Colin McCabe  wrote:
> > > >
> > > > > On Fri, May 1, 2020, at 08:35, Aneel Nazareth wrote:
> > > > > > Hi Colin,
> > > > > >
> > > > > > Thanks for the KIP. Is it also in scope to add support for the new API
> > > > > > to the Admin interface and the implementation in KafkaAdminClient?
> > > > > >
> > > > >
> > > > > Hi Aneel,
> > > > >
> > > > > Yes, we will have a Java API.  The new Admin API is described in the KIP.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > >
> > > > > > On Fri, May 1, 2020 at 1:18 AM Colin McCabe  wrote:
> > > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I posted a KIP about adding a new SCRAM configuration API on the
> > > > > > > broker.  Check it out here if you get a chance:
> > > > > > > https://cwiki.apache.org/confluence/x/ihERCQ
> > > > > > >
> > > > > > > cheers,
> > > > > > > Colin
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-04 Thread Colin McCabe
I think once something becomes more complex than just key = value it's time to 
consider an official Kafka API, rather than trying to fit it into AlterConfigs. 
 For example, for client quotas, we have KIP-546. 

There are just so many reasons.  Real Kafka APIs have well-defined 
compatibility policies, Java types defined that make them easy to use, and APIs 
that can return partial results rather than needing to do the filtering on the 
client side.

best,
Colin


On Mon, May 4, 2020, at 14:30, Guozhang Wang wrote:
> Got it.
> 
> Besides SCRAM, are there other scenarios that we may have such
> "hierarchical" (I know the term may not be very accurate here :P) configs
> such as "config1=[key1=value1, key2=value2]" compared with most common
> pattern of "config1=value1" or "config1=value1,config2=value2"? For example
> I know that quotas may be specified in the former pattern as well. If we
> believe that such hierarchical configuration may be more common in the
> future, I'm wondering whether we should consider supporting it more natively in
> alter/describe config patterns.
> 
> 
> Guozhang
> 
> 
> On Mon, May 4, 2020 at 1:32 PM Colin McCabe  wrote:
> 
> > If we use AlterConfigs then we end up parsing strings like
> > 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]'
> > on the broker into the same information that's currently in
> > ScramUserAlteration.  This doesn't change the complexity of the
> > command-line tool, since it does that parsing anyway.  But it does mean
> > that other programs that wanted to interact with SCRAM via the API would
> > not really have datatypes to describe what they were doing, just lumps of
> > text.
> >
> > Another question is how would we even list SCRAM users if we were to
> > re-purpose AlterConfigs / DescribeConfigs for this.  I suppose if we wanted
> > to go down this path we could define a new resource and use DescribeConfigs
> > to describe its keys.  But its values would always have to be returned as
> > null by DescribeConfigs, since they would be considered "sensitive."
> >
> > best,
> > Colin
> >
> >
> > On Sun, May 3, 2020, at 17:30, Guozhang Wang wrote:
> > > Hello Colin,
> > >
> > > Thanks for the KIP. The proposal itself looks good to me; but could you
> > > elaborate a bit more on the rejected alternative of reusing
> > > IncrementalAlterConfigs? What do you mean by complex string manipulation,
> > > as well as error conditions?
> > >
> > > Guozhang
> > >
> > >
> > > On Fri, May 1, 2020 at 5:12 PM Colin McCabe  wrote:
> > >
> > > > On Fri, May 1, 2020, at 08:35, Aneel Nazareth wrote:
> > > > > Hi Colin,
> > > > >
> > > > > Thanks for the KIP. Is it also in scope to add support for the new API
> > > > > to the Admin interface and the implementation in KafkaAdminClient?
> > > > >
> > > >
> > > > Hi Aneel,
> > > >
> > > > Yes, we will have a Java API.  The new Admin API is described in the KIP.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > > On Fri, May 1, 2020 at 1:18 AM Colin McCabe  wrote:
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I posted a KIP about adding a new SCRAM configuration API on the
> > > > > > broker.  Check it out here if you get a chance:
> > > > > > https://cwiki.apache.org/confluence/x/ihERCQ
> > > > > >
> > > > > > cheers,
> > > > > > Colin
> > > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> 
> 
> -- 
> -- Guozhang
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-04 Thread Guozhang Wang
Got it.

Besides SCRAM, are there other scenarios that we may have such
"hierarchical" (I know the term may not be very accurate here :P) configs
such as "config1=[key1=value1, key2=value2]" compared with most common
pattern of "config1=value1" or "config1=value1,config2=value2"? For example
I know that quotas may be specified in the former pattern as well. If we
believe that such hierarchical configuration may be more common in the
future, I'm wondering whether we should consider supporting it more natively in
alter/describe config patterns.


Guozhang


On Mon, May 4, 2020 at 1:32 PM Colin McCabe  wrote:

> If we use AlterConfigs then we end up parsing strings like
> 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]'
> on the broker into the same information that's currently in
> ScramUserAlteration.  This doesn't change the complexity of the
> command-line tool, since it does that parsing anyway.  But it does mean
> that other programs that wanted to interact with SCRAM via the API would
> not really have datatypes to describe what they were doing, just lumps of
> text.
>
> Another question is how would we even list SCRAM users if we were to
> re-purpose AlterConfigs / DescribeConfigs for this.  I suppose if we wanted
> to go down this path we could define a new resource and use DescribeConfigs
> to describe its keys.  But its values would always have to be returned as
> null by DescribeConfigs, since they would be considered "sensitive."
>
> best,
> Colin
>
>
> On Sun, May 3, 2020, at 17:30, Guozhang Wang wrote:
> > Hello Colin,
> >
> > Thanks for the KIP. The proposal itself looks good to me; but could you
> > elaborate a bit more on the rejected alternative of reusing
> > IncrementalAlterConfigs? What do you mean by complex string manipulation,
> > as well as error conditions?
> >
> > Guozhang
> >
> >
> > On Fri, May 1, 2020 at 5:12 PM Colin McCabe  wrote:
> >
> > > On Fri, May 1, 2020, at 08:35, Aneel Nazareth wrote:
> > > > Hi Colin,
> > > >
> > > > Thanks for the KIP. Is it also in scope to add support for the new API
> > > > to the Admin interface and the implementation in KafkaAdminClient?
> > > >
> > >
> > > Hi Aneel,
> > >
> > > Yes, we will have a Java API.  The new Admin API is described in the KIP.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > > On Fri, May 1, 2020 at 1:18 AM Colin McCabe  wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > I posted a KIP about adding a new SCRAM configuration API on the
> > > > > broker.  Check it out here if you get a chance:
> > > > > https://cwiki.apache.org/confluence/x/ihERCQ
> > > > >
> > > > > cheers,
> > > > > Colin
> > > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-04 Thread Colin McCabe
If we use AlterConfigs then we end up parsing strings like 
'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]'
 on the broker into the same information that's currently in 
ScramUserAlteration.  This doesn't change the complexity of the command-line 
tool, since it does that parsing anyway.  But it does mean that other programs 
that wanted to interact with SCRAM via the API would not really have datatypes 
to describe what they were doing, just lumps of text.
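
To see what "lumps of text" means in practice, a small sketch of the parsing
every client (and the broker) would otherwise need, with its ad-hoc error
conditions (illustrative code, not anything in Kafka):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ScramConfigStringParser {
        private static final Pattern ENTRY =
                Pattern.compile("(SCRAM-SHA-(?:256|512))=\\[([^\\]]*)\\]");

        // Parses a value like the string above into mechanism -> {field -> value}.
        static Map<String, Map<String, String>> parse(String value) {
            Map<String, Map<String, String>> out = new LinkedHashMap<>();
            Matcher m = ENTRY.matcher(value);
            while (m.find()) {
                Map<String, String> fields = new LinkedHashMap<>();
                for (String kv : m.group(2).split(",")) {
                    String[] parts = kv.split("=", 2);
                    if (parts.length != 2) {   // one of many possible error conditions
                        throw new IllegalArgumentException("Malformed field: " + kv);
                    }
                    fields.put(parts[0], parts[1]);
                }
                out.put(m.group(1), fields);
            }
            return out;
        }

        public static void main(String[] args) {
            System.out.println(parse(
                    "SCRAM-SHA-256=[iterations=8192,password=alice-secret],"
                  + "SCRAM-SHA-512=[password=alice-secret]"));
        }
    }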

Another question is how would we even list SCRAM users if we were to re-purpose 
AlterConfigs / DescribeConfigs for this.  I suppose if we wanted to go down 
this path we could define a new resource and use DescribeConfigs to describe 
its keys.  But its values would always have to be returned as null by 
DescribeConfigs, since they would be considered "sensitive."

best,
Colin


On Sun, May 3, 2020, at 17:30, Guozhang Wang wrote:
> Hello Colin,
> 
> Thanks for the KIP. The proposal itself looks good to me; but could you
> elaborate a bit more on the rejected alternative of reusing
> IncrementalAlterConfigs? What do you mean by complex string manipulation,
> as well as error conditions?
> 
> Guozhang
> 
> 
> On Fri, May 1, 2020 at 5:12 PM Colin McCabe  wrote:
> 
> > On Fri, May 1, 2020, at 08:35, Aneel Nazareth wrote:
> > > Hi Colin,
> > >
> > > Thanks for the KIP. Is it also in scope to add support for the new API
> > > to the Admin interface and the implementation in KafkaAdminClient?
> > >
> >
> > Hi Aneel,
> >
> > Yes, we will have a Java API.  The new Admin API is described in the KIP.
> >
> > best,
> > Colin
> >
> >
> > > On Fri, May 1, 2020 at 1:18 AM Colin McCabe  wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I posted a KIP about adding a new SCRAM configuration API on the
> > > > broker.  Check it out here if you get a chance:
> > > > https://cwiki.apache.org/confluence/x/ihERCQ
> > > >
> > > > cheers,
> > > > Colin
> > >
> >
> 
> 
> -- 
> -- Guozhang
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-03 Thread Guozhang Wang
Hello Colin,

Thanks for the KIP. The proposal itself looks good to me; but could you
elaborate a bit more on the rejected alternative of reusing
IncrementalAlterConfigs? What do you mean by complex string manipulation,
as well as error conditions?

Guozhang


On Fri, May 1, 2020 at 5:12 PM Colin McCabe  wrote:

> On Fri, May 1, 2020, at 08:35, Aneel Nazareth wrote:
> > Hi Colin,
> >
> > Thanks for the KIP. Is it also in scope to add support for the new API
> > to the Admin interface and the implementation in KafkaAdminClient?
> >
>
> Hi Aneel,
>
> Yes, we will have a Java API.  The new Admin API is described in the KIP.
>
> best,
> Colin
>
>
> > On Fri, May 1, 2020 at 1:18 AM Colin McCabe  wrote:
> > >
> > > Hi all,
> > >
> > > I posted a KIP about adding a new SCRAM configuration API on the
> > > broker.  Check it out here if you get a chance:
> > > https://cwiki.apache.org/confluence/x/ihERCQ
> > >
> > > cheers,
> > > Colin
> >
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-01 Thread Colin McCabe
On Fri, May 1, 2020, at 08:35, Aneel Nazareth wrote:
> Hi Colin,
> 
> Thanks for the KIP. Is it also in scope to add support for the new API
> to the Admin interface and the implementation in KafkaAdminClient?
> 

Hi Aneel,

Yes, we will have a Java API.  The new Admin API is described in the KIP.

best,
Colin


> On Fri, May 1, 2020 at 1:18 AM Colin McCabe  wrote:
> >
> > Hi all,
> >
> > I posted a KIP about adding a new SCRAM configuration API on the broker.  
> > Check it out here if you get a chance: 
> > https://cwiki.apache.org/confluence/x/ihERCQ
> >
> > cheers,
> > Colin
>


Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-01 Thread Aneel Nazareth
Hi Colin,

Thanks for the KIP. Is it also in scope to add support for the new API
to the Admin interface and the implementation in KafkaAdminClient?

On Fri, May 1, 2020 at 1:18 AM Colin McCabe  wrote:
>
> Hi all,
>
> I posted a KIP about adding a new SCRAM configuration API on the broker.  
> Check it out here if you get a chance: 
> https://cwiki.apache.org/confluence/x/ihERCQ
>
> cheers,
> Colin


[DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-04-30 Thread Colin McCabe
Hi all,

I posted a KIP about adding a new SCRAM configuration API on the broker.  Check 
it out here if you get a chance: https://cwiki.apache.org/confluence/x/ihERCQ

cheers,
Colin