On Thu, May 7, 2020, at 04:27, Rajini Sivaram wrote:
> Hi Colin,
>
> Thanks for the KIP. A couple of comments below:
>
> 1) The SCRAM password is never sent over the wire today, not by clients and
> not by tools. A salted-hashed version of it, stored in ZooKeeper, is sent
> over the wire to ZK and read by brokers from ZK. Another randomly-salted-hashed
> version is sent by clients during authentication. The transformation of the
> password into the salted version is performed by the kafka-configs tool. I
> think we should continue to do the same.
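For context, the client-side transformation Rajini describes (hashing the password with a random salt before anything goes over the wire) can be sketched roughly as follows. RFC 5802 defines the salted password as Hi(password, salt, i), which is PBKDF2 with HMAC as the PRF; the class and method names here are illustrative, not the actual kafka-configs internals.

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;

// Illustrative sketch, not the actual kafka-configs code: derive the
// salted password locally so only (salt, iterations, hash) ever leaves
// the tool, never the cleartext password.
public class ScramHashSketch {
    // RFC 5802's Hi() is PBKDF2 with HMAC as the PRF; for SCRAM-SHA-256
    // the output is 256 bits (32 bytes).
    static byte[] saltedPassword(String password, byte[] salt, int iterations) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, iterations, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);  // fresh random salt per credential
        byte[] hash = saltedPassword("alice-secret", salt, 8192);
        System.out.println("salted hash is " + hash.length + " bytes");
    }
}
```

The salt and iteration count are stored alongside the hash, which is enough to verify clients later without anyone ever holding the cleartext password.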
Thanks, Rajini. I didn't realize we used a randomly generated salt here. That is indeed a very good idea, and something we should continue doing. I changed the RPC to add a salt field.

I do still feel that plaintext is inherently insecure. But if we can easily add a little more security then we should do it. I feel a bit silly now :)

> We should still treat this credential as a
> `password` config to ensure we don't log it anywhere. One of the biggest
> advantages of SCRAM is that the broker (or ZooKeeper) is never in possession
> of the client password: it has the ability to verify the client password, but
> not to impersonate the user with that password. The proposed API breaks that,
> and hence we should perform the transformation in the tool, not the broker.

Agreed.

> 2) The naming in the API seems a bit confusing. A SCRAM mechanism is a thing
> in SASL, so ScramMechanism would be SCRAM-SHA-256 or SCRAM-SHA-512. These
> are standard names (but we use underscores instead of hyphens for the enums).
> The underlying algorithms are internal and don't need to be in the public
> API. We are using ScramMechanism in the new API to refer to a
> ScramCredential, and ScramMechanismType to use strings that are not the
> actual SCRAM mechanism names. Perhaps these could just be `ScramMechanism`
> and `ScramCredential` like they are currently in the Kafka codebase, but
> refactored to separate out internals from the public API?

Good point. I changed the API so that ScramMechanism is the enum, ScramMechanismInfo is the enum plus the number of iterations, and ScramCredential is the enum, iterations, salt, and password data. I also changed the salt and password to be bytes fields instead of strings to reflect the fact that they are binary data.
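A minimal sketch of that datatype split (ScramMechanism as the bare enum, ScramMechanismInfo adding the iteration count, ScramCredential adding the binary salt and password data). These are illustrative shapes only, not the final KIP-554 classes.

```java
// Illustrative shapes for the API layout described in the email;
// the actual KIP-554 classes may differ.
public class ScramTypesSketch {
    // The standard SASL mechanism names, with underscores for the enum.
    enum ScramMechanism { SCRAM_SHA_256, SCRAM_SHA_512 }

    // Mechanism plus the iteration count.
    static class ScramMechanismInfo {
        final ScramMechanism mechanism;
        final int iterations;
        ScramMechanismInfo(ScramMechanism mechanism, int iterations) {
            this.mechanism = mechanism;
            this.iterations = iterations;
        }
    }

    // Mechanism, iterations, plus the salt and salted-password data as
    // bytes rather than strings, since they are binary.
    static class ScramCredential extends ScramMechanismInfo {
        final byte[] salt;
        final byte[] saltedPassword;
        ScramCredential(ScramMechanism mechanism, int iterations,
                        byte[] salt, byte[] saltedPassword) {
            super(mechanism, iterations);
            this.salt = salt;
            this.saltedPassword = saltedPassword;
        }
    }

    public static void main(String[] args) {
        ScramCredential cred = new ScramCredential(
                ScramMechanism.SCRAM_SHA_256, 8192, new byte[16], new byte[32]);
        System.out.println(cred.mechanism + " with " + cred.iterations + " iterations");
    }
}
```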
best,
Colin

> Regards,
>
> Rajini
>
> On Thu, May 7, 2020 at 5:48 AM Colin McCabe <cmcc...@apache.org> wrote:
>
> > On Tue, May 5, 2020, at 08:12, Tom Bentley wrote:
> > > Hi Colin,
> > >
> > > SCRAM is better than SASL PLAIN because it doesn't send the password over
> > > the wire in the clear. Presumably this property is important for some
> > > users who have chosen to use SCRAM. This proposal does send the password
> > > in the clear when setting the password. That doesn't mean it can't be
> > > used securely (e.g. connect over TLS–if available–when setting or
> > > changing a password, or connect to the broker from the same machine over
> > > localhost), but won't this just result in some CVE against Kafka? It's a
> > > tricky problem to solve in a cluster without TLS (you basically just end
> > > up reinventing TLS).
> >
> > Hi Tom,
> >
> > Thanks for the thoughtful reply.
> >
> > If you don't set up SSL, we currently do send passwords in the clear over
> > the wire. There's just no other option -- as you yourself said, we're not
> > going to reinvent TLS from first principles. So this KIP isn't changing
> > our policy about this.
> >
> > One example of this: if you have an unencrypted ZooKeeper connection, your
> > SCRAM password currently goes over the wire in the clear when you run the
> > kafka-configs.sh command. Another example: if you have one plaintext Kafka
> > endpoint and one SSL Kafka endpoint, you can send the keystore password
> > for the SSL endpoint in cleartext over the plaintext endpoint.
> >
> > > I know you're not a fan of the ever-growing list of configs, but when
> > > I wrote KIP-506 I suggested some configs which could have been used to at
> > > least make it secure by default.
> >
> > I think if we want to add a configuration like that, it should be done in
> > a separate KIP, because it affects more than just SCRAM.
> > We would also have to disallow setting any "sensitive" configuration over
> > IncrementalAlterConfigs / AlterConfigs.
> >
> > Although I haven't thought about it that much, I doubt that such a KIP
> > would be successful... Think about who still uses plaintext mode.
> > Developers use it for testing things locally. They don't want additional
> > restrictions on what they can do. Sysadmins who are really convinced that
> > their network is secure (I know, I know...) or who are setting up a
> > proof-of-concept might use plaintext mode. They don't want restrictions
> > either.
> >
> > If the network is insecure and you're using plaintext, then we shouldn't
> > allow you to send or receive messages either, since they could contain
> > sensitive data. So I think it's impossible to follow this logic very far
> > before you arrive at "plaintext delenda est". And indeed, there have been
> > people who have said we should remove the option to use plaintext mode
> > from Kafka. But so far, we're not ready to do that.
> >
> > > You mentioned on the discussion for KIP-595 that there's a bootstrapping
> > > problem to be solved in this area. Maybe KIP-595 is the better place for
> > > that, but I wondered if you had any thoughts about it. I thought about
> > > using a broker CLI option to read a password from stdin (`--scram-user tom`
> > > would prompt for the password for user 'tom' on boot); that way the
> > > password doesn't have to be in the command-line arguments or in a file.
> > > In fact this could be a solution to both the bootstrap problem and the
> > > plaintext password problem in the absence of TLS.
> >
> > Yeah, I think this would be a good improvement. The ability to read a
> > password from stdin without echoing it to the terminal would be nice. But
> > it also deserves its own separate KIP, and should also apply to the other
> > things you can do with kafka-configs.sh (SSL passwords, etc.)
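The prompt-on-boot idea discussed above might look roughly like this. `Console.readPassword` suppresses terminal echo; when no terminal is attached (e.g. piped input), a plain stdin read is the fallback. The class name and prompt wording are hypothetical, not anything from the Kafka codebase.

```java
import java.io.BufferedReader;
import java.io.Console;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical sketch of a "--scram-user tom" boot prompt: read the
// password from stdin without echoing it, so it never appears in
// command-line arguments or a file.
public class PasswordPrompt {
    static String readPassword(String user) throws IOException {
        Console console = System.console();
        if (console != null) {
            // Interactive terminal: prompt without echoing the input.
            char[] pw = console.readPassword("Password for SCRAM user %s: ", user);
            return new String(pw);
        }
        // No attached terminal (e.g. piped input): read one line from stdin.
        return new BufferedReader(new InputStreamReader(System.in)).readLine();
    }
}
```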
> >
> > best,
> > Colin
> >
> > > Kind regards,
> > >
> > > Tom
> > >
> > > Cheers,
> > >
> > > Tom
> > >
> > > On Tue, May 5, 2020 at 12:52 AM Guozhang Wang <wangg...@gmail.com> wrote:
> > >
> > > > Cool, that makes sense.
> > > >
> > > > Guozhang
> > > >
> > > > On Mon, May 4, 2020 at 2:50 PM Colin McCabe <cmcc...@apache.org> wrote:
> > > >
> > > > > I think once something becomes more complex than just key = value,
> > > > > it's time to consider an official Kafka API, rather than trying to
> > > > > fit it into AlterConfigs. For example, for client quotas, we have
> > > > > KIP-546.
> > > > >
> > > > > There are just so many reasons. Real Kafka APIs have well-defined
> > > > > compatibility policies, Java types that make them easy to use, and
> > > > > APIs that can return partial results rather than needing to do the
> > > > > filtering on the client side.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > On Mon, May 4, 2020, at 14:30, Guozhang Wang wrote:
> > > > > > Got it.
> > > > > >
> > > > > > Besides SCRAM, are there other scenarios where we may have such
> > > > > > "hierarchical" (I know the term may not be very accurate here :P)
> > > > > > configs, such as "config1=[key1=value1, key2=value2]", compared
> > > > > > with the most common patterns of "config1=value1" or
> > > > > > "config1=value1,config2=value2"? For example, I know that quotas
> > > > > > may be specified in the former pattern as well. If we believe that
> > > > > > such hierarchical configurations may be more common in the future,
> > > > > > I'm wondering whether we should consider supporting them more
> > > > > > natively in alter/describe config patterns.
> > > > > >
> > > > > > Guozhang
> > > > > >
> > > > > > On Mon, May 4, 2020 at 1:32 PM Colin McCabe <cmcc...@apache.org> wrote:
> > > > > >
> > > > > > > If we use AlterConfigs then we end up parsing strings like
> > > > > > > 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]'
> > > > > > > on the broker into the same information that's currently in
> > > > > > > ScramUserAlteration. This doesn't change the complexity of the
> > > > > > > command-line tool, since it does that parsing anyway. But it does
> > > > > > > mean that other programs that wanted to interact with SCRAM via
> > > > > > > the API would not really have datatypes to describe what they
> > > > > > > were doing, just lumps of text.
> > > > > > >
> > > > > > > Another question is how we would even list SCRAM users if we were
> > > > > > > to re-purpose AlterConfigs / DescribeConfigs for this. I suppose
> > > > > > > if we wanted to go down this path, we could define a new resource
> > > > > > > and use DescribeConfigs to describe its keys. But its values
> > > > > > > would always have to be returned as null by DescribeConfigs,
> > > > > > > since they would be considered "sensitive."
> > > > > > >
> > > > > > > best,
> > > > > > > Colin
> > > > > > >
> > > > > > > On Sun, May 3, 2020, at 17:30, Guozhang Wang wrote:
> > > > > > > > Hello Colin,
> > > > > > > >
> > > > > > > > Thanks for the KIP. The proposal itself looks good to me; but
> > > > > > > > could you elaborate a bit more on the rejected alternative of
> > > > > > > > reusing IncrementalAlterConfigs? What do you mean by complex
> > > > > > > > string manipulation, as well as error conditions?
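To illustrate the "lumps of text" point: reusing AlterConfigs would mean the tool, the broker, and any other client each hand-parse strings like the one quoted above into per-mechanism key/value maps. A rough, naive sketch of that parsing (illustrative only; it does not handle escaping or values containing brackets):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the string manipulation the AlterConfigs route
// would require: turn "SCRAM-SHA-256=[iterations=8192,password=...]"
// into per-mechanism option maps. A dedicated API replaces this with
// real datatypes on the wire.
public class ScramConfigParser {
    static Map<String, Map<String, String>> parse(String value) {
        Map<String, Map<String, String>> result = new HashMap<>();
        // Split on "]" (optionally followed by ",") between mechanisms,
        // then on "=[" to separate each mechanism name from its options.
        for (String entry : value.split("\\],?")) {
            String[] parts = entry.split("=\\[", 2);
            Map<String, String> opts = new HashMap<>();
            for (String kv : parts[1].split(",")) {
                String[] pair = kv.split("=", 2);
                opts.put(pair[0], pair[1]);
            }
            result.put(parts[0], opts);
        }
        return result;
    }

    public static void main(String[] args) {
        String value = "SCRAM-SHA-256=[iterations=8192,password=alice-secret],"
                + "SCRAM-SHA-512=[password=alice-secret]";
        System.out.println(parse(value));
    }
}
```

Every consumer of the config value would need equivalent ad-hoc parsing, with no compatibility guarantees on the format, which is part of the argument for a first-class API.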
> > > > > > > >
> > > > > > > > Guozhang
> > > > > > > >
> > > > > > > > On Fri, May 1, 2020 at 5:12 PM Colin McCabe <cmcc...@apache.org> wrote:
> > > > > > > >
> > > > > > > > > On Fri, May 1, 2020, at 08:35, Aneel Nazareth wrote:
> > > > > > > > > > Hi Colin,
> > > > > > > > > >
> > > > > > > > > > Thanks for the KIP. Is it also in scope to add support for
> > > > > > > > > > the new API to the Admin interface and the implementation
> > > > > > > > > > in KafkaAdminClient?
> > > > > > > > >
> > > > > > > > > Hi Aneel,
> > > > > > > > >
> > > > > > > > > Yes, we will have a Java API. The new Admin API is described
> > > > > > > > > in the KIP.
> > > > > > > > >
> > > > > > > > > best,
> > > > > > > > > Colin
> > > > > > > > >
> > > > > > > > > > On Fri, May 1, 2020 at 1:18 AM Colin McCabe <cmcc...@apache.org> wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi all,
> > > > > > > > > > >
> > > > > > > > > > > I posted a KIP about adding a new SCRAM configuration API
> > > > > > > > > > > on the broker. Check it out here if you get a chance:
> > > > > > > > > > > https://cwiki.apache.org/confluence/x/ihERCQ
> > > > > > > > > > >
> > > > > > > > > > > cheers,
> > > > > > > > > > > Colin
> > > > > > > >
> > > > > > > > --
> > > > > > > > -- Guozhang
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > >
> > > > --
> > > > -- Guozhang