[ 
https://issues.apache.org/jira/browse/KAFKA-6195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16389585#comment-16389585
 ] 

Rajini Sivaram commented on KAFKA-6195:
---------------------------------------

[~jonathanskrzypek] Sorry for the delay, I was out until today. I had a look at 
the PR. I think it will break SSL hostname verification when IP addresses are 
used instead of hostnames. At the moment, we can create certificates with the 
broker IP address in the SubjectAlternativeName. Clients using IP addresses in 
the bootstrap server list connect successfully since the address used for the 
connection is the same as the address in the certificate. With the changes in 
the PR, clients will do a reverse DNS lookup and match the hostname returned by 
the lookup against the IP address in the certificate, and the SSL handshake will fail.
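
To illustrate the concern (a minimal sketch, not the PR's code; the IP address 
and hostname below are made up):

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class ReverseLookupExample {
        public static void main(String[] args) throws UnknownHostException {
            // IP address taken from bootstrap.servers (example value)
            InetAddress addr = InetAddress.getByName("10.0.0.12");
            // Reverse lookup returns e.g. "broker1.internal.example.com"
            String reverseName = addr.getCanonicalHostName();
            // Hostname verification would then compare reverseName against the
            // broker certificate; a certificate carrying only the SAN
            // "IP:10.0.0.12" no longer matches, so the SSL handshake fails.
            System.out.println("Name used for verification: " + reverseName);
        }
    }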

This is a useful feature and the code makes sense, but I think it needs to be 
optional. Is there some way we can specify that a reverse DNS lookup is 
required (e.g. in the bootstrap server list itself, as part of the URL)?
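
For example, an opt-in client setting could look something like this (the 
property name below is purely hypothetical, not an existing Kafka config):

    import java.util.Properties;

    public class OptInLookupExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-alias.example.com:9093");
            // Hypothetical opt-in flag: only reverse-resolve bootstrap
            // addresses when the user explicitly asks for it.
            props.put("bootstrap.reverse.dns.lookup", "true");
            System.out.println(props);
        }
    }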

> DNS alias support for secured connections
> -----------------------------------------
>
>                 Key: KAFKA-6195
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6195
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients
>            Reporter: Jonathan Skrzypek
>            Priority: Major
>
> It seems clients can't use a DNS alias in front of a secured Kafka cluster.
> So applications can only specify a list of hosts or IPs in bootstrap.servers 
> instead of an alias encompassing all cluster nodes.
> Using an alias in bootstrap.servers results in the following error: 
> javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Fail to create credential. (63) - No service creds)]) 
> occurred when evaluating SASL token received from the Kafka Broker. Kafka 
> Client will go to AUTH_FAILED state. [Caused by 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Fail to create 
> credential. (63) - No service creds)]]
> When using SASL/Kerberos authentication, the Kafka server principal is of the 
> form kafka/broker1.hostname....@example.com
> Kerberos requires that the hosts can be resolved by their FQDNs.
> During the SASL handshake, the client creates a SASL token and then sends it 
> to the broker for authentication.
> But to create a SASL token, the client first needs to be able to validate that 
> the broker's Kerberos principal is a valid one.
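> To illustrate why the alias breaks this step (a rough sketch, not the actual 
> client code; the service name and alias below are made up):
>
>     public class ServicePrincipalExample {
>         public static void main(String[] args) {
>             String serviceName = "kafka";                      // sasl.kerberos.service.name
>             String connectedHost = "kafka-alias.example.com";  // alias from bootstrap.servers
>             // The client derives the broker's service principal from the host it connected to:
>             String servicePrincipal = serviceName + "/" + connectedHost;
>             // The KDC only has per-broker principals such as kafka/broker1.hostname....,
>             // so a ticket request for this principal fails with "No service creds".
>             System.out.println(servicePrincipal);
>         }
>     }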
> There are 3 potential options:
> 1. Create a single Kerberos principal not linked to a host but to an alias, 
> and reference it in the broker JAAS file.
> But I think the Kerberos infrastructure would refuse to validate it, so the 
> SASL handshake would still fail.
> 2. Modify the client bootstrap mechanism to detect whether bootstrap.servers 
> contains a DNS alias. If it does, resolve and expand the alias to retrieve 
> all hostnames behind it and add them to the list of nodes.
> This could be done by modifying parseAndValidateAddresses() in ClientUtils 
> (see the sketch after this list).
> 3. Add a cluster.alias parameter that would be handled by the logic above. 
> Having a separate parameter would avoid confusion about how bootstrap.servers 
> works behind the scenes.
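> A rough sketch of option 2 (not the actual ClientUtils code; the class name 
> and alias are illustrative only):
>
>     import java.net.InetAddress;
>     import java.net.InetSocketAddress;
>     import java.net.UnknownHostException;
>     import java.util.ArrayList;
>     import java.util.List;
>
>     public class AliasExpansionSketch {
>         // Expand a DNS alias from bootstrap.servers into one address per record behind it.
>         static List<InetSocketAddress> expandAlias(String host, int port) throws UnknownHostException {
>             List<InetSocketAddress> addresses = new ArrayList<>();
>             for (InetAddress addr : InetAddress.getAllByName(host)) {
>                 // Use the canonical hostname so the Kerberos service principal
>                 // (kafka/<fqdn>@REALM) matches the real broker rather than the alias.
>                 addresses.add(new InetSocketAddress(addr.getCanonicalHostName(), port));
>             }
>             return addresses;
>         }
>
>         public static void main(String[] args) throws UnknownHostException {
>             System.out.println(expandAlias("kafka-alias.example.com", 9093)); // alias is made up
>         }
>     }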
> Thoughts?
> I would be happy to contribute the change for any of the options.
> I believe the ability to use a DNS alias instead of a static list of brokers 
> would bring good deployment flexibility.



