[ https://issues.apache.org/jira/browse/KAFKA-6195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390947#comment-16390947 ]
Rajini Sivaram commented on KAFKA-6195:
---------------------------------------

In general it is good to avoid configuration options if we can determine the behaviour automatically, but in this case my preference is for a config option, because a DNS lookup can add delays in some environments. It would be good to hear other opinions.

SSL certificates can contain wildcarded hostnames, which makes it easy to use a single certificate for the whole cluster. If you are using IP addresses instead of hostnames, I believe you need to specify the full IP address, but a single certificate can hold multiple addresses (or hostnames); see the certificate sketch appended at the end of this message.

> DNS alias support for secured connections
> -----------------------------------------
>
>                 Key: KAFKA-6195
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6195
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients
>            Reporter: Jonathan Skrzypek
>            Priority: Major
>
> It seems clients can't use a DNS alias in front of a secured Kafka cluster, so applications can only specify a list of hosts or IPs in bootstrap.servers instead of a single alias covering all cluster nodes.
>
> Using an alias in bootstrap.servers results in the following error:
>
> javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTH_FAILED state. [Caused by javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]]
>
> When using SASL/Kerberos authentication, the Kafka server principal is of the form kafka/broker1.hostname....@example.com. Kerberos requires that hosts can be resolved by their FQDNs.
>
> During the SASL handshake the client creates a SASL token and sends it to Kafka for authentication, but to create that token the client first needs to validate that the broker's Kerberos principal is a valid one.
>
> There are 3 potential options:
>
> 1. Create a single Kerberos principal that is tied not to a host but to an alias, and reference it in the broker JAAS file. But I think the Kerberos infrastructure would refuse to validate it, so the SASL handshake would still fail.
>
> 2. Modify the client bootstrap mechanism to detect whether bootstrap.servers contains a DNS alias. If it does, resolve and expand the alias to retrieve all hostnames behind it and add them to the list of nodes. This could be done by modifying parseAndValidateAddresses() in ClientUtils (see the alias-expansion sketch appended at the end of this message).
>
> 3. Add a cluster.alias parameter that would be handled by the logic above. Having a separate parameter avoids confusion about how bootstrap.servers works behind the scenes.
>
> Thoughts? I would be happy to contribute the change for any of the options.
>
> I believe the ability to use a DNS alias instead of a static list of brokers would bring good deployment flexibility.
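To illustrate the point above about a single certificate covering several brokers, here is a minimal sketch (plain JDK, not Kafka code; the file name broker.crt and the class name PrintSans are placeholders) that prints the SubjectAlternativeName entries of a certificate, which can mix wildcarded DNS names and IP addresses:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

// Prints the SubjectAlternativeName entries of a certificate. A single
// cluster-wide certificate can carry several DNS names (possibly wildcarded)
// and/or IP addresses.
public class PrintSans {
    public static void main(String[] args) throws Exception {
        // "broker.crt" is a placeholder path to a PEM/DER encoded certificate.
        try (FileInputStream in = new FileInputStream(args.length > 0 ? args[0] : "broker.crt")) {
            X509Certificate cert = (X509Certificate)
                    CertificateFactory.getInstance("X.509").generateCertificate(in);
            Collection<List<?>> sans = cert.getSubjectAlternativeNames();
            if (sans == null) {
                System.out.println("Certificate has no SubjectAlternativeName extension");
                return;
            }
            for (List<?> san : sans) {
                // Entry format is [type, value]; type 2 = dNSName, type 7 = iPAddress (RFC 5280).
                Integer type = (Integer) san.get(0);
                String label = type == 2 ? "DNS:" : type == 7 ? "IP:" : type + ":";
                System.out.println(label + san.get(1));
            }
        }
    }
}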
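And a rough sketch of the alias expansion described in option 2, using plain JDK DNS calls (InetAddress.getAllByName plus a reverse lookup via getCanonicalHostName). The class and method names are illustrative only, not the actual ClientUtils.parseAndValidateAddresses() code, and the alias kafka-cluster.example.com is hypothetical:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

// Sketch of expanding a bootstrap alias to all addresses behind it, then
// reverse-resolving each address to a canonical FQDN so that the Kerberos
// service principal (kafka/<fqdn>@REALM) can be matched.
public class AliasExpansion {

    static List<InetSocketAddress> expandAlias(String alias, int port) throws UnknownHostException {
        List<InetSocketAddress> nodes = new ArrayList<>();
        for (InetAddress addr : InetAddress.getAllByName(alias)) {
            // getCanonicalHostName() performs the reverse lookup; it is also the
            // step that can add latency, which is the argument for gating this
            // behaviour behind a configuration option.
            String fqdn = addr.getCanonicalHostName();
            nodes.add(new InetSocketAddress(fqdn, port));
        }
        return nodes;
    }

    public static void main(String[] args) throws Exception {
        // "kafka-cluster.example.com" stands in for an alias fronting the brokers.
        for (InetSocketAddress node : expandAlias("kafka-cluster.example.com", 9092)) {
            System.out.println(node.getHostString() + ":" + node.getPort());
        }
    }
}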