This is an automated email from the ASF dual-hosted git repository.
chia7712 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git
The following commit(s) were added to refs/heads/trunk by this push:
new 73f0c22958c MINOR: Fix numbered list formatting in security
documentation (#21324)
73f0c22958c is described below
commit 73f0c22958caffbb6780f4e56b16925d367c1d40
Author: Ming-Yen Chung <[email protected]>
AuthorDate: Tue Jan 20 00:07:11 2026 +0800
MINOR: Fix numbered list formatting in security documentation (#21324)
The numbered lists in the security documentation were indented by 2–8
spaces, causing Markdown to treat them as plain text rather than lists.
This resulted in the list items being rendered without proper line
breaks and collapsed into a single paragraph.
### Changes:
- Removed excessive indentation from numbered list items (now start at
column 0)
- Added proper blank lines between list items for better readability
- Fixed bullet list indentation to properly nest under numbered items
(using 3 spaces)
- Added code fences (```) around code blocks for clearer formatting
Reviewers: Chia-Ping Tsai <[email protected]>
---
docs/security/authentication-using-sasl.md | 313 +++++++++++----------
.../encryption-and-authentication-using-ssl.md | 48 ++--
docs/security/security-overview.md | 23 +-
3 files changed, 206 insertions(+), 178 deletions(-)
diff --git a/docs/security/authentication-using-sasl.md
b/docs/security/authentication-using-sasl.md
index cfcb24d06fe..9cb55b3dd93 100644
--- a/docs/security/authentication-using-sasl.md
+++ b/docs/security/authentication-using-sasl.md
@@ -45,10 +45,11 @@ Brokers may also configure JAAS using the broker
configuration property `sasl.ja
user_admin="admin-secret" \
user_alice="alice-secret";
-If JAAS configuration is defined at different levels, the order of precedence
used is:
- * Broker configuration property
`listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config`
- * `{listenerName}.KafkaServer` section of static JAAS configuration
- * `KafkaServer` section of static JAAS configuration
+If JAAS configuration is defined at different levels, the order of precedence
used is:
+
+* Broker configuration property
`listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config`
+* `{listenerName}.KafkaServer` section of static JAAS configuration
+* `KafkaServer` section of static JAAS configuration
See GSSAPI (Kerberos), PLAIN, SCRAM, or non-production/production OAUTHBEARER
for example broker configurations.
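As an illustration (the listener and mechanism names below are hypothetical),
the highest-precedence option sets the JAAS configuration directly as a broker
property:
```
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" \
    password="admin-secret" \
    user_admin="admin-secret";
```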
@@ -64,20 +65,25 @@ See GSSAPI (Kerberos), PLAIN, SCRAM, or
non-production/production OAUTHBEARER fo
##### JAAS configuration using static config file
-To configure SASL authentication on the clients using static JAAS config file:
- 1. Add a JAAS config file with a client login section named
`KafkaClient`. Configure a login module in `KafkaClient` for the selected
mechanism as described in the examples for setting up GSSAPI (Kerberos), PLAIN,
SCRAM, or non-production/production OAUTHBEARER. For example, GSSAPI
credentials may be configured as:
-
- KafkaClient {
- com.sun.security.auth.module.Krb5LoginModule required
- useKeyTab=true
- storeKey=true
- keyTab="/etc/security/keytabs/kafka_client.keytab"
- principal="[email protected]";
- };
-
- 2. Pass the JAAS config file location as JVM parameter to each
client JVM. For example:
-
-
-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf
+To configure SASL authentication on the clients using a static JAAS config file:
+
+1. Add a JAAS config file with a client login section named `KafkaClient`.
Configure a login module in `KafkaClient` for the selected mechanism as
described in the examples for setting up GSSAPI (Kerberos), PLAIN, SCRAM, or
non-production/production OAUTHBEARER. For example, GSSAPI credentials may be
configured as:
+
+ ```
+ KafkaClient {
+ com.sun.security.auth.module.Krb5LoginModule required
+ useKeyTab=true
+ storeKey=true
+ keyTab="/etc/security/keytabs/kafka_client.keytab"
+ principal="[email protected]";
+ };
+ ```
+
+2. Pass the JAAS config file location as a JVM parameter to each client JVM.
For example:
+
+ ```
+ -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf
+ ```
### SASL configuration
@@ -85,23 +91,28 @@ SASL may be used with PLAINTEXT or SSL as the transport
layer using the security
#### SASL mechanisms
-Kafka supports the following SASL mechanisms:
- * GSSAPI (Kerberos)
- * PLAIN
- * SCRAM-SHA-256
- * SCRAM-SHA-512
- * OAUTHBEARER
+Kafka supports the following SASL mechanisms:
+
+* GSSAPI (Kerberos)
+* PLAIN
+* SCRAM-SHA-256
+* SCRAM-SHA-512
+* OAUTHBEARER
#### SASL configuration for Kafka brokers
- 1. Configure a SASL port in server.properties, by adding at least one
of SASL_PLAINTEXT or SASL_SSL to the _listeners_ parameter, which contains one
or more comma-separated values:
-
- listeners=SASL_PLAINTEXT://host.name:port
+1. Configure a SASL port in server.properties by adding at least one of
SASL_PLAINTEXT or SASL_SSL to the _listeners_ parameter, which contains one or
more comma-separated values:
-If you are only configuring a SASL port (or if you want the Kafka brokers to
authenticate each other using SASL) then make sure you set the same SASL
protocol for inter-broker communication:
-
- security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
+ ```
+ listeners=SASL_PLAINTEXT://host.name:port
+ ```
- 2. Select one or more supported mechanisms to enable in the broker and
follow the steps to configure SASL for the mechanism. To enable multiple
mechanisms in the broker, follow the steps here.
+ If you are only configuring a SASL port (or if you want the Kafka brokers
to authenticate each other using SASL), then make sure you set the same SASL
protocol for inter-broker communication:
+
+ ```
+ security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
+ ```
+
+2. Select one or more supported mechanisms to enable in the broker and follow
the steps to configure SASL for the mechanism. To enable multiple mechanisms in
the broker, follow the steps here.
#### SASL configuration for Kafka clients
SASL authentication is only supported for the new Java Kafka producer and
consumer; the older API is not supported.
@@ -114,50 +125,64 @@ Note: When establishing connections to brokers via SASL,
clients may perform a r
#### Prerequisites
- 1. **Kerberos**
-If your organization is already using a Kerberos server (for example, by using
Active Directory), there is no need to install a new server just for Kafka.
Otherwise you will need to install one, your Linux vendor likely has packages
for Kerberos and a short guide on how to install and configure it
([Ubuntu](https://help.ubuntu.com/community/Kerberos),
[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html)).
No [...]
- 2. **Create Kerberos Principals**
-If you are using the organization's Kerberos or Active Directory server, ask
your Kerberos administrator for a principal for each Kafka broker in your
cluster and for every operating system user that will access Kafka with
Kerberos authentication (via clients and tools).
-If you have installed your own Kerberos, you will need to create these
principals yourself using the following commands:
-
- $ sudo /usr/sbin/kadmin.local -q 'addprinc -randkey
kafka/{hostname}@{REALM}'
- $ sudo /usr/sbin/kadmin.local -q "ktadd -k
/etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
+1. **Kerberos**
+ If your organization is already using a Kerberos server (for example, by
using Active Directory), there is no need to install a new server just for
Kafka. Otherwise you will need to install one; your Linux vendor likely has
packages for Kerberos and a short guide on how to install and configure it
([Ubuntu](https://help.ubuntu.com/community/Kerberos),
[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html)).
[...]
+
+2. **Create Kerberos Principals**
+ If you are using the organization's Kerberos or Active Directory server,
ask your Kerberos administrator for a principal for each Kafka broker in your
cluster and for every operating system user that will access Kafka with
Kerberos authentication (via clients and tools).
- 3. **Make sure all hosts can be reachable using hostnames** \- it is a
Kerberos requirement that all your hosts can be resolved with their FQDNs.
+ If you have installed your own Kerberos, you will need to create these
principals yourself using the following commands (a quick keytab sanity check
is sketched after this list):
+
+ ```
+ $ sudo /usr/sbin/kadmin.local -q 'addprinc -randkey
kafka/{hostname}@{REALM}'
+ $ sudo /usr/sbin/kadmin.local -q "ktadd -k
/etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
+ ```
+
+3. **Make sure all hosts are reachable using hostnames** - it is a Kerberos
requirement that all your hosts can be resolved with their FQDNs.
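As a quick sanity check (the principal and keytab path here are illustrative),
you can verify that a keytab created above is accepted by your KDC:
```
$ kinit -kt /etc/security/keytabs/kafka_server.keytab kafka/kafka1.example.com@EXAMPLE.COM
$ klist
```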
#### Configuring Kafka Brokers
- 1. Add a suitably modified JAAS file similar to the one below to each
Kafka broker's config directory, let's call it kafka_server_jaas.conf for this
example (note that each broker should have its own keytab):
-
- KafkaServer {
- com.sun.security.auth.module.Krb5LoginModule required
- useKeyTab=true
- storeKey=true
- keyTab="/etc/security/keytabs/kafka_server.keytab"
- principal="kafka/[email protected]";
- };
+1. Add a suitably modified JAAS file similar to the one below to each Kafka
broker's config directory; let's call it kafka_server_jaas.conf for this
example (note that each broker should have its own keytab):
-`KafkaServer` section in the JAAS file tells the broker which principal to use
and the location of the keytab where this principal is stored. It allows the
broker to login using the keytab specified in this section.
- 2. Pass the JAAS and optionally the krb5 file locations as JVM
parameters to each Kafka broker (see
[here](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html)
for more details):
-
- -Djava.security.krb5.conf=/etc/kafka/krb5.conf
-
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
+ ```
+ KafkaServer {
+ com.sun.security.auth.module.Krb5LoginModule required
+ useKeyTab=true
+ storeKey=true
+ keyTab="/etc/security/keytabs/kafka_server.keytab"
+ principal="kafka/[email protected]";
+ };
+ ```
- 3. Make sure the keytabs configured in the JAAS file are readable by
the operating system user who is starting kafka broker.
- 4. Configure SASL port and SASL mechanisms in server.properties as
described here. For example:
-
- listeners=SASL_PLAINTEXT://host.name:port
- security.inter.broker.protocol=SASL_PLAINTEXT
- sasl.mechanism.inter.broker.protocol=GSSAPI
- sasl.enabled.mechanisms=GSSAPI
+ The `KafkaServer` section in the JAAS file tells the broker which principal
to use and the location of the keytab where this principal is stored. It
allows the broker to log in using the keytab specified in this section.
-We must also configure the service name in server.properties, which should
match the principal name of the kafka brokers. In the above example, principal
is "kafka/[email protected]", so:
-
- sasl.kerberos.service.name=kafka
+2. Pass the JAAS and optionally the krb5 file locations as JVM parameters to
each Kafka broker (see
[here](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html)
for more details):
+
+ ```
+ -Djava.security.krb5.conf=/etc/kafka/krb5.conf
+ -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
+ ```
+
+3. Make sure the keytabs configured in the JAAS file are readable by the
operating system user who is starting the Kafka broker.
+
+4. Configure SASL port and SASL mechanisms in server.properties as described
here. For example:
+
+ ```
+ listeners=SASL_PLAINTEXT://host.name:port
+ security.inter.broker.protocol=SASL_PLAINTEXT
+ sasl.mechanism.inter.broker.protocol=GSSAPI
+ sasl.enabled.mechanisms=GSSAPI
+ ```
+
+ We must also configure the service name in server.properties, which should
match the principal name of the Kafka brokers. In the above example, the
principal is "kafka/[email protected]", so:
+
+ ```
+ sasl.kerberos.service.name=kafka
+ ```
#### Configuring Kafka Clients
To configure SASL authentication on the clients:
- 1. Clients (producers, consumers, connect workers, etc) will
authenticate to the cluster with their own principal (usually with the same
name as the user running the client), so obtain or create these principals as
needed. Then configure the JAAS configuration property for each client.
Different clients within a JVM may run as different users by specifying
different principals. The property `sasl.jaas.config` in producer.properties or
consumer.properties describes how clients lik [...]
+1. Clients (producers, consumers, connect workers, etc.) will authenticate to
the cluster with their own principal (usually with the same name as the user
running the client), so obtain or create these principals as needed. Then
configure the JAAS configuration property for each client. Different clients
within a JVM may run as different users by specifying different principals. The
property `sasl.jaas.config` in producer.properties or consumer.properties
describes how clients like produc [...]
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule
required \
useKeyTab=true \
@@ -171,12 +196,12 @@ For command-line utilities like kafka-console-consumer or
kafka-console-producer
useTicketCache=true;
JAAS configuration for clients may alternatively be specified as a JVM
parameter similar to brokers as described here. Clients use the login section
named `KafkaClient`. This option allows only one user for all client
connections from a JVM.
- 2. Make sure the keytabs configured in the JAAS configuration are
readable by the operating system user who is starting kafka client.
- 3. Optionally pass the krb5 file locations as JVM parameters to each
client JVM (see
[here](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html)
for more details):
+2. Make sure the keytabs configured in the JAAS configuration are readable by
the operating system user who is starting the Kafka client.
+3. Optionally pass the krb5 file locations as JVM parameters to each client
JVM (see
[here](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html)
for more details):
-Djava.security.krb5.conf=/etc/kafka/krb5.conf
- 4. Configure the following properties in producer.properties or
consumer.properties:
+4. Configure the following properties in producer.properties or
consumer.properties:
security.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism=GSSAPI
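For example (the host, port, and topic are illustrative), a console consumer
can then be pointed at the SASL listener using those properties:
```
$ bin/kafka-console-consumer.sh --bootstrap-server kafka1.example.com:9093 \
    --topic test --consumer.config consumer.properties
```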
@@ -189,7 +214,7 @@ SASL/PLAIN is a simple username/password authentication
mechanism that is typica
Under the default implementation of `principal.builder.class`, the username is
used as the authenticated `Principal` for configuration of ACLs etc.
#### Configuring Kafka Brokers
- 1. Add a suitably modified JAAS file similar to the one below to each
Kafka broker's config directory, let's call it kafka_server_jaas.conf for this
example:
+1. Add a suitably modified JAAS file similar to the one below to each Kafka
broker's config directory; let's call it kafka_server_jaas.conf for this
example:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule
required
@@ -200,11 +225,11 @@ Under the default implementation of
`principal.builder.class`, the username is u
};
This configuration defines two users (_admin_ and _alice_). The properties
`username` and `password` in the `KafkaServer` section are used by the broker
to initiate connections to other brokers. In this example, _admin_ is the user
for inter-broker communication. The set of properties `user__userName_` defines
the passwords for all users that connect to the broker and the broker validates
all client connections including those from other brokers using these
properties.
- 2. Pass the JAAS config file location as JVM parameter to each Kafka
broker:
+2. Pass the JAAS config file location as a JVM parameter to each Kafka broker:
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
- 3. Configure SASL port and SASL mechanisms in server.properties as
described here. For example:
+3. Configure SASL port and SASL mechanisms in server.properties as described
here. For example:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
@@ -214,7 +239,7 @@ This configuration defines two users (_admin_ and _alice_).
The properties `user
#### Configuring Kafka Clients
To configure SASL authentication on the clients:
- 1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the PLAIN mechanism:
+1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the PLAIN mechanism:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
required \
username="alice" \
@@ -224,16 +249,16 @@ The options `username` and `password` are used by clients
to configure the user
JAAS configuration for clients may alternatively be specified as a JVM
parameter similar to brokers as described here. Clients use the login section
named `KafkaClient`. This option allows only one user for all client
connections from a JVM.
- 2. Configure the following properties in producer.properties or
consumer.properties:
+2. Configure the following properties in producer.properties or
consumer.properties:
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
#### Use of SASL/PLAIN in production
- * SASL/PLAIN should be used only with SSL as transport layer to ensure
that clear passwords are not transmitted on the wire without encryption.
- * The default implementation of SASL/PLAIN in Kafka specifies
usernames and passwords in the JAAS configuration file as shown here. From
Kafka version 2.0 onwards, you can avoid storing clear passwords on disk by
configuring your own callback handlers that obtain username and password from
an external source using the configuration options
`sasl.server.callback.handler.class` and `sasl.client.callback.handler.class`.
- * In production systems, external authentication servers may implement
password authentication. From Kafka version 2.0 onwards, you can plug in your
own callback handlers that use external authentication servers for password
verification by configuring `sasl.server.callback.handler.class`.
+* SASL/PLAIN should be used only with SSL as the transport layer to ensure
that clear passwords are not transmitted on the wire without encryption.
+* The default implementation of SASL/PLAIN in Kafka specifies usernames and
passwords in the JAAS configuration file as shown here. From Kafka version 2.0
onwards, you can avoid storing clear passwords on disk by configuring your own
callback handlers that obtain username and password from an external source
using the configuration options `sasl.server.callback.handler.class` and
`sasl.client.callback.handler.class`.
+* In production systems, external authentication servers may implement
password authentication. From Kafka version 2.0 onwards, you can plug in your
own callback handlers that use external authentication servers for password
verification by configuring `sasl.server.callback.handler.class`.
### Authentication using SASL/SCRAM
Salted Challenge Response Authentication Mechanism (SCRAM) is a family of SASL
mechanisms that addresses the security concerns with traditional mechanisms
that perform username/password authentication like PLAIN and DIGEST-MD5. The
mechanism is defined in [RFC 5802](https://tools.ietf.org/html/rfc5802). Kafka
supports [SCRAM-SHA-256](https://tools.ietf.org/html/rfc7677) and SCRAM-SHA-512
which can be used with TLS to perform secure authentication. Under the default
implementation of `pri [...]
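Before clients can authenticate, SCRAM credentials must be created, which the
surrounding docs do with the kafka-configs tool; a minimal sketch (the user
name, password, and iteration count are illustrative):
```
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
    --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' \
    --entity-type users --entity-name alice
```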
@@ -262,7 +287,7 @@ Credentials may be deleted for one or more SCRAM mechanisms
using the _\--alter
#### Configuring Kafka Brokers
- 1. Add a suitably modified JAAS file similar to the one below to each
Kafka broker's config directory, let's call it kafka_server_jaas.conf for this
example:
+1. Add a suitably modified JAAS file similar to the one below to each Kafka
broker's config directory; let's call it kafka_server_jaas.conf for this
example:
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule
required
@@ -271,11 +296,11 @@ Credentials may be deleted for one or more SCRAM
mechanisms using the _\--alter
};
The properties `username` and `password` in the `KafkaServer` section are used
by the broker to initiate connections to other brokers. In this example,
_admin_ is the user for inter-broker communication.
- 2. Pass the JAAS config file location as JVM parameter to each Kafka
broker:
+2. Pass the JAAS config file location as a JVM parameter to each Kafka broker:
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
- 3. Configure SASL port and SASL mechanisms in server.properties as
described here. For example:
+3. Configure SASL port and SASL mechanisms in server.properties as described
here. For example:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
@@ -285,7 +310,7 @@ The properties `username` and `password` in the
`KafkaServer` section are used b
#### Configuring Kafka Clients
To configure SASL authentication on the clients:
- 1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the SCRAM mechanisms:
+1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the SCRAM mechanisms:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
required \
username="alice" \
@@ -295,18 +320,18 @@ The options `username` and `password` are used by clients
to configure the user
JAAS configuration for clients may alternatively be specified as a JVM
parameter similar to brokers as described here. Clients use the login section
named `KafkaClient`. This option allows only one user for all client
connections from a JVM.
- 2. Configure the following properties in producer.properties or
consumer.properties:
+2. Configure the following properties in producer.properties or
consumer.properties:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)
#### Security Considerations for SASL/SCRAM
- * The default implementation of SASL/SCRAM in Kafka stores SCRAM
credentials in the metadata log. This is suitable for production use in
installations where KRaft controllers are secure and on a private network.
- * Kafka supports only the strong hash functions SHA-256 and SHA-512
with a minimum iteration count of 4096. Strong hash functions combined with
strong passwords and high iteration counts protect against brute force attacks
if KRaft controllers security is compromised.
- * SCRAM should be used only with TLS-encryption to prevent
interception of SCRAM exchanges. This protects against dictionary or brute
force attacks and against impersonation if KRaft controllers security is
compromised.
- * From Kafka version 2.0 onwards, the default SASL/SCRAM credential
store may be overridden using custom callback handlers by configuring
`sasl.server.callback.handler.class` in installations where KRaft controllers
are not secure.
- * For more details on security considerations, refer to [RFC
5802](https://tools.ietf.org/html/rfc5802#section-9).
+* The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials
in the metadata log. This is suitable for production use in installations where
KRaft controllers are secure and on a private network.
+* Kafka supports only the strong hash functions SHA-256 and SHA-512 with a
minimum iteration count of 4096. Strong hash functions combined with strong
passwords and high iteration counts protect against brute force attacks if
KRaft controller security is compromised.
+* SCRAM should be used only with TLS encryption to prevent interception of
SCRAM exchanges. This protects against dictionary or brute force attacks and
against impersonation if KRaft controller security is compromised.
+* From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may
be overridden using custom callback handlers by configuring
`sasl.server.callback.handler.class` in installations where KRaft controllers
are not secure.
+* For more details on security considerations, refer to [RFC
5802](https://tools.ietf.org/html/rfc5802#section-9).
### Authentication using SASL/OAUTHBEARER
The [OAuth 2 Authorization Framework](https://tools.ietf.org/html/rfc6749)
"enables a third-party application to obtain limited access to an HTTP service,
either on behalf of a resource owner by orchestrating an approval interaction
between the resource owner and the HTTP service, or by allowing the third-party
application to obtain access on its own behalf." The SASL OAUTHBEARER mechanism
enables the use of the framework in a SASL (i.e. a non-HTTP) context; it is
defined in [RFC 7628](h [...]
@@ -316,7 +341,7 @@ Under the default implementation of
`principal.builder.class`, the principalName
The default implementation of SASL/OAUTHBEARER in Kafka creates and validates
[Unsecured JSON Web Tokens](https://tools.ietf.org/html/rfc7515#appendix-A.5).
While suitable only for non-production use, it does provide the flexibility to
create arbitrary tokens in a DEV or TEST environment.
- 1. Add a suitably modified JAAS file similar to the one below to each
Kafka broker's config directory, let's call it kafka_server_jaas.conf for this
example:
+1. Add a suitably modified JAAS file similar to the one below to each Kafka
broker's config directory; let's call it kafka_server_jaas.conf for this
example:
KafkaServer {
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
@@ -373,11 +398,11 @@ Set to a space-delimited list of scope values if you wish
the `String/String Lis
Set to a positive integer value if you wish to allow up to some number of
positive milliseconds of clock skew (the default is 0).
</td> </tr> </table>
- 2. Pass the JAAS config file location as JVM parameter to each Kafka
broker:
+2. Pass the JAAS config file location as a JVM parameter to each Kafka broker:
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
- 3. Configure SASL port and SASL mechanisms in server.properties as
described here. For example:
+3. Configure SASL port and SASL mechanisms in server.properties as described
here. For example:
listeners=SASL_SSL://host.name:port (or SASL_PLAINTEXT if
non-production)
security.inter.broker.protocol=SASL_SSL (or SASL_PLAINTEXT if
non-production)
@@ -386,17 +411,17 @@ Set to a positive integer value if you wish to allow up
to some number of positi
#### Configuring Production Kafka Brokers
- 1. Add a suitably modified JAAS file similar to the one below to each
Kafka broker's config directory, let's call it kafka_server_jaas.conf for this
example:
+1. Add a suitably modified JAAS file similar to the one below to each Kafka
broker's config directory; let's call it kafka_server_jaas.conf for this
example:
KafkaServer {
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ;
};
- 2. Pass the JAAS config file location as JVM parameter to each Kafka
broker:
+2. Pass the JAAS config file location as a JVM parameter to each Kafka broker:
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
- 3. Configure SASL port and SASL mechanisms in server.properties as
described here. For example:
+3. Configure SASL port and SASL mechanisms in server.properties as described
here. For example:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
@@ -406,19 +431,19 @@ Set to a positive integer value if you wish to allow up
to some number of positi
listener.name.<listener
name>.oauthbearer.sasl.oauthbearer.jwks.endpoint.url=https://example.com/oauth2/v1/keys
The OAUTHBEARER broker configuration includes:
- * sasl.oauthbearer.clock.skew.seconds
- * sasl.oauthbearer.expected.audience
- * sasl.oauthbearer.expected.issuer
- * sasl.oauthbearer.jwks.endpoint.refresh.ms
- * sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms
- * sasl.oauthbearer.jwks.endpoint.retry.backoff.ms
- * sasl.oauthbearer.jwks.endpoint.url
- * sasl.oauthbearer.scope.claim.name
- * sasl.oauthbearer.sub.claim.name
+ * sasl.oauthbearer.clock.skew.seconds
+ * sasl.oauthbearer.expected.audience
+ * sasl.oauthbearer.expected.issuer
+ * sasl.oauthbearer.jwks.endpoint.refresh.ms
+ * sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms
+ * sasl.oauthbearer.jwks.endpoint.retry.backoff.ms
+ * sasl.oauthbearer.jwks.endpoint.url
+ * sasl.oauthbearer.scope.claim.name
+ * sasl.oauthbearer.sub.claim.name
#### Configuring Non-production Kafka Clients
To configure SASL authentication on the clients:
- 1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the OAUTHBEARER
mechanisms:
+1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the OAUTHBEARER
mechanism:
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
required \
unsecuredLoginStringClaim_sub="alice";
@@ -503,22 +528,22 @@ Set to a custom claim name if you wish the name of the
`String` or `String List`
JAAS configuration for clients may alternatively be specified as a JVM
parameter similar to brokers as described here. Clients use the login section
named `KafkaClient`. This option allows only one user for all client
connections from a JVM.
- 2. Configure the following properties in producer.properties or
consumer.properties:
+2. Configure the following properties in producer.properties or
consumer.properties:
security.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
sasl.mechanism=OAUTHBEARER
- 3. The default implementation of SASL/OAUTHBEARER depends on the
jackson-databind library. Since it's an optional dependency, users have to
configure it as a dependency via their build tool.
+3. The default implementation of SASL/OAUTHBEARER depends on the
jackson-databind library. Since it's an optional dependency, users have to
configure it as a dependency via their build tool.
#### Configuring Production Kafka Clients
To configure SASL authentication on the clients:
- 1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the OAUTHBEARER
mechanisms:
+1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the OAUTHBEARER
mechanism:
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
required ;
JAAS configuration for clients may alternatively be specified as a JVM
parameter similar to brokers as described here. Clients use the login section
named `KafkaClient`. This option allows only one user for all client
connections from a JVM.
- 2. Configure the following properties in producer.properties or
consumer.properties. For example, if using the OAuth `client_credentials` grant
type to communicate with the OAuth identity provider, the configuration might
look like this:
+2. Configure the following properties in producer.properties or
consumer.properties. For example, if using the OAuth `client_credentials` grant
type to communicate with the OAuth identity provider, the configuration might
look like this:
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
@@ -541,25 +566,25 @@ Or, if using the OAuth
`urn:ietf:params:oauth:grant-type:jwt-bearer` grant type
sasl.oauthbearer.token.endpoint.url=https://example.com/oauth2/v1/token
The OAUTHBEARER client configuration includes:
- * sasl.oauthbearer.assertion.algorithm
- * sasl.oauthbearer.assertion.claim.aud
- * sasl.oauthbearer.assertion.claim.exp.seconds
- * sasl.oauthbearer.assertion.claim.iss
- * sasl.oauthbearer.assertion.claim.jti.include
- * sasl.oauthbearer.assertion.claim.nbf.seconds
- * sasl.oauthbearer.assertion.claim.sub
- * sasl.oauthbearer.assertion.file
- * sasl.oauthbearer.assertion.private.key.file
- * sasl.oauthbearer.assertion.private.key.passphrase
- * sasl.oauthbearer.assertion.template.file
- * sasl.oauthbearer.client.credentials.client.id
- * sasl.oauthbearer.client.credentials.client.secret
- * sasl.oauthbearer.header.urlencode
- * sasl.oauthbearer.jwt.retriever.class
- * sasl.oauthbearer.jwt.validator.class
- * sasl.oauthbearer.scope
- * sasl.oauthbearer.token.endpoint.url
- 3. The default implementation of SASL/OAUTHBEARER depends on the
jackson-databind library. Since it's an optional dependency, users have to
configure it as a dependency via their build tool.
+ * sasl.oauthbearer.assertion.algorithm
+ * sasl.oauthbearer.assertion.claim.aud
+ * sasl.oauthbearer.assertion.claim.exp.seconds
+ * sasl.oauthbearer.assertion.claim.iss
+ * sasl.oauthbearer.assertion.claim.jti.include
+ * sasl.oauthbearer.assertion.claim.nbf.seconds
+ * sasl.oauthbearer.assertion.claim.sub
+ * sasl.oauthbearer.assertion.file
+ * sasl.oauthbearer.assertion.private.key.file
+ * sasl.oauthbearer.assertion.private.key.passphrase
+ * sasl.oauthbearer.assertion.template.file
+ * sasl.oauthbearer.client.credentials.client.id
+ * sasl.oauthbearer.client.credentials.client.secret
+ * sasl.oauthbearer.header.urlencode
+ * sasl.oauthbearer.jwt.retriever.class
+ * sasl.oauthbearer.jwt.validator.class
+ * sasl.oauthbearer.scope
+ * sasl.oauthbearer.token.endpoint.url
+3. The default implementation of SASL/OAUTHBEARER depends on the
jackson-databind library. Since it's an optional dependency, users have to
configure it as a dependency via their build tool.
#### Token Refresh for SASL/OAUTHBEARER
Kafka periodically refreshes any token before it expires so that the client
can continue to make connections to brokers. The parameters that impact how the
refresh algorithm operates are specified as part of the
producer/consumer/broker configuration and are as follows. See the
documentation for these properties elsewhere for details. The default values
are usually reasonable, in which case these configuration parameters would not
need to be explicitly set.
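A sketch of the relevant refresh properties with their usual defaults (the
values shown are assumptions; consult the configuration reference before
relying on them):
```
sasl.login.refresh.window.factor=0.8
sasl.login.refresh.window.jitter=0.05
sasl.login.refresh.min.period.seconds=60
sasl.login.refresh.buffer.seconds=300
```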
@@ -596,13 +621,13 @@ Production use cases will require writing an
implementation of `org.apache.kafka
Production use cases will also require writing an implementation of
`org.apache.kafka.common.security.auth.AuthenticateCallbackHandler` that can
handle an instance of
`org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback` and
declaring it via the
`listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class` broker
configuration option.
#### Security Considerations for SASL/OAUTHBEARER
- * The default implementation of SASL/OAUTHBEARER in Kafka creates and
validates [Unsecured JSON Web
Tokens](https://tools.ietf.org/html/rfc7515#appendix-A.5). This is suitable
only for non-production use.
- * OAUTHBEARER should be used in production environments only with
TLS-encryption to prevent interception of tokens.
- * The default unsecured SASL/OAUTHBEARER implementation may be
overridden (and must be overridden in production environments) using custom
login and SASL Server callback handlers as described above.
- * For more details on OAuth 2 security considerations in general,
refer to [RFC 6749, Section 10](https://tools.ietf.org/html/rfc6749#section-10).
+* The default implementation of SASL/OAUTHBEARER in Kafka creates and
validates [Unsecured JSON Web
Tokens](https://tools.ietf.org/html/rfc7515#appendix-A.5). This is suitable
only for non-production use.
+* OAUTHBEARER should be used in production environments only with TLS
encryption to prevent interception of tokens.
+* The default unsecured SASL/OAUTHBEARER implementation may be overridden (and
must be overridden in production environments) using custom login and SASL
Server callback handlers as described above.
+* For more details on OAuth 2 security considerations in general, refer to
[RFC 6749, Section 10](https://tools.ietf.org/html/rfc6749#section-10).
### Enabling multiple SASL mechanisms in a broker
- 1. Specify configuration for the login modules of all enabled mechanisms
in the `KafkaServer` section of the JAAS config file. For example:
+1. Specify configuration for the login modules of all enabled mechanisms in
the `KafkaServer` section of the JAAS config file. For example:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
@@ -618,24 +643,24 @@ Production use cases will also require writing an
implementation of `org.apache.
user_alice="alice-secret";
};
- 2. Enable the SASL mechanisms in server.properties:
+2. Enable the SASL mechanisms in server.properties:
sasl.enabled.mechanisms=GSSAPI,PLAIN,SCRAM-SHA-256,SCRAM-SHA-512,OAUTHBEARER
- 3. Specify the SASL security protocol and mechanism for inter-broker
communication in server.properties if required:
+3. Specify the SASL security protocol and mechanism for inter-broker
communication in server.properties if required:
security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other
enabled mechanisms)
- 4. Follow the mechanism-specific steps in GSSAPI (Kerberos), PLAIN,
SCRAM, and non-production/production OAUTHBEARER to configure SASL for the
enabled mechanisms.
+4. Follow the mechanism-specific steps in GSSAPI (Kerberos), PLAIN, SCRAM, and
non-production/production OAUTHBEARER to configure SASL for the enabled
mechanisms.
### Modifying SASL mechanism in a Running Cluster
The SASL mechanism can be modified in a running cluster using the following
sequence:
- 1. Enable new SASL mechanism by adding the mechanism to
`sasl.enabled.mechanisms` in server.properties for each broker. Update JAAS
config file to include both mechanisms as described here. Incrementally bounce
the cluster nodes.
- 2. Restart clients using the new mechanism.
- 3. To change the mechanism of inter-broker communication (if this is
required), set `sasl.mechanism.inter.broker.protocol` in server.properties to
the new mechanism and incrementally bounce the cluster again.
- 4. To remove old mechanism (if this is required), remove the old
mechanism from `sasl.enabled.mechanisms` in server.properties and remove the
entries for the old mechanism from JAAS config file. Incrementally bounce the
cluster again.
+1. Enable the new SASL mechanism by adding the mechanism to
`sasl.enabled.mechanisms` in server.properties for each broker. Update the
JAAS config file to include both mechanisms as described here. Incrementally
bounce the cluster nodes.
+2. Restart clients using the new mechanism.
+3. To change the mechanism of inter-broker communication (if this is
required), set `sasl.mechanism.inter.broker.protocol` in server.properties to
the new mechanism and incrementally bounce the cluster again.
+4. To remove the old mechanism (if this is required), remove the old
mechanism from `sasl.enabled.mechanisms` in server.properties and remove the
entries for the old mechanism from the JAAS config file. Incrementally bounce
the cluster again.
### Authentication using Delegation Tokens
Delegation token based authentication is a lightweight authentication
mechanism to complement existing SASL/SSL methods. Delegation tokens are
shared secrets between Kafka brokers and clients. Delegation tokens help
processing frameworks to distribute the workload to available workers in a
secure environment without the added cost of distributing Kerberos TGT/keytabs
or keystores when 2-way SSL is used. See
[KIP-48](https://cwiki.apache.org/confluence/x/tfmnAw) for more details.
@@ -644,9 +669,9 @@ Under the default implementation of
`principal.builder.class`, the owner of dele
Typical steps for delegation token usage are:
- 1. User authenticates with the Kafka cluster via SASL or SSL, and obtains
a delegation token. This can be done using Admin APIs or using
`kafka-delegation-tokens.sh` script.
- 2. User securely passes the delegation token to Kafka clients for
authenticating with the Kafka cluster.
- 3. Token owner/renewer can renew/expire the delegation tokens.
+1. User authenticates with the Kafka cluster via SASL or SSL, and obtains a
delegation token. This can be done using Admin APIs or the
`kafka-delegation-tokens.sh` script (sketched after this list).
+2. User securely passes the delegation token to Kafka clients for
authenticating with the Kafka cluster.
+3. Token owner/renewer can renew/expire the delegation tokens.
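A minimal sketch of obtaining a token with the script (the bootstrap server,
client config, and renewer principal are illustrative):
```
$ bin/kafka-delegation-tokens.sh --create --bootstrap-server localhost:9092 \
    --max-life-time-period -1 --command-config client.properties \
    --renewer-principal User:user1
```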
#### Token Management
A secret is used to generate and verify delegation tokens. This is supplied
using the config option `delegation.token.secret.key`. The same secret key must
be configured across all the brokers. The controllers must also be configured
with the secret using the same config option. If the secret is not set or set
to an empty string, delegation token authentication and API operations will
fail.
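A broker-side sketch (the secret value is a placeholder, and the expiry values
shown are assumed defaults):
```
delegation.token.secret.key=<a-long-random-secret-shared-by-all-brokers-and-controllers>
delegation.token.expiry.time.ms=86400000
delegation.token.max.lifetime.ms=604800000
```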
@@ -687,7 +712,7 @@ Delegation token authentication piggybacks on the current
SASL/SCRAM authenticat
Configuring Kafka Clients:
- 1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the token
authentication:
+1. Configure the JAAS configuration property for each client in
producer.properties or consumer.properties. The login module describes how the
clients like producer and consumer can connect to the Kafka Broker. The
following is an example configuration for a client for the token
authentication:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
required \
username="tokenID123" \
@@ -702,9 +727,9 @@ JAAS configuration for clients may alternatively be
specified as a JVM parameter
A re-deployment is required when the secret needs to be rotated. During this
process, already-connected clients will continue to work, but any new
connection requests and renew/expire requests with old tokens can fail. Steps
are given below.
- 1. Expire all existing tokens.
- 2. Rotate the secret by rolling upgrade, and
- 3. Generate new tokens
+1. Expire all existing tokens.
+2. Rotate the secret by a rolling upgrade.
+3. Generate new tokens.
We intend to automate this in a future Kafka release.
diff --git a/docs/security/encryption-and-authentication-using-ssl.md
b/docs/security/encryption-and-authentication-using-ssl.md
index eb1c5e592c0..11dd9fad03f 100644
--- a/docs/security/encryption-and-authentication-using-ssl.md
+++ b/docs/security/encryption-and-authentication-using-ssl.md
@@ -35,8 +35,8 @@ The first step of deploying one or more brokers with SSL
support is to generate
$ keytool -keystore {keystorefile} -alias localhost -validity
{validity} -genkey -keyalg RSA -storetype pkcs12
You need to specify two parameters in the above command:
- 1. keystorefile: the keystore file that stores the keys (and later the
certificate) for this broker. The keystore file contains the private and public
keys of this broker, therefore it needs to be kept safe. Ideally this step is
run on the Kafka broker that the key will be used on, as this key should never
be transmitted/leave the server that it is intended for.
- 2. validity: the valid time of the key in days. Please note that this
differs from the validity period for the certificate, which will be determined
in Signing the certificate. You can use the same key to request multiple
certificates: if your key has a validity of 10 years, but your CA will only
sign certificates that are valid for one year, you can use the same key with 10
certificates over time.
+1. keystorefile: the keystore file that stores the keys (and later the
certificate) for this broker. The keystore file contains the private and public
keys of this broker, therefore it needs to be kept safe. Ideally this step is
run on the Kafka broker that the key will be used on, as this key should never
be transmitted/leave the server that it is intended for.
+2. validity: the length of time, in days, for which the key is valid. Please
note that this differs from the validity period for the certificate, which
will be determined in Signing the certificate. You can use the same key to
request multiple certificates: if your key has a validity of 10 years, but
your CA will only sign certificates that are valid for one year, you can use
the same key with 10 certificates over time.
To obtain a certificate that can be used with the private key that was just
created, a certificate signing request needs to be created. This signing
request, when signed by a trusted CA, results in the actual certificate which
can then be installed in the keystore and used for authentication purposes.
To generate certificate signing requests, run the following command for all
server keystores created so far.
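The referenced command typically looks like the following keytool invocation
(the keystore name and alias are illustrative):
```
$ keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
```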
@@ -61,8 +61,8 @@ Normally there is no good reason to disable hostname
verification apart from bei
Getting hostname verification right is not that hard when done at the right
time, but gets much harder once the cluster is up and running - do yourself a
favor and do it now!
If host name verification is enabled, clients will verify the server's fully
qualified domain name (FQDN) or IP address against one of the following two
fields:
- 1. Common Name (CN)
- 2. [Subject Alternative Name
(SAN)](https://tools.ietf.org/html/rfc5280#section-4.2.1.6)
+1. Common Name (CN)
+2. [Subject Alternative Name
(SAN)](https://tools.ietf.org/html/rfc5280#section-4.2.1.6)
While Kafka checks both fields, usage of the common name field for hostname
verification has been
[deprecated](https://tools.ietf.org/html/rfc2818#section-3.1) since 2000 and
should be avoided if possible. In addition, the SAN field is much more
flexible, allowing for multiple DNS and IP entries to be declared in a
certificate.
Another advantage is that if the SAN field is used for hostname verification
the common name can be set to a more meaningful value for authorization
purposes. Since we need the SAN field to be contained in the signed
certificate, it will be specified when generating the signing request. It can
also be specified when generating the keypair, but this will not automatically
be copied into the signing request.
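For instance (hostnames are illustrative), the SAN extension can be supplied
via keytool's -ext flag when generating the key pair and again on the signing
request:
```
$ keytool -keystore server.keystore.jks -alias localhost -validity 365 \
    -genkey -keyalg RSA -ext SAN=DNS:kafka1.example.com
$ keytool -keystore server.keystore.jks -alias localhost -certreq \
    -file cert-file -ext SAN=DNS:kafka1.example.com
```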
@@ -195,10 +195,10 @@ Finally, you need to import both the certificate of the
CA and the signed certif
$ keytool -keystore {keystore} -alias localhost -import -file
cert-signed
The definitions of the parameters are the following:
- 1. keystore: the location of the keystore
- 2. CA certificate: the certificate of the CA
- 3. certificate signing request: the csr created with the server key
- 4. server certificate: the file to write the signed certificate of the
server to
+1. keystore: the location of the keystore
+2. CA certificate: the certificate of the CA
+3. certificate signing request: the csr created with the server key
+4. server certificate: the file to write the signed certificate of the server
to
This will leave you with one truststore called _truststore.jks_ \- this can be
the same for all clients and brokers and does not contain any sensitive
information, so there is no need to secure this.
Additionally you will have one _server.keystore.jks_ file per node which
contains that node's keys, certificate and your CA's certificate; please refer
to Configuring Kafka Brokers and Configuring Kafka Clients for information on
how to use these files.
@@ -213,15 +213,15 @@ Store password configs `ssl.keystore.password` and
`ssl.truststore.password` are
### Common Pitfalls in Production
The above paragraphs show the process to create your own CA and use it to sign
certificates for your cluster. While very useful for sandbox, dev, test, and
similar systems, this is usually not the correct process to create certificates
for a production cluster in a corporate environment. Enterprises will normally
operate their own CA and users can send in CSRs to be signed with this CA,
which has the benefit of users not being responsible for keeping the CA secure
as well as a central author [...]
- 1. **[Extended Key
Usage](https://tools.ietf.org/html/rfc5280#section-4.2.1.12)**
+1. **[Extended Key
Usage](https://tools.ietf.org/html/rfc5280#section-4.2.1.12)**
Certificates may contain an extension field that controls the purpose for
which the certificate can be used. If this field is empty, there are no
restrictions on the usage, but if any usage is specified here, valid SSL
implementations have to enforce these usages.
Relevant usages for Kafka are:
- * Client authentication
- * Server authentication
+* Client authentication
+* Server authentication
Kafka brokers need both these usages to be allowed, as for intra-cluster
communication every broker will behave as both the client and the server
towards other brokers. It is not uncommon for corporate CAs to have a signing
profile for webservers and use this for Kafka as well, which will only contain
the _serverAuth_ usage value and cause the SSL handshake to fail.
- 2. **Intermediate Certificates**
+2. **Intermediate Certificates**
Corporate Root CAs are often kept offline for security reasons. To enable
day-to-day usage, so-called intermediate CAs are created, which are then used
to sign the final certificates. When importing a certificate into the keystore
that was signed by an intermediate CA it is necessary to provide the entire
chain of trust up to the root CA. This can be done by simply _cat_-ing the
certificate files into one combined certificate file and then importing this
with keytool (see the sketch after this list).
- 3. **Failure to copy extension fields**
+3. **Failure to copy extension fields**
CA operators are often hesitant to copy any requested extension fields from
CSRs and prefer to specify these themselves, as this makes it harder for a
malicious party to obtain certificates with potentially misleading or
fraudulent values. It is advisable to double-check signed certificates to
ensure they contain all requested SAN fields to enable proper hostname
verification. The following command can be used to print certificate details
to the console, which should be compared with what [...]
$ openssl x509 -in certificate.crt -text -noout
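A minimal sketch of building and importing such a chain (file names are
illustrative):
```
$ cat signed-cert.pem intermediate-ca.pem > cert-chain.pem
$ keytool -keystore server.keystore.jks -alias localhost -import -file cert-chain.pem
```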
@@ -241,12 +241,12 @@ Following SSL configs are needed on the broker side
ssl.truststore.password=test1234
Note: ssl.truststore.password is technically optional but highly recommended.
If a password is not set, access to the truststore is still available, but
integrity checking is disabled. Optional settings that are worth considering:
- 1. ssl.client.auth=none ("required" => client authentication is required,
"requested" => client authentication is requested and client without certs can
still connect. The usage of "requested" is discouraged as it provides a false
sense of security and misconfigured clients will still connect successfully.)
- 2. ssl.cipher.suites (Optional). A cipher suite is a named combination of
authentication, encryption, MAC and key exchange algorithm used to negotiate
the security settings for a network connection using TLS or SSL network
protocol. (Default is an empty list)
- 3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (list out the SSL
protocols that you are going to accept from clients. Do note that SSL is
deprecated in favor of TLS and using SSL in production is not recommended)
- 4. ssl.keystore.type=JKS
- 5. ssl.truststore.type=JKS
- 6. ssl.secure.random.implementation=SHA1PRNG
+1. ssl.client.auth=none ("required" => client authentication is required,
"requested" => client authentication is requested and client without certs can
still connect. The usage of "requested" is discouraged as it provides a false
sense of security and misconfigured clients will still connect successfully.)
+2. ssl.cipher.suites (Optional). A cipher suite is a named combination of
authentication, encryption, MAC and key exchange algorithm used to negotiate
the security settings for a network connection using TLS or SSL network
protocol. (Default is an empty list)
+3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (list out the SSL protocols
that you are going to accept from clients. Do note that SSL is deprecated in
favor of TLS and using SSL in production is not recommended)
+4. ssl.keystore.type=JKS
+5. ssl.truststore.type=JKS
+6. ssl.secure.random.implementation=SHA1PRNG
If you want to enable SSL for inter-broker communication, add the following to
the server.properties file (it defaults to PLAINTEXT)
security.inter.broker.protocol=SSL
@@ -289,11 +289,11 @@ Note: ssl.truststore.password is technically optional but
highly recommended. If
ssl.key.password=test1234
Other configuration settings that may also be needed depending on your
requirements and the broker configuration:
- 1. ssl.provider (Optional). The name of the security provider used for
SSL connections. Default value is the default security provider of the JVM.
- 2. ssl.cipher.suites (Optional). A cipher suite is a named combination of
authentication, encryption, MAC and key exchange algorithm used to negotiate
the security settings for a network connection using TLS or SSL network
protocol.
- 3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least
one of the protocols configured on the broker side
- 4. ssl.truststore.type=JKS
- 5. ssl.keystore.type=JKS
+1. ssl.provider (Optional). The name of the security provider used for SSL
connections. Default value is the default security provider of the JVM.
+2. ssl.cipher.suites (Optional). A cipher suite is a named combination of
authentication, encryption, MAC and key exchange algorithm used to negotiate
the security settings for a network connection using TLS or SSL network
protocol.
+3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of
the protocols configured on the broker side
+4. ssl.truststore.type=JKS
+5. ssl.keystore.type=JKS
Examples using console-producer and console-consumer:
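The example invocations themselves fall outside this diff; a sketch of what
they typically look like (host, topic, and config file names are illustrative):
```
$ bin/kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test \
    --producer.config client-ssl.properties
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test \
    --consumer.config client-ssl.properties
```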
diff --git a/docs/security/security-overview.md
b/docs/security/security-overview.md
index d976c252958..f13f863531e 100644
--- a/docs/security/security-overview.md
+++ b/docs/security/security-overview.md
@@ -26,15 +26,18 @@ type: docs
-->
-The following security measures are currently supported:
-
- 1. Authentication of connections to brokers from clients (producers and
consumers), other brokers and tools, using either SSL or SASL. Kafka supports
the following SASL mechanisms:
- * SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0
- * SASL/PLAIN - starting at version 0.10.0.0
- * SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0
- * SASL/OAUTHBEARER - starting at version 2.0
- 2. Encryption of data transferred between brokers and clients, between
brokers, or between brokers and tools using SSL (Note that there is a
performance degradation when SSL is enabled, the magnitude of which depends on
the CPU type and the JVM implementation.)
- 3. Authorization of read / write operations by clients
- 4. Authorization is pluggable and integration with external authorization
services is supported
+The following security measures are currently supported:
+
+1. Authentication of connections to brokers from clients (producers and
consumers), other brokers and tools, using either SSL or SASL. Kafka supports
the following SASL mechanisms:
+ * SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0
+ * SASL/PLAIN - starting at version 0.10.0.0
+ * SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0
+ * SASL/OAUTHBEARER - starting at version 2.0
+
+2. Encryption of data transferred between brokers and clients, between
brokers, or between brokers and tools using SSL (Note that there is a
performance degradation when SSL is enabled, the magnitude of which depends on
the CPU type and the JVM implementation.)
+
+3. Authorization of read / write operations by clients
+
+4. Authorization is pluggable and integration with external authorization
services is supported
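As a sketch of item 4 (the class shown is the standard KRaft authorizer; a
custom implementation can be plugged in the same way):
```
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
```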
It's worth noting that security is optional - non-secured clusters are
supported, as well as a mix of authenticated, unauthenticated, encrypted and
non-encrypted clients. The guides below explain how to configure and use the
security features in both clients and brokers.