Kafka SASL/SCRAM with/without SSL

In my last post, Kafka SASL/PLAIN with/without SSL, we set up SASL/PLAIN with and without SSL. Let us now implement SASL/SCRAM with and without SSL.

This assumes you already have a 3-broker Kafka cluster running on a single machine. If not, set one up by following Implementing Kafka.

What we have:

  • 1 ZooKeeper instance running on host apache-kafka.abc.com on port 2181
  • 3 Kafka brokers running on host apache-kafka.abc.com on ports 9090, 9091 and 9092 (same machine)

Start the implementation:

SASL/SCRAM:

  • The credentials for clients/users/brokers are created using kafka-configs.sh and stored in ZooKeeper.
  • For each enabled SCRAM mechanism (SCRAM-SHA-512 or SCRAM-SHA-256), credentials must be created by adding a config with the mechanism name.
  • Credentials for inter-broker communication must be created before the Kafka brokers are started.
  • Client (new user) credentials may be created and updated dynamically, and the updated credentials will be used to authenticate new connections.

Step 1:  Create a user ‘admin’ for inter-broker communication

[root@apache-kafka]# $KAFKA_HOME/bin/kafka-configs.sh --zookeeper apache-kafka.abc.com:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

Completed Updating config for entity: user-principal 'admin'.

Step 2: Create SCRAM credentials for user nrsh13 with password nrsh13-secret.

[root@apache-kafka ]# $KAFKA_HOME/bin/kafka-configs.sh --zookeeper apache-kafka.abc.com:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=nrsh13-secret],SCRAM-SHA-512=[password=nrsh13-secret]' --entity-type users --entity-name nrsh13
Completed Updating config for entity: user-principal 'nrsh13'.

Step 3: Existing credentials may be listed using the --describe option or deleted using the --delete-config option.

$KAFKA_HOME/bin/kafka-configs.sh --zookeeper apache-kafka.abc.com:2181 --describe --entity-type users --entity-name nrsh13
$KAFKA_HOME/bin/kafka-configs.sh --zookeeper apache-kafka.abc.com:2181 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name nrsh13
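
If you are curious where these credentials actually live, you can inspect the znode directly; a quick sanity check, assuming zookeeper-shell.sh is available in your Kafka distribution (the stored value is a salted, iterated hash, never the plaintext password):

# Inspect the znode where Kafka stores the SCRAM credentials for 'nrsh13'
$KAFKA_HOME/bin/zookeeper-shell.sh apache-kafka.abc.com:2181 get /config/users/nrsh13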

Step 4: Configure the Kafka brokers: set the properties that tell the brokers to use SCRAM authentication for inter-broker communication.

Note that this file can contain multiple SASL sections. Which one the brokers use for inter-broker communication depends on 'sasl.mechanism.inter.broker.protocol'.

Create kafka_server_jaas.conf in $KAFKA_HOME/config with the contents below.

KafkaServer {
 org.apache.kafka.common.security.scram.ScramLoginModule required
 username="admin"
 password="admin-secret";
};

The properties username and password in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example, admin is the user for inter-broker communication.

Step 5: Pass the JAAS config file location as a JVM parameter to each Kafka broker (by putting the lines below in ~/.bashrc):

export KAFKA_SCRAM_PARAMS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
export KAFKA_OPTS="$KAFKA_SCRAM_PARAMS $KAFKA_OPTS"
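
As an alternative to the separate JAAS file, recent Kafka versions let brokers take the JAAS configuration directly in server.properties through a listener-scoped property, so no KAFKA_OPTS export is needed; a sketch for the SASL_PLAINTEXT listener (use listener.name.sasl_ssl.… for the SSL variant):

# Inline the broker JAAS config per listener and mechanism
listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="admin" \
        password="admin-secret";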

Step 6: If you are enabling SSL, generate the SSL certificates (node.ks and node.ts) for your machine by following Generating SSL Certificates, place them in a directory, and take note of the location; we will use it in the configuration below.
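
For a quick lab setup, here is a minimal self-signed sketch using keytool; the passwords match the server.properties below, JKS is forced so the key password can differ from the store password, and the linked post covers the proper CA-signed flow:

# Minimal self-signed keystore/truststore for a single host (lab use only)
keytool -genkeypair -keystore node.ks -storetype JKS -alias apache-kafka.abc.com -keyalg RSA -validity 365 \
  -storepass kspassword -keypass password -dname "CN=apache-kafka.abc.com"
keytool -exportcert -keystore node.ks -alias apache-kafka.abc.com -storepass kspassword -file node.crt
keytool -importcert -keystore node.ts -storetype JKS -alias apache-kafka.abc.com -storepass password -file node.crt -noprompt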

Step 7: Update $KAFKA_HOME/config/server.properties for all 3 brokers

# Use below for SASL/SCRAM only (No SSL)
# For the rest of the brokers, change the port below to 9091 and 9092
# Using SASL_PLAINTEXT as we do not have SSL
listeners=SASL_PLAINTEXT://apache-kafka.abc.com:9090
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256

# Use below for SASL/SCRAM + SSL
listeners=SASL_SSL://apache-kafka.abc.com:9090
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256

ssl.keystore.location=/root/ssl/myCluster/apache-kafka.abc.com/node.ks
ssl.keystore.password=kspassword
ssl.key.password=password
ssl.truststore.location=/root/ssl/myCluster/apache-kafka.abc.com/node.ts
ssl.truststore.password=password
ssl.client.auth=required
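
The three broker files differ only in the listener port here (broker.id, log.dirs and so on come from your base cluster setup), so if server0/1/2.properties were copied from the same template, a small sketch with GNU sed to patch each port (assuming the listeners line shown above):

# Patch the listener port in each broker copy (server0/1/2.properties -> ports 9090-9092)
for i in 0 1 2; do
  sed -i "s|:9090$|:909${i}|" $KAFKA_HOME/config/server${i}.properties
done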

Step 8: Restart the ZooKeeper/Kafka services. Each broker will read 'kafka_server_jaas.conf' before connecting to the others.

# Zookeeper
ZOO_LOG_DIR=/usr/local/zookeeper/logs ZOO_LOG4J_PROP='INFO,ROLLINGFILE' /usr/local/zookeeper/bin/zkServer.sh start $ZOOKEEPER_HOME/conf/zoo-1.cfg

# Kafka Broker
nohup $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server0.properties &
nohup $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server1.properties &
nohup $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server2.properties &
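
Before testing, a quick check that all three brokers came up and are listening on their SASL ports (assuming ss is available; netstat works too):

# All three ports (9090-9092) should show up in LISTEN state
ss -ltn | grep -E ':909[0-2]'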

Step 9: Try running a producer and a consumer

Create client.properties, which holds the SASL credential details and is passed as the value of --producer.config/--consumer.config in the producer/consumer commands.

Note that we are passing the credentials we created in Step 2. This must be done for every user who wants to use the Kafka cluster.

The supplied credentials are validated against the user details stored in ZooKeeper. If the user does not exist there, authentication fails and the client cannot proceed.

# For SASL/SCRAM (No SSL):
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="nrsh13" \
        password="nrsh13-secret";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256

# For SASL/SCRAM + SSL:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="nrsh13" \
        password="nrsh13-secret";
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256

ssl.keystore.location=/root/ssl/myCluster/apache-kafka.abc.com/node.ks
ssl.keystore.password=kspassword
ssl.key.password=password
ssl.truststore.location=/root/ssl/myCluster/apache-kafka.abc.com/node.ts
ssl.truststore.password=password
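
One caveat: client.properties carries the SCRAM password in plaintext, so restrict who can read it (a general precaution, not specific to Kafka):

# Only the owner should be able to read the credentials file
chmod 600 client.properties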

Alternatively, client credentials may be specified as a JVM parameter, just like we placed the inter-broker credentials in kafka_server_jaas.conf. Note that this option allows only one user for all client connections from a given JVM.

For example, we can add the section below to $KAFKA_HOME/config/kafka_server_jaas.conf.

KafkaClient {
 org.apache.kafka.common.security.scram.ScramLoginModule required
 username="nrsh13"
 password="nrsh13-secret";
};
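
With the KafkaClient section in place, point the console tools at the same JAAS file via KAFKA_OPTS; you still pass a properties file containing security.protocol and sasl.mechanism (just without the sasl.jaas.config line):

# Clients read the KafkaClient section from this file at startup
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"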

Run Producers and Consumers:

As we already added user 'nrsh13' at the very beginning, we can simply start our producer/consumer using client.properties.

# Producer
[root@apache-kafka ~]$ $KAFKA_HOME/bin/kafka-console-producer.sh --broker-list apache-kafka.abc.com:9090 --topic test --producer.config client.properties
hello this is a test for SCRAM authentication
All looks good Cheers !!

# Consumer
[root@apache-kafka ~]$ $KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server apache-kafka.abc.com:9090 --topic test --from-beginning --consumer.config client.properties
hello this is a test for SCRAM authentication
All looks good Cheers !!

With invalid credentials, you will get the error below:

ERROR [Consumer clientId=consumer-1, groupId=test-consumer-group] 
Connection to node -1 failed authentication due to: Authentication failed 
due to invalid credentials with SASL mechanism SCRAM-SHA-256 
(org.apache.kafka.clients.NetworkClient)

With this, we are done with both PLAIN and SCRAM SASL, with and without SSL.

Cheers.
