Repository: incubator-metron
Updated Branches:
  refs/heads/master 8c264f8e7 -> fbce3b5f9


METRON-835 Use Profiler with Kerberos (nickwallen) closes 
apache/incubator-metron#521


Project: http://git-wip-us.apache.org/repos/asf/incubator-metron/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-metron/commit/fbce3b5f
Tree: http://git-wip-us.apache.org/repos/asf/incubator-metron/tree/fbce3b5f
Diff: http://git-wip-us.apache.org/repos/asf/incubator-metron/diff/fbce3b5f

Branch: refs/heads/master
Commit: fbce3b5f9e396be997c315f39c6749d39b9105a3
Parents: 8c264f8
Author: nickwallen <[email protected]>
Authored: Tue Apr 25 11:20:28 2017 -0400
Committer: nickallen <[email protected]>
Committed: Tue Apr 25 11:20:28 2017 -0400

----------------------------------------------------------------------
 metron-deployment/vagrant/Kerberos-setup.md | 397 ++++++++++++++++++++---
 1 file changed, 345 insertions(+), 52 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/fbce3b5f/metron-deployment/vagrant/Kerberos-setup.md
----------------------------------------------------------------------
diff --git a/metron-deployment/vagrant/Kerberos-setup.md 
b/metron-deployment/vagrant/Kerberos-setup.md
index f02cc5f..759d63d 100644
--- a/metron-deployment/vagrant/Kerberos-setup.md
+++ b/metron-deployment/vagrant/Kerberos-setup.md
@@ -1,48 +1,112 @@
-# Setting Up Kerberos in Vagrant Full Dev
-**Note:** These are instructions for Kerberizing Metron Storm topologies from 
Kafka to Kafka. This does not cover the sensor connections or MAAS.
-General Kerberization notes can be found in the metron-deployment 
[README.md](../README.md)
+Kerberos Setup
+==============
 
-## Setup the KDC
+This document provides instructions for kerberizing Metron's Vagrant-based 
development environments: "Quick Dev" and "Full Dev".  These instructions do 
not cover the Ambari MPack or sensors.  General Kerberization notes can be 
found in the metron-deployment [README.md](../README.md).
 
-1. Build full dev and ssh into the machine
-    ```
-    cd incubator-metron/metron-deployment/vagrant/full-dev-platform
-    vagrant up
-    vagrant ssh
-    ```
+* [Setup](#setup)
+* [Create a KDC](#create-a-kdc)
+* [Enable Kerberos](#enable-kerberos)
+* [Kafka Authorization](#kafka-authorization)
+* [HBase Authorization](#hbase-authorization)
+* [Storm Authorization](#storm-authorization)
+* [Start Metron](#start-metron)
+
+Setup
+-----
+
+1. Deploy a Vagrant development environment; either [Full 
Dev](full-dev-platform) or [Quick Dev](quick-dev-platform).
+
+1. Export the following environment variables.  These need to be set for the 
remainder of the instructions. Replace `node1` with the appropriate hosts if 
you are running Metron anywhere other than Vagrant.
 
-2. Export env vars. Replace *node1* with the appropriate hosts if running 
anywhere other than full-dev Vagrant.
     ```
-    # execute as root
-    sudo su -
-    export ZOOKEEPER=node1
-    export BROKERLIST=node1
+    export ZOOKEEPER=node1:2181
+    export ELASTICSEARCH=node1:9200
+    export KAFKA=node1:6667
+    
     export HDP_HOME="/usr/hdp/current"
+    export KAFKA_HOME="${HDP_HOME}/kafka-broker"
     export METRON_VERSION="0.4.0"
     export METRON_HOME="/usr/metron/${METRON_VERSION}"
     ```
 
-3. Setup Kerberos
-    ```
-    # Note: if you copy/paste this full set of commands, the kdb5_util command 
will not run as expected, so run the commands individually to ensure they all 
execute
-    # set 'node1' to the correct host for your kdc
-    yum -y install krb5-server krb5-libs krb5-workstation
-    sed -i 's/kerberos.example.com/node1/g' /etc/krb5.conf
-    /bin/cp -f /etc/krb5.conf /var/lib/ambari-server/resources/scripts
-    # This step takes a moment. It creates the kerberos database.
-    kdb5_util create -s
-    /etc/rc.d/init.d/krb5kdc start
-    /etc/rc.d/init.d/kadmin start
-    chkconfig krb5kdc on
-    chkconfig kadmin on
-    ```
+1. Execute the following commands as root.
+       
+       ```
+       sudo su -
+       ```
+
+1. Stop all Metron topologies.  They will be restarted once Kerberos has 
been enabled.
+
+       ```
+       for topology in bro snort enrichment indexing; do
+               storm kill $topology;
+       done
+       ```
+
+1. Create the `metron` user's home directory in HDFS.
+
+       ```
+       sudo -u hdfs hdfs dfs -mkdir /user/metron
+       sudo -u hdfs hdfs dfs -chown metron:hdfs /user/metron
+       sudo -u hdfs hdfs dfs -chmod 770 /user/metron
+       ```
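1. Optionally, verify the new directory. This is a quick sanity check, assuming the default paths used above; expect the `/user/metron` entry to be owned by `metron:hdfs`.

    ```
    # list HDFS home directories and confirm ownership of /user/metron
    sudo -u hdfs hdfs dfs -ls /user
    ```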
+
+Create a KDC
+------------
+
+1. Install dependencies.
+
+       ```
+       yum -y install krb5-server krb5-libs krb5-workstation
+       ```
+
+1. Define the host, `node1`, as the KDC.  
+
+       ```
+       sed -i 's/kerberos.example.com/node1/g' /etc/krb5.conf
+       cp -f /etc/krb5.conf /var/lib/ambari-server/resources/scripts
+       ```
+
+1. Do not copy/paste this full set of commands, as the `kdb5_util` command will 
not run as expected; run the commands individually to ensure they all execute.  
The `kdb5_util` step takes a moment, as it creates the Kerberos database.
+
+       ```
+       kdb5_util create -s
+
+       /etc/rc.d/init.d/krb5kdc start
+       chkconfig krb5kdc on
+
+       /etc/rc.d/init.d/kadmin start
+       chkconfig kadmin on
+       ```
+
+1. Set up the `admin` and `metron` principals. You'll `kinit` as the `metron` 
principal when running topologies. Make sure to remember the passwords.
+
+       ```
+       kadmin.local -q "addprinc admin/admin"
+       kadmin.local -q "addprinc metron"
+       ```
+
+Enable Kerberos
+---------------
+
+1. In [Ambari](http://node1:8080), set up Storm to use Kerberos and run worker 
jobs as the submitting user.
+
+    a. Add the following properties to the custom storm-site:
 
-4. Setup the admin user principal. You'll kinit as the metron user when 
running topologies. Make sure to remember the password.
     ```
-    kadmin.local -q "addprinc admin/admin"
+    
topology.auto-credentials=['org.apache.storm.security.auth.kerberos.AutoTGT']
+    
nimbus.credential.renewers.classes=['org.apache.storm.security.auth.kerberos.AutoTGT']
+    supervisor.run.worker.as.user=true
     ```
 
-## Ambari Setup
+    b. In the Storm config section in Ambari, choose “Add Property” under 
custom storm-site:
+
+    ![custom storm-site](../readme-images/ambari-storm-site.png)
+
+    c. In the dialog window, choose the “bulk property add mode” toggle 
button and add the values below:
+
+    ![custom storm-site 
properties](../readme-images/ambari-storm-site-properties.png)
+
 1. Kerberize the cluster via Ambari. More detailed documentation can be found 
[here](http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/_enabling_kerberos_security_in_ambari.html).
 
     a. For this exercise, choose existing MIT KDC (this is what we setup and 
installed in the previous steps.)
@@ -55,35 +119,252 @@ General Kerberization notes can be found in the 
metron-deployment [README.md](..
 
     ![enable keberos 
configure](../readme-images/enable-kerberos-configure-kerberos.png)
 
-    c. Click through to “Start and Test Services.” Let the cluster spin up.
+    c. Click through to “Start and Test Services.” Let the cluster spin 
up, but don't worry about starting Metron via Ambari; we're going to run 
the parsers manually against the Kerberized Hadoop cluster. The 
wizard will fail at starting Metron, but this is OK. Click “continue.” When 
you’re finished, the custom storm-site should look similar to the following:
+
+    ![custom storm-site final](../readme-images/custom-storm-site-final.png)
+
+1. Create a keytab for the `metron` principal.
+
+    ```
+       kadmin.local -q "ktadd -k metron.headless.keytab [email protected]"
+       cp metron.headless.keytab /etc/security/keytabs
+       chown metron:hadoop /etc/security/keytabs/metron.headless.keytab
+       chmod 440 /etc/security/keytabs/metron.headless.keytab
+       ```
+
+Kafka Authorization
+-------------------
+
+1. Acquire a Kerberos ticket using the `metron` principal.
+
+    ```
+       kinit -kt /etc/security/keytabs/metron.headless.keytab 
[email protected]
+       ```
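1. You can confirm that a ticket was acquired with `klist`; the principal shown should be `[email protected]` with a valid expiry.

    ```
    # show the current credentials cache
    klist
    ```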
+
+1. Create any additional Kafka topics that you will need. The topics must exist 
before the required ACLs can be added. The current Full Dev installation 
deploys only `bro`, `snort`, `enrichments`, and `indexing`.  For example, you may 
want to add a topic for `yaf` telemetry.
+
+    ```
+       ${KAFKA_HOME}/bin/kafka-topics.sh \
+      --zookeeper ${ZOOKEEPER} \
+      --create \
+      --topic yaf \
+      --partitions 1 \
+      --replication-factor 1
+       ```
+
+1. Set up Kafka ACLs for the `bro`, `snort`, `enrichments`, and `indexing` 
topics.  Run the same command against any additional topics that you might be 
using; for example, `yaf`.
+
+    ```
+       export KERB_USER=metron
+
+       for topic in bro snort enrichments indexing; do
+               ${KAFKA_HOME}/bin/kafka-acls.sh \
+          --authorizer kafka.security.auth.SimpleAclAuthorizer \
+          --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
+          --add \
+          --allow-principal User:${KERB_USER} \
+          --topic ${topic}
+       done
+       ```
+
+1. Set up Kafka ACLs for the consumer groups.  This command sets the ACLs for 
Bro, Snort, YAF, Enrichments, Indexing, and the Profiler.  Execute the same 
command for any additional parsers that you may be running.
+
+    ```
+    export KERB_USER=metron
+
+       for group in bro_parser snort_parser yaf_parser enrichments indexing 
profiler; do
+               ${KAFKA_HOME}/bin/kafka-acls.sh \
+          --authorizer kafka.security.auth.SimpleAclAuthorizer \
+          --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
+          --add \
+          --allow-principal User:${KERB_USER} \
+          --group ${group}
+       done
+       ```
+
+1. Add the `metron` principal to the `kafka-cluster` ACL.
+
+    ```
+	export KERB_USER=metron
+
+	${KAFKA_HOME}/bin/kafka-acls.sh \
+        --authorizer kafka.security.auth.SimpleAclAuthorizer \
+        --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
+        --add \
+        --allow-principal User:${KERB_USER} \
+        --cluster kafka-cluster
+	```
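1. Optionally, verify the ACLs that were just added. This is a sketch of a sanity check; the exact output format depends on your Kafka version.

    ```
    # list all ACLs currently known to the authorizer
    ${KAFKA_HOME}/bin/kafka-acls.sh \
        --authorizer kafka.security.auth.SimpleAclAuthorizer \
        --authorizer-properties zookeeper.connect=${ZOOKEEPER} \
        --list
    ```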
+
+HBase Authorization
+-------------------
+
+1. Acquire a Kerberos ticket using the `hbase` principal.
+
+    ```
+       kinit -kt /etc/security/keytabs/hbase.headless.keytab 
[email protected]
+       ```
+
+1. Grant permissions for the HBase tables used in Metron.
+
+    ```
+       echo "grant 'metron', 'RW', 'threatintel'" | hbase shell
+       echo "grant 'metron', 'RW', 'enrichment'" | hbase shell
+       ```
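1. Optionally, confirm the grants from the HBase shell. This assumes the HBase AccessController coprocessor is enabled; expect the `metron` user to hold READ and WRITE on each table.

    ```
    # list the permissions granted on each Metron table
    echo "user_permission 'threatintel'" | hbase shell
    echo "user_permission 'enrichment'" | hbase shell
    ```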
+
+1. If you are using the Profiler, create its HBase table and grant the `metron` user access to it.
+
+    ```
+       echo "create 'profiler', 'P'" | hbase shell
+       echo "grant 'metron', 'RW', 'profiler', 'P'" | hbase shell
+       ```
+
+Storm Authorization
+-------------------
+
+1. Switch to the `metron` user and acquire a Kerberos ticket for the `metron` 
principal.
+
+    ```
+       su metron
+       kinit -kt /etc/security/keytabs/metron.headless.keytab 
[email protected]
+       ```
+
+1. Create the directory `/home/metron/.storm` and switch to that directory.
 
-## Push Data
-1. Kinit with the metron user
     ```
-    kinit -kt /etc/security/keytabs/metron.headless.keytab [email protected]
+       mkdir /home/metron/.storm
+       cd /home/metron/.storm
+       ```
+
+1. Create a client JAAS file at `/home/metron/.storm/client_jaas.conf`.  This 
should look identical to the Storm client JAAS file located at 
`/etc/storm/conf/client_jaas.conf` except for the addition of a `Client` 
stanza. The `Client` stanza is used for Zookeeper. All quotes and semicolons 
are necessary.
+
     ```
+    cat << EOF > client_jaas.conf
+    StormClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useTicketCache=true
+        renewTicket=true
+        serviceName="nimbus";
+    };
+    Client {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        keyTab="/etc/security/keytabs/metron.headless.keytab"
+        storeKey=true
+        useTicketCache=false
+        serviceName="zookeeper"
+        principal="[email protected]";
+    };
+    KafkaClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        keyTab="/etc/security/keytabs/metron.headless.keytab"
+        storeKey=true
+        useTicketCache=false
+        serviceName="kafka"
+        principal="[email protected]";
+    };
+    EOF
+    ```
+
+1. Create a YAML file at `/home/metron/.storm/storm.yaml`.  This should point 
to the client JAAS file.  Set the array of nimbus hosts accordingly.
 
-2. Push some sample data to one of the parser topics. E.g for bro we took raw 
data from 
[incubator-metron/metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput](../../metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput)
     ```
-    cat sample-bro.txt | 
${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh --broker-list 
${BROKERLIST}:6667 --security-protocol SASL_PLAINTEXT --topic bro
+    cat << EOF > /home/metron/.storm/storm.yaml
+    nimbus.seeds : ['node1']
+    java.security.auth.login.config : '/home/metron/.storm/client_jaas.conf'
+    storm.thrift.transport : 
'org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin'
+    EOF
     ```
 
-3. Wait a few moments for data to flow through the system and then check for 
data in the Elasticsearch indexes. Replace bro with whichever parser type 
you’ve chosen.
+1. Create an auxiliary storm configuration file at 
`/home/metron/storm-config.json`. Note the login config option in the file 
points to the client JAAS file.
+
     ```
-    curl -XGET "${ZOOKEEPER}:9200/bro*/_search"
-    curl -XGET "${ZOOKEEPER}:9200/bro*/_count"
+    cat << EOF > /home/metron/storm-config.json
+    {
+        "topology.worker.childopts" : 
"-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
+    }
+    EOF
     ```
 
-4. You should have data flowing from the parsers all the way through to the 
indexes. This completes the Kerberization instructions
+1. Configure the Enrichment, Indexing, and Profiler topologies to use the 
client JAAS file.  Add the following properties to each of the topology 
properties files.
+
+       ```
+       kafka.security.protocol=PLAINTEXTSASL
+       
topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
+       ```
+
+    * `${METRON_HOME}/config/enrichment.properties`
+    * `${METRON_HOME}/config/elasticsearch.properties`
+    * `${METRON_HOME}/config/profiler.properties`
+
+    Use the following command to automate this step.
+
+    ```
+    for file in enrichment.properties elasticsearch.properties 
profiler.properties; do
+      echo ${file}
+      sed -i 
"s/^kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/" 
"${METRON_HOME}/config/${file}"
+      sed -i 
"s/^topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/"
 "${METRON_HOME}/config/${file}"
+    done
+    ```
+
+Start Metron
+------------
+
+1. Switch to the `metron` user and acquire a Kerberos ticket for the `metron` 
principal.
+
+    ```
+       su metron
+       kinit -kt /etc/security/keytabs/metron.headless.keytab 
[email protected]
+       ```
+
+1. Restart the parser topologies. Be sure to pass in the new parameter, `-ksp` 
or `--kafka_security_protocol`.  The following command will start only the Bro 
and Snort topologies.  Execute the same command for any other Parsers that you 
may need, for example `yaf`.  
+
+    ```
+    for parser in bro snort; do
+       ${METRON_HOME}/bin/start_parser_topology.sh \
+               -z ${ZOOKEEPER} \
+               -s ${parser} \
+               -ksp SASL_PLAINTEXT \
+               -e /home/metron/storm-config.json;
+    done
+    ```
+
+1. Restart the Enrichment and Indexing topologies.
+
+    ```
+       ${METRON_HOME}/bin/start_enrichment_topology.sh
+       ${METRON_HOME}/bin/start_elasticsearch_topology.sh
+       ```
+
+1. Push some sample data to one of the parser topics. E.g., for Bro we took raw 
data from 
[incubator-metron/metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput](../../metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput)
+
+    ```
+       cat sample-bro.txt | 
${KAFKA_HOME}/bin/kafka-console-producer.sh \
+               --broker-list ${KAFKA} \
+               --security-protocol SASL_PLAINTEXT \
+               --topic bro
+       ```
+
+1. Wait a few moments for data to flow through the system and then check for 
data in the Elasticsearch indices. Replace `bro` with whichever parser type 
you’ve chosen.
+
+    ```
+       curl -XGET "${ELASTICSEARCH}/bro*/_search"
+       curl -XGET "${ELASTICSEARCH}/bro*/_count"
+       ```
+
+1. You should have data flowing from the parsers all the way through to the 
indices. This completes the Kerberization instructions.
+
+More Information
+----------------
+
+### Kerberos
 
-### Other useful commands
-#### Kerberos
 Unsure of your Kerberos principal associated with a keytab? There are a couple 
ways to get this. One is via the list of principals that Ambari provides via 
downloadable csv. If you didn’t download this list, you can also check the 
principal manually by running the following against the keytab.
+
 ```
 klist -kt /etc/security/keytabs/<keytab-file-name>
 ```
 
 E.g.
+
 ```
 klist -kt /etc/security/keytabs/hbase.headless.keytab
 Keytab name: FILE:/etc/security/keytabs/hbase.headless.keytab
@@ -96,23 +377,35 @@ KVNO Timestamp         Principal
    1 03/28/17 19:29:36 [email protected]
 ```
 
-#### Kafka with Kerberos enabled
+### Kafka with Kerberos enabled
+
+#### Write data to a topic with SASL
 
-##### Write data to a topic with SASL
 ```
-cat sample-yaf.txt | ${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh 
--broker-list ${BROKERLIST}:6667 --security-protocol PLAINTEXTSASL --topic yaf
+cat sample-yaf.txt | ${KAFKA_HOME}/bin/kafka-console-producer.sh \
+       --broker-list ${KAFKA} \
+       --security-protocol PLAINTEXTSASL \
+       --topic yaf
 ```
 
-##### View topic data from latest offset with SASL
+#### View topic data from latest offset with SASL
+
 ```
-${HDP_HOME}/kafka-broker/bin/kafka-console-consumer.sh --zookeeper 
${ZOOKEEPER}:2181 --security-protocol PLAINTEXTSASL --topic yaf
+${KAFKA_HOME}/bin/kafka-console-consumer.sh \
+       --zookeeper ${ZOOKEEPER} \
+       --security-protocol PLAINTEXTSASL \
+       --topic yaf
 ```
 
-##### Modify the sensor-stubs to send logs via SASL
+#### Modify the sensor-stubs to send logs via SASL
 ```
 sed -i 's/node1:6667 --topic/node1:6667 --security-protocol PLAINTEXTSASL 
--topic/' /opt/sensor-stubs/bin/start-*-stub
-for sensorstub in bro snort; do service sensor-stubs stop $sensorstub; service 
sensor-stubs start $sensorstub; done
+for sensorstub in bro snort; do 
+       service sensor-stubs stop ${sensorstub}; 
+       service sensor-stubs start ${sensorstub}; 
+done
 ```
 
-#### References
+### References
+
 * 
[https://github.com/apache/storm/blob/master/SECURITY.md](https://github.com/apache/storm/blob/master/SECURITY.md)
