METRON-821 Minor fixes in full dev kerberos setup instructions (JonZeolla) 
closes apache/incubator-metron#510


Project: http://git-wip-us.apache.org/repos/asf/incubator-metron/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-metron/commit/29d7cb37
Tree: http://git-wip-us.apache.org/repos/asf/incubator-metron/tree/29d7cb37
Diff: http://git-wip-us.apache.org/repos/asf/incubator-metron/diff/29d7cb37

Branch: refs/heads/Metron_0.4.0
Commit: 29d7cb378835b819b326660ed59aca4becbf7099
Parents: 2ecabaa
Author: JonZeolla <[email protected]>
Authored: Wed Apr 19 21:32:58 2017 -0400
Committer: jonzeolla <[email protected]>
Committed: Wed Apr 19 21:32:58 2017 -0400

----------------------------------------------------------------------
 .github/PULL_REQUEST_TEMPLATE.md            |   2 +-
 metron-deployment/vagrant/Kerberos-setup.md | 313 ++++++++++++-----------
 2 files changed, 166 insertions(+), 149 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/29d7cb37/.github/PULL_REQUEST_TEMPLATE.md
----------------------------------------------------------------------
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 56ca8f3..7d5ce2c 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -40,5 +40,5 @@ In order to streamline the review of the contribution we ask you follow these gu
 
 #### Note:
 Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
-It is also recommened that [travis-ci](https://travis-ci.org) is set up for your personal repository such that your branches are built there before submitting a pull request.
+It is also recommended that [travis-ci](https://travis-ci.org) is set up for your personal repository such that your branches are built there before submitting a pull request.
 

http://git-wip-us.apache.org/repos/asf/incubator-metron/blob/29d7cb37/metron-deployment/vagrant/Kerberos-setup.md
----------------------------------------------------------------------
diff --git a/metron-deployment/vagrant/Kerberos-setup.md b/metron-deployment/vagrant/Kerberos-setup.md
index 1fce650..a66da8a 100644
--- a/metron-deployment/vagrant/Kerberos-setup.md
+++ b/metron-deployment/vagrant/Kerberos-setup.md
@@ -2,60 +2,59 @@
 **Note:** These are manual instructions for Kerberizing Metron Storm topologies from Kafka to Kafka. This does not cover the Ambari MPack, sensor connections, or MAAS.
 
 1. Build full dev and ssh into the machine
-  ```
-cd incubator-metron/metron-deployment/vagrant/full-dev-platform
-vagrant up
-vagrant ssh
-  ```
+    ```
+    cd incubator-metron/metron-deployment/vagrant/full-dev-platform
+    vagrant up
+    vagrant ssh
+    ```
 
 2. Export env vars. Replace *node1* with the appropriate hosts if running anywhere other than full-dev Vagrant.
-  ```
-# execute as root
-sudo su -
-export ZOOKEEPER=node1
-export BROKERLIST=node1
-export HDP_HOME="/usr/hdp/current"
-export METRON_VERSION="0.4.0"
-export METRON_HOME="/usr/metron/${METRON_VERSION}"
-  ```
+    ```
+    # execute as root
+    sudo su -
+    export ZOOKEEPER=node1
+    export BROKERLIST=node1
+    export HDP_HOME="/usr/hdp/current"
+    export METRON_VERSION="0.4.0"
+    export METRON_HOME="/usr/metron/${METRON_VERSION}"
+    ```
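As a quick sanity check (an editorial addition, not part of the original steps), you can confirm the variables expand as expected before continuing:

```
# Sanity-check sketch: re-export the step 2 variables and verify METRON_HOME
# resolves to the versioned path. Values mirror the full-dev defaults above.
export ZOOKEEPER=node1
export BROKERLIST=node1
export HDP_HOME="/usr/hdp/current"
export METRON_VERSION="0.4.0"
export METRON_HOME="/usr/metron/${METRON_VERSION}"
echo "${METRON_HOME}"   # /usr/metron/0.4.0
```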
 
 3. Stop all topologies - we will restart them once Kerberos has been enabled.
-  ```
-for topology in bro snort enrichment indexing; do storm kill $topology; done
-  ```
+    ```
+    for topology in bro snort enrichment indexing; do storm kill $topology; done
+    ```
 
 4. Setup Kerberos
-  ```
-# Note: if you copy/paste this full set of commands, the kdb5_util command will not run as expected, so run the commands individually to ensure they all execute
-# set 'node1' to the correct host for your kdc
-yum -y install krb5-server krb5-libs krb5-workstation
-sed -i 's/kerberos.example.com/node1/g' /etc/krb5.conf
-cp /etc/krb5.conf /var/lib/ambari-server/resources/scripts
-# This step takes a moment. It creates the kerberos database.
-kdb5_util create -s
-/etc/rc.d/init.d/krb5kdc start
-/etc/rc.d/init.d/kadmin start
-chkconfig krb5kdc on
-chkconfig kadmin on
-  ```
+    ```
+    # Note: if you copy/paste this full set of commands, the kdb5_util command will not run as expected, so run the commands individually to ensure they all execute
+    # set 'node1' to the correct host for your kdc
+    yum -y install krb5-server krb5-libs krb5-workstation
+    sed -i 's/kerberos.example.com/node1/g' /etc/krb5.conf
+    /bin/cp -f /etc/krb5.conf /var/lib/ambari-server/resources/scripts
+    # This step takes a moment. It creates the kerberos database.
+    kdb5_util create -s
+    /etc/rc.d/init.d/krb5kdc start
+    /etc/rc.d/init.d/kadmin start
+    chkconfig krb5kdc on
+    chkconfig kadmin on
+    ```
 
 5. Setup the admin and metron user principals. You'll kinit as the metron user when running topologies. Make sure to remember the passwords.
-  ```
-kadmin.local -q "addprinc admin/admin"
-kadmin.local -q "addprinc metron"
-  ```
+    ```
+    kadmin.local -q "addprinc admin/admin"
+    kadmin.local -q "addprinc metron"
+    ```
 
 6. Create the metron user HDFS home directory
-  ```
-sudo -u hdfs hdfs dfs -mkdir /user/metron && \
-sudo -u hdfs hdfs dfs -chown metron:hdfs /user/metron && \
-sudo -u hdfs hdfs dfs -chmod 770 /user/metron
-  ```
+    ```
+    sudo -u hdfs hdfs dfs -mkdir /user/metron && \
+    sudo -u hdfs hdfs dfs -chown metron:hdfs /user/metron && \
+    sudo -u hdfs hdfs dfs -chmod 770 /user/metron
+    ```
 
-7. In Ambari, setup Storm to run with Kerberos and run worker jobs as the submitting user:
+7. In [Ambari](http://node1:8080), setup Storm to run with Kerberos and run worker jobs as the submitting user:
 
     a. Add the following properties to custom storm-site:
-
    ```
    topology.auto-credentials=['org.apache.storm.security.auth.kerberos.AutoTGT']
    nimbus.credential.renewers.classes=['org.apache.storm.security.auth.kerberos.AutoTGT']
@@ -87,147 +86,159 @@ sudo -u hdfs hdfs dfs -chmod 770 /user/metron
    ![enable kerberos configure](readme-images/custom-storm-site-final.png)
 
 9. Setup Metron keytab
-  ```
-kadmin.local -q "ktadd -k metron.headless.keytab [email protected]" && \
-cp metron.headless.keytab /etc/security/keytabs && \
-chown metron:hadoop /etc/security/keytabs/metron.headless.keytab && \
-chmod 440 /etc/security/keytabs/metron.headless.keytab
-  ```
+    ```
+    kadmin.local -q "ktadd -k metron.headless.keytab [email protected]" && \
+    cp metron.headless.keytab /etc/security/keytabs && \
+    chown metron:hadoop /etc/security/keytabs/metron.headless.keytab && \
+    chmod 440 /etc/security/keytabs/metron.headless.keytab
+    ```
 
 10. Kinit with the metron user
-  ```
-kinit -kt /etc/security/keytabs/metron.headless.keytab [email protected]
-  ```
+    ```
+    kinit -kt /etc/security/keytabs/metron.headless.keytab [email protected]
+    ```
 
 11. First create any additional Kafka topics you will need. We need to create the topics before adding the required ACLs. The current full dev installation will deploy bro, snort, enrichments, and indexing only. e.g.
-  ```
-${HDP_HOME}/kafka-broker/bin/kafka-topics.sh --zookeeper ${ZOOKEEPER}:2181 --create --topic yaf --partitions 1 --replication-factor 1
-  ```
+    ```
+    ${HDP_HOME}/kafka-broker/bin/kafka-topics.sh --zookeeper ${ZOOKEEPER}:2181 --create --topic yaf --partitions 1 --replication-factor 1
+    ```
 
 12. Setup Kafka ACLs for the topics
-  ```
-export KERB_USER=metron;
-for topic in bro enrichments indexing snort; do
-${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --topic ${topic};
-done;
-  ```
+    ```
+    export KERB_USER=metron
+    for topic in bro enrichments indexing snort; do
+        ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --topic ${topic}
+    done
+    ```
 
 13. Setup Kafka ACLs for the consumer groups
-  ```
-${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group bro_parser;
-${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group snort_parser;
-${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group yaf_parser;
-${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group enrichments;
-${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group indexing;
-  ```
+    ```
+    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group bro_parser
+    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group snort_parser
+    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group yaf_parser
+    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group enrichments
+    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group indexing
+    ```
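Since the five per-group invocations above follow one pattern, they could equally be generated in a loop like the topic ACLs in step 12. Here is a dry-run sketch that only echoes the commands (remove the `echo` to execute them); the defaulted `HDP_HOME`/`ZOOKEEPER` fallback values are illustrative assumptions, not part of the instructions:

```
# Dry run: print the five kafka-acls.sh commands instead of executing them.
# The :- fallbacks below are illustrative defaults, not from the document.
HDP_HOME="${HDP_HOME:-/usr/hdp/current}"
ZOOKEEPER="${ZOOKEEPER:-node1}"
KERB_USER=metron
for group in bro_parser snort_parser yaf_parser enrichments indexing; do
    echo "${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --group ${group}"
done
```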
 
 14. Add metron user to the Kafka cluster ACL
-  ```
-/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --cluster kafka-cluster
-  ```
+    ```
+    ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 --add --allow-principal User:${KERB_USER} --cluster kafka-cluster
+    ```
 
 15. We also need to grant permissions to the HBase tables. Kinit as the hbase user and add ACLs for metron.
-  ```
-kinit -kt /etc/security/keytabs/hbase.headless.keytab [email protected]
-echo "grant 'metron', 'RW', 'threatintel'" | hbase shell
-echo "grant 'metron', 'RW', 'enrichment'" | hbase shell
-  ```
+    ```
+    kinit -kt /etc/security/keytabs/hbase.headless.keytab [email protected]
+    echo "grant 'metron', 'RW', 'threatintel'" | hbase shell
+    echo "grant 'metron', 'RW', 'enrichment'" | hbase shell
+    ```
 
 16. Create a “.storm” directory in the metron user’s home directory and switch to that directory.
-  ```
-su metron && cd ~/
-mkdir .storm
-cd .storm
-  ```
+    ```
+    su metron
+    mkdir ~/.storm
+    cd ~/.storm
+    ```
 
 17. Create a custom client jaas file. This should look identical to the Storm client jaas file located in /etc/storm/conf/client_jaas.conf except for the addition of a Client stanza. The Client stanza is used for Zookeeper. All quotes and semicolons are necessary.
-  ```
-[metron@node1 .storm]$ cat client_jaas.conf
-StormClient {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useTicketCache=true
-   renewTicket=true
-   serviceName="nimbus";
-};
-Client {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   keyTab="/etc/security/keytabs/metron.headless.keytab"
-   storeKey=true
-   useTicketCache=false
-   serviceName="zookeeper"
-   principal="[email protected]";
-};
-KafkaClient {
-   com.sun.security.auth.module.Krb5LoginModule required
-   useKeyTab=true
-   keyTab="/etc/security/keytabs/metron.headless.keytab"
-   storeKey=true
-   useTicketCache=false
-   serviceName="kafka"
-   principal="[email protected]";
-};
-  ```
+    ```
+    cat << EOF > client_jaas.conf
+    StormClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useTicketCache=true
+        renewTicket=true
+        serviceName="nimbus";
+    };
+    Client {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        keyTab="/etc/security/keytabs/metron.headless.keytab"
+        storeKey=true
+        useTicketCache=false
+        serviceName="zookeeper"
+        principal="[email protected]";
+    };
+    KafkaClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        keyTab="/etc/security/keytabs/metron.headless.keytab"
+        storeKey=true
+        useTicketCache=false
+        serviceName="kafka"
+        principal="[email protected]";
+    };
+    EOF
+    ```
 
 18. Create a storm.yaml with jaas file info. Set the array of nimbus hosts accordingly.
-  ```
-[metron@node1 .storm]$ cat storm.yaml
-nimbus.seeds : ['node1']
-java.security.auth.login.config : '/home/metron/.storm/client_jaas.conf'
-storm.thrift.transport : 'org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin'
-  ```
+    ```
+    cat << EOF > storm.yaml
+    nimbus.seeds : ['node1']
+    java.security.auth.login.config : '/home/metron/.storm/client_jaas.conf'
+    storm.thrift.transport : 'org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin'
+    EOF
+    ```
 
 19. Create an auxiliary storm configuration json file in the metron user’s home directory. Note the login config option in the file points to our custom client_jaas.conf.
-  ```
-cd /home/metron
-[metron@node1 ~]$ cat storm-config.json
-{
-  "topology.worker.childopts" : "-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
-}
-  ```
+    ```
+    cat << EOF > ~/storm-config.json
+    {
+        "topology.worker.childopts" : "-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
+    }
+    EOF
+    ```
 
 20. Setup enrichment and indexing.
 
-    a. Modify enrichment.properties - `${METRON_HOME}/config/enrichment.properties`
-
+    a. As root, modify enrichment.properties located at `${METRON_HOME}/config/enrichment.properties`
     ```
-    kafka.security.protocol=PLAINTEXTSASL
-    topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
+    if [[ $EUID -ne 0 ]]; then
+        echo -e "\nERROR:\tYou must be root to run these commands.  You may need to type exit."
+    else
+        sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' ${METRON_HOME}/config/enrichment.properties
+        sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' ${METRON_HOME}/config/enrichment.properties
+    fi
     ```
 
-    b. Modify elasticsearch.properties - `${METRON_HOME}/config/elasticsearch.properties`
-
+    b. As root, modify elasticsearch.properties located at `${METRON_HOME}/config/elasticsearch.properties`
     ```
-    kafka.security.protocol=PLAINTEXTSASL
-    topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
+    if [[ $EUID -ne 0 ]]; then
+        echo -e "\nERROR:\tYou must be root to run these commands.  You may need to type exit."
+    else
+        sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' ${METRON_HOME}/config/elasticsearch.properties
+        sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' ${METRON_HOME}/config/elasticsearch.properties
+    fi
     ```
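To preview the effect of those `sed` substitutions without touching a live install, here is a standalone sketch run against a scratch copy; the scratch file name and its initial property values are placeholders for illustration, not Metron defaults:

```
# Run the same sed substitutions from step 20 against a scratch properties
# file in a temp directory. Initial values below are made-up placeholders.
cd "$(mktemp -d)"
cat > enrichment.properties << 'EOF'
kafka.security.protocol=PLAINTEXT
topology.worker.childopts=
EOF
sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' enrichment.properties
sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' enrichment.properties
grep 'kafka.security.protocol' enrichment.properties
# kafka.security.protocol=PLAINTEXTSASL
```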
 
 21. Kinit with the metron user again
-  ```
-kinit -kt /etc/security/keytabs/metron.headless.keytab [email protected]
-  ```
+    ```
+    su metron
+    cd
+    kinit -kt /etc/security/keytabs/metron.headless.keytab [email protected]
+    ```
 
 22. Restart the parser topologies. Be sure to pass in the new parameter, “-ksp” or “--kafka_security_protocol.” Run this from the metron home directory.
-  ```
-for parser in bro snort; do ${METRON_HOME}/bin/start_parser_topology.sh -z ${ZOOKEEPER}:2181 -s ${parser} -ksp SASL_PLAINTEXT -e storm-config.json; done
-  ```
+    ```
+    for parser in bro snort; do
+        ${METRON_HOME}/bin/start_parser_topology.sh -z ${ZOOKEEPER}:2181 -s ${parser} -ksp SASL_PLAINTEXT -e storm-config.json
+    done
+    ```
 
 23. Now restart the enrichment and indexing topologies.
-  ```
-${METRON_HOME}/bin/start_enrichment_topology.sh
-${METRON_HOME}/bin/start_elasticsearch_topology.sh
-  ```
-
-24. Push some sample data to one of the parser topics. E.g for yaf we took raw data from [incubator-metron/metron-platform/metron-integration-test/src/main/sample/data/yaf/raw/YafExampleOutput](../../metron-platform/metron-integration-test/src/main/sample/data/yaf/raw/YafExampleOutput)
-  ```
-cat sample-yaf.txt | ${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh --broker-list ${BROKERLIST}:6667 --security-protocol SASL_PLAINTEXT --topic yaf
-  ```
-
-25. Wait a few moments for data to flow through the system and then check for data in the Elasticsearch indexes. Replace yaf with whichever parser type you’ve chosen.
-  ```
-curl -XGET "${ZOOKEEPER}:9200/yaf*/_search"
-curl -XGET "${ZOOKEEPER}:9200/yaf*/_count"
-  ```
+    ```
+    ${METRON_HOME}/bin/start_enrichment_topology.sh
+    ${METRON_HOME}/bin/start_elasticsearch_topology.sh
+    ```
+
+24. Push some sample data to one of the parser topics. E.g. for bro we took raw data from [incubator-metron/metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput](../../metron-platform/metron-integration-test/src/main/sample/data/bro/raw/BroExampleOutput)
+    ```
+    cat sample-bro.txt | ${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh --broker-list ${BROKERLIST}:6667 --security-protocol SASL_PLAINTEXT --topic bro
+    ```
+
+25. Wait a few moments for data to flow through the system and then check for data in the Elasticsearch indexes. Replace bro with whichever parser type you’ve chosen.
+    ```
+    curl -XGET "${ZOOKEEPER}:9200/bro*/_search"
+    curl -XGET "${ZOOKEEPER}:9200/bro*/_count"
+    ```
 
 26. You should have data flowing from the parsers all the way through to the indexes. This completes the Kerberization instructions.
 
@@ -263,5 +274,11 @@ cat sample-yaf.txt | ${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh --br
 ${HDP_HOME}/kafka-broker/bin/kafka-console-consumer.sh --zookeeper ${ZOOKEEPER}:2181 --security-protocol PLAINTEXTSASL --topic yaf
 ```
 
+##### Modify the sensor-stubs to send logs via SASL
+```
+sed -i 's/node1:6667 --topic/node1:6667 --security-protocol PLAINTEXTSASL --topic/' /opt/sensor-stubs/bin/start-*-stub
+for sensorstub in bro snort; do service sensor-stubs stop $sensorstub; service sensor-stubs start $sensorstub; done
+```
+
 #### References
 * [https://github.com/apache/storm/blob/master/SECURITY.md](https://github.com/apache/storm/blob/master/SECURITY.md)
