Repository: incubator-hawq-docs
Updated Branches:
  refs/heads/master bce28faa4 -> 501b7d588


add support for active directory KDC server (closes #132)


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: 
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/501b7d58
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/501b7d58
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/501b7d58

Branch: refs/heads/master
Commit: 501b7d5885b3d2029d6687d50b6f91073cdf65fc
Parents: bce28fa
Author: Lisa Owen <[email protected]>
Authored: Thu Oct 26 14:29:23 2017 -0700
Committer: David Yozie <[email protected]>
Committed: Thu Oct 26 14:29:23 2017 -0700

----------------------------------------------------------------------
 .../source/subnavs/apache-hawq-nav.erb          |  13 +-
 .../clientaccess/kerberos-mitkdc.html.md.erb    | 113 ++++
 .../kerberos-securehdfs.html.md.erb             | 219 ++++++
 .../clientaccess/kerberos-userauth.html.md.erb  | 459 +++++++++++++
 markdown/clientaccess/kerberos.html.md.erb      | 670 +------------------
 5 files changed, 815 insertions(+), 659 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/501b7d58/book/master_middleman/source/subnavs/apache-hawq-nav.erb
----------------------------------------------------------------------
diff --git a/book/master_middleman/source/subnavs/apache-hawq-nav.erb 
b/book/master_middleman/source/subnavs/apache-hawq-nav.erb
index 03f0755..30bdba8 100644
--- a/book/master_middleman/source/subnavs/apache-hawq-nav.erb
+++ b/book/master_middleman/source/subnavs/apache-hawq-nav.erb
@@ -158,8 +158,19 @@
           <li>
             <a 
href="/docs/userguide/2.2.0.0-incubating/clientaccess/ldap.html">Using LDAP 
Authentication with TLS/SSL</a>
           </li>
-          <li>
+          <li class="has_submenu">
             <a 
href="/docs/userguide/2.2.0.0-incubating/clientaccess/kerberos.html">Using 
Kerberos Authentication</a>
+            <ul>
+              <li>
+            <a 
href="/docs/userguide/2.2.0.0-incubating/clientaccess/kerberos-securehdfs.html">Configuring
 HAWQ/PXF for Secure HDFS</a>
+              </li>
+              <li>
+            <a 
href="/docs/userguide/2.2.0.0-incubating/clientaccess/kerberos-userauth.html">Configuring
 Kerberos User Authentication for HAWQ</a>
+              </li>
+              <li>
+            <a 
href="/docs/userguide/2.2.0.0-incubating/clientaccess/kerberos-mitkdc.html">Example
 - Setting up an MIT KDC Server</a>
+              </li>
+            </ul>
           </li>
           <li>
             <a 
href="/docs/userguide/2.2.0.0-incubating/clientaccess/disable-kerberos.html">Disabling
 Kerberos Security</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/501b7d58/markdown/clientaccess/kerberos-mitkdc.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/kerberos-mitkdc.html.md.erb 
b/markdown/clientaccess/kerberos-mitkdc.html.md.erb
new file mode 100644
index 0000000..32b1040
--- /dev/null
+++ b/markdown/clientaccess/kerberos-mitkdc.html.md.erb
@@ -0,0 +1,113 @@
+---
+title: Example - Setting up an MIT KDC Server
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Follow this procedure to install and configure a Kerberos KDC server on a Red 
Hat Enterprise Linux host. The KDC server resides on the host named 
\<kdc-server\>.
+
+1. Log in to the Kerberos KDC Server system as a superuser:
+
+    ``` shell
+    $ ssh root@<kdc-server>
+    root@kdc-server$ 
+    ```
+
+2.  Install the Kerberos server packages:
+
+    ``` shell
+    root@kdc-server$ yum install krb5-libs krb5-server krb5-workstation
+    ```
+
+3.  Define the Kerberos realm for your cluster by editing the `/etc/krb5.conf` configuration file. The following example configures a Kerberos server with a realm named `REALM.DOMAIN` residing on a host named `hawq-kdc`.
+
+    ```
+    [logging]
+     default = FILE:/var/log/krb5libs.log
+     kdc = FILE:/var/log/krb5kdc.log
+     admin_server = FILE:/var/log/kadmind.log
+
+    [libdefaults]
+     default_realm = REALM.DOMAIN
+     dns_lookup_realm = false
+     dns_lookup_kdc = false
+     ticket_lifetime = 24h
+     renew_lifetime = 7d
+     forwardable = true
+     default_tgs_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
+     default_tkt_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
+     permitted_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
+
+    [realms]
+     REALM.DOMAIN = {
+      kdc = hawq-kdc:88
+      admin_server = hawq-kdc:749
+      default_domain = hawq-kdc
+     }
+
+    [domain_realm]
+     .hawq-kdc = REALM.DOMAIN
+     hawq-kdc = REALM.DOMAIN
+
+    [appdefaults]
+     pam = {
+        debug = false
+        ticket_lifetime = 36000
+        renew_lifetime = 36000
+        forwardable = true
+        krb4_convert = false
+       }
+    ```
+
+    The `kdc` and `admin_server` keys in the `[realms]` section specify the 
host \(`hawq-kdc`\) and port on which the Kerberos server is running. You can 
use an IP address in place of a host name.
+
+    If your Kerberos server manages authentication for other realms, you would instead add the `REALM.DOMAIN` realm to the `[realms]` and `[domain_realm]` sections of the `krb5.conf` file. See the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/) for detailed information about the `krb5.conf` configuration file.
+
+4. Note the Kerberos KDC server host name or IP address and the name of the 
realm in which your cluster resides. You will need this information in later 
procedures.
+5.  Create a Kerberos KDC database by running the `kdb5_util` command:
+
+    ```
+    root@kdc-server$ kdb5_util create -s
+    ```
+
+    The `kdb5_util create` command creates the database in which the keys for 
the Kerberos realms managed by this KDC server are stored. The `-s` option 
instructs the command to create a stash file. Without the stash file, the KDC 
server will request a password every time it starts.
+
+6.  Add an administrative user to the Kerberos KDC database with the 
`kadmin.local` utility. Because it does not itself depend on Kerberos 
authentication, the `kadmin.local` utility allows you to add an initial 
administrative user to the local Kerberos server. To add the user `admin` as an 
administrative user to the KDC database, run the following command:
+
+    ```
+    root@kdc-server$ kadmin.local -q "addprinc admin/admin"
+    ```
+
+    Most users do not need administrative access to the Kerberos server. They 
can use `kadmin` to manage their own principals \(for example, to change their 
own password\). For information about `kadmin`, see the [Kerberos 
documentation](http://web.mit.edu/kerberos/krb5-latest/doc/).
+
+7.  If required, edit the `/var/kerberos/krb5kdc/kadm5.acl` file to grant the 
appropriate permissions to `admin`.
+8.  Start the Kerberos daemons:
+
+    ```
+    root@kdc-server$ /sbin/service krb5kdc start
+    root@kdc-server$ /sbin/service kadmin start
+    ```
+
+9.  To start Kerberos automatically upon system restart:
+
+    ```
+    root@kdc-server$ /sbin/chkconfig krb5kdc on
+    root@kdc-server$ /sbin/chkconfig kadmin on
+    ```
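Before deploying, the realm stanzas from step 3 can be sanity-checked offline. A minimal sketch, assuming the draft is written to a scratch path (`/tmp/krb5.conf.draft` is a hypothetical location, not one the installer uses):

``` shell
# Write a pared-down draft of the step 3 configuration to a scratch file,
# then confirm the realm is declared in both [realms] and [domain_realm]
# before copying the real file into place on the KDC server.
CONF=/tmp/krb5.conf.draft
cat > "$CONF" <<'EOF'
[libdefaults]
 default_realm = REALM.DOMAIN

[realms]
 REALM.DOMAIN = {
  kdc = hawq-kdc:88
  admin_server = hawq-kdc:749
 }

[domain_realm]
 .hawq-kdc = REALM.DOMAIN
 hawq-kdc = REALM.DOMAIN
EOF
grep -q 'REALM.DOMAIN = {' "$CONF" \
  && grep -A2 '^\[domain_realm\]' "$CONF" | grep -q 'REALM.DOMAIN' \
  && echo "realm declared in both sections"
```

On a real KDC host, the reviewed draft would then replace `/etc/krb5.conf` before starting the daemons.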

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/501b7d58/markdown/clientaccess/kerberos-securehdfs.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/kerberos-securehdfs.html.md.erb 
b/markdown/clientaccess/kerberos-securehdfs.html.md.erb
new file mode 100644
index 0000000..27369e5
--- /dev/null
+++ b/markdown/clientaccess/kerberos-securehdfs.html.md.erb
@@ -0,0 +1,219 @@
+---
+title: Configuring HAWQ/PXF for Secure HDFS
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+When Kerberos is enabled for your HDFS filesystem, HAWQ, as an HDFS client, 
requires a principal and keytab file to authenticate access to HDFS 
(filesystem) and YARN (resource management). If you have enabled Kerberos at 
the HDFS filesystem level, you will create and deploy principals for your HDFS 
cluster, and ensure that Kerberos authentication is enabled and functioning for 
all HDFS client services, including HAWQ and PXF.
+
+You will perform different procedures depending upon whether you use Ambari to 
manage your HAWQ cluster or you manage your cluster from the command line.
+
+## <a id="task_kerbhdfs_ambarimgd"></a>Procedure for Ambari-Managed Clusters
+
+If you manage your cluster with Ambari, you will enable Kerberos 
authentication for your cluster as described in the [Enabling Kerberos 
Authentication Using 
Ambari](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/configuring_amb_hdp_for_kerberos.html)
 Hortonworks documentation. The Ambari **Kerberos Security Wizard** guides you 
through the kerberization process, including installing Kerberos client 
packages on cluster nodes, syncing Kerberos configuration files, updating 
cluster configuration, and creating and distributing the Kerberos principals 
and keytab files for your Hadoop cluster services, including HAWQ and PXF. 
+
+## <a id="task_kerbhdfs_cmdlinemgd"></a>Procedure for Command-Line-Managed 
Clusters
+
+**Note**: HAWQ does not support command-line-managed clusters employing an 
Active Directory KDC.
+
+If you manage your cluster from the command line, before you configure HAWQ and PXF for access to a secure HDFS filesystem, ensure that you have:
+
+- Enabled Kerberos for your Hadoop cluster per the instructions for your 
specific distribution and verified the configuration.
+
+- Verified that the HDFS configuration parameter 
`dfs.block.access.token.enable` is set to `true`. You can find this setting in 
the `hdfs-site.xml` configuration file.
+
+- Noted the host name or IP address of your HAWQ \<master\> and Kerberos Key 
Distribution Center \(KDC\) \<kdc-server\> nodes.
+
+- Noted the name of the Kerberos \<realm\> in which your cluster resides.
+
+- Distributed the `/etc/krb5.conf` Kerberos configuration file from the KDC server node to **each** HAWQ and PXF cluster node if not already present. For example:
+
+    ``` shell
+    $ ssh root@<hawq-node>
+    root@hawq-node$ cp /etc/krb5.conf /save/krb5.conf.save
+    root@hawq-node$ scp <kdc-server>:/etc/krb5.conf /etc/krb5.conf
+    ```
+
+- Verified that the Kerberos client packages are installed on **each** HAWQ and PXF node, installing them if missing:
+
+    ``` shell
+    root@hawq-node$ rpm -qa | grep krb
+    root@hawq-node$ yum install krb5-libs krb5-workstation
+    ```
+
+#### <a id="task_kerbhdfs_cmdlinemgd_steps"></a>Procedure
+
+Perform the following steps to configure HAWQ and PXF for a secure HDFS. You 
will perform operations on both the HAWQ \<master\> and the \<kdc-server\> 
nodes.
+
+1.  Log in to the Kerberos KDC server as the `root` user.
+
+    ``` shell
+    $ ssh root@<kdc-server>
+    root@kdc-server$ 
+    ```
+
+2.  Use the `kadmin.local` command to create a Kerberos principal for the 
`postgres` user. Substitute your \<realm\>. For example:
+
+    ``` shell
+    root@kdc-server$ kadmin.local -q "addprinc -randkey [email protected]"
+    ```
+
+3.  Use `kadmin.local` to create a Kerberos service principal for **each** 
host on which a PXF agent is configured and running. The service principal 
should be of the form `pxf/<host>@<realm>` where \<host\> is the DNS 
resolvable, fully-qualified hostname of the PXF host system \(output of 
`hostname -f` command\).
+
+    For example, these commands add service principals for three PXF nodes on 
the hosts host1.example.com, host2.example.com, and host3.example.com:
+
+    ``` shell
+    root@kdc-server$ kadmin.local -q "addprinc -randkey 
pxf/[email protected]"
+    root@kdc-server$ kadmin.local -q "addprinc -randkey 
pxf/[email protected]"
+    root@kdc-server$ kadmin.local -q "addprinc -randkey 
pxf/[email protected]"
+    ```
+
+    **Note:** As an alternative, if you have a hosts file that lists the 
fully-qualified domain name of each PXF host \(one host per line\), then you 
can generate principals using the command:
+
+    ``` shell
+    root@kdc-server$ for HOST in $(cat hosts) ; do sudo kadmin.local -q 
"addprinc -randkey pxf/[email protected]" ; done
+    ```
+
+4.  Generate a keytab file for each principal that you created in the previous 
steps \(i.e. `postgres` and each `pxf/<host>`\). Save the keytab files in any 
convenient location \(this example uses the directory 
`/etc/security/keytabs`\). You will deploy the service principal keytab files 
to their respective HAWQ and PXF host machines in a later step. For example:
+
+    ``` shell
+    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/hawq.service.keytab [email protected]"
+    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/pxf-host1.service.keytab 
pxf/[email protected]"
+    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/pxf-host2.service.keytab 
pxf/[email protected]"
+    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/pxf-host3.service.keytab 
pxf/[email protected]"
+    root@kdc-server$ kadmin.local -q "listprincs"
+    ```
+
+    Repeat the `xst` command as necessary to generate a keytab for each HAWQ 
and PXF service principal that you created in the previous steps.
+
+5.  The HAWQ master server requires a 
`/etc/security/keytabs/hdfs.headless.keytab` keytab file for the HDFS 
principal. If this file does not already exist on the HAWQ master node, create 
the principal and generate the keytab. For example:
+
+    ``` shell
+    root@kdc-server$ kadmin.local -q "addprinc -randkey [email protected]"
+    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/hdfs.headless.keytab [email protected]"
+    ```
+
+6.  Copy the HAWQ service keytab file \(and the HDFS headless keytab file if you created one\) to the HAWQ master segment host. For example:
+
+    ``` shell
+    root@kdc-server$ scp /etc/security/keytabs/hawq.service.keytab 
<master>:/etc/security/keytabs/hawq.service.keytab
+    root@kdc-server$ scp /etc/security/keytabs/hdfs.headless.keytab 
<master>:/etc/security/keytabs/hdfs.headless.keytab
+    ```
+
+7.  Change the ownership and permissions on `hawq.service.keytab` (and 
`hdfs.headless.keytab`) as follows:
+
+    ``` shell
+    root@kdc-server$ ssh <master> chown gpadmin:gpadmin 
/etc/security/keytabs/hawq.service.keytab
+    root@kdc-server$ ssh <master> chmod 400 
/etc/security/keytabs/hawq.service.keytab
+    root@kdc-server$ ssh <master> chown hdfs:hdfs 
/etc/security/keytabs/hdfs.headless.keytab
+    root@kdc-server$ ssh <master> chmod 400 
/etc/security/keytabs/hdfs.headless.keytab
+    ```
+
+8.  Copy the keytab file for each PXF service principal to its respective 
host. For example:
+
+    ``` shell
+    root@kdc-server$ scp /etc/security/keytabs/pxf-host1.service.keytab 
host1.example.com:/etc/security/keytabs/pxf.service.keytab
+    root@kdc-server$ scp /etc/security/keytabs/pxf-host2.service.keytab 
host2.example.com:/etc/security/keytabs/pxf.service.keytab
+    root@kdc-server$ scp /etc/security/keytabs/pxf-host3.service.keytab 
host3.example.com:/etc/security/keytabs/pxf.service.keytab
+    ```
+
+    Note the keytab file location on each PXF host; you will need this information for a later configuration step.
+
+9. Change the ownership and permissions on the `pxf.service.keytab` files. For 
example:
+
+    ``` shell
+    root@kdc-server$ ssh host1.example.com chown pxf:pxf 
/etc/security/keytabs/pxf.service.keytab
+    root@kdc-server$ ssh host1.example.com chmod 400 
/etc/security/keytabs/pxf.service.keytab
+    root@kdc-server$ ssh host2.example.com chown pxf:pxf 
/etc/security/keytabs/pxf.service.keytab
+    root@kdc-server$ ssh host2.example.com chmod 400 
/etc/security/keytabs/pxf.service.keytab
+    root@kdc-server$ ssh host3.example.com chown pxf:pxf 
/etc/security/keytabs/pxf.service.keytab
+    root@kdc-server$ ssh host3.example.com chmod 400 
/etc/security/keytabs/pxf.service.keytab
+    ```
+
+10. On **each** PXF node, edit the `/etc/pxf/conf/pxf-site.xml` configuration 
file to identify the local keytab file and security principal name. Add or 
uncomment the properties, substituting your \<realm\>. For example:
+
+    ``` xml
+    <property>
+        <name>pxf.service.kerberos.keytab</name>
+        <value>/etc/security/keytabs/pxf.service.keytab</value>
+        <description>path to keytab file owned by pxf service
+        with permissions 0400</description>
+    </property>
+
+    <property>
+        <name>pxf.service.kerberos.principal</name>
+        <value>pxf/[email protected]</value>
+        <description>Kerberos principal pxf service should use.
+        _HOST is replaced automatically with hostnames
+        FQDN</description>
+    </property>
+    ```
+
+11. Perform the remaining steps on the HAWQ master node as the `gpadmin` user:
+    1.  Log in to the HAWQ master node and set up the HAWQ runtime environment:
+
+        ``` shell
+        $ ssh gpadmin@<master>
+        gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
+        ```
+
+    2.  Run the following commands to configure Kerberos HDFS security for 
HAWQ and identify the keytab file:
+
+        ``` shell
+        gpadmin@master$ hawq config -c enable_secure_filesystem -v ON
+        gpadmin@master$ hawq config -c krb_server_keyfile -v 
/etc/security/keytabs/hawq.service.keytab
+        ```
+
+    3.  Start the HAWQ service:
+
+        ``` shell
+        gpadmin@master$ hawq start cluster -a
+        ```
+
+    4.  Obtain an HDFS Kerberos ticket and change the ownership and permissions of the HAWQ HDFS data directory, substituting the HDFS data directory path for your HAWQ cluster. For example:
+
+        ``` shell
+        gpadmin@master$ sudo -u hdfs kinit -kt 
/etc/security/keytabs/hdfs.headless.keytab hdfs
+        gpadmin@master$ sudo -u hdfs hdfs dfs -chown -R postgres:gpadmin 
/<hawq_data_hdfs_path>
+        ```
+
+    5.  On the **HAWQ master node and each segment node**, edit the `/usr/local/hawq/etc/hdfs-client.xml` file to enable Kerberos security and assign the HDFS NameNode principal. Add or uncomment the following properties in each file:
+
+        ``` xml
+        <property>
+          <name>hadoop.security.authentication</name>
+          <value>kerberos</value>
+        </property>
+        ```
+
+    6.  If you are using YARN for resource management, edit the `yarn-client.xml` file to enable Kerberos security. Add or uncomment the following property in the `yarn-client.xml` file on the **HAWQ master and each HAWQ segment node**:
+
+        ``` xml
+        <property>
+          <name>hadoop.security.authentication</name>
+          <value>kerberos</value>
+        </property>
+        ```
+
+    7.  Restart your HAWQ cluster:
+
+        ``` shell
+        gpadmin@master$ hawq restart cluster -a -M fast
+        ```
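The per-host keytab distribution and permission commands in steps 8 and 9 can be generated from a hosts file, mirroring the loop shown in step 3. A minimal dry-run sketch (the host names and the `/tmp/pxf-hosts` file are placeholders; the `scp`/`ssh` commands are only echoed here, not executed):

``` shell
# Dry run: print, rather than execute, the distribution commands for each
# PXF host listed one-per-line in a hosts file. Remove the leading "echo"
# on each command to perform the copies and permission changes for real.
printf '%s\n' host1.example.com host2.example.com host3.example.com > /tmp/pxf-hosts
while read -r HOST; do
  SHORT=${HOST%%.*}     # host1, host2, host3
  echo scp /etc/security/keytabs/pxf-${SHORT}.service.keytab \
      ${HOST}:/etc/security/keytabs/pxf.service.keytab
  echo ssh ${HOST} chown pxf:pxf /etc/security/keytabs/pxf.service.keytab
  echo ssh ${HOST} chmod 400 /etc/security/keytabs/pxf.service.keytab
done < /tmp/pxf-hosts
```

Reviewing the echoed commands before running them helps catch a mismatched keytab name or hostname before any file lands on the wrong node.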

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/501b7d58/markdown/clientaccess/kerberos-userauth.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/kerberos-userauth.html.md.erb 
b/markdown/clientaccess/kerberos-userauth.html.md.erb
new file mode 100644
index 0000000..39f0280
--- /dev/null
+++ b/markdown/clientaccess/kerberos-userauth.html.md.erb
@@ -0,0 +1,459 @@
+---
+title: Configuring Kerberos User Authentication for HAWQ
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+When Kerberos authentication is enabled at the user level, HAWQ uses the 
Generic Security Service Application Program Interface \(GSSAPI\) to provide 
automatic authentication \(single sign-on\). When HAWQ uses Kerberos user 
authentication, both the HAWQ server and those HAWQ users \(roles\) that use 
Kerberos authentication require a principal and a keytab. When a user attempts 
to log in to HAWQ, the HAWQ server uses its Kerberos principal to connect to 
the Kerberos server, and presents the user's principal for Kerberos validation. 
If the user's principal is valid, then login succeeds and the user can access 
HAWQ. Conversely, the login fails and HAWQ denies access to the user if the 
principal is not valid.
+
+When HAWQ utilizes Kerberos for user authentication, it uses a single HAWQ server principal to connect to the Kerberos KDC. The format of the HAWQ server principal is `postgres/<FQDN_of_master>@<realm>`, where \<FQDN\_of\_master\> refers to the fully qualified domain name of the HAWQ master node.
+
+(You may choose to configure HAWQ user principals before you enable Kerberos 
user authentication for HAWQ. See [Configuring Kerberos-Authenticated HAWQ 
Users](#hawq_kerb_user_cfg).)
+
+The procedure to configure Kerberos user authentication for HAWQ includes:
+
+1. Configuring the HAWQ principal:
+
+    1. If you use an MIT Kerberos KDC Server, refer to [Configuring the HAWQ 
Principals using an MIT KDC Server](#hawq_kerb_cfg_mitkdc).  
+
+    2. If you use an Active Directory Kerberos KDC Server, refer to 
[Configuring the HAWQ Principal using an AD KDC Server](#hawq_kerb_cfg_adkdc).  
+
+2. [Configuring HAWQ to use Kerberos Authentication](#hawq_kerb_cfg)  
+3. [Configuring Kerberos-Authenticated HAWQ Users](#hawq_kerb_user_cfg)  
+4. [Authenticating User Access to HAWQ](#hawq_kerb_dbaccess)  
+
+## <a id="hawq_kerb_cfg_mitkdc"></a>Step 1a: Configuring the HAWQ Principals 
using an MIT KDC Server
+
+Perform the following procedure to configure HAWQ Kerberos and `gpadmin` 
principals when you are using an MIT KDC server. 
+
+**Note**: Some operations may differ based on whether or not you have 
configured secure HDFS. These operations are called out below.
+
+1. Log in to the Kerberos KDC server system:
+
+    ``` shell
+    $ ssh root@<kdc-server>
+    root@kdc-server$ 
+    ```
+
+2. Create a keytab entry for the HAWQ server principal using the `kadmin.local` command. Substitute the fully qualified domain name of the HAWQ master node and your Kerberos realm. For example:
+
+    ``` shell
+    root@kdc-server$ kadmin.local -q "addprinc -randkey 
postgres/<master>@REALM.DOMAIN"
+    ```
+    
+    The `addprinc` command adds the principal `postgres/<master>` to the KDC 
managing your \<realm\>.
+
+3. Generate a keytab file for the HAWQ server principal. Provide the same name 
you used to create the principal.
+
+    **If you have configured Kerberos for your HDFS filesystem**, add the 
keytab to the HAWQ client HDFS keytab file:
+    
+    ``` shell
+    root@kdc-server$ kadmin.local -q "xst -norandkey -k 
/etc/security/keytabs/hawq.service.keytab postgres/<master>@REALM.DOMAIN"
+    ```
+    
+    **Otherwise**, generate a new file for the keytab:
+
+    ``` shell
+    root@kdc-server$ kadmin.local -q "xst -norandkey -k hawq-krb5.keytab 
postgres/<master>@REALM.DOMAIN"
+    ```
+
+4. Use the `klist` command to view the key you just generated:
+
+    ``` shell
+    root@kdc-server$ klist -ket ./hawq-krb5.keytab
+    ```
+    
+    Or:
+    
+    ``` shell
+    root@kdc-server$ klist -ket /etc/security/keytabs/hawq.service.keytab
+    ```
+    
+    The `-ket` option lists the keytabs and encryption types in the identified 
key file.
+
+5. When you enable Kerberos user authentication for HAWQ, you must create a Kerberos principal for `gpadmin` or another HAWQ administrative user. Create the principal for the HAWQ `gpadmin` administrative role, substituting your Kerberos realm. For example:
+
+    ``` shell
+    root@kdc-server$ kadmin.local -q "addprinc -pw changeme 
[email protected]"
+    ```
+    
+    This `addprinc` command adds the principal `gpadmin` to the Kerberos KDC 
managing your \<realm\>. When you invoke `kadmin.local` as specified in the 
example above, `gpadmin` will be required to provide the password identified by 
the `-pw` option when authenticating. Alternatively, you can create a keytab 
file for the `gpadmin` principal and distribute the file to HAWQ client nodes.
+
+6. Copy the file in which you added the `postgres/<master>@<realm>` keytab to 
the HAWQ master node:
+
+    ``` shell
+    root@kdc-server$ scp ./hawq-krb5.keytab gpadmin@<master>:/home/gpadmin/
+    ```
+    
+    Or:
+    
+    ``` shell
+    root@kdc-server$ scp /etc/security/keytabs/hawq.service.keytab 
gpadmin@<master>:/etc/security/keytabs/hawq.service.keytab
+    ```
+
+## <a id="hawq_kerb_cfg_adkdc"></a>Step 1b: Configuring the HAWQ Principal 
using an AD KDC Server
+
+Perform the following procedure to configure a HAWQ Kerberos principal when 
you are using an AD KDC server.
+
+1. Log on to the Windows Active Directory Kerberos KDC server system as a user 
with administrator privileges.
+
+2. From the **Start** menu, select **Control Panel > Administrative Tools > 
Active Directory Users and Computers**. (If the **Active Directory Users and 
Computers** menu item is not available, the Active Directory service may not 
have been correctly installed.)
+
+    The **Active Directory Users and Computers** window displays.
+
+3. When you enable Kerberos user authentication for HAWQ, you must create a 
Kerberos principal for the `gpadmin` HAWQ administrative user. Use the left 
pane tree view to navigate to your Kerberos \<realm\> **Managed Service 
Accounts** folder, right-click, and select **New > User** to create a user with 
this name.
+
+    The **New Object - User** wizard displays.
+   
+4. Fill in the **New Object - User** fields:
+
+    **First name:**  gpadmin  
+    **User logon name:**  gpadmin
+    
+5. Click **Next** to advance to the next screen.
+
+6. Add and confirm the password. Be sure to enable the **Password never 
expires** checkbox.
+
+7. Click **Next**, and then **Finish** to complete creation of the `gpadmin` 
user.
+
+8. Open an administrative terminal window or command prompt session on the 
Windows AD KDC server system. Be sure to select **Run as administrator -> Yes**.
+
+9. Add a Service Principal Name (SPN) for the `gpadmin` account you just created. Substitute the fully qualified domain name of your HAWQ master node. This hostname must be resolvable from the Windows AD KDC server. For example:
+
+    ``` shell
+    PS C:\Users\Administrator> setspn -A postgres/<master> gpadmin
+    ```
+    
+    The `setspn` command adds the principal `postgres/<master>` to the KDC 
managing your \<realm\>.
+
+10. Create a keytab file for the `postgres/<master>` SPN using the `ktpass` command. Substitute the fully qualified domain name of the HAWQ master node and your Kerberos realm. For example:
+
+    ```shell
+    PS C:\Users\Administrator> ktpass -princ postgres/<master>@<realm> -pass 
changeme -mapuser gpadmin -crypto ALL -ptype KRB5_NT_PRINCIPAL -out 
hawq-krb5.keytab -kvno 0
+    ```
+
+    The `ktpass` command generates a keytab file named `hawq-krb5.keytab`.
+
+11. Copy the keytab file to the HAWQ master node.
+
+
+## <a id="hawq_kerb_cfg"></a>Step 2: Configuring HAWQ to use Kerberos 
Authentication
+
+Perform the following procedure to configure Kerberos user authentication for 
HAWQ. You will perform operations on the HAWQ \<master\> node. 
+
+1. Log in to the HAWQ master node as the `gpadmin` user and set up the HAWQ 
environment:
+
+    ``` shell
+    $ ssh gpadmin@<master>
+    gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
+    ```
+    
+2. If you copied the `hawq-krb5.keytab` file, set the ownership and mode of this file:
+    ``` shell
+    gpadmin@master$ chown gpadmin:gpadmin /home/gpadmin/hawq-krb5.keytab
+    gpadmin@master$ chmod 400 /home/gpadmin/hawq-krb5.keytab
+    ```
+
+    The HAWQ server keytab file must be readable (and preferably only 
readable) by the HAWQ `gpadmin` administrative account.
+
+3. Add a `pg_hba.conf` entry that mandates Kerberos authentication for HAWQ. 
The `pg_hba.conf` file resides in the directory specified by the 
`hawq_master_directory` server configuration parameter value. For example, add:
+
+    ``` pre
+    host all all 0.0.0.0/0 gss include_realm=0 krb_realm=REALM.DOMAIN
+    ``` 
+
+    This `pg_hba.conf` entry specifies that any remote access to HAWQ (that is, from any user on any remote host) must be authenticated through the Kerberos realm named `REALM.DOMAIN`.
+   
+    **Note**: Place the Kerberos entry in the appropriate location in the 
`pg_hba.conf` file. For example, you may choose to retain `pg_hba.conf` entries 
for the `gpadmin` user that grant `trust` or `ident` authentication for local 
connections. Locate the Kerberos entry after these line(s). Refer to 
[Configuring Client Authentication](client_auth.html) for additional 
information about the `pg_hba.conf` file.
+
+4. Update the HAWQ configuration and restart your cluster. The procedure differs depending on whether you manage your cluster with Ambari or from the command line.
+
+    **Note**: After you restart your HAWQ cluster, Kerberos user authentication is enabled for HAWQ, and all users, including `gpadmin`, must authenticate before performing any HAWQ operations.
+
+    1. If you manage your cluster using Ambari or are employing a Windows 
Active Directory KDC server:
+    
+        1.  Log in to the Ambari UI from a supported web browser.
+
+        2. Navigate to the **HAWQ** service, **Configs > Advanced** tab and 
expand the **Custom hawq-site** drop down.
+
+        3. Set the `krb_server_keyfile` path value to the new keytab file 
location, `/home/gpadmin/hawq-krb5.keytab`.
+
+        4. **Save** this configuration change and then select the now orange 
**Restart > Restart All Affected** menu button to restart your HAWQ cluster.
+
+        5. Exit the Ambari UI.  
+    
+    2. If you manage your cluster from the command line:
+    
+        1.  Update the `krb_server_keyfile` configuration parameter:
+
+            ``` shell
+            gpadmin@master$ hawq config -c krb_server_keyfile -v 
'/home/gpadmin/hawq-krb5.keytab'
+            GUC krb_server_keyfile already exist in hawq-site.xml
+            Update it with value: /home/gpadmin/hawq-krb5.keytab
+            GUC      : krb_server_keyfile
+            Value    : /home/gpadmin/hawq-krb5.keytab
+            ```
+
+        2.  Restart your HAWQ cluster:
+
+            ``` shell
+            gpadmin@master$ hawq restart cluster
+            ```
+
+5. When Kerberos user authentication is enabled for HAWQ, all users, including 
the `gpadmin` administrative user, must request a ticket to authenticate before 
performing HAWQ operations. Generate a ticket for `gpadmin` on the HAWQ master 
node. You may be required to enter a password if you specified one when you 
created the principal. For example:
+
+    ``` shell
+    gpadmin@master$ kinit gpadmin@<realm>
+    ```
+
+    **Note**: If you are using an Active Directory KDC server and the `kinit` 
command fails with the error "Preauthentication failed while getting initial 
credentials", navigate to the `gpadmin` **Account options** view on the Windows 
AD server system and enable the **Do not require Kerberos preauthentication** 
checkbox.
+
+    See [Authenticating User Access to HAWQ](#hawq_kerb_dbaccess) for more information about requesting and generating Kerberos tickets. 
+
+
+## <a id="hawq_kerb_user_cfg"></a>Step 3: Configuring Kerberos-Authenticated 
HAWQ Users
+
+You must configure HAWQ user principals for Kerberos. The first component of a 
HAWQ user principal must be the HAWQ user/role name:
+
+``` pre
+<hawq-user>@<realm>
+```
+
+This procedure includes:
+
+- Identifying an existing HAWQ role or creating a new HAWQ role for each user 
you want to authenticate with Kerberos
+- Creating a Kerberos principal for each role
+- Optionally generating and distributing a keytab file to all HAWQ clients 
from which you will access HAWQ as the new role
+
+
+### <a id="hawq_kerb_user_cfg_proc" class="no-quick-link"></a>Procedure 
+
+Perform the following steps to configure Kerberos authentication for specific 
HAWQ users. You will perform operations on both the HAWQ \<master\> and the 
\<kdc-server\> nodes.
+
+1. Log in to the HAWQ master node as the `gpadmin` user and set up your HAWQ 
environment:
+
+    ``` shell
+    $ ssh gpadmin@master
+    gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
+    ```
+
+2. Identify the name of an existing HAWQ user/role or create a new HAWQ 
user/role. For example:
+
+    ``` shell
+    gpadmin@master$ psql -d template1 -c 'CREATE ROLE "bill_kerberos" with 
LOGIN;'
+    ```
+
+    This step creates a HAWQ operational role. Create an administrative HAWQ 
role by adding the `SUPERUSER` clause to the `CREATE ROLE` command.
+
+3. Create a principal for the HAWQ role. Substitute the Kerberos realm you noted earlier. 
+
+    MIT KDC server example:
+
+    ``` shell
+    $ ssh root@<kdc-server>
+    root@kdc-server$ kadmin.local -q "addprinc -pw changeme 
[email protected]"
+    ```
+
+    Active Directory KDC server example (generates a keytab file):
+
+    ```shell
+    PS C:\Users\Administrator> ktpass -princ [email protected] -pass 
changeme -mapuser bill -crypto ALL -ptype KRB5_NT_PRINCIPAL -out 
bill-krb5.keytab -kvno 0
+    ```
+
+4. You may choose to authenticate the HAWQ role with a password or a keytab file. 
+
+    1. If you choose password authentication, no further configuration is required. `bill_kerberos` provides the password identified by the `-pw` (MIT) or `-pass` (Active Directory) option when authenticating. Skip the rest of this step.
+    
+    2. If you choose authentication via a keytab file:
+    
+        1. Generate a keytab file for the HAWQ principal you created, again 
substituting your Kerberos realm. 
+
+            MIT KDC server example:
+
+            ``` shell
+            root@kdc-server$ kadmin.local -q "xst -k bill-krb5.keytab 
[email protected]"
+            ```
+
+            The keytab entry is saved to the `./bill-krb5.keytab` file.
+
+        2. View the key you just added to `bill-krb5.keytab`:
+
+            ``` shell
+            root@kdc-server$ klist -ket ./bill-krb5.keytab
+            ```
+
+        3. Distribute the keytab file to **each** HAWQ node from which you 
will access the HAWQ master as the user/role. For example:
+
+            ``` shell
+            root@kdc-server$ scp ./bill-krb5.keytab 
bill@<hawq-node>:/home/bill/
+            ```
+
+5. Log in to the HAWQ node as the user for whom you created the principal and set up your HAWQ environment:
+
+    ``` shell
+    $ ssh bill@<hawq-node>
+    bill@hawq-node$ . /usr/local/hawq/greenplum_path.sh
+    ```
+
+6. If you are using keytab file authentication, set the required ownership and mode of the keytab file:
+
+    ``` shell
+    bill@hawq-node$ chown bill:bill /home/bill/bill-krb5.keytab
+    bill@hawq-node$ chmod 400 /home/bill/bill-krb5.keytab
+    ```
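
    To confirm that the mode is set as intended, you can inspect it with `stat`; the following self-contained check demonstrates on a temporary file (GNU `stat -c` assumed):

    ``` shell
    # Create a sample file, restrict it as shown above, and report its mode.
    f=$(mktemp)
    chmod 400 "$f"
    stat -c '%a' "$f"    # prints 400 (owner read-only)
    rm -f "$f"
    ```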
+
+7. Access HAWQ as the new `bill_kerberos` user:
+
+    ``` shell
+    bill@hawq-node$ psql -d testdb -h <master> -U bill_kerberos
+    psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may 
provide more information
+    GSSAPI continuation error: Credentials cache file '/tmp/krb5cc_502' not 
found
+    ```
+
+    The operation fails because the `bill_kerberos` user has not yet authenticated with the Kerberos server. The next section, [Authenticating User Access to HAWQ](#hawq_kerb_dbaccess), describes how HAWQ users authenticate with Kerberos.
+
+## <a id="hawq_kerb_dbaccess"></a>Step 4: Authenticating User Access to HAWQ 
+
+When Kerberos user authentication is enabled for HAWQ, users must request a 
ticket from the Kerberos KDC server before connecting to HAWQ. You must request 
the ticket for a principal matching the requested database user name. When 
granted, the ticket expires after a set period of time, after which you will 
need to request another ticket.
+   
+To generate a Kerberos ticket, run the `kinit` command, specifying the Kerberos principal for which you are requesting the ticket. You may optionally provide the path to a keytab file with the `-k -t` options.
+
+For example, to request a ticket for the `bill_kerberos` user principal you 
created above using the keytab file for authentication:
+
+``` shell
+bill@hawq-node$ kinit -k -t /home/bill/bill-krb5.keytab 
[email protected]
+```
+
+To request a ticket for the `bill_kerberos` user principal using password 
authentication:
+
+``` shell
+bill@hawq-node$ kinit [email protected]
+Password for [email protected]:
+```
+
+`kinit` prompts you for the password; enter the password at the prompt.
+
+For more information about the ticket, use the `klist` command. `klist` 
invoked without any arguments lists the currently held Kerberos principal and 
tickets. The command output also provides the ticket expiration time. 
+
+Example output from the `klist` command:
+
+``` shell
+bill@hawq-node$ klist
+Ticket cache: FILE:/tmp/krb5cc_502
+Default principal: [email protected]
+
+Valid starting     Expires            Service principal
+06/07/17 23:16:04  06/08/17 23:16:04  krbtgt/[email protected]
+       renew until 06/07/17 23:16:04
+06/07/17 23:16:07  06/08/17 23:16:04  postgres/master@
+       renew until 06/07/17 23:16:04
+06/07/17 23:16:07  06/08/17 23:16:04  postgres/[email protected]
+       renew until 06/07/17 23:16:04
+```
+
+After generating a ticket, you can connect to a HAWQ database as a Kerberos-authenticated user using `psql` or other client programs.
+
+### <a id="topic7" class="no-quick-link"></a>Name Mapping 
+
+To simplify Kerberos-authenticated HAWQ user login, you can define a mapping between a user's Kerberos principal name and a HAWQ database user name. You define the mapping in the `pg_ident.conf` file, and apply it by adding a `map=<map-name>` option to the relevant entry in the `pg_hba.conf` file. 
+
+The `pg_ident.conf` and `pg_hba.conf` files reside on the HAWQ master node in 
the directory identified by the `hawq_master_directory` server configuration 
parameter setting value.
+
+You use the `pg_ident.conf` file to define user name maps. You can create 
entries in this file that define a mapping name, a Kerberos principal name, and 
a HAWQ database user name. For example:
+
+```
+# MAPNAME   SYSTEM-USERNAME      HAWQ-USERNAME
+kerbmap     /^([a-z]+)_kerberos      \1
+```
+
+This entry extracts the component preceding the `_kerberos` suffix of the Kerberos principal name and maps it to a HAWQ user/role.
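
You can preview the effect of the `kerbmap` regular expression with `sed`, which supports the same capture-group substitution (extended-regex flag assumed; `pg_ident.conf` matching is similar but not identical to `sed` semantics):

``` shell
# The parenthesized group captures "bill"; \1 substitutes it as the HAWQ name.
echo "bill_kerberos" | sed -E 's/^([a-z]+)_kerberos/\1/'    # prints bill
```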
+
+You identify the map name in the `pg_hba.conf` file entry that enables 
Kerberos support using the `map=<mapname>` option. For example:
+
+```
+host all all 0.0.0.0/0 gss include_realm=0 krb_realm=REALM.DOMAIN map=kerbmap
+```
+
+Suppose that you are logged in as Linux user `bsmith`, your Kerberos principal 
is `[email protected]`, and you want to log in to HAWQ as user `bill`. 
With the `kerbmap` mapping configured in `pg_ident.conf` and `pg_hba.conf` as 
described above and a ticket for Kerberos principal `bill_kerberos`, you log in 
to HAWQ with the user name `bill` as follows:
+
+``` shell
+bsmith@hawq-node$ klist
+Ticket cache: FILE:/tmp/krb5cc_500
+Default principal: [email protected]
+bsmith@hawq-node$ psql -d testdb -h <master> -U bill
+psql (8.2.15)
+Type "help" for help.
+
+testdb=> SELECT current_user;
+ current_user
+--------------
+ bill
+(1 row)
+```
+
+For more information about specifying username maps, see [Username maps](http://www.postgresql.org/docs/8.4/static/auth-username-maps.html) in the PostgreSQL documentation.
+
+## <a id="client_considerations"></a>Kerberos Considerations for Non-HAWQ 
Clients
+
+If you access HAWQ databases from any clients outside of your HAWQ cluster, 
and Kerberos user authentication for HAWQ is enabled, you must specifically 
configure Kerberos access on each client system. Ensure that:
+
+- You have created the appropriate Kerberos principal for the HAWQ user and 
optionally generated and distributed the keytab file.
+- The `krb5-libs` and `krb5-workstation` Kerberos client packages are 
installed on each Linux client.
+- You copy the `/etc/krb5.conf` Kerberos configuration file from the KDC or 
HAWQ master node to each client system.
+- The HAWQ user requests a ticket before connecting to HAWQ.
+
+## <a id="topic9"></a>Configuring JDBC for Kerberos-Enabled HAWQ
+
+JDBC applications that you run must use a secure connection when Kerberos is configured for HAWQ user authentication.
+
+The following example database connection URL uses a PostgreSQL JDBC driver 
and specifies parameters for Kerberos authentication:
+
+```
+jdbc:postgresql://master:5432/testdb?kerberosServerName=postgres&jaasApplicationName=pgjdbc&user=bill_kerberos
+```
+
+The connection URL parameter names and values specified will depend upon how 
the Java application performs Kerberos authentication.
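
The URL above combines the standard PostgreSQL JDBC parts with Kerberos-specific query parameters; assembling it from variables (example values only) makes the individual parameters easier to see:

``` shell
# Example values only; kerberosServerName and jaasApplicationName must match
# the HAWQ service principal name and the JAAS configuration entry name.
host=master; port=5432; db=testdb; user=bill_kerberos
url="jdbc:postgresql://${host}:${port}/${db}?kerberosServerName=postgres&jaasApplicationName=pgjdbc&user=${user}"
echo "$url"
```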
+
+Before configuring JDBC access to a kerberized HAWQ, verify that:
+
+- The Java Cryptography Extension (JCE) is installed on the client system 
(non-OpenJDK Java installations).
+- Kerberos user authentication is configured for HAWQ as described in 
[Configure Kerberos User Authentication for HAWQ](#hawq_kerb_cfg).
+- If you are accessing HAWQ from a client node that resides outside of your 
HAWQ cluster, verify that the client is configured as described in [Kerberos 
Considerations for Non-HAWQ Clients](#client_considerations).
+
+### <a id="topic9_proc" class="no-quick-link"></a>Procedure
+
+Perform the following procedure to enable Kerberos-authenticated JDBC access 
to HAWQ from a client system.
+
+1.  Create or add the following to the `.java.login.config` file in the 
`$HOME` directory of the user account under which the application will run:
+
+    ``` pre
+    pgjdbc {
+      com.sun.security.auth.module.Krb5LoginModule required
+      doNotPrompt=true
+      useTicketCache=true
+      debug=true
+      client=true;
+    };
+    ```
+
+2.  Generate a Kerberos ticket.
+
+3.  Run the JDBC-based HAWQ application.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/501b7d58/markdown/clientaccess/kerberos.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/clientaccess/kerberos.html.md.erb 
b/markdown/clientaccess/kerberos.html.md.erb
index 464aef4..6d88d5c 100644
--- a/markdown/clientaccess/kerberos.html.md.erb
+++ b/markdown/clientaccess/kerberos.html.md.erb
@@ -33,667 +33,21 @@ Before configuring Kerberos authentication for HAWQ, 
ensure that:
 -   System time on the Kerberos server and HAWQ hosts is synchronized. \(For 
example, install the `ntp` package on both servers.\)
 -   Network connectivity exists between the Kerberos server and all nodes in 
the HAWQ cluster.
 -   Java 1.7.0\_17 or later is installed on all nodes in your cluster. Java 
1.7.0_17 is required to use Kerberos-authenticated JDBC on Red Hat Enterprise 
Linux 6.x or 7.x.
--   You can identify the Key Distribution Center \(KDC\) server you use for 
Kerberos authentication. See [Example: Install and Configure a Kerberos KDC 
Server](#task_setup_kdc) if you have not yet set up your KDC.
+-   You can identify the Key Distribution Center \(KDC\) server you use for 
Kerberos authentication and the Kerberos realm in which your cluster resides. 
+    - If you plan to use an MIT Kerberos KDC Server but have not yet 
configured it, see [Example: Setting up an MIT Kerberos KDC 
Server](kerberos-mitkdc.html) for example instructions.
+    - If you are using an existing Active Directory KDC Server, also ensure 
that you have:
+        - Installed all Active Directory service roles on your AD KDC server.
+        - Enabled the LDAP service.
 
-## <a id="task_kerbhdfs"></a>Configuring HAWQ/PXF for Secure HDFS
+        Refer to the [Using an Existing Active 
Directory](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_security/content/_use_an_existing_active_directory_domain.html)
 Hortonworks documentation for additional preparation instructions.
+        
+**Note**: HAWQ supports Active Directory KDC servers only for Ambari-managed 
clusters. HAWQ does not support command-line-managed clusters employing an 
Active Directory KDC server.
 
-When Kerberos is enabled for your HDFS filesystem, HAWQ, as an HDFS client, 
requires a principal and keytab file to authenticate access to HDFS 
(filesystem) and YARN (resource management). If you have enabled Kerberos at 
the HDFS filesystem level, you will create and deploy principals for your HDFS 
cluster, and ensure that Kerberos authentication is enabled and functioning for 
all HDFS client services, including HAWQ and PXF. 
 
-### <a id="task_kerbhdfs_ambarimgd"></a>Procedure for Ambari-Managed Clusters
+## <a id="kerberos_procedures"></a>Procedure
 
-If you manage your cluster with Ambari, you will enable Kerberos 
authentication for your cluster as described in the [Enabling Kerberos 
Authentication Using 
Ambari](https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/configuring_amb_hdp_for_kerberos.html)
 Hortonworks documentation. The Ambari **Kerberos Security Wizard** guides you 
through the kerberization process, including installing Kerberos client 
packages on cluster nodes, syncing Kerberos configuration files, updating 
cluster configuration, and creating and distributing the Kerberos principals 
and keytab files for your Hadoop cluster services, including HAWQ and PXF. 
+You can configure Kerberos for HAWQ both for secure HDFS access and for user authentication. Each requires a different procedure:
 
-### <a id="task_kerbhdfs_cmdlinemgd"></a>Procedure for Command-Line-Managed 
Clusters
+- [Configuring HAWQ/PXF for Secure HDFS](kerberos-securehdfs.html)  
+- [Configuring Kerberos User Authentication for HAWQ](kerberos-userauth.html)
 
-If you manage your cluster from the command line, before you configure HAWQ 
and PXF for access to a secure HDFS filesystem ensure that you have:
-
-- Enabled Kerberos for your Hadoop cluster per the instructions for your 
specific distribution and verified the configuration.
-
-- Verified that the HDFS configuration parameter 
`dfs.block.access.token.enable` is set to `true`. You can find this setting in 
the `hdfs-site.xml` configuration file.
-
-- Noted the host name or IP address of your HAWQ \<master\> and Kerberos Key 
Distribution Center \(KDC\) \<kdc-server\> nodes.
-
-- Noted the name of the Kerberos \<realm\> in which your cluster resides.
-
-- Distributed the `/etc/krb5.conf` Kerberos configuration file on the KDC 
server node to **each** HAWQ and PXF cluster node if not already present. For 
example:
-
-    ``` shell
-    $ ssh root@<hawq-node>
-    root@hawq-node$ cp /etc/krb5.conf /save/krb5.conf.save
-    root@hawq-node$ scp <kdc-server>:/etc/krb5.conf /etc/krb5.conf
-    ```
-
-- Verified that the Kerberos client packages are installed on **each** HAWQ 
and PXF node.
-
-    ``` shell
-    root@hawq-node$ rpm -qa | grep krb
-    root@hawq-node$ yum install krb5-libs krb5-workstation
-    ```
-
-#### <a id="task_kerbhdfs_cmdlinemgd_steps"></a>Procedure
-
-Perform the following steps to configure HAWQ and PXF for a secure HDFS. You 
will perform operations on both the HAWQ \<master\> and the \<kdc-server\> 
nodes.
-
-1.  Log in to the Kerberos KDC server as the `root` user.
-
-    ``` shell
-    $ ssh root@<kdc-server>
-    root@kdc-server$ 
-    ```
-
-2.  Use the `kadmin.local` command to create a Kerberos principal for the 
`postgres` user. Substitute your \<realm\>. For example:
-
-    ``` shell
-    root@kdc-server$ kadmin.local -q "addprinc -randkey [email protected]"
-    ```
-
-3.  Use `kadmin.local` to create a Kerberos service principal for **each** 
host on which a PXF agent is configured and running. The service principal 
should be of the form `pxf/<host>@<realm>` where \<host\> is the DNS 
resolvable, fully-qualified hostname of the PXF host system \(output of 
`hostname -f` command\).
-
-    For example, these commands add service principals for three PXF nodes on 
the hosts host1.example.com, host2.example.com, and host3.example.com:
-
-    ``` shell
-    root@kdc-server$ kadmin.local -q "addprinc -randkey 
pxf/[email protected]"
-    root@kdc-server$ kadmin.local -q "addprinc -randkey 
pxf/[email protected]"
-    root@kdc-server$ kadmin.local -q "addprinc -randkey 
pxf/[email protected]"
-    ```
-
-    **Note:** As an alternative, if you have a hosts file that lists the 
fully-qualified domain name of each PXF host \(one host per line\), then you 
can generate principals using the command:
-
-    ``` shell
-    root@kdc-server$ for HOST in $(cat hosts) ; do sudo kadmin.local -q 
"addprinc -randkey pxf/[email protected]" ; done
-    ```
-
-4.  Generate a keytab file for each principal that you created in the previous 
steps \(i.e. `postgres` and each `pxf/<host>`\). Save the keytab files in any 
convenient location \(this example uses the directory 
`/etc/security/keytabs`\). You will deploy the service principal keytab files 
to their respective HAWQ and PXF host machines in a later step. For example:
-
-    ``` shell
-    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/hawq.service.keytab [email protected]"
-    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/pxf-host1.service.keytab 
pxf/[email protected]"
-    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/pxf-host2.service.keytab 
pxf/[email protected]"
-    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/pxf-host3.service.keytab 
pxf/[email protected]"
-    root@kdc-server$ kadmin.local -q "listprincs"
-    ```
-
-    Repeat the `xst` command as necessary to generate a keytab for each HAWQ 
and PXF service principal that you created in the previous steps.
-
-5.  The HAWQ master server requires a 
`/etc/security/keytabs/hdfs.headless.keytab` keytab file for the HDFS 
principal. If this file does not already exist on the HAWQ master node, create 
the principal and generate the keytab. For example:
-
-    ``` shell
-    root@kdc-server$ kadmin.local -q "addprinc -randkey [email protected]"
-    root@kdc-server$ kadmin.local -q "xst -k 
/etc/security/keytabs/hdfs.headless.keytab [email protected]"
-    ```
-
-6.  Copy the HAWQ service keytab file \(and the HDFS headless keytab file if 
you created one) to the HAWQ master segment host. For example:
-
-    ``` shell
-    root@kdc-server$ scp /etc/security/keytabs/hawq.service.keytab 
<master>:/etc/security/keytabs/hawq.service.keytab
-    root@kdc-server$ scp /etc/security/keytabs/hdfs.headless.keytab 
<master>:/etc/security/keytabs/hdfs.headless.keytab
-    ```
-
-7.  Change the ownership and permissions on `hawq.service.keytab` (and 
`hdfs.headless.keytab`) as follows:
-
-    ``` shell
-    root@kdc-server$ ssh <master> chown gpadmin:gpadmin 
/etc/security/keytabs/hawq.service.keytab
-    root@kdc-server$ ssh <master> chmod 400 
/etc/security/keytabs/hawq.service.keytab
-    root@kdc-server$ ssh <master> chown hdfs:hdfs 
/etc/security/keytabs/hdfs.headless.keytab
-    root@kdc-server$ ssh <master> chmod 400 
/etc/security/keytabs/hdfs.headless.keytab
-    ```
-
-8.  Copy the keytab file for each PXF service principal to its respective 
host. For example:
-
-    ``` shell
-    root@kdc-server$ scp /etc/security/keytabs/pxf-host1.service.keytab 
host1.example.com:/etc/security/keytabs/pxf.service.keytab
-    root@kdc-server$ scp /etc/security/keytabs/pxf-host2.service.keytab 
host2.example.com:/etc/security/keytabs/pxf.service.keytab
-    root@kdc-server$ scp /etc/security/keytabs/pxf-host3.service.keytab 
host3.example.com:/etc/security/keytabs/pxf.service.keytab
-    ```
-
-    Note the keytab file location on each PXF host; you will need this  
information for a later configuration step.
-
-9. Change the ownership and permissions on the `pxf.service.keytab` files. For 
example:
-
-    ``` shell
-    root@kdc-server$ ssh host1.example.com chown pxf:pxf 
/etc/security/keytabs/pxf.service.keytab
-    root@kdc-server$ ssh host1.example.com chmod 400 
/etc/security/keytabs/pxf.service.keytab
-    root@kdc-server$ ssh host2.example.com chown pxf:pxf 
/etc/security/keytabs/pxf.service.keytab
-    root@kdc-server$ ssh host2.example.com chmod 400 
/etc/security/keytabs/pxf.service.keytab
-    root@kdc-server$ ssh host3.example.com chown pxf:pxf 
/etc/security/keytabs/pxf.service.keytab
-    root@kdc-server$ ssh host3.example.com chmod 400 
/etc/security/keytabs/pxf.service.keytab
-    ```
-
-10. On **each** PXF node, edit the `/etc/pxf/conf/pxf-site.xml` configuration 
file to identify the local keytab file and security principal name. Add or 
uncomment the properties, substituting your \<realm\>. For example:
-
-    ``` xml
-    <property>
-        <name>pxf.service.kerberos.keytab</name>
-        <value>/etc/security/keytabs/pxf.service.keytab</value>
-        <description>path to keytab file owned by pxf service
-        with permissions 0400</description>
-    </property>
-
-    <property>
-        <name>pxf.service.kerberos.principal</name>
-        <value>pxf/[email protected]</value>
-        <description>Kerberos principal pxf service should use.
-        _HOST is replaced automatically with hostnames
-        FQDN</description>
-    </property>
-    ```
-
-11. Perform the remaining steps on the HAWQ master node as the `gpadmin` user:
-    1.  Log in to the HAWQ master node and set up the HAWQ runtime environment:
-
-        ``` shell
-        $ ssh gpadmin@<master>
-        gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
-        ```
-
-    2.  Run the following commands to configure Kerberos HDFS security for 
HAWQ and identify the keytab file:
-
-        ``` shell
-        gpadmin@master$ hawq config -c enable_secure_filesystem -v ON
-        gpadmin@master$ hawq config -c krb_server_keyfile -v 
/etc/security/keytabs/hawq.service.keytab
-        ```
-
-    3.  Start the HAWQ service:
-
-        ``` shell
-        gpadmin@master$ hawq start cluster -a
-        ```
-
-    4.  Obtain a HDFS Kerberos ticket and change the ownership and permissions 
of the HAWQ HDFS data directory, substituting the HDFS data directory path for 
your HAWQ cluster. For example:
-
-        ``` shell
-        gpadmin@master$ sudo -u hdfs kinit -kt 
/etc/security/keytabs/hdfs.headless.keytab hdfs
-        gpadmin@master$ sudo -u hdfs hdfs dfs -chown -R postgres:gpadmin 
/<hawq_data_hdfs_path>
-        ```
-
-    5.  On the **HAWQ master node and each segment node**, edit the 
`/usr/local/hawq/etc/hdfs-client.xml` file to enable kerberos security and 
assign the HDFS NameNode principal. Add or uncomment the following properties 
in each file:
-
-        ``` xml
-        <property>
-          <name>hadoop.security.authentication</name>
-          <value>kerberos</value>
-        </property>
-        ```
-
-    6.  If you are using YARN for resource management, edit the 
`yarn-client.xml` file to enable kerberos security. Add or uncomment the 
following property in the `yarn-client.xml` file on the **HAWQ master and each 
HAWQ segment node**:
-
-        ``` xml
-        <property>
-          <name>hadoop.security.authentication</name>
-          <value>kerberos</value>
-        </property>
-        ```
-
-    7.  Restart your HAWQ cluster:
-
-        ``` shell
-        gpadmin@master$ hawq restart cluster -a -M fast
-        ```
-
-## <a id="hawq_kerb_cfg"></a>Configuring Kerberos User Authentication for HAWQ
-
-When Kerberos authentication is enabled at the user level, HAWQ uses the 
Generic Security Service Application Program Interface \(GSSAPI\) to provide 
automatic authentication \(single sign-on\). When HAWQ uses Kerberos user 
authentication, HAWQ itself and the HAWQ users \(roles\) that require Kerberos 
authentication require a principal and keytab. When a user attempts to log in 
to HAWQ, HAWQ uses its Kerberos principal to connect to the Kerberos server, 
and presents the user's principal for Kerberos validation. If the user 
principal is valid, login succeeds and the user can access HAWQ. Conversely, 
the login fails and HAWQ denies access to the user if the principal is not 
valid.
-
-When HAWQ utilizes Kerberos for user authentication, it uses a standard 
principal to connect to the Kerberos KDC. The format of this principal is 
`postgres/<FQDN_of_master>@<realm>`, where \<FQDN\_of\_master\> refers to the 
fully qualified distinguish name of the HAWQ master node.
-
-You may choose to configure HAWQ user principals before you enable Kerberos 
user authentication for HAWQ. See [Configure Kerberos-Authenticated HAWQ 
Users](#hawq_kerb_user_cfg).
-
-The procedure to configure Kerberos user authentication for HAWQ includes:
-
-- Creating a Kerberos principal and generating and distributing a keytab entry 
for the `postgres` process on the HAWQ master node
-- Creating a Kerberos principal for the `gpadmin` or another administrative 
HAWQ user
-- Updating the HAWQ `pg_hba.conf` configuration file to specify Kerberos 
authentication
-- Restarting the HAWQ cluster
-
-Perform the following steps to configure Kerberos user authentication for 
HAWQ. You will perform operations on both the HAWQ \<master\> and the 
\<kdc-server\> nodes. 
-
-**Note**: Some operations may differ based on whether or not you have 
configured secure HDFS. These operations are called out below.
-
-1. Log in to the Kerberos KDC server system:
-
-    ``` shell
-    $ ssh root@<kdc-server>
-    root@kdc-server$ 
-    ```
-
-2. Create a keytab entry for the HAWQ `postgres/<master>` principal using the 
`kadmin.local` command. Substitute the HAWQ master node fully qualified 
distinguished hostname and your Kerberos realm. For example:
-
-    ``` shell
-    root@kdc-server$ kadmin.local -q "addprinc -randkey 
postgres/<master>@REALM.DOMAIN"
-    ```
-    
-    The `addprinc` command adds the principal `postgres/<master>` to the KDC 
managing your \<realm\>.
-
-3. Generate a keytab file for the HAWQ `postgres/<master>` principal. Provide 
the same name you used to create the principal.
-
-    **If you have configured Kerberos for your HDFS filesystem**, add the 
keytab to the HAWQ client HDFS keytab file:
-    
-    ``` shell
-    root@kdc-server$ kadmin.local -q "xst -norandkey -k 
/etc/security/keytabs/hawq.service.keytab postgres/<master>@REALM.DOMAIN"
-    ```
-    
-    **Otherwise**, generate a new file for the keytab:
-
-    ``` shell
-    root@kdc-server$ kadmin.local -q "xst -norandkey -k hawq-krb5.keytab 
postgres/<master>@REALM.DOMAIN"
-    ```
-
-4. Use the `klist` command to view the key you just generated:
-
-    ``` shell
-    root@kdc-server$ klist -ket ./hawq-krb5.keytab
-    ```
-    
-    Or:
-    
-    ``` shell
-    root@kdc-server$ klist -ket /etc/security/keytabs/hawq.service.keytab
-    ```
-    
-    The `-ket` option lists the keytabs and encryption types in the identified 
key file.
-
-5. When you enable Kerberos user authentication for HAWQ, you must create a 
Kerberos principal for `gpadmin` or another HAWQ administrative user. Create a 
Kerberos principal for the HAWQ `gpadmin` administrative role, substituting 
your Kerberos realm. For example:
-
-    ``` shell
-    root@kdc-server$ kadmin.local -q "addprinc -pw changeme 
[email protected]"
-    ```
-    
-    This `addprinc` command adds the principal `gpadmin` to the Kerberos KDC 
managing your \<realm\>. When you invoke `kadmin.local` as specified in the 
example above, `gpadmin` will be required to provide the password identified by 
the `-pw` option when authenticating. Alternatively, you can create a keytab 
file for the `gpadmin` principal and distribute the file to HAWQ client nodes.
-
-6. Copy the file in which you added the `postgres/<master>@<realm>` keytab to 
the HAWQ master node:
-
-    ``` shell
-    root@kdc-server$ scp ./hawq-krb5.keytab gpadmin@<master>:/home/gpadmin/
-    ```
-    
-    Or:
-    
-    ``` shell
-    root@kdc-server$ scp /etc/security/keytabs/hawq.service.keytab 
gpadmin@<master>:/etc/security/keytabs/hawq.service.keytab
-    ```
-
-7. Log in to the HAWQ master node as the `gpadmin` user and set up the HAWQ 
environment:
-
-    ``` shell
-    $ ssh gpadmin@<master>
-    gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
-    ```
-    
-8. If you copied the `hawq-krb5.keytab` file, verify the ownership and mode of 
this file:
-
-    ``` shell
-    gpadmin@master$ chown gpadmin:gpadmin /home/gpadmin/hawq-krb5.keytab
-    gpadmin@master$ chmod 400 /home/gpadmin/hawq-krb5.keytab
-    ```
-
-    The HAWQ server keytab file must be readable (and preferably only 
readable) by the HAWQ `gpadmin` administrative account.
-
-9. Add a `pg_hba.conf` entry that mandates Kerberos authentication for HAWQ. 
The `pg_hba.conf` file resides in the directory specified by the 
`hawq_master_directory` server configuration parameter value. For example, add:
-
-    ``` pre
-    host all all 0.0.0.0/0 gss include_realm=0 krb_realm=REALM.DOMAIN
-    ``` 
-
-    This `pg_hba.conf` entry specifies that any remote access (i.e. from any 
user on any remote host) to HAWQ must be authenticated through the Kerberos 
realm named `REALM.DOMAIN`.
-   
-    **Note**: Place the Kerberos entry in the appropriate location in the 
`pg_hba.conf` file. For example, you may choose to retain `pg_hba.conf` entries 
for the `gpadmin` user that grant `trust` or `ident` authentication for local 
connections. Locate the Kerberos entry after these line(s). Refer to 
[Configuring Client Authentication](client_auth.html) for additional 
information about the `pg_hba.conf` file.
-
-10. Update the HAWQ configuration and restart your cluster. The procedure differs depending on whether you use Ambari to manage your cluster or manage it from the command line.
-
-    **Note**: After you restart your HAWQ cluster, Kerberos user authentication is enabled for HAWQ, and all users, including `gpadmin`, must authenticate before performing any HAWQ operations.
-
-    1. If you manage your cluster using Ambari:
-    
-        1.  Log in to the Ambari UI from a supported web browser.
-
-        2. Navigate to the **HAWQ** service, **Configs > Advanced** tab, and expand the **Custom hawq-site** drop-down.
-
-        3. Set the `krb_server_keyfile` path value to the new keytab file 
location, `/home/gpadmin/hawq-krb5.keytab`.
-
-        4. **Save** this configuration change and then select the now orange 
**Restart > Restart All Affected** menu button to restart your HAWQ cluster.
-
-        5. Exit the Ambari UI.  
-    
-    2. If you manage your cluster from the command line:
-    
-        1.  Update the `krb_server_keyfile` configuration parameter:
-
-            ``` shell
-            gpadmin@master$ hawq config -c krb_server_keyfile -v 
'/home/gpadmin/hawq-krb5.keytab'
-            GUC krb_server_keyfile already exist in hawq-site.xml
-            Update it with value: /home/gpadmin/hawq-krb5.keytab
-            GUC      : krb_server_keyfile
-            Value    : /home/gpadmin/hawq-krb5.keytab
-            ```
-
-        2.  Restart your HAWQ cluster:
-
-            ``` shell
-            gpadmin@master$ hawq restart cluster
-            ```
-
-11. When Kerberos user authentication is enabled for HAWQ, all users, including the `gpadmin` administrative user, must request a ticket to authenticate before performing HAWQ operations. Generate a ticket for `gpadmin` on the HAWQ master node; enter the password that you specified when you created the principal:
-
-    ``` shell
-    gpadmin@master$ kinit gpadmin@<realm>
-    Password for [email protected]:
-    ```
-
-    See [Authenticate User Access to HAWQ](#hawq_kerb_dbaccess) for more 
information about requesting and generating Kerberos tickets. 
-
-### <a id="hawq_kerb_user_cfg"></a>Configuring Kerberos-Authenticated HAWQ 
Users
-
-You must configure HAWQ user principals for Kerberos. The first component of a 
HAWQ user principal must be the HAWQ user/role name:
-
-``` pre
-<hawq-user>@<realm>
-```
-
-This procedure includes:
-
-- Identifying an existing HAWQ role or creating a new HAWQ role for each user 
you want to authenticate with Kerberos
-- Creating a Kerberos principal for each role
-- Optionally generating and distributing a keytab file to all HAWQ clients 
from which you will access HAWQ as the new role
-
-
-#### Procedure <a id="hawq_kerb_user_cfg_proc"></a>
-
-Perform the following steps to configure Kerberos authentication for specific 
HAWQ users. You will perform operations on both the HAWQ \<master\> and the 
\<kdc-server\> nodes.
-
-1. Log in to the HAWQ master node as the `gpadmin` user and set up your HAWQ 
environment:
-
-    ``` shell
-    $ ssh gpadmin@<master>
-    gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
-    ```
-
-2. Identify the name of an existing HAWQ user/role or create a new HAWQ 
user/role. For example:
-
-    ``` shell
-    gpadmin@master$ psql -d template1 -c 'CREATE ROLE "bill_kerberos" with 
LOGIN;'
-    ```
-
-    This step creates a HAWQ operational role. Create an administrative HAWQ 
role by adding the `SUPERUSER` clause to the `CREATE ROLE` command.
-
-3. Create a principal for the HAWQ role. Substitute the Kerberos realm you noted earlier. For example:
-
-    ``` shell
-    $ ssh root@<kdc-server>
-    root@kdc-server$ kadmin.local -q "addprinc -pw changeme 
[email protected]"
-    ```
-    
-    This `addprinc` command adds the principal `bill_kerberos` to the Kerberos 
KDC managing your \<realm\>.
-
-4. You may choose to authenticate the HAWQ role with a password or a keytab file.
-
-    1. If you choose password authentication, no further configuration is required. `bill_kerberos` provides the password specified with the `-pw` option when authenticating. Skip the rest of this step.
-    
-    2. If you choose authentication via a keytab file:
-    
-        1. Generate a keytab file for the HAWQ principal you created, again 
substituting your Kerberos realm. For example:
-
-            ``` shell
-            root@kdc-server$ kadmin.local -q "xst -k bill-krb5.keytab 
[email protected]"
-            ```
-
-            The keytab entry is saved to the `./bill-krb5.keytab` file.
-
-        2. View the key you just added to `bill-krb5.keytab`:
-
-            ``` shell
-            root@kdc-server$ klist -ket ./bill-krb5.keytab
-            ```
-
-        3. Distribute the keytab file to **each** HAWQ node from which you 
will access the HAWQ master as the user/role. For example:
-
-            ``` shell
-            root@kdc-server$ scp ./bill-krb5.keytab 
bill@<hawq-node>:/home/bill/
-            ```
-
-5. Log in to the HAWQ node as the user for whom you created the principal and set up your HAWQ environment:
-
-    ``` shell
-    $ ssh bill@<hawq-node>
-    bill@hawq-node$ . /usr/local/hawq/greenplum_path.sh
-    ```
-
-6. If you are using keytab file authentication, set the ownership and mode of the keytab file:
-
-    ``` shell
-    bill@hawq-node$ chown bill:bill /home/bill/bill-krb5.keytab
-    bill@hawq-node$ chmod 400 /home/bill/bill-krb5.keytab
-    ```
-
-7. Access HAWQ as the new `bill_kerberos` user:
-
-    ``` shell
-    bill@hawq-node$ psql -d testdb -h <master> -U bill_kerberos
-    psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may 
provide more information
-    GSSAPI continuation error: Credentials cache file '/tmp/krb5cc_502' not 
found
-    ```
-
-    The operation fails. The `bill_kerberos` user has not yet authenticated 
with the Kerberos server. The next section, [Authenticating User Access to 
HAWQ](#hawq_kerb_dbaccess), identifies the procedure required for HAWQ users to 
authenticate with Kerberos.
-
-### <a id="hawq_kerb_dbaccess"></a>Authenticating User Access to HAWQ 
-
-When Kerberos user authentication is enabled for HAWQ, users must request a 
ticket from the Kerberos KDC server before connecting to HAWQ. You must request 
the ticket for a principal matching the requested database user name. When 
granted, the ticket expires after a set period of time, after which you will 
need to request another ticket.
-   
-To generate a Kerberos ticket, run the `kinit` command, specifying the Kerberos principal for which you are requesting the ticket. You may optionally provide the path to a keytab file.
-
-For example, to request a ticket for the `bill_kerberos` user principal you 
created above using the keytab file for authentication:
-
-``` shell
-bill@hawq-node$ kinit -k -t /home/bill/bill-krb5.keytab 
[email protected]
-```
-
-To request a ticket for the `bill_kerberos` user principal using password 
authentication:
-
-``` shell
-bill@hawq-node$ kinit [email protected]
-Password for [email protected]:
-```
-
-`kinit` prompts you for the password; enter the password at the prompt.
-
-For more information about the ticket, use the `klist` command. `klist` 
invoked without any arguments lists the currently held Kerberos principal and 
tickets. The command output also provides the ticket expiration time. 
-
-Example output from the `klist` command:
-
-``` shell
-bill@hawq-node$ klist
-Ticket cache: FILE:/tmp/krb5cc_502
-Default principal: [email protected]
-
-Valid starting     Expires            Service principal
-06/07/17 23:16:04  06/08/17 23:16:04  krbtgt/[email protected]
-       renew until 06/07/17 23:16:04
-06/07/17 23:16:07  06/08/17 23:16:04  postgres/master@
-       renew until 06/07/17 23:16:04
-06/07/17 23:16:07  06/08/17 23:16:04  postgres/[email protected]
-       renew until 06/07/17 23:16:04
-```
-
-After generating a ticket, you can connect to a HAWQ database as a 
kerberos-authenticated user using `psql` or other client programs.
-
-#### <a id="topic7"></a>Name Mapping 
-
-To simplify Kerberos-authenticated HAWQ user login, you can define a mapping between a user's Kerberos principal name and a HAWQ database user name. You define the mapping in the `pg_ident.conf` file and apply it by specifying a `map=<map-name>` option on the relevant entry in the `pg_hba.conf` file.
-
-The `pg_ident.conf` and `pg_hba.conf` files reside on the HAWQ master node in 
the directory identified by the `hawq_master_directory` server configuration 
parameter setting value.
-
-You define user name maps in the `pg_ident.conf` file. Each entry in this file specifies a map name, a Kerberos principal name (or a regular expression that matches one), and a HAWQ database user name. For example:
-
-```
-# MAPNAME   SYSTEM-USERNAME      HAWQ-USERNAME
-kerbmap     /^([a-z]+)_kerberos      \1
-```
-
-This entry extracts the component preceding `_kerberos` in the Kerberos principal name and maps it to the HAWQ user/role of that name.
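-
-As an illustrative check (outside of HAWQ), you can apply the same regular expression with `sed` to confirm what the map produces:
-
``` shell
# The kerbmap pattern strips the "_kerberos" suffix, mapping the
# principal component "bill_kerberos" to the HAWQ user name "bill".
mapped=$(echo "bill_kerberos" | sed -E 's/^([a-z]+)_kerberos/\1/')
echo "$mapped"    # prints "bill"
```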
-
-You identify the map name in the `pg_hba.conf` file entry that enables 
Kerberos support using the `map=<mapname>` option. For example:
-
-```
-host all all 0.0.0.0/0 gss include_realm=0 krb_realm=REALM.DOMAIN map=kerbmap
-```
-
-Suppose that you are logged in as Linux user `bsmith`, your Kerberos principal 
is `[email protected]`, and you want to log in to HAWQ as user `bill`. 
With the `kerbmap` mapping configured in `pg_ident.conf` and `pg_hba.conf` as 
described above and a ticket for Kerberos principal `bill_kerberos`, you log in 
to HAWQ with the user name `bill` as follows:
-
-``` shell
-bsmith@hawq-node$ klist
-Ticket cache: FILE:/tmp/krb5cc_500
-Default principal: [email protected]
-bsmith@hawq-node$ psql -d testdb -h <master> -U bill
-psql (8.2.15)
-Type "help" for help.
-
-testdb=> SELECT current_user;
- current_user
---------------
- bill
-(1 row)
-```
-
-For more information about specifying username maps, see [Username maps](http://www.postgresql.org/docs/8.4/static/auth-username-maps.html) in the PostgreSQL documentation.
-
-### <a id="client_considerations"></a>Kerberos Considerations for Non-HAWQ 
Clients
-
-If you access HAWQ databases from any clients outside of your HAWQ cluster, 
and Kerberos user authentication for HAWQ is enabled, you must specifically 
configure Kerberos access on each client system. Ensure that:
-
-- You have created the appropriate Kerberos principal for the HAWQ user and 
optionally generated and distributed the keytab file.
-- The `krb5-libs` and `krb5-workstation` Kerberos client packages are 
installed on each client.
-- You have copied the `/etc/krb5.conf` Kerberos configuration file from the KDC or HAWQ master node to each client system.
-- The HAWQ user requests a ticket before connecting to HAWQ.
-
-### <a id="topic9"></a>Configuring JDBC for Kerberos-Enabled HAWQ
-
-JDBC applications must use a secure connection when Kerberos is configured for HAWQ user authentication.
-
-The following example database connection URL uses a PostgreSQL JDBC driver 
and specifies parameters for Kerberos authentication:
-
-```
-jdbc:postgresql://master:5432/testdb?kerberosServerName=postgres&jaasApplicationName=pgjdbc&user=bill_kerberos
-```
-
-The connection URL parameter names and values that you specify depend on how the Java application performs Kerberos authentication.
-
-Before configuring JDBC access to a kerberized HAWQ, verify that:
-
-- The Java Cryptography Extension (JCE) is installed on the client system 
(non-OpenJDK Java installations).
-- Kerberos user authentication is configured for HAWQ as described in 
[Configure Kerberos User Authentication for HAWQ](#hawq_kerb_cfg).
-- If you are accessing HAWQ from a client node that resides outside of your 
HAWQ cluster, verify that the client is configured as described in [Kerberos 
Considerations for Non-HAWQ Clients](#client_considerations).
-
-#### <a id="topic9_proc"></a>Procedure
-
-Perform the following procedure to enable Kerberos-authenticated JDBC access 
to HAWQ from a client system.
-
-1.  Add the following to the `.java.login.config` file in the `$HOME` directory of the user account under which the application will run, creating the file if it does not exist:
-
-    ``` pre
-    pgjdbc {
-      com.sun.security.auth.module.Krb5LoginModule required
-      doNotPrompt=true
-      useTicketCache=true
-      debug=true
-      client=true;
-    };
-    ```
-
-2.  Generate a Kerberos ticket.
-
-3.  Run the JDBC-based HAWQ application.
-
-
-## <a id="task_setup_kdc"></a>Example: Install and Configure a Kerberos KDC 
Server 
-
-**Note:** If your installation already has a Kerberos Key Distribution Center 
\(KDC\) server, you do not need to perform this procedure. Note the KDC server 
host name or IP address and the name of the realm in which your cluster 
resides. You will need this information for other procedures.
-
-Follow these steps to install and configure a Kerberos KDC server on a Red Hat 
Enterprise Linux host. The KDC server resides on the host named \<kdc-server\>.
-
-1. Log in to the Kerberos KDC Server system as a superuser:
-
-    ``` shell
-    $ ssh root@<kdc-server>
-    root@kdc-server$ 
-    ```
-
-2.  Install the Kerberos server packages:
-
-    ``` shell
-    root@kdc-server$ yum install krb5-libs krb5-server krb5-workstation
-    ```
-
-3.  Define the Kerberos realm for your cluster by editing the `/etc/krb5.conf` configuration file. The following example configures a Kerberos server with a realm named `REALM.DOMAIN` residing on a host named `hawq-kdc`.
-
-    ```
-    [logging]
-     default = FILE:/var/log/krb5libs.log
-     kdc = FILE:/var/log/krb5kdc.log
-     admin_server = FILE:/var/log/kadmind.log
-
-    [libdefaults]
-     default_realm = REALM.DOMAIN
-     dns_lookup_realm = false
-     dns_lookup_kdc = false
-     ticket_lifetime = 24h
-     renew_lifetime = 7d
-     forwardable = true
-     default_tgs_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-     default_tkt_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-     permitted_enctypes = aes128-cts des3-hmac-sha1 des-cbc-crc des-cbc-md5
-
-    [realms]
-     REALM.DOMAIN = {
-      kdc = hawq-kdc:88
-      admin_server = hawq-kdc:749
-      default_domain = hawq-kdc
-     }
-
-    [domain_realm]
-     .hawq-kdc = REALM.DOMAIN
-     hawq-kdc = REALM.DOMAIN
-
-    [appdefaults]
-     pam = {
-        debug = false
-        ticket_lifetime = 36000
-        renew_lifetime = 36000
-        forwardable = true
-        krb4_convert = false
-       }
-    ```
-
-    The `kdc` and `admin_server` keys in the `[realms]` section specify the 
host \(`hawq-kdc`\) and port on which the Kerberos server is running. You can 
use an IP address in place of a host name.
-
-    If your Kerberos server manages authentication for other realms, you would instead add the `REALM.DOMAIN` realm to the `[realms]` and `[domain_realm]` sections of the `krb5.conf` file. See the [Kerberos documentation](http://web.mit.edu/kerberos/krb5-latest/doc/) for detailed information about the `krb5.conf` configuration file.
-
-4. Note the Kerberos KDC server host name or IP address and the name of the 
realm in which your cluster resides. You will need this information in later 
procedures.
-5.  Create a Kerberos KDC database by running the `kdb5_util` command:
-
-    ```
-    root@kdc-server$ kdb5_util create -s
-    ```
-
-    The `kdb5_util create` command creates the database in which the keys for 
the Kerberos realms managed by this KDC server are stored. The `-s` option 
instructs the command to create a stash file. Without the stash file, the KDC 
server will request a password every time it starts.
-
-6.  Add an administrative user to the Kerberos KDC database with the 
`kadmin.local` utility. Because it does not itself depend on Kerberos 
authentication, the `kadmin.local` utility allows you to add an initial 
administrative user to the local Kerberos server. To add the user `admin` as an 
administrative user to the KDC database, run the following command:
-
-    ```
-    root@kdc-server$ kadmin.local -q "addprinc admin/admin"
-    ```
-
-    Most users do not need administrative access to the Kerberos server. They 
can use `kadmin` to manage their own principals \(for example, to change their 
own password\). For information about `kadmin`, see the [Kerberos 
documentation](http://web.mit.edu/kerberos/krb5-latest/doc/).
-
-7.  If required, edit the `/var/kerberos/krb5kdc/kadm5.acl` file to grant the 
appropriate permissions to `admin`.
-8.  Start the Kerberos daemons:
-
-    ```
-    root@kdc-server$ /sbin/service krb5kdc start
-    root@kdc-server$ /sbin/service kadmin start
-    ```
-
-9.  To start Kerberos automatically upon system restart:
-
-    ```
-    root@kdc-server$ /sbin/chkconfig krb5kdc on
-    root@kdc-server$ /sbin/chkconfig kadmin on
-    ```
