http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/ddl/ddl-table.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-table.html.md.erb b/ddl/ddl-table.html.md.erb
new file mode 100644
index 0000000..7120031
--- /dev/null
+++ b/ddl/ddl-table.html.md.erb
@@ -0,0 +1,149 @@
+---
+title: Creating and Managing Tables
+---
+
+HAWQ tables are similar to tables in any relational database, except that table rows are distributed across the different segments in the system. When you create a table, you specify the table's distribution policy.
+
+## <a id="topic26"></a>Creating a Table 
+
+The `CREATE TABLE` command creates a table and defines its structure. When you 
create a table, you define:
+
+-   The columns of the table and their associated data types. See [Choosing 
Column Data Types](#topic27).
+-   Any table constraints to limit the data that a column or table can 
contain. See [Setting Table Constraints](#topic28).
+-   The distribution policy of the table, which determines how HAWQ divides data across the segments. See [Choosing the Table Distribution Policy](#topic34).
+-   The way the table is stored on disk.
+-   The table partitioning strategy for large tables, which specifies how the data should be divided. See [Partitioning Large Tables](/20/ddl/ddl-partition.html).
+
+### <a id="topic27"></a>Choosing Column Data Types 
+
+The data type of a column determines the types of data values the column can 
contain. Choose the data type that uses the least possible space but can still 
accommodate your data and that best constrains the data. For example, use 
character data types for strings, date or timestamp data types for dates, and 
numeric data types for numbers.
+
+There are no performance differences among the character data types `CHAR`, 
`VARCHAR`, and `TEXT` apart from the increased storage size when you use the 
blank-padded type. In most situations, use `TEXT` or `VARCHAR` rather than 
`CHAR`.
+
+Use the smallest numeric data type that will accommodate your numeric data and 
allow for future expansion. For example, using `BIGINT` for data that fits in 
`INT` or `SMALLINT` wastes storage space. If you expect that your data values 
will expand over time, consider that changing from a smaller datatype to a 
larger datatype after loading large amounts of data is costly. For example, if 
your current data values fit in a `SMALLINT` but it is likely that the values 
will expand, `INT` is the better long-term choice.
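+For example, if a counter column currently fits in `SMALLINT` but is likely to grow, declaring it as `INT` up front avoids a costly type change later \(the table and column names in this sketch are illustrative\):
+
+``` sql
+CREATE TABLE page_hits (page_id INT, hit_count INT);  -- INT rather than SMALLINT, anticipating growth
+```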
+
+Use the same data types for columns that you plan to use in cross-table joins. 
When the data types are different, the database must convert one of them so 
that the data values can be compared correctly, which adds unnecessary overhead.
+
+HAWQ supports the parquet columnar storage format, which can increase 
performance on large queries. Use parquet tables for HAWQ internal tables.
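+For example, the following sketch creates a parquet-format table using HAWQ's `appendonly` and `orientation` storage options \(the table and column names are illustrative\):
+
+``` sql
+CREATE TABLE sales_history (id int, amount numeric)
+    WITH (appendonly=true, orientation=parquet);
+```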
+
+### <a id="topic28"></a>Setting Table Constraints 
+
+You can define constraints to restrict the data in your tables. HAWQ support for constraints is the same as in PostgreSQL, with some limitations, including:
+
+-   `CHECK` constraints can refer only to the table on which they are defined.
+-   `FOREIGN KEY` constraints are allowed, but not enforced.
+-   Constraints that you define on partitioned tables apply to the partitioned 
table as a whole. You cannot define constraints on the individual parts of the 
table.
+
+#### <a id="topic29"></a>Check Constraints 
+
+Check constraints allow you to specify that the value in a certain column must 
satisfy a Boolean \(truth-value\) expression. For example, to require positive 
product prices:
+
+``` sql
+=> CREATE TABLE products
+     ( product_no integer,
+       name text,
+       price numeric CHECK (price > 0) );
+```
+
+#### <a id="topic30"></a>Not-Null Constraints 
+
+Not-null constraints specify that a column must not assume the null value. A 
not-null constraint is always written as a column constraint. For example:
+
+``` sql
+=> CREATE TABLE products
+     ( product_no integer NOT NULL,
+       name text NOT NULL,
+       price numeric );
+```
+
+#### <a id="topic33"></a>Foreign Keys 
+
+Foreign keys are not supported. You can declare them, but referential 
integrity is not enforced.
+
+Foreign key constraints specify that the values in a column or a group of 
columns must match the values appearing in some row of another table to 
maintain referential integrity between two related tables. Referential 
integrity checks cannot be enforced between the distributed table segments of a 
HAWQ database.
+
+### <a id="topic34"></a>Choosing the Table Distribution Policy 
+
+All HAWQ tables are distributed. The default policy is `DISTRIBUTED RANDOMLY` \(round-robin distribution\). However, when you create or alter a table, you can optionally specify `DISTRIBUTED BY` to distribute data according to a hash-based policy. In this case, the `bucketnum` attribute sets the number of hash buckets used by the hash-distributed table. Columns of geometric or user-defined data types are not eligible as HAWQ distribution key columns.
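+For example, the following statements \(the table and column names are illustrative\) create one table with the default random policy and one with a hash policy on a column:
+
+``` sql
+CREATE TABLE events (id int, payload text) DISTRIBUTED RANDOMLY;
+CREATE TABLE orders (order_id int, total numeric) DISTRIBUTED BY (order_id);
+```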
+
+Randomly distributed tables have some benefits over hash-distributed tables. For example, after a cluster expansion, HAWQ's elasticity feature lets it automatically use more resources without redistributing the data; for extremely large tables, redistribution is very expensive. Data locality is also better for randomly distributed tables, especially after the underlying HDFS redistributes its data during rebalancing or after data node failures, which is quite common when the cluster is large.
+
+However, hash-distributed tables can be faster than randomly distributed tables; for example, hash distribution provides performance benefits for several of the TPC-H queries. Choose the distribution policy that best suits your application scenario. When you `CREATE TABLE`, you can also specify the `bucketnum` option, which determines the number of hash buckets used in creating a hash-distributed table or for PXF external table intermediate processing. The number of buckets also affects how many virtual segments are created when processing the data. The bucket number of a gpfdist external table is the number of gpfdist locations, and the bucket number of a command external table is set by the `ON #num` clause. PXF external tables use the `default_hash_table_bucket_number` parameter to control virtual segments.
+
+HAWQ's elastic execution runtime is based on virtual segments, which are 
allocated on demand, based on the cost of the query. Each node uses one 
physical segment and a number of dynamically allocated virtual segments 
distributed to different hosts, thus simplifying performance tuning. Large 
queries use large numbers of virtual segments, while smaller queries use fewer 
virtual segments. Tables do not need to be redistributed when nodes are added 
or removed.
+
+In general, the more virtual segments are used, the faster the query executes. You can tune the `default_hash_table_bucket_number` and `hawq_rm_nvseg_perquery_limit` parameters to adjust performance by controlling the number of virtual segments used for a query. However, be aware that if the value of `default_hash_table_bucket_number` changes, data must be redistributed, which can be costly. Therefore, if you expect to need a larger number of virtual segments, it is better to set `default_hash_table_bucket_number` up front. You might still need to adjust the value of `default_hash_table_bucket_number` after cluster expansion, but take care not to exceed the number of virtual segments per query set in `hawq_rm_nvseg_perquery_limit`. Refer to the recommended guidelines for setting the value of `default_hash_table_bucket_number`, later in this section.
+
+For random or gpfdist external tables, as well as user-defined functions, the value set in the `hawq_rm_nvseg_perquery_perseg_limit` parameter limits the number of virtual segments used per segment for one query, to optimize query resources. Resetting this parameter is not recommended.
+
+Consider the following points when deciding on a table distribution policy.
+
+-   **Even Data Distribution** — For the best possible performance, all 
segments should contain equal portions of data. If the data is unbalanced or 
skewed, the segments with more data must work harder to perform their portion 
of the query processing.
+-   **Local and Distributed Operations** — Local operations are faster than 
distributed operations. Query processing is fastest if the work associated with 
join, sort, or aggregation operations is done locally, at the segment level. 
Work done at the system level requires distributing tuples across the segments, 
which is less efficient. When tables share a common distribution key, the work 
of joining or sorting on their shared distribution key columns is done locally. 
With a random distribution policy, local join operations are not an option.
+-   **Even Query Processing** — For best performance, all segments should handle an equal share of the query workload. Query workload can be skewed if a table's data distribution policy and the query predicates are not well matched. For example, suppose that a sales transactions table is distributed based on a column that contains corporate names \(the distribution key\), and the hashing algorithm distributes the data based on those values. If a predicate in a query references a single value from the distribution key, query processing runs on only one segment. This works if your query predicates usually select data on criteria other than corporation name. For queries that use corporation name in their predicates, it's possible that only one segment instance will handle the query workload.
+
+HAWQ utilizes dynamic parallelism, which can significantly affect the performance of query execution. Performance depends on the following factors:
+
+-   The size of a randomly distributed table.
+-   The `bucketnum` of a hash distributed table.
+-   Data locality.
+-   The values of `default_hash_table_bucket_number`, and 
`hawq_rm_nvseg_perquery_limit` \(including defaults and user-defined values\).
+
+For any specific query, the first three factors are fixed values, while the configuration parameters in the last item can be used to tune performance of the query execution. In querying a random table, the query resource load is related to the data size of the table, usually one virtual segment for one HDFS block. As a result, querying a large table could use a large number of resources.
+
+The `bucketnum` for a hash table specifies the number of hash buckets to be used in creating virtual segments. A hash-distributed table is created with `default_hash_table_bucket_number` buckets. The default bucket value can be changed at the session level, or in the `CREATE TABLE` DDL by using the `bucketnum` storage parameter.
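+For example, assuming you want 16 buckets, you can either change the session default before creating the table or specify `bucketnum` directly \(the table names are illustrative\):
+
+``` sql
+SET default_hash_table_bucket_number = 16;
+CREATE TABLE t1 (id int) DISTRIBUTED BY (id);                      -- uses the session default of 16 buckets
+CREATE TABLE t2 (id int) WITH (bucketnum=16) DISTRIBUTED BY (id);  -- sets bucketnum explicitly
+```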
+
+When initializing a cluster, you can use the `hawq init --bucket_number` parameter to explicitly set the default bucket number \(`default_hash_table_bucket_number`\).
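+For example, a sketch of setting the default bucket number at initialization time \(the cluster object and value shown are illustrative\):
+
+``` shell
+$ hawq init cluster --bucket_number 24
+```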
+
+**Note:** For best performance with large tables, the number of buckets should not exceed the value of the `default_hash_table_bucket_number` parameter. Small tables can use one segment node, with `bucketnum=1`. For larger tables, set the `bucketnum` to a multiple of the number of segment nodes for the best load balancing across segment nodes. The elastic runtime attempts to find the optimal number of buckets for the number of nodes being processed. Larger tables need more virtual segments, and hence use larger numbers of buckets.
+
+The following statement creates a table `sales` with 8 buckets, which would be similar to a hash-distributed table on 8 segments.
+
+``` sql
+CREATE TABLE sales (id int, profit float) WITH (bucketnum=8) DISTRIBUTED BY (id);
+```
+
+There are four ways to create a new table from an origin table, listed below.
+
+<table>
+  <tr>
+    <th></th>
+    <th>Syntax</th>
+  </tr>
+  <tr><td>INHERITS</td><td><pre><code>create table new_table () inherits (origintable) [with(bucketnum=x)] <br/>[distributed by col]</code></pre></td></tr>
+  <tr><td>LIKE</td><td><pre><code>create table new_table (like origintable) 
[with(bucketnum=x)] <br/>[distributed by col]</code></pre></td></tr>
+  <tr><td>AS</td><td><pre><code>create table new_table [with(bucketnum=x)] as 
subquery [distributed by col]</code></pre></td></tr>
+  <tr><td>SELECT INTO</td><td><pre><code>create table origintable 
[with(bucketnum=x)] [distributed by col]; select * <br/>into new_table from 
origintable;</code></pre></td></tr>
+</table>
+
+The optional `INHERITS` clause specifies a list of tables from which the new table automatically inherits all columns. Hash tables inherit the bucket number from their origin table if it is not otherwise specified. If the `WITH` clause specifies `bucketnum` in creating a hash-distributed table, that value is used. If distribution is specified by column, the table inherits it. Otherwise, the table uses the default distribution from `default_hash_table_bucket_number`.
+
+The `LIKE` clause specifies a table from which the new table automatically 
copies all column names, data types, not-null constraints, and distribution 
policy. If a `bucketnum` is specified, it will be copied. Otherwise, the table 
will use default distribution.
+
+For hash tables, `SELECT INTO` always uses random distribution.
+
+#### <a id="topic_kjg_tqm_gv"></a>Declaring Distribution Keys 
+
+The optional `DISTRIBUTED BY` clause of `CREATE TABLE` specifies the distribution policy for a table. The default is a random distribution policy. You can also choose to distribute data according to a hash-based policy, where the `bucketnum` attribute sets the number of hash buckets used by the hash-distributed table. Hash-distributed tables are created with the number of hash buckets specified by the `default_hash_table_bucket_number` parameter.
+
+Policies for different application scenarios can be specified to optimize performance. The number of virtual segments used for query execution can be tuned using the `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit` parameters, in connection with the `default_hash_table_bucket_number` parameter, which sets the default `bucketnum`. For more information, see the guidelines for virtual segments in the next section and in [Query Performance](/20/query/query-performance.html#topic38).
+
+#### <a id="topic_wff_mqm_gv"></a>Performance Tuning 
+
+Adjusting the values of the configuration parameters `default_hash_table_bucket_number` and `hawq_rm_nvseg_perquery_limit` can tune performance by controlling the number of virtual segments being used. In most circumstances, HAWQ's elastic runtime dynamically allocates virtual segments to optimize performance, so further tuning should not be needed.
+
+Hash tables are created using the value specified in `default_hash_table_bucket_number`. Explicitly setting this value can be useful in managing resources, because queries on hash tables use a fixed number of buckets regardless of the amount of data present. If a larger or smaller number of hash buckets is desired, set this value before you `CREATE TABLE`. Resources are dynamically allocated to a multiple of the number of nodes. If you set the value of `default_hash_table_bucket_number` with `hawq init --bucket_number`, the value should not exceed the value of `hawq_rm_nvseg_perquery_limit`, which defines the maximum number of virtual segments that can be used for a query \(default = 512, with a maximum of 65535\). Modifying the value to greater than 1000 segments is not recommended.
+
+The following per-node guidelines apply to values for 
`default_hash_table_bucket_number`.
+
+|Number of Nodes|default\_hash\_table\_bucket\_number value|
+|---------------|------------------------------------------|
+|<= 85|6 \* \#nodes|
+|\> 85 and <= 102|5 \* \#nodes|
+|\> 102 and <= 128|4 \* \#nodes|
+|\> 128 and <= 170|3 \* \#nodes|
+|\> 170 and <= 256|2 \* \#nodes|
+|\> 256 and <= 512|1 \* \#nodes|
+|\> 512|512|
+
+Reducing the value of `hawq_rm_nvseg_perquery_perseg_limit` can improve concurrency, and increasing it can possibly increase the degree of parallelism. However, for some queries, increasing the degree of parallelism will not improve performance if the query has reached the limits set by the hardware. Therefore, increasing the value of `hawq_rm_nvseg_perquery_perseg_limit` above the default value is not recommended. Also, changing the value of `default_hash_table_bucket_number` after initializing a cluster means the hash table data must be redistributed. If you are expanding a cluster, you might wish to change this value, but be aware that retuning could adversely affect performance.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/ddl/ddl-tablespace.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-tablespace.html.md.erb b/ddl/ddl-tablespace.html.md.erb
new file mode 100644
index 0000000..8ead2f0
--- /dev/null
+++ b/ddl/ddl-tablespace.html.md.erb
@@ -0,0 +1,154 @@
+---
+title: Creating and Managing Tablespaces
+---
+
+Tablespaces allow database administrators to have multiple file systems per 
machine and decide how to best use physical storage to store database objects. 
They are named locations within a filespace in which you can create objects. 
Tablespaces allow you to assign different storage for frequently and 
infrequently used database objects or to control the I/O performance on certain 
database objects. For example, place frequently-used tables on file systems 
that use high performance solid-state drives \(SSD\), and place other tables on 
standard hard drives.
+
+A tablespace requires a file system location to store its database files. In 
HAWQ, the master and each segment require a distinct storage location. The 
collection of file system locations for all components in a HAWQ system is a 
*filespace*. Filespaces can be used by one or more tablespaces.
+
+## <a id="topic10"></a>Creating a Filespace 
+
+A filespace sets aside storage for your HAWQ system. A filespace is a symbolic 
storage identifier that maps onto a set of locations in your HAWQ hosts' file 
systems. To create a filespace, prepare the logical file systems on all of your 
HAWQ hosts, then use the `hawq filespace` utility to define the filespace. You 
must be a database superuser to create a filespace.
+
+**Note:** HAWQ is not directly aware of the file system boundaries on your 
underlying systems. It stores files in the directories that you tell it to use. 
You cannot control the location on disk of individual files within a logical 
file system.
+
+### <a id="im178954"></a>To create a filespace using hawq filespace 
+
+1.  Log in to the HAWQ master as the `gpadmin` user.
+
+    ``` shell
+    $ su - gpadmin
+    ```
+
+2.  Create a filespace configuration file:
+
+    ``` shell
+    $ hawq filespace -o hawqfilespace_config
+    ```
+
+3.  At the prompt, enter a name for the filespace, a master file system 
location, and the primary segment file system locations. For example:
+
+    ``` shell
+    $ hawq filespace -o hawqfilespace_config
+    ```
+    ``` pre
+    Enter a name for this filespace
+    > testfs
+    Enter replica num for filespace. If 0, default replica num is used 
(default=3)
+    > 
+
+    Please specify the DFS location for the filespace (for example: 
localhost:9000/fs)
+    location> localhost:8020/fs        
+    20160409:16:53:25:028082 hawqfilespace:gpadmin:gpadmin-[INFO]:-[created]
+    20160409:16:53:25:028082 hawqfilespace:gpadmin:gpadmin-[INFO]:-
+    To add this filespace to the database please run the command:
+       hawqfilespace --config 
/Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+       
+    ``` shell
+    $ cat /Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+    ``` pre
+    filespace:testfs
+    fsreplica:3
+    dfs_url::localhost:8020/fs
+    ```
+    ``` shell
+    $ hawq filespace --config 
/Users/gpadmin/curwork/git/hawq/hawqfilespace_config
+    ```
+    ``` pre
+    Reading Configuration file: 
'/Users/gpadmin/curwork/git/hawq/hawqfilespace_config'
+
+    CREATE FILESPACE testfs ON hdfs 
+    ('localhost:8020/fs/testfs') WITH (NUMREPLICA = 3);
+    20160409:16:57:56:028104 hawqfilespace:gpadmin:gpadmin-[INFO]:-Connecting 
to database
+    20160409:16:57:56:028104 hawqfilespace:gpadmin:gpadmin-[INFO]:-Filespace 
"testfs" successfully created
+
+    ```
+
+
+4.  `hawq filespace` creates a configuration file. Examine the file to verify 
that the hawq filespace configuration is correct. The following is a sample 
configuration file:
+
+    ```
+    filespace:fastdisk
+    mdw:1:/hawq_master_filespc/gp-1
+    sdw1:2:/hawq_pri_filespc/gp0
+    sdw2:3:/hawq_pri_filespc/gp1
+    ```
+
+5.  Run hawq filespace again to create the filespace based on the 
configuration file:
+
+    ``` shell
+    $ hawq filespace -c hawqfilespace_config
+    ```
+
+
+## <a id="topic13"></a>Creating a Tablespace 
+
+After you create a filespace, use the `CREATE TABLESPACE` command to define a 
tablespace that uses that filespace. For example:
+
+``` sql
+=# CREATE TABLESPACE fastspace FILESPACE fastdisk;
+```
+
+Database superusers define tablespaces and grant access to database users with the `GRANT CREATE` command. For example:
+
+``` sql
+=# GRANT CREATE ON TABLESPACE fastspace TO admin;
+```
+
+## <a id="topic14"></a>Using a Tablespace to Store Database Objects 
+
+Users with the `CREATE` privilege on a tablespace can create database objects 
in that tablespace, such as tables, indexes, and databases. The command is:
+
+``` sql
+CREATE TABLE tablename(options) TABLESPACE spacename
+```
+
+For example, the following command creates a table in the tablespace *space1*:
+
+``` sql
+CREATE TABLE foo(i int) TABLESPACE space1;
+```
+
+You can also use the `default_tablespace` parameter to specify the default 
tablespace for `CREATE TABLE` and `CREATE INDEX` commands that do not specify a 
tablespace:
+
+``` sql
+SET default_tablespace = space1;
+CREATE TABLE foo(i int);
+```
+
+The tablespace associated with a database stores that database's system catalogs and temporary files created by server processes using that database. It is also the default tablespace for tables and indexes created within the database, if no `TABLESPACE` is specified when the objects are created. If you do not specify a tablespace when you create a database, the database uses the same tablespace as its template database.
+
+You can use a tablespace from any database if you have appropriate privileges.
+
+## <a id="topic15"></a>Viewing Existing Tablespaces and Filespaces 
+
+Every HAWQ system has the following default tablespaces.
+
+-   `pg_global` for shared system catalogs.
+-   `pg_default`, the default tablespace. Used by the *template1* and 
*template0* databases.
+
+These tablespaces use the system default filespace, `pg_system`, the data 
directory location created at system initialization.
+
+To see filespace information, look in the *pg\_filespace* and 
*pg\_filespace\_entry* catalog tables. You can join these tables with 
*pg\_tablespace* to see the full definition of a tablespace. For example:
+
+``` sql
+=# SELECT spcname as tblspc, fsname as filespc,
+          fsedbid as seg_dbid, fselocation as datadir
+   FROM   pg_tablespace pgts, pg_filespace pgfs,
+          pg_filespace_entry pgfse
+   WHERE  pgts.spcfsoid=pgfse.fsefsoid
+          AND pgfse.fsefsoid=pgfs.oid
+   ORDER BY tblspc, seg_dbid;
+```
+
+## <a id="topic16"></a>Dropping Tablespaces and Filespaces 
+
+To drop a tablespace, you must be the tablespace owner or a superuser. You 
cannot drop a tablespace until all objects in all databases using the 
tablespace are removed.
+
+Only a superuser can drop a filespace. A filespace cannot be dropped until all 
tablespaces using that filespace are removed.
+
+The `DROP TABLESPACE` command removes an empty tablespace.
+
+The `DROP FILESPACE` command removes an empty filespace.
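+For example, to remove the tablespace and filespace created earlier in this topic, assuming both are empty:
+
+``` sql
+=# DROP TABLESPACE fastspace;
+=# DROP FILESPACE fastdisk;
+```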

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/ddl/ddl-view.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-view.html.md.erb b/ddl/ddl-view.html.md.erb
new file mode 100644
index 0000000..35da41e
--- /dev/null
+++ b/ddl/ddl-view.html.md.erb
@@ -0,0 +1,25 @@
+---
+title: Creating and Managing Views
+---
+
+Views enable you to save frequently used or complex queries, then access them 
in a `SELECT` statement as if they were a table. A view is not physically 
materialized on disk: the query runs as a subquery when you access the view.
+
+If a subquery is associated with a single query, consider using the `WITH` 
clause of the `SELECT` command instead of creating a seldom-used view.
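+For example, instead of creating a view, a single query can embed the subquery in a `WITH` clause \(this sketch reuses the `films` table from the `CREATE VIEW` example in this topic\):
+
+``` sql
+WITH comedies AS (
+    SELECT * FROM films WHERE kind = 'comedy'
+)
+SELECT * FROM comedies;
+```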
+
+## <a id="topic101"></a>Creating Views 
+
+The `CREATE VIEW` command defines a view of a query. For example:
+
+``` sql
+CREATE VIEW comedies AS SELECT * FROM films WHERE kind = 'comedy';
+```
+
+Views ignore `ORDER BY` and `SORT` operations stored in the view definition.
+
+## <a id="topic102"></a>Dropping Views 
+
+The `DROP VIEW` command removes a view. For example:
+
+``` sql
+DROP VIEW topten;
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/ddl/ddl.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl.html.md.erb b/ddl/ddl.html.md.erb
new file mode 100644
index 0000000..7873fe7
--- /dev/null
+++ b/ddl/ddl.html.md.erb
@@ -0,0 +1,19 @@
+---
+title: Defining Database Objects
+---
+
+This section covers data definition language \(DDL\) in HAWQ and how to create 
and manage database objects.
+
+Creating objects in a HAWQ database includes making up-front choices about data distribution, storage options, data loading, and other HAWQ features that affect the ongoing performance of your database system. Understanding the available options and how the database will be used will help you make the right decisions.
+
+Most of the advanced HAWQ features are enabled with extensions to the SQL 
`CREATE` DDL statements.
+
+This section contains the topics:
+
+*  <a class="subnav" href="./ddl-database.html">Creating and Managing 
Databases</a>
+*  <a class="subnav" href="./ddl-tablespace.html">Creating and Managing 
Tablespaces</a>
+*  <a class="subnav" href="./ddl-schema.html">Creating and Managing Schemas</a>
+*  <a class="subnav" href="./ddl-table.html">Creating and Managing Tables</a>
+*  <a class="subnav" href="./ddl-storage.html">Table Storage Model and 
Distribution Policy</a>
+*  <a class="subnav" href="./ddl-partition.html">Partitioning Large Tables</a>
+*  <a class="subnav" href="./ddl-view.html">Creating and Managing Views</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/Gemfile
----------------------------------------------------------------------
diff --git a/hawq-book/Gemfile b/hawq-book/Gemfile
new file mode 100644
index 0000000..f66d333
--- /dev/null
+++ b/hawq-book/Gemfile
@@ -0,0 +1,5 @@
+source "https://rubygems.org"
+
+gem 'bookbindery'
+
+gem 'libv8', '3.16.14.7'

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/Gemfile.lock
----------------------------------------------------------------------
diff --git a/hawq-book/Gemfile.lock b/hawq-book/Gemfile.lock
new file mode 100644
index 0000000..3c483c0
--- /dev/null
+++ b/hawq-book/Gemfile.lock
@@ -0,0 +1,203 @@
+GEM
+  remote: https://rubygems.org/
+  specs:
+    activesupport (4.2.7.1)
+      i18n (~> 0.7)
+      json (~> 1.7, >= 1.7.7)
+      minitest (~> 5.1)
+      thread_safe (~> 0.3, >= 0.3.4)
+      tzinfo (~> 1.1)
+    addressable (2.4.0)
+    ansi (1.5.0)
+    bookbindery (9.12.0)
+      ansi (~> 1.4)
+      css_parser
+      elasticsearch
+      fog-aws (~> 0.7.1)
+      font-awesome-sass
+      git (~> 1.2.8)
+      middleman (~> 3.4.0)
+      middleman-livereload (~> 3.4.3)
+      middleman-syntax (~> 2.0)
+      nokogiri (= 1.6.7.2)
+      puma
+      rack-rewrite
+      redcarpet (~> 3.2.3)
+      rouge (!= 1.9.1)
+      therubyracer
+      thor
+    builder (3.2.2)
+    capybara (2.4.4)
+      mime-types (>= 1.16)
+      nokogiri (>= 1.3.3)
+      rack (>= 1.0.0)
+      rack-test (>= 0.5.4)
+      xpath (~> 2.0)
+    chunky_png (1.3.6)
+    coffee-script (2.4.1)
+      coffee-script-source
+      execjs
+    coffee-script-source (1.10.0)
+    compass (1.0.3)
+      chunky_png (~> 1.2)
+      compass-core (~> 1.0.2)
+      compass-import-once (~> 1.0.5)
+      rb-fsevent (>= 0.9.3)
+      rb-inotify (>= 0.9)
+      sass (>= 3.3.13, < 3.5)
+    compass-core (1.0.3)
+      multi_json (~> 1.0)
+      sass (>= 3.3.0, < 3.5)
+    compass-import-once (1.0.5)
+      sass (>= 3.2, < 3.5)
+    css_parser (1.4.5)
+      addressable
+    elasticsearch (2.0.0)
+      elasticsearch-api (= 2.0.0)
+      elasticsearch-transport (= 2.0.0)
+    elasticsearch-api (2.0.0)
+      multi_json
+    elasticsearch-transport (2.0.0)
+      faraday
+      multi_json
+    em-websocket (0.5.1)
+      eventmachine (>= 0.12.9)
+      http_parser.rb (~> 0.6.0)
+    erubis (2.7.0)
+    eventmachine (1.2.0.1)
+    excon (0.51.0)
+    execjs (2.7.0)
+    faraday (0.9.2)
+      multipart-post (>= 1.2, < 3)
+    ffi (1.9.14)
+    fog-aws (0.7.6)
+      fog-core (~> 1.27)
+      fog-json (~> 1.0)
+      fog-xml (~> 0.1)
+      ipaddress (~> 0.8)
+    fog-core (1.42.0)
+      builder
+      excon (~> 0.49)
+      formatador (~> 0.2)
+    fog-json (1.0.2)
+      fog-core (~> 1.0)
+      multi_json (~> 1.10)
+    fog-xml (0.1.2)
+      fog-core
+      nokogiri (~> 1.5, >= 1.5.11)
+    font-awesome-sass (4.6.2)
+      sass (>= 3.2)
+    formatador (0.2.5)
+    git (1.2.9.1)
+    haml (4.0.7)
+      tilt
+    hike (1.2.3)
+    hooks (0.4.1)
+      uber (~> 0.0.14)
+    http_parser.rb (0.6.0)
+    i18n (0.7.0)
+    ipaddress (0.8.3)
+    json (1.8.3)
+    kramdown (1.12.0)
+    libv8 (3.16.14.7)
+    listen (3.0.8)
+      rb-fsevent (~> 0.9, >= 0.9.4)
+      rb-inotify (~> 0.9, >= 0.9.7)
+    middleman (3.4.1)
+      coffee-script (~> 2.2)
+      compass (>= 1.0.0, < 2.0.0)
+      compass-import-once (= 1.0.5)
+      execjs (~> 2.0)
+      haml (>= 4.0.5)
+      kramdown (~> 1.2)
+      middleman-core (= 3.4.1)
+      middleman-sprockets (>= 3.1.2)
+      sass (>= 3.4.0, < 4.0)
+      uglifier (~> 2.5)
+    middleman-core (3.4.1)
+      activesupport (~> 4.1)
+      bundler (~> 1.1)
+      capybara (~> 2.4.4)
+      erubis
+      hooks (~> 0.3)
+      i18n (~> 0.7.0)
+      listen (~> 3.0.3)
+      padrino-helpers (~> 0.12.3)
+      rack (>= 1.4.5, < 2.0)
+      thor (>= 0.15.2, < 2.0)
+      tilt (~> 1.4.1, < 2.0)
+    middleman-livereload (3.4.6)
+      em-websocket (~> 0.5.1)
+      middleman-core (>= 3.3)
+      rack-livereload (~> 0.3.15)
+    middleman-sprockets (3.4.2)
+      middleman-core (>= 3.3)
+      sprockets (~> 2.12.1)
+      sprockets-helpers (~> 1.1.0)
+      sprockets-sass (~> 1.3.0)
+    middleman-syntax (2.1.0)
+      middleman-core (>= 3.2)
+      rouge (~> 1.0)
+    mime-types (3.1)
+      mime-types-data (~> 3.2015)
+    mime-types-data (3.2016.0521)
+    mini_portile2 (2.0.0)
+    minitest (5.9.0)
+    multi_json (1.12.1)
+    multipart-post (2.0.0)
+    nokogiri (1.6.7.2)
+      mini_portile2 (~> 2.0.0.rc2)
+    padrino-helpers (0.12.8)
+      i18n (~> 0.6, >= 0.6.7)
+      padrino-support (= 0.12.8)
+      tilt (~> 1.4.1)
+    padrino-support (0.12.8)
+      activesupport (>= 3.1)
+    puma (3.6.0)
+    rack (1.6.4)
+    rack-livereload (0.3.16)
+      rack
+    rack-rewrite (1.5.1)
+    rack-test (0.6.3)
+      rack (>= 1.0)
+    rb-fsevent (0.9.7)
+    rb-inotify (0.9.7)
+      ffi (>= 0.5.0)
+    redcarpet (3.2.3)
+    ref (2.0.0)
+    rouge (1.11.1)
+    sass (3.4.22)
+    sprockets (2.12.4)
+      hike (~> 1.2)
+      multi_json (~> 1.0)
+      rack (~> 1.0)
+      tilt (~> 1.1, != 1.3.0)
+    sprockets-helpers (1.1.0)
+      sprockets (~> 2.0)
+    sprockets-sass (1.3.1)
+      sprockets (~> 2.0)
+      tilt (~> 1.1)
+    therubyracer (0.12.2)
+      libv8 (~> 3.16.14.0)
+      ref
+    thor (0.19.1)
+    thread_safe (0.3.5)
+    tilt (1.4.1)
+    tzinfo (1.2.2)
+      thread_safe (~> 0.1)
+    uber (0.0.15)
+    uglifier (2.7.2)
+      execjs (>= 0.3.0)
+      json (>= 1.8.0)
+    xpath (2.0.0)
+      nokogiri (~> 1.3)
+
+PLATFORMS
+  ruby
+
+DEPENDENCIES
+  bookbindery
+  libv8 (= 3.16.14.7)
+
+BUNDLED WITH
+   1.11.2

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/config.yml
----------------------------------------------------------------------
diff --git a/hawq-book/config.yml b/hawq-book/config.yml
new file mode 100644
index 0000000..7240fee
--- /dev/null
+++ b/hawq-book/config.yml
@@ -0,0 +1,22 @@
+book_repo: incubator-hawq/docs-book-hawq
+
+public_host: http://localhost:9292/
+
+sections:
+ - repository:
+     name: incubator-hawq/docs-apache-hawq-md
+     ref: develop
+   directory: 20
+   subnav_template: apache-hawq-nav
+
+template_variables:
+  use_global_header: true
+  global_header_product_href: https://github.com/apache/incubator-hawq
+  global_header_product_link_text: Downloads
+  support_url: http://mail-archives.apache.org/mod_mbox/incubator-hawq-dev/
+  product_url: http://hawq.incubator.apache.org/
+  book_title: Apache HAWQ (incubating) Documentation
+  support_link: <a href="https://issues.apache.org/jira/browse/HAWQ" target="_blank">Support</a>
+  support_call_to_action: <a href="https://issues.apache.org/jira/browse/HAWQ" target="_blank">Need Help?</a>
+  product_link: <div class="header-item"><a href="http://hawq.incubator.apache.org/">Back to Apache HAWQ Page</a></div>
+  book_title_short: Apache HAWQ (Incubating) Docs

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/images/favicon.ico
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/images/favicon.ico b/hawq-book/master_middleman/source/images/favicon.ico
new file mode 100644
index 0000000..b2c3a0c
Binary files /dev/null and b/hawq-book/master_middleman/source/images/favicon.ico differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/javascripts/book.js
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/javascripts/book.js b/hawq-book/master_middleman/source/javascripts/book.js
new file mode 100644
index 0000000..90879c4
--- /dev/null
+++ b/hawq-book/master_middleman/source/javascripts/book.js
@@ -0,0 +1,16 @@
+// Declare your book-specific javascript overrides in this file.
+//= require 'waypoints/waypoint'
+//= require 'waypoints/context'
+//= require 'waypoints/group'
+//= require 'waypoints/noframeworkAdapter'
+//= require 'waypoints/sticky'
+
+window.onload = function() {
+  Bookbinder.boot();
+  var sticky = new Waypoint.Sticky({
+    element: document.querySelector('#js-to-top'),
+    wrapper: '<div class="sticky-wrapper" />',
+    stuckClass: 'sticky',
+    offset: 100
+  });
+}

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/javascripts/waypoints/context.js
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/javascripts/waypoints/context.js b/hawq-book/master_middleman/source/javascripts/waypoints/context.js
new file mode 100644
index 0000000..5e3551b
--- /dev/null
+++ b/hawq-book/master_middleman/source/javascripts/waypoints/context.js
@@ -0,0 +1,300 @@
+(function() {
+  'use strict'
+
+  function requestAnimationFrameShim(callback) {
+    window.setTimeout(callback, 1000 / 60)
+  }
+
+  var keyCounter = 0
+  var contexts = {}
+  var Waypoint = window.Waypoint
+  var oldWindowLoad = window.onload
+
+  /* http://imakewebthings.com/waypoints/api/context */
+  function Context(element) {
+    this.element = element
+    this.Adapter = Waypoint.Adapter
+    this.adapter = new this.Adapter(element)
+    this.key = 'waypoint-context-' + keyCounter
+    this.didScroll = false
+    this.didResize = false
+    this.oldScroll = {
+      x: this.adapter.scrollLeft(),
+      y: this.adapter.scrollTop()
+    }
+    this.waypoints = {
+      vertical: {},
+      horizontal: {}
+    }
+
+    element.waypointContextKey = this.key
+    contexts[element.waypointContextKey] = this
+    keyCounter += 1
+
+    this.createThrottledScrollHandler()
+    this.createThrottledResizeHandler()
+  }
+
+  /* Private */
+  Context.prototype.add = function(waypoint) {
+    var axis = waypoint.options.horizontal ? 'horizontal' : 'vertical'
+    this.waypoints[axis][waypoint.key] = waypoint
+    this.refresh()
+  }
+
+  /* Private */
+  Context.prototype.checkEmpty = function() {
+    var horizontalEmpty = this.Adapter.isEmptyObject(this.waypoints.horizontal)
+    var verticalEmpty = this.Adapter.isEmptyObject(this.waypoints.vertical)
+    if (horizontalEmpty && verticalEmpty) {
+      this.adapter.off('.waypoints')
+      delete contexts[this.key]
+    }
+  }
+
+  /* Private */
+  Context.prototype.createThrottledResizeHandler = function() {
+    var self = this
+
+    function resizeHandler() {
+      self.handleResize()
+      self.didResize = false
+    }
+
+    this.adapter.on('resize.waypoints', function() {
+      if (!self.didResize) {
+        self.didResize = true
+        Waypoint.requestAnimationFrame(resizeHandler)
+      }
+    })
+  }
+
+  /* Private */
+  Context.prototype.createThrottledScrollHandler = function() {
+    var self = this
+    function scrollHandler() {
+      self.handleScroll()
+      self.didScroll = false
+    }
+
+    this.adapter.on('scroll.waypoints', function() {
+      if (!self.didScroll || Waypoint.isTouch) {
+        self.didScroll = true
+        Waypoint.requestAnimationFrame(scrollHandler)
+      }
+    })
+  }
+
+  /* Private */
+  Context.prototype.handleResize = function() {
+    Waypoint.Context.refreshAll()
+  }
+
+  /* Private */
+  Context.prototype.handleScroll = function() {
+    var triggeredGroups = {}
+    var axes = {
+      horizontal: {
+        newScroll: this.adapter.scrollLeft(),
+        oldScroll: this.oldScroll.x,
+        forward: 'right',
+        backward: 'left'
+      },
+      vertical: {
+        newScroll: this.adapter.scrollTop(),
+        oldScroll: this.oldScroll.y,
+        forward: 'down',
+        backward: 'up'
+      }
+    }
+
+    for (var axisKey in axes) {
+      var axis = axes[axisKey]
+      var isForward = axis.newScroll > axis.oldScroll
+      var direction = isForward ? axis.forward : axis.backward
+
+      for (var waypointKey in this.waypoints[axisKey]) {
+        var waypoint = this.waypoints[axisKey][waypointKey]
+        var wasBeforeTriggerPoint = axis.oldScroll < waypoint.triggerPoint
+        var nowAfterTriggerPoint = axis.newScroll >= waypoint.triggerPoint
+        var crossedForward = wasBeforeTriggerPoint && nowAfterTriggerPoint
+        var crossedBackward = !wasBeforeTriggerPoint && !nowAfterTriggerPoint
+        if (crossedForward || crossedBackward) {
+          waypoint.queueTrigger(direction)
+          triggeredGroups[waypoint.group.id] = waypoint.group
+        }
+      }
+    }
+
+    for (var groupKey in triggeredGroups) {
+      triggeredGroups[groupKey].flushTriggers()
+    }
+
+    this.oldScroll = {
+      x: axes.horizontal.newScroll,
+      y: axes.vertical.newScroll
+    }
+  }
+
+  /* Private */
+  Context.prototype.innerHeight = function() {
+    /*eslint-disable eqeqeq */
+    if (this.element == this.element.window) {
+      return Waypoint.viewportHeight()
+    }
+    /*eslint-enable eqeqeq */
+    return this.adapter.innerHeight()
+  }
+
+  /* Private */
+  Context.prototype.remove = function(waypoint) {
+    delete this.waypoints[waypoint.axis][waypoint.key]
+    this.checkEmpty()
+  }
+
+  /* Private */
+  Context.prototype.innerWidth = function() {
+    /*eslint-disable eqeqeq */
+    if (this.element == this.element.window) {
+      return Waypoint.viewportWidth()
+    }
+    /*eslint-enable eqeqeq */
+    return this.adapter.innerWidth()
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/context-destroy */
+  Context.prototype.destroy = function() {
+    var allWaypoints = []
+    for (var axis in this.waypoints) {
+      for (var waypointKey in this.waypoints[axis]) {
+        allWaypoints.push(this.waypoints[axis][waypointKey])
+      }
+    }
+    for (var i = 0, end = allWaypoints.length; i < end; i++) {
+      allWaypoints[i].destroy()
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/context-refresh */
+  Context.prototype.refresh = function() {
+    /*eslint-disable eqeqeq */
+    var isWindow = this.element == this.element.window
+    /*eslint-enable eqeqeq */
+    var contextOffset = isWindow ? undefined : this.adapter.offset()
+    var triggeredGroups = {}
+    var axes
+
+    this.handleScroll()
+    axes = {
+      horizontal: {
+        contextOffset: isWindow ? 0 : contextOffset.left,
+        contextScroll: isWindow ? 0 : this.oldScroll.x,
+        contextDimension: this.innerWidth(),
+        oldScroll: this.oldScroll.x,
+        forward: 'right',
+        backward: 'left',
+        offsetProp: 'left'
+      },
+      vertical: {
+        contextOffset: isWindow ? 0 : contextOffset.top,
+        contextScroll: isWindow ? 0 : this.oldScroll.y,
+        contextDimension: this.innerHeight(),
+        oldScroll: this.oldScroll.y,
+        forward: 'down',
+        backward: 'up',
+        offsetProp: 'top'
+      }
+    }
+
+    for (var axisKey in axes) {
+      var axis = axes[axisKey]
+      for (var waypointKey in this.waypoints[axisKey]) {
+        var waypoint = this.waypoints[axisKey][waypointKey]
+        var adjustment = waypoint.options.offset
+        var oldTriggerPoint = waypoint.triggerPoint
+        var elementOffset = 0
+        var freshWaypoint = oldTriggerPoint == null
+        var contextModifier, wasBeforeScroll, nowAfterScroll
+        var triggeredBackward, triggeredForward
+
+        if (waypoint.element !== waypoint.element.window) {
+          elementOffset = waypoint.adapter.offset()[axis.offsetProp]
+        }
+
+        if (typeof adjustment === 'function') {
+          adjustment = adjustment.apply(waypoint)
+        }
+        else if (typeof adjustment === 'string') {
+          adjustment = parseFloat(adjustment)
+          if (waypoint.options.offset.indexOf('%') > - 1) {
+            adjustment = Math.ceil(axis.contextDimension * adjustment / 100)
+          }
+        }
+
+        contextModifier = axis.contextScroll - axis.contextOffset
+        waypoint.triggerPoint = elementOffset + contextModifier - adjustment
+        wasBeforeScroll = oldTriggerPoint < axis.oldScroll
+        nowAfterScroll = waypoint.triggerPoint >= axis.oldScroll
+        triggeredBackward = wasBeforeScroll && nowAfterScroll
+        triggeredForward = !wasBeforeScroll && !nowAfterScroll
+
+        if (!freshWaypoint && triggeredBackward) {
+          waypoint.queueTrigger(axis.backward)
+          triggeredGroups[waypoint.group.id] = waypoint.group
+        }
+        else if (!freshWaypoint && triggeredForward) {
+          waypoint.queueTrigger(axis.forward)
+          triggeredGroups[waypoint.group.id] = waypoint.group
+        }
+        else if (freshWaypoint && axis.oldScroll >= waypoint.triggerPoint) {
+          waypoint.queueTrigger(axis.forward)
+          triggeredGroups[waypoint.group.id] = waypoint.group
+        }
+      }
+    }
+
+    Waypoint.requestAnimationFrame(function() {
+      for (var groupKey in triggeredGroups) {
+        triggeredGroups[groupKey].flushTriggers()
+      }
+    })
+
+    return this
+  }
+
+  /* Private */
+  Context.findOrCreateByElement = function(element) {
+    return Context.findByElement(element) || new Context(element)
+  }
+
+  /* Private */
+  Context.refreshAll = function() {
+    for (var contextId in contexts) {
+      contexts[contextId].refresh()
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/context-find-by-element */
+  Context.findByElement = function(element) {
+    return contexts[element.waypointContextKey]
+  }
+
+  window.onload = function() {
+    if (oldWindowLoad) {
+      oldWindowLoad()
+    }
+    Context.refreshAll()
+  }
+
+  Waypoint.requestAnimationFrame = function(callback) {
+    var requestFn = window.requestAnimationFrame ||
+      window.mozRequestAnimationFrame ||
+      window.webkitRequestAnimationFrame ||
+      requestAnimationFrameShim
+    requestFn.call(window, callback)
+  }
+  Waypoint.Context = Context
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/javascripts/waypoints/group.js
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/javascripts/waypoints/group.js b/hawq-book/master_middleman/source/javascripts/waypoints/group.js
new file mode 100644
index 0000000..57c3038
--- /dev/null
+++ b/hawq-book/master_middleman/source/javascripts/waypoints/group.js
@@ -0,0 +1,105 @@
+(function() {
+  'use strict'
+
+  function byTriggerPoint(a, b) {
+    return a.triggerPoint - b.triggerPoint
+  }
+
+  function byReverseTriggerPoint(a, b) {
+    return b.triggerPoint - a.triggerPoint
+  }
+
+  var groups = {
+    vertical: {},
+    horizontal: {}
+  }
+  var Waypoint = window.Waypoint
+
+  /* http://imakewebthings.com/waypoints/api/group */
+  function Group(options) {
+    this.name = options.name
+    this.axis = options.axis
+    this.id = this.name + '-' + this.axis
+    this.waypoints = []
+    this.clearTriggerQueues()
+    groups[this.axis][this.name] = this
+  }
+
+  /* Private */
+  Group.prototype.add = function(waypoint) {
+    this.waypoints.push(waypoint)
+  }
+
+  /* Private */
+  Group.prototype.clearTriggerQueues = function() {
+    this.triggerQueues = {
+      up: [],
+      down: [],
+      left: [],
+      right: []
+    }
+  }
+
+  /* Private */
+  Group.prototype.flushTriggers = function() {
+    for (var direction in this.triggerQueues) {
+      var waypoints = this.triggerQueues[direction]
+      var reverse = direction === 'up' || direction === 'left'
+      waypoints.sort(reverse ? byReverseTriggerPoint : byTriggerPoint)
+      for (var i = 0, end = waypoints.length; i < end; i += 1) {
+        var waypoint = waypoints[i]
+        if (waypoint.options.continuous || i === waypoints.length - 1) {
+          waypoint.trigger([direction])
+        }
+      }
+    }
+    this.clearTriggerQueues()
+  }
+
+  /* Private */
+  Group.prototype.next = function(waypoint) {
+    this.waypoints.sort(byTriggerPoint)
+    var index = Waypoint.Adapter.inArray(waypoint, this.waypoints)
+    var isLast = index === this.waypoints.length - 1
+    return isLast ? null : this.waypoints[index + 1]
+  }
+
+  /* Private */
+  Group.prototype.previous = function(waypoint) {
+    this.waypoints.sort(byTriggerPoint)
+    var index = Waypoint.Adapter.inArray(waypoint, this.waypoints)
+    return index ? this.waypoints[index - 1] : null
+  }
+
+  /* Private */
+  Group.prototype.queueTrigger = function(waypoint, direction) {
+    this.triggerQueues[direction].push(waypoint)
+  }
+
+  /* Private */
+  Group.prototype.remove = function(waypoint) {
+    var index = Waypoint.Adapter.inArray(waypoint, this.waypoints)
+    if (index > -1) {
+      this.waypoints.splice(index, 1)
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/first */
+  Group.prototype.first = function() {
+    return this.waypoints[0]
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/last */
+  Group.prototype.last = function() {
+    return this.waypoints[this.waypoints.length - 1]
+  }
+
+  /* Private */
+  Group.findOrCreate = function(options) {
+    return groups[options.axis][options.name] || new Group(options)
+  }
+
+  Waypoint.Group = Group
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/javascripts/waypoints/noframeworkAdapter.js
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/javascripts/waypoints/noframeworkAdapter.js b/hawq-book/master_middleman/source/javascripts/waypoints/noframeworkAdapter.js
new file mode 100644
index 0000000..99abcb5
--- /dev/null
+++ b/hawq-book/master_middleman/source/javascripts/waypoints/noframeworkAdapter.js
@@ -0,0 +1,213 @@
+(function() {
+  'use strict'
+
+  var Waypoint = window.Waypoint
+
+  function isWindow(element) {
+    return element === element.window
+  }
+
+  function getWindow(element) {
+    if (isWindow(element)) {
+      return element
+    }
+    return element.defaultView
+  }
+
+  function classNameRegExp(className) {
+    return new RegExp("\\b" + className + "\\b");
+  }
+
+  function NoFrameworkAdapter(element) {
+    this.element = element
+    this.handlers = {}
+  }
+
+  NoFrameworkAdapter.prototype.innerHeight = function() {
+    var isWin = isWindow(this.element)
+    return isWin ? this.element.innerHeight : this.element.clientHeight
+  }
+
+  NoFrameworkAdapter.prototype.innerWidth = function() {
+    var isWin = isWindow(this.element)
+    return isWin ? this.element.innerWidth : this.element.clientWidth
+  }
+
+  NoFrameworkAdapter.prototype.off = function(event, handler) {
+    function removeListeners(element, listeners, handler) {
+      for (var i = 0, end = listeners.length - 1; i < end; i++) {
+        var listener = listeners[i]
+        if (!handler || handler === listener) {
+          element.removeEventListener(listener)
+        }
+      }
+    }
+
+    var eventParts = event.split('.')
+    var eventType = eventParts[0]
+    var namespace = eventParts[1]
+    var element = this.element
+
+    if (namespace && this.handlers[namespace] && eventType) {
+      removeListeners(element, this.handlers[namespace][eventType], handler)
+      this.handlers[namespace][eventType] = []
+    }
+    else if (eventType) {
+      for (var ns in this.handlers) {
+        removeListeners(element, this.handlers[ns][eventType] || [], handler)
+        this.handlers[ns][eventType] = []
+      }
+    }
+    else if (namespace && this.handlers[namespace]) {
+      for (var type in this.handlers[namespace]) {
+        removeListeners(element, this.handlers[namespace][type], handler)
+      }
+      this.handlers[namespace] = {}
+    }
+  }
+
+  /* Adapted from jQuery 1.x offset() */
+  NoFrameworkAdapter.prototype.offset = function() {
+    if (!this.element.ownerDocument) {
+      return null
+    }
+
+    var documentElement = this.element.ownerDocument.documentElement
+    var win = getWindow(this.element.ownerDocument)
+    var rect = {
+      top: 0,
+      left: 0
+    }
+
+    if (this.element.getBoundingClientRect) {
+      rect = this.element.getBoundingClientRect()
+    }
+
+    return {
+      top: rect.top + win.pageYOffset - documentElement.clientTop,
+      left: rect.left + win.pageXOffset - documentElement.clientLeft
+    }
+  }
+
+  NoFrameworkAdapter.prototype.on = function(event, handler) {
+    var eventParts = event.split('.')
+    var eventType = eventParts[0]
+    var namespace = eventParts[1] || '__default'
+    var nsHandlers = this.handlers[namespace] = this.handlers[namespace] || {}
+    var nsTypeList = nsHandlers[eventType] = nsHandlers[eventType] || []
+
+    nsTypeList.push(handler)
+    this.element.addEventListener(eventType, handler)
+  }
+
+  NoFrameworkAdapter.prototype.outerHeight = function(includeMargin) {
+    var height = this.innerHeight()
+    var computedStyle
+
+    if (includeMargin && !isWindow(this.element)) {
+      computedStyle = window.getComputedStyle(this.element)
+      height += parseInt(computedStyle.marginTop, 10)
+      height += parseInt(computedStyle.marginBottom, 10)
+    }
+
+    return height
+  }
+
+  NoFrameworkAdapter.prototype.outerWidth = function(includeMargin) {
+    var width = this.innerWidth()
+    var computedStyle
+
+    if (includeMargin && !isWindow(this.element)) {
+      computedStyle = window.getComputedStyle(this.element)
+      width += parseInt(computedStyle.marginLeft, 10)
+      width += parseInt(computedStyle.marginRight, 10)
+    }
+
+    return width
+  }
+
+  NoFrameworkAdapter.prototype.scrollLeft = function() {
+    var win = getWindow(this.element)
+    return win ? win.pageXOffset : this.element.scrollLeft
+  }
+
+  NoFrameworkAdapter.prototype.scrollTop = function() {
+    var win = getWindow(this.element)
+    return win ? win.pageYOffset : this.element.scrollTop
+  }
+
+  NoFrameworkAdapter.prototype.height = function(newHeight) {
+    this.element.style.height = newHeight;
+  }
+
+  NoFrameworkAdapter.prototype.removeClass = function(className) {
+    this.element.className = this.element.className.replace(classNameRegExp(className), '');
+  }
+
+  NoFrameworkAdapter.prototype.toggleClass = function(className, addClass) {
+    var check = classNameRegExp(className);
+    if (check.test(this.element.className)) {
+      if (!addClass) {
+        this.removeClass(className);
+      }
+    } else {
+      this.element.className += ' ' + className;
+    }
+  }
+
+  NoFrameworkAdapter.prototype.parent = function() {
+    return new NoFrameworkAdapter(this.element.parentNode);
+  }
+
+  NoFrameworkAdapter.prototype.wrap = function(wrapper) {
+    this.element.insertAdjacentHTML('beforebegin', wrapper)
+    var wrapperNode = this.element.previousSibling
+    this.element.parentNode.removeChild(this.element)
+    wrapperNode.appendChild(this.element)
+  }
+
+  NoFrameworkAdapter.extend = function() {
+    var args = Array.prototype.slice.call(arguments)
+
+    function merge(target, obj) {
+      if (typeof target === 'object' && typeof obj === 'object') {
+        for (var key in obj) {
+          if (obj.hasOwnProperty(key)) {
+            target[key] = obj[key]
+          }
+        }
+      }
+
+      return target
+    }
+
+    for (var i = 1, end = args.length; i < end; i++) {
+      merge(args[0], args[i])
+    }
+    return args[0]
+  }
+
+  NoFrameworkAdapter.inArray = function(element, array, i) {
+    return array == null ? -1 : array.indexOf(element, i)
+  }
+
+  NoFrameworkAdapter.isEmptyObject = function(obj) {
+    /* eslint no-unused-vars: 0 */
+    for (var name in obj) {
+      return false
+    }
+    return true
+  }
+
+  NoFrameworkAdapter.proxy = function(func, obj) {
+    return function() {
+      return func.apply(obj, arguments);
+    }
+  }
+
+  Waypoint.adapters.push({
+    name: 'noframework',
+    Adapter: NoFrameworkAdapter
+  })
+  Waypoint.Adapter = NoFrameworkAdapter
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/javascripts/waypoints/sticky.js
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/javascripts/waypoints/sticky.js b/hawq-book/master_middleman/source/javascripts/waypoints/sticky.js
new file mode 100644
index 0000000..569fcdb
--- /dev/null
+++ b/hawq-book/master_middleman/source/javascripts/waypoints/sticky.js
@@ -0,0 +1,63 @@
+(function() {
+  'use strict'
+
+  var Waypoint = window.Waypoint;
+  var adapter = Waypoint.Adapter;
+
+  /* http://imakewebthings.com/waypoints/shortcuts/sticky-elements */
+  function Sticky(options) {
+    this.options = adapter.extend({}, Waypoint.defaults, Sticky.defaults, options)
+    this.element = this.options.element
+    this.$element = new adapter(this.element)
+    this.createWrapper()
+    this.createWaypoint()
+  }
+
+  /* Private */
+  Sticky.prototype.createWaypoint = function() {
+    var originalHandler = this.options.handler
+
+    this.waypoint = new Waypoint(adapter.extend({}, this.options, {
+      element: this.wrapper,
+      handler: adapter.proxy(function(direction) {
+        var shouldBeStuck = this.options.direction.indexOf(direction) > -1
+        var wrapperHeight = shouldBeStuck ? this.$element.outerHeight(true) : ''
+
+        this.$wrapper.height(wrapperHeight)
+        this.$element.toggleClass(this.options.stuckClass, shouldBeStuck)
+
+        if (originalHandler) {
+          originalHandler.call(this, direction)
+        }
+      }, this)
+    }))
+  }
+
+  /* Private */
+  Sticky.prototype.createWrapper = function() {
+    if (this.options.wrapper) {
+      this.$element.wrap(this.options.wrapper)
+    }
+    this.$wrapper = this.$element.parent()
+    this.wrapper = this.$wrapper.element
+  }
+
+  /* Public */
+  Sticky.prototype.destroy = function() {
+    if (this.$element.parent().element === this.wrapper) {
+      this.waypoint.destroy()
+      this.$element.removeClass(this.options.stuckClass)
+      if (this.options.wrapper) {
+        this.$element.unwrap()
+      }
+    }
+  }
+
+  Sticky.defaults = {
+    wrapper: '<div class="sticky-wrapper" />',
+    stuckClass: 'stuck',
+    direction: 'down right'
+  }
+
+  Waypoint.Sticky = Sticky
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/javascripts/waypoints/waypoint.js
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/javascripts/waypoints/waypoint.js b/hawq-book/master_middleman/source/javascripts/waypoints/waypoint.js
new file mode 100644
index 0000000..7f76f1d
--- /dev/null
+++ b/hawq-book/master_middleman/source/javascripts/waypoints/waypoint.js
@@ -0,0 +1,160 @@
+(function() {
+  'use strict'
+
+  var keyCounter = 0
+  var allWaypoints = {}
+
+  /* http://imakewebthings.com/waypoints/api/waypoint */
+  function Waypoint(options) {
+    if (!options) {
+      throw new Error('No options passed to Waypoint constructor')
+    }
+    if (!options.element) {
+      throw new Error('No element option passed to Waypoint constructor')
+    }
+    if (!options.handler) {
+      throw new Error('No handler option passed to Waypoint constructor')
+    }
+
+    this.key = 'waypoint-' + keyCounter
+    this.options = Waypoint.Adapter.extend({}, Waypoint.defaults, options)
+    this.element = this.options.element
+    this.adapter = new Waypoint.Adapter(this.element)
+    this.callback = options.handler
+    this.axis = this.options.horizontal ? 'horizontal' : 'vertical'
+    this.enabled = this.options.enabled
+    this.triggerPoint = null
+    this.group = Waypoint.Group.findOrCreate({
+      name: this.options.group,
+      axis: this.axis
+    })
+    this.context = Waypoint.Context.findOrCreateByElement(this.options.context)
+
+    if (Waypoint.offsetAliases[this.options.offset]) {
+      this.options.offset = Waypoint.offsetAliases[this.options.offset]
+    }
+    this.group.add(this)
+    this.context.add(this)
+    allWaypoints[this.key] = this
+    keyCounter += 1
+  }
+
+  /* Private */
+  Waypoint.prototype.queueTrigger = function(direction) {
+    this.group.queueTrigger(this, direction)
+  }
+
+  /* Private */
+  Waypoint.prototype.trigger = function(args) {
+    if (!this.enabled) {
+      return
+    }
+    if (this.callback) {
+      this.callback.apply(this, args)
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/destroy */
+  Waypoint.prototype.destroy = function() {
+    this.context.remove(this)
+    this.group.remove(this)
+    delete allWaypoints[this.key]
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/disable */
+  Waypoint.prototype.disable = function() {
+    this.enabled = false
+    return this
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/enable */
+  Waypoint.prototype.enable = function() {
+    this.context.refresh()
+    this.enabled = true
+    return this
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/next */
+  Waypoint.prototype.next = function() {
+    return this.group.next(this)
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/previous */
+  Waypoint.prototype.previous = function() {
+    return this.group.previous(this)
+  }
+
+  /* Private */
+  Waypoint.invokeAll = function(method) {
+    var allWaypointsArray = []
+    for (var waypointKey in allWaypoints) {
+      allWaypointsArray.push(allWaypoints[waypointKey])
+    }
+    for (var i = 0, end = allWaypointsArray.length; i < end; i++) {
+      allWaypointsArray[i][method]()
+    }
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/destroy-all */
+  Waypoint.destroyAll = function() {
+    Waypoint.invokeAll('destroy')
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/disable-all */
+  Waypoint.disableAll = function() {
+    Waypoint.invokeAll('disable')
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/enable-all */
+  Waypoint.enableAll = function() {
+    Waypoint.invokeAll('enable')
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/refresh-all */
+  Waypoint.refreshAll = function() {
+    Waypoint.Context.refreshAll()
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/viewport-height */
+  Waypoint.viewportHeight = function() {
+    return window.innerHeight || document.documentElement.clientHeight
+  }
+
+  /* Public */
+  /* http://imakewebthings.com/waypoints/api/viewport-width */
+  Waypoint.viewportWidth = function() {
+    return document.documentElement.clientWidth
+  }
+
+  Waypoint.adapters = []
+
+  Waypoint.defaults = {
+    context: window,
+    continuous: true,
+    enabled: true,
+    group: 'default',
+    horizontal: false,
+    offset: 0
+  }
+
+  Waypoint.offsetAliases = {
+    'bottom-in-view': function() {
+      return this.context.innerHeight() - this.adapter.outerHeight()
+    },
+    'right-in-view': function() {
+      return this.context.innerWidth() - this.adapter.outerWidth()
+    }
+  }
+
+  window.Waypoint = Waypoint
+}())

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/layouts/_title.erb
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/layouts/_title.erb b/hawq-book/master_middleman/source/layouts/_title.erb
new file mode 100644
index 0000000..ea744d9
--- /dev/null
+++ b/hawq-book/master_middleman/source/layouts/_title.erb
@@ -0,0 +1,6 @@
+<% if current_page.data.title %>
+  <h1 class="title-container" <%= current_page.data.dita ? 'style="display: none;"' : '' %>>
+    <%= current_page.data.title %>
+  </h1>
+<% end %>
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/patch/dynamic_variable_interpretation.py
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/patch/dynamic_variable_interpretation.py b/hawq-book/master_middleman/source/patch/dynamic_variable_interpretation.py
new file mode 100644
index 0000000..66df9ff
--- /dev/null
+++ b/hawq-book/master_middleman/source/patch/dynamic_variable_interpretation.py
@@ -0,0 +1,192 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+__all__ = ["copy_tarballs_to_hdfs", ]
+import os
+import glob
+import re
+import tempfile
+from resource_management.libraries.functions.default import default
+from resource_management.libraries.functions.format import format
+from resource_management.libraries.resources.copy_from_local import CopyFromLocal
+from resource_management.libraries.resources.execute_hadoop import ExecuteHadoop
+from resource_management.core.resources.system import Execute
+from resource_management.core.exceptions import Fail
+from resource_management.core.logger import Logger
+from resource_management.core import shell
+
+"""
+This file provides helper methods needed for the versioning of RPMs. Specifically, it does dynamic variable
+interpretation to replace strings like {{ hdp_stack_version }}  where the value of the
+variables cannot be determined ahead of time, but rather, depends on what files are found.
+
+It assumes that {{ hdp_stack_version }} is constructed as ${major.minor.patch.rev}-${build_number}
+E.g., 998.2.2.1.0-998
+Please note that "-${build_number}" is optional.
+"""
+
+# These values must be the suffix of the properties in cluster-env.xml
+TAR_SOURCE_SUFFIX = "_tar_source"
+TAR_DESTINATION_FOLDER_SUFFIX = "_tar_destination_folder"
+
+
+def _get_tar_source_and_dest_folder(tarball_prefix):
+  """
+  :param tarball_prefix: Prefix of the tarball must be one of tez, hive, mr, pig
+  :return: Returns a tuple of (x, y) after verifying the properties
+  """
+  component_tar_source_file = default("/configurations/cluster-env/%s%s" % (tarball_prefix.lower(), TAR_SOURCE_SUFFIX), None)
+  # E.g., /usr/hdp/current/hadoop-client/tez-{{ hdp_stack_version }}.tar.gz
+
+  component_tar_destination_folder = default("/configurations/cluster-env/%s%s" % (tarball_prefix.lower(), TAR_DESTINATION_FOLDER_SUFFIX), None)
+  # E.g., hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/
+
+  if not component_tar_source_file or not component_tar_destination_folder:
+    Logger.warning("Did not find %s tar source file and destination folder properties in cluster-env.xml" %
+                   tarball_prefix)
+    return None, None
+
+  if component_tar_source_file.find("/") == -1:
+    Logger.warning("The tar file path %s is not valid" % str(component_tar_source_file))
+    return None, None
+
+  if not component_tar_destination_folder.endswith("/"):
+    component_tar_destination_folder = component_tar_destination_folder + "/"
+
+  if not component_tar_destination_folder.startswith("hdfs://"):
+    return None, None
+
+  return component_tar_source_file, component_tar_destination_folder
+
+
+def _copy_files(source_and_dest_pairs, file_owner, group_owner, kinit_if_needed):
+  """
+  :param source_and_dest_pairs: List of tuples (x, y), where x is the source file in the local file system,
+  and y is the destination file path in HDFS
+  :param file_owner: Owner to set for the file copied to HDFS (typically hdfs account)
+  :param group_owner: Owning group to set for the file copied to HDFS (typically hadoop group)
+  :param kinit_if_needed: kinit command if it is needed, otherwise an empty string
+  :return: Returns 0 if at least one file was copied and no exceptions occurred, and 1 otherwise.
+
+  Must kinit before calling this function.
+  """
+  import params
+
+  return_value = 1
+  if source_and_dest_pairs and len(source_and_dest_pairs) > 0:
+    return_value = 0
+    for (source, destination) in source_and_dest_pairs:
+      try:
+        destination_dir = os.path.dirname(destination)
+
+        params.HdfsDirectory(destination_dir,
+                             action="create",
+                             owner=file_owner,
+                             mode=0555
+        )
+
+        CopyFromLocal(source,
+                      mode=0444,
+                      owner=file_owner,
+                      group=group_owner,
+                      dest_dir=destination_dir,
+                      kinnit_if_needed=kinit_if_needed,
+                      hdfs_user=params.hdfs_user,
+                      hadoop_bin_dir=params.hadoop_bin_dir,
+                      hadoop_conf_dir=params.hadoop_conf_dir
+        )
+      except:
+        return_value = 1
+  return return_value
+
+
+def copy_tarballs_to_hdfs(tarball_prefix, component_user, file_owner, group_owner):
+  """
+  :param tarball_prefix: Prefix of the tarball must be one of tez, hive, mr, pig
+  :param component_user: User that will execute the Hadoop commands
+  :param file_owner: Owner of the files copied to HDFS (typically hdfs account)
+  :param group_owner: Group owner of the files copied to HDFS (typically hadoop group)
+  :return: Returns 0 on success, 1 if no files were copied, and in some cases may raise an exception.
+
+  In order to call this function, params.py must have all of the following,
+  hdp_stack_version, kinit_path_local, security_enabled, hdfs_user, hdfs_principal_name, hdfs_user_keytab,
+  hadoop_bin_dir, hadoop_conf_dir, and HdfsDirectory as a partial function.
+  """
+  import params
+
+  if not hasattr(params, "hdp_stack_version") or params.hdp_stack_version is None:
+    Logger.warning("Could not find hdp_stack_version")
+    return 1
+
+  component_tar_source_file, component_tar_destination_folder = _get_tar_source_and_dest_folder(tarball_prefix)
+  if not component_tar_source_file or not component_tar_destination_folder:
+    Logger.warning("Could not retrieve properties for tarball with prefix: %s" % str(tarball_prefix))
+    return 1
+
+  if not os.path.exists(component_tar_source_file):
+    Logger.warning("Could not find file: %s" % str(component_tar_source_file))
+    return 1
+
+  # Ubuntu returns: "stdin: is not a tty", as subprocess output.
+  tmpfile = tempfile.NamedTemporaryFile()
+  with open(tmpfile.name, 'r+') as file:
+    get_hdp_version_cmd = '/usr/bin/hdp-select versions > %s' % tmpfile.name
+    code, stdoutdata = shell.call(get_hdp_version_cmd)
+    out = file.read()
+  pass
+  if code != 0 or out is None:
+    Logger.warning("Could not verify HDP version by calling '%s'. Return Code: %s, Output: %s." %
+                   (get_hdp_version_cmd, str(code), str(out)))
+    return 1
+
+  hdp_version = out.strip() # this should include the build number
+
+  file_name = os.path.basename(component_tar_source_file)
+  destination_file = os.path.join(component_tar_destination_folder, file_name)
+  destination_file = destination_file.replace("{{ hdp_stack_version }}", hdp_version)
+
+  does_hdfs_file_exist_cmd = "fs -ls %s" % destination_file
+
+  kinit_if_needed = ""
+  if params.security_enabled:
+    kinit_if_needed = format("{kinit_path_local} -kt {hdfs_user_keytab} {hdfs_principal_name};")
+
+  if kinit_if_needed:
+    Execute(kinit_if_needed,
+            user=component_user,
+            path='/bin'
+    )
+
+  does_hdfs_file_exist = False
+  try:
+    ExecuteHadoop(does_hdfs_file_exist_cmd,
+                  user=component_user,
+                  logoutput=True,
+                  conf_dir=params.hadoop_conf_dir,
+                  bin_dir=params.hadoop_bin_dir
+    )
+    does_hdfs_file_exist = True
+  except Fail:
+    pass
+
+  if not does_hdfs_file_exist:
+    source_and_dest_pairs = [(component_tar_source_file, destination_file), ]
+    return _copy_files(source_and_dest_pairs, file_owner, group_owner, kinit_if_needed)
+  return 1

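For readers following the patch: the core of `copy_tarballs_to_hdfs` above is a path substitution that fills the `{{ hdp_stack_version }}` placeholder (from the `cluster-env.xml` properties) with the version string reported by `hdp-select` at runtime. A minimal standalone sketch of that step, where `resolve_destination` is a hypothetical helper and the paths and version are the example values from the file's own comments:

```python
import os

def resolve_destination(source_file, destination_folder, hdp_version):
    """Build the final HDFS path for a tarball, filling in the stack version.

    Mirrors the substitution done in copy_tarballs_to_hdfs(): join the
    destination folder with the tarball's base name, then replace every
    "{{ hdp_stack_version }}" placeholder with the concrete version.
    """
    file_name = os.path.basename(source_file)
    destination_file = os.path.join(destination_folder, file_name)
    return destination_file.replace("{{ hdp_stack_version }}", hdp_version)

# Example values taken from the comments in the committed file:
path = resolve_destination(
    "/usr/hdp/current/hadoop-client/tez-{{ hdp_stack_version }}.tar.gz",
    "hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/",
    "998.2.2.1.0-998",
)
# path == "hdfs:///hdp/apps/998.2.2.1.0-998/mapreduce/tez-998.2.2.1.0-998.tar.gz"
```

The committed function only performs the copy when an `fs -ls` probe for this resolved path fails, so re-running it against an already-populated HDFS directory is a no-op.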
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/stylesheets/book-styles.css.scss
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/stylesheets/book-styles.css.scss b/hawq-book/master_middleman/source/stylesheets/book-styles.css.scss
new file mode 100644
index 0000000..1236d8e
--- /dev/null
+++ b/hawq-book/master_middleman/source/stylesheets/book-styles.css.scss
@@ -0,0 +1,3 @@
+* {
+  box-sizing: border-box;
+}

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/stylesheets/partials/_book-base-values.scss
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/stylesheets/partials/_book-base-values.scss b/hawq-book/master_middleman/source/stylesheets/partials/_book-base-values.scss
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/hawq-book/master_middleman/source/stylesheets/partials/_book-vars.scss
----------------------------------------------------------------------
diff --git a/hawq-book/master_middleman/source/stylesheets/partials/_book-vars.scss b/hawq-book/master_middleman/source/stylesheets/partials/_book-vars.scss
new file mode 100644
index 0000000..4245d57
--- /dev/null
+++ b/hawq-book/master_middleman/source/stylesheets/partials/_book-vars.scss
@@ -0,0 +1,19 @@
+$navy: #243640;
+$blue1: #2185c5;
+$blue2: #a7cae1;
+$bluegray1: #4b6475;
+$teal1: #03786D;
+$teal2: #00a79d;
+
+$color-accent: $teal1;
+$color-accent-bright: $teal2;
+
+// link colors
+$color-link: $blue1;
+$color-link-border: $blue2;
+
+$color-border-tip: $blue2;
+
+$color-bg-header: $navy;
+$color-bg-dark: $bluegray1;
+
