fix links in gh-pages-master

Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/e73f2ec1
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/e73f2ec1
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/e73f2ec1

Branch: refs/heads/gh-pages-master
Commit: e73f2ec17cf25b95930e94718f45ea93ea0cde99
Parents: 0119fdd
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Tue Mar 3 12:35:44 2015 -0800
Committer: Bridget Bevens <bbev...@maprtech.com>
Committed: Tue Mar 3 14:15:47 2015 -0800

----------------------------------------------------------------------
 _docs/001-arch.md                               |  6 +++---
 _docs/002-tutorial.md                           | 16 ++++++++--------
 _docs/003-yelp.md                               | 10 +++++-----
 _docs/006-interfaces.md                         | 12 ++++++------
 _docs/009-datasources.md                        |  4 ++--
 _docs/012-rn.md                                 |  2 +-
 _docs/013-contribute.md                         |  4 ++--
 _docs/013-rn.md                                 |  2 +-
 _docs/014-contribute.md                         |  4 ++--
 _docs/014-sample-ds.md                          |  6 +++---
 _docs/015-design.md                             | 10 +++++-----
 _docs/015-sample-ds.md                          |  6 +++---
 _docs/016-design.md                             | 10 +++++-----
 _docs/016-progress.md                           |  2 +-
 _docs/018-progress.md                           |  2 +-
 _docs/arch/arch-hilite/001-flexibility.md       |  2 +-
 _docs/connect/006-reg-hive.md                   |  5 ++---
 _docs/connect/007-default-frmt.md               |  2 +-
 _docs/connect/008-mongo-plugin.md               |  6 +++---
 _docs/contribute/001-guidelines.md              |  5 ++---
 _docs/data-sources/002-hive-udf.md              |  4 ++--
 _docs/data-sources/003-parquet-ref.md           |  4 ++--
 _docs/data-sources/004-json-ref.md              | 10 +++++-----
 _docs/dev-custom-fcn/001-dev-simple.md          |  2 +-
 _docs/dev-custom-fcn/002-dev-aggregate.md       |  2 +-
 _docs/develop/001-compile.md                    |  6 +++---
 _docs/develop/003-patch-tool.md                 |  4 ++--
 _docs/install/001-drill-in-10.md                | 20 ++++++++++----------
 _docs/install/002-deploy.md                     |  4 ++--
 _docs/install/004-install-distributed.md        |  2 +-
 .../install-embedded/001-install-linux.md       |  2 +-
 .../install/install-embedded/002-install-mac.md |  2 +-
 .../install/install-embedded/003-install-win.md |  2 +-
 _docs/interfaces/001-odbc-win.md                |  8 ++++----
 _docs/interfaces/003-jdbc-squirrel.md           |  2 +-
 .../odbc-linux/001-install-odbc-linux.md        |  4 ++--
 .../odbc-linux/002-install-odbc-mac.md          |  4 ++--
 .../odbc-linux/003-odbc-connections-linux.md    |  6 +++---
 .../odbc-linux/005-odbc-connect-str.md          |  2 +-
 .../interfaces/odbc-win/001-install-odbc-win.md |  2 +-
 _docs/interfaces/odbc-win/002-conf-odbc-win.md  |  8 ++++----
 .../interfaces/odbc-win/003-connect-odbc-win.md |  4 ++--
 .../interfaces/odbc-win/004-tableau-examples.md |  4 ++--
 _docs/manage/conf/001-mem-alloc.md              |  2 +-
 _docs/manage/conf/002-startup-opt.md            |  2 +-
 _docs/manage/conf/004-persist-conf.md           |  2 +-
 _docs/query/001-get-started.md                  |  8 ++++----
 _docs/query/002-query-fs.md                     |  2 +-
 _docs/query/003-query-hbase.md                  |  2 +-
 _docs/query/005-query-hive.md                   |  2 +-
 _docs/query/007-query-sys-tbl.md                |  4 ++--
 _docs/query/get-started/001-lesson1-connect.md  |  6 +++---
 _docs/query/query-fs/002-query-parquet.md       |  2 +-
 _docs/rn/004-0.6.0-rn.md                        |  2 +-
 _docs/sql-ref/001-data-types.md                 |  2 +-
 _docs/sql-ref/002-operators.md                  |  2 +-
 _docs/sql-ref/003-functions.md                  |  6 +++---
 _docs/sql-ref/004-nest-functions.md             |  6 +++---
 _docs/sql-ref/005-cmd-summary.md                |  2 +-
 _docs/sql-ref/nested/001-flatten.md             |  2 +-
 _docs/sql-ref/nested/002-kvgen.md               |  6 +++---
 _docs/sql-ref/nested/003-repeated-cnt.md        |  2 +-
 _docs/tutorial/002-get2kno-sb.md                |  2 +-
 _docs/tutorial/003-lesson1.md                   |  2 +-
 _docs/tutorial/004-lesson2.md                   |  2 +-
 _docs/tutorial/005-lesson3.md                   |  2 +-
 _docs/tutorial/006-summary.md                   |  2 +-
 .../install-sandbox/001-install-mapr-vm.md      |  2 +-
 .../install-sandbox/002-install-mapr-vb.md      |  2 +-
 69 files changed, 149 insertions(+), 151 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/001-arch.md
----------------------------------------------------------------------
diff --git a/_docs/001-arch.md b/_docs/001-arch.md
index 0905ad3..8779a11 100644
--- a/_docs/001-arch.md
+++ b/_docs/001-arch.md
@@ -43,7 +43,7 @@ The flow of a Drill query typically involves the following steps:
 
 You can access Drill through the following interfaces:
 
-  * [Drill shell (SQLLine)](/drill/docs/starting-stopping-drill)
-  * [Drill Web UI](/drill/docs/monitoring-and-canceling-queries-in-the-drill-web-ui)
-  * [ODBC/JDBC](/drill/docs/odbc-jdbc-interfaces/#using-odbc-to-access-apache-drill-from-bi-tools)
 
+  * [Drill shell (SQLLine)](/docs/starting-stopping-drill)
+  * [Drill Web UI](/docs/monitoring-and-canceling-queries-in-the-drill-web-ui)
+  * [ODBC/JDBC](/docs/odbc-jdbc-interfaces/#using-odbc-to-access-apache-drill-from-bi-tools)
 
   * C++ API
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/002-tutorial.md
----------------------------------------------------------------------
diff --git a/_docs/002-tutorial.md b/_docs/002-tutorial.md
index 14cae80..9f69b71 100644
--- a/_docs/002-tutorial.md
+++ b/_docs/002-tutorial.md
@@ -7,12 +7,12 @@ configured with Apache Drill.
 To complete the tutorial on the MapR Sandbox with Apache Drill, work through
 the following pages in order:
 
-  * [Installing the Apache Drill Sandbox](/drill/docs/installing-the-apache-drill-sandbox)
-  * [Getting to Know the Drill Setup](/drill/docs/getting-to-know-the-drill-sandbox)
-  * [Lesson 1: Learn About the Data Set](/drill/docs/lession-1-learn-about-the-data-set)
-  * [Lesson 2: Run Queries with ANSI SQL](/drill/docs/lession-2-run-queries-with-ansi-sql)
-  * [Lesson 3: Run Queries on Complex Data Types](/drill/docs/lession-3-run-queries-on-complex-data-types)
-  * [Summary](/drill/docs/summary)
+  * [Installing the Apache Drill Sandbox](/docs/installing-the-apache-drill-sandbox)
+  * [Getting to Know the Drill Setup](/docs/getting-to-know-the-drill-sandbox)
+  * [Lesson 1: Learn About the Data Set](/docs/lession-1-learn-about-the-data-set)
+  * [Lesson 2: Run Queries with ANSI SQL](/docs/lession-2-run-queries-with-ansi-sql)
+  * [Lesson 3: Run Queries on Complex Data Types](/docs/lession-3-run-queries-on-complex-data-types)
+  * [Summary](/docs/summary)
 
 ## About Apache Drill
 
@@ -41,11 +41,11 @@ environment to get a feel for the power and capabilities of Apache Drill by
 performing various types of queries. Once you get a flavor for the technology,
 refer to the [Apache Drill web site](http://incubator.apache.org/drill/) and
 [Apache Drill documentation
-](/drill/docs)for more
+](/docs)for more
 details.
 
 Note that Hadoop is not a prerequisite for Drill and users can start ramping
 up with Drill by running SQL queries directly on the local file system. Refer
-to [Apache Drill in 10 minutes](/drill/docs/apache-drill-in-10-minutes) for an introduction to using Drill in local
+to [Apache Drill in 10 minutes](/docs/apache-drill-in-10-minutes) for an introduction to using Drill in local
 (embedded) mode.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/003-yelp.md
----------------------------------------------------------------------
diff --git a/_docs/003-yelp.md b/_docs/003-yelp.md
index b65359e..874f780 100644
--- a/_docs/003-yelp.md
+++ b/_docs/003-yelp.md
@@ -25,7 +25,7 @@ example is downloadable from [Yelp](http://www.yelp.com/dataset_challenge)
 
 
[http://incubator.apache.org/drill/download/](http://incubator.apache.org/drill/download/)
 
-You can also [deploy Drill in clustered mode](/drill/docs/deploying-apache-drill-in-a-clustered-environment) if you
+You can also [deploy Drill in clustered mode](/docs/deploying-apache-drill-in-a-clustered-environment) if you
 want to scale your environment.
 
 ### Step 2 : Open the Drill tar file
@@ -337,10 +337,10 @@ Let’s get the total number of records from the view.
     +------------+
 
 In addition to these queries, you can get many more deeper insights using
-Drill’s [SQL functionality](/drill/docs/sql-reference). If you are not comfortable with writing queries manually, you
+Drill’s [SQL functionality](/docs/sql-reference). If you are not comfortable with writing queries manually, you
 can use a BI/Analytics tools such as Tableau/MicroStrategy to query raw
 files/Hive/HBase data or Drill-created views directly using Drill [ODBC/JDBC
-drivers](/drill/docs/odbc-jdbc-interfaces).
+drivers](/docs/odbc-jdbc-interfaces).
 
 The goal of Apache Drill is to provide the freedom and flexibility in
 exploring data in ways we have never seen before with SQL technologies. The
@@ -407,6 +407,6 @@ To learn more about Drill, please refer to the following resources:
 
   * Download Drill here:<http://incubator.apache.org/drill/download/>
   * 10 reasons we think Drill is 
cool:<http://incubator.apache.org/drill/why-drill/>
-  * [A simple 10-minute tutorial](/drill/docs/apache-drill-in-10-minutes>)
-  * [A more comprehensive tutorial](/drill/docs/apache-drill-tutorial)
+  * [A simple 10-minute tutorial](/docs/apache-drill-in-10-minutes>)
+  * [A more comprehensive tutorial](/docs/apache-drill-tutorial)
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/006-interfaces.md
----------------------------------------------------------------------
diff --git a/_docs/006-interfaces.md b/_docs/006-interfaces.md
index ce068a6..f971e63 100644
--- a/_docs/006-interfaces.md
+++ b/_docs/006-interfaces.md
@@ -5,8 +5,8 @@ You can connect to Apache Drill through the following interfaces:
 
   * Drill shell (SQLLine)
   * Drill Web UI
-  * [ODBC](/drill/docs/odbc-jdbc-interfaces#using-odbc-to-access-apache-drill-from-bi-tools)*
-  * [JDBC](/drill/docs/odbc-jdbc-interfaces#using-jdbc-to-access-apache-drill-from-squirrel)
+  * [ODBC](/docs/odbc-jdbc-interfaces#using-odbc-to-access-apache-drill-from-bi-tools)*
+  * [JDBC](/docs/odbc-jdbc-interfaces#using-jdbc-to-access-apache-drill-from-squirrel)
   * C++ API
 
 *Apache Drill does not have an open source ODBC driver. However, MapR provides 
an ODBC driver that you can use to connect to Apache Drill from BI tools. 
@@ -39,10 +39,10 @@ SQuirreL on Windows.
 To use the Drill JDBC driver with SQuirreL on Windows, complete the following
 steps:
 
-  * [Step 1: Getting the Drill JDBC Driver](/drill/docs/using-the-jdbc-driver#step-1-getting-the-drill-jdbc-driver) 
-  * [Step 2: Installing and Starting SQuirreL](/drill/docs/using-the-jdbc-driver#step-2-installing-and-starting-squirrel)
-  * [Step 3: Adding the Drill JDBC Driver to SQuirreL](/drill/docs/using-the-jdbc-driver#step-3-adding-the-drill-jdbc-driver-to-squirrel)
-  * [Step 4: Running a Drill Query from SQuirreL](/drill/docs/using-the-jdbc-driver#step-4-running-a-drill-query-from-squirrel)
+  * [Step 1: Getting the Drill JDBC Driver](/docs/using-the-jdbc-driver#step-1-getting-the-drill-jdbc-driver) 
+  * [Step 2: Installing and Starting SQuirreL](/docs/using-the-jdbc-driver#step-2-installing-and-starting-squirrel)
+  * [Step 3: Adding the Drill JDBC Driver to SQuirreL](/docs/using-the-jdbc-driver#step-3-adding-the-drill-jdbc-driver-to-squirrel)
+  * [Step 4: Running a Drill Query from SQuirreL](/docs/using-the-jdbc-driver#step-4-running-a-drill-query-from-squirrel)
 
 For information about how to use SQuirreL, refer to the [SQuirreL Quick
 Start](http://squirrel-sql.sourceforge.net/user-manual/quick_start.html)

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/009-datasources.md
----------------------------------------------------------------------
diff --git a/_docs/009-datasources.md b/_docs/009-datasources.md
index 3f3d431..77348a7 100644
--- a/_docs/009-datasources.md
+++ b/_docs/009-datasources.md
@@ -18,9 +18,9 @@ Drill supports the following input formats for data:
 * Parquet
 * JSON
 
-You set the input format for data coming from data sources to Drill in the workspace portion of the [storage plugin](/drill/docs/storage-plugin-registration) definition. The default input format in Drill is Parquet. 
+You set the input format for data coming from data sources to Drill in the workspace portion of the [storage plugin](/docs/storage-plugin-registration) definition. The default input format in Drill is Parquet. 
 
-You change the [sys.options table](/drill/docs/planning-and-execution-options) to set the output format of Drill data. The default storage format for Drill Create Table AS (CTAS) statements is Parquet.
+You change the [sys.options table](/docs/planning-and-execution-options) to set the output format of Drill data. The default storage format for Drill Create Table AS (CTAS) statements is Parquet.
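
For orientation, changing that output format is a one-line session option change; a minimal sketch, assuming the standard `store.format` option these docs refer to:

    ALTER SESSION SET `store.format` = 'json';    -- write CTAS output as JSON
    ALTER SESSION SET `store.format` = 'parquet'; -- restore the default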
 
 
  

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/012-rn.md
----------------------------------------------------------------------
diff --git a/_docs/012-rn.md b/_docs/012-rn.md
index f369335..25ec29e 100644
--- a/_docs/012-rn.md
+++ b/_docs/012-rn.md
@@ -75,7 +75,7 @@ This release is primarily a bug fix release, with [more than 30 JIRAs closed](
 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&vers
 ion=12327472), but there are some notable features:
 
-  * Direct ANSI SQL access to MongoDB, using the latest [MongoDB Plugin for Apache Drill](/drill/docs/mongodb-plugin-for-apache-drill)
+  * Direct ANSI SQL access to MongoDB, using the latest [MongoDB Plugin for Apache Drill](/docs/mongodb-plugin-for-apache-drill)
   * Filesystem query performance improvements with partition pruning
   * Ability to use the file system as a persistent store for query profiles 
and diagnostic information
   * Window function support (alpha)

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/013-contribute.md
----------------------------------------------------------------------
diff --git a/_docs/013-contribute.md b/_docs/013-contribute.md
index 33db231..42108b9 100644
--- a/_docs/013-contribute.md
+++ b/_docs/013-contribute.md
@@ -2,8 +2,8 @@
 title: "Contribute to Drill"
 ---
 The Apache Drill community welcomes your support. Please read [Apache Drill
-Contribution Guidelines](/drill/docs/apache-drill-contribution-guidelines) for information about how to contribute to
+Contribution Guidelines](/docs/apache-drill-contribution-guidelines) for information about how to contribute to
 the project. If you would like to contribute to the project and need some
 ideas for what to do, please read [Apache Drill Contribution
-Ideas](/drill/docs/apache-drill-contribution-ideas).
+Ideas](/docs/apache-drill-contribution-ideas).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/013-rn.md
----------------------------------------------------------------------
diff --git a/_docs/013-rn.md b/_docs/013-rn.md
index f369335..25ec29e 100644
--- a/_docs/013-rn.md
+++ b/_docs/013-rn.md
@@ -75,7 +75,7 @@ This release is primarily a bug fix release, with [more than 30 JIRAs closed](
 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&vers
 ion=12327472), but there are some notable features:
 
-  * Direct ANSI SQL access to MongoDB, using the latest [MongoDB Plugin for Apache Drill](/drill/docs/mongodb-plugin-for-apache-drill)
+  * Direct ANSI SQL access to MongoDB, using the latest [MongoDB Plugin for Apache Drill](/docs/mongodb-plugin-for-apache-drill)
   * Filesystem query performance improvements with partition pruning
   * Ability to use the file system as a persistent store for query profiles 
and diagnostic information
   * Window function support (alpha)

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/014-contribute.md
----------------------------------------------------------------------
diff --git a/_docs/014-contribute.md b/_docs/014-contribute.md
index 33db231..42108b9 100644
--- a/_docs/014-contribute.md
+++ b/_docs/014-contribute.md
@@ -2,8 +2,8 @@
 title: "Contribute to Drill"
 ---
 The Apache Drill community welcomes your support. Please read [Apache Drill
-Contribution Guidelines](/drill/docs/apache-drill-contribution-guidelines) for information about how to contribute to
+Contribution Guidelines](/docs/apache-drill-contribution-guidelines) for information about how to contribute to
 the project. If you would like to contribute to the project and need some
 ideas for what to do, please read [Apache Drill Contribution
-Ideas](/drill/docs/apache-drill-contribution-ideas).
+Ideas](/docs/apache-drill-contribution-ideas).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/014-sample-ds.md
----------------------------------------------------------------------
diff --git a/_docs/014-sample-ds.md b/_docs/014-sample-ds.md
index 7212ea0..c6f51e1 100644
--- a/_docs/014-sample-ds.md
+++ b/_docs/014-sample-ds.md
@@ -3,8 +3,8 @@ title: "Sample Datasets"
 ---
 Use any of the following sample datasets provided to test Drill:
 
-  * [AOL Search](/drill/docs/aol-search)
-  * [Enron Emails](/drill/docs/enron-emails)
-  * [Wikipedia Edit History](/drill/docs/wikipedia-edit-history)
+  * [AOL Search](/docs/aol-search)
+  * [Enron Emails](/docs/enron-emails)
+  * [Wikipedia Edit History](/docs/wikipedia-edit-history)
 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/015-design.md
----------------------------------------------------------------------
diff --git a/_docs/015-design.md b/_docs/015-design.md
index 00b17e5..474052e 100644
--- a/_docs/015-design.md
+++ b/_docs/015-design.md
@@ -5,9 +5,9 @@ Review the Apache Drill design docs for early descriptions of Apache Drill
 functionality, terms, and goals, and reference the research articles to learn
 about Apache Drill's history:
 
-  * [Drill Plan Syntax](/drill/docs/drill-plan-syntax)
-  * [RPC Overview](/drill/docs/rpc-overview)
-  * [Query Stages](/drill/docs/query-stages)
-  * [Useful Research](/drill/docs/useful-research)
-  * [Value Vectors](/drill/docs/value-vectors)
+  * [Drill Plan Syntax](/docs/drill-plan-syntax)
+  * [RPC Overview](/docs/rpc-overview)
+  * [Query Stages](/docs/query-stages)
+  * [Useful Research](/docs/useful-research)
+  * [Value Vectors](/docs/value-vectors)
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/015-sample-ds.md
----------------------------------------------------------------------
diff --git a/_docs/015-sample-ds.md b/_docs/015-sample-ds.md
index 7212ea0..c6f51e1 100644
--- a/_docs/015-sample-ds.md
+++ b/_docs/015-sample-ds.md
@@ -3,8 +3,8 @@ title: "Sample Datasets"
 ---
 Use any of the following sample datasets provided to test Drill:
 
-  * [AOL Search](/drill/docs/aol-search)
-  * [Enron Emails](/drill/docs/enron-emails)
-  * [Wikipedia Edit History](/drill/docs/wikipedia-edit-history)
+  * [AOL Search](/docs/aol-search)
+  * [Enron Emails](/docs/enron-emails)
+  * [Wikipedia Edit History](/docs/wikipedia-edit-history)
 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/016-design.md
----------------------------------------------------------------------
diff --git a/_docs/016-design.md b/_docs/016-design.md
index 00b17e5..474052e 100644
--- a/_docs/016-design.md
+++ b/_docs/016-design.md
@@ -5,9 +5,9 @@ Review the Apache Drill design docs for early descriptions of Apache Drill
 functionality, terms, and goals, and reference the research articles to learn
 about Apache Drill's history:
 
-  * [Drill Plan Syntax](/drill/docs/drill-plan-syntax)
-  * [RPC Overview](/drill/docs/rpc-overview)
-  * [Query Stages](/drill/docs/query-stages)
-  * [Useful Research](/drill/docs/useful-research)
-  * [Value Vectors](/drill/docs/value-vectors)
+  * [Drill Plan Syntax](/docs/drill-plan-syntax)
+  * [RPC Overview](/docs/rpc-overview)
+  * [Query Stages](/docs/query-stages)
+  * [Useful Research](/docs/useful-research)
+  * [Value Vectors](/docs/value-vectors)
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/016-progress.md
----------------------------------------------------------------------
diff --git a/_docs/016-progress.md b/_docs/016-progress.md
index bf19a29..680290e 100644
--- a/_docs/016-progress.md
+++ b/_docs/016-progress.md
@@ -4,5 +4,5 @@ title: "Progress Reports"
 Review the following Apache Drill progress reports for a summary of issues,
 progression of the project, summary of mailing list discussions, and events:
 
-  * [2014 Q1 Drill Report](/drill/docs/2014-q1-drill-report)
+  * [2014 Q1 Drill Report](/docs/2014-q1-drill-report)
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/018-progress.md
----------------------------------------------------------------------
diff --git a/_docs/018-progress.md b/_docs/018-progress.md
index bf19a29..680290e 100644
--- a/_docs/018-progress.md
+++ b/_docs/018-progress.md
@@ -4,5 +4,5 @@ title: "Progress Reports"
 Review the following Apache Drill progress reports for a summary of issues,
 progression of the project, summary of mailing list discussions, and events:
 
-  * [2014 Q1 Drill Report](/drill/docs/2014-q1-drill-report)
+  * [2014 Q1 Drill Report](/docs/2014-q1-drill-report)
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/arch/arch-hilite/001-flexibility.md
----------------------------------------------------------------------
diff --git a/_docs/arch/arch-hilite/001-flexibility.md b/_docs/arch/arch-hilite/001-flexibility.md
index 0b5c5e3..8c0ae3a 100644
--- a/_docs/arch/arch-hilite/001-flexibility.md
+++ b/_docs/arch/arch-hilite/001-flexibility.md
@@ -56,7 +56,7 @@ traditional DB (Databases->Tables/Views->Columns). The metadata is accessible
 through the ANSI standard INFORMATION_SCHEMA database
 
 For more information on how to configure and work various data sources with
-Drill, refer to [Connect Apache Drill to Data Sources](/drill/docs/connect-to-data-sources).
+Drill, refer to [Connect Apache Drill to Data Sources](/docs/connect-to-data-sources).
 
 **_Extensibility_**
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/connect/006-reg-hive.md
----------------------------------------------------------------------
diff --git a/_docs/connect/006-reg-hive.md b/_docs/connect/006-reg-hive.md
index c3d2b1d..03a252a 100644
--- a/_docs/connect/006-reg-hive.md
+++ b/_docs/connect/006-reg-hive.md
@@ -46,7 +46,7 @@ To register a remote Hive metastore with Drill, complete the following steps:
   6. Verify that `HADOOP_CLASSPATH` is set in `drill-env.sh`. If you need to 
set the classpath, add the following line to `drill-env.sh`.
 
 Once you have configured a storage plugin instance for a Hive data source, you
-can [query Hive tables](/drill/docs/querying-hive/).
+can [query Hive tables](/docs/querying-hive/).
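
For orientation, once a Hive storage plugin instance is registered, a query against it looks roughly like the sketch below; `hive` is the plugin name used in these docs, while the table name is hypothetical:

    SELECT * FROM hive.`orders` LIMIT 10;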
 
 ## Hive Embedded Metastore
 
@@ -56,8 +56,7 @@ Web UI. Before you register Hive, verify that the driver you use to connect to
 the Hive metastore is in the Drill classpath located in `/<drill installation
 dirctory>/lib/.` If the driver is not there, copy the driver to `/<drill
 installation directory>/lib` on the Drill node. For more information about
-storage types and configurations, refer to [AdminManual
-MetastoreAdmin](/confluence/display/Hive/AdminManual+MetastoreAdmin).
+storage types and configurations, refer to ["Hive Metastore Administration"](https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin).
 
 To register an embedded Hive metastore with Drill, complete the following
 steps:

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/connect/007-default-frmt.md
----------------------------------------------------------------------
diff --git a/_docs/connect/007-default-frmt.md b/_docs/connect/007-default-frmt.md
index 3ab52db..31cfe29 100644
--- a/_docs/connect/007-default-frmt.md
+++ b/_docs/connect/007-default-frmt.md
@@ -30,7 +30,7 @@ Drill supports. Currently, Drill supports the following types:
 ## Defining a Default Input Format
 
 You define the default input format for a file system workspace through the
-Drill Web UI. You must have a [defined workspace](/drill/docs/workspaces) before you can define a
+Drill Web UI. You must have a [defined workspace](/docs/workspaces) before you can define a
 default input format.
 
 To define a default input format for a workspace, complete the following

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/connect/008-mongo-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect/008-mongo-plugin.md b/_docs/connect/008-mongo-plugin.md
index 5c5b33d..bf2efdf 100644
--- a/_docs/connect/008-mongo-plugin.md
+++ b/_docs/connect/008-mongo-plugin.md
@@ -26,7 +26,7 @@ Before you can query MongoDB with Drill, you must have Drill and MongoDB
 installed on your machine. You may also want to import the MongoDB zip code
 data to run the example queries on your machine.
 
-  1. [Install Drill](/drill/docs/installing-drill-in-embedded-mode), if you do not already have it installed on your machine.
+  1. [Install Drill](/docs/installing-drill-in-embedded-mode), if you do not already have it installed on your machine.
   2. [Install MongoDB](http://docs.mongodb.org/manual/installation), if you do 
not already have it installed on your machine.
   3. [Import the MongoDB zip code sample data 
set](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set). 
You can use Mongo Import to get the data. 
 
@@ -88,7 +88,7 @@ the `USE` command to change schema.
 The following example queries are included for reference. However, you can use
 the SQL power of Apache Drill directly on MongoDB. For more information about,
 refer to the [SQL
-Reference](/drill/docs/sql-reference).
+Reference](/docs/sql-reference).
 
 **Example 1: View mongo.zipdb Dataset**
 
@@ -164,4 +164,4 @@ Reference](/drill/docs/sql-reference).
 You can leverage the power of Apache Drill to query MongoDB through standard
 BI tools, such as Tableau and SQuirreL.
 
-For information about Drill ODBC and JDBC drivers, refer to [Drill Interfaces](/drill/docs/odbc-jdbc-interfaces).
+For information about Drill ODBC and JDBC drivers, refer to [Drill Interfaces](/docs/odbc-jdbc-interfaces).
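
For orientation, the kind of query this page builds up to looks roughly like the following sketch; the `mongo.zipdb` schema matches the example named above, but the `zips` collection and field names are assumptions based on the zip code sample data:

    SELECT city, state, pop
    FROM mongo.zipdb.`zips`
    WHERE state = 'CA'
    ORDER BY pop DESC
    LIMIT 5;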

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/contribute/001-guidelines.md
----------------------------------------------------------------------
diff --git a/_docs/contribute/001-guidelines.md b/_docs/contribute/001-guidelines.md
index 686d972..7361e5c 100644
--- a/_docs/contribute/001-guidelines.md
+++ b/_docs/contribute/001-guidelines.md
@@ -67,8 +67,7 @@ Setting up IDE formatters is recommended and can be done by importing the
 following settings into your browser:
 
 IntelliJ IDEA formatter: [settings
-jar](/confluence/download/attachments/30757399/idea-
-settings.jar?version=1&modificationDate=1363022308000&api=v2)
+jar](https://cwiki.apache.org/confluence/download/attachments/30757399/idea-settings.jar?version=1&modificationDate=1363022308000&api=v2)
 
 Eclipse: [formatter xml from HBase](https://issues.apache.org/jira/secure/atta
 chment/12474245/eclipse_formatter_apache.xml)
@@ -167,7 +166,7 @@ functions which need to be implemented can be found
 [here](https://docs.google.com/spreadsheet/ccc?key=0AgAGbQ6asvQ-
 dDRrUUxVSVlMVXRtanRoWk9JcHgteUE&usp=sharing#gid=0) (WIP).
 
-More contribution ideas are located on the [Contribution Ideas](/drill/docs/apache-drill-contribution-ideas) page.
+More contribution ideas are located on the [Contribution Ideas](/docs/apache-drill-contribution-ideas) page.
 
 ### Contributing your work
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/data-sources/002-hive-udf.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources/002-hive-udf.md b/_docs/data-sources/002-hive-udf.md
index 7c7a48c..ba82145 100644
--- a/_docs/data-sources/002-hive-udf.md
+++ b/_docs/data-sources/002-hive-udf.md
@@ -14,14 +14,14 @@ You create the JAR for a UDF to use in Drill in a conventional manner with a few
 3. Create an empty `drill-module.conf` in the resources directory in the Java 
project. 
 4. Export the logic to a JAR, including the `drill-module.conf` file in 
resources.
 
-The `drill-module.conf` file defines [startup options](/drill/docs/start-up-options/) and makes the JAR functions available to use in queries throughout the Hadoop cluster. After exporting the UDF logic to a JAR file, set up the UDF in Drill. Drill users can access the custom UDF for use in Hive queries.
+The `drill-module.conf` file defines [startup options](/docs/start-up-options/) and makes the JAR functions available to use in queries throughout the Hadoop cluster. After exporting the UDF logic to a JAR file, set up the UDF in Drill. Drill users can access the custom UDF for use in Hive queries.
 
 ## Setting Up a UDF
 After you export the custom UDF as a JAR, perform the UDF setup tasks so Drill 
can access the UDF. The JAR needs to be available at query execution time as a 
session resource, so Drill queries can refer to the UDF by its name.
  
 To set up the UDF:
 
-1. Register Hive. [Register a Hive storage plugin](/drill/docs/registering-hive/) that connects Drill to a Hive data source.
+1. Register Hive. [Register a Hive storage plugin](/docs/registering-hive/) that connects Drill to a Hive data source.
 2. In Drill 0.7 and later, add the JAR for the UDF to the Drill CLASSPATH. In 
earlier versions of Drill, place the JAR file in the `/jars/3rdparty` directory 
of the Drill installation on all nodes running a Drillbit.
 3. On each Drill node in the cluster, restart the Drillbit.
    `<drill installation directory>/bin/drillbit.sh restart`
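
After the restart, the custom UDF should be callable by name like any built-in function; a sketch with hypothetical function and table names:

    SELECT my_upper(name) AS name_upper
    FROM hive.`customers`
    LIMIT 5;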

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/data-sources/003-parquet-ref.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources/003-parquet-ref.md b/_docs/data-sources/003-parquet-ref.md
index f9b5924..6fee4a6 100644
--- a/_docs/data-sources/003-parquet-ref.md
+++ b/_docs/data-sources/003-parquet-ref.md
@@ -20,7 +20,7 @@ Apache Drill includes the following support for Parquet:
 
 ### Reading and Writing Parquet Files
 When a read of Parquet data occurs, Drill loads only the necessary columns of 
data, which reduces I/O. Reading only a small piece of the Parquet data from a 
data file or table, Drill can examine and analyze all values for a column 
across multiple files.
-Parquet is the default storage format for a [Create Table As Select (CTAS)](/drill/docs/create-table-as-ctas-command) command. You can create a Drill table from one format and store the data in another format, including Parquet.
+Parquet is the default storage format for a [Create Table As Select (CTAS)](/docs/create-table-as-ctas-command) command. You can create a Drill table from one format and store the data in another format, including Parquet.
 
 CTAS can use any data source provided by the storage plugin. 
 
@@ -53,7 +53,7 @@ To maximize performance, set the target size of a Parquet row group to the numbe
 The default block size is 536870912 bytes.
 
 ### Type Mapping
-The high correlation between Parquet and SQL data types makes reading Parquet files effortless in Drill. Writing to Parquet files takes more work than reading. Because SQL does not support all Parquet data types, to prevent Drill from inferring a type other than one you want, use the [cast function] (/drill/docs/sql-functions) Drill offers more liberal casting capabilities than SQL for Parquet conversions if the Parquet data is of a logical type. 
+The high correlation between Parquet and SQL data types makes reading Parquet files effortless in Drill. Writing to Parquet files takes more work than reading. Because SQL does not support all Parquet data types, to prevent Drill from inferring a type other than one you want, use the [cast function] (/docs/sql-functions) Drill offers more liberal casting capabilities than SQL for Parquet conversions if the Parquet data is of a logical type. 
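
As a rough sketch of a CTAS that casts while writing (Parquet is already the default output format, as noted above; the `dfs.tmp` workspace, file path, and column names are assumptions):

    CREATE TABLE dfs.tmp.`sales_parquet` AS
    SELECT CAST(trans_id AS INT)    AS trans_id,
           CAST(amount   AS DOUBLE) AS amount
    FROM dfs.`/tmp/sales.json`;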
 
 The following general process converts a file from JSON to Parquet:
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/data-sources/004-json-ref.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources/004-json-ref.md b/_docs/data-sources/004-json-ref.md
index db9e671..d60db06 100644
--- a/_docs/data-sources/004-json-ref.md
+++ b/_docs/data-sources/004-json-ref.md
@@ -69,12 +69,12 @@ Use all text mode to prevent the schema change error described in the previous s
 
 When you set this option, Drill reads all data from the JSON files as VARCHAR. 
After reading the data, use a SELECT statement in Drill to cast data as follows:
 
-* Cast [JSON numeric values](/drill/docs/lession-2-run-queries-with-ansi-sql#return-customer-data-with-appropriate-data-types) to SQL types, such as BIGINT, DECIMAL, FLOAT, INTEGER, and SMALLINT.
-* Cast JSON strings to [Drill Date/Time Data Type Formats](/drill/docs/supported-date-time-data-type-formats).
+* Cast [JSON numeric values](/docs/lession-2-run-queries-with-ansi-sql#return-customer-data-with-appropriate-data-types) to SQL types, such as BIGINT, DECIMAL, FLOAT, INTEGER, and SMALLINT.
+* Cast JSON strings to [Drill Date/Time Data Type Formats](/docs/supported-date-time-data-type-formats).
 
 For example, apply a [Drill view] (link to view reference) to the data. 
 
-Drill uses [map and array data types](/drill/docs/data-types) internally for reading and writing complex and nested data structures from JSON. <<true?>>
+Drill uses [map and array data types](/docs/data-types) internally for reading and writing complex and nested data structures from JSON. <<true?>>
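
Tying the two cast bullets above together, a minimal all-text-mode sketch (assuming the option name `store.json.all_text_mode`; the file path and column names are hypothetical):

    ALTER SESSION SET `store.json.all_text_mode` = true;
    SELECT CAST(pop AS BIGINT)            AS pop,
           TO_DATE(updated, 'yyyy-MM-dd') AS updated
    FROM dfs.`/tmp/cities.json`;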
 
 ## Reading JSON
 To read JSON data using Drill, use a [file system storage plugin](link to 
plugin section) that defines the JSON format. You can use the `dfs` storage 
plugin, which includes the definition. 
@@ -118,7 +118,7 @@ You can write data from Drill to a JSON file. The following setup is required:
         CREATE TABLE my_json AS
         SELECT my column from dfs.`<path_file_name>`;
 
-Drill performs the following actions, as shown in the complete [CTAS command example](/drill/docs/create-table-as-ctas-command):
+Drill performs the following actions, as shown in the complete [CTAS command example](/docs/create-table-as-ctas-command):
    
 * Creates a directory using table name.
 * Writes the JSON data to the directory in the workspace location.
@@ -283,7 +283,7 @@ To access the second geometry coordinate of the first city lot in the San Franci
                +------------+
                1 row selected (0.19 seconds)
 
-More examples of drilling down into an array are shown in ["Selecting Nested Data for a Column"](/drill/docs/query-3-selecting-nested-data-for-a-column). 
+More examples of drilling down into an array are shown in ["Selecting Nested Data for a Column"](/docs/query-3-selecting-nested-data-for-a-column). 
 
 ### Example: Analyze Map Fields in a Map
 This example uses a WHERE clause to drill down to a third level of the 
following JSON hierarchy to get the Id and weight of the person whose max_hdl 
exceeds 160, use dot notation as shown in the query that follows:

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/dev-custom-fcn/001-dev-simple.md
----------------------------------------------------------------------
diff --git a/_docs/dev-custom-fcn/001-dev-simple.md b/_docs/dev-custom-fcn/001-dev-simple.md
index ebf3831..1528bd9 100644
--- a/_docs/dev-custom-fcn/001-dev-simple.md
+++ b/_docs/dev-custom-fcn/001-dev-simple.md
@@ -5,7 +5,7 @@ parent: "Develop Custom Functions"
 Create a class within a Java package that implements Drill’s simple interface
 into the program, and include the required information for the function type.
 Your function must include data types that Drill supports, such as int or
-BigInt. For a list of supported data types, refer to the [SQL Reference](/drill/docs/sql-reference).
+BigInt. For a list of supported data types, refer to the [SQL Reference](/docs/sql-reference).
 
 Complete the following steps to develop a simple function using Drill’s 
simple
 function interface:

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/dev-custom-fcn/002-dev-aggregate.md
----------------------------------------------------------------------
diff --git a/_docs/dev-custom-fcn/002-dev-aggregate.md b/_docs/dev-custom-fcn/002-dev-aggregate.md
index 4fd14d7..5f58da9 100644
--- a/_docs/dev-custom-fcn/002-dev-aggregate.md
+++ b/_docs/dev-custom-fcn/002-dev-aggregate.md
@@ -5,7 +5,7 @@ parent: "Develop Custom Functions"
 Create a class within a Java package that implements Drill’s aggregate
 interface into the program. Include the required information for the function.
 Your function must include data types that Drill supports, such as int or
-BigInt. For a list of supported data types, refer to the [SQL Reference](/drill/docs/sql-reference/).
+BigInt. For a list of supported data types, refer to the [SQL Reference](/docs/sql-reference/).
 
 Complete the following steps to create an aggregate function:
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/develop/001-compile.md
----------------------------------------------------------------------
diff --git a/_docs/develop/001-compile.md b/_docs/develop/001-compile.md
index 2cf6ac9..dea42e9 100644
--- a/_docs/develop/001-compile.md
+++ b/_docs/develop/001-compile.md
@@ -30,8 +30,8 @@ Maven and JDK installed:
 Now that you have Drill installed, you can connect to Drill and query sample
 data or you can connect Drill to your data sources.
 
-  * To connect Drill to your data sources, refer to [Connect to Data Sources](/drill/docs/connect-to-data-sources) for instructions.
+  * To connect Drill to your data sources, refer to [Connect to Data Sources](/docs/connect-to-data-sources) for instructions.
   * To connect to Drill and query sample data, refer to the following topics:
-    * [Start Drill ](/drill/docs/starting-stopping-drill)(For Drill installed in embedded mode)
-    * [Query Data ](/drill/docs/query-data)
+    * [Start Drill ](/docs/starting-stopping-drill)(For Drill installed in embedded mode)
+    * [Query Data ](/docs/query-data)
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/develop/003-patch-tool.md
----------------------------------------------------------------------
diff --git a/_docs/develop/003-patch-tool.md b/_docs/develop/003-patch-tool.md
index 3ef3fe5..28c8a54 100644
--- a/_docs/develop/003-patch-tool.md
+++ b/_docs/develop/003-patch-tool.md
@@ -21,8 +21,8 @@ parent: "Develop Drill"
 
 #### 1\. Setup
 
-  1. Follow instructions [here](/drill/docs/drill-patch-review-tool#jira-command-line-tool) to setup the jira-python package
-  2. Follow instructions [here](/drill/docs/drill-patch-review-tool#reviewboard) to setup the reviewboard python tools
+  1. Follow instructions [here](/docs/drill-patch-review-tool#jira-command-line-tool) to setup the jira-python package
+  2. Follow instructions [here](/docs/drill-patch-review-tool#reviewboard) to setup the reviewboard python tools
   3. Install the argparse module 
   
         On Linux -> sudo yum install python-argparse

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/install/001-drill-in-10.md
----------------------------------------------------------------------
diff --git a/_docs/install/001-drill-in-10.md b/_docs/install/001-drill-in-10.md
index 3f74859..37b8bd0 100644
--- a/_docs/install/001-drill-in-10.md
+++ b/_docs/install/001-drill-in-10.md
@@ -120,7 +120,7 @@ Complete the following steps to install Drill:
   
         cd /opt/drill/apache-drill-<version>
 
-At this point, you can [start Drill](/drill/docs/apache-drill-in-10-minutes#start-drill).
+At this point, you can [start Drill](/docs/apache-drill-in-10-minutes#start-drill).
 
 ### Installing Drill on Mac OS X
 
@@ -144,7 +144,7 @@ Complete the following steps to install Drill:
   
         cd /Users/max/drill/apache-drill-<version>
 
-At this point, you can [start Drill](/drill/docs/apache-drill-in-10-minutes/#start-drill).
+At this point, you can [start Drill](/docs/apache-drill-in-10-minutes/#start-drill).
 
 ### Installing Drill on Windows
 
@@ -193,7 +193,7 @@ directory path, Drill fails to run.
      2. When prompted, enter the password `admin` and then press Enter. The 
cursor blinks for a few seconds and then `0: jdbc:drill:zk=local>` displays in 
the prompt.
 
 At this point, you can submit queries to Drill. Refer to the [Query Sample Dat
-a](/drill/docs/apache-drill-in-10-minutes#query-sample-data) section of this document.
+a](/docs/apache-drill-in-10-minutes#query-sample-data) section of this document.
 
 ## Start Drill
 
@@ -215,7 +215,7 @@ Example: `~/apache-drill-<version>`
 also starts a local Drillbit. If you are connecting to an Apache Drill
 cluster, the value of `zk=` would be a list of Zookeeper quorum nodes. For
 more information about how to run Drill in clustered mode, go to [Deploying
-Apache Drill in a Clustered Environment](/drill/docs/deploying-apache-drill-in-a-clustered-environment).
+Apache Drill in a Clustered Environment](/docs/deploying-apache-drill-in-a-clustered-environment).
 
 When SQLLine starts, the system displays the following prompt:  
 `0: jdbc:drill:zk=local>`
@@ -231,7 +231,7 @@ Your Drill installation includes a `sample-date` directory with JSON and
 Parquet files that you can query. The local file system on your machine is
 configured as the `dfs` storage plugin instance by default when you install
 Drill in embedded mode. For more information about storage plugin
-configuration, refer to [Storage Plugin Registration](/drill/docs/connect-to-data-sources).
+configuration, refer to [Storage Plugin Registration](/docs/connect-to-data-sources).
 
 Use SQL syntax to query the sample `JSON` and `Parquet` files in the `sample-
 data` directory on your local file system.
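
A sketch of such a query against the bundled sample files, assuming the default `dfs` plugin and the `region.parquet` file name (adjust the path to your install directory):

    SELECT * FROM dfs.`/opt/drill/apache-drill-<version>/sample-data/region.parquet`;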
@@ -353,11 +353,11 @@ following tasks:
 
 Now that you have an idea about what Drill can do, you might want to:
 
-  * [Deploy Drill in a clustered environment.](/drill/docs/deploying-apache-drill-in-a-clustered-environment)
-  * [Configure storage plugins to connect Drill to your data sources](/drill/docs/connect-to-data-sources).
-  * Query [Hive](/drill/docs/querying-hive) and [HBase](/docs/hbase-storage-plugin) data.
-  * [Query Complex Data](/drill/docs/querying-complex-data)
-  * [Query Plain Text Files](/drill/docs/querying-plain-text-files)
+  * [Deploy Drill in a clustered environment.](/docs/deploying-apache-drill-in-a-clustered-environment)
+  * [Configure storage plugins to connect Drill to your data sources](/docs/connect-to-data-sources).
+  * Query [Hive](/docs/querying-hive) and [HBase](/docs/hbase-storage-plugin) data.
+  * [Query Complex Data](/docs/querying-complex-data)
+  * [Query Plain Text Files](/docs/querying-plain-text-files)
 
 ## More Information
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/install/002-deploy.md
----------------------------------------------------------------------
diff --git a/_docs/install/002-deploy.md b/_docs/install/002-deploy.md
index eecd3bc..399414e 100644
--- a/_docs/install/002-deploy.md
+++ b/_docs/install/002-deploy.md
@@ -52,7 +52,7 @@ Complete the following steps to install Drill on designated nodes:
 ### Connecting Drill to Data Sources
 
 You can connect Drill to various types of data sources. Refer to [Connect
-Apache Drill to Data Sources](/drill/docs/connect-to-data-sources) to get configuration instructions for the
+Apache Drill to Data Sources](/docs/connect-to-data-sources) to get configuration instructions for the
 particular type of data source that you want to connect to Drill.
 
 ### Starting Drill
@@ -86,4 +86,4 @@ Drill provides a list of Drillbits that have joined.
 **Example**
 
 Now you can query data with Drill. The Drill installation includes sample data
-that you can query. Refer to [Query Sample Data](/drill/docs/sample-datasets).
\ No newline at end of file
+that you can query. Refer to [Query Sample Data](/docs/sample-datasets).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/install/004-install-distributed.md
----------------------------------------------------------------------
diff --git a/_docs/install/004-install-distributed.md b/_docs/install/004-install-distributed.md
index d0f07aa..a47176f 100644
--- a/_docs/install/004-install-distributed.md
+++ b/_docs/install/004-install-distributed.md
@@ -51,5 +51,5 @@ Complete the following steps to install Drill on designated nodes:
          }
 
 You can connect Drill to various types of data sources. Refer to [Connect
-Apache Drill to Data Sources](/drill/docs/connect-to-data-sources) to get configuration instructions for the
+Apache Drill to Data Sources](/docs/connect-to-data-sources) to get configuration instructions for the
 particular type of data source that you want to connect to Drill.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/install/install-embedded/001-install-linux.md
----------------------------------------------------------------------
diff --git a/_docs/install/install-embedded/001-install-linux.md b/_docs/install/install-embedded/001-install-linux.md
index b7a0c85..589fa0f 100644
--- a/_docs/install/install-embedded/001-install-linux.md
+++ b/_docs/install/install-embedded/001-install-linux.md
@@ -19,4 +19,4 @@ Linux:
 
         cd /opt/drill/apache-drill-<version>
 At this point, you can [invoke
-SQLLine](/drill/docs/starting-stopping-drill) to run Drill.
\ No newline at end of file
+SQLLine](/docs/starting-stopping-drill) to run Drill.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/install/install-embedded/002-install-mac.md
----------------------------------------------------------------------
diff --git a/_docs/install/install-embedded/002-install-mac.md b/_docs/install/install-embedded/002-install-mac.md
index a288b94..97ae775 100644
--- a/_docs/install/install-embedded/002-install-mac.md
+++ b/_docs/install/install-embedded/002-install-mac.md
@@ -23,7 +23,7 @@ OS X:
   
         cd /Users/max/drill/apache-drill-<version>
 
-At this point, you can [invoke SQLLine](/drill/docs/starting-stopping-drill) to
+At this point, you can [invoke SQLLine](/docs/starting-stopping-drill) to
 run Drill.
 
 <!--The title is too complicated for me to figure out how to create a link to 
it.-->
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/install/install-embedded/003-install-win.md
----------------------------------------------------------------------
diff --git a/_docs/install/install-embedded/003-install-win.md b/_docs/install/install-embedded/003-install-win.md
index 6680019..6c8272b 100644
--- a/_docs/install/install-embedded/003-install-win.md
+++ b/_docs/install/install-embedded/003-install-win.md
@@ -48,4 +48,4 @@ directory path, Drill fails to run.
      2. When prompted, enter the password `admin` and then press Enter. The 
cursor blinks for a few seconds and then `0: jdbc:drill:zk=local>` displays in 
the prompt.
 
 At this point, you can submit queries to Drill. Refer to the [Query Sample Dat
-a](/drill/docs/apache-drill-in-10-minutes#query-sampledata) section of this document.
\ No newline at end of file
+a](/docs/apache-drill-in-10-minutes#query-sampledata) section of this document.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/001-odbc-win.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/001-odbc-win.md b/_docs/interfaces/001-odbc-win.md
index 86a4167..5105048 100644
--- a/_docs/interfaces/001-odbc-win.md
+++ b/_docs/interfaces/001-odbc-win.md
@@ -17,13 +17,13 @@ that is self-describing, such as HBase, Parquet, JSON, CSV, and TSV.
 Complete the following steps to connect to a Drill data source from a BI tool
 using ODBC:
 
-  * [Step 1. Install the MapR Drill ODBC Driver](/drill/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows)
-  * [Step 2. Configure ODBC Connections to Drill Data Sources](/drill/docs/step-2-configure-odbc-connections-to-drill-data-sources)
-  * [Step 3. Connect to Drill Data Sources from a BI Tool](/drill/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool)
+  * [Step 1. Install the MapR Drill ODBC Driver](/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows)
+  * [Step 2. Configure ODBC Connections to Drill Data Sources](/docs/step-2-configure-odbc-connections-to-drill-data-sources)
+  * [Step 3. Connect to Drill Data Sources from a BI Tool](/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool)
 
 For examples of how you can use the MapR Drill ODBC Driver to connect to Drill
 Data Sources from BI tools, see [Step 3. Connect to Drill Data Sources from a
-BI Tool](/drill/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool). While the documentation includes examples for Tableau, you can use
+BI Tool](/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool). While the documentation includes examples for Tableau, you can use
 this driver with any BI tool that works with ODBC, such as Excel,
 MicroStrategy, and Toad.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/003-jdbc-squirrel.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/003-jdbc-squirrel.md b/_docs/interfaces/003-jdbc-squirrel.md
index 99eba80..0fd14c0 100644
--- a/_docs/interfaces/003-jdbc-squirrel.md
+++ b/_docs/interfaces/003-jdbc-squirrel.md
@@ -6,7 +6,7 @@ To use the JDBC Driver to access Drill through Squirrel, ensure that you meet th
 ### Prerequisites
 
   * SQuirreL requires JRE 7
-  * Drill installed in distributed mode on one or multiple nodes in a cluster. Refer to the [Install Drill](/drill/docs/install-drill/) documentation for more information.
+  * Drill installed in distributed mode on one or multiple nodes in a cluster. Refer to the [Install Drill](/docs/install-drill/) documentation for more information.
   * The client must be able to resolve the actual hostname of the Drill 
node(s) with the IP(s). Verify that a DNS entry was created on the client 
machine for the Drill node(s).
      
 If a DNS entry does not exist, create the entry for the Drill node(s).

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/odbc-linux/001-install-odbc-linux.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/odbc-linux/001-install-odbc-linux.md b/_docs/interfaces/odbc-linux/001-install-odbc-linux.md
index 3ae1930..61644d8 100644
--- a/_docs/interfaces/odbc-linux/001-install-odbc-linux.md
+++ b/_docs/interfaces/odbc-linux/001-install-odbc-linux.md
@@ -15,7 +15,7 @@ To install the MapR Drill ODBC Driver, complete the following steps:
   * Step 3: Setting the LD_LIBRARY_PATH Environment Variable
 
 After you complete the installation steps, complete the steps listed in
-[Configuring ODBC Connections for Linux and Mac OS X](/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x).
+[Configuring ODBC Connections for Linux and Mac OS X](/docs/configuring-odbc-connections-for-linux-and-mac-os-x).
 
 Verify that your system meets the system requirements before you start.
 
@@ -101,5 +101,5 @@ variables permanently.
 #### Next Step
 
 Complete the steps listed in [Configuring ODBC Connections for Linux and Mac
-OS X](/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x).
+OS X](/docs/configuring-odbc-connections-for-linux-and-mac-os-x).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/odbc-linux/002-install-odbc-mac.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/odbc-linux/002-install-odbc-mac.md b/_docs/interfaces/odbc-linux/002-install-odbc-mac.md
index 65a35f3..77b8e1b 100644
--- a/_docs/interfaces/odbc-linux/002-install-odbc-mac.md
+++ b/_docs/interfaces/odbc-linux/002-install-odbc-mac.md
@@ -12,7 +12,7 @@ To install the MapR Drill ODBC Driver, complete the following steps:
   * Step 3: Updating the DYLD_LIBRARY_PATH Environment Variable
 
 After you complete the installation steps, complete the steps listed in
-[Configuring ODBC Connections for Linux and Mac OS X](/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x)
+[Configuring ODBC Connections for Linux and Mac OS X](/docs/configuring-odbc-connections-for-linux-and-mac-os-x)
 .
 
 Verify that your system meets the following prerequisites before you start.
@@ -67,4 +67,4 @@ c/lib/universal`
 #### Next Step
 
 Complete the steps listed in [Configuring ODBC Connections for Linux and Mac
-OS X](/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x).
+OS X](/docs/configuring-odbc-connections-for-linux-and-mac-os-x).

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/odbc-linux/003-odbc-connections-linux.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/odbc-linux/003-odbc-connections-linux.md b/_docs/interfaces/odbc-linux/003-odbc-connections-linux.md
index 11b660d..f6f276e 100644
--- a/_docs/interfaces/odbc-linux/003-odbc-connections-linux.md
+++ b/_docs/interfaces/odbc-linux/003-odbc-connections-linux.md
@@ -27,7 +27,7 @@ steps:
   * Step 4: Configure the MapR Drill ODBC Driver
 
 Once you have completed the required steps, refer to [Testing the ODBC
-Connection on Linux and Mac OS X](/drill/docs/testing-the-odbc-connection-on-linux-and-mac-os-x).
+Connection on Linux and Mac OS X](/docs/testing-the-odbc-connection-on-linux-and-mac-os-x).
 
 #### Sample Configuration Files
 
@@ -114,7 +114,7 @@ following steps:
 For details on the configuration options available for controlling the
 behavior of DSNs using Simba ODBC Driver for Apache Drill, see [Driver
 Configuration
-Options](/drill/docs/driver-configuration-options).
+Options](/docs/driver-configuration-options).
 
 ## Step 3: (Optional) Define the ODBC Driver in `odbcinst.ini`
 
@@ -174,5 +174,5 @@ named `DYLD_LIBRARY_PATH`.
 
 ### Next Step
 
-Refer to [Testing the ODBC Connection on Linux and Mac OS X](/drill/docs/testing-the-odbc-connection-on-linux-and-mac-os-x).
+Refer to [Testing the ODBC Connection on Linux and Mac OS X](/docs/testing-the-odbc-connection-on-linux-and-mac-os-x).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/odbc-linux/005-odbc-connect-str.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/odbc-linux/005-odbc-connect-str.md b/_docs/interfaces/odbc-linux/005-odbc-connect-str.md
index 595432b..70f3858 100644
--- a/_docs/interfaces/odbc-linux/005-odbc-connect-str.md
+++ b/_docs/interfaces/odbc-linux/005-odbc-connect-str.md
@@ -5,7 +5,7 @@ parent: "Using the MapR ODBC Driver on Linux and Mac OS X"
 You can use a connection string to connect to your data source. For a list of
 all the properties that you can use in connection strings, see [Driver
 Configuration
-Options](/drill/docs/driver-configuration-options).
+Options](/docs/driver-configuration-options).
 
 The following example shows a connection string for connecting directly to a
 Drillbit:

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/odbc-win/001-install-odbc-win.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/odbc-win/001-install-odbc-win.md b/_docs/interfaces/odbc-win/001-install-odbc-win.md
index 5bb6c8d..7ff770d 100644
--- a/_docs/interfaces/odbc-win/001-install-odbc-win.md
+++ b/_docs/interfaces/odbc-win/001-install-odbc-win.md
@@ -54,5 +54,5 @@ driver.
   2. When the installation completes, press any key to continue.   
 For example, you can press the SPACEBAR key.
 
-#### What's Next? Go to [Step 2. Configure ODBC Connections to Drill Data 
Sources](/drill/docs/step-2-configure-odbc-connections-to-drill-data-sources).
+#### What's Next? Go to [Step 2. Configure ODBC Connections to Drill Data 
Sources](/docs/step-2-configure-odbc-connections-to-drill-data-sources).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/odbc-win/002-conf-odbc-win.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/odbc-win/002-conf-odbc-win.md 
b/_docs/interfaces/odbc-win/002-conf-odbc-win.md
index 636bd9f..5fe40b2 100644
--- a/_docs/interfaces/odbc-win/002-conf-odbc-win.md
+++ b/_docs/interfaces/odbc-win/002-conf-odbc-win.md
@@ -8,7 +8,7 @@ sources:
   * Create a Data Source Name
   * Create an ODBC Connection String
 
-**Prerequisite:** An Apache Drill installation must be available that is 
configured to access the data sources that you want to connect to.  For 
information about how to install Apache Drill, see [Install 
Drill](/drill/docs/install-drill). For information about configuring data 
sources, see the [Apache Drill documentation](/drill/docs).
+**Prerequisite:** An Apache Drill installation must be available that is 
configured to access the data sources that you want to connect to.  For 
information about how to install Apache Drill, see [Install 
Drill](/docs/install-drill). For information about configuring data sources, 
see the [Apache Drill documentation](/docs).
 
 ## Create a Data Source Name (DSN)
 
@@ -31,12 +31,12 @@ The ODBC Data Source Administrator window appears.
 
      <table style='table-layout:fixed;width:100%'><tbody><tr><th>Connection 
Type</th><th >Properties</th><th >Descriptions</th></tr><tr><td rowspan="2" 
valign="top" width="10%">Zookeeper Quorum</td><td valign="top" style='width: 
100px;'>Quorum</td><td valign="top" style='width: 400px;'>A comma-separated 
list of servers in a Zookeeper cluster.For example, 
&lt;ip_zookeepernode1&gt;:5181,&lt;ip_zookeepernode21&gt;:5181,…</td></tr><tr><td
 valign="top">ClusterID</td><td valign="top">Name of the drillbit cluster. The 
default is drillbits1. You may need to specify a different value if the cluster 
ID was changed in the drill-override.conf file.</td></tr><tr><td colspan="1" 
valign="top">Direct to Drillbit</td><td colspan="1" valign="top"> </td><td 
colspan="1" valign="top">Provide the IP address or host name of the Drill 
server and the port number that that the Drill server is listening on.  The 
port number defaults to 31010. You may need to specify a different value if the 
port number was 
 changed in the drill-override.conf file.</td></tr></tbody></table>
      For information on selecting the appropriate connection type, see 
[Connection
-Types](/drill/docs/step-2-configure-odbc-connections-to-drill-data-sources#connection-type).
+Types](/docs/step-2-configure-odbc-connections-to-drill-data-sources#connection-type).
   8. In the **Default Schema** field, select the default schema that you want 
to connect to.
      For more information about the schemas that appear in this list, see 
Schemas.
   9. Optionally, perform one of the following operations:
 
-     <table ><tbody><tr><th >Option</th><th >Action</th></tr><tr><td 
valign="top">Update the configuration of the advanced properties.</td><td 
valign="top">Edit the default values in the <strong>Advanced 
Properties</strong> section. <br />For more information, see <a 
href="/drill/docs/advanced-properties/">Advanced 
Properties</a>.</td></tr><tr><td valign="top">Configure the types of events 
that you want the driver to log.</td><td valign="top">Click <strong>Logging 
Options</strong>. <br />For more information, see <a 
href="/drill/docs/step-2-configure-odbc-connections-to-drill-data-sources#logging-options">Logging
 Options</a>.</td></tr><tr><td valign="top">Create views or explore Drill 
sources.</td><td valign="top">Click <strong>Drill Explorer</strong>. <br />For 
more information, see <a 
href="/drill/docs/using-drill-explorer-to-browse-data-and-create-views">Using 
Drill Explorer to Browse Data and Create Views</a>.</td></tr></tbody></table>
+     <table ><tbody><tr><th >Option</th><th >Action</th></tr><tr><td 
valign="top">Update the configuration of the advanced properties.</td><td 
valign="top">Edit the default values in the <strong>Advanced 
Properties</strong> section. <br />For more information, see <a 
href="/docs/advanced-properties/">Advanced Properties</a>.</td></tr><tr><td 
valign="top">Configure the types of events that you want the driver to 
log.</td><td valign="top">Click <strong>Logging Options</strong>. <br />For 
more information, see <a 
href="/docs/step-2-configure-odbc-connections-to-drill-data-sources#logging-options">Logging
 Options</a>.</td></tr><tr><td valign="top">Create views or explore Drill 
sources.</td><td valign="top">Click <strong>Drill Explorer</strong>. <br />For 
more information, see <a 
href="/docs/using-drill-explorer-to-browse-data-and-create-views">Using Drill 
Explorer to Browse Data and Create Views</a>.</td></tr></tbody></table>
   10. Click **OK** to save the DSN.
 
 ## Configuration Options
@@ -139,5 +139,5 @@ type:
 
         DRIVER=MapR Drill ODBC 
Driver;AdvancedProperties={HandshakeTimeout=0;QueryTimeout=0;TimestampTZDisplayTimezone=utc;ExcludedSchemas=sys,
 
INFORMATION_SCHEMA;};Catalog=DRILL;Schema=;ConnectionType=ZooKeeper;ZKQuorum=192.168.39.43:5181;ZKClusterID=drillbits1
 
-#### What's Next? Go to [Step 3. Connect to Drill Data Sources from a BI 
Tool](/drill/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool).
+#### What's Next? Go to [Step 3. Connect to Drill Data Sources from a BI 
Tool](/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/odbc-win/003-connect-odbc-win.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/odbc-win/003-connect-odbc-win.md 
b/_docs/interfaces/odbc-win/003-connect-odbc-win.md
index d60b294..3a887c3 100644
--- a/_docs/interfaces/odbc-win/003-connect-odbc-win.md
+++ b/_docs/interfaces/odbc-win/003-connect-odbc-win.md
@@ -9,7 +9,7 @@ Examples of self-describing data include HBase, Parquet, JSON, 
CSV,and TSV.
 In some cases, you may want to use Drill Explorer to explore that data or to
 create a view before you connect to the data from a BI tool. For more
 information about Drill Explorer, see [Using Drill Explorer to Browse Data and
-Create 
Views](/drill/docs/using-drill-explorer-to-browse-data-and-create-views).
+Create Views](/docs/using-drill-explorer-to-browse-data-and-create-views).
 
 In an ODBC-compliant BI tool, use the ODBC DSN to create an ODBC connection
 with one of the methods applicable to the data source type:
@@ -17,7 +17,7 @@ with one of the methods applicable to the data source type:
 <table ><tbody><tr><th >Data Source Type</th><th >ODBC Connection 
Method</th></tr><tr><td valign="top">Hive</td><td valign="top">Connect to a 
table.<br />Connect to the table using custom SQL.<br />Use Drill Explorer to 
create a view. Then use ODBC to connect to the view as if it were a 
table.</td></tr><tr><td valign="top">HBase<br /><span style="line-height: 
1.4285715;background-color: transparent;">Parquet<br /></span><span 
style="line-height: 1.4285715;background-color: transparent;">JSON<br 
/></span><span style="line-height: 1.4285715;background-color: 
transparent;">CSV<br /></span><span style="line-height: 
1.4285715;background-color: transparent;">TSV</span></td><td valign="top">Use 
Drill Explorer to create a view. Then use ODBC to connect to the view as if it 
were a table.<br />Connect to the data using custom 
SQL.</td></tr></tbody></table>
   
 For examples of how to connect to Drill data sources from a BI tool, see the
-[Step 3. Connect to Drill Data Sources from a BI 
Tool](/drill/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool).
+[Step 3. Connect to Drill Data Sources from a BI 
Tool](/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool).
 
 **Note:** The default schema that you configure in the DSN may or may not 
carry over to an application’s data source connections. You may need to 
re-select the schema.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/interfaces/odbc-win/004-tableau-examples.md
----------------------------------------------------------------------
diff --git a/_docs/interfaces/odbc-win/004-tableau-examples.md 
b/_docs/interfaces/odbc-win/004-tableau-examples.md
index f25f50d..e543d63 100644
--- a/_docs/interfaces/odbc-win/004-tableau-examples.md
+++ b/_docs/interfaces/odbc-win/004-tableau-examples.md
@@ -12,7 +12,7 @@ This section includes the following examples:
   * Using custom SQL to connect to data in a Parquet file
 The steps and results of these examples assume pre-configured schemas and
 source data. You configure schemas as storage plugin instances on the Storage
-tab of the [Drill Web 
UI](/drill/docs/getting-to-know-the-drill-sandbox#storage-plugins-overview).
+tab of the [Drill Web 
UI](/docs/getting-to-know-the-drill-sandbox#storage-plugins-overview).
 
 ## Example: Connect to a Hive Table in Tableau
 
@@ -125,7 +125,7 @@ HBase table.
 
      HBase does not contain type information, so you need to cast the data in 
Drill
 Explorer. For information about SQL query support, see the SQL
-Reference in the [Apache Drill Wiki documentation](/drill/docs/sql-reference).
+Reference in the [Apache Drill Wiki documentation](/docs/sql-reference).
   9. To save the view, click **Create As**.
   10. Specify the schema where you want to save the view, enter a name for the 
view, and click **Save**.  
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/manage/conf/001-mem-alloc.md
----------------------------------------------------------------------
diff --git a/_docs/manage/conf/001-mem-alloc.md 
b/_docs/manage/conf/001-mem-alloc.md
index 8f98cfc..4caf563 100644
--- a/_docs/manage/conf/001-mem-alloc.md
+++ b/_docs/manage/conf/001-mem-alloc.md
@@ -27,5 +27,5 @@ env.sh`.
 
 After you edit `<drill_installation_directory>/conf/drill-env.sh`, [restart
 the Drillbit
-](/drill/docs/starting-stopping-drill#starting-a-drillbit)on
+](/docs/starting-stopping-drill#starting-a-drillbit)on
 the node.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/manage/conf/002-startup-opt.md
----------------------------------------------------------------------
diff --git a/_docs/manage/conf/002-startup-opt.md 
b/_docs/manage/conf/002-startup-opt.md
index 9db8b45..d1766fb 100644
--- a/_docs/manage/conf/002-startup-opt.md
+++ b/_docs/manage/conf/002-startup-opt.md
@@ -46,5 +46,5 @@ override.conf` file located in Drill’s` /conf` directory.
 You may want to configure the following start-up options that control certain
 behaviors in Drill:
 
-<table ><tbody><tr><th >Option</th><th >Default Value</th><th 
>Description</th></tr><tr><td valign="top" 
>drill.exec.sys.store.provider</td><td valign="top" >ZooKeeper</td><td 
valign="top" >Defines the persistent storage (PStore) provider. The PStore 
holds configuration and profile data. For more information about PStores, see 
<a href="/drill/docs/persistent-configuration-storage" 
rel="nofollow">Persistent Configuration Storage</a>.</td></tr><tr><td 
valign="top" >drill.exec.buffer.size</td><td valign="top" > </td><td 
valign="top" >Defines the amount of memory available, in terms of record 
batches, to hold data on the downstream side of an operation. Drill pushes data 
downstream as quickly as possible to make data immediately available. This 
requires Drill to use memory to hold the data pending operations. When data on 
a downstream operation is required, that data is immediately available so Drill 
does not have to go over the network to process it. Providing more memory to 
this optio
 n increases the speed at which Drill completes a query.</td></tr><tr><td 
valign="top" 
>drill.exec.sort.external.directoriesdrill.exec.sort.external.fs</td><td 
valign="top" > </td><td valign="top" >These options control spooling. The 
drill.exec.sort.external.directories option tells Drill which directory to use 
when spooling. The drill.exec.sort.external.fs option tells Drill which file 
system to use when spooling beyond memory files. <span style="line-height: 
1.4285715;background-color: transparent;"> </span>Drill uses a spool and sort 
operation for beyond memory operations. The sorting operation is designed to 
spool to a Hadoop file system. The default Hadoop file system is a local file 
system in the /tmp directory. Spooling performance (both writing and reading 
back from it) is constrained by the file system. <span style="line-height: 
1.4285715;background-color: transparent;"> </span>For MapR clusters, use 
MapReduce volumes or set up local volumes to use for spooling purposes. Vol
 umes improve performance and stripe data across as many disks as 
possible.</td></tr><tr><td valign="top" colspan="1" 
>drill.exec.debug.error_on_leak</td><td valign="top" colspan="1" >True</td><td 
valign="top" colspan="1" >Determines how Drill behaves when memory leaks occur 
during a query. By default, this option is enabled so that queries fail when 
memory leaks occur. If you disable the option, Drill issues a warning when a 
memory leak occurs and completes the query.</td></tr><tr><td valign="top" 
colspan="1" >drill.exec.zk.connect</td><td valign="top" colspan="1" 
>localhost:2181</td><td valign="top" colspan="1" >Provides Drill with the 
ZooKeeper quorum to use to connect to data sources. Change this setting to 
point to the ZooKeeper quorum that you want Drill to use. You must configure 
this option on each Drillbit node.</td></tr><tr><td valign="top" colspan="1" 
>drill.exec.cluster-id</td><td valign="top" colspan="1" 
>my_drillbit_cluster</td><td valign="top" colspan="1" >Identifies t
 he cluster that corresponds with the ZooKeeper quorum indicated. It also 
provides Drill with the name of the cluster used during UDP multicast. You must 
change the default cluster-id if there are multiple clusters on the same 
subnet. If you do not change the ID, the clusters will try to connect to each 
other to create one cluster.</td></tr></tbody></table></div>
+<table ><tbody><tr><th >Option</th><th >Default Value</th><th 
>Description</th></tr><tr><td valign="top" 
>drill.exec.sys.store.provider</td><td valign="top" >ZooKeeper</td><td 
valign="top" >Defines the persistent storage (PStore) provider. The PStore 
holds configuration and profile data. For more information about PStores, see 
<a href="/docs/persistent-configuration-storage" rel="nofollow">Persistent 
Configuration Storage</a>.</td></tr><tr><td valign="top" 
>drill.exec.buffer.size</td><td valign="top" > </td><td valign="top" >Defines 
the amount of memory available, in terms of record batches, to hold data on the 
downstream side of an operation. Drill pushes data downstream as quickly as 
possible to make data immediately available. This requires Drill to use memory 
to hold the data pending operations. When data on a downstream operation is 
required, that data is immediately available so Drill does not have to go over 
the network to process it. Providing more memory to this option incr
 eases the speed at which Drill completes a query.</td></tr><tr><td 
valign="top" 
>drill.exec.sort.external.directoriesdrill.exec.sort.external.fs</td><td 
valign="top" > </td><td valign="top" >These options control spooling. The 
drill.exec.sort.external.directories option tells Drill which directory to use 
when spooling. The drill.exec.sort.external.fs option tells Drill which file 
system to use when spooling beyond memory files. <span style="line-height: 
1.4285715;background-color: transparent;"> </span>Drill uses a spool and sort 
operation for beyond memory operations. The sorting operation is designed to 
spool to a Hadoop file system. The default Hadoop file system is a local file 
system in the /tmp directory. Spooling performance (both writing and reading 
back from it) is constrained by the file system. <span style="line-height: 
1.4285715;background-color: transparent;"> </span>For MapR clusters, use 
MapReduce volumes or set up local volumes to use for spooling purposes. Volumes 
i
 mprove performance and stripe data across as many disks as 
possible.</td></tr><tr><td valign="top" colspan="1" 
>drill.exec.debug.error_on_leak</td><td valign="top" colspan="1" >True</td><td 
valign="top" colspan="1" >Determines how Drill behaves when memory leaks occur 
during a query. By default, this option is enabled so that queries fail when 
memory leaks occur. If you disable the option, Drill issues a warning when a 
memory leak occurs and completes the query.</td></tr><tr><td valign="top" 
colspan="1" >drill.exec.zk.connect</td><td valign="top" colspan="1" 
>localhost:2181</td><td valign="top" colspan="1" >Provides Drill with the 
ZooKeeper quorum to use to connect to data sources. Change this setting to 
point to the ZooKeeper quorum that you want Drill to use. You must configure 
this option on each Drillbit node.</td></tr><tr><td valign="top" colspan="1" 
>drill.exec.cluster-id</td><td valign="top" colspan="1" 
>my_drillbit_cluster</td><td valign="top" colspan="1" >Identifies the clu
 ster that corresponds with the ZooKeeper quorum indicated. It also provides 
Drill with the name of the cluster used during UDP multicast. You must change 
the default cluster-id if there are multiple clusters on the same subnet. If 
you do not change the ID, the clusters will try to connect to each other to 
create one cluster.</td></tr></tbody></table></div>
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/manage/conf/004-persist-conf.md
----------------------------------------------------------------------
diff --git a/_docs/manage/conf/004-persist-conf.md 
b/_docs/manage/conf/004-persist-conf.md
index 12439a5..3f11906 100644
--- a/_docs/manage/conf/004-persist-conf.md
+++ b/_docs/manage/conf/004-persist-conf.md
@@ -67,7 +67,7 @@ override.conf.`
 ## MapR-DB for Persistent Configuration Storage
 
 The MapR-DB plugin will be released soon. You can [compile Drill from
-source](/drill/docs/compiling-drill-from-source) to try out this
+source](/docs/compiling-drill-from-source) to try out this
 new feature.
 
 If you have MapR-DB in your cluster, you can use MapR-DB for persistent

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/query/001-get-started.md
----------------------------------------------------------------------
diff --git a/_docs/query/001-get-started.md b/_docs/query/001-get-started.md
index 92e924d..7383e4b 100644
--- a/_docs/query/001-get-started.md
+++ b/_docs/query/001-get-started.md
@@ -8,7 +8,7 @@ parent: "Query Data"
 This tutorial covers how to query a file and a directory on your local file
 system. Files and directories are like standard SQL tables to Drill. If you
 install Drill in [embedded
-mode](/drill/docs/installing-drill-in-embedded-mode), the
+mode](/docs/installing-drill-in-embedded-mode), the
 installer registers and configures your file system as the `dfs` instance.
 You can query these types of files using the default `dfs` storage plugin:
 
@@ -22,7 +22,7 @@ plugin to simplify querying plain text files.
 ## Prerequisites
 
 This tutorial assumes that you installed Drill in [embedded
-mode](/drill/docs/installing-drill-in-embedded-mode). The first few lessons of 
the tutorial
+mode](/docs/installing-drill-in-embedded-mode). The first few lessons of the 
tutorial
 use a Google file of Ngram data that you download from the internet. The
 compressed Google Ngram files are 8 and 58MB. To expand the compressed files,
 you need an additional 448MB of free disk space for this exercise.
@@ -32,7 +32,7 @@ interface (CLI) on Linux, Mac OS X, or Windows.
 
 ### Start Drill (Linux or Mac OS X)
 
-To [start Drill](/drill/docs/starting-stopping-drill) on Linux
+To [start Drill](/docs/starting-stopping-drill) on Linux
 or Mac OS X, use the SQLLine command.
 
   1. Open a terminal.
@@ -46,7 +46,7 @@ or Mac OS X, use the SQLLine command.
 
 ### Start Drill (Windows)
 
-To [start Drill](/drill/docs/starting-stopping-drill) on
+To [start Drill](/docs/starting-stopping-drill) on
 Windows, use the SQLLine command.
 
   1. Open the `apache-drill-<version>` folder.
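
For orientation only: once Drill is running in embedded mode, a query
against a local file through the `dfs` plugin looks roughly like the
sketch below. The JSON path is just a placeholder, not a file that ships
with Drill.

    -- hypothetical file; substitute a path that exists on your machine
    SELECT * FROM dfs.`/tmp/sample.json` LIMIT 5;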

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/query/002-query-fs.md
----------------------------------------------------------------------
diff --git a/_docs/query/002-query-fs.md b/_docs/query/002-query-fs.md
index ca488fb..c5d27f6 100644
--- a/_docs/query/002-query-fs.md
+++ b/_docs/query/002-query-fs.md
@@ -16,7 +16,7 @@ distributed file system:
 The default `dfs` storage plugin instance registered with Drill has a
 `default` workspace. If you query data in the `default` workspace, you do not
 need to include the workspace in the query. Refer to
-[Workspaces](/drill/docs/workspaces) for
+[Workspaces](/docs/workspaces) for
 more information.
 
 Drill supports the following file types:

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/query/003-query-hbase.md
----------------------------------------------------------------------
diff --git a/_docs/query/003-query-hbase.md b/_docs/query/003-query-hbase.md
index d2a33d5..e6e70fa 100644
--- a/_docs/query/003-query-hbase.md
+++ b/_docs/query/003-query-hbase.md
@@ -88,7 +88,7 @@ steps:
     
          cat testdata.txt | hbase shell
   5. Issue `exit` to leave the `hbase shell`.
-  6. Start Drill. Refer to [Starting/Stopping 
Drill](/drill/docs/starting-stopping-drill) for instructions.
+  6. Start Drill. Refer to [Starting/Stopping 
Drill](/docs/starting-stopping-drill) for instructions.
   7. Use Drill to issue the following SQL queries on the “students” and 
“clicks” tables:  
   
      1. Issue the following query to see the data in the “students” table: 
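
(The query itself falls outside this hunk. For orientation, a Drill query
against the “students” HBase table typically casts the stored bytes with
CONVERT_FROM, along the lines of the sketch below; the `account` column
family and `name` qualifier are assumed here for illustration.)

    SELECT CONVERT_FROM(t.row_key, 'UTF8')        AS studentid,
           CONVERT_FROM(t.account.`name`, 'UTF8') AS name
    FROM hbase.`students` t;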
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/query/005-query-hive.md
----------------------------------------------------------------------
diff --git a/_docs/query/005-query-hive.md b/_docs/query/005-query-hive.md
index 01be576..92071ff 100644
--- a/_docs/query/005-query-hive.md
+++ b/_docs/query/005-query-hive.md
@@ -19,7 +19,7 @@ To create a Hive table and query it with Drill, complete the 
following steps:
 
         hive> load data local inpath '/<directory path>/customers.csv' 
overwrite into table customers;`
   4. Issue `quit` or `exit` to leave the Hive shell.
-  5. Start Drill. Refer to [/drill/docs/starting-stopping-drill) for 
instructions.
+  5. Start Drill. Refer to [/docs/starting-stopping-drill) for instructions.
   6. Issue the following query to Drill to get the first and last names of the 
first ten customers in the Hive table:  
 
         0: jdbc:drill:schema=hiveremote> SELECT firstname,lastname FROM 
hiveremote.`customers` limit 10;`

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/query/007-query-sys-tbl.md
----------------------------------------------------------------------
diff --git a/_docs/query/007-query-sys-tbl.md b/_docs/query/007-query-sys-tbl.md
index 9b853ec..42975d9 100644
--- a/_docs/query/007-query-sys-tbl.md
+++ b/_docs/query/007-query-sys-tbl.md
@@ -152,8 +152,8 @@ The default value, which is of the double, float, or long 
double data type;
 otherwise, null.
 
 For information about how to configure Drill system and session options, see[
-Planning and Execution Options](/drill/docs/planning-and-execution-options).
+Planning and Execution Options](/docs/planning-and-execution-options).
 
 For information about how to configure Drill start-up options, see[ Start-Up
-Options](/drill/docs/start-up-options).
+Options](/docs/start-up-options).
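
For context, the system tables and options mentioned above can be
inspected and changed directly from SQLLine. The option name below is one
example and is assumed to exist in the installed Drill version.

    SELECT * FROM sys.options LIMIT 5;
    ALTER SESSION SET `planner.enable_hashjoin` = false;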
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/query/get-started/001-lesson1-connect.md
----------------------------------------------------------------------
diff --git a/_docs/query/get-started/001-lesson1-connect.md 
b/_docs/query/get-started/001-lesson1-connect.md
index c4619c3..1ca43eb 100644
--- a/_docs/query/get-started/001-lesson1-connect.md
+++ b/_docs/query/get-started/001-lesson1-connect.md
@@ -30,10 +30,10 @@ To list the default storage plugins, use the SHOW DATABASES 
command.
 
   2. Take a look at the list of storage plugins and workspaces that Drill 
recognizes.
 
-* `dfs` is the storage plugin for connecting to the [file 
system](/drill/docs/querying-a-file-system) data source on your machine.
+* `dfs` is the storage plugin for connecting to the [file 
system](/docs/querying-a-file-system) data source on your machine.
 * `cp` is a storage plugin for connecting to a JAR data source used with MapR.
-* `sys` is a storage plugin for connecting to Drill [system 
tables](/drill/docs/querying-system-tables).
-* [INFORMATION_SCHEMA](/drill/docs/querying-the-information-schema) is a 
storage plugin for connecting to an ANSI standard set of metadata tables.
+* `sys` is a storage plugin for connecting to Drill [system 
tables](/docs/querying-system-tables).
+* [INFORMATION_SCHEMA](/docs/querying-the-information-schema) is a storage 
plugin for connecting to an ANSI standard set of metadata tables.
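
A quick way to look at these storage plugins from SQLLine is sketched
below; the INFORMATION_SCHEMA query is illustrative and simply lists
whatever tables the registered plugins expose.

    SHOW DATABASES;
    USE dfs;
    SELECT * FROM INFORMATION_SCHEMA.`TABLES` LIMIT 5;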
 
 ## List Tables
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/query/query-fs/002-query-parquet.md
----------------------------------------------------------------------
diff --git a/_docs/query/query-fs/002-query-parquet.md 
b/_docs/query/query-fs/002-query-parquet.md
index cf19fcf..390dac4 100644
--- a/_docs/query/query-fs/002-query-parquet.md
+++ b/_docs/query/query-fs/002-query-parquet.md
@@ -6,7 +6,7 @@ Your Drill installation includes a `sample-data` directory with 
Parquet files
 that you can query. Use SQL syntax to query the `region.parquet` and
 `nation.parquet` files in the `sample-data` directory.
 
-**Note:** Your Drill installation location may differ from the examples used 
here. The examples assume that Drill was installed in embedded mode on your 
machine following the [Apache Drill in 10 Minutes 
](/drill/docs/apache-drill-in-10-minutes/)tutorial. If you installed Drill in 
distributed mode, or your `sample-data` directory differs from the location 
used in the examples, make sure to change the `sample-data` directory to the 
correct location before you run the queries.
+**Note:** Your Drill installation location may differ from the examples used 
here. The examples assume that Drill was installed in embedded mode on your 
machine following the [Apache Drill in 10 Minutes 
](/docs/apache-drill-in-10-minutes/)tutorial. If you installed Drill in 
distributed mode, or your `sample-data` directory differs from the location 
used in the examples, make sure to change the `sample-data` directory to the 
correct location before you run the queries.
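
A typical query against one of these sample files, with the installation
directory left as a placeholder, looks like this:

    SELECT * FROM dfs.`<drill_installation_directory>/sample-data/region.parquet`;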
 
 ## Region File
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/rn/004-0.6.0-rn.md
----------------------------------------------------------------------
diff --git a/_docs/rn/004-0.6.0-rn.md b/_docs/rn/004-0.6.0-rn.md
index f121ebe..28111f0 100644
--- a/_docs/rn/004-0.6.0-rn.md
+++ b/_docs/rn/004-0.6.0-rn.md
@@ -20,7 +20,7 @@ This release is primarily a bug fix release, with [more than 
30 JIRAs closed](
 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&vers
 ion=12327472), but there are some notable features:
 
-  * Direct ANSI SQL access to MongoDB, using the latest [MongoDB Plugin for 
Apache Drill](/drill/docs/mongodb-plugin-for-apache-drill)
+  * Direct ANSI SQL access to MongoDB, using the latest [MongoDB Plugin for 
Apache Drill](/docs/mongodb-plugin-for-apache-drill)
   * Filesystem query performance improvements with partition pruning
   * Ability to use the file system as a persistent store for query profiles 
and diagnostic information
   * Window function support (alpha)

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/sql-ref/001-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/001-data-types.md b/_docs/sql-ref/001-data-types.md
index e425033..491898a 100644
--- a/_docs/sql-ref/001-data-types.md
+++ b/_docs/sql-ref/001-data-types.md
@@ -18,7 +18,7 @@ You can use the following SQL data types in your Drill 
queries:
   * TIME
   * TIMESTAMP
 
-Refer to [Supported Date/Time Data Type 
formats](/drill/docs/supported-date-time-data-type-formats/).
+Refer to [Supported Date/Time Data Type 
formats](/docs/supported-date-time-data-type-formats/).
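
As a rough illustration of these types, text columns read from a CSV file
(exposed by Drill as the `columns` array) can be cast explicitly; the file
path and column layout below are hypothetical.

    SELECT CAST(columns[0] AS INT)       AS id,
           CAST(columns[1] AS DATE)      AS start_date,
           CAST(columns[2] AS TIMESTAMP) AS updated_at
    FROM dfs.`/tmp/example.csv`;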
 
 #### Integer
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/sql-ref/002-operators.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/002-operators.md b/_docs/sql-ref/002-operators.md
index 79afc7d..074375a 100644
--- a/_docs/sql-ref/002-operators.md
+++ b/_docs/sql-ref/002-operators.md
@@ -60,7 +60,7 @@ You can use the following subquery operators in your Drill 
queries:
   * EXISTS
   * IN
 
-See [SELECT Statements](/drill/docs/select-statements).
+See [SELECT Statements](/docs/select-statements).
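
A minimal IN-subquery sketch, reusing the sample-data Parquet files
referenced elsewhere in this commit (the TPC-H style column names are
assumed):

    SELECT n.n_name
    FROM dfs.`<drill_installation_directory>/sample-data/nation.parquet` n
    WHERE n.n_regionkey IN (
        SELECT r.r_regionkey
        FROM dfs.`<drill_installation_directory>/sample-data/region.parquet` r
        WHERE r.r_name = 'AFRICA');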
 
 ## String Operators
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/sql-ref/003-functions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/003-functions.md b/_docs/sql-ref/003-functions.md
index bc57a39..98ce701 100644
--- a/_docs/sql-ref/003-functions.md
+++ b/_docs/sql-ref/003-functions.md
@@ -181,6 +181,6 @@ embedded JSON data:
 This section contains descriptions of SQL functions that you can use to
 analyze nested data:
 
-  * [FLATTEN Function](/drill/docs/flatten-function)
-  * [KVGEN Function](/drill/docs/kvgen-function)
-  * [REPEATED_COUNT Function](/drill/docs/repeated-count-function)
\ No newline at end of file
+  * [FLATTEN Function](/docs/flatten-function)
+  * [KVGEN Function](/docs/kvgen-function)
+  * [REPEATED_COUNT Function](/docs/repeated-count-function)
\ No newline at end of file
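
A short sketch of two of these functions on a hypothetical JSON file with
a repeated `categories` field:

    SELECT REPEATED_COUNT(categories) AS category_count
    FROM dfs.`/tmp/businesses.json` LIMIT 5;

    SELECT FLATTEN(categories) AS category
    FROM dfs.`/tmp/businesses.json` LIMIT 5;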

http://git-wip-us.apache.org/repos/asf/drill/blob/e73f2ec1/_docs/sql-ref/004-nest-functions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/004-nest-functions.md 
b/_docs/sql-ref/004-nest-functions.md
index 09fe91e..c6e7ff2 100644
--- a/_docs/sql-ref/004-nest-functions.md
+++ b/_docs/sql-ref/004-nest-functions.md
@@ -5,6 +5,6 @@ parent: "SQL Reference"
 This section contains descriptions of SQL functions that you can use to
 analyze nested data:
 
-  * [FLATTEN Function](/drill/docs/flatten-function)
-  * [KVGEN Function](/drill/docs/kvgen-function)
-  * [REPEATED_COUNT Function](/drill/docs/repeated-count-function)
\ No newline at end of file
+  * [FLATTEN Function](/docs/flatten-function)
+  * [KVGEN Function](/docs/kvgen-function)
+  * [REPEATED_COUNT Function](/docs/repeated-count-function)
\ No newline at end of file
