Repository: tajo
Updated Branches:
  refs/heads/master 410a53c98 -> 68dd96065


TAJO-2028: Refining Hive Integration document including typo.

Closes #916

Signed-off-by: Jihoon Son <[email protected]>


Project: http://git-wip-us.apache.org/repos/asf/tajo/repo
Commit: http://git-wip-us.apache.org/repos/asf/tajo/commit/68dd9606
Tree: http://git-wip-us.apache.org/repos/asf/tajo/tree/68dd9606
Diff: http://git-wip-us.apache.org/repos/asf/tajo/diff/68dd9606

Branch: refs/heads/master
Commit: 68dd960652c754a6cbd41ac68065eeca74522ed5
Parents: 410a53c
Author: Jongyoung Park <[email protected]>
Authored: Wed Jan 6 16:42:37 2016 +0900
Committer: Jihoon Son <[email protected]>
Committed: Wed Jan 6 16:42:56 2016 +0900

----------------------------------------------------------------------
 CHANGES                                        |  3 ++
 tajo-docs/src/main/sphinx/hive_integration.rst | 36 +++++++++++++--------
 2 files changed, 26 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/tajo/blob/68dd9606/CHANGES
----------------------------------------------------------------------
diff --git a/CHANGES b/CHANGES
index 82465c9..6536009 100644
--- a/CHANGES
+++ b/CHANGES
@@ -138,6 +138,9 @@ Release 0.12.0 - unreleased
 
   TASKS
 
+    TAJO-2028: Refining Hive Integration document including typo. 
+    (Jongyoung Park via jihoon)
+
     TAJO-1999: TUtil.newList/newLinkedHashMap should be replaced by Java's diamond
     operator. (Dongkyu Hwangbo via jihoon)
 

http://git-wip-us.apache.org/repos/asf/tajo/blob/68dd9606/tajo-docs/src/main/sphinx/hive_integration.rst
----------------------------------------------------------------------
diff --git a/tajo-docs/src/main/sphinx/hive_integration.rst b/tajo-docs/src/main/sphinx/hive_integration.rst
index 4c1d8d4..6262a02 100644
--- a/tajo-docs/src/main/sphinx/hive_integration.rst
+++ b/tajo-docs/src/main/sphinx/hive_integration.rst
@@ -1,6 +1,6 @@
-*************************************
+****************
 Hive Integration
-*************************************
+****************
 
 Apache Tajo™ catalog supports HiveCatalogStore to integrate with Apache Hive™.
 This integration allows Tajo to access all tables used in Apache Hive. 
@@ -12,16 +12,22 @@ and then add some configs into ``conf/tajo-env.sh`` and ``conf/catalog-site.xml`
 This section describes how to setup HiveMetaStore integration.
 This instruction would take no more than five minutes.
 
-You need to set your Hive home directory to the environment variable ``HIVE_HOME`` in conf/tajo-env.sh as follows: ::
+You need to set your Hive home directory to the environment variable **HIVE_HOME** in ``conf/tajo-env.sh`` as follows:
+
+.. code-block:: sh
 
   export HIVE_HOME=/path/to/your/hive/directory
 
 If you need to use jdbc to connect HiveMetaStore, you have to prepare MySQL jdbc driver.
-Next, you should set the path of MySQL JDBC driver jar file to the environment variable HIVE_JDBC_DRIVER_DIR in conf/tajo-env.sh as follows: ::
+Next, you should set the path of MySQL JDBC driver jar file to the environment variable **HIVE_JDBC_DRIVER_DIR** in ``conf/tajo-env.sh`` as follows:
+
+.. code-block:: sh
 
-  export HIVE_JDBC_DRIVER_DIR==/path/to/your/mysql_jdbc_driver/mysql-connector-java-x.x.x-bin.jar
+  export HIVE_JDBC_DRIVER_DIR=/path/to/your/mysql_jdbc_driver/mysql-connector-java-x.x.x-bin.jar
 
-Finally, you should specify HiveCatalogStore as Tajo catalog driver class in ``conf/catalog-site.xml`` as follows: ::
+Finally, you should specify HiveCatalogStore as Tajo catalog driver class in ``conf/catalog-site.xml`` as follows:
+
+.. code-block:: xml
 
   <property>
     <name>tajo.catalog.store.class</name>
@@ -30,13 +36,17 @@ Finally, you should specify HiveCatalogStore as Tajo catalog driver class in ``c
 
 .. note::
 
-  Hive stores a list of partitions for each table in its metastore. If new partitions are
-  directly added to HDFS, HiveMetastore will not able aware of these partitions unless the user
+  Hive stores a list of partitions for each table in its metastore. When new partitions are
+  added directly to HDFS, HiveMetastore can't recognize these partitions until the user executes
   ``ALTER TABLE table_name ADD PARTITION`` commands on each of the newly added partitions or
-  ``MSCK REPAIR TABLE  table_name`` command.
+  ``MSCK REPAIR TABLE table_name`` command.
+
+  But current Tajo doesn't provide ``ADD PARTITION`` command and Hive doesn't provide an api for
+  responding to ``MSCK REPAIR TABLE`` command. Thus, if you insert data to Hive partitioned
+  table and you want to scan the updated partitions through Tajo, you must run following command on Hive
+  (see `Hive doc <https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RecoverPartitions(MSCKREPAIRTABLE)>`_
+  for more details of the command):
 
-  But current tajo doesn't provide ``ADD PARTITION`` command and hive doesn't provide an api for
-  responding to ``MSK REPAIR TABLE`` command. Thus, if you insert data to hive partitioned
-  table and you want to scan the updated partitions through Tajo, you must run following command on hive ::
+  .. code-block:: sql
 
-  $ MSCK REPAIR TABLE [table_name];
+    MSCK REPAIR TABLE [table_name];
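
Taken together, the revised instructions amount to two environment settings plus one catalog property. A minimal sketch of the ``conf/tajo-env.sh`` side is below; both paths are placeholders, not defaults, and the MySQL connector version must match your installation:

```shell
# conf/tajo-env.sh -- minimal HiveMetaStore integration sketch (placeholder paths)
export HIVE_HOME=/path/to/your/hive/directory
# Only needed when HiveMetaStore is backed by MySQL over JDBC
export HIVE_JDBC_DRIVER_DIR=/path/to/your/mysql_jdbc_driver/mysql-connector-java-x.x.x-bin.jar
```

``conf/catalog-site.xml`` then points ``tajo.catalog.store.class`` at the Hive catalog store; the value line falls outside the hunk shown above, but in stock Tajo it is assumed to be ``org.apache.tajo.catalog.store.HiveCatalogStore``.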
