Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The following page has been changed by RodrigoSchmidt:
http://wiki.apache.org/hadoop/Hive/AdminManual/Installation

------------------------------------------------------------------------------
- Installing Hive is simple and only requires having Java 1.6 and Ant.
+ Installing Hive is simple and only requires having Java 1.6 and Ant installed 
on your machine.
  
+ Hive is available via SVN at 
http://svn.apache.org/repos/asf/hadoop/hive/trunk. You can check it out by 
running the following command.
- Hive is available via SVN at: 
http://svn.apache.org/repos/asf/hadoop/hive/trunk
-   * $ svn co http://svn.apache.org/repos/asf/hadoop/hive/trunk hive
-   * $ cd hive
-   * $ ant package
-   * $ cd build/dist
-   * $ ls
-     * README.txt
-     * bin/ (all the shell scripts)
-     * lib/ (required jar files)
-     * conf/ (configuration files)
-     * examples/ (sample input and query files)
  
- In the rest of the page, we use build/dist and <install-dir> interchangeably.
+ {{{
+ $ svn co http://svn.apache.org/repos/asf/hadoop/hive/trunk hive
+ }}}
  
- [wiki:/EclipseSetup Instructions] to setup eclipse for hive development.
+ To build hive, execute the following command in the base directory:
  
- == Running Hive ==
+ {{{
+ $ ant package
+ }}}
  
+ This will create the subdirectory build/dist with the following contents:
- Hive uses hadoop  that means:
-   * you must have hadoop in your path OR
-   * export HADOOP_HOME=<hadoop-install-dir>
  
- In addition, you must create /tmp and /user/hive/warehouse 
+     * README.txt: readme file
+     * bin/: directory containing all the shell scripts
+     * lib/: directory containing all required jar files
+     * conf/: directory with configuration files
+     * examples/: directory with sample input and query files
+ 
+ Subdirectory build/dist contains all the files necessary to run hive. 
You can run hive from there or copy the directory to a different location if 
you prefer.
+ 
+ In order to run Hive, you must either have hadoop in your path or set the 
environment variable HADOOP_HOME to the hadoop installation directory.
+ 
+ Moreover, we strongly advise users to create the HDFS directories /tmp and 
/user/hive/warehouse 
- (aka hive.metastore.warehouse.dir) and set them chmod g+w in 
+ (aka hive.metastore.warehouse.dir) and make them group-writable (chmod g+w) 
before creating tables in Hive. 
- HDFS before a table can be created in Hive. 
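The commands below are one way to do this, assuming HADOOP_HOME points at your 
hadoop installation and HDFS is running (adjust to your setup):

```
$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
```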
  
  
+ To use the hive command line interface (cli), go to the hive home directory 
(the one with the contents of build/dist) and execute the following command:
- To use hive command line interface (cli) from the shell:
-   * $ bin/hive
  
+ {{{
+ $ bin/hive
+ }}}
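Once the cli comes up, you can issue HiveQL statements at the hive> prompt; 
for example, to list existing tables:

```
hive> SHOW TABLES;
```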
+ 
+ Metadata is stored in an embedded Derby database whose disk storage location 
is determined by the hive configuration variable 
javax.jdo.option.ConnectionURL. By default (see conf/hive-default.xml), this 
location is ./metastore_db.
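The corresponding entry in conf/hive-default.xml looks like the following 
(the value shown is the usual shipped default; check your copy):

```
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
```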
+ 
+ Using Derby in embedded mode allows at most one user at a time. To configure 
Derby to run in server mode, see HiveDerbyServerMode.
+ 
