Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The following page has been changed by AshishThusoo:
http://wiki.apache.org/hadoop/Hive/AdminManual/Configuration

------------------------------------------------------------------------------
  
  === Hive Configuration Variables ===
  ||'''Variable Name'''||'''Description'''||'''Default Value'''||
- ||hive.exec.script.wrapper||Wrapper around any invocations to script operator 
e.g. if this is set to python, the script passed to the script operator will be 
invoked as {{{python <script command>}}}. If the value is null or not set, the 
script is invoked as {{{<script command>}}}||null||
+ ||hive.exec.script.wrapper||Wrapper around any invocations to script operator 
e.g. if this is set to python, the script passed to the script operator will be 
invoked as {{{python <script command>}}}. If the value is null or not set, the 
script is invoked as {{{<script command>}}}.||null||
  ||hive.exec.plan||||null||
- ||hive.exec.scratchdir||This directory is used by hive to store the plans for 
different map/reduce stages for the query as well as to stored the intermediate 
outputs of these stages||/tmp/<user.name>/hive||
+ ||hive.exec.scratchdir||This directory is used by hive to store the plans for 
different map/reduce stages for the query as well as to store the intermediate 
outputs of these stages.||/tmp/<user.name>/hive||
- ||hive.exec.submitviachild||Determines whether the map/reduce jobs should be 
submitted through a separate jvm in the non local mode||false - By default jobs 
are submitted through the same jvm as the compiler||
+ ||hive.exec.submitviachild||Determines whether the map/reduce jobs should be 
submitted through a separate jvm in non-local mode.||false - By default 
jobs are submitted through the same jvm as the compiler||
- ||hive.exec.script.maxerrsize||Maximum number of serialization errors allowed 
in a user script invoked through {{{TRANSFORM}}} or {{{MAP}}} or {{{REDUCE}}} 
constructs||100000||
+ ||hive.exec.script.maxerrsize||Maximum number of serialization errors allowed 
in a user script invoked through {{{TRANSFORM}}} or {{{MAP}}} or {{{REDUCE}}} 
constructs.||100000||
- ||hive.exec.compress.output||Determines whether the output of the final 
map/reduce job in a query is compressed or not||false||
+ ||hive.exec.compress.output||Determines whether the output of the final 
map/reduce job in a query is compressed or not.||false||
- ||hive.exec.compress.intermediate||Determines whether the output of the 
intermediate map/reduce jobs in a query is compressed or not||false||
+ ||hive.exec.compress.intermediate||Determines whether the output of the 
intermediate map/reduce jobs in a query is compressed or not.||false||
- ||hive.jar.path||The location of hive_cli.jar that is used when submitting 
jobs in a separate jvm||||
+ ||hive.jar.path||The location of hive_cli.jar that is used when submitting 
jobs in a separate jvm.||||
- ||hive.aux.jars.path||||||
- ||hive.partition.pruning||||nonstrict||
- ||hive.map.aggr||||false||
+ ||hive.aux.jars.path||The location of the plugin jars that contain 
implementations of user-defined functions (UDFs) and SerDes.||||
+ ||hive.partition.pruning||When set to strict, the compiler throws an error if 
no partition predicate is provided on a partitioned table. This protects 
against a user inadvertently issuing a query against all the partitions of the 
table.||nonstrict||
+ ||hive.map.aggr||Determines whether map-side aggregation is 
enabled.||false||
  ||hive.join.emit.interval||||1000||
  ||hive.map.aggr.hash.percentmemory||||(float)0.5||
  ||hive.default.fileformat||||TextFile||
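Variables like those above are typically overridden in hive-site.xml (or 
per-session with the CLI's {{{set}}} command). A minimal, illustrative 
fragment follows; the values shown are examples only, not recommendations 
from this page:

```xml
<!-- Hypothetical hive-site.xml fragment; values are examples only. -->
<property>
  <name>hive.exec.compress.output</name>
  <value>true</value>
  <description>Compress the output of the final map/reduce job.</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/${user.name}/hive</value>
  <description>Scratch space for query plans and intermediate outputs.</description>
</property>
```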
@@ -59, +59 @@

  
  === Hive Configuration Variables used to interact with Hadoop ===
  ||'''Variable Name'''||'''Description'''||'''Default Value'''||
- ||hadoop.bin.path||||System.getenv("HADOOP_HOME") + "/bin/hadoop"||
- ||hadoop.config.dir||||System.getenv("HADOOP_HOME") + "/conf"||
+ ||hadoop.bin.path||The location of the hadoop script, which is used to 
submit jobs to hadoop when submitting through a separate jvm.||$HADOOP_HOME/bin/hadoop||
+ ||hadoop.config.dir||The location of the configuration directory of the 
hadoop installation.||$HADOOP_HOME/conf||
  ||fs.default.name||||file:///||
  ||map.input.file||||null||
- ||mapred.job.tracker||||local||
- ||mapred.reduce.tasks||||1||
- ||mapred.job.name||||null||
+ ||mapred.job.tracker||The URL of the jobtracker. If this is set to local, 
map/reduce jobs run in local mode.||local||
+ ||mapred.reduce.tasks||The number of reducers for each map/reduce stage in 
the query plan.||1||
+ ||mapred.job.name||The name of the map/reduce job.||null||
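As the mapred.job.tracker row notes, setting it to local runs a query without 
a cluster. A hedged per-session example (Hive CLI {{{set}}} statements; 
illustrative only, not taken from this page):

```sql
-- Illustrative Hive CLI session settings:
-- run map/reduce stages locally, with a single reducer per stage.
set mapred.job.tracker=local;
set mapred.reduce.tasks=1;
```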
  
  === Hive Variables used to pass run time information ===
  ||'''Variable Name'''||'''Description'''||'''Default Value'''||
- ||hive.session.id||||||
- ||hive.query.string||||||
- ||hive.query.planid||||||
- ||hive.jobname.length||||50||
- ||hive.table.name||||||
- ||hive.partition.name||||||
- ||hive.alias||||||
+ ||hive.session.id||The id of the Hive session.||||
+ ||hive.query.string||The query string passed to the map/reduce job.||||
+ ||hive.query.planid||The id of the plan for the map/reduce stage.||||
+ ||hive.jobname.length||The maximum length of the job name.||50||
+ ||hive.table.name||The name of the hive table. This is passed to the user 
scripts through the script operator.||||
+ ||hive.partition.name||The name of the hive partition. This is passed to the 
user scripts through the script operator.||||
+ ||hive.alias||The alias being processed. This is also passed to the user 
scripts through the script operator.||||
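The last three rows describe values handed to user scripts by the script 
operator. Assuming (not stated on this page) that Hive exports these to the 
script's environment with dots replaced by underscores, a {{{TRANSFORM}}} 
script could read them with a sketch like this:

```python
import os

def hive_script_context(environ=None):
    """Collect the run-time variables Hive passes to a TRANSFORM script.

    Assumption: Hive exports its configuration to the script's environment
    with '.' replaced by '_', so hive.table.name arrives as hive_table_name.
    Missing variables come back as None.
    """
    env = os.environ if environ is None else environ
    keys = ("hive_table_name", "hive_partition_name", "hive_alias")
    return {k: env.get(k) for k in keys}

# Simulate the environment the script operator would provide
# (hypothetical table and alias names, for illustration only).
fake_env = {"hive_table_name": "page_views", "hive_alias": "pv"}
print(hive_script_context(fake_env))
```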
  
