Repository: oozie
Updated Branches:
  refs/heads/master f9b3746ae -> 2bce9e8f6


OOZIE-2188 Fix typos in twiki documentation


Project: http://git-wip-us.apache.org/repos/asf/oozie/repo
Commit: http://git-wip-us.apache.org/repos/asf/oozie/commit/2bce9e8f
Tree: http://git-wip-us.apache.org/repos/asf/oozie/tree/2bce9e8f
Diff: http://git-wip-us.apache.org/repos/asf/oozie/diff/2bce9e8f

Branch: refs/heads/master
Commit: 2bce9e8f64b259f52401453463f8e0b0f985c558
Parents: f9b3746
Author: Purshotam Shah <[email protected]>
Authored: Wed Apr 1 12:32:13 2015 -0700
Committer: Purshotam Shah <[email protected]>
Committed: Wed Apr 1 12:32:13 2015 -0700

----------------------------------------------------------------------
 .../src/site/twiki/AG_HadoopConfiguration.twiki |  4 +--
 docs/src/site/twiki/AG_Install.twiki            | 32 +++++++++---------
 docs/src/site/twiki/AG_Monitoring.twiki         |  8 ++---
 docs/src/site/twiki/BundleFunctionalSpec.twiki  |  2 +-
 .../site/twiki/CoordinatorFunctionalSpec.twiki  | 30 ++++++++---------
 docs/src/site/twiki/DG_CommandLineTool.twiki    | 18 +++++------
 .../site/twiki/DG_CustomActionExecutor.twiki    |  4 +--
 .../site/twiki/DG_EmailActionExtension.twiki    |  4 +--
 docs/src/site/twiki/DG_Examples.twiki           |  6 ++--
 .../site/twiki/DG_Hive2ActionExtension.twiki    |  4 +--
 docs/src/site/twiki/DG_JMSNotifications.twiki   |  4 +--
 docs/src/site/twiki/DG_QuickStart.twiki         |  2 +-
 docs/src/site/twiki/DG_SLAMonitoring.twiki      |  4 +--
 .../site/twiki/DG_ShellActionExtension.twiki    |  6 ++--
 .../site/twiki/DG_SqoopActionExtension.twiki    |  4 +--
 docs/src/site/twiki/ENG_Building.twiki          |  2 +-
 .../site/twiki/ENG_Custom_Authentication.twiki  | 12 +++----
 docs/src/site/twiki/ENG_MiniOozie.twiki         |  2 +-
 docs/src/site/twiki/WebServicesAPI.twiki        | 34 ++++++++++----------
 .../src/site/twiki/WorkflowFunctionalSpec.twiki | 29 ++++++++---------
 release-log.txt                                 |  1 +
 21 files changed, 106 insertions(+), 106 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/AG_HadoopConfiguration.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/AG_HadoopConfiguration.twiki b/docs/src/site/twiki/AG_HadoopConfiguration.twiki
index d907296..528bf4a 100644
--- a/docs/src/site/twiki/AG_HadoopConfiguration.twiki
+++ b/docs/src/site/twiki/AG_HadoopConfiguration.twiki
@@ -30,7 +30,7 @@ Oozie supports whitelisting Hadoop services (JobTracker, HDFS), via 2 configurat
 The value must follow the pattern =[AUTHORITY,...]=. Where =AUTHORITY= is the =HOST:PORT= of
 the Hadoop service (JobTracker, HDFS).
 
-If the value is empty any HOST:PORT is accepted. Emtpy is the default value.
+If the value is empty any HOST:PORT is accepted. Empty is the default value.
 
 ---++ Hadoop Default Configuration Values
 
@@ -38,7 +38,7 @@ Oozie supports Hadoop configuration equivalent to the Hadoop =*-site.xml= files.
 
 The configuration property in the =oozie-site.xml= is =oozie.service.HadoopAccessorService.hadoop.configurations=
 and its value must follow the pattern =[<AUTHORITY>=<HADOOP_CONF_DIR>,]*=. Where =<AUTHORITY>= is the =HOST:PORT= of
-the Hadoop service (JobTracker, HDFS). The =<HADOO_CONF_DIR>= is a Hadoop configuration directory. If the specified
+the Hadoop service (JobTracker, HDFS). The =<HADOOP_CONF_DIR>= is a Hadoop configuration directory. If the specified
  directory is a relative path, it will be looked under the Oozie configuration directory. And absolute path can
  also be specified. Oozie will load the Hadoop =*-site.xml= files in the following order: core-site.xml, hdfs-site.xml,
  mapred-site.xml, yarn-site.xml, hadoop-site.xml, ssl-client.xml.

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/AG_Install.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/AG_Install.twiki b/docs/src/site/twiki/AG_Install.twiki
index f911412..0ce2609 100644
--- a/docs/src/site/twiki/AG_Install.twiki
+++ b/docs/src/site/twiki/AG_Install.twiki
@@ -72,10 +72,10 @@ Usage  : oozie-setup.sh <OPTIONS>"
                                                                FS_URI is the fs.default.name"
                                                                for hdfs uri; SHARED_LIBRARY, path to the"
                                                                Oozie sharelib to install, it can be a tarball"
-                                                               or an expanded version of it. If ommited,"
+                                                               or an expanded version of it. If omitted,"
                                                                the Oozie sharelib tarball from the Oozie"
                                                                installation directory will be used)"
-                                                               (action failes if sharelib is already installed"
+                                                               (action fails if sharelib is already installed"
                                                                in HDFS)"
          sharelib upgrade -fs FS_URI [-locallib SHARED_LIBRARY] ([deprecated][use create command to create new version]
                                                                  upgrade existing sharelib, fails if there"
@@ -117,7 +117,7 @@ After the Hadoop JARs and the ExtJS library has been added to the =oozie.war= fi
 Delete any previous deployment of the =oozie.war= from the servlet container (if using Tomcat, delete
 =oozie.war= and =oozie= directory from Tomcat's =webapps/= directory)
 
-Deploy the prepared =oozie.war= file (the one that contains the Hadoop JARs adn the ExtJS library) in the
+Deploy the prepared =oozie.war= file (the one that contains the Hadoop JARs and the ExtJS library) in the
 servlet container (if using Tomcat, copy the prepared =oozie.war= file to Tomcat's =webapps/= directory).
 
 *IMPORTANT:* Only one Oozie instance can be deployed per Tomcat instance.
@@ -130,12 +130,12 @@ By default, Oozie is configured to use Embedded Derby.
 
 Oozie bundles the JDBC drivers for HSQL, Embedded Derby and PostgreSQL.
 
-HSQL is normally used for testcases as it is an in-memory database and all data is lost everytime Oozie is stopped.
+HSQL is normally used for test cases as it is an in-memory database and all data is lost every time Oozie is stopped.
 
 If using Derby, MySQL, Oracle, PostgreSQL, or SQL Server, the Oozie database schema must be created using the =ooziedb.sh= command
 line tool.
 
-If using MySQL, Oracle, or SQL Server, the corresponding JDBC driver JAR file mut be copied to Oozie's =libext/= directory and
+If using MySQL, Oracle, or SQL Server, the corresponding JDBC driver JAR file must be copied to Oozie's =libext/= directory and
 it must be added to Oozie WAR file using the =bin/addtowar.sh= or the =oozie-setup.sh= scripts using the =-jars= option.
 
 *IMPORTANT:* It is recommended to set the database's timezone to GMT (consult your database's documentation on how to do this).
@@ -197,7 +197,7 @@ the =ooziedb.sh= command line tool.
 NOTE: If instead using the '-run' option, the '-sqlfile <FILE>' option is used, then all the
 database changes will be written to the specified file and the database won't be modified.
 
-If using HSQL there is no need to use the =ooziedb= command line tool as HSQL is an im-memory database. Use the
+If using HSQL there is no need to use the =ooziedb= command line tool as HSQL is an in-memory database. Use the
 following configuration properties in the oozie-site.xml:
 
 <verbatim>
@@ -246,7 +246,7 @@ Oozie logs in 4 different files:
 
    * oozie.log: web services log streaming works from this log
    * oozie-ops.log: messages for Admin/Operations to monitor
-   * oozie-instrumentation.log: intrumentation data, every 60 seconds (configurable)
+   * oozie-instrumentation.log: instrumentation data, every 60 seconds (configurable)
    * oozie-audit.log: audit messages, workflow jobs changes
 
 The embedded Tomcat and embedded Derby log files are also written to Oozie's =logs/= directory.
@@ -266,9 +266,9 @@ The =user.name= parameter value is taken from the client process Java System pro
 Kerberos HTTP SPNEGO authentication requires the user to perform a Kerberos HTTP SPNEGO authentication sequence.
 
 If Pseudo/simple or Kerberos HTTP SPNEGO authentication mechanisms are used, Oozie will return the user an
-authentication token HTTP Cookie that can be used in later requests as identy proof.
+authentication token HTTP Cookie that can be used in later requests as identity proof.
 
-Oozie uses Apache Hadoop-Auth (Java HTTP SPENGO) library for authentication.
+Oozie uses Apache Hadoop-Auth (Java HTTP SPNEGO) library for authentication.
 This library can be extended to support other authentication mechanisms.
 
 Oozie user authentication is configured using the following configuration properties (default values shown):
@@ -293,7 +293,7 @@ The =signature.secret= is the signature secret for signing the authentication to
 case Oozie will randomly generate one on startup.
 
 The =oozie.authentication.cookie.domain= The domain to use for the HTTP cookie that stores the
-authentication token. In order to authentiation to work correctly across all Hadoop nodes web-consoles
+authentication token. In order to authentication to work correctly across all Hadoop nodes web-consoles
 the domain must be correctly set.
 
 The =simple.anonymous.allowed= indicates if anonymous requests are allowed. This setting is meaningful
@@ -349,7 +349,7 @@ Because proxyuser is a powerful capability, Oozie provides the following restric
 
    * Proxyuser is an explicit configuration on per proxyuser user basis.
    * A proxyuser user can be restricted to impersonate other users from a set of hosts.
-   * A proxyser user can be restricted to impersonate users belonging to a set of groups.
+   * A proxyuser user can be restricted to impersonate users belonging to a set of groups.
 
 There are 2 configuration properties needed to set up a proxyuser:
 
@@ -945,7 +945,7 @@ additional calls to the KDC to authenticate users to the Oozie server (because t
 servers, which will cause a fallback to Kerberos).
 
 4. If you'd like to use HTTPS (SSL) with Oozie HA, there's some additional considerations that need to be made.
-See the [[AG_Install#Setting_Up_Oozie_with_HTTPS_SSL][Seeting Up Oozie with HTTPS (SSL)]] section for more information.
+See the [[AG_Install#Setting_Up_Oozie_with_HTTPS_SSL][Setting Up Oozie with HTTPS (SSL)]] section for more information.
 
 ---++++ JobId sequence
 Oozie in HA mode, uses ZK to generate job id sequence. Job Ids are of following format.
@@ -972,7 +972,7 @@ Use the standard Tomcat commands to start and stop Oozie.
 
 Copy and expand the =oozie-client= TAR.GZ file bundled with the distribution. Add the =bin/= directory to the =PATH=.
 
-Refer to the [[DG_CommandLineTool][Command Line Interface Utilities]] document for a a full reference of the =oozie=
+Refer to the [[DG_CommandLineTool][Command Line Interface Utilities]] document for a full reference of the =oozie=
 command line tool.
 
 ---++ Oozie Share Lib
@@ -1022,7 +1022,7 @@ action and value is a comma separated list of DFS directories or jar files.
 
 By default Oozie runs coordinator and bundle jobs using =UTC= timezone for datetime values specified in the application
 XML and in the job parameter properties. This includes coordinator applications start and end times of jobs, coordinator
-datasets initial-instance, bundle applications kick-offtimes. In addition, coordinator dataset instance URI templates
+datasets initial-instance, and bundle applications kickoff times. In addition, coordinator dataset instance URI templates
 will be resolved using datetime values of the Oozie processing timezone.
 
 It is possible to set the Oozie processing timezone to a timezone that is an offset of UTC, alternate timezones must
@@ -1044,8 +1044,8 @@ be expressed in the corresponding timezone, for example =2012-08-08T12:42+0530=.
 
 *NOTE:* It is strongly encouraged to use =UTC=, the default Oozie processing timezone.
 
-For more details on using an alternate Oozie processing timezone, please reffer to the
-[[CoordinatorFunctionalSpec#datetime][Coordinator Fuctional Specification, section '4. Datetime']]
+For more details on using an alternate Oozie processing timezone, please refer to the
+[[CoordinatorFunctionalSpec#datetime][Coordinator Functional Specification, section '4. Datetime']]
 
 #UberJar
 ---++ MapReduce Workflow Uber Jars

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/AG_Monitoring.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/AG_Monitoring.twiki b/docs/src/site/twiki/AG_Monitoring.twiki
index a37fd20..63f0542 100644
--- a/docs/src/site/twiki/AG_Monitoring.twiki
+++ b/docs/src/site/twiki/AG_Monitoring.twiki
@@ -35,10 +35,10 @@ Instrumentation data includes variables, samplers, timers and counters.
 
    * logging
       * config.file: Log4j '.properties' configuration file.
-      * from.classpath: whether the config file has been read from the claspath or from the config directory.
-      * reload.interval: interval at which the config file will be realoded. 0 if the config file will never be reloaded, when loaded from the classpath is never reloaded.
+      * from.classpath: whether the config file has been read from the classpath or from the config directory.
+      * reload.interval: interval at which the config file will be reloaded. 0 if the config file will never be reloaded, when loaded from the classpath is never reloaded.
 
----+++ Samplers - Poll data at a fixed interval (default 1 sec) and report an average utlization over a longer period of time (default 60 seconds).
+---+++ Samplers - Poll data at a fixed interval (default 1 sec) and report an average utilization over a longer period of time (default 60 seconds).
 
 Poll for data over fixed interval and generate an average over the time interval. Unless specified, all samplers in
 Oozie work on a 1 minute interval.
@@ -59,7 +59,7 @@ Oozie work on a 1 minute interval.
       * requests
       * version
 
----+++ Counters - Maintain statistics about the number of times an event has occured, for the running Oozie instance. The values are reset if the Oozie instance is restarted.
+---+++ Counters - Maintain statistics about the number of times an event has occurred, for the running Oozie instance. The values are reset if the Oozie instance is restarted.
 
    * action.executors - Counters related to actions.
       * [action_type]#action.[operation_performed] (start, end, check, kill)

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/BundleFunctionalSpec.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/BundleFunctionalSpec.twiki b/docs/src/site/twiki/BundleFunctionalSpec.twiki
index 4a3241d..b400c19 100644
--- a/docs/src/site/twiki/BundleFunctionalSpec.twiki
+++ b/docs/src/site/twiki/BundleFunctionalSpec.twiki
@@ -17,7 +17,7 @@ The goal of this document is to define a new oozie abstraction called bundle sys
 
 Bundle is a higher-level oozie abstraction that will batch a set of coordinator applications. The user will be able to start/stop/suspend/resume/rerun in the bundle level resulting a better and easy operational control.
 
-More specififcally, the oozie *Bundle* system allows the user to define and execute a bunch of coordinator applications often called a data pipeline. There is no explicit dependency among the coordinator applications in a bundle. However, a user could use the data dependency of coordinator applications to create an implicit data application pipeline.
+More specifically, the oozie *Bundle* system allows the user to define and execute a bunch of coordinator applications often called a data pipeline. There is no explicit dependency among the coordinator applications in a bundle. However, a user could use the data dependency of coordinator applications to create an implicit data application pipeline.
 
 
 ---++ 2. Definitions

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki b/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki
index 0812bb8..ca61fe9 100644
--- a/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki
+++ b/docs/src/site/twiki/CoordinatorFunctionalSpec.twiki
@@ -41,7 +41,7 @@ The goal of this document is to define a coordinator engine system specialized i
    * Clean up unsupported functions
 ---+++!! 02/JUN/2010:
 
-   * Update all EL functions in CoordFunctionSpec with "coord:" prefix
+   * Update all EL functions in CoordFunctionalSpec with "coord:" prefix
 ---+++!! 02/OCT/2009:
 
    * Added Appendix A, Oozie Coordinator XML-Schema
@@ -482,7 +482,7 @@ For example, the value "L" in the day-of-month field means "the last day of the
 February on non-leap years.
 If used in the day-of-week field by itself, it simply means "7" or "SAT".
 But if used in the day-of-week field after another value, it means "the last xxx day of the month" - for example
-"6L" means "the last friday of the month".
+"6L" means "the last Friday of the month".
 You can also specify an offset from the last day of the month, such as "L-3" which would mean the third-to-last day of the
 calendar month.
 When using the 'L' option, it is important not to specify lists, or ranges of values, as you'll get confusing/unexpected results.
@@ -916,7 +916,7 @@ All the datasets instances defined as input events must be available for the coo
 
 Input events are normally parameterized. For example, the last 24 hourly instances of the 'searchlogs' dataset.
 
-Input events can be refer to multiple instances of multiple datasets. For example, the last 24 hourly instances of the 'searchlogs' datset and the last weekly instance of the 'celebrityRumours' dataset.
+Input events can be refer to multiple instances of multiple datasets. For example, the last 24 hourly instances of the 'searchlogs' dataset and the last weekly instance of the 'celebrityRumours' dataset.
 
 ---++++ 6.1.5. Output Events
 
@@ -959,7 +959,7 @@ This set of interdependent *coordinator applications* is referred as a *data pip
 
    * The =hourlyRevenue-coord= coordinator job triggers, every hour, a =revenueCalculator-wf= workflow. It specifies as input the last 4 =checkouts= dataset instances and it specifies as output a new instance of the =hourlyRevenue= dataset.
    * The =dailyRollUpRevenue-coord= coordinator job triggers, every day, a =rollUpRevenue-wf= workflow. It specifies as input the last 24 =hourlyRevenue= dataset instances and it specifies as output a new instance of the =dailyRevenue= dataset.
-   * The =monthlyRollUpRevenue-coord= coordinator job triggers, once a month, a =rollUpRevenue-wf= workflow. It specifies as input all the =dailyRevenue= dataset instance of the month and it specifies as ouptut a new instance of the =monthlyRevenue= dataset.
+   * The =monthlyRollUpRevenue-coord= coordinator job triggers, once a month, a =rollUpRevenue-wf= workflow. It specifies as input all the =dailyRevenue= dataset instance of the month and it specifies as output a new instance of the =monthlyRevenue= dataset.
 
 This example contains describes all the components that conform a data pipeline: datasets, coordinator jobs and coordinator actions (workflows).
 
@@ -1490,7 +1490,7 @@ CA_NT: coordinator action creation (materialization) nominal time
 coord:current(int n) = DS_II + DS_FREQ * ( (CA_NT - DS_II) div DS_FREQ + n)
 </verbatim>
 
-NOTE: The formula above is not 100% correct, because DST changes the calculation has to account for hour shifts. Oozie Coordinator must make the correct calculation accounting for DTS hour shifts.
+NOTE: The formula above is not 100% correct, because DST changes the calculation has to account for hour shifts. Oozie Coordinator must make the correct calculation accounting for DST hour shifts.
 
 When a positive integer is used with the =${coord:current(int n)}=, it refers to a dataset instance in the future from the coordinator action creation (materialization) time. This can be useful when creating dataset instances for future use by other systems.
 
@@ -1767,7 +1767,7 @@ coord:offset(int n, String timeUnit) = CA_NT + floor(timeUnit * n div DS_FREQ) *
 </verbatim>
 
 NOTE: The formula above is not 100% correct, because DST changes the calculation has to account for hour shifts. Oozie Coordinator
-must make the correct calculation accounting for DTS hour shifts.
+must make the correct calculation accounting for DST hour shifts.
 
 When used in 'instance' or 'end-instance' XML elements, the above equation is used; the effect of the floor function is to
 "rewind" the resolved datetime to match the latest instance before the resolved time.
@@ -2242,7 +2242,7 @@ In the case of the synchronous 'logs' dataset, for the first action of this coor
 
 ---+++ 6.7. Parameterization of Coordinator Application Actions
 
-Actions started by a coordinator application normally require access to the dataset instances resolved by the input and output events to be able to propagate them to the the workflow job as parameters.
+Actions started by a coordinator application normally require access to the dataset instances resolved by the input and output events to be able to propagate them to the workflow job as parameters.
 
 The following EL functions are the mechanism that enables this propagation.
 
@@ -2288,7 +2288,7 @@ Coordinator application definition:
    </coordinator-app>
 </verbatim>
 
-In this example, each coordinator action will use as input events the the last day hourly instances of the 'logs' dataset.
+In this example, each coordinator action will use as input events the last day hourly instances of the 'logs' dataset.
 
 The =${coord:dataIn(String name)}= function enables the coordinator application to pass the URIs of all the dataset instances for the last day to the workflow job triggered by the coordinator action. For the =2009-01-02T00:00Z" run, the =${coord:dataIn('inputLogs')}= function will resolve to:
 
@@ -2367,9 +2367,9 @@ Coordinator application definition:
    </coordinator-app>
 </verbatim>
 
-In this example, each coordinator action will use as input events the the last 24 hourly instances of the 'hourlyLogs' dataset to create a 'dailyLogs' dataset instance.
+In this example, each coordinator action will use as input events the last 24 hourly instances of the 'hourlyLogs' dataset to create a 'dailyLogs' dataset instance.
 
-The =${coord:dataOut(String name)}= function enables the coordinator application to pass the URIs of the the dataset instance that will be created by the workflow job triggered by the coordinator action. For the =2009-01-01T24:00Z" run, the =${coord:dataOut('dailyLogs')}= function will resolve to:
+The =${coord:dataOut(String name)}= function enables the coordinator application to pass the URIs of the dataset instance that will be created by the workflow job triggered by the coordinator action. For the =2009-01-01T24:00Z" run, the =${coord:dataOut('dailyLogs')}= function will resolve to:
 
 <verbatim>
   hdfs://bar:8020/app/logs/2009/01/02
@@ -2857,7 +2857,7 @@ with the following change in pig params in addition to database and table.
 
 *Example usage in Pig:*
 This illustrates another pig script which filters partitions based on range, with range limits parameterized with the
-EL funtions
+EL functions
 
 <blockquote>
 A = load '$HCAT_IN_DB.$HCAT_IN_TABLE' using org.apache.hive.hcatalog.pig.HCatLoader();
@@ -3069,7 +3069,7 @@ This section describes the EL functions that could be used to parameterized both
 
 ---++++ 6.9.1. coord:dateOffset(String baseDate, int instance, String timeUnit) EL Function
 
-The =${coord:dateOffset(String baseDate, int instance, String timeUnit)}= EL function calculates date based on the following equaltion : =newDate = baseDate + instance,  * timeUnit=
+The =${coord:dateOffset(String baseDate, int instance, String timeUnit)}= EL function calculates date based on the following equation : =newDate = baseDate + instance,  * timeUnit=
 
 For example, if baseDate is '2009-01-01T00:00Z', instance is '2' and timeUnit is 'MONTH', the return date will be '2009-03-01T00:00Z'. If baseDate is '2009-01-01T00:00Z', instance is '1' and timeUnit is 'YEAR', the return date will be '2010-01-01T00:00Z'.
 
@@ -3478,7 +3478,7 @@ If you add *sla* tags to the Coordinator or Workflow XML files, then the SLA inf
         <sla:dev-contact>[email protected]</sla:dev-contact>
         <sla:qa-contact>[email protected]</sla:qa-contact>
         <sla:se-contact>[email protected]</sla:se-contact>
-        <sla:upstream-apps>applicaion-a,application-b</sla:upstream-apps>
+        <sla:upstream-apps>application-a,application-b</sla:upstream-apps>
         <sla:alert-percentage>99</sla:alert-percentage>
         <sla:alert-frequency>${24 * LAST_HOUR}</sla:alert-frequency>
     </sla:info>
@@ -3505,10 +3505,10 @@ $oozie job -rerun <coord_Job_id> [-nocleanup] [-refresh] [-failed]
 (if neither -action nor -date is given, the exception will be thrown.)
 </verbatim>
 
-The =rerun= option reruns a terminated (=TIMEDOUT=, =SUCCEEDED=, =KILLED=, =FAILED=) coordiantor action when coordiator job
+The =rerun= option reruns a terminated (=TIMEDOUT=, =SUCCEEDED=, =KILLED=, =FAILED=) coordinator action when coordinator job
 is not in =FAILED= or =KILLED= state.
 
-After the command is executed the rerun coordiator action will be in =WAITING= status.
+After the command is executed the rerun coordinator action will be in =WAITING= status.
 
 Refer to the [[DG_CoordinatorRerun][Rerunning Coordinator Actions]] for details on rerun.
 

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_CommandLineTool.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_CommandLineTool.twiki 
b/docs/src/site/twiki/DG_CommandLineTool.twiki
index 3c85eb6..372bfe6 100644
--- a/docs/src/site/twiki/DG_CommandLineTool.twiki
+++ b/docs/src/site/twiki/DG_CommandLineTool.twiki
@@ -48,7 +48,7 @@ usage:
                                       job without actually executing it
                 -failed               re-runs the failed workflow actions of 
the coordinator actions (requires -rerun)
                 -filter <arg>         
<key><comparator><value>[;<key><comparator><value>]*
-                                      (All Coordinator actions satisfying the 
filters will be retreived).
+                                      (All Coordinator actions satisfying the 
filters will be retrieved).
                                       key: status or nominaltime
                                       comparator: =, !=, <, <=, >, >=. = is 
used as OR and others as AND
                                       status: values are valid status like 
SUCCEEDED, KILLED etc. Only = and != apply
@@ -66,7 +66,7 @@ usage:
                 -logfilter <arg>      job log search parameter. Can be 
specified as -logfilter
                                       opt1=val1;opt2=val1;opt3=val1. Supported 
options are recent,
                                       start, end, loglevel, text, limit and 
debug
-                -nocleanup            do not clean up output-events of the 
coordiantor rerun
+                -nocleanup            do not clean up output-events of the 
coordinator rerun
                                       actions (requires -rerun)
                 -offset <arg>         job info offset of actions (default '1', 
requires -info)
                 -oozie <arg>          Oozie URL
@@ -195,7 +195,7 @@ For pseudo/simple authentication the =oozie= CLI uses the 
user name of the curre
 For Kerberos HTTP SPNEGO authentication the =oozie= CLI uses the default 
principal for the OS Kerberos cache
 (normally the principal that did =kinit=).
 
-Oozie uses Apache Hadoop-Auth (Java HTTP SPENGO) library for authentication.
+Oozie uses Apache Hadoop-Auth (Java HTTP SPNEGO) library for authentication.
 This library can be extended to support other authentication mechanisms.
 
 Once authentication is performed successfully the received authentication 
token is cached in the user home directory
@@ -319,7 +319,7 @@ $ oozie job -oozie http://localhost:11000/oozie -suspend 
14-20090525161321-oozie
 The =suspend= option suspends a workflow job in =RUNNING= status.
 After the command is executed the workflow job will be in =SUSPENDED= status.
 
-The =suspend= option suspends a coordinator/bundle  job in =RUNNING=, 
=RUNNIINGWITHERROR= or =PREP= status.
+The =suspend= option suspends a coordinator/bundle  job in =RUNNING=, 
=RUNNINGWITHERROR= or =PREP= status.
 When the coordinator job is suspended, running coordinator actions will stay 
in running and the workflows will be suspended. If the coordinator job is in 
=RUNNING=status, it will transit to =SUSPENDED=status; if it is in 
=RUNNINGWITHERROR=status, it will transit to =SUSPENDEDWITHERROR=; if it is in 
=PREP=status, it will transit to =PREPSUSPENDED=status.
 
 When the bundle job is suspended, running coordinators will be suspended. If 
the bundle job is in =RUNNING=status, it will transit to =SUSPENDED=status; if 
it is in =RUNNINGWITHERROR=status, it will transit to =SUSPENDEDWITHERROR=; if 
it is in =PREP=status, it will transit to =PREPSUSPENDED=status.
@@ -374,7 +374,7 @@ $oozie job -kill <coord_Job_id> [-action 1, 3-4, 7-40] 
[-date 2009-01-01T01:00Z:
    * If one of the actions in the given list of -action is already in terminal 
state, the output of this command will only include the other actions.
    * The dates specified in -date must be UTC.
    * Single date specified in -date must be able to find an action with 
matched nominal time to be effective.
-   * After the command is executed the killed coordiator action will have 
=KILLED= status.
+   * After the command is executed the killed coordinator action will have 
=KILLED= status.
 
 ---+++ Changing endtime/concurrency/pausetime/status of a Coordinator Job
 
@@ -402,7 +402,7 @@ Conditions and usage:
    * New concurrency value has to be a valid integer.
    * All lookahead actions which are in WAITING/READY state will be revoked 
according to the new pause/end time. If any action after new pause/end time is 
not in WAITING/READY state, an exception will be thrown.
    * Also empty string "" can be used to reset pause time to none.
-   * Endtime/concurency/pausetime of IGNORED Job cannot be changed.
+   * Endtime/concurrency/pausetime of IGNORED Job cannot be changed.
 
 After the command is executed the job's end time, concurrency or pause time 
should be changed. If an already-succeeded job changes its end time, its status 
will become running.
 
@@ -448,7 +448,7 @@ Example:
 $ oozie job -oozie http://localhost:11000/oozie -config job.properties -rerun 
14-20090525161321-oozie-joe
 </verbatim>
 
-The =rerun= option reruns a completed ( =SUCCCEDED=, =FAILED= or =KILLED= ) 
job skipping the specified nodes.
+The =rerun= option reruns a completed ( =SUCCEEDED=, =FAILED= or =KILLED= ) 
job skipping the specified nodes.
 
 The parameters for the job must be provided in a file, either a Java 
Properties file (.properties) or a Hadoop XML
 Configuration file (.xml). This file must be specified with the 
<code>-config</code> option.
@@ -712,7 +712,7 @@ $
 Search example with specific date range.
 <verbatim>
 $ ./oozie job -log 0000003-140319184715726-oozie-puru-C  -logfilter 
"start=2014-03-20 10:00:57,063;end=2014-03-20 10:10:57,063" -oozie 
http://localhost:11000/oozie/
-2014-03-20 10:00:57,063  INFO CoordActionUpdateXCommand:539 - SERVER[ ] 
USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000003-140319184715726-oozie-puru-C] 
ACTION[0000003-140319184715726-oozie-puru-C@1] Updating Coordintaor action id 
:0000003-140319184715726-oozie-puru-C@1 status  to KILLED, pending = 0
+2014-03-20 10:00:57,063  INFO CoordActionUpdateXCommand:539 - SERVER[ ] 
USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000003-140319184715726-oozie-puru-C] 
ACTION[0000003-140319184715726-oozie-puru-C@1] Updating Coordinator action id 
:0000003-140319184715726-oozie-puru-C@1 status  to KILLED, pending = 0
 2014-03-20 10:02:18,967  INFO CoordMaterializeTransitionXCommand:539 - SERVER[ 
] USER[-] GROUP[-] TOKEN[] APP[aggregator-coord] 
JOB[0000003-140319184715726-oozie-puru-C] ACTION[-] materialize actions for 
tz=Coordinated Universal Time,
  start=Thu Dec 31 18:00:00 PST 2009, end=Thu Dec 31 19:00:00 PST 2009,
  timeUnit 12,
@@ -963,7 +963,7 @@ Valid filter names are:
 The query will do an AND among all the filter names. The query will do an OR 
among all the filter values for the same
 name. Multiple values must be specified as different name value pairs.
 
-startCreatedTime and endCreatedTime should be specified either in *ISO8601 
(UTC)* format (*yyyy-MM-dd'T'HH:mm'Z'*) or a offset value in days or hours from 
the current time. for example, -2d means the current time - 2 days. -3h means 
the current time - 3 hours, -5m means the current time - 5 minutes
+startCreatedTime and endCreatedTime should be specified either in *ISO8601 
(UTC)* format (*yyyy-MM-dd'T'HH:mm'Z'*) or an offset value in days, hours, or 
minutes from the current time. For example, -2d means the current time - 2 
days. -3h means the current time - 3 hours, -5m means the current time - 5 minutes
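The relative-offset shorthand described above can be made concrete with a small sketch; the helper below is illustrative only, not part of Oozie:

```python
from datetime import datetime, timedelta

# Map the unit letters from the text above (-2d, -3h, -5m) to timedelta
# keyword arguments. This helper is illustrative, not Oozie code.
_UNITS = {"d": "days", "h": "hours", "m": "minutes"}

def resolve_offset(value, now):
    """Resolve an offset such as '-2d' to an absolute datetime."""
    amount, unit = int(value[:-1]), value[-1]
    return now + timedelta(**{_UNITS[unit]: amount})

now = datetime(2015, 4, 1, 12, 0)
print(resolve_offset("-2d", now))  # 2015-03-30 12:00:00
```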
 
 ---+++ Checking the Status of multiple Coordinator Jobs
 

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_CustomActionExecutor.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_CustomActionExecutor.twiki 
b/docs/src/site/twiki/DG_CustomActionExecutor.twiki
index f88e13b..93cda00 100644
--- a/docs/src/site/twiki/DG_CustomActionExecutor.twiki
+++ b/docs/src/site/twiki/DG_CustomActionExecutor.twiki
@@ -9,7 +9,7 @@
 ---++ Introduction
 Oozie can be extended to support additional action types by writing a custom 
[[WorkflowFunctionalSpec#ActionNodes][Action Node]]. Action Nodes can be 
synchronous or asynchronous.
    * Synchronous Node - Sync nodes are executed inline by Oozie, which waits 
for completion of these nodes before proceeding. Hence, these nodes should 
almost never be used and are meant for lightweight tasks like FileSystem move, 
mkdir, delete.
-   * Asynchronouse Nodes - Oozie starts asynchrnous nodes, and then monitors 
the action being executed for completion. This is done via a callback from the 
action or Oozie polling for the action status.
+   * Asynchronous Nodes - Oozie starts asynchronous nodes, and then monitors 
the action being executed for completion. This is done via a callback from the 
action or Oozie polling for the action status.
 
 ---++ Writing a custom Action Node
 Action Executors are configured in the oozie configuration file 
oozie-site.xml. These executors are loaded during Oozie startup. 
[[DG_CustomActionExecutor#Deploying_a_custom_Action_Executor][Deploying a 
Custom Action Executor]].
@@ -45,7 +45,7 @@ For sync actions, this method will not be called, and should 
throw an Unsupporte
 The implementation for a custom action should interact with and kill the 
running action, and take care of any cleanup which may be required. 
context.setEndData(status, signalValue) should be called with both values set 
to Action.Status.KILLED.
 ---+++ end(ActionExecutor.Context context, Action action)
 <code>end(...)</code> is used for any cleanup or processing which may need to 
be done after completion of the action. After any processing, 
context.setEndData(status, signalValue) should be called to complete execution 
of the action and trigger the next workflow transition. signalValue can be 
Action.Status.OK or Action.Status.ERROR.
----+++ Registereing Errors
+---+++ Registering Errors
 Oozie actions can generate different types of Errors.
    * TRANSIENT - will be retried
    * NON TRANSIENT - the job will be suspended and can be resumed later by 
human intervention, after fixing whatever problem caused this error.
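The error categories above determine what Oozie does next with the action. As a rough sketch of those semantics in plain Python (the function and return values here are hypothetical, not the Java ActionExecutor API):

```python
TRANSIENT, NON_TRANSIENT = "TRANSIENT", "NON_TRANSIENT"

def next_step(error_type, attempts, max_retries=3):
    """Rough model of how an error category maps to Oozie's reaction."""
    if error_type == TRANSIENT and attempts < max_retries:
        return "retry"    # transient errors are retried automatically
    if error_type == NON_TRANSIENT:
        return "suspend"  # job suspends until a human fixes and resumes it
    return "error"        # otherwise follow the action's error transition

print(next_step(TRANSIENT, attempts=1))       # retry
print(next_step(NON_TRANSIENT, attempts=1))   # suspend
```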

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_EmailActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_EmailActionExtension.twiki 
b/docs/src/site/twiki/DG_EmailActionExtension.twiki
index fee16b0..0f702e4 100644
--- a/docs/src/site/twiki/DG_EmailActionExtension.twiki
+++ b/docs/src/site/twiki/DG_EmailActionExtension.twiki
@@ -12,7 +12,7 @@
 ---++++ 3.2.4 Email action
 
 The =email= action allows sending emails in Oozie from a workflow application. 
An email action must provide =to=
-addresses, =cc= addresses (optional), a =subject= and a =body=. Multiple 
reciepents of an email can be provided
+addresses, =cc= addresses (optional), a =subject= and a =body=. Multiple 
recipients of an email can be provided
 as comma separated addresses.
 
 The email action is executed synchronously, and the workflow job will wait 
until the specified
@@ -41,7 +41,7 @@ All values specified in the =email= action can be 
parameterized (templatized) us
 </workflow-app>
 </verbatim>
 
-The =to= and =cc= commands are used to specify reciepents who should get the 
mail. Multiple email reciepents can be provided
+The =to= and =cc= commands are used to specify recipients who should get the 
mail. Multiple email recipients can be provided
 using comma-separated values. Providing a =to= command is necessary, while the 
=cc= may optionally be used as well.
 
 The =subject= and =body= commands are used to specify subject and body of the 
mail.

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_Examples.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_Examples.twiki 
b/docs/src/site/twiki/DG_Examples.twiki
index 6c44574..a49c222 100644
--- a/docs/src/site/twiki/DG_Examples.twiki
+++ b/docs/src/site/twiki/DG_Examples.twiki
@@ -33,7 +33,7 @@ The examples assume the JobTracker is =localhost:8021= and 
the NameNode is =hdfs
 values are different, the job properties files in the examples directory must 
be edited to the correct values.
 
 The example applications are under the examples/app directory, one directory 
per example. The directory contains the
-application XML file (workflow, or worklfow and coordinator), the 
=job.properties= file to submit the job and any JAR
+application XML file (workflow, or workflow and coordinator), the 
=job.properties= file to submit the job and any JAR
 files the example may need.
 
 The inputs for all examples are in the =examples/input-data/= directory.
@@ -123,7 +123,7 @@ import java.util.Properties;
         Thread.sleep(10 * 1000);
     }
 .
-    // print the final status o the workflow job
+    // print the final status of the workflow job
     System.out.println("Workflow job completed ...");
     System.out.println(wf.getJobInfo(jobId));
     ...
@@ -174,7 +174,7 @@ import java.util.Properties;
         Thread.sleep(10 * 1000);
     }
 .
-    // print the final status o the workflow job
+    // print the final status of the workflow job
     System.out.println("Workflow job completed ...");
     System.out.println(wf.getJobInfo(jobId));
 .
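The wait loop in the Java example can be restated language-neutrally; this Python sketch shows the same poll-sleep-recheck pattern, with a =get_status= callable standing in for =wf.getJobInfo(jobId).getStatus()=:

```python
import time

def wait_for_completion(get_status, poll_seconds=10, sleep=time.sleep):
    """Poll until the workflow job leaves RUNNING, then return its status."""
    while get_status() == "RUNNING":
        sleep(poll_seconds)
    return get_status()

# Demo with a stub that reports SUCCEEDED on the third poll:
statuses = iter(["RUNNING", "RUNNING", "SUCCEEDED", "SUCCEEDED"])
print(wait_for_completion(lambda: next(statuses), sleep=lambda s: None))
```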

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_Hive2ActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_Hive2ActionExtension.twiki 
b/docs/src/site/twiki/DG_Hive2ActionExtension.twiki
index e09040d..37aff88 100644
--- a/docs/src/site/twiki/DG_Hive2ActionExtension.twiki
+++ b/docs/src/site/twiki/DG_Hive2ActionExtension.twiki
@@ -114,7 +114,7 @@ expressions.
     ...
     <action name="my-hive2-action">
         <hive2 xmlns="uri:oozie:hive2-action:0.1">
-            <job-traker>foo:8021</job-tracker>
+            <job-tracker>foo:8021</job-tracker>
             <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>
@@ -209,4 +209,4 @@ with a Kerberized Hive Server 2.
 
 [[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>
\ No newline at end of file
+</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_JMSNotifications.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_JMSNotifications.twiki 
b/docs/src/site/twiki/DG_JMSNotifications.twiki
index 098b080..a4b0f0d 100644
--- a/docs/src/site/twiki/DG_JMSNotifications.twiki
+++ b/docs/src/site/twiki/DG_JMSNotifications.twiki
@@ -1,6 +1,6 @@
-<noauolink>
+<noautolink>
 
-[[index][::Go back o Oozie Documentation Index::]]
+[[index][::Go back to Oozie Documentation Index::]]
 
 ---+!! JMS Notifications
 

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_QuickStart.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_QuickStart.twiki 
b/docs/src/site/twiki/DG_QuickStart.twiki
index a348df4..7c46dee 100644
--- a/docs/src/site/twiki/DG_QuickStart.twiki
+++ b/docs/src/site/twiki/DG_QuickStart.twiki
@@ -212,7 +212,7 @@ The Java 1.6+ =bin= directory should be in the command path.
 
 Copy and expand the =oozie-client= TAR.GZ file bundled with the distribution. 
Add the =bin/= directory to the =PATH=.
 
-Refer to the [[DG_CommandLineTool][Command Line Interface Utilities]] document 
for a a full reference of the =oozie=
+Refer to the [[DG_CommandLineTool][Command Line Interface Utilities]] document 
for a full reference of the =oozie=
 command line tool.
 
 NOTE: The Oozie server installation includes the Oozie client. The Oozie 
client should be installed in remote machines

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_SLAMonitoring.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_SLAMonitoring.twiki 
b/docs/src/site/twiki/DG_SLAMonitoring.twiki
index 1413945..106ce8a 100644
--- a/docs/src/site/twiki/DG_SLAMonitoring.twiki
+++ b/docs/src/site/twiki/DG_SLAMonitoring.twiki
@@ -55,7 +55,7 @@ job does not end successfully e.g. goes to error state - 
Failed/Killed/Error/Tim
 ---++ Configuring SLA in Applications
 
 To make your jobs trackable for SLA, you simply need to add the =<sla:info>= 
tag to your workflow application definition.
-If you were already using the existing SLA schema in your workflows (Schema 
xmlns:sla="uri:oozie:sla:0.1"), you dont need to
+If you were already using the existing SLA schema in your workflows (Schema 
xmlns:sla="uri:oozie:sla:0.1"), you don't need to
 do anything extra to receive SLA notifications via JMS messages. This new SLA 
monitoring framework is backward-compatible -
 no need to change application XML for now and you can continue to fetch old 
records via the [[DG_CommandLineTool#SLAOperations][command line API]].
 However, usage of old schema and API is deprecated and we strongly recommend 
using new schema.
@@ -106,7 +106,7 @@ This new schema is much more compact and meaningful, 
getting rid of redundant an
    * ==should-start==: Relative to =nominal-time= this is the amount of time 
(along with time-unit - MINUTES, HOURS, DAYS) within which your job should 
*start running* to meet SLA. This is optional.
    * ==should-end==: Relative to =nominal-time= this is the amount of time 
(along with time-unit - MINUTES, HOURS, DAYS) within which your job should 
*finish* to meet SLA.
    * ==max-duration==: This is the maximum amount of time (along with 
time-unit - MINUTES, HOURS, DAYS) your job is expected to run. This is optional.
-   * ==alert-events==: Specify the types of events for which *Email* alerts 
should be sent. Allowable values in this comma-separated list are start_miss, 
end_miss and duration_miss. *_met events can generally be deemed low priority 
and hence email alerting for these is not neccessary. However, note that this 
setting is only for alerts via *email* alerts and not via JMS messages, where 
all events send out notifications, and user can filter them using desired 
selectors. This is optional and only applicable when alert-contact is 
configured.
+   * ==alert-events==: Specify the types of events for which *Email* alerts 
should be sent. Allowable values in this comma-separated list are start_miss, 
end_miss and duration_miss. *_met events can generally be deemed low priority 
and hence email alerting for these is not necessary. However, note that this 
setting is only for alerts via *email* alerts and not via JMS messages, where 
all events send out notifications, and user can filter them using desired 
selectors. This is optional and only applicable when alert-contact is 
configured.
    * ==alert-contact==: Specify a comma separated list of email addresses 
where you wish your alerts to be sent. This is optional and need not be 
configured if you just want to view your job SLA history in the UI and do not 
want to receive email alerts.
 
 NOTE: All tags can be parameterized.
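To make the tag semantics concrete, here is a small arithmetic sketch (ours, not Oozie's implementation; the values are examples) of how =nominal-time= combines with =should-start= and =should-end=:

```python
from datetime import datetime, timedelta

nominal_time = datetime(2015, 4, 1, 10, 0)   # example nominal time
should_start = timedelta(minutes=10)         # must *start* within 10 min
should_end = timedelta(minutes=30)           # must *finish* within 30 min

start_deadline = nominal_time + should_start
end_deadline = nominal_time + should_end

# A job starting or ending after these instants triggers the corresponding
# start_miss / end_miss event.
print(start_deadline, end_deadline)
```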

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_ShellActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_ShellActionExtension.twiki 
b/docs/src/site/twiki/DG_ShellActionExtension.twiki
index fb8ee09..dc856f5 100644
--- a/docs/src/site/twiki/DG_ShellActionExtension.twiki
+++ b/docs/src/site/twiki/DG_ShellActionExtension.twiki
@@ -105,15 +105,15 @@ using one or more =argument= element.
 The =argument= element, if present, contains an argument to be passed to
the Shell command.
 
-The =env-var= element, if present, contains the environemnt to be passed
+The =env-var= element, if present, contains the environment to be passed
 to the Shell command. =env-var= should contain only one pair of environment 
variable
and value. If the pair contains a variable such as $PATH, it should follow 
the
 Unix convention such as PATH=$PATH:mypath. Don't use ${PATH} which will be
-substitued by Oozie's EL evaluator.
+substituted by Oozie's EL evaluator.
 
 A =shell= action creates a Hadoop configuration. The Hadoop configuration is 
made available as a local file to the
 Shell application in its running directory. The exact file path is exposed to 
the spawned shell using the environment
-variable called =OOZIE_ACTION_CONF_XML=.The Shell application can access the 
environemnt variable to read the action
+variable called =OOZIE_ACTION_CONF_XML=. The Shell application can access the 
environment variable to read the action
 configuration XML file path.
  
 If the =capture-output= element is present, it instructs Oozie to capture the 
STDOUT output of the shell command

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/DG_SqoopActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_SqoopActionExtension.twiki 
b/docs/src/site/twiki/DG_SqoopActionExtension.twiki
index b256529..457e899 100644
--- a/docs/src/site/twiki/DG_SqoopActionExtension.twiki
+++ b/docs/src/site/twiki/DG_SqoopActionExtension.twiki
@@ -112,7 +112,7 @@ Using the =command= element:
     ...
     <action name="myfirsthivejob">
         <sqoop xmlns="uri:oozie:sqoop-action:0.2">
-            <job-traker>foo:8021</job-tracker>
+            <job-tracker>foo:8021</job-tracker>
             <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>
@@ -139,7 +139,7 @@ The same Sqoop action using =arg= elements:
     ...
     <action name="myfirsthivejob">
         <sqoop xmlns="uri:oozie:sqoop-action:0.2">
-            <job-traker>foo:8021</job-tracker>
+            <job-tracker>foo:8021</job-tracker>
             <name-node>bar:8020</name-node>
             <prepare>
                 <delete path="${jobOutput}"/>

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/ENG_Building.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/ENG_Building.twiki 
b/docs/src/site/twiki/ENG_Building.twiki
index efd0b2e..e76368e 100644
--- a/docs/src/site/twiki/ENG_Building.twiki
+++ b/docs/src/site/twiki/ENG_Building.twiki
@@ -26,7 +26,7 @@ The source of the modified plugin is available in the Oozie 
GitHub repository, i
 
 To build and install it locally run the following command in the =ydoxia= 
branch:
 
-<verbation>
+<verbatim>
 $ mvn install
 </verbatim>
 

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/ENG_Custom_Authentication.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/ENG_Custom_Authentication.twiki 
b/docs/src/site/twiki/ENG_Custom_Authentication.twiki
index d366a0d..6bac3a6 100644
--- a/docs/src/site/twiki/ENG_Custom_Authentication.twiki
+++ b/docs/src/site/twiki/ENG_Custom_Authentication.twiki
@@ -21,13 +21,13 @@ The following authenticators are provided in hadoop-auth:
    * PseudoAuthenticationHandler     : the authenticator handler provides a 
pseudo authentication mechanism that accepts the user name specified as a query 
string parameter.
    * AltKerberosAuthenticationHandler: the authenticator handler allows for 
Kerberos SPNEGO authentication for non-browsers and an alternate form of 
authentication for browsers.  A subclass must implement the alternate 
authentication (see [[ENG_Custom_Authentication#LoginServerExample][Example 
Login Server]])
 
-3. =org.apache.hadoop.security.authentication.server.AuthenticationFilter:= A 
servlet filter enables protecting web application resources with different 
authentication mechanisms provided by AuthenticationHandler. To enable the 
filter, web application resources file (ex. web.xml) needs to include the a 
filter class derived from =AuthenticationFilter=.
+3. =org.apache.hadoop.security.authentication.server.AuthenticationFilter:= A 
servlet filter that protects web application resources with the different 
authentication mechanisms provided by AuthenticationHandler. To enable the 
filter, the web application resources file (e.g. web.xml) needs to include a 
filter class derived from =AuthenticationFilter=.
 
 ---++ Provide Custom Client Authenticator
 
 On the client side, a custom authentication mechanism requires an extended 
=Authenticator= to retrieve the authentication token or certificate and set it 
on the 'token' instance in the method 'authenticate()'.
 
-The following methods should be overriden by derived Authenticator.
+The following methods should be overridden by derived Authenticator.
 <verbatim>
 
    public void authenticate(URL url, AuthenticatedURL.Token token)
@@ -43,7 +43,7 @@ The following methods should be overriden by derived 
Authenticator.
 Eclipse and IntelliJ can directly use the MiniOozie Maven project files. The 
MiniOozie project can be imported into Eclipse and IntelliJ as an independent 
project.
 
-overriden methods
+overridden methods
 <verbatim>
                 mechanism, retrieve the cert string or token.
                String encodedStr = URLEncoder.encode(aCertString, "UTF-8");
@@ -88,7 +88,7 @@ The following shows an example of a singleton class which can 
be used at a class
 
 Apache Oozie contains a default class 
=org.apache.oozie.client.AuthOozieClient= to support Kerberos HTTP SPNEGO 
authentication, pseudo/simple authentication and anonymous access for client 
connections.
 
-To provide other authentication mechanisms, a Oozie client should extend from 
=AuthOozieClient= and provide the following methods should be overriden by 
derived classes to provide custom authentication:
+To provide other authentication mechanisms, an Oozie client should extend 
=AuthOozieClient=. The following methods should be overridden by derived 
classes to provide custom authentication:
 
    * getAuthenticator()   : return corresponding Authenticator based on value 
specified by user at =auth= command option.
    * createConnection()   : create a singleton class at Authenticator to allow 
client set and get key-value configuration for authentication.
@@ -97,7 +97,7 @@ To provide other authentication mechanisms, a Oozie client 
should extend from =A
 
 On the server side, a custom authentication mechanism requires an extended 
AuthenticationHandler to retrieve the authentication token or certificate from 
the http request and verify it. After successful verification, an 
=AuthenticationToken= is created with the user name and current authentication 
type. With this token, the request can proceed to a response.
 
-The following methods should be overriden by derived AuthenticationHandler.
+The following methods should be overridden by derived AuthenticationHandler.
 <verbatim>
 
     public AuthenticationToken authenticate(HttpServletRequest request, 
HttpServletResponse response)
@@ -300,4 +300,4 @@ The README.txt file in the =login= directory contains 
instructions on how to bui
 
 [[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>
\ No newline at end of file
+</noautolink>
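For orientation, the client-side flow the extended Authenticator implements — obtain a credential, URL-encode it the way the =URLEncoder.encode(aCertString, "UTF-8")= fragment above does, and attach it to the request — looks roughly like this in Python (the header name and "Custom" scheme are hypothetical):

```python
from urllib.parse import quote

def build_auth_header(cert_string):
    """URL-encode a credential and place it in a request header (illustrative)."""
    return {"Authorization": "Custom " + quote(cert_string, safe="")}

print(build_auth_header("user=joe&type=custom"))
```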

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/ENG_MiniOozie.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/ENG_MiniOozie.twiki 
b/docs/src/site/twiki/ENG_MiniOozie.twiki
index a1fc3f1..97c0d07 100644
--- a/docs/src/site/twiki/ENG_MiniOozie.twiki
+++ b/docs/src/site/twiki/ENG_MiniOozie.twiki
@@ -72,4 +72,4 @@ The test directories under MiniOozie are:
 
 [[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>
\ No newline at end of file
+</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/WebServicesAPI.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/WebServicesAPI.twiki 
b/docs/src/site/twiki/WebServicesAPI.twiki
index bf2bf50..f169ba7 100644
--- a/docs/src/site/twiki/WebServicesAPI.twiki
+++ b/docs/src/site/twiki/WebServicesAPI.twiki
@@ -12,7 +12,7 @@ The Oozie Web Services API is a HTTP REST JSON API.
 
 All responses are in =UTF-8=.
 
-Assuming Oozie is runing at =OOZIE_URL=, the following web services end points 
are supported:
+Assuming Oozie is running at =OOZIE_URL=, the following web services end 
points are supported:
 
    * <OOZIE_URL>/versions
    * <OOZIE_URL>/v1/admin
@@ -584,7 +584,7 @@ An HTTP POST request with an XML configuration as payload 
creates a job.
 
 The type of job is determined by the presence of one of the following 3 
properties:
 
-   * =oozie.wf.application.path= : path to a workflow aplication directory, 
creates a workflow job
+   * =oozie.wf.application.path= : path to a workflow application directory, 
creates a workflow job
    * =oozie.coord.application.path= : path to a coordinator application file, 
creates a coordinator job
    * =oozie.bundle.application.path= : path to a bundle application file, 
creates a bundle job
    
@@ -706,7 +706,7 @@ Content-Type: application/json;charset=UTF-8
 
 ---++++ Proxy Pig Job Submission
 
-You can submit a Workflow that contains a single Pig action without writing a 
workflow.xml.  Any requred Jars or other files must
+You can submit a Workflow that contains a single Pig action without writing a 
workflow.xml.  Any required Jars or other files must
 already exist in HDFS.
 
 The following properties are required:
@@ -719,7 +719,7 @@ The following properties are required:
 
 The following properties are optional:
    * =oozie.pig.script.params.size=: The number of parameters you'll be 
passing to Pig
-   * =oozie.pig.script.params.n=: A parameter (variable definition for the 
script) in 'key=value' format, the 'n' should be an integer starting with 0 to 
indicate the parameter number
+   * =oozie.pig.script.params.n=: A parameter (variable definition for 
the script) in 'key=value' format, the 'n' should be an integer starting with 0 
to indicate the parameter number
    * =oozie.pig.options.size=: The number of options you'll be passing to Pig
    * =oozie.pig.options.n=: An argument to pass to Pig, the 'n' should be an 
integer starting with 0 to indicate the option number
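The numbered-property convention above (a =size= plus =.0=, =.1=, …) can be assembled mechanically; a sketch with example parameter values:

```python
def pig_param_props(params):
    """Build the oozie.pig.script.params.* properties for a proxy submission."""
    props = {"oozie.pig.script.params.size": str(len(params))}
    for n, kv in enumerate(params):  # 'n' starts at 0, matching the docs
        props["oozie.pig.script.params.%d" % n] = kv
    return props

print(pig_param_props(["INPUT=/user/joe/in", "OUTPUT=/user/joe/out"]))
```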
 
@@ -806,7 +806,7 @@ Content-Type: application/json;charset=UTF-8
 
 ---++++ Proxy Hive Job Submission
 
-You can submit a Workflow that contains a single Hive action without writing a 
workflow.xml.  Any requred Jars or other files must
+You can submit a Workflow that contains a single Hive action without writing a 
workflow.xml.  Any required Jars or other files must
 already exist in HDFS.
 
 The following properties are required:
@@ -889,7 +889,7 @@ Content-Type: application/json;charset=UTF-8
 
 ---++++ Proxy Sqoop Job Submission
 
-You can submit a Workflow that contains a single Sqoop command without writing 
a workflow.xml. Any requred Jars or other
+You can submit a Workflow that contains a single Sqoop command without writing 
a workflow.xml. Any required Jars or other
  files must already exist in HDFS.
 
 The following properties are required:
@@ -981,7 +981,7 @@ Valid values for the 'action' parameter are 'start', 
'suspend', 'resume', 'kill'
 
 Rerunning and changing a job require additional parameters, and are described 
below:
 
----+++++  Re-Runing a Workflow Job
+---+++++  Re-Running a Workflow Job
 
 A workflow job in =SUCCEEDED=, =KILLED= or =FAILED= status can be partially 
rerun specifying a list
 of workflow nodes to skip during the rerun. All the nodes in the skip list 
must have completed its
@@ -1021,7 +1021,7 @@ Content-Type: application/xml;charset=UTF-8
 HTTP/1.1 200 OK
 </verbatim>
 
----+++++ Re-Runing a coordinator job
+---+++++ Re-Running a coordinator job
 
 A coordinator job in =RUNNING=, =SUCCEEDED=, =KILLED= or =FAILED= status can be 
partially rerun by specifying the coordinator actions
 to re-execute.
@@ -1036,7 +1036,7 @@ The =scope= of the rerun depends on the type:
 
 The =refresh= parameter can be =true= or =false= to specify if the user wants 
to refresh an action's input and output events.
 
-The =nocleanp= paramter can be =true= or =false= to specify is the user wants 
to cleanup output events for the rerun actions.
+The =nocleanup= parameter can be =true= or =false= to specify if the user 
wants to clean up output events for the rerun actions.
 
 *Request:*
 
@@ -1058,7 +1058,7 @@ PUT 
/oozie/v1/job/job-3?action=coord-rerun&type=date2009-02-01T00:10Z::2009-03-0
 HTTP/1.1 200 OK
 </verbatim>
 
----+++++ Re-Runing a bundle job
+---+++++ Re-Running a bundle job
 
 A bundle job in =RUNNING=, =SUCCEEDED=, =KILLED= or =FAILED= status can be 
partially rerun by specifying the coordinators to
 re-execute.
@@ -1072,7 +1072,7 @@ by =::=. If empty or not included, Oozie will figure this 
out for you
 
 The =refresh= parameter can be =true= or =false= to specify if the user wants 
to refresh the coordinator's input and output events.
 
-The =nocleanp= paramter can be =true= or =false= to specify is the user wants 
to cleanup output events for the rerun coordinators.
+The =nocleanup= parameter can be =true= or =false= to specify if the user 
wants to clean up output events for the rerun coordinators.
 
 *Request:*
 
@@ -1340,7 +1340,7 @@ GET 
/oozie/v1/job/0000002-130507145349661-oozie-joe-W?show=info&offset=5&len=10
 </verbatim>
 Query parameters =offset=, =length=, and =filter= can be specified with a 
coordinator job to retrieve specific actions. 
 The query parameter =order= with value "desc" can be used to retrieve the 
latest coordinator actions materialized instead of actions from @1.
-Query parameters =filter= can be used to retrieve coodinator actions matching 
specific status.
+The query parameter =filter= can be used to retrieve coordinator actions 
matching a specific status.
 Default is offset=0, len=0 for v2/job (i.e., does not return any coordinator 
actions) and offset=0, len=1000 with v1/job and v0/job.
 So if you need actions to be returned with v2 API, specifying =len= parameter 
is necessary.
 Default =order= is "asc".
@@ -1351,7 +1351,7 @@ Note that the filter is URL encoded, its decoded value is 
<code>status=KILLED</c
 <verbatim>
 GET 
/oozie/v1/job/0000001-111219170928042-oozie-joe-C?show=info&filter=status%21%3DSUCCEEDED&order=desc
 </verbatim>
-This retrives coordinator actions except for SUCCEEDED status, which is useful 
for debugging.
+This retrieves coordinator actions except for SUCCEEDED status, which is 
useful for debugging.
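The encoding in that URL can be reproduced with any standard URL-escaping routine; for instance, with Python's standard library:

```python
from urllib.parse import quote, unquote

# status!=SUCCEEDED must be URL encoded before it goes in the query string.
encoded = quote("status!=SUCCEEDED", safe="")
print(encoded)  # status%21%3DSUCCEEDED

# Decoding recovers the original filter expression.
assert unquote(encoded) == "status!=SUCCEEDED"
```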
 
 ---++++ Job Application Definition
 
@@ -1390,7 +1390,7 @@ Content-Type: application/xml;charset=UTF-8
     ...
     </datasets>
     ...
-</cordinator-app>
+</coordinator-app>
 </verbatim>
 
 *Response for a bundle job:*
@@ -1651,7 +1651,7 @@ Additionally the =offset= and =len= parameters can be 
used for pagination. The s
 Moreover, the =jobtype= parameter can be used to specify what type of job 
to look for.
 The valid values of job type are: =wf=, =coordinator= or =bundle=.
 
-startCreatedTime and endCreatedTime should be specified either in *ISO8601 
(UTC)* format (*yyyy-MM-dd'T'HH:mm'Z'*) or a offset value in days or hours from 
the current time. for example, -2d means the current time - 2 days. -3h means 
the current time - 3 hours. -5m means the current time - 5 minutes
+startCreatedTime and endCreatedTime should be specified either in *ISO8601 
(UTC)* format (*yyyy-MM-dd'T'HH:mm'Z'*) or an offset value in days, hours, or 
minutes from the current time. For example, -2d means the current time - 2 
days. -3h means the current time - 3 hours. -5m means the current time - 5 minutes
 
 ---++++ Bulk modify jobs
 
@@ -1890,7 +1890,7 @@ The Oozie Web Services API is a HTTP REST JSON API.
 
 All responses are in =UTF-8=.
 
-Assuming Oozie is runing at =OOZIE_URL=, the following web services end points 
are supported:
+Assuming Oozie is running at =OOZIE_URL=, the following web services end 
points are supported:
 
    * <OOZIE_URL>/versions
    * <OOZIE_URL>/v2/admin
@@ -1988,7 +1988,7 @@ Content-Type: application/json;charset=UTF-8
 An ignore request is done with an HTTP PUT request with an =ignore=
 
 The =type= parameter supports =action= only.
-The =scope= parameter can contain coodinator action id(s) to be ignored.
+The =scope= parameter can contain coordinator action id(s) to be ignored.
 Multiple action ids can be passed to the =scope= parameter
 
 *Request:*

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/docs/src/site/twiki/WorkflowFunctionalSpec.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/WorkflowFunctionalSpec.twiki 
b/docs/src/site/twiki/WorkflowFunctionalSpec.twiki
index fb4845b..e3790a4 100644
--- a/docs/src/site/twiki/WorkflowFunctionalSpec.twiki
+++ b/docs/src/site/twiki/WorkflowFunctionalSpec.twiki
@@ -372,7 +372,7 @@ boolean value, =true= or =false=. For example:
 
 The =name= attribute in the =decision= node is the name of the decision node.
 
-Each =case= elements contains a predicate an a transition name. The predicate 
ELs are evaluated
+Each =case= element contains a predicate and a transition name. The predicate 
ELs are evaluated
 in order until one returns =true= and the corresponding transition is taken.
 
 The =default= element indicates the transition to take if none of the 
predicates evaluates
@@ -390,9 +390,8 @@ state if none of the predicates evaluates to true.
         <switch>
             <case to="reconsolidatejob">
               ${fs:fileSize(secondjobOutputDir) gt 10 * GB}
-            </case>
-            <case to="rexpandjob">
-              ${fs:filSize(secondjobOutputDir) lt 100 * MB}
+            </case> <case to="rexpandjob">
+              ${fs:fileSize(secondjobOutputDir) lt 100 * MB}
             </case>
             <case to="recomputejob">
               ${ hadoop:counters('secondjob')[RECORDS][REDUCE_OUT] lt 1000000 }
@@ -603,7 +602,7 @@ MapReduce action, if you're more familiar with MapReduce's 
Java API, if there's
 configuration that's difficult to do in straight XML (e.g. Avro).
 
 Create a class that implements the 
org.apache.oozie.action.hadoop.OozieActionConfigurator interface from the 
"oozie-sharelib-oozie"
-artifact.  It contains a single method that recieves a =JobConf= as an 
argument.  Any configuration properties set on this =JobConf=
+artifact.  It contains a single method that receives a =JobConf= as an 
argument.  Any configuration properties set on this =JobConf=
 will be used by the MapReduce action.
 
 The OozieActionConfigurator has this signature:
@@ -789,7 +788,7 @@ Properties specified in the =config-class= class override 
properties specified i
 
 External Stats can be turned on/off by specifying the property 
_oozie.action.external.stats.write_ as _true_ or _false_ in the configuration 
element of workflow.xml. The default value for this property is _false_.
 
-The =file= element, if present, must specify the target sybolic link for 
binaries by separating the original file and target with a # 
(file#target-sym-link). This is not required for libraries.
+The =file= element, if present, must specify the target symbolic link for 
binaries by separating the original file and target with a # 
(file#target-sym-link). This is not required for libraries.
 
 The =mapper= and =reducer= processes for streaming jobs should specify the 
executable command with URL encoding, e.g. '%' should be replaced by '%25'.
 
@@ -1835,8 +1834,8 @@ executed or it has not completed yet.
 
 *String wf:lastErrorNode()*
 
-It returns the name of the last workflow action node that exit with an =ERROR= 
exit state, or an empty string if no a
-ction has exited with =ERROR= state in the current workflow job.
+It returns the name of the last workflow action node that exited with an 
=ERROR= exit state, or an empty string if no
+action has exited with =ERROR= state in the current workflow job.
 
 *String wf:errorCode(String node)*
 
@@ -1870,7 +1869,7 @@ completed yet.
 
 *int wf:actionTrackerUri(String node)*
 
-It returns the tracker URIfor an action node, or an empty string if the action 
has not being executed or it has not
+It returns the tracker URI for an action node, or an empty string if the 
action has not been executed or it has not
 completed yet.
 
 *int wf:actionExternalStatus(String node)*
@@ -2384,7 +2383,7 @@ request to Oozie the workflow job ends reaching the 
=KILLED= final state.
 #JobReRun
 ---++ 10 Workflow Jobs Recovery (re-run)
 
-Oozie must provide a mechanism by which a a failed workflow job can be 
resubmitted and executed starting after any
+Oozie must provide a mechanism by which a failed workflow job can be 
resubmitted and executed starting after any
 action node that has completed its execution in the prior run. This is 
especially useful when the already executed
 actions of a workflow job are too expensive to be re-executed.
 
@@ -2402,7 +2401,7 @@ The recovery workflow job will run under the same 
workflow job ID as the origina
 To submit a recovery workflow job the target workflow job to recover must be 
in an end state (=SUCCEEDED=, =FAILED=
 or =KILLED=).
 
-A recovery run could be done using a new worklfow application path under 
certain constraints (see next paragraph).
+A recovery run could be done using a new workflow application path under 
certain constraints (see next paragraph).
 This is to allow users to do a one-off patch for the workflow application 
without affecting other running jobs for the
 same application.
 
@@ -2488,7 +2487,7 @@ A workflow job can use the system share library by 
setting the job property =ooz
 ---+++ 17.1 Action Share Library Override (since Oozie 3.3)
 
 Oozie share libraries are organized per action type, for example Pig action 
share library directory is =share/lib/pig/=
-and Mapreduce Streaming share library direcotry is 
=share/library/mapreduce-streaming/=.
+and Mapreduce Streaming share library directory is 
=share/lib/mapreduce-streaming/=.
 
 Oozie bundles a share library for specific versions of streaming, pig, hive, 
sqoop, distcp actions. These versions
 of streaming, pig, hive, sqoop and distcp have been tested and verified to 
work correctly with the version of Oozie
@@ -2498,8 +2497,8 @@ actions (since Oozie 4.x).
 In addition, Oozie provides a mechanism to override the action share library 
JARs to allow using an alternate version
of the action JARs.
 
-This mechanism enables Oozie administrators to patch share library JARs, to 
include alternate versios of the share
-libraries, to provide acess to more than one version at the same time.
+This mechanism enables Oozie administrators to patch share library JARs, to 
include alternate versions of the share
+libraries, to provide access to more than one version at the same time.
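For instance, at the job level the Pig action share library directory could be overridden with a job property along these lines (a sketch; the property value is illustrative):

```properties
# job.properties sketch: point the Pig action at an alternate share
# library directory (directory name is illustrative)
oozie.action.sharelib.for.pig=pig_10
```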
 
 The share library override is supported at server level and at job level. The 
share library directory names are resolved
 using the following precedence order:
@@ -2530,7 +2529,7 @@ Oozie administrator can allow more error codes to be 
handled for User-Retry. By a
 =oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext= to 
=oozie.site.xml=
 and error codes as values, these error codes will be considered as User-Retry 
after system restart.
 
-Examples of User-Retry in a workflow aciton is :
+An example of User-Retry in a workflow action is:
 
 <verbatim>
 <workflow-app xmlns="uri:oozie:workflow:0.3" name="wf-name">

http://git-wip-us.apache.org/repos/asf/oozie/blob/2bce9e8f/release-log.txt
----------------------------------------------------------------------
diff --git a/release-log.txt b/release-log.txt
index a6c5d4b..466fb04 100644
--- a/release-log.txt
+++ b/release-log.txt
@@ -1,5 +1,6 @@
 -- Oozie 4.2.0 release (trunk - unreleased)
 
+OOZIE-2188 Fix typos in twiki documentation (jacobtolar via puru)
 OOZIE-2174 Add missing admin commands to OozieClient and OozieCLI (rkanter)
 OOZIE-2186 Upgrade Tomcat to 6.0.43 (rkanter)
 OOZIE-2181 JsonToBean has some missing and incorrect mappings (rkanter)
