http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_CoordinatorRerun.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_CoordinatorRerun.twiki 
b/docs/src/site/twiki/DG_CoordinatorRerun.twiki
index fbb1376..f535d16 100644
--- a/docs/src/site/twiki/DG_CoordinatorRerun.twiki
+++ b/docs/src/site/twiki/DG_CoordinatorRerun.twiki
@@ -1,12 +1,12 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+!! Coordinator Rerun
+[::Go back to Oozie Documentation Index::](index.html)
 
-%TOC%
+# Coordinator Rerun
 
----++ Pre-Conditions
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
+
+## Pre-Conditions
 
   * A coordinator action must be in TIMEDOUT/SUCCEEDED/KILLED/FAILED state to be rerun.
    * Coordinator actions cannot be rerun if the coordinator job is in the PREP 
or IGNORED state.
@@ -16,14 +16,15 @@
    * Coordinator Rerun will only use the original configs from first run.
    * Coordinator Rerun will not re-read the coordinator.xml in hdfs.
 
----++ Rerun Arguments
+## Rerun Arguments
+
 
- <verbatim>
+```
 $oozie job -rerun <coord_Job_id> [-nocleanup] [-refresh] [-failed] [-config 
<arg>]
 [-action 1, 3-4, 7-40] (-action or -date is required to rerun.)
 [-date 2009-01-01T01:00Z::2009-05-31T23:59Z, 2009-11-10T01:00Z, 
2009-12-31T22:00Z]
 (If neither -action nor -date is given, an exception will be thrown.)
-</verbatim>
+```
 
    * Either -action or -date should be given.
    * If -action and -date both are given, an error will be thrown.
@@ -37,7 +38,7 @@ $oozie job -rerun <coord_Job_id> [-nocleanup] [-refresh] 
[-failed] [-config <arg
    * If -failed is set, re-runs the failed workflow actions of the coordinator 
actions.
   * -config can be used to supply properties to the workflow via a job configuration
file ('.xml' or '.properties').
 
----++ Rerun coordinator actions
+## Rerun coordinator actions
 
    * Rerun terminated (timeout, succeeded, killed, failed) coordinator actions.
    * By default, Oozie will delete the 'output-event' directories before 
changing actions' status and materializing actions.
@@ -47,6 +48,6 @@ $oozie job -rerun <coord_Job_id> [-nocleanup] [-refresh] 
[-failed] [-config <arg
      within that range.  If the existing actions are action #5....#40, which 
map to Jan 15 to Feb 15, then only those actions will run.
   * The action_id and nominal_time of the actions that are eligible for
rerun will be returned.
 
-[[index][::Go back to Oozie Documentation Index::]]
+[::Go back to Oozie Documentation Index::](index.html)
+
 
-</noautolink>
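The `-action` argument shown in the rerun syntax above accepts a mix of single indices and inclusive ranges (e.g. `1, 3-4, 7-40`). As an illustrative sketch of how such a specification expands (this is not Oozie's actual parser, just a demonstration of the format):

```python
def expand_actions(spec):
    """Expand an Oozie-style -action list such as "1, 3-4, 7-40"
    into the individual coordinator action numbers it denotes."""
    numbers = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-", 1)
            numbers.extend(range(int(lo), int(hi) + 1))  # ranges are inclusive
        else:
            numbers.append(int(part))
    return numbers

print(expand_actions("1, 3-4, 7-10"))  # [1, 3, 4, 7, 8, 9, 10]
```

As the documentation states, `-action` and `-date` are mutually exclusive selectors: exactly one of them must be supplied.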

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_CustomActionExecutor.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_CustomActionExecutor.twiki 
b/docs/src/site/twiki/DG_CustomActionExecutor.twiki
index 7831484..5768b27 100644
--- a/docs/src/site/twiki/DG_CustomActionExecutor.twiki
+++ b/docs/src/site/twiki/DG_CustomActionExecutor.twiki
@@ -1,78 +1,83 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+!! Custom Action Nodes
+[::Go back to Oozie Documentation Index::](index.html)
 
-%TOC%
+# Custom Action Nodes
+
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
+
+## Introduction
+Oozie can be extended to support additional action types by writing a custom 
[Action Node](WorkflowFunctionalSpec.html#ActionNodes). Action Nodes can be 
synchronous or asynchronous.
 
----++ Introduction
-Oozie can be extended to support additional action types by writing a custom 
[[WorkflowFunctionalSpec#ActionNodes][Action Node]]. Action Nodes can be 
synchronous or asynchronous.
    * Synchronous Node - Sync nodes are executed inline by Oozie, which waits 
for completion of these nodes before proceeding. Hence, these nodes should 
almost never be used and are meant for lightweight tasks like FileSystem move, 
mkdir, delete.
    * Asynchronous Nodes - Oozie starts asynchronous nodes, and then monitors 
the action being executed for completion. This is done via a callback from the 
action or Oozie polling for the action status.
 
----++ Writing a custom Action Node
-Action Executors are configured in the oozie configuration file 
oozie-site.xml. These executors are loaded during Oozie startup. 
[[DG_CustomActionExecutor#Deploying_a_custom_Action_Executor][Deploying a 
Custom Action Executor]].
+## Writing a custom Action Node
+Action Executors are configured in the oozie configuration file 
oozie-site.xml. These executors are loaded during Oozie startup. [Deploying a 
Custom Action 
Executor](DG_CustomActionExecutor.html#Deploying_a_custom_Action_Executor).
 
-Action Executors MUST extend the =ActionExecutor= class and override the 
required methods.
+Action Executors MUST extend the `ActionExecutor` class and override the 
required methods.
 
 Most methods take as argument the Execution Context and the actual Action 
object with various configuration properties resolved.
----+++ ActionExecutor.Context
+### ActionExecutor.Context
 The Execution context gives Action Nodes access to configuration properties, 
methods to set the state of the action, methods to set variables which are to 
be made available later in the execution path.
 
-*The following methods from the ActionExecutor interface should be 
implemented.*
----+++ Constructor
+**The following methods from the ActionExecutor interface should be 
implemented.**
+### Constructor
 A no argument constructor should be implemented, which calls 
super(ACTION_TYPE). ACTION_TYPE is the name of the action which will be used in 
the workflow xml, and is used by Oozie to instantiate the correct type of 
Executor.
 
----+++ initActionType()
+### initActionType()
 This method is called once, when the Action Executor is initialized during
Oozie startup. Any common initialization code for the Action Node should go
here.
 
 As an example, setting up of error handling for the Custom Action should be 
done here.
 
 This method must call super.initActionType() as its first statement.
 
----+++ start(ActionExecutor.Context context, Action action)
+### start(ActionExecutor.Context context, Action action)
 The action start up happens here.
+
    * Async Actions - The action should be started and 
context.setStartData(externalId, trackerUri, consoleUrl) must be set. A check 
can be made for whether the action has completed, in which case 
context.setExecutionData(externalStatus, actionData) must be called.
    * Sync Actions - The action should be started and should complete 
execution. context.setExecutionData(externalStatus, actionData) must be called.
----+++ check(ActionExecutor.Context context, Action action)
-<code>check(...)</code> is used by Oozie to poll for the status of the action. 
This method should interact with the action started previously, and update the 
status. If the action has completed, context.setExecutionData(externalStatus, 
actionData) must be called. Otherwise, the status can be updated using 
context.setExternalStatus(externalStatus).
+### check(ActionExecutor.Context context, Action action)
+`check(...)` is used by Oozie to poll for the status of the action. This 
method should interact with the action started previously, and update the 
status. If the action has completed, context.setExecutionData(externalStatus, 
actionData) must be called. Otherwise, the status can be updated using 
context.setExternalStatus(externalStatus).
 
 For sync actions, this method will not be called, and should throw an 
UnsupportedOperationException().
----+++ kill(ActionExecutor.Context context, Action action)
-<code>kill(...)</code> is called when there is an attempt to kill the running 
job or action. No workflow transition is made after this.
+### kill(ActionExecutor.Context context, Action action)
+`kill(...)` is called when there is an attempt to kill the running job or 
action. No workflow transition is made after this.
 
 The implementation for a custom action should interact with and kill the 
running action, and take care of any cleanup which may be required. 
context.setEndData(status, signalValue) should be called with both values set 
to Action.Status.KILLED.
----+++ end(ActionExecutor.Context context, Action action)
-<code>end(...)</end> is used for any cleanup or processing which may need to 
be done after completion of the action. After any processing, 
context.setEndData(status, signalValue) should be called to complete execution 
of the action and trigger the next workflow transition. signalValue can be 
Action.Status.OK or Action.Status.ERROR.
----+++ Registering Errors
+### end(ActionExecutor.Context context, Action action)
+`end(...)` is used for any cleanup or processing which may need to be done 
after completion of the action. After any processing, 
context.setEndData(status, signalValue) should be called to complete execution 
of the action and trigger the next workflow transition. signalValue can be 
Action.Status.OK or Action.Status.ERROR.
+### Registering Errors
 Oozie actions can generate different types of Errors.
+
    * TRANSIENT - will be retried
    * NON TRANSIENT - the job will be suspended and can be resumed later by 
human intervention, after fixing whatever problem caused this error.
    * ERROR - causes the error transition to be taken.
    * FAILED - the action and the job are set to FAILED state. No transitions 
are taken.
 registerError(exceptionClassName, errorType, errorMessage) can be used to 
register possible exceptions while executing the action, along with their type 
and error message. This will normally be done during initialization of the 
Action Executor.
 
----++ Deploying a custom Action Executor
+## Deploying a custom Action Executor
 Action Nodes can be registered in the oozie configuration file oozie-site.xml, 
by changing the property 'oozie.service.ActionService.executor.ext.classes'. 
For multiple Executors, the class name should be separated by commas.
-<verbatim>  <property>
+
+```
+  <property>
     <name>oozie.service.ActionService.executor.ext.classes</name>
     <value>
       org.apache.oozie.wf.action.decision.CustomActionExecutor,
         Custom_Action_Executor_2.class
     </value>
-  </property></verbatim>
+  </property>
+```
 Any configuration properties to be made available to this class should also be 
added to oozie-site.xml. The convention to be followed for naming these 
properties is 'oozie.action.[ActionName].property.name'
 
 The XML schema (XSD) for the new Actions should be added to oozie-site.xml, 
under the property 'oozie.service.WorkflowSchemaService.ext.schemas'. A comma 
separated list for multiple Action schemas.
 
 The XML schema (XSD) for the new action should be also added to Fluent Job 
API. Please refer to
-[[DG_FluentJobAPI#AE.C_Appendix_C_How_To_Extend][Fluent Job API :: How To 
Extend]] for details.
+[Fluent Job API :: How To 
Extend](DG_FluentJobAPI.html#AE.C_Appendix_C_How_To_Extend) for details.
 
 The executor class should be placed along with the oozie webapp in the correct 
path. Once Oozie is restarted, the custom action node can be used in workflows.
 
 
 
-[[index][::Go back to Oozie Documentation Index::]]
+[::Go back to Oozie Documentation Index::](index.html)
+
 
-</noautolink>
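The asynchronous lifecycle described above (start the action, poll it via `check(...)`, finalize via `end(...)`) can be illustrated with a small simulation. The classes below are stand-ins invented for this sketch, not Oozie's real `ActionExecutor` API; they only show the order in which Oozie drives an async action, with comments mapping each step to the context calls the documentation names:

```python
class FakeRemoteJob:
    """Simulated external job that reports RUNNING for a few polls, then
    SUCCEEDED; a stand-in for the Hadoop job an async action launches."""
    def __init__(self, polls_until_done=3):
        self.remaining = polls_until_done

    def status(self):
        self.remaining -= 1
        return "SUCCEEDED" if self.remaining <= 0 else "RUNNING"


def run_async_action(job):
    """Drive the start() -> check()... -> end() sequence the docs describe."""
    events = ["start"]                      # start(): launch, setStartData(...)
    while True:
        status = job.status()               # check(): poll external status
        events.append("check:" + status)
        if status == "SUCCEEDED":           # done: setExecutionData(...)
            break
    events.append("end:OK")                 # end(): setEndData(OK, signalValue)
    return events


print(run_async_action(FakeRemoteJob()))
```

For a synchronous action the loop collapses: `start()` both runs the work and calls `setExecutionData(...)`, and `check(...)` is never invoked.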

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_DistCpActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_DistCpActionExtension.twiki 
b/docs/src/site/twiki/DG_DistCpActionExtension.twiki
index 8bab3da..13c2a0a 100644
--- a/docs/src/site/twiki/DG_DistCpActionExtension.twiki
+++ b/docs/src/site/twiki/DG_DistCpActionExtension.twiki
@@ -1,27 +1,28 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
+
+[::Go back to Oozie Documentation Index::](index.html)
 
 -----
 
----+!! Oozie DistCp Action Extension
+# Oozie DistCp Action Extension
 
-%TOC%
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
----++ DistCp Action
+## DistCp Action
 
-The =DistCp= action uses Hadoop distributed copy to copy files from one 
cluster to another or within the same cluster.
+The `DistCp` action uses Hadoop distributed copy to copy files from one 
cluster to another or within the same cluster.
 
-*IMPORTANT:* The DistCp action may not work properly with all configurations 
(secure, insecure) in all versions
+**IMPORTANT:** The DistCp action may not work properly with all configurations 
(secure, insecure) in all versions
 of Hadoop. For example, distcp between two secure clusters is tested and works 
well. Same is true with two insecure
 clusters. In cases where a secure and insecure clusters are involved, distcp 
will not work.
 
 Both Hadoop clusters have to be configured with proxyuser for the Oozie 
process as explained
-[[DG_QuickStart#HadoopProxyUser][here]] on the Quick Start page.
+[here](DG_QuickStart.html#HadoopProxyUser) on the Quick Start page.
+
+**Syntax:**
 
-*Syntax:*
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="distcp-example">
@@ -36,29 +37,31 @@ Both Hadoop clusters have to be configured with proxyuser 
for the Oozie process
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The first =arg= indicates the input and the second =arg= indicates the output. 
 In the above example, the input is on =namenode1=
-and the output is on =namenode2=.
+The first `arg` indicates the input and the second `arg` indicates the output. 
 In the above example, the input is on `namenode1`
+and the output is on `namenode2`.
 
-*IMPORTANT:* If using the DistCp action between 2 secure clusters, the 
following property must be added to the =configuration= of
+**IMPORTANT:** If using the DistCp action between 2 secure clusters, the 
following property must be added to the `configuration` of
 the action:
-<verbatim>
+
+```
 <property>
     <name>oozie.launcher.mapreduce.job.hdfs-servers</name>
     <value>${nameNode1},${nameNode2}</value>
 </property>
-</verbatim>
+```
 
-The =DistCp= action is also commonly used to copy files within the same 
cluster. Cases where copying files within
+The `DistCp` action is also commonly used to copy files within the same 
cluster. Cases where copying files within
 a directory to another directory or directories to target directory is 
supported. Example below will illustrate a
-copy within a cluster, notice the source and target =nameNode= is the same and 
use of =*= syntax is supported to
-represent only child files or directories within a source directory. For the 
sake of the example, =jobTracker= and =resourceManager=
+copy within a cluster; notice that the source and target `nameNode` are the same, and
the `*` syntax is supported to
+represent only child files or directories within a source directory. For the
sake of the example, `jobTracker` and `resourceManager`
 are synonymous.
 
-*Syntax:*
+**Syntax:**
+
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="copy-example">
@@ -73,14 +76,15 @@ are synonymous.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
----++ Appendix, DistCp XML-Schema
+## Appendix, DistCp XML-Schema
 
----+++ AE.A Appendix A, DistCp XML-Schema
+### AE.A Appendix A, DistCp XML-Schema
 
----++++ DistCp Action Schema Version 1.0
-<verbatim>
+#### DistCp Action Schema Version 1.0
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:distcp="uri:oozie:distcp-action:1.0" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:distcp-action:1.0">
@@ -105,10 +109,11 @@ are synonymous.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+#### DistCp Action Schema Version 0.2
 
----++++ DistCp Action Schema Version 0.2
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:distcp="uri:oozie:distcp-action:0.2" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:distcp-action:0.2">
@@ -156,10 +161,11 @@ are synonymous.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
 
----++++ DistCp Action Schema Version 0.1
-<verbatim>
+#### DistCp Action Schema Version 0.1
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:distcp="uri:oozie:distcp-action:0.1" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:distcp-action:0.1">
@@ -207,8 +213,8 @@ are synonymous.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>
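The `*` behavior described in the copy-within-a-cluster example (selecting only the child files or directories of a source directory) can be sketched as follows. The function and the fake listing are illustrative stand-ins, not DistCp's implementation, and the paths are placeholders:

```python
def resolve_source(source, list_dir):
    """If the source path ends in '/*', expand it to the directory's
    children, mirroring the glob behavior the DistCp example relies on;
    list_dir stands in for an HDFS directory-listing call."""
    if source.endswith("/*"):
        base = source[:-2]
        return [base + "/" + name for name in list_dir(base)]
    return [source]

# A fake listing standing in for HDFS; names and paths are placeholders.
fake_fs = {"hdfs://nameNode/user/input": ["part-0", "part-1", "logs"]}
print(resolve_source("hdfs://nameNode/user/input/*", fake_fs.get))
```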

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_EmailActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_EmailActionExtension.twiki 
b/docs/src/site/twiki/DG_EmailActionExtension.twiki
index 4de290c..1afcbb4 100644
--- a/docs/src/site/twiki/DG_EmailActionExtension.twiki
+++ b/docs/src/site/twiki/DG_EmailActionExtension.twiki
@@ -1,28 +1,29 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
+
+[::Go back to Oozie Documentation Index::](index.html)
 
 -----
 
----+!! Oozie Email Action Extension
+# Oozie Email Action Extension
 
-%TOC%
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
-#EmailAction
----++++ 3.2.4 Email action
+<a name="EmailAction"></a>
+## 3.2.4 Email action
 
-The =email= action allows sending emails in Oozie from a workflow application. 
An email action must provide =to=
-addresses, =cc= addresses (optional), =bcc= addresses (optional), a =subject= 
and a =body=.
+The `email` action allows sending emails in Oozie from a workflow application. 
An email action must provide `to`
+addresses, `cc` addresses (optional), `bcc` addresses (optional), a `subject` 
and a `body`.
 Multiple recipients of an email can be provided as comma separated addresses.
 
 The email action is executed synchronously, and the workflow job will wait 
until the specified
 emails are sent before continuing to the next action.
 
-All values specified in the =email= action can be parameterized (templatized) 
using EL expressions.
+All values specified in the `email` action can be parameterized (templatized) 
using EL expressions.
+
+**Syntax:**
 
-*Syntax:*
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:0.1">
     ...
     <action name="[NODE-NAME]">
@@ -40,37 +41,39 @@ All values specified in the =email= action can be 
parameterized (templatized) us
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The =to= and =cc= and =bcc= commands are used to specify recipients who should 
get the mail. Multiple email recipients
-can be provided using comma-separated values. Providing a =to= command is 
necessary, while the =cc= or =bcc= may
+The `to`, `cc`, and `bcc` commands are used to specify recipients who should
get the mail. Multiple email recipients
+can be provided using comma-separated values. Providing a `to` command is 
necessary, while the `cc` or `bcc` may
 optionally be used along.
 
-The =subject= and =body= commands are used to specify subject and body of the 
mail.
+The `subject` and `body` commands are used to specify subject and body of the 
mail.
 From uri:oozie:email-action:0.2 one can also specify the mail content type as
<content_type>text/html</content_type>.
"text/plain" is the default.
 
-The =attachment= is used to attach a file(s) on HDFS to the mail. Multiple 
attachment can be provided using comma-separated values.
+The `attachment` is used to attach one or more files on HDFS to the mail. Multiple
attachments can be provided using comma-separated values.
 Non fully qualified path is considered as a file on default HDFS. A local file 
cannot be attached.
 
-*Configuration*
+**Configuration**
 
-The =email= action requires some SMTP server configuration to be present (in 
oozie-site.xml). The following are the values
+The `email` action requires some SMTP server configuration to be present (in 
oozie-site.xml). The following are the values
 it looks for:
-   * =oozie.email.smtp.host= - The host where the email action may find the 
SMTP server (localhost by default).
-   * =oozie.email.smtp.port= - The port to connect to for the SMTP server (25 
by default).
-   * =oozie.email.from.address= - The from address to be used for mailing all 
emails (oozie@localhost by default).
-   * =oozie.email.smtp.auth= - Boolean property that toggles if authentication 
is to be done or not. (false by default).
-   * =oozie.email.smtp.starttls.enable= - Boolean property that toggles if use 
TLS communication or not. (false by default).
-   * =oozie.email.smtp.username= - If authentication is enabled, the username 
to login as (empty by default).
-   * =oozie.email.smtp.password= - If authentication is enabled, the 
username's password (empty by default).
-   * =oozie.email.attachment.enabled= - Boolean property that toggles if 
configured attachments are to be placed into the emails.
+
+   * `oozie.email.smtp.host` - The host where the email action may find the 
SMTP server (localhost by default).
+   * `oozie.email.smtp.port` - The port to connect to for the SMTP server (25 
by default).
+   * `oozie.email.from.address` - The from address to be used for mailing all 
emails (oozie@localhost by default).
+   * `oozie.email.smtp.auth` - Boolean property that toggles whether
authentication is performed (false by default).
+   * `oozie.email.smtp.starttls.enable` - Boolean property that toggles whether
TLS is used for communication (false by default).
+   * `oozie.email.smtp.username` - If authentication is enabled, the username 
to login as (empty by default).
+   * `oozie.email.smtp.password` - If authentication is enabled, the 
username's password (empty by default).
+   * `oozie.email.attachment.enabled` - Boolean property that toggles whether
configured attachments are placed into the emails.
    (false by default).
-   * =oozie.email.smtp.socket.timeout.ms= - The timeout to apply over all SMTP 
server socket operations (10000ms by default).
+   * `oozie.email.smtp.socket.timeout.ms` - The timeout to apply over all SMTP 
server socket operations (10000ms by default).
+
+**Example:**
 
-*Example:*
 
-<verbatim>
+```
 <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
     ...
     <action name="an-email">
@@ -86,14 +89,15 @@ it looks for:
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
 In the above example, an email is sent to 'bob', 'the.other.bob', 'will' (cc), 
yet.another.bob (bcc)
 with the subject and body both containing the workflow ID after substitution.
 
----+++ AE.A Appendix A, Email XML-Schema
+## AE.A Appendix A, Email XML-Schema
 
-<verbatim>
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:email="uri:oozie:email-action:0.2" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:email-action:0.2">
@@ -112,11 +116,12 @@ with the subject and body both containing the workflow ID 
after substitution.
         </xs:sequence>
     </xs:complexType>
 </xs:schema>
-</verbatim>
+```
+
+**GMail example to oozie-site.xml**
 
-*GMail example to oozie-site.xml*
 
-<verbatim>
+```
 oozie.email.smtp.host=smtp.gmail.com
 oozie.email.smtp.port=587
 oozie.email.from.address=<some email address>
@@ -124,8 +129,8 @@ oozie.email.smtp.auth=true
 oozie.email.smtp.starttls.enable=true
 oozie.email.smtp.username=<Gmail Id>
 oozie.email.smtp.password=<Gmail Pass>
-</verbatim>
+```
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_Examples.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_Examples.twiki 
b/docs/src/site/twiki/DG_Examples.twiki
index 5323a17..ff33506 100644
--- a/docs/src/site/twiki/DG_Examples.twiki
+++ b/docs/src/site/twiki/DG_Examples.twiki
@@ -1,58 +1,61 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+!! Oozie Examples
+[::Go back to Oozie Documentation Index::](index.html)
 
-%TOC%
+# Oozie Examples
 
----++ Command Line Examples
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
----+++ Setting Up the Examples
+## Command Line Examples
 
-Oozie examples are bundled within the Oozie distribution in the 
=oozie-examples.tar.gz= file.
+### Setting Up the Examples
 
-Expanding this file will create an =examples/= directory in the local file 
system.
+Oozie examples are bundled within the Oozie distribution in the 
`oozie-examples.tar.gz` file.
 
-The =examples/= directory must be copied to the user HOME directory in HDFS:
+Expanding this file will create an `examples/` directory in the local file 
system.
 
-<verbatim>
+The `examples/` directory must be copied to the user HOME directory in HDFS:
+
+
+```
 $ hadoop fs -put examples examples
-</verbatim>
+```
 
-*NOTE:* If an examples directory already exists in HDFS, it must be deleted 
before copying it again. Otherwise files may not be
+**NOTE:** If an examples directory already exists in HDFS, it must be deleted 
before copying it again. Otherwise files may not be
 copied.
 
----+++ Running the Examples
+### Running the Examples
 
-For the Streaming and Pig example, the [[DG_QuickStart#OozieShareLib][Oozie 
Share Library]] must be installed in HDFS.
+For the Streaming and Pig example, the [Oozie Share 
Library](DG_QuickStart.html#OozieShareLib) must be installed in HDFS.
 
-Add Oozie =bin/= to the environment PATH.
+Add Oozie `bin/` to the environment PATH.
 
-The examples assume the ResourceManager is =localhost:8032= and the NameNode 
is =hdfs://localhost:8020=. If the actual
+The examples assume the ResourceManager is `localhost:8032` and the NameNode 
is `hdfs://localhost:8020`. If the actual
 values are different, the job properties files in the examples directory must 
be edited to the correct values.
 
 The example applications are under the examples/app directory, one directory 
per example. The directory contains the
-application XML file (workflow, or workflow and coordinator), the 
=job.properties= file to submit the job and any JAR
+application XML file (workflow, or workflow and coordinator), the 
`job.properties` file to submit the job and any JAR
 files the example may need.
 
-The inputs for all examples are in the =examples/input-data/= directory.
+The inputs for all examples are in the `examples/input-data/` directory.
 
-The examples create output under the =examples/output-data/${EXAMPLE_NAME}= 
directory.
+The examples create output under the `examples/output-data/${EXAMPLE_NAME}` 
directory.
 
-*Note*: The =job.properties= file needs to be a local file during submissions, 
and not a HDFS path.
+**Note**: The `job.properties` file needs to be a local file during
submission, and not an HDFS path.
 
-*How to run an example application:*
+**How to run an example application:**
 
-<verbatim>
+
+```
 $ oozie job -oozie http://localhost:11000/oozie -config 
examples/apps/map-reduce/job.properties -run
 .
 job: 14-20090525161321-oozie-tucu
-</verbatim>
+```
 
 Check the workflow job status:
 
-<verbatim>
+
+```
 $ oozie job -oozie http://localhost:11000/oozie -info 
14-20090525161321-oozie-tucu
 .
 
.----------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -71,28 +74,30 @@ Action Name             Type        Status     Transition  
External Id
 
.----------------------------------------------------------------------------------------------------------------------------------------------------------------
 mr-node                 map-reduce  OK         end         
job_200904281535_0254  SUCCEEDED        -             2009-05-26 05:01 +0000  
2009-05-26 05:01 +0000
 
.----------------------------------------------------------------------------------------------------------------------------------------------------------------
-</verbatim>
+```
 
-To check the workflow job status via the Oozie web console, with a browser go 
to =http://localhost:11000/oozie=.
+To check the workflow job status via the Oozie web console, with a browser go 
to `http://localhost:11000/oozie`.
 
-To avoid having to provide the =-oozie= option with the Oozie URL with every 
=oozie= command, set =OOZIE_URL= env 
+To avoid having to provide the `-oozie` option with the Oozie URL with every 
`oozie` command, set `OOZIE_URL` env
 variable to the Oozie URL in the shell environment. For example:
 
-<verbatim>
+
+```
 $ export OOZIE_URL="http://localhost:11000/oozie"
 $
 $ oozie job -info 14-20090525161321-oozie-tucu
-</verbatim>
+```
 
----++ Java API Example
+## Java API Example
 
-Oozie provides a 
=[[./apidocs/org/org/apache/oozie/client/package-summary.html][Java Client 
API]] that simplifies
+Oozie provides a [Java Client 
API](./apidocs/org/apache/oozie/client/package-summary.html) that simplifies
 integrating Oozie with Java applications. This Java Client API is a 
convenience API to interact with Oozie Web-Services
 API.
 
 The following code snippet shows how to submit an Oozie job using the Java 
Client API.
 
-<verbatim>
+
+```
 import org.apache.oozie.client.OozieClient;
 import org.apache.oozie.client.WorkflowJob;
 .
@@ -127,20 +132,21 @@ import java.util.Properties;
     System.out.println("Workflow job completed ...");
     System.out.println(wf.getJobInfo(jobId));
     ...
-</verbatim>
+```
 
----++ Local Oozie Example
+## Local Oozie Example
 
-Oozie provides an embedded Oozie implementation, 
=[[./apidocs/org/apache/oozie/local/LocalOozie.html][LocalOozie]]=,
+Oozie provides an embedded Oozie implementation,
[LocalOozie](./apidocs/org/apache/oozie/local/LocalOozie.html),
 which is useful for development, debugging and testing of workflow 
applications within the convenience of an IDE.
 
-The code snippet below shows the usage of the =LocalOozie= class. All the 
interaction with Oozie is done using Oozie
- =OozieClient= Java API, as shown in the previous section.
+The code snippet below shows the usage of the `LocalOozie` class. All the
interaction with Oozie is done using the Oozie
+ `OozieClient` Java API, as shown in the previous section.
 
-The examples bundled with Oozie include the complete and running class, 
=LocalOozieExample= from where this snippet was
+The examples bundled with Oozie include the complete and running class, 
`LocalOozieExample` from where this snippet was
 taken.
 
-<verbatim>
+
+```
 import org.apache.oozie.local.LocalOozie;
 import org.apache.oozie.client.OozieClient;
 import org.apache.oozie.client.WorkflowJob;
@@ -181,18 +187,18 @@ import java.util.Properties;
     // stop local Oozie
     LocalOozie.stop();
     ...
-</verbatim>
+```
+
+Asynchronous actions such as the FS action can also be used and tested through the
`LocalOozie` / `OozieClient` API. Please see the
+`oozie-mini` module for details, such as the `fs-decision.xml` workflow example.
 
-Also asynchronous actions like FS action can be used / tested using 
=LocalOozie= / =OozieClient= API. Please see the module
-=oozie-mini= for details like =fs-decision.xml= workflow example.
 
+## Fluent Job API Examples
 
----++ Fluent Job API Examples
+There are some elaborate examples of how to use the [Fluent Job
API](DG_FluentJobAPI.html) under `examples/fluentjob/`. There are two
+simple examples covered under [Fluent Job API :: A Simple 
Example](DG_FluentJobAPI.html#A_Simple_Example) and
+[Fluent Job API :: A More Verbose 
Example](DG_FluentJobAPI.html#A_More_Verbose_Example).
 
-There are some elaborate examples how to use the [[DG_FluentJobAPI][Fluent Job 
API]], under =examples/fluentjob/=. There are two
-simple examples covered under [[DG_FluentJobAPI#A_Simple_Example][Fluent Job 
API :: A Simple Example]] and
-[[DG_FluentJobAPI#A_More_Verbose_Example][Fluent Job API :: A More Verbose 
Example]].
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_FluentJobAPI.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_FluentJobAPI.twiki 
b/docs/src/site/twiki/DG_FluentJobAPI.twiki
index c8b764b..bd36517 100644
--- a/docs/src/site/twiki/DG_FluentJobAPI.twiki
+++ b/docs/src/site/twiki/DG_FluentJobAPI.twiki
@@ -1,27 +1,27 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+!! Fluent Job API
+[::Go back to Oozie Documentation Index::](index.html)
 
-%TOC%
+# Fluent Job API
 
----++ Introduction
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
+
+## Introduction
 
 Oozie is a mature workflow scheduler system. XML is the standard way of 
defining workflow, coordinator, or bundle jobs.  For users
 who prefer an alternative, the Fluent Job API provides a Java interface 
instead.
 
----+++ Motivation
+### Motivation
 
 Prior to Oozie 5.1.0, the following ways were available to submit a workflow, 
coordinator, or bundle job: submitting a generic
workflow, coordinator, or bundle job through the Oozie CLI or via HTTP, or 
submitting a Pig, Hive, Sqoop, or MapReduce workflow job.
 
 As the generic way goes, the user has to have uploaded a workflow, 
coordinator, or bundle XML and all necessary dependencies like
-scripts, JAR or ZIP files, to HDFS beforehand, as well as have a 
=job.properties= file at command line and / or provide any
+scripts, JAR or ZIP files, to HDFS beforehand, as well as have a 
`job.properties` file at command line and / or provide any
 missing parameters as part of the command.
 
 As the specific Pig, Hive, or Sqoop ways go, the user can provide all 
necessary parameters as part of the command issued. A
- =workflow.xml= file will be generated with all the necessary details and 
stored to HDFS so that Oozie can grab it. Note that
+ `workflow.xml` file will be generated with all the necessary details and 
stored to HDFS so that Oozie can grab it. Note that
 dependencies have to be uploaded to HDFS beforehand as well.
 
 There are some usability problems with the XML job definition. XML is not 
an ideal way to express dependencies and a directed
@@ -40,7 +40,7 @@ fork / join pairs automatically.
 Either way, there were no programmatic ways to define workflow jobs. That 
doesn't mean users could not generate XML themselves -
 actually this is something HUE's Oozie UI also tries to target.
 
----+++ Goals
+### Goals
 
 Fluent Job API aims to solve the following from the user's perspective. It 
provides a Java API instead of declarative XML to define
 workflows. It defines dependencies across actions as opposed to defining a 
control flow. This is how data engineers and data
@@ -54,7 +54,7 @@ workflow rendered as XML, as well as coexist XML based and 
Fluent Job API based
 time all workflow action types. When XSDs change, as few manual steps are 
necessary as possible both on API internal and public
 side.
 
----+++ Non-goals
+### Non-goals
 
 The following points are not targeted for the initial release of Fluent Job 
API with Oozie 5.1.0. It doesn't provide API in any
 language other than Java. It doesn't provide a REPL. It doesn't allow for 
dynamic action instantiation depending on e.g. conditional
@@ -71,7 +71,7 @@ for user-supplied custom actions / XSDs.
 
 Most of the non-goals may be targeted as enhancements of the Fluent Job API 
for future Oozie releases.
 
----+++ Approach
+### Approach
 
 When using the Fluent Job API, the following points are different from the XML 
jobs definition. Instead of control flow (successor)
 definition, the user can define dependencies (parents of an action).
@@ -82,32 +82,33 @@ Control flow and necessary boilerplate are generated 
automatically by keeping us
 new dependencies to keep Oozie workflow format of nested fork / join pairs. 
Note that not every dependency DAG can be expressed in
 the Oozie workflow format. When this is not possible, the user is notified at 
build time.
 
----++ How To Use
+## How To Use
 
----+++ A Simple Example
+### A Simple Example
 
 The simplest thing to create using the Oozie Fluent Job API is a workflow 
consisting of only one action. Let's see how it goes, step
 by step.
 
-First, put the project =org.apache.oozie:oozie-fluent-job-api= to the build 
path. In case of a Maven managed build, create a new
-Maven project and declare a Maven dependency to 
=org.apache.oozie:oozie-fluent-job-api=.
+First, put the project `org.apache.oozie:oozie-fluent-job-api` to the build 
path. In case of a Maven managed build, create a new
+Maven project and declare a Maven dependency to 
`org.apache.oozie:oozie-fluent-job-api`.
 
-Then, create a class that =implements WorkflowFactory= and implement the 
method =WorkflowFactory#create()=. inside that method,
-create a =ShellAction= using =ShellActionBuilder=, fill in some attributes 
then create a =Workflow= using =WorkflowBuilder= using
-the =ShellAction= just built. Return the =Workflow=.
+Then, create a class that `implements WorkflowFactory` and implement the 
method `WorkflowFactory#create()`. Inside that method,
+create a `ShellAction` using `ShellActionBuilder`, fill in some attributes, 
then create a `Workflow` using `WorkflowBuilder` with
+the `ShellAction` just built. Return the `Workflow`.
 
-Compile a Fluent Job API jar that has the =Main-Class= attribute set to the 
=WorkflowFactory= subclass just created,
-e.g. =shell-workflow.jar=.
+Compile a Fluent Job API jar that has the `Main-Class` attribute set to the 
`WorkflowFactory` subclass just created,
+e.g. `shell-workflow.jar`.
 
-Moving on, 
[[DG_CommandLineTool#Checking_a_workflow_definition_generated_by_a_Fluent_Job_API_jar_file][check
 via command line]] that
+Moving on, [check via command 
line](DG_CommandLineTool.html#Checking_a_workflow_definition_generated_by_a_Fluent_Job_API_jar_file)
 that
 the compiled API JAR file is valid.
 
 As a finishing touch,
-[[DG_CommandLineTool#Running_a_workflow_definition_generated_by_a_Fluent_Job_API_jar_file][run
 via command line]] the Fluent Job API
+[run via command 
line](DG_CommandLineTool.html#Running_a_workflow_definition_generated_by_a_Fluent_Job_API_jar_file)
 the Fluent Job API
 workflow.
 
-*For reference, a simplistic API JAR example consisting of a =Workflow= having 
only one =ShellAction=:*
-<verbatim>
+**For reference, a simplistic API JAR example consisting of a `Workflow` 
having only one `ShellAction`:**
+
+```
 public class MyFirstWorkflowFactory implements WorkflowFactory {
 .
     @Override
@@ -129,10 +130,11 @@ public class MyFirstWorkflowFactory implements 
WorkflowFactory {
         return shellWorkflow;
     }
 }
-</verbatim>
+```
+
+**After check, the generated workflow XML looks like this:**
 
-*After check, the generated workflow XML looks like this:*
-<verbatim>
+```
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 <workflow:workflow-app xmlns:workflow="uri:oozie:workflow:1.0"  
xmlns:shell="uri:oozie:shell-action:1.0" name="shell-workflow">
 .
@@ -163,21 +165,22 @@ public class MyFirstWorkflowFactory implements 
WorkflowFactory {
     <workflow:end name="end"/>
 .
 </workflow:workflow-app>
-</verbatim>
+```
 
 
----+++ A More Verbose Example
+### A More Verbose Example
 
-*Error handling*
+**Error handling**
 
-If you would like to provide some error handling in case of action failure, 
you should add an =ErrorHandler= to the =Node=
-representing the action. The error handler action will be added as the 
="error-transition"= of the original action in the generated
-Oozie workflow XML. Both the ="ok-transition"= and the ="error-transition"= of 
the error handler action itself will lead to an
+If you would like to provide some error handling in case of action failure, 
you should add an `ErrorHandler` to the `Node`
+representing the action. The error handler action will be added as the 
`"error-transition"` of the original action in the generated
+Oozie workflow XML. Both the `"ok-transition"` and the `"error-transition"` of 
the error handler action itself will lead to an
 autogenerated kill node.
 
-*Here you find an example consisting of a =Workflow= having three 
=ShellAction=s, an error handler =EmailAction=, and one =decision=
-to sort out which way to go:*
-<verbatim>
+**Here you find an example consisting of a `Workflow` having three 
`ShellAction`s, an error handler `EmailAction`, and one `decision`
+to sort out which way to go:**
+
+```
 public class MySecondWorkflowFactory implements WorkflowFactory {
 .
     @Override
@@ -218,10 +221,11 @@ public class MySecondWorkflowFactory implements 
WorkflowFactory {
         return workflow;
     }
 }
-</verbatim>
+```
+
+**After check, the generated workflow XML looks like this:**
 
-*After check, the generated workflow XML looks like this:*
-<verbatim>
+```
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 <workflow:workflow-app ... name="shell-example">
 .
@@ -305,72 +309,76 @@ public class MySecondWorkflowFactory implements 
WorkflowFactory {
     <workflow:end name="end"/>
 .
 </workflow:workflow-app>
-</verbatim>
+```
 
----+++ Runtime Limitations
+### Runtime Limitations
 
 Even if Fluent Job API tries to abstract away the task of assembling job 
descriptor XML files, there are some runtime
-limitations apart from the [[DG_FluentJobAPI#Non-goals][non-goals section]]. 
All such limitations are based on the current
+limitations apart from the [non-goals 
section](DG_FluentJobAPI.html#Non-goals). All such limitations are based on the 
current
 implementations and subject to further improvements and fixes.
 
-There is only one =kill= possibility in every =workflow=. That is, there can 
be defined only one =action= to be executed just before
-any other =action= turns to be =kill=ed. Furthermore, =kill= goes to =end= 
directly. That means, there cannot be defined an
-intricate network of =kill= nodes, cascading sometimes to other =action= 
nodes, avoiding going to =end= in the first place.
+There is only one `kill` possibility in every `workflow`. That is, only one 
`action` can be defined to be executed just before
+any other `action` gets `kill`ed. Furthermore, `kill` goes to `end` 
directly. This means an intricate network of
+`kill` nodes, cascading to other `action` nodes instead of going straight to 
`end`, cannot be defined.
 
-There are places where =decision= node generation fails, throwing an 
=Exception=. The problem is that during the transformation,
-Fluent Job API reaches a state where there is a =fork= that transitions to two 
=decision= nodes, which in turn split into two paths
-each. One of the paths from the first =decision= joins a path from the other 
=decision=, but the remaining conditional paths never
-meet. Therefore, not all paths originating from the =fork= converge to the 
same =join=.
+There are places where `decision` node generation fails, throwing an 
`Exception`. The problem is that during the transformation,
+Fluent Job API reaches a state where there is a `fork` that transitions to two 
`decision` nodes, which in turn split into two paths
+each. One of the paths from the first `decision` joins a path from the other 
`decision`, but the remaining conditional paths never
+meet. Therefore, not all paths originating from the `fork` converge to the 
same `join`.
 
----++ Appendixes
+## Appendixes
 
----+++ AE.A Appendix A, API JAR format
+### AE.A Appendix A, API JAR format
 
-It's kept simple - all the necessary Java class files that are needed are 
packed into a JAR file, that has a =META-INF/MANIFEST.MF=
-with a single entry having the =Main-Class= attribute set to the fully 
qualified name of the entry class, the one that
-=implements WorkflowFactory=:
-<verbatim>
+It's kept simple - all the necessary Java class files are packed into a JAR 
file that has a `META-INF/MANIFEST.MF`
+with a single entry having the `Main-Class` attribute set to the fully 
qualified name of the entry class, the one that
+`implements WorkflowFactory`:
+
+```
 Main-Class: org.apache.oozie.jobs.api.factory.MyFirstWorkflowFactory
-</verbatim>
+```
+
+**An example of the command line assembly of such an API JAR:**
 
-*An example of the command line assembly of such an API JAR:*
-<verbatim>
+```
 jar cfe simple-workflow.jar 
org.apache.oozie.fluentjob.api.factory.MyFirstWorkflowFactory \
 -C /Users/forsage/Workspace/oozie/fluent-job/fluent-job-api/target/classes \
 org/apache/oozie/jobs/api/factory/MyFirstWorkflowFactory.class
-</verbatim>
+```
+
+### AE.B Appendix B, Some Useful Builder classes
 
----+++ AE.B Appendix B, Some Useful Builder classes
+For a complete list of `Builder` classes, please have a look at 
`oozie-fluent-job-api` artifact's following packages:
 
-For a complete list of =Builder= classes, please have a look at 
=oozie-fluent-job-api= artifact's following packages:
-   * =org.apache.oozie.fluentjob.api.action= - =ActionBuilder= classes
-   * =org.apache.oozie.fluentjob.api.factory= - the single entry point, 
=WorkflowFactory= is here
-   * =org.apache.oozie.fluentjob.api.workflow= - workflow related =Builder= 
classes
+   * `org.apache.oozie.fluentjob.api.action` - `ActionBuilder` classes
+   * `org.apache.oozie.fluentjob.api.factory` - the single entry point, 
`WorkflowFactory` is here
+   * `org.apache.oozie.fluentjob.api.workflow` - workflow related `Builder` 
classes
 
-On examples how to use these please see =oozie-examples= artifact's 
=org.apache.oozie.example.fluentjob= package.
+For examples on how to use these, please see the `oozie-examples` artifact's 
`org.apache.oozie.example.fluentjob` package.
 
----+++ AE.C Appendix C, How To Extend
+### AE.C Appendix C, How To Extend
 
 Sometimes there are new XSD versions of an existing custom or core workflow 
action, sometimes it's a new custom workflow action that
 gets introduced. In any case, Fluent Job API needs to keep up with the changes.
 
 Here are the steps needed:
-   * in =fluent-job-api/pom.xml= extend or modify =jaxb2-maven-plugin= section 
=sources= by a new =source=
-   * in =fluent-job-api/src/main/xjb/bindings.xml= extend by a new or modify 
an existing =jaxb:bindings=
-   * in =fluent-job-api=, =org.apache.oozie.fluentjob.api.mapping= package, 
introduce a new or modify an existing =DozerConverter=
-   * in =dozer_config.xml=, introduce a new or modify an existing =converter= 
inside =custom-converters=
-   * in =fluent-job-api=, =org.apache.oozie.fluentjob.api.action=, introduce a 
new =Action= and a new =Builder=
+
+   * in `fluent-job-api/pom.xml` extend or modify `jaxb2-maven-plugin` section 
`sources` by a new `source`
+   * in `fluent-job-api/src/main/xjb/bindings.xml` extend by a new or modify 
an existing `jaxb:bindings`
+   * in `fluent-job-api`, `org.apache.oozie.fluentjob.api.mapping` package, 
introduce a new or modify an existing `DozerConverter`
+   * in `dozer_config.xml`, introduce a new or modify an existing `converter` 
inside `custom-converters`
+   * in `fluent-job-api`, `org.apache.oozie.fluentjob.api.action`, introduce a 
new `Action` and a new `Builder`
    * write new / modify existing relevant unit and integration tests
 
----+++ AE.D Appendix D, API compatibility guarantees
+### AE.D Appendix D, API compatibility guarantees
 
-Fluent Job API is available beginning version 5.1.0. It's marked 
=@InterfaceAudience.Private= (intended for use in Oozie itself) and
-=@InterfaceStability.Unstable= (no stability guarantees are provided across 
any level of release granularity) to indicate that for
+Fluent Job API is available beginning version 5.1.0. It's marked 
`@InterfaceAudience.Private` (intended for use in Oozie itself) and
+`@InterfaceStability.Unstable` (no stability guarantees are provided across 
any level of release granularity) to indicate that for
 the next few minor releases it's bound to change a lot.
 
-Beginning from around 5.4.0 planning the next phase, 
=@InterfaceStability.Evolving= (compatibility breaking only between minors),
-and a few minor releases later, =@InterfaceAudience.Public= (safe to use 
outside of Oozie).
+Beginning from around 5.4.0, the next phase is planned: 
`@InterfaceStability.Evolving` (compatibility breaking only between minors),
+and a few minor releases later, `@InterfaceAudience.Public` (safe to use 
outside of Oozie).
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_HCatalogIntegration.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_HCatalogIntegration.twiki 
b/docs/src/site/twiki/DG_HCatalogIntegration.twiki
index d3107b4..5c592e8 100644
--- a/docs/src/site/twiki/DG_HCatalogIntegration.twiki
+++ b/docs/src/site/twiki/DG_HCatalogIntegration.twiki
@@ -1,32 +1,32 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+!! HCatalog Integration (Since Oozie 4.x)
+[::Go back to Oozie Documentation Index::](index.html)
 
-%TOC%
+# HCatalog Integration (Since Oozie 4.x)
 
----++ HCatalog Overview
-    HCatalog is a table and storage management layer for Hadoop that enables 
users with different data processing
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
+
+## HCatalog Overview
+HCatalog is a table and storage management layer for Hadoop that enables users 
with different data processing
 tools - Pig, MapReduce, and Hive - to more easily read and write data on the 
grid. HCatalog's table abstraction presents
 users with a relational view of data in the Hadoop distributed file system 
(HDFS).
 
-    Read 
[[http://incubator.apache.org/hcatalog/docs/r0.5.0/index.html][HCatalog 
Documentation]] to know more about HCatalog.
+Read [HCatalog 
Documentation](http://incubator.apache.org/hcatalog/docs/r0.5.0/index.html) to 
know more about HCatalog.
 Working with HCatalog using pig is detailed in
-[[http://incubator.apache.org/hcatalog/docs/r0.5.0/loadstore.html][HCatLoader 
and HCatStorer]].
+[HCatLoader and 
HCatStorer](http://incubator.apache.org/hcatalog/docs/r0.5.0/loadstore.html).
 Working with HCatalog using MapReduce directly is detailed in
-[[http://incubator.apache.org/hcatalog/docs/r0.5.0/inputoutput.html][HCatInputFormat
 and HCatOutputFormat]].
+[HCatInputFormat and 
HCatOutputFormat](http://incubator.apache.org/hcatalog/docs/r0.5.0/inputoutput.html).
 
----+++ HCatalog notifications
+### HCatalog notifications
    HCatalog provides notifications through a JMS provider like ActiveMQ when a 
new partition is added to a table in the
 database. This allows applications to consume those events and schedule the 
work that depends on them. In case of Oozie,
 the notifications are used to determine the availability of HCatalog 
partitions defined as data dependencies in the
 Coordinator and trigger workflows.
 
-Read 
[[http://incubator.apache.org/hcatalog/docs/r0.5.0/notification.html][HCatalog 
Notification]] to know more about
+Read [HCatalog 
Notification](http://incubator.apache.org/hcatalog/docs/r0.5.0/notification.html)
 to know more about
 notifications in HCatalog.
 
----++ Oozie HCatalog Integration
+## Oozie HCatalog Integration
   Oozie's Coordinators so far have been supporting HDFS directories as an 
input data dependency. When an HDFS URI
 template is specified as a dataset and input events are defined in Coordinator 
for the dataset, Oozie performs data
 availability checks by polling the HDFS directory URIs resolved based on the 
nominal time. When all the data
@@ -49,14 +49,14 @@ coordinator action was materialized and to deal with missed 
notifications due to
 fallback polling is usually lower than the constant polling. Defaults are 10 
minutes and 1 minute respectively.
 
 
----+++ Oozie Server Configuration
-   Refer to [[AG_Install#HCatalog_Configuration][HCatalog Configuration]] 
section of [[AG_Install][Oozie Install]]
+### Oozie Server Configuration
+   Refer to [HCatalog Configuration](AG_Install.html#HCatalog_Configuration) 
section of [Oozie Install](AG_Install.html)
 documentation for the Oozie server side configuration required to support 
HCatalog table partitions as a data dependency.
 
----+++ HCatalog URI Format
+### HCatalog URI Format
 
 Oozie supports specifying HCatalog partitions as a data dependency through a 
URI notation. The HCatalog partition URI is
-used to identify a set of table partitions: 
hcat://bar:8020/logsDB/logsTable/dt=20090415;region=US.
+used to identify a set of table partitions: 
`hcat://bar:8020/logsDB/logsTable/dt=20090415;region=US`
 
 The format to specify a HCatalog table URI is:
 
@@ -67,14 +67,15 @@ The format to specify a HCatalog table partition URI is:
 hcat://[metastore server]:[port]/[database name]/[table 
name]/[partkey1]=[value];[partkey2]=[value];...
 
 For example,
-<verbatim>
+
+```
   <dataset name="logs" frequency="${coord:days(1)}"
            initial-instance="2009-02-15T08:15Z" timezone="America/Los_Angeles">
     <uri-template>
       
hcat://myhcatmetastore:9080/database1/table1/datestamp=${YEAR}${MONTH}${DAY}${HOUR};region=USA
     </uri-template>
   </dataset>
-</verbatim>
+```
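
The time variables in the `uri-template` above are resolved from the nominal 
time of the coordinator action. As a rough, self-contained illustration of the 
idea (a hypothetical sketch, not Oozie's actual resolver; the class and method 
names are made up):

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.util.Map;

public class UriTemplateResolver {

    // Resolve ${YEAR}/${MONTH}/${DAY}/${HOUR} placeholders from a nominal
    // time, zero-padded the way the dataset URI template expects.
    static String resolve(String template, ZonedDateTime nominal) {
        Map<String, String> vars = Map.of(
            "YEAR", String.format("%04d", nominal.getYear()),
            "MONTH", String.format("%02d", nominal.getMonthValue()),
            "DAY", String.format("%02d", nominal.getDayOfMonth()),
            "HOUR", String.format("%02d", nominal.getHour()));
        String resolved = template;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            resolved = resolved.replace("${" + e.getKey() + "}", e.getValue());
        }
        return resolved;
    }

    public static void main(String[] args) {
        String template = "hcat://myhcatmetastore:9080/database1/table1/"
            + "datestamp=${YEAR}${MONTH}${DAY}${HOUR};region=USA";
        // Nominal time matching the dataset's initial-instance above.
        ZonedDateTime nominal =
            ZonedDateTime.of(2009, 2, 15, 8, 15, 0, 0, ZoneOffset.UTC);
        // Prints ...datestamp=2009021508;region=USA
        System.out.println(resolve(template, nominal));
    }
}
```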
 
 Post Oozie-4.3.0 release, Oozie also supports multiple HCatalog servers in 
the URI. Each server needs to be
separated by a single comma (,).
@@ -84,59 +85,63 @@ The format to specify a HCatalog table partition URI with 
multiple HCatalog serv
 
hcat://[metastore_server]:[port],[metastore_server]:[port]/[database_name]/[table_name]/[partkey1]=[value];[partkey2]=[value];...
 
 For example,
-<verbatim>
+
+```
   <dataset name="logs" frequency="${coord:days(1)}"
            initial-instance="2009-02-15T08:15Z" timezone="America/Los_Angeles">
     <uri-template>
       
hcat://myhcatmetastore:9080,myhcatmetastore:9080/database1/table1/datestamp=${YEAR}${MONTH}${DAY}${HOUR};region=USA
     </uri-template>
   </dataset>
-</verbatim>
+```
 
 The regex for parsing the multiple HCatalog URI is exposed via oozie-site.xml, 
so users can modify it if there is any
-requirement. Key for the regex is: =oozie.hcat.uri.regex.pattern=
+requirement. Key for the regex is: `oozie.hcat.uri.regex.pattern`
 
 For example, the following has a multiple HCatalog URI with multiple HCatalog 
servers. Oozie will split it into
two HCatalog URIs using the above-mentioned regex.
 
-hcat://hostname1:1000,hcat://hostname2:2000/mydb/clicks/datastamp=12;region=us,scheme://hostname3:3000,scheme://hostname4:4000,scheme://hostname5:5000/db/table/p1=12;p2=us
+`hcat://hostname1:1000,hcat://hostname2:2000/mydb/clicks/datastamp=12;region=us,scheme://hostname3:3000,scheme://hostname4:4000,scheme://hostname5:5000/db/table/p1=12;p2=us`
 
 After split: (This is internal Oozie mechanism)
 
-hcat://hostname1:1000,hcat://hostname2:2000/mydb/clicks/datastamp=12;region=us
+`hcat://hostname1:1000,hcat://hostname2:2000/mydb/clicks/datastamp=12;region=us`
 
-scheme://hostname3:3000,scheme://hostname4:4000,scheme://hostname5:5000/db/table/p1=12;p2=us
+`scheme://hostname3:3000,scheme://hostname4:4000,scheme://hostname5:5000/db/table/p1=12;p2=us`
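
The split described above can be sketched with a regular expression. The 
pattern below is illustrative only; the real default is whatever 
`oozie.hcat.uri.regex.pattern` is set to in oozie-site.xml. It treats each 
logical URI as one or more comma-separated `scheme://host:port` authorities 
followed by a single path/partition part:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HCatUriSplitter {

    // Hypothetical pattern for illustration: scheme://host:port authorities
    // (comma-separated, no slash inside) followed by a path up to the next
    // comma that starts a new URI group.
    private static final Pattern URI_GROUP =
        Pattern.compile("\\w+://[^/,]+(?:,\\w+://[^/,]+)*/[^,]*");

    static List<String> split(String multiUri) {
        List<String> uris = new ArrayList<>();
        Matcher m = URI_GROUP.matcher(multiUri);
        while (m.find()) {
            uris.add(m.group());
        }
        return uris;
    }

    public static void main(String[] args) {
        String multi =
            "hcat://hostname1:1000,hcat://hostname2:2000/mydb/clicks/datestamp=12;region=us,"
            + "scheme://hostname3:3000,scheme://hostname4:4000,scheme://hostname5:5000/db/table/p1=12;p2=us";
        // Prints the two logical HCatalog URIs on separate lines.
        for (String uri : split(multi)) {
            System.out.println(uri);
        }
    }
}
```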
 
-#HCatalogLibraries
----+++ HCatalog Libraries
+<a name="HCatalogLibraries"></a>
+### HCatalog Libraries
 
-A workflow action interacting with HCatalog requires the following jars in the 
classpath: 
+A workflow action interacting with HCatalog requires the following jars in the 
classpath:
 hcatalog-core.jar, hcatalog-pig-adapter.jar, webhcat-java-client.jar, 
hive-common.jar, hive-exec.jar,
 hive-metastore.jar, hive-serde.jar and libfb303.jar.
 hive-site.xml which has the configuration to talk to the HCatalog server also 
needs to be in the classpath. The correct
 version of HCatalog and hive jars should be placed in classpath based on the 
version of HCatalog installed on the cluster.
 
 The jars can be added to the classpath of the action using one of the below 
ways.
-   * You can place the jars and hive-site.xml in the system shared library. 
The shared library for a pig, hive or java action can be overridden to include 
hcatalog shared libraries along with the action's shared library. Refer to 
[[WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3][Shared
 Libraries]] for more information. The oozie-sharelib-[version].tar.gz in the 
oozie distribution bundles the required HCatalog jars in a hcatalog sharelib. 
If using a different version of HCatalog than the one bundled in the sharelib, 
copy the required HCatalog jars from such version into the sharelib.
+
+   * You can place the jars and hive-site.xml in the system shared library. 
The shared library for a pig, hive or java action can be overridden to include 
hcatalog shared libraries along with the action's shared library. Refer to 
[Shared 
Libraries](WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3)
 for more information. The oozie-sharelib-[version].tar.gz in the oozie 
distribution bundles the required HCatalog jars in a hcatalog sharelib. If 
using a different version of HCatalog than the one bundled in the sharelib, 
copy the required HCatalog jars from such version into the sharelib.
    * You can place the jars and hive-site.xml in the workflow application lib/ 
path.
-   * You can specify the location of the jar files in =archive= tag and the 
hive-site.xml in =file= tag in the corresponding pig, hive or java action.
+   * You can specify the location of the jar files in `archive` tag and the 
hive-site.xml in `file` tag in the corresponding pig, hive or java action.
 
----+++ Coordinator
+### Coordinator
+
+Refer to [Coordinator Functional 
Specification](CoordinatorFunctionalSpec.html) for more information about
 
-Refer to [[CoordinatorFunctionalSpec][Coordinator Functional Specification]] 
for more information about
    * how to specify HCatalog partitions as a data dependency using input 
dataset events
    * how to specify HCatalog partitions as output dataset events
    * the various EL functions available to work with HCatalog dataset events 
and how to use them to access HCatalog partitions in pig, hive or java actions 
in a workflow.
 
----+++ Workflow
-Refer to [[WorkflowFunctionalSpec][Workflow Functional Specification]] for 
more information about
+### Workflow
+Refer to [Workflow Functional Specification](WorkflowFunctionalSpec.html) for 
more information about
+
   * how to drop HCatalog table/partitions in the prepare block of an action
    * the HCatalog EL functions available to use in workflows
 
-Refer to [[DG_ActionAuthentication][Action Authentication]] for more 
information about
+Refer to [Action Authentication](DG_ActionAuthentication.html) for more 
information about
+
    * how to access a secure HCatalog from any action (e.g. hive, pig, etc) in 
a workflow
 
----+++ Known Issues
+### Known Issues
   * When rerunning a coordinator action without specifying the -nocleanup 
option, if the 'output-event' entries are HDFS directories, then they are 
deleted. But if the 'output-event' is an HCatalog partition, currently the 
partition is not dropped.
 
-</noautolink>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_Hive2ActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_Hive2ActionExtension.twiki 
b/docs/src/site/twiki/DG_Hive2ActionExtension.twiki
index efbe56d..d81ed02 100644
--- a/docs/src/site/twiki/DG_Hive2ActionExtension.twiki
+++ b/docs/src/site/twiki/DG_Hive2ActionExtension.twiki
@@ -1,41 +1,42 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
+
+[::Go back to Oozie Documentation Index::](index.html)
 
 -----
 
----+!! Oozie Hive 2 Action Extension
+# Oozie Hive 2 Action Extension
 
-%TOC%
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
----++ Hive 2 Action
+## Hive 2 Action
 
-The =hive2= action runs Beeline to connect to Hive Server 2.
+The `hive2` action runs Beeline to connect to Hive Server 2.
 
 The workflow job will wait until the Hive Server 2 job completes before
 continuing to the next action.
 
-To run the Hive Server 2 job, you have to configure the =hive2= action with 
the =resource-manager=, =name-node=, =jdbc-url=,
- =password= elements, and either Hive's =script= or =query= element, as well 
as the necessary parameters and configuration.
+To run the Hive Server 2 job, you have to configure the `hive2` action with 
the `resource-manager`, `name-node`, `jdbc-url`,
+ `password` elements, and either Hive's `script` or `query` element, as well 
as the necessary parameters and configuration.
 
-A =hive2= action can be configured to create or delete HDFS directories
+A `hive2` action can be configured to create or delete HDFS directories
 before starting the Hive Server 2 job.
 
 Oozie EL expressions can be used in the inline configuration. Property
-values specified in the =configuration= element override values specified
-in the =job-xml= file.
+values specified in the `configuration` element override values specified
+in the `job-xml` file.
 
-As with Hadoop =map-reduce= jobs, it is possible to add files and
+As with Hadoop `map-reduce` jobs, it is possible to add files and
 archives in order to make them available to Beeline. Refer to the
-[WorkflowFunctionalSpec#FilesArchives][Adding Files and Archives for the Job]
+[Adding Files and Archives for the 
Job](WorkflowFunctionalSpec.html#FilesArchives)
 section for more information about this feature.
 
 Oozie Hive 2 action supports Hive scripts with parameter variables, their
-syntax is =${VARIABLES}=.
+syntax is `${VARIABLES}`.
+
+**Syntax:**
 
-*Syntax:*
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="[NODE-NAME]">
@@ -75,44 +76,45 @@ syntax is =${VARIABLES}=.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The =prepare= element, if present, indicates a list of paths to delete
-or create before starting the job. Specified paths must start with 
=hdfs://HOST:PORT=.
+The `prepare` element, if present, indicates a list of paths to delete
+or create before starting the job. Specified paths must start with 
`hdfs://HOST:PORT`.
 
-The =job-xml= element, if present, specifies a file containing configuration
-for Beeline. Multiple =job-xml= elements are allowed in order to specify 
multiple =job.xml= files.
+The `job-xml` element, if present, specifies a file containing configuration
+for Beeline. Multiple `job-xml` elements are allowed in order to specify 
multiple `job.xml` files.
 
-The =configuration= element, if present, contains configuration
+The `configuration` element, if present, contains configuration
 properties that are passed to the Beeline job.
 
-The =jdbc-url= element must contain the JDBC URL for the Hive Server 2.  
Beeline will use this to know where to connect to.
+The `jdbc-url` element must contain the JDBC URL for the Hive Server 2.
Beeline will use this to determine where to connect.
 
-The =password= element must contain the password of the current user.  
However, the =password= is only used if Hive Server 2 is
+The `password` element must contain the password of the current user.  
However, the `password` is only used if Hive Server 2 is
 backed by something requiring a password (e.g. LDAP); non-secured Hive Server 
2 or Kerberized Hive Server 2 don't require a password
-so in those cases the =password= is ignored and can be omitted from the action 
XML.  It is up to the user to ensure that a password
+so in those cases the `password` is ignored and can be omitted from the action 
XML.  It is up to the user to ensure that a password
 is specified when required.
 
-The =script= element must contain the path of the Hive script to
+The `script` element must contain the path of the Hive script to
 execute. The Hive script can be templatized with variables of the form
-=${VARIABLE}=. The values of these variables can then be specified
-using the =params= element.
+`${VARIABLE}`. The values of these variables can then be specified
+using the `params` element.
 
-The =query= element available from uri:oozie:hive2-action:0.2, can be used 
instead of the =script= element. It allows for embedding
-queries within the =worklfow.xml= directly.  Similar to the =script= element, 
it also allows for the templatization of variables
-in the form =${VARIABLE}=.
+The `query` element, available from uri:oozie:hive2-action:0.2, can be used 
instead of the `script` element. It allows for embedding
+queries within the `workflow.xml` directly.  Similar to the `script` element, 
it also allows for the templatization of variables
+in the form `${VARIABLE}`.
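
As a sketch, a small embedded query (the table name is illustrative) could be written as:

```
<query>
    DROP TABLE IF EXISTS ${TABLE};
    CREATE TABLE ${TABLE} (id INT);
</query>
```

The `${TABLE}` variable would then be supplied via the `params` element.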
 
-The =params= element, if present, contains parameters to be passed to
+The `params` element, if present, contains parameters to be passed to
 the Hive script.
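
For example, values for a templatized script could be supplied as follows (the variable names and paths are illustrative):

```
<script>script.q</script>
<param>INPUT=/user/me/input-data</param>
<param>OUTPUT=/user/me/output-data</param>
```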
 
-The =argument= element, if present, contains arguments to be passed as-is to 
Beeline.
+The `argument` element, if present, contains arguments to be passed as-is to 
Beeline.
 
 All the above elements can be parameterized (templatized) using EL
 expressions.
 
-*Example:*
+**Example:**
 
-<verbatim>
+
+```
 <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="my-hive2-action">
@@ -139,22 +141,21 @@ expressions.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
 
----+++ Security
+### Security
 
-As mentioned above, =password= is only used in cases where Hive Server 2 is 
backed by something requiring a password (e.g. LDAP).
+As mentioned above, `password` is only used in cases where Hive Server 2 is 
backed by something requiring a password (e.g. LDAP).
 Non-secured Hive Server 2 and Kerberized Hive Server 2 don't require a 
password so in these cases it can be omitted.
-See [[DG_UnifiedCredentialsModule][here]] for more information on the 
configuration for using the Hive Server 2 Action
-with a Kerberized Hive Server 2.
 
----++ Appendix, Hive 2 XML-Schema
+## Appendix, Hive 2 XML-Schema
+
+### AE.A Appendix A, Hive 2 XML-Schema
 
----+++ AE.A Appendix A, Hive 2 XML-Schema
+#### Hive 2 Action Schema Version 1.0
 
----++++ Hive 2 Action Schema Version 1.0
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive2="uri:oozie:hive2-action:1.0" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:hive2-action:1.0">
@@ -188,10 +189,11 @@ with a Kerberized Hive Server 2.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
 
----++++ Hive 2 Action Schema Version 0.2
-<verbatim>
+#### Hive 2 Action Schema Version 0.2
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive2="uri:oozie:hive2-action:0.2" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:hive2-action:0.2">
@@ -248,10 +250,11 @@ with a Kerberized Hive Server 2.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+#### Hive 2 Action Schema Version 0.1
 
----++++ Hive 2 Action Schema Version 0.1
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive2="uri:oozie:hive2-action:0.1" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:hive2-action:0.1">
@@ -305,8 +308,8 @@ with a Kerberized Hive Server 2.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_HiveActionExtension.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_HiveActionExtension.twiki 
b/docs/src/site/twiki/DG_HiveActionExtension.twiki
index aaa74fa..99a73c6 100644
--- a/docs/src/site/twiki/DG_HiveActionExtension.twiki
+++ b/docs/src/site/twiki/DG_HiveActionExtension.twiki
@@ -1,48 +1,49 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
+
+[::Go back to Oozie Documentation Index::](index.html)
 
 -----
 
----+!! Oozie Hive Action Extension
+# Oozie Hive Action Extension
 
-%TOC%
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
 
-#HiveAction
----++ Hive Action
+<a name="HiveAction"></a>
+## Hive Action
 
-The =hive= action runs a Hive job.
+The `hive` action runs a Hive job.
 
 The workflow job will wait until the Hive job completes before
 continuing to the next action.
 
-To run the Hive job, you have to configure the =hive= action with the 
=resource-manager=, =name-node= and Hive =script=
-(or Hive =query=) elements as well as the necessary parameters and 
configuration.
+To run the Hive job, you have to configure the `hive` action with the 
`resource-manager`, `name-node` and Hive `script`
+(or Hive `query`) elements as well as the necessary parameters and 
configuration.
 
-A =hive= action can be configured to create or delete HDFS directories
+A `hive` action can be configured to create or delete HDFS directories
 before starting the Hive job.
 
-Hive configuration can be specified with a file, using the =job-xml=
-element, and inline, using the =configuration= elements.
+Hive configuration can be specified with a file, using the `job-xml`
+element, and inline, using the `configuration` elements.
 
 Oozie EL expressions can be used in the inline configuration. Property
-values specified in the =configuration= element override values specified
-in the =job-xml= file.
+values specified in the `configuration` element override values specified
+in the `job-xml` file.
 
-Note that YARN =yarn.resourcemanager.address= (=resource-manager=) and HDFS 
=fs.default.name= (=name-node=) properties
+Note that YARN `yarn.resourcemanager.address` (`resource-manager`) and HDFS 
`fs.default.name` (`name-node`) properties
 must not be present in the inline configuration.
 
-As with Hadoop =map-reduce= jobs, it is possible to add files and
+As with Hadoop `map-reduce` jobs, it is possible to add files and
 archives in order to make them available to the Hive job. Refer to the
 [WorkflowFunctionalSpec#FilesArchives][Adding Files and Archives for the Job]
 section for more information about this feature.
 
 Oozie Hive action supports Hive scripts with parameter variables, their
-syntax is =${VARIABLES}=.
+syntax is `${VARIABLES}`.
+
+**Syntax:**
 
-*Syntax:*
 
-<verbatim>
+```
 <workflow-app name="[WF-DEF-NAME]" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="[NODE-NAME]">
@@ -77,37 +78,38 @@ syntax is =${VARIABLES}=.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
-The =prepare= element, if present, indicates a list of paths to delete
-or create before starting the job. Specified paths must start with 
=hdfs://HOST:PORT=.
+The `prepare` element, if present, indicates a list of paths to delete
+or create before starting the job. Specified paths must start with 
`hdfs://HOST:PORT`.
 
-The =job-xml= element, if present, specifies a file containing configuration
-for the Hive job. As of schema 0.3, multiple =job-xml= elements are allowed in 
order to 
-specify multiple =job.xml= files.
+The `job-xml` element, if present, specifies a file containing configuration
+for the Hive job. As of schema 0.3, multiple `job-xml` elements are allowed in 
order to
+specify multiple `job.xml` files.
 
-The =configuration= element, if present, contains configuration
+The `configuration` element, if present, contains configuration
 properties that are passed to the Hive job.
 
-The =script= element must contain the path of the Hive script to
+The `script` element must contain the path of the Hive script to
 execute. The Hive script can be templatized with variables of the form
-=${VARIABLE}=. The values of these variables can then be specified
-using the =params= element.
+`${VARIABLE}`. The values of these variables can then be specified
+using the `params` element.
 
-The =query= element available from uri:oozie:hive-action:0.6, can be used 
instead of the
-=script= element. It allows for embedding queries within the =worklfow.xml= 
directly.
-Similar to the =script= element, it also allows for the templatization of 
variables in the
-form =${VARIABLE}=.
+The `query` element, available from uri:oozie:hive-action:0.6, can be used 
instead of the
+`script` element. It allows for embedding queries within the `workflow.xml` 
directly.
+Similar to the `script` element, it also allows for the templatization of 
variables in the
+form `${VARIABLE}`.
 
-The =params= element, if present, contains parameters to be passed to
+The `params` element, if present, contains parameters to be passed to
 the Hive script.
 
 All the above elements can be parameterized (templatized) using EL
 expressions.
 
-*Example:*
+**Example:**
+
 
-<verbatim>
+```
 <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:1.0">
     ...
     <action name="myfirsthivejob">
@@ -132,14 +134,14 @@ expressions.
     </action>
     ...
 </workflow-app>
-</verbatim>
+```
 
----+++ Hive Default and Site Configuration Files
+### Hive Default and Site Configuration Files
 
-Hive (as of Hive 0.8) ignores a =hive-default.xml= file.  As a result, Oozie 
(as of Oozie 3.4) ignores the =oozie.hive.defaults=
+Hive (as of Hive 0.8) ignores a `hive-default.xml` file.  As a result, Oozie 
(as of Oozie 3.4) ignores the `oozie.hive.defaults`
 property that was previously required by earlier versions of Oozie for the 
Hive action.
 
----+++ Hive Action Logging
+### Hive Action Logging
 
 Hive action logs are redirected to the Oozie Launcher map-reduce job task 
STDOUT/STDERR that runs Hive.
 
@@ -147,14 +149,15 @@ From Oozie web-console, from the Hive action pop up using 
the 'Console URL' link
 it is possible to navigate to the Oozie Launcher map-reduce job task logs via 
the Hadoop job-tracker web-console.
 
 The logging level of the Hive action can be set in the Hive action 
configuration using the
-property =oozie.hive.log.level=. The default value is =INFO=.
+property `oozie.hive.log.level`. The default value is `INFO`.
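
For instance, the level could be raised to DEBUG through the action's inline configuration (a sketch):

```
<configuration>
    <property>
        <name>oozie.hive.log.level</name>
        <value>DEBUG</value>
    </property>
</configuration>
```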
 
----++ Appendix, Hive XML-Schema
+## Appendix, Hive XML-Schema
 
----+++ AE.A Appendix A, Hive XML-Schema
+### AE.A Appendix A, Hive XML-Schema
 
----++++ Hive Action Schema Version 1.0
-<verbatim>
+#### Hive Action Schema Version 1.0
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive="uri:oozie:hive-action:1.0"
            elementFormDefault="qualified"
@@ -187,10 +190,11 @@ property =oozie.hive.log.level=. The default value is 
=INFO=.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+#### Hive Action Schema Version 0.6
 
----++++ Hive Action Schema Version 0.6
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive="uri:oozie:hive-action:0.6" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:hive-action:0.6">
@@ -245,9 +249,10 @@ property =oozie.hive.log.level=. The default value is 
=INFO=.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
----++++ Hive Action Schema Version 0.5
-<verbatim>
+```
+#### Hive Action Schema Version 0.5
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive="uri:oozie:hive-action:0.5" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:hive-action:0.5">
@@ -299,10 +304,11 @@ property =oozie.hive.log.level=. The default value is 
=INFO=.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+#### Hive Action Schema Version 0.4
 
----++++ Hive Action Schema Version 0.4
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive="uri:oozie:hive-action:0.4" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:hive-action:0.4">
@@ -353,10 +359,11 @@ property =oozie.hive.log.level=. The default value is 
=INFO=.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+#### Hive Action Schema Version 0.3
 
----++++ Hive Action Schema Version 0.3
-<verbatim>
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive="uri:oozie:hive-action:0.3" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:hive-action:0.3">
@@ -407,10 +414,11 @@ property =oozie.hive.log.level=. The default value is 
=INFO=.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
 
----++++ Hive Action Schema Version 0.2
-<verbatim>
+#### Hive Action Schema Version 0.2
+
+```
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema";
            xmlns:hive="uri:oozie:hive-action:0.2" 
elementFormDefault="qualified"
            targetNamespace="uri:oozie:hive-action:0.2">
@@ -461,8 +469,8 @@ property =oozie.hive.log.level=. The default value is 
=INFO=.
     </xs:complexType>
 .
 </xs:schema>
-</verbatim>
+```
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_JMSNotifications.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_JMSNotifications.twiki 
b/docs/src/site/twiki/DG_JMSNotifications.twiki
index a4b0f0d..e8f8a76 100644
--- a/docs/src/site/twiki/DG_JMSNotifications.twiki
+++ b/docs/src/site/twiki/DG_JMSNotifications.twiki
@@ -1,26 +1,26 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+!! JMS Notifications
+[::Go back to Oozie Documentation Index::](index.html)
 
-%TOC%
+# JMS Notifications
 
----++ Overview
+<!-- MACRO{toc|fromDepth=1|toDepth=4} -->
+
+## Overview
 Since Oozie 4.0, Oozie supports publishing notifications to a JMS Provider for 
job status changes and SLA met and miss
 events. This provides an alternative to polling Oozie for Job or SLA related 
information, delivering events as they
 happen without delay. Clients can be written to consume these 
notifications and integrate with different monitoring
 and alerting systems.
 
----++ Oozie Server Configuration
-Refer to [[AG_Install#Notifications_Configuration][Notifications 
Configuration]] section of [[AG_Install][Oozie Install]]
+## Oozie Server Configuration
+Refer to [Notifications 
Configuration](AG_Install.html#Notifications_Configuration) section of [Oozie 
Install](AG_Install.html)
 documentation for the Oozie server side configuration required to support 
publishing notifications to a JMS Provider.
 The JNDI properties for the JMS provider, the topics to publish to and the 
notification types to publish (Job and/or SLA)
 need to be configured.
 
----++ Consuming Notifications
+## Consuming Notifications
 
----+++ Notification types
+### Notification types
 Job and SLA notifications are published to the configured JMS Provider on the 
configured topics.
 
 Job status change notifications include job start, success, failure, 
suspended, etc. Currently only workflow job and
@@ -28,26 +28,28 @@ coordinator action status change notifications are 
published.
 
 SLA notifications include START_MET, END_MET, DURATION_MET, START_MISS, 
END_MISS, DURATION_MISS events and are published
 for a workflow job, workflow action or coordinator action for which SLA 
information is configured in the job xml. Refer
-to [[DG_SLAMonitoring#Configuring_SLA_in_Applications][SLA Configuration]] for 
information on configuring SLA for a workflow or
+to [SLA Configuration](DG_SLAMonitoring.html#Configuring_SLA_in_Applications) 
for information on configuring SLA for a workflow or
 coordinator.
 
----+++ JMS Topic
+### JMS Topic
 Consumers interested in event notifications need to know the JNDI 
properties to connect to the JMS provider.
 They will also need to know the JMS topic on which notifications for a 
particular job are published.
 
 Oozie Client provides the following APIs :
-<verbatim>
+
+```
 public JMSConnectionInfo getJMSConnectionInfo()
 public String getJMSTopicName(String jobId)
-</verbatim>
+```
 
 The JMSConnectionInfo exposes 3 methods:
 
-<verbatim>
+
+```
 Properties getJNDIProperties();
 String getTopicPattern(AppType appType);
 String getTopicPrefix();
-</verbatim>
+```
 
 The topic is obtained by concatenating topic prefix and the substituted value 
for topic pattern. The topic pattern
 can be a constant value like workflow or coordinator which the administrator 
has configured or a variable (either ${username}
@@ -59,13 +61,14 @@ The getJMSTopicName API can be used if the job id is 
already known and will give the exact topic name to which
 notifications for that job are published.
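
The topic resolution described above (prefix plus substituted pattern) can be sketched in plain Java. The prefix and pattern values here are illustrative assumptions, not Oozie defaults, and `${jobId}` is shown only as an example of a second possible variable:

```java
public class TopicNameSketch {
    // Resolve a topic name by substituting the supported variables into the
    // administrator-configured pattern and prepending the topic prefix.
    static String resolveTopic(String prefix, String pattern,
                               String username, String jobId) {
        return prefix + pattern
                .replace("${username}", username)
                .replace("${jobId}", jobId);
    }

    public static void main(String[] args) {
        // With a pattern of "${username}", every job submitted by "john"
        // resolves to the same per-user topic.
        System.out.println(resolveTopic("oozie.", "${username}",
                "john", "0000001-130618221729631-oozie-oozi-W")); // oozie.john
    }
}
```

In practice the prefix and pattern would come from `getTopicPrefix()` and `getTopicPattern(AppType)` on the `JMSConnectionInfo` returned by the Oozie client.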
 
 
----+++ JMS Message Format
-JMS messages published are =javax.jms.TextMessage=. The body contains JSON and 
the header contains multiple properties
+### JMS Message Format
+JMS messages published are `javax.jms.TextMessage`. The body contains JSON and 
the header contains multiple properties
 that can be used as selectors. The header properties are not repeated in the 
body of the message to keep the messages
 small.
 
 <b>Message Header:</b> <br/>
 The different header properties are:
+
    * msgType - Value can be JOB or SLA.
    * user - The user who submitted the job.
    * appName - Application name of the job.
@@ -86,77 +89,88 @@ FAILURE = When the Workflow Job or Coordinator Action is in 
terminal state other
 <b>Message Body for Job Notifications:</b> <br/>
     Sample JSON responses for different job and SLA events are shown below.
 
-<verbatim>
+
+```
 Workflow Job in RUNNING state:
 
{"status":"RUNNING","id":"0000042-130618221729631-oozie-oozi-W","startTime":1342915200000}
-</verbatim>
+```
 
-<verbatim>
+
+```
 Workflow Job in FAILED state:
 {"status":"FAILED","errorCode":"EL_ERROR","errorMessage":"variable 
[dummyvalue] cannot be resolved",
 
"id":"0000042-130618221729631-oozie-oozi-W","startTime":1342915200000,"endTime":1366672183543}
-</verbatim>
+```
+
 
-<verbatim>
+```
 Workflow Job in SUCCEEDED state:
 
{"status":"SUCCEEDED","id":"0000039-130618221729631-oozie-oozi-W","startTime":1342915200000,
 "parentId":"0000025-130618221729631-oozie-oozi-C@1","endTime":1366676224154}
-</verbatim>
+```
 
-<verbatim>
+
+```
 Workflow Job in SUSPENDED state:
 
{"status":"SUSPENDED","id":"0000039-130618221729631-oozie-oozi-W","startTime":1342915200000,
 "parentId":"0000025-130618221729631-oozie-oozi-C@1"}
-</verbatim>
+```
+
 
-<verbatim>
+```
 Coordinator Action in WAITING state:
 
{"status":"WAITING","nominalTime":1310342400000,"missingDependency":"hdfs://gsbl90107.blue.com:8020/user/john/dir1/file1",
 
"id":"0000025-130618221729631-oozie-oozi-C@1","startTime":1342915200000,"parentId":"0000025-130618221729631-oozie-oozi-C"}
-</verbatim>
+```
 
-<verbatim>
+
+```
 Coordinator Action in RUNNING state:
 
{"status":"RUNNING","nominalTime":1310342400000,"id":"0000025-130618221729631-oozie-oozi-C@1",
 "startTime":1342915200000,"parentId":"0000025-130618221729631-oozie-oozi-C"}
-</verbatim>
+```
+
 
-<verbatim>
+```
 Coordinator Action in SUCCEEDED state:
 
{"status":"SUCCEEDED","nominalTime":1310342400000,"id":"0000025-130618221729631-oozie-oozi-C@1",
 
"startTime":1342915200000,"parentId":"0000025-130618221729631-oozie-oozi-C","endTime":1366677082799}
-</verbatim>
+```
+
 
-<verbatim>
+```
 Coordinator Action in FAILED state:
 
{"status":"FAILED","errorCode":"E0101","errorMessage":"dummyError","nominalTime":1310342400000,
 "id":"0000025-130618221729631-oozie-oozi-C@1","startTime":1342915200000,
 "parentId":"0000025-130618221729631-oozie-oozi-C","endTime":1366677140818}
-</verbatim>
+```
 
 <b>Message Body for SLA Notifications:</b> <br/>
 
-<verbatim>
+
+```
 Workflow Job in sla END_MISS state:
 
{"id":"0000000-000000000000001-oozie-wrkf-C@1","parentId":"0000000-000000000000001-oozie-wrkf-C",
 "expectedStartTime":1356998400000,"notificationMessage":"notification of start 
miss","actualStartTime":1357002000000,
 "expectedDuration":-1, 
"actualDuration":3600,"expectedEndTime":1356998400000,"actualEndTime":1357002000000}
-</verbatim>
+```
 
----+++ JMS Client
+### JMS Client
 
 Oozie provides a helper class JMSMessagingUtils for consumers to deserialize 
the JMS messages back to Java objects.
 The getEventMessage() method below expects a subtype of EventMessage.
 There are different implementations of EventMessage - WorkflowJobMessage, 
CoordinatorActionMessage and SLAMessage.
 
-<verbatim>
+
+```
 <T extends EventMessage> T JMSMessagingUtils.getEventMessage(Message 
jmsMessage)
-</verbatim>
----++++ Example
+```
+#### Example
 Below is sample code to consume notifications.
 
 First, create the Oozie client and retrieve the JNDI properties to make a 
connection to the JMS server.
-<verbatim>
+
+```
    OozieClient oc = new OozieClient("http://localhost:11000/oozie";);
    JMSConnectionInfo jmsInfo = oc.getJMSConnectionInfo();
    Properties jndiProperties = jmsInfo.getJNDIProperties();
@@ -181,13 +195,14 @@ First, create the Oozie client and retrieve the JNDI 
properties to make a connec
    MessageConsumer consumer = session.createConsumer(topic);
    consumer.setMessageListener(this);
    connection.start();
-</verbatim>
+```
 
-To start receiving messages, the JMS 
[[http://docs.oracle.com/javaee/6/api/javax/jms/MessageListener.html][MessageListener]]
+To start receiving messages, the JMS 
[MessageListener](http://docs.oracle.com/javaee/6/api/javax/jms/MessageListener.html)
 interface, and in particular its onMessage() method, needs to be 
implemented.
 This method will be called whenever a message is available on the JMS bus.
 
-<verbatim>
+
+```
     public void onMessage(Message message) {
        if 
(message.getStringProperty(JMSHeaderConstants.MESSAGE_TYPE).equals(MessageType.SLA.name())){
           SLAMessage slaMessage = JMSMessagingUtils.getEventMessage(message);
@@ -198,22 +213,24 @@ This method will be called whenever a message is 
available on the JMS bus.
           // Further processing
        }
     }
-</verbatim>
+```
 
----++++ Applying Selectors
+#### Applying Selectors
 
 Below is a sample ActiveMQ text message header properties section.
-<verbatim>
+
+```
 ActiveMQTextMessage
 {properties = {appName = map-reduce-wf, msgType=JOB, appType=WORKFLOW_JOB, 
user=john, msgFormat=json, eventStatus=STARTED} ...}
-</verbatim>
+```
 
 On the header properties, consumers can apply JMS selectors to filter messages 
from JMS provider.
-They are listed at 
[[../docs/client/apidocs/org/apache/oozie/client/event/jms/JMSHeaderConstants.html][JMSHeaderConstants]]
+They are listed at 
[JMSHeaderConstants](../docs/client/apidocs/org/apache/oozie/client/event/jms/JMSHeaderConstants.html)
 
 A sample use of a selector to filter events related to jobs which have failed 
and have a particular app-name:
 
-<verbatim>
+
+```
 String selector=JMSHeaderConstants.EVENT_STATUS + "='FAILURE' AND " + 
JMSHeaderConstants.APP_NAME + "='app-name'";
 MessageConsumer consumer = session.createConsumer(topic, selector);
-</verbatim>
+```

http://git-wip-us.apache.org/repos/asf/oozie/blob/4e5b3cb5/docs/src/site/twiki/DG_Overview.twiki
----------------------------------------------------------------------
diff --git a/docs/src/site/twiki/DG_Overview.twiki 
b/docs/src/site/twiki/DG_Overview.twiki
index 3ec94a2..6a2b9d2 100644
--- a/docs/src/site/twiki/DG_Overview.twiki
+++ b/docs/src/site/twiki/DG_Overview.twiki
@@ -1,8 +1,8 @@
-<noautolink>
 
-[[index][::Go back to Oozie Documentation Index::]]
 
----+ Oozie Workflow Overview
+[::Go back to Oozie Documentation Index::](index.html)
+
+# Oozie Workflow Overview
 
 Oozie is a server-based _Workflow Engine_ specialized in running workflow jobs 
with actions that run Hadoop Map/Reduce
 and Pig jobs.
@@ -14,7 +14,7 @@ a control dependency DAG (Directed Acyclic Graph). "control 
dependency" from one
 action to another means that the second action can't run until the first 
action has completed.
 
 Oozie workflow definitions are written in hPDL (an XML Process Definition 
Language similar to
-[[http://www.jboss.org/jbossjbpm/][JBOSS JBPM]] jPDL).
+[JBOSS JBPM](http://www.jboss.org/jbossjbpm/) jPDL).
 
 Oozie workflow actions start jobs in remote systems (i.e. Hadoop, Pig). Upon 
action completion, the remote systems
 call back Oozie to notify it of the action completion; at this point Oozie 
proceeds 
to the next action in the workflow.
@@ -25,26 +25,27 @@ by default to the user code.
 
 Oozie workflows contain control flow nodes and action nodes.
 
-Control flow nodes define the beginning and the end of a workflow ( =start=, 
=end= and =fail= nodes) and provide a
-mechanism to control the workflow execution path ( =decision=, =fork= and 
=join= nodes).
+Control flow nodes define the beginning and the end of a workflow (`start`, 
`end` and `fail` nodes) and provide a
+mechanism to control the workflow execution path (`decision`, `fork` and 
`join` nodes).
 
 Action nodes are the mechanism by which a workflow triggers the execution of a 
computation/processing task. Oozie
 provides support for different types of actions: Hadoop map-reduce, Hadoop 
file system, Pig, SSH, HTTP, eMail and
 Oozie sub-workflow. Oozie can be extended to support additional types of 
actions.
actions.
 
-Oozie workflows can be parameterized (using variables like =${inputDir}= 
within the workflow definition). When
+Oozie workflows can be parameterized (using variables like `${inputDir}` 
within the workflow definition). When
 submitting a workflow job, values for the parameters must be provided. If 
properly parameterized (i.e. using different
 output directories), several identical workflow jobs can run concurrently.
 
----++ WordCount Workflow Example
+## WordCount Workflow Example
+
+**Workflow Diagram:**
 
-*Workflow Diagram:*
+<img src="./DG_Overview.png"/>
 
-<img src="%ATTACHURLPATH%/DG_Overview.png"/>
+**hPDL Workflow Definition:**
 
-*hPDL Workflow Definition:*
 
-<verbatim>
+```
 <workflow-app name='wordcount-wf' xmlns="uri:oozie:workflow:0.1">
     <start to='wordcount'/>
     <action name='wordcount'>
@@ -78,8 +79,8 @@ output directories) several identical workflow jobs can 
concurrently.
     </kill>
     <end name='end'/>
 </workflow-app>
-</verbatim>
+```
+
+[::Go back to Oozie Documentation Index::](index.html)
 
-[[index][::Go back to Oozie Documentation Index::]]
 
-</noautolink>
