Added: falcon/site/0.8/HiveIntegration.html
URL: 
http://svn.apache.org/viewvc/falcon/site/0.8/HiveIntegration.html?rev=1717229&view=auto
==============================================================================
--- falcon/site/0.8/HiveIntegration.html (added)
+++ falcon/site/0.8/HiveIntegration.html Mon Nov 30 11:11:50 2015
@@ -0,0 +1,453 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2015-11-30
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20151130" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Hive Integration</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Hive Integration</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-11-30</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.8</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Hive Integration<a name="Hive_Integration"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Falcon provides data management functions for feeds declaratively. It 
allows users to represent feed locations as time-based partition directories on 
HDFS containing files.</p>
+<p>Hive provides a simple and familiar database-like tabular model of data management to its users, which is backed by HDFS. It supports two classes of tables: managed tables and external tables.</p>
+<p>Falcon allows users to represent feed locations as Hive tables. Falcon supports both managed and external tables and provides data management services for tables, such as replication, eviction and archival. Falcon will notify HCatalog as a side effect of either acquiring, replicating or evicting a data set instance, and adds the missing capability of HCatalog table replication.</p>
+<p>In the near future, Falcon will allow users to express pipeline processing 
in Hive scripts apart from Pig and Oozie workflows.</p></div>
+<div class="section">
+<h3>Assumptions<a name="Assumptions"></a></h3>
+<p></p>
+<ul>
+<li>Date is a mandatory first-level partition for Hive tables
+<ul>
+<li>Data availability triggers are based on the date pattern in Oozie</li></ul></li>
+<li>Tables must be created in Hive prior to adding them as a feed in Falcon
+<ul>
+<li>Duplicating this in Falcon would create confusion about the real source of truth. Also, propagating schema changes between systems is a hard problem.</li></ul></li>
+<li>Falcon does not know about the encoding of the data; the data should be in an HCatalog-supported format.</li></ul></div>
+<div class="section">
+<h3>Configuration<a name="Configuration"></a></h3>
+<p>Falcon provides a system-level option to enable Hive integration. Falcon must be configured with an implementation for the catalog registry. The default implementation for Hive is shipped with Falcon.</p>
+<div class="source">
+<pre>
+catalog.service.impl=org.apache.falcon.catalog.HiveCatalogService
+
+</pre></div></div>
+<div class="section">
+<h3>Incompatible changes<a name="Incompatible_changes"></a></h3>
+<p>Falcon depends heavily on data-availability triggers for scheduling Falcon workflows. Oozie must support data-availability triggers based on HCatalog partition availability. This is only available in Oozie 4.x.</p>
+<p>Hence, Hive support in Falcon requires Oozie 4.x.</p></div>
+<div class="section">
+<h3>Oozie Shared Library setup<a name="Oozie_Shared_Library_setup"></a></h3>
+<p>With Hive integration, Falcon depends heavily on the <a class="externalLink" href="http://oozie.apache.org/docs/4.0.1/WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3">shared library feature of Oozie</a>. Since the dependent jars for HCatalog, Pig and Hive number in the many tens, it is quite daunting to redistribute them from Falcon.</p>
+<p><a class="externalLink" 
href="http://oozie.apache.org/docs/4.0.1/DG_QuickStart.html#Oozie_Share_Lib_Installation";>This
 is a one time effort in Oozie setup and is quite straightforward.</a></p></div>
+<div class="section">
+<h3>Approach<a name="Approach"></a></h3></div>
+<div class="section">
+<h4>Entity Changes<a name="Entity_Changes"></a></h4>
+<p></p>
+<ul>
+<li>Cluster DSL will have an additional registry-interface section, specifying the endpoint for the HCatalog server. If this is absent, no HCatalog publication will be done from Falcon for this cluster.</li></ul>
+<div class="source">
+<pre>thrift://hcatalog-server:port
+</pre></div>
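+<p>For illustration, the corresponding registry interface entry in a cluster entity might look like the following (host, port and version are placeholders):</p>
+<div class="source">
+<pre>
+&lt;interface type=&quot;registry&quot; endpoint=&quot;thrift://hcatalog-server:9083&quot;
+           version=&quot;0.11.0&quot; /&gt;
+
+</pre></div>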
+<p></p>
+<ul>
+<li>Feed DSL will allow users to specify the URI (location) for HCatalog 
tables as:</li></ul>
+<div class="source">
+<pre>catalog:database_name:table_name#partitions(key=value?)*
+</pre></div>
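+<p>A concrete URI following this pattern might look like the example below (the database, table and partition names are hypothetical; the date variables are expanded by Falcon at runtime):</p>
+<div class="source">
+<pre>
+catalog:logs_db:clicks#ds=${YEAR}-${MONTH}-${DAY}
+
+</pre></div>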
+<p></p>
+<ul>
+<li>Failure to publish to HCatalog will be retried (a configurable number of retries) with back-off. Permanent failures, after all the retries are exhausted, will fail the Falcon workflow.</li></ul></div>
+<div class="section">
+<h4>Eviction<a name="Eviction"></a></h4>
+<p></p>
+<ul>
+<li>Falcon will construct DDL statements to filter candidate partitions eligible for eviction</li>
+<li>Falcon will construct DDL statements to drop the eligible partitions (a sketch follows this list)</li>
+<li>Additionally, Falcon will nuke the data on HDFS for external tables</li></ul>
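+<p>A sketch of the kind of DDL involved, assuming a table partitioned on a dated column ds (the table and partition values are hypothetical):</p>
+<div class="source">
+<pre>
+-- enumerate candidate partitions, then drop the ones past the retention limit
+SHOW PARTITIONS logs_db.clicks;
+ALTER TABLE logs_db.clicks DROP PARTITION (ds='2013-09-01');
+
+</pre></div></div>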
+<div class="section">
+<h4>Replication<a name="Replication"></a></h4>
+<p></p>
+<ul>
+<li>Falcon will use the HCatalog (Hive) API to export the data for a given table and partition, which will result in a data collection that includes metadata on the data's storage format, the schema, how the data is sorted, what table the data came from, and values of any partition keys from that table.</li>
+<li>Falcon will use the distcp tool to copy the exported data collection into a staging directory used by Falcon on the secondary cluster.</li>
+<li>Falcon will then import the data into HCatalog (Hive) using the HCatalog (Hive) API. If the specified table does not yet exist, Falcon will create it, using the information in the imported metadata to set defaults for the table such as schema, storage format, etc.</li>
+<li>The partition is not complete, and hence not visible to users, until all the data is committed on the secondary cluster (no dirty reads).</li>
+<li>The data collection is staged by Falcon, and retries for the copy continue from where they left off.</li>
+<li>Failure to register with Hive will be retried. After all the attempts are exhausted, the data will be cleaned up by Falcon. (A sketch of the underlying export/import DDL follows this list.)</li></ul>
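+<p>The export/import steps map to Hive's EXPORT/IMPORT DDL. A minimal sketch, assuming a table partitioned on ds and a Falcon-managed staging path (all names and paths are hypothetical):</p>
+<div class="source">
+<pre>
+-- on the source cluster: export the partition along with its metadata
+EXPORT TABLE src_demo_db.customer_raw PARTITION (ds='2013-09-24')
+  TO '/apps/falcon/staging/customer_raw/2013-09-24';
+
+-- after distcp to the target cluster: import into the target table
+IMPORT TABLE tgt_demo_db.customer_bcp PARTITION (ds='2013-09-24')
+  FROM '/apps/falcon/staging/customer_raw/2013-09-24';
+
+</pre></div></div>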
+<div class="section">
+<h4>Security<a name="Security"></a></h4>
+<p>The user owns all data managed by Falcon. Falcon runs as the user who 
submitted the feed. Falcon will authenticate with HCatalog as the end user who 
owns the entity and the data.</p>
+<p>For Hive managed tables, the table may be owned by the end user or &quot;hive&quot;. For &quot;hive&quot;-owned tables, the user will have to configure the feed as &quot;hive&quot;.</p></div>
+<div class="section">
+<h3>Load on HCatalog from Falcon<a 
name="Load_on_HCatalog_from_Falcon"></a></h3>
+<p>The load on HCatalog generally depends on the frequency of the feeds configured in Falcon and on how often data is ingested, replicated, or processed.</p></div>
+<div class="section">
+<h3>User Impact<a name="User_Impact"></a></h3>
+<p></p>
+<ul>
+<li>There should not be any impact to users due to this integration</li>
+<li>Falcon will be fully backwards compatible</li>
+<li>Users have a choice to either keep storage based on files on HDFS as they do today, or use HCatalog for accessing the data in tables</li></ul></div>
+<div class="section">
+<h3>Known Limitations<a name="Known_Limitations"></a></h3></div>
+<div class="section">
+<h4>Oozie<a name="Oozie"></a></h4>
+<p></p>
+<ul>
+<li>Falcon with Hadoop 1.x requires copying the guava jars manually to the sharelib in Oozie. Hadoop 2.x ships these.</li>
+<li>hcatalog-pig-adapter needs to be copied manually to the Oozie sharelib:</li></ul>
+<div class="source">
+<pre>
+bin/hadoop dfs -copyFromLocal 
$LFS/share/lib/hcatalog/hcatalog-pig-adapter-0.5.0-incubating.jar 
share/lib/hcatalog
+
+</pre></div>
+<p></p>
+<ul>
+<li>Oozie 4.x with Hadoop 2.x</li></ul>
+<p>Replication jobs are submitted to Oozie on the destination cluster. Oozie runs a table export job via the ResourceManager on the source cluster. The Oozie server on the target cluster must be configured with the source Hadoop configs, else jobs fail on both secure and non-secure clusters with errors such as:</p>
+<div class="source">
+<pre>
+org.apache.hadoop.security.token.SecretManager$InvalidToken: Password not 
found for ApplicationAttempt appattempt_1395965672651_0010_000002
+
+</pre></div>
+<p>Make sure all Oozie servers that Falcon talks to have the Hadoop configs configured in oozie-site.xml:</p>
+<div class="source">
+<pre>
+&lt;property&gt;
+      
&lt;name&gt;oozie.service.HadoopAccessorService.hadoop.configurations&lt;/name&gt;
+      
&lt;value&gt;*=/etc/hadoop/conf,arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,arpit-new-falcon-1.cs1cloud.internal:8032=/etc/hadoop-1,arpit-new-falcon-2.cs1cloud.internal:8020=/etc/hadoop-2,arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2,arpit-new-falcon-5.cs1cloud.internal:8020=/etc/hadoop-3,arpit-new-falcon-5.cs1cloud.internal:8032=/etc/hadoop-3&lt;/value&gt;
+      &lt;description&gt;
+          Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the 
HOST:PORT of
+          the Hadoop service (JobTracker, HDFS). The wildcard '*' 
configuration is
+          used when there is no exact match for an authority. The 
HADOOP_CONF_DIR contains
+          the relevant Hadoop *-site.xml files. If the path is relative is 
looked within
+          the Oozie configuration directory; though the path can be absolute 
(i.e. to point
+          to Hadoop client conf/ directories in the local filesystem.
+      &lt;/description&gt;
+    &lt;/property&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Hive<a name="Hive"></a></h4>
+<p></p>
+<ul>
+<li>Dated Partitions</li></ul>
+<p>Falcon does not work well when a table partition contains multiple dated columns. Falcon only works with a single dated partition. This is being tracked in FALCON-357, which is a limitation in Oozie.</p>
+<div class="source">
+<pre>
+catalog:default:table4#year=${YEAR};month=${MONTH};day=${DAY};hour=${HOUR};minute=${MINUTE}
+
+</pre></div>
+<p></p>
+<ul>
+<li><a class="externalLink" 
href="https://issues.apache.org/jira/browse/HIVE-5550";>Hive table import fails 
for tables created with default text and sequence file formats using HCatalog 
API</a></li></ul>For some arcane reason, hive substitutes the output format for 
text and sequence to be prefixed with Hive. Hive table import fails since it 
compares against the input and output formats of the source table and they are 
different. Say, a table was created with out specifying the file format, it 
defaults to:
+<div class="source">
+<pre>
+fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
+
+</pre></div>
+<p>But when Hive fetches the table from the metastore, it replaces the output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, and the comparison between source and target tables fails.</p>
+<div class="source">
+<pre>
+org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable
+      // check IF/OF/Serde
+      String existingifc = table.getInputFormatClass().getName();
+      String importedifc = tableDesc.getInputFormat();
+      String existingofc = table.getOutputFormatClass().getName();
+      String importedofc = tableDesc.getOutputFormat();
+      if ((!existingifc.equals(importedifc))
+          || (!existingofc.equals(importedofc))) {
+        throw new SemanticException(
+            ErrorMsg.INCOMPATIBLE_SCHEMA
+                .getMsg(&quot; Table inputformat/outputformats do not 
match&quot;));
+      }
+
+</pre></div>
+<p>The above is not an issue with Hive 0.13.</p></div>
+<div class="section">
+<h3>Hive Examples<a name="Hive_Examples"></a></h3>
+<p>Following is an example entity configuration for lifecycle management 
functions for tables in Hive.</p></div>
+<div class="section">
+<h4>Hive Table Lifecycle Management - Replication and Retention<a 
name="Hive_Table_Lifecycle_Management_-_Replication_and_Retention"></a></h4></div>
+<div class="section">
+<h5>Primary Cluster<a name="Primary_Cluster"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!--
+    Primary cluster configuration for demo vm
+  --&gt;
+&lt;cluster colo=&quot;west-coast&quot; description=&quot;Primary Cluster&quot;
+         name=&quot;primary-cluster&quot;
+         xmlns=&quot;uri:falcon:cluster:0.1&quot; 
xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+    &lt;interfaces&gt;
+        &lt;interface type=&quot;readonly&quot; 
endpoint=&quot;hftp://localhost:10070&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;write&quot; 
endpoint=&quot;hdfs://localhost:10020&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;execute&quot; 
endpoint=&quot;localhost:10300&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;workflow&quot; 
endpoint=&quot;http://localhost:11010/oozie/&quot;
+                   version=&quot;4.0.1&quot; /&gt;
+        &lt;interface type=&quot;registry&quot; 
endpoint=&quot;thrift://localhost:19083&quot;
+                   version=&quot;0.11.0&quot; /&gt;
+        &lt;interface type=&quot;messaging&quot; 
endpoint=&quot;tcp://localhost:61616?daemon=true&quot;
+                   version=&quot;5.4.3&quot; /&gt;
+    &lt;/interfaces&gt;
+    &lt;locations&gt;
+        &lt;location name=&quot;staging&quot; 
path=&quot;/apps/falcon/staging&quot; /&gt;
+        &lt;location name=&quot;temp&quot; path=&quot;/tmp&quot; /&gt;
+        &lt;location name=&quot;working&quot; 
path=&quot;/apps/falcon/working&quot; /&gt;
+    &lt;/locations&gt;
+&lt;/cluster&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>BCP Cluster<a name="BCP_Cluster"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!--
+    BCP cluster configuration for demo vm
+  --&gt;
+&lt;cluster colo=&quot;east-coast&quot; description=&quot;BCP Cluster&quot;
+         name=&quot;bcp-cluster&quot;
+         xmlns=&quot;uri:falcon:cluster:0.1&quot; 
xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+    &lt;interfaces&gt;
+        &lt;interface type=&quot;readonly&quot; 
endpoint=&quot;hftp://localhost:20070&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;write&quot; 
endpoint=&quot;hdfs://localhost:20020&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;execute&quot; 
endpoint=&quot;localhost:20300&quot;
+                   version=&quot;1.1.1&quot; /&gt;
+        &lt;interface type=&quot;workflow&quot; 
endpoint=&quot;http://localhost:11020/oozie/&quot;
+                   version=&quot;4.0.1&quot; /&gt;
+        &lt;interface type=&quot;registry&quot; 
endpoint=&quot;thrift://localhost:29083&quot;
+                   version=&quot;0.11.0&quot; /&gt;
+        &lt;interface type=&quot;messaging&quot; 
endpoint=&quot;tcp://localhost:61616?daemon=true&quot;
+                   version=&quot;5.4.3&quot; /&gt;
+    &lt;/interfaces&gt;
+    &lt;locations&gt;
+        &lt;location name=&quot;staging&quot; 
path=&quot;/apps/falcon/staging&quot; /&gt;
+        &lt;location name=&quot;temp&quot; path=&quot;/tmp&quot; /&gt;
+        &lt;location name=&quot;working&quot; 
path=&quot;/apps/falcon/working&quot; /&gt;
+    &lt;/locations&gt;
+&lt;/cluster&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Feed with replication and eviction policy<a 
name="Feed_with_replication_and_eviction_policy"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!--
+    Replicating Hourly customer table from primary to secondary cluster.
+  --&gt;
+&lt;feed description=&quot;Replicating customer table feed&quot; 
name=&quot;customer-table-replicating-feed&quot;
+      xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;timezone&gt;UTC&lt;/timezone&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;primary-cluster&quot; 
type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2013-09-24T00:00Z&quot; 
end=&quot;2013-10-26T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;hours(2)&quot; 
action=&quot;delete&quot;/&gt;
+        &lt;/cluster&gt;
+        &lt;cluster name=&quot;bcp-cluster&quot; type=&quot;target&quot;&gt;
+            &lt;validity start=&quot;2013-09-24T00:00Z&quot; 
end=&quot;2013-10-26T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;days(30)&quot; 
action=&quot;delete&quot;/&gt;
+
+            &lt;table 
uri=&quot;catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;
 /&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;table 
uri=&quot;catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;
 /&gt;
+
+    &lt;ACL owner=&quot;seetharam&quot; group=&quot;users&quot; 
permission=&quot;0755&quot;/&gt;
+    &lt;schema location=&quot;&quot; provider=&quot;hcatalog&quot;/&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Hive Table used in Processing Pipelines<a 
name="Hive_Table_used_in_Processing_Pipelines"></a></h4></div>
+<div class="section">
+<h5>Primary Cluster<a name="Primary_Cluster"></a></h5>
+<p>The cluster definition from the lifecycle example can be used.</p></div>
+<div class="section">
+<h5>Input Feed<a name="Input_Feed"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;feed description=&quot;clicks log table &quot; 
name=&quot;input-table&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
+    &lt;groups&gt;online,bi&lt;/groups&gt;
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;timezone&gt;UTC&lt;/timezone&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;##cluster##&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2010-01-01T00:00Z&quot; 
end=&quot;2012-04-21T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;hours(24)&quot; 
action=&quot;delete&quot;/&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;table 
uri=&quot;catalog:falcon_db:input_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;
 /&gt;
+
+    &lt;ACL owner=&quot;testuser&quot; group=&quot;group&quot; 
permission=&quot;0x755&quot;/&gt;
+    &lt;schema location=&quot;/schema/clicks&quot; 
provider=&quot;protobuf&quot;/&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Output Feed<a name="Output_Feed"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;feed description=&quot;clicks log identity table&quot; 
name=&quot;output-table&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;&gt;
+    &lt;groups&gt;online,bi&lt;/groups&gt;
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;timezone&gt;UTC&lt;/timezone&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;##cluster##&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2010-01-01T00:00Z&quot; 
end=&quot;2012-04-21T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;hours(24)&quot; 
action=&quot;delete&quot;/&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;table 
uri=&quot;catalog:falcon_db:output_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;
 /&gt;
+
+    &lt;ACL owner=&quot;testuser&quot; group=&quot;group&quot; 
permission=&quot;0x755&quot;/&gt;
+    &lt;schema location=&quot;/schema/clicks&quot; 
provider=&quot;protobuf&quot;/&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Process<a name="Process"></a></h5>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;process name=&quot;##processName##&quot; 
xmlns=&quot;uri:falcon:process:0.1&quot;&gt;
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;##cluster##&quot;&gt;
+            &lt;validity end=&quot;2012-04-22T00:00Z&quot; 
start=&quot;2012-04-21T00:00Z&quot;/&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;parallel&gt;1&lt;/parallel&gt;
+    &lt;order&gt;FIFO&lt;/order&gt;
+    &lt;frequency&gt;days(1)&lt;/frequency&gt;
+    &lt;timezone&gt;UTC&lt;/timezone&gt;
+
+    &lt;inputs&gt;
+        &lt;input end=&quot;today(0,0)&quot; start=&quot;today(0,0)&quot; 
feed=&quot;input-table&quot; name=&quot;input&quot;/&gt;
+    &lt;/inputs&gt;
+
+    &lt;outputs&gt;
+        &lt;output instance=&quot;now(0,0)&quot; feed=&quot;output-table&quot; 
name=&quot;output&quot;/&gt;
+    &lt;/outputs&gt;
+
+    &lt;properties&gt;
+        &lt;property name=&quot;blah&quot; value=&quot;blah&quot;/&gt;
+    &lt;/properties&gt;
+
+    &lt;workflow engine=&quot;pig&quot; 
path=&quot;/falcon/test/apps/pig/table-id.pig&quot;/&gt;
+
+    &lt;retry policy=&quot;periodic&quot; delay=&quot;minutes(10)&quot; 
attempts=&quot;3&quot;/&gt;
+&lt;/process&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Pig Script<a name="Pig_Script"></a></h5>
+<div class="source">
+<pre>
+-- load the input table via HCatalog
+A = LOAD '$input_database.$input_table' USING org.apache.hcatalog.pig.HCatLoader();
+-- apply the filter expression passed in by Falcon
+B = FILTER A BY $input_filter;
+C = FOREACH B GENERATE id, value;
+-- store the result into the output table/partitions via HCatalog
+STORE C INTO '$output_database.$output_table' USING org.apache.hcatalog.pig.HCatStorer('$output_dataout_partitions');
+
+</pre></div></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2015
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

Added: falcon/site/0.8/InstallationSteps.html
URL: 
http://svn.apache.org/viewvc/falcon/site/0.8/InstallationSteps.html?rev=1717229&view=auto
==============================================================================
--- falcon/site/0.8/InstallationSteps.html (added)
+++ falcon/site/0.8/InstallationSteps.html Mon Nov 30 11:11:50 2015
@@ -0,0 +1,148 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2015-11-30
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20151130" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Building &amp; Installing Falcon</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Building & Installing Falcon</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-11-30</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.8</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Building &amp; Installing Falcon<a 
name="Building__Installing_Falcon"></a></h2></div>
+<div class="section">
+<h3>Building Falcon<a name="Building_Falcon"></a></h3></div>
+<div class="section">
+<h4>Prerequisites<a name="Prerequisites"></a></h4>
+<p></p>
+<ul>
+<li>JDK 1.7</li>
+<li>Maven 3.2.x</li></ul></div>
+<div class="section">
+<h4>Step 1 - Clone the Falcon repository<a 
name="Step_1_-_Clone_the_Falcon_repository"></a></h4>
+<div class="source">
+<pre>
+$git clone https://git-wip-us.apache.org/repos/asf/falcon.git falcon
+
+</pre></div></div>
+<div class="section">
+<h4>Step 2 - Build Falcon<a name="Step_2_-_Build_Falcon"></a></h4>
+<div class="source">
+<pre>
+$cd falcon
+$export MAVEN_OPTS=&quot;-Xmx1024m -XX:MaxPermSize=256m -noverify&quot; 
&amp;&amp; mvn clean install
+
+</pre></div>
+<p>It builds and installs the package into the local repository, for use as a 
dependency in other projects locally.</p>
+<p>[optionally -Dhadoop.version=&lt;&lt;hadoop.version&gt;&gt; can be appended 
to build for a specific version of Hadoop]</p>
+<p><b>NOTE:</b></p>
+<ul>
+<li>Falcon drops support for Hadoop-1 and only supports Hadoop-2 from Falcon 0.6 onwards.</li>
+<li>Optionally, -Doozie.version=&lt;&lt;oozie version&gt;&gt; can be appended to build with a specific version of Oozie. Oozie versions &gt;= 4 are supported.</li>
+<li>Falcon builds with JDK 1.7 using the -noverify option.</li>
+<li>To compile Falcon with Hive replication, &quot;-P hadoop-2,hivedr&quot; can optionally be appended. For this, Hive &gt;= 1.2.0 and Oozie &gt;= 4.2.0 should be available.</li></ul>
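+<p>For illustration, a build combining these options might look like the following (the version numbers are placeholders; substitute the versions in your environment):</p>
+<div class="source">
+<pre>
+$mvn clean install -Dhadoop.version=2.6.0 -Doozie.version=4.2.0 -P hadoop-2,hivedr
+
+</pre></div></div>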
+<div class="section">
+<h4>Step 3 - Package and Deploy Falcon<a 
name="Step_3_-_Package_and_Deploy_Falcon"></a></h4>
+<p>Once the build successfully completes, artifacts can be packaged for deployment using the assembly plugin. The Assembly Plugin for Maven is primarily intended to allow users to aggregate the project output along with its dependencies, modules, site documentation, and other files into a single distributable archive. There are two basic ways in which you can deploy Falcon: Embedded mode (also known as Stand-Alone mode) and Distributed mode. Your next steps will vary based on the mode in which you want to deploy Falcon.</p>
+<p><b>NOTE</b>: Oozie is being extended by Falcon (particularly on el-extensions), hence the need for Falcon to build &amp; re-package Oozie so that users of Falcon can work with the right Oozie setup. Though Oozie is packaged by Falcon, it needs to be deployed separately by the administrator and is not auto-deployed along with Falcon.</p></div>
+<div class="section">
+<h5>Embedded/Stand Alone Mode<a name="EmbeddedStand_Alone_Mode"></a></h5>
+<p>Embedded mode is useful when the Hadoop jobs and relevant data processing involve only one Hadoop cluster. In this mode there is a single Falcon server that contacts the scheduler to schedule jobs on Hadoop. All the process/feed requests like submit, schedule, suspend, kill, etc. are sent to this server. To run Falcon in this mode, use the Falcon package built with the standalone option. You can find the instructions for Embedded mode setup <a href="./Embedded-mode.html">here</a>.</p></div>
+<div class="section">
+<h5>Distributed Mode<a name="Distributed_Mode"></a></h5>
+<p>Distributed mode is for multiple (colo) instances of Hadoop clusters and multiple workflow schedulers to handle them. In this mode Falcon has two components: Prism and Server(s). Both Prism and Server(s) have their own config locations (startup and runtime properties). In this mode Prism acts as a contact point for Falcon servers. While all commands are available through Prism, only the read and instance APIs are available through Server. You can find the instructions for Distributed mode setup <a href="./Distributed-mode.html">here</a>.</p></div>
+<div class="section">
+<h4>Preparing Oozie and Falcon packages for deployment<a 
name="Preparing_Oozie_and_Falcon_packages_for_deployment"></a></h4>
+<div class="source">
+<pre>
+$cd &lt;&lt;project home&gt;&gt;
+$src/bin/package.sh &lt;&lt;hadoop-version&gt;&gt; 
&lt;&lt;oozie-version&gt;&gt;
+
+&gt;&gt; ex. src/bin/package.sh 1.1.2 4.0.1 or src/bin/package.sh 
0.20.2-cdh3u5 4.0.1
+&gt;&gt; ex. src/bin/package.sh 2.5.0 4.0.0
+&gt;&gt; Falcon package is available in &lt;&lt;falcon 
home&gt;&gt;/target/apache-falcon-&lt;&lt;version&gt;&gt;-bin.tar.gz
+&gt;&gt; Oozie package is available in &lt;&lt;falcon 
home&gt;&gt;/target/oozie-4.0.1-distro.tar.gz
+
+</pre></div>
+<p><b>NOTE:</b> If you have a separate Apache Oozie installation, you will 
need to follow some additional steps:</p>
+<ol style="list-style-type: decimal">
+<li>Once you have set up the Falcon server, copy the libraries under {falcon-server-dir}/oozie/libext/ to {oozie-install-dir}/libext.</li>
+<li>Modify Oozie's configuration file: copy all Falcon-related properties from {falcon-server-dir}/oozie/conf/oozie-site.xml to {oozie-install-dir}/conf/oozie-site.xml.</li>
+<li>Restart Oozie (a consolidated sketch of these steps follows the list):
+<ol style="list-style-type: decimal">
+<li>cd {oozie-install-dir}</li>
+<li>sudo -u oozie ./bin/oozie-stop.sh</li>
+<li>sudo -u oozie ./bin/oozie-setup.sh prepare-war</li>
+<li>sudo -u oozie ./bin/oozie-start.sh</li></ol></li></ol>
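+<p>A consolidated sketch of the steps above ({falcon-server-dir} and {oozie-install-dir} are placeholders for the actual install locations):</p>
+<div class="source">
+<pre>
+# copy the Falcon-provided libraries into the Oozie installation
+cp {falcon-server-dir}/oozie/libext/* {oozie-install-dir}/libext/
+# merge the Falcon properties from {falcon-server-dir}/oozie/conf/oozie-site.xml
+# into {oozie-install-dir}/conf/oozie-site.xml, then restart Oozie
+cd {oozie-install-dir}
+sudo -u oozie ./bin/oozie-stop.sh
+sudo -u oozie ./bin/oozie-setup.sh prepare-war
+sudo -u oozie ./bin/oozie-start.sh
+
+</pre></div></div>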
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2015
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

Added: falcon/site/0.8/MigrationInstructions.html
URL: 
http://svn.apache.org/viewvc/falcon/site/0.8/MigrationInstructions.html?rev=1717229&view=auto
==============================================================================
--- falcon/site/0.8/MigrationInstructions.html (added)
+++ falcon/site/0.8/MigrationInstructions.html Mon Nov 30 11:11:50 2015
@@ -0,0 +1,100 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2015-11-30
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20151130" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Migration Instructions</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Migration Instructions</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-11-30</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.8</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Migration Instructions<a name="Migration_Instructions"></a></h2></div>
+<div class="section">
+<h3>Migrate from 0.5-incubating to 0.6-incubating<a 
name="Migrate_from_0.5-incubating_to_0.6-incubating"></a></h3>
+<p>This is a placeholder wiki for migration instructions from Falcon 0.5-incubating to 0.6-incubating.</p></div>
+<div class="section">
+<h4>Update Entities<a name="Update_Entities"></a></h4></div>
+<div class="section">
+<h4>Change cluster dir permissions<a 
name="Change_cluster_dir_permissions"></a></h4></div>
+<div class="section">
+<h4>Enable/Disable TLS<a name="EnableDisable_TLS"></a></h4></div>
+<div class="section">
+<h4>Authorization<a name="Authorization"></a></h4></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2015
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

Added: falcon/site/0.8/OnBoarding.html
URL: 
http://svn.apache.org/viewvc/falcon/site/0.8/OnBoarding.html?rev=1717229&view=auto
==============================================================================
--- falcon/site/0.8/OnBoarding.html (added)
+++ falcon/site/0.8/OnBoarding.html Mon Nov 30 11:11:50 2015
@@ -0,0 +1,368 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2015-11-30
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20151130" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Contents</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Contents</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-11-30</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.8</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h3>Contents<a name="Contents"></a></h3>
+<p></p>
+<ul>
+<li><a href="#Onboarding Steps">Onboarding Steps</a></li>
+<li><a href="#Sample Pipeline">Sample Pipeline</a></li>
+<li><a href="./HiveIntegration.html">Hive Examples</a></li></ul></div>
+<div class="section">
+<h4>Onboarding Steps<a name="Onboarding_Steps"></a></h4>
+<p></p>
+<ul>
+<li>Create cluster definition for the cluster, specifying name node, job 
tracker, workflow engine endpoint, messaging endpoint. Refer to <a 
href="./EntitySpecification.html">cluster definition</a> for details.</li>
+<li>Create Feed definitions for each of the input and output specifying 
frequency, data path, ownership. Refer to <a 
href="./EntitySpecification.html">feed definition</a> for details.</li>
+<li>Create Process definition for your job. Process defines configuration for 
the workflow job. Important attributes are frequency, inputs/outputs and 
workflow path. Refer to <a href="./EntitySpecification.html">process 
definition</a> for process details.</li>
+<li>Define the workflow for your job using the workflow engine (only Oozie is supported as of now). Refer to the <a class="externalLink" href="http://oozie.apache.org/docs/3.1.3-incubating/WorkflowFunctionalSpec.html">Oozie Workflow Specification</a>. The libraries required for the workflow should be available in the lib folder in the workflow path.</li>
+<li>Set up the workflow definition, libraries and referenced scripts on Hadoop.</li>
+<li>Submit the cluster definition</li>
+<li>Submit and schedule the feed and process definitions (see the sketch below)</li></ul>
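+<p>A sketch of the corresponding Falcon CLI invocations (the entity names and file names are hypothetical; see the Falcon CLI documentation for the full option set):</p>
+<div class="source">
+<pre>
+# submit the cluster definition
+$FALCON_HOME/bin/falcon entity -type cluster -submit -file corp-cluster.xml
+# submit the feed and process definitions
+$FALCON_HOME/bin/falcon entity -type feed -submit -file sample-input-feed.xml
+$FALCON_HOME/bin/falcon entity -type process -submit -file sample-process.xml
+# schedule them
+$FALCON_HOME/bin/falcon entity -type feed -schedule -name SampleInput
+$FALCON_HOME/bin/falcon entity -type process -schedule -name SampleProcess
+
+</pre></div></div>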
+<div class="section">
+<h4>Sample Pipeline<a name="Sample_Pipeline"></a></h4></div>
+<div class="section">
+<h5>Cluster   <a name="Cluster"></a></h5>
+<p>Cluster definition that contains end points for name node, job tracker, 
oozie and jms server: The cluster locations MUST be created prior to submitting 
a cluster entity to Falcon. <b>staging</b> must have 777 permissions and the 
parent dirs must have execute permissions <b>working</b> must have 755 
permissions and the parent dirs must have execute permissions</p>
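+<p>For example, the locations referenced in the definition below could be prepared as follows (paths match the sample cluster definition; adjust them for your cluster):</p>
+<div class="source">
+<pre>
+hadoop fs -mkdir -p /projects/falcon/staging /projects/falcon/working
+hadoop fs -chmod 777 /projects/falcon/staging
+hadoop fs -chmod 755 /projects/falcon/working
+
+</pre></div>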
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot;?&gt;
+&lt;!--
+    Cluster configuration
+  --&gt;
+&lt;cluster colo=&quot;ua2&quot; description=&quot;&quot; 
name=&quot;corp&quot; xmlns=&quot;uri:falcon:cluster:0.1&quot;
+    xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;    
+    &lt;interfaces&gt;
+        &lt;interface type=&quot;readonly&quot; 
endpoint=&quot;hftp://name-node.com:50070&quot; version=&quot;2.5.0&quot; /&gt;
+
+        &lt;interface type=&quot;write&quot; 
endpoint=&quot;hdfs://name-node.com:54310&quot; version=&quot;2.5.0&quot; /&gt;
+
+        &lt;interface type=&quot;execute&quot; 
endpoint=&quot;job-tracker:54311&quot; version=&quot;2.5.0&quot; /&gt;
+
+        &lt;interface type=&quot;workflow&quot; 
endpoint=&quot;http://oozie.com:11000/oozie/&quot; version=&quot;4.0.1&quot; 
/&gt;
+
+        &lt;interface type=&quot;messaging&quot; 
endpoint=&quot;tcp://jms-server.com:61616?daemon=true&quot; 
version=&quot;5.1.6&quot; /&gt;
+    &lt;/interfaces&gt;
+
+    &lt;locations&gt;
+        &lt;location name=&quot;staging&quot; 
path=&quot;/projects/falcon/staging&quot; /&gt;
+        &lt;location name=&quot;temp&quot; path=&quot;/tmp&quot; /&gt;
+        &lt;location name=&quot;working&quot; 
path=&quot;/projects/falcon/working&quot; /&gt;
+    &lt;/locations&gt;
+&lt;/cluster&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Input Feed<a name="Input_Feed"></a></h5>
+<p>Hourly feed that defines feed path, frequency, ownership and validity:</p>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+&lt;!--
+    Hourly sample input data
+  --&gt;
+
+&lt;feed description=&quot;sample input data&quot; 
name=&quot;SampleInput&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;
+    xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+    &lt;groups&gt;group&lt;/groups&gt;
+
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+
+    &lt;late-arrival cut-off=&quot;hours(6)&quot; /&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;corp&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2009-01-01T00:00Z&quot; 
end=&quot;2099-12-31T00:00Z&quot; timezone=&quot;UTC&quot; /&gt;
+            &lt;retention limit=&quot;months(24)&quot; 
action=&quot;delete&quot; /&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;locations&gt;
+        &lt;location type=&quot;data&quot; 
path=&quot;/projects/bootcamp/data/${YEAR}-${MONTH}-${DAY}-${HOUR}/SampleInput&quot;
 /&gt;
+        &lt;location type=&quot;stats&quot; 
path=&quot;/projects/bootcamp/stats/SampleInput&quot; /&gt;
+        &lt;location type=&quot;meta&quot; 
path=&quot;/projects/bootcamp/meta/SampleInput&quot; /&gt;
+    &lt;/locations&gt;
+
+    &lt;ACL owner=&quot;suser&quot; group=&quot;users&quot; 
permission=&quot;0755&quot; /&gt;
+
+    &lt;schema location=&quot;/none&quot; provider=&quot;none&quot; /&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Output Feed<a name="Output_Feed"></a></h5>
+<p>Daily feed that defines feed path, frequency, ownership and validity:</p>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+&lt;!--
+    Daily sample output data
+  --&gt;
+
+&lt;feed description=&quot;sample output data&quot; 
name=&quot;SampleOutput&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;
+xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+    &lt;groups&gt;group&lt;/groups&gt;
+
+    &lt;frequency&gt;days(1)&lt;/frequency&gt;
+
+    &lt;late-arrival cut-off=&quot;hours(6)&quot; /&gt;
+
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;corp&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2009-01-01T00:00Z&quot; 
end=&quot;2099-12-31T00:00Z&quot; timezone=&quot;UTC&quot; /&gt;
+            &lt;retention limit=&quot;months(24)&quot; 
action=&quot;delete&quot; /&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+    &lt;locations&gt;
+        &lt;location type=&quot;data&quot; 
path=&quot;/projects/bootcamp/output/${YEAR}-${MONTH}-${DAY}/SampleOutput&quot; 
/&gt;
+        &lt;location type=&quot;stats&quot; 
path=&quot;/projects/bootcamp/stats/SampleOutput&quot; /&gt;
+        &lt;location type=&quot;meta&quot; 
path=&quot;/projects/bootcamp/meta/SampleOutput&quot; /&gt;
+    &lt;/locations&gt;
+
+    &lt;ACL owner=&quot;suser&quot; group=&quot;users&quot; 
permission=&quot;0755&quot; /&gt;
+
+    &lt;schema location=&quot;/none&quot; provider=&quot;none&quot; /&gt;
+&lt;/feed&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Process<a name="Process"></a></h5>
+<p>Sample process which runs daily at the 6th hour on the corp cluster. It takes one input, SampleInput, for the previous day (24 instances). It generates one output, SampleOutput, for the previous day. The workflow is defined at /projects/bootcamp/workflow/workflow.xml. Any libraries available for the workflow should be at /projects/bootcamp/workflow/lib. The process also defines the properties queueName, ssh.host, and fileTimestamp, which are passed to the workflow. In addition, Falcon exposes the following properties to the workflow: nameNode, jobTracker (hadoop properties), input and output (input/output properties).</p>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
+&lt;!--
+    Daily sample process. Runs at 6th hour every day. Input - last day's 
hourly data. Generates output for yesterday
+ --&gt;
+&lt;process name=&quot;SampleProcess&quot;&gt;
+    &lt;cluster name=&quot;corp&quot; /&gt;
+
+    &lt;frequency&gt;days(1)&lt;/frequency&gt;
+
+    &lt;validity start=&quot;2012-04-03T06:00Z&quot; 
end=&quot;2022-12-30T00:00Z&quot; timezone=&quot;UTC&quot; /&gt;
+
+    &lt;inputs&gt;
+        &lt;input name=&quot;input&quot; feed=&quot;SampleInput&quot; 
start=&quot;yesterday(0,0)&quot; end=&quot;today(-1,0)&quot; /&gt;
+    &lt;/inputs&gt;
+
+    &lt;outputs&gt;
+            &lt;output name=&quot;output&quot; feed=&quot;SampleOutput&quot; 
instance=&quot;yesterday(0,0)&quot; /&gt;
+    &lt;/outputs&gt;
+
+    &lt;properties&gt;
+        &lt;property name=&quot;queueName&quot; value=&quot;reports&quot; /&gt;
+        &lt;property name=&quot;ssh.host&quot; value=&quot;host.com&quot; /&gt;
+        &lt;property name=&quot;fileTimestamp&quot; 
value=&quot;${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}&quot; /&gt;
+    &lt;/properties&gt;
+
+    &lt;workflow engine=&quot;oozie&quot; 
path=&quot;/projects/bootcamp/workflow&quot; /&gt;
+
+    &lt;retry policy=&quot;periodic&quot; delay=&quot;minutes(5)&quot; 
attempts=&quot;3&quot; /&gt;
+    
+    &lt;late-process policy=&quot;exp-backoff&quot; 
delay=&quot;hours(1)&quot;&gt;
+        &lt;late-input input=&quot;input&quot; 
workflow-path=&quot;/projects/bootcamp/workflow/lateinput&quot; /&gt;
+    &lt;/late-process&gt;
+&lt;/process&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Oozie Workflow<a name="Oozie_Workflow"></a></h5>
+<p>The sample user workflow contains 3 actions:</p>
+<ul>
+<li>Pig action - Executes pig script 
/projects/bootcamp/workflow/script.pig</li>
+<li>concatenator - Java action that concatenates part files and generates a 
single file</li>
+<li>file upload - ssh action that gets the concatenated file from hadoop and 
sends the file to a remote host</li></ul>
+<div class="source">
+<pre>
+&lt;workflow-app xmlns=&quot;uri:oozie:workflow:0.2&quot; 
name=&quot;sample-wf&quot;&gt;
+        &lt;start to=&quot;pig&quot; /&gt;
+
+        &lt;action name=&quot;pig&quot;&gt;
+                &lt;pig&gt;
+                        &lt;job-tracker&gt;${jobTracker}&lt;/job-tracker&gt;
+                        &lt;name-node&gt;${nameNode}&lt;/name-node&gt;
+                        &lt;prepare&gt;
+                                &lt;delete path=&quot;${output}&quot;/&gt;
+                        &lt;/prepare&gt;
+                        &lt;configuration&gt;
+                                &lt;property&gt;
+                                        
&lt;name&gt;mapred.job.queue.name&lt;/name&gt;
+                                        &lt;value&gt;${queueName}&lt;/value&gt;
+                                &lt;/property&gt;
+                                &lt;property&gt;
+                                        
&lt;name&gt;mapreduce.fileoutputcommitter.marksuccessfuljobs&lt;/name&gt;
+                                        &lt;value&gt;true&lt;/value&gt;
+                                &lt;/property&gt;
+                        &lt;/configuration&gt;
+                        
&lt;script&gt;${nameNode}/projects/bootcamp/workflow/script.pig&lt;/script&gt;
+                        &lt;param&gt;input=${input}&lt;/param&gt;
+                        &lt;param&gt;output=${output}&lt;/param&gt;
+                        &lt;file&gt;lib/dependent.jar&lt;/file&gt;
+                &lt;/pig&gt;
+                &lt;ok to=&quot;concatenator&quot; /&gt;
+                &lt;error to=&quot;fail&quot; /&gt;
+        &lt;/action&gt;
+
+        &lt;action name=&quot;concatenator&quot;&gt;
+                &lt;java&gt;
+                        &lt;job-tracker&gt;${jobTracker}&lt;/job-tracker&gt;
+                        &lt;name-node&gt;${nameNode}&lt;/name-node&gt;
+                        &lt;prepare&gt;
+                                &lt;delete 
path=&quot;${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv&quot;/&gt;
+                        &lt;/prepare&gt;
+                        &lt;configuration&gt;
+                                &lt;property&gt;
+                                        
&lt;name&gt;mapred.job.queue.name&lt;/name&gt;
+                                        &lt;value&gt;${queueName}&lt;/value&gt;
+                                &lt;/property&gt;
+                        &lt;/configuration&gt;
+                        
&lt;main-class&gt;com.wf.Concatenator&lt;/main-class&gt;
+                        &lt;arg&gt;${output}&lt;/arg&gt;
+                        
&lt;arg&gt;${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv&lt;/arg&gt;
+                &lt;/java&gt;
+                &lt;ok to=&quot;fileupload&quot; /&gt;
+                &lt;error to=&quot;fail&quot;/&gt;
+        &lt;/action&gt;
+                        
+        &lt;action name=&quot;fileupload&quot;&gt;
+                &lt;ssh&gt;
+                        &lt;host&gt;localhost&lt;/host&gt;
+                        &lt;command&gt;/tmp/fileupload.sh&lt;/command&gt;
+                        
&lt;args&gt;${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv&lt;/args&gt;
+                        
&lt;args&gt;${wf:conf(&quot;ssh.host&quot;)}&lt;/args&gt;
+                        &lt;capture-output/&gt;
+                &lt;/ssh&gt;
+                &lt;ok to=&quot;fileUploadDecision&quot; /&gt;
+                &lt;error to=&quot;fail&quot;/&gt;
+        &lt;/action&gt;
+
+        &lt;decision name=&quot;fileUploadDecision&quot;&gt;
+                &lt;switch&gt;
+                        &lt;case to=&quot;end&quot;&gt;
+                                ${wf:actionData('fileupload')['output'] == '0'}
+                        &lt;/case&gt;
+                        &lt;default to=&quot;fail&quot;/&gt;
+                &lt;/switch&gt;
+        &lt;/decision&gt;
+
+        &lt;kill name=&quot;fail&quot;&gt;
+                &lt;message&gt;Workflow failed, error 
message[${wf:errorMessage(wf:lastErrorNode())}]&lt;/message&gt;
+        &lt;/kill&gt;
+
+        &lt;end name=&quot;end&quot; /&gt;
+&lt;/workflow-app&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>File Upload Script<a name="File_Upload_Script"></a></h5>
+<p>The script gets the file from Hadoop, rsyncs the file to /tmp on the remote host, and deletes the file from Hadoop.</p>
+<div class="source">
+<pre>
+#!/bin/bash
+
+trap 'echo &quot;output=$?&quot;; exit $?' ERR INT TERM
+
+echo &quot;Arguments: $@&quot;
+SRCFILE=$1
+DESTHOST=$3
+
+FILENAME=`basename $SRCFILE`
+rm -f /tmp/$FILENAME
+hadoop fs -copyToLocal $SRCFILE /tmp/
+echo &quot;Copied $SRCFILE to /tmp&quot;
+
+rsync -ztv --rsh=ssh --stats /tmp/$FILENAME $DESTHOST:/tmp
+echo &quot;rsynced $FILENAME to $DESTUSER@$DESTHOST:$DESTFILE&quot;
+
+hadoop fs -rmr $SRCFILE
+echo &quot;Deleted $SRCFILE&quot;
+
+rm -f /tmp/$FILENAME
+echo &quot;output=0&quot;
+
+</pre></div></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2015
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

Added: falcon/site/0.8/Operability.html
URL: 
http://svn.apache.org/viewvc/falcon/site/0.8/Operability.html?rev=1717229&view=auto
==============================================================================
--- falcon/site/0.8/Operability.html (added)
+++ falcon/site/0.8/Operability.html Mon Nov 30 11:11:50 2015
@@ -0,0 +1,153 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2015-11-30
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20151130" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Operationalizing Falcon</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Operationalizing Falcon</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-11-30</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.8</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Operationalizing Falcon<a name="Operationalizing_Falcon"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>Apache Falcon provides various tools to operationalize Falcon, consisting
+of Alerts for unrecoverable errors, Audits of user actions, Metrics, and
+Notifications. They are detailed below.</p></div>
+<div class="section">
+<h3>Lineage<a name="Lineage"></a></h3>
+<p>Lineage currently has no way to access or restore information about entity
+instances created while lineage was disabled. Information about entities,
+however, is preserved and bootstrapped when lineage is enabled. If you have to
+reset the graph DB, you can delete the graph DB files specified in
+startup.properties and restart Falcon. Please note: you will lose all
+information about the instances if you delete the graph DB.</p></div>
+<div class="section">
+<h3>Monitoring<a name="Monitoring"></a></h3>
+<p>Falcon provides monitoring of various events by capturing metrics for
+those events. The metric numbers can then be used to monitor the performance
+and health of the Falcon system and of the entire processing pipelines.</p>
+<p>Falcon also exposes <a class="externalLink"
+href="https://github.com/thinkaurelius/titan/wiki/Titan-Performance-and-Monitoring">metrics
+for Titan DB</a>.</p>
+<p>Users can view the logs of these events in the metric.log file; by default
+this file is created under the ${user.dir}/logs/ directory. Users may also
+extend the Falcon monitoring framework to send events to systems like
+Mondemand/lwes by implementing the
+org.apache.falcon.plugin.MonitoringPlugin interface, as sketched below.</p>
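+<p>A minimal sketch of such a plugin follows. The monitor(ResourceMessage)
+signature and the org.apache.falcon.aspect.ResourceMessage import are
+assumptions and should be verified against the Falcon source for your
+version; the class name is illustrative.</p>
+<div class="source">
+<pre>
+// Illustrative sketch: forward each captured event to stdout. Replace the
+// println with a call into your monitoring system (e.g. an lwes emitter).
+package com.example.falcon;
+
+import org.apache.falcon.aspect.ResourceMessage;
+import org.apache.falcon.plugin.MonitoringPlugin;
+
+public class ForwardingMonitoringPlugin implements MonitoringPlugin {
+    @Override
+    public void monitor(ResourceMessage message) {
+        // One message is delivered per captured event (assumed contract).
+        System.out.println(&quot;falcon-event: &quot; + message);
+    }
+}
+
+</pre></div>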
+<p>The following events are captured by Falcon for logging the metrics:</p>
+<ol style="list-style-type: decimal">
+<li>New cluster definitions posted to Falcon (success &amp; failures)</li>
+<li>New feed definition posted to Falcon (success &amp; failures)</li>
+<li>New process definition posted to Falcon (success &amp; failures)</li>
+<li>Process update events (success &amp; failures)</li>
+<li>Feed update events (success &amp; failures)</li>
+<li>Cluster update events (success &amp; failures)</li>
+<li>Process suspend events (success &amp; failures)</li>
+<li>Feed suspend events (success &amp; failures)</li>
+<li>Process resume events (success &amp; failures)</li>
+<li>Feed resume events (success &amp; failures)</li>
+<li>Process remove events (success &amp; failures)</li>
+<li>Feed remove events (success &amp; failures)</li>
+<li>Cluster remove events (success &amp; failures)</li>
+<li>Process instance kill events (success &amp; failures)</li>
+<li>Process instance re-run events (success &amp; failures)</li>
+<li>Process instance generation events</li>
+<li>Process instance failure events</li>
+<li>Process instance auto-retry events</li>
+<li>Process instance retry exhaust events</li>
+<li>Feed instance deletion event</li>
+<li>Feed instance deletion failure event (no retries)</li>
+<li>Feed instance replication event</li>
+<li>Feed instance replication failure event</li>
+<li>Feed instance replication auto-retry event</li>
+<li>Feed instance replication retry exhaust event</li>
+<li>Feed instance late arrival event</li>
+<li>Feed instance post cut-off arrival event</li>
+<li>Process re-run due to late feed event</li>
+<li>Transaction rollback failed event</li></ol>
+<p>The metric logged for an event has the following properties:</p>
+<ol style="list-style-type: decimal">
+<li>Action - Name of the event.</li>
+<li>Dimensions - A list of name/value pairs of various attributes for a given 
action.</li>
+<li>Status - Status of the action: FAILED or SUCCEEDED.</li>
+<li>Time-taken - Time taken in nanoseconds for a given action.</li></ol>
+<p>An example of the event logged when a new process definition is
+submitted:</p>
+<div class="source">
+<pre>
+2012-05-04 12:23:34,026 {Action:submit, Dimensions:{entityType=process}, Status: SUCCEEDED, Time-taken:97087000 ns}
+
+</pre></div>
+<p>Users may parse metric.log, or capture these events in custom monitoring
+frameworks, and can plot graphs or send alerts according to their
+requirements; a small parsing sketch follows.</p>
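+<p>The sketch below parses one metric.log line in the format shown in the
+example above; the regular expression simply mirrors that example and is an
+assumption, not a documented format contract.</p>
+<div class="source">
+<pre>
+// Illustrative sketch: extract fields from a metric.log line of the form
+// shown in the example above (the format is assumed from that example).
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class MetricLineParser {
+    private static final Pattern LINE = Pattern.compile(
+        &quot;\\{Action:(\\w+), Dimensions:\\{([^}]*)\\}, &quot;
+        + &quot;Status: (\\w+), Time-taken:(\\d+) ns\\}&quot;);
+
+    public static void main(String[] args) {
+        String line = &quot;2012-05-04 12:23:34,026 {Action:submit, &quot;
+            + &quot;Dimensions:{entityType=process}, Status: SUCCEEDED, &quot;
+            + &quot;Time-taken:97087000 ns}&quot;;
+        Matcher m = LINE.matcher(line);
+        if (m.find()) {
+            System.out.println(&quot;action=&quot; + m.group(1)
+                + &quot; status=&quot; + m.group(3)
+                + &quot; nanos=&quot; + m.group(4));
+        }
+    }
+}
+
+</pre></div></div>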
+<div class="section">
+<h3>Notifications<a name="Notifications"></a></h3>
+<p>Falcon creates a JMS topic for every process/feed that is scheduled in 
Falcon. The implementation class and the broker URL of the JMS engine are read
from the dependent cluster's definition. Users may register consumers on the 
required topic to check the availability or status of feed instances.</p>
+<p>For a given process that is scheduled, the name of the topic is the same
+as the process name. Falcon sends a MapMessage to the JMS topic for every
+feed produced by an instance of the process. The JMS MapMessage sent to a
+topic has the following properties: entityName, feedNames, feedInstancePath,
+workflowId, runId, nominalTime, timeStamp, brokerUrl, brokerImplClass,
+entityType, operation, logFile, topicName, status, brokerTTL.</p>
+<p>For a given feed that is scheduled, the name of the topic is the same as
+the feed name. Falcon sends a MapMessage for every feed instance that is
+deleted/archived/replicated, depending upon the retention policy set in the
+feed definition. The JMS MapMessage sent to a topic has the following
+properties: entityName, feedNames, feedInstancePath, workflowId, runId,
+nominalTime, timeStamp, brokerUrl, brokerImplClass, entityType, operation,
+logFile, topicName, status, brokerTTL. A consumer sketch follows.</p>
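+<p>As an illustration, the sketch below registers a consumer on such a topic
+using the standard JMS API. The broker URL, topic name, and class name are
+placeholders; an ActiveMQ broker and the ActiveMQ client library on the
+classpath are assumed.</p>
+<div class="source">
+<pre>
+// Illustrative sketch: listen for Falcon MapMessages on an entity's topic.
+import javax.jms.Connection;
+import javax.jms.MapMessage;
+import javax.jms.Message;
+import javax.jms.MessageConsumer;
+import javax.jms.Session;
+import javax.jms.Topic;
+import org.apache.activemq.ActiveMQConnectionFactory;
+
+public class FeedStatusListener {
+    public static void main(String[] args) throws Exception {
+        // Placeholder broker URL; in practice it comes from the cluster definition.
+        Connection conn =
+            new ActiveMQConnectionFactory(&quot;tcp://localhost:61616&quot;).createConnection();
+        conn.start();
+        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
+        // The topic name is the same as the process/feed name.
+        Topic topic = session.createTopic(&quot;my-process-name&quot;);
+        MessageConsumer consumer = session.createConsumer(topic);
+        while (true) {
+            Message msg = consumer.receive();
+            if (msg instanceof MapMessage) {
+                MapMessage map = (MapMessage) msg;
+                System.out.println(map.getString(&quot;feedInstancePath&quot;)
+                    + &quot; -&gt; &quot; + map.getString(&quot;status&quot;));
+            }
+        }
+    }
+}
+
+</pre></div>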
+<p>The JMS messages are automatically purged after a certain period (default
+3 days) by the Falcon JMS housekeeping service. The TTL (time-to-live) for
+JMS messages can be configured in Falcon's startup.properties file.</p></div>
+<div class="section">
+<h3>Alerts<a name="Alerts"></a></h3>
+<p>Falcon writes alerts for unrecoverable errors to a log file by default.
+Users can view these alerts in the alerts.log file; by default this file is
+created under the ${user.dir}/logs/ directory.</p>
+<p>Users may also extend the Falcon alerting plugin to send events to systems
+like Nagios, etc. by implementing the org.apache.falcon.plugin.AlertingPlugin
+interface.</p></div>
+<div class="section">
+<h3>Audits<a name="Audits"></a></h3>
+<p>Falcon audits all user activity and captures it in a log file by default.
+Users can view these audits in the audit.log file; by default this file is
+created under the ${user.dir}/logs/ directory.</p>
+<p>Users may also extend the Falcon audit plugin to send audits to systems
+like Apache Argus, etc. by implementing the
+org.apache.falcon.plugin.AuditingPlugin interface.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2015
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

Added: falcon/site/0.8/PrismSetup.png
URL: 
http://svn.apache.org/viewvc/falcon/site/0.8/PrismSetup.png?rev=1717229&view=auto
==============================================================================
Binary file - no diff available.

Propchange: falcon/site/0.8/PrismSetup.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: falcon/site/0.8/ProcessSchedule.png
URL: 
http://svn.apache.org/viewvc/falcon/site/0.8/ProcessSchedule.png?rev=1717229&view=auto
==============================================================================
Binary file - no diff available.

Propchange: falcon/site/0.8/ProcessSchedule.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: falcon/site/0.8/Recipes.html
URL: 
http://svn.apache.org/viewvc/falcon/site/0.8/Recipes.html?rev=1717229&view=auto
==============================================================================
--- falcon/site/0.8/Recipes.html (added)
+++ falcon/site/0.8/Recipes.html Mon Nov 30 11:11:50 2015
@@ -0,0 +1,175 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2015-11-30
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20151130" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Falcon Recipes</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Falcon Recipes</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-11-30</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.8</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Falcon Recipes<a name="Falcon_Recipes"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>A Falcon recipe is a static process template with a parameterized workflow
+that realizes a specific use case. Recipes are defined in user space and do
+not support update or lifecycle management.</p>
+<p>For example:</p>
+<ul>
+<li>Replicating directories from one HDFS cluster to another (not timed
+partitions)</li>
+<li>Replicating Hive metadata (databases, tables, views, etc.)</li>
+<li>Replicating between HDFS and Hive, in either direction</li>
+<li>Data masking, etc.</li></ul></div>
+<div class="section">
+<h3>Proposal<a name="Proposal"></a></h3>
+<p>Falcon provides a Process abstraction that encapsulates the configuration
+for a user workflow with scheduling controls. Every recipe can be modeled as
+a Process within Falcon, which executes the user workflow periodically. The
+process and its associated workflow are parameterized. The user provides a
+properties file with name/value pairs that are substituted by Falcon before
+scheduling. Falcon translates a recipe into a process entity by replacing the
+parameters in the workflow definition.</p></div>
+<div class="section">
+<h3>Falcon CLI recipe support<a name="Falcon_CLI_recipe_support"></a></h3>
+<p><a href="./FalconCLI.html">Recipe command usage is defined here.</a></p>
+<p>The CLI accepts the recipe option with a recipe name and an optional tool,
+and does the following:</p>
+<ul>
+<li>Validates the options; the name option is mandatory, while tool is
+optional and should be provided only if the user wants to override the base
+recipe tool</li>
+<li>Looks for the &lt;name&gt;-workflow.xml, &lt;name&gt;-template.xml and
+&lt;name&gt;.properties files in the path specified by falcon.recipe.path in
+client.properties; if the files cannot be found, the Falcon CLI fails</li>
+<li>Invokes a tool to substitute the properties in the templated process for
+the recipe, invoking the base tool by default if the tool option is not
+passed; the tool is responsible for generating the process entity at the path
+specified by the Falcon CLI</li>
+<li>Validates the generated entity</li>
+<li>Submits and schedules this entity</li>
+<li>Stores the generated process entity files in a tmp directory</li></ul></div>
+<div class="section">
+<h3>Base Recipe tool<a name="Base_Recipe_tool"></a></h3>
+<p>Falcon provides a base tool that recipes can override. The base recipe
+tool does the following:</p>
+<ul>
+<li>Expects the recipe template file path, the recipe properties file path,
+and the path where the generated process entity should be written, and
+validates these arguments</li>
+<li>Validates that the artifacts, i.e. the workflow and/or lib files
+specified in the recipe template, exist on the local filesystem or HDFS at
+the specified path, and returns an error otherwise</li>
+<li>Copies the artifacts to HDFS if they exist on the local filesystem
+<ul>
+<li>If the workflow is on the local FS, the falcon.recipe.workflow.path
+property in the recipe properties file is mandatory for it to be copied to
+HDFS. If the templated process requires custom libs, the
+falcon.recipe.workflow.lib.path property is mandatory for them to be copied
+from the local FS to HDFS. The recipe tool copies the local artifacts only if
+these properties are set in the properties file</li></ul></li>
+<li>Looks for the pattern ##[A-Za-z0-9_.]*## in the templated process and
+substitutes each occurrence with the corresponding property value; the
+process entity generated after substitution is written to the empty file
+passed by the Falcon CLI (a sketch of this substitution follows this
+list)</li></ul>
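+<p>A sketch of this kind of substitution is shown below; it illustrates the
+behaviour described above and is not Falcon's actual implementation.</p>
+<div class="source">
+<pre>
+// Illustrative sketch of ##...## substitution; not Falcon's recipe tool code.
+import java.util.Properties;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class TemplateSubstitutor {
+    private static final Pattern TOKEN = Pattern.compile(&quot;##([A-Za-z0-9_.]*)##&quot;);
+
+    public static String substitute(String template, Properties recipeProps) {
+        Matcher m = TOKEN.matcher(template);
+        StringBuffer out = new StringBuffer();
+        while (m.find()) {
+            // Keys are prefixed with &quot;falcon.recipe.&quot; in the properties file.
+            String value = recipeProps.getProperty(&quot;falcon.recipe.&quot; + m.group(1));
+            if (value == null) {
+                throw new IllegalArgumentException(
+                    &quot;Missing property: falcon.recipe.&quot; + m.group(1));
+            }
+            m.appendReplacement(out, Matcher.quoteReplacement(value));
+        }
+        m.appendTail(out);
+        return out.toString();
+    }
+
+    public static void main(String[] args) {
+        Properties p = new Properties();
+        p.setProperty(&quot;falcon.recipe.workflow.name&quot;, &quot;hdfs-dr-workflow&quot;);
+        System.out.println(substitute(&quot;&lt;workflow name=\&quot;##workflow.name##\&quot;&gt;&quot;, p));
+        // Prints: &lt;workflow name=&quot;hdfs-dr-workflow&quot;&gt;
+    }
+}
+
+</pre></div></div>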
+<div class="section">
+<h3>Recipe template file format<a name="Recipe_template_file_format"></a></h3>
+<ul>
+<li>Any templatized string should be in the format ##[A-Za-z0-9_.]*##.</li>
+<li>There should be a corresponding entry in the recipe properties file 
&quot;falcon.recipe.&lt;templatized-string&gt; = &lt;value to be 
substituted&gt;&quot;</li></ul>
+<div class="source">
+<pre>
+Example: If the entry in the recipe template is &lt;workflow name=&quot;##workflow.name##&quot;&gt;,
+there should be a corresponding entry in the recipe properties file:
+falcon.recipe.workflow.name=hdfs-dr-workflow
+
+</pre></div></div>
+<div class="section">
+<h3>Recipe properties file format<a 
name="Recipe_properties_file_format"></a></h3>
+<ul>
+<li>Regular key value pair properties file</li>
+<li>Property key should be prefixed by &quot;falcon.recipe.&quot;</li></ul>
+<div class="source">
+<pre>
+Example: falcon.recipe.workflow.name=hdfs-dr-workflow
+Recipe template will have &lt;workflow name=&quot;##workflow.name##&quot;&gt;. The recipe tool will look for the pattern ##workflow.name##
+and replace it with the property value &quot;hdfs-dr-workflow&quot;. 
Substituted template will have &lt;workflow 
name=&quot;hdfs-dr-workflow&quot;&gt;
+
+</pre></div></div>
+<div class="section">
+<h3>Metrics<a name="Metrics"></a></h3>
+<p>HDFS DR recipes capture replication metrics such as TIMETAKEN, COPY, and
+BYTESCOPIED for each instance and populate them in the graph DB for display
+in the UI.</p></div>
+<div class="section">
+<h3>Managing the scheduled recipe process<a 
name="Managing_the_scheduled_recipe_process"></a></h3>
+<ul>
+<li>A scheduled recipe process is managed like a regular process:
+<ul>
+<li>List : falcon entity -type process -name &lt;recipe-process-name&gt; 
-list</li>
+<li>Status : falcon entity -type process -name &lt;recipe-process-name&gt; 
-status</li>
+<li>Delete : falcon entity -type process -name &lt;recipe-process-name&gt; 
-delete</li></ul></li></ul></div>
+<div class="section">
+<h3>Sample recipes<a name="Sample_recipes"></a></h3>
+<ul>
+<li>Sample recipes are published in addons/recipes</li></ul></div>
+<div class="section">
+<h3>Types of recipes<a name="Types_of_recipes"></a></h3>
+<ul>
+<li><a href="./HDFSDR.html">HDFS Recipe</a></li>
+<li><a href="./HiveDR.html">HiveDR Recipe</a></li></ul></div>
+<div class="section">
+<h3>Packaging<a name="Packaging"></a></h3>
+<ul>
+<li>There is no packaging for recipes at this time, but it will be added
+soon.</li></ul></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2015
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

