Added: incubator/falcon/site/0.6-incubating/FalconCLI.html
URL: 
http://svn.apache.org/viewvc/incubator/falcon/site/0.6-incubating/FalconCLI.html?rev=1643497&view=auto
==============================================================================
--- incubator/falcon/site/0.6-incubating/FalconCLI.html (added)
+++ incubator/falcon/site/0.6-incubating/FalconCLI.html Sat Dec  6 06:11:41 2014
@@ -0,0 +1,278 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2014-12-05
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20141205" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - FalconCLI</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+    
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                                  <a href="../../index.html" 
id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Falcon" width="200px" 
height="45px"/>
+                </a>
+                      </div>
+        <div class="pull-right">                  <a 
href="http://incubator.apache.org"; id="bannerRight">
+                                                                               
                 <img src="images/apache-incubator-logo.png"  alt="Apache 
Incubator"/>
+                </a>
+      </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Home">
+        Home</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">FalconCLI</li>
+        
+                
+                    
+      
+                                              
+    <li class="pull-right">              <a 
href="http://s.apache.org/falcon-0.5-release-notes"; class="externalLink" 
title="Released: 2014-09-22">
+        Released: 2014-09-22</a>
+  </li>
+
+        <li class="divider pull-right">|</li>
+      
+    <li class="pull-right">              <a 
href="http://www.apache.org/dist/incubator/falcon"; class="externalLink" 
title="0.5-incubating">
+        0.5-incubating</a>
+  </li>
+
+                        </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>FalconCLI<a name="FalconCLI"></a></h2>
+<p>FalconCLI is the command line interface between the user and Falcon. It is a 
+command line utility provided by Falcon that supports Entity Management, Instance 
+Management and Admin operations. FalconCLI uses a set of web services to interact 
+with Falcon.</p></div>
+<div class="section">
+<h3>Entity Management Operations<a 
name="Entity_Management_Operations"></a></h3></div>
+<div class="section">
+<h4>Submit<a name="Submit"></a></h4>
+<p>The submit option is used to set up an entity definition.</p>
+<p>Example: $FALCON_HOME/bin/falcon entity -submit -type cluster -file 
+/cluster/definition.xml</p>
+<p>Note: The -url option in the above and all subsequent commands is optional. If 
+not specified, it is picked up from the client.properties file. If the option is 
+neither provided nor set in client.properties, Falcon CLI will fail.</p></div>
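+<p>A minimal sketch of a client.properties that supplies the URL, assuming the 
+falcon.url property name of a default Falcon client installation:</p>
+<div class="source">
+<pre>
+# conf/client.properties -- consulted when -url is not passed on the CLI
+falcon.url=http://localhost:15000/
+
+</pre></div>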
+<div class="section">
+<h4>Schedule<a name="Schedule"></a></h4>
+<p>Once submitted, an entity can be scheduled using the schedule option. Only a 
+process or feed can be scheduled.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity  -type [process|feed] -name 
&lt;&lt;name&gt;&gt; -schedule</p>
+<p>Example: $FALCON_HOME/bin/falcon entity  -type process -name sampleProcess 
-schedule</p></div>
+<div class="section">
+<h4>Suspend<a name="Suspend"></a></h4>
+<p>Suspend on an entity results in suspension of the oozie bundle that was 
+scheduled earlier through the schedule function. No further instances are 
+executed on a suspended entity. Only schedulable entities (process/feed) can be 
+suspended.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity  -type [feed|process] -name 
&lt;&lt;name&gt;&gt; -suspend</p></div>
+<div class="section">
+<h4>Resume<a name="Resume"></a></h4>
+<p>Puts a suspended process/feed back to active, which in turn resumes the 
+applicable oozie bundle.</p></div>
+<p>Usage:  $FALCON_HOME/bin/falcon entity  -type [feed|process] -name 
&lt;&lt;name&gt;&gt; -resume</p></div>
+<div class="section">
+<h4>Delete<a name="Delete"></a></h4>
+<p>Delete removes the submitted entity definition for the specified entity and 
+puts it into the archive.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity  -type [cluster|feed|process] -name 
&lt;&lt;name&gt;&gt; -delete</p></div>
+<div class="section">
+<h4>List<a name="List"></a></h4>
+<p>Entities of a particular type can be listed with the list sub-command.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -list</p>
+<p>Optional Args : -fields &lt;&lt;field1,field2&gt;&gt; -filterBy 
&lt;&lt;field1:value1,field2:value2&gt;&gt; -tags 
&lt;&lt;tagkey=tagvalue,tagkey=tagvalue&gt;&gt; -orderBy &lt;&lt;field&gt;&gt; 
-sortOrder &lt;&lt;sortOrder&gt;&gt; -offset 0 -numResults 10</p>
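+<p>For instance, a sketch combining several of the optional arguments (the field 
+names are illustrative; see the linked page for the supported values):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -list -fields status,tags \
+    -orderBy name -sortOrder asc -offset 0 -numResults 10
+
+</pre></div>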
+<p><a href="./Restapi/EntityList.html">Optional params described 
here.</a></p></div>
+<div class="section">
+<h4>Summary<a name="Summary"></a></h4>
+<p>Lists a summary of entities of a particular type in a given cluster. The 
+entity summary includes the N most recent instances of each entity.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [feed|process] -summary</p>
+<p>Optional Args : -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -fields &lt;&lt;field1,field2&gt;&gt; 
-filterBy &lt;&lt;field1:value1,field2:value2&gt;&gt; -tags 
&lt;&lt;tagkey=tagvalue,tagkey=tagvalue&gt;&gt; -orderBy &lt;&lt;field&gt;&gt; 
-sortOrder &lt;&lt;sortOrder&gt;&gt; -offset 0 -numResults 10 -numInstances 
7</p>
+<p><a href="./Restapi/EntitySummary.html">Optional params described 
here.</a></p></div>
+<div class="section">
+<h4>Update<a name="Update"></a></h4>
+<p>Update operation allows an already submitted/scheduled entity to be 
updated. Cluster update is currently not allowed.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity  -type [feed|process] -name 
&lt;&lt;name&gt;&gt; -update [-effective &lt;&lt;effective time&gt;&gt;] -file 
&lt;&lt;path_to_file&gt;&gt;</p>
+<p>Example: $FALCON_HOME/bin/falcon entity -type process -name <a 
href="./HourlyReportsGenerator.html">HourlyReportsGenerator</a> -update -file 
/process/definition.xml</p></div>
+<div class="section">
+<h4>Status<a name="Status"></a></h4>
+<p>Status returns the current status of the entity.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name 
&lt;&lt;name&gt;&gt; -status</p></div>
+<div class="section">
+<h4>Dependency<a name="Dependency"></a></h4>
+<p>With the use of the dependency option, we can list all the entities on which 
+the specified entity is dependent. For example, for a feed, dependency returns 
+the cluster name, and for a process it returns all the input feeds, output feeds 
+and cluster names.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name 
&lt;&lt;name&gt;&gt; -dependency</p></div>
+<div class="section">
+<h4>Definition<a name="Definition"></a></h4>
+<p>The definition option returns the entity definition submitted earlier during 
+the submit step.</p>
+<p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name 
&lt;&lt;name&gt;&gt; -definition</p></div>
+<div class="section">
+<h3>Instance Management Options<a 
name="Instance_Management_Options"></a></h3></div>
+<div class="section">
+<h4>Kill<a name="Kill"></a></h4>
+<p>The kill sub-command is used to kill all the instances of the specified 
+process whose nominal time is between the given start time and end time.</p>
+<p>Note: 1. The start time and end time need to be specified in TZ format. 
+Example: 01 Jan 2012 01:00 =&gt; 2012-01-01T01:00Z</p>
+<p>2. The process name is a compulsory parameter for each instance management 
+command.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -kill -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
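+<p>A hedged example (the process name and time window are illustrative) that 
+kills all instances with nominal times in the given range:</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -kill \
+    -start &quot;2012-01-01T01:00Z&quot; -end &quot;2012-01-01T05:00Z&quot;
+
+</pre></div>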
+<div class="section">
+<h4>Suspend<a name="Suspend"></a></h4>
+<p>Suspend is used to suspend an instance or instances of the given process. 
+This option pauses the parent workflow in the state it was in at the time of 
+execution of this command.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -suspend -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; 
-end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h4>Continue<a name="Continue"></a></h4>
+<p>Continue option is used to continue the failed workflow instance. This 
option is valid only for process instances in terminal state, i.e. KILLED or 
FAILED.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -continue -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; 
-end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h4>Rerun<a name="Rerun"></a></h4>
+<p>The rerun option is used to rerun instances of a given process. This option 
+is valid only for process instances in a terminal state, i.e. SUCCEEDED, KILLED 
+or FAILED. Optionally, you can specify the properties to override.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -rerun -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot; [-file &lt;&lt;properties 
file&gt;&gt;]</p></div>
+<div class="section">
+<h4>Resume<a name="Resume"></a></h4>
+<p>The resume option is used to resume any instance that is in the suspended 
+state.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -resume -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; 
-end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h4>Status<a name="Status"></a></h4>
+<p>The status option via CLI can be used to get the status of a single or 
+multiple instances. If the instance is not yet materialized but is within the 
+process validity range, WAITING is returned as the state. Along with the status, 
+the instance time is also returned. The log location gives the oozie workflow 
+URL. If the instance is in the WAITING state, missing dependencies are listed. 
+The job URLs are populated for all actions of the user workflow and for 
+non-succeeded actions of the main workflow, so the user need not go to the 
+underlying scheduler to get the job URLs when debugging an issue in the job.</p>
+<p>Example: Suppose a process has 3 instances: one has succeeded, one is in the 
+running state and the other is waiting. The expected output is:</p>
+<div class="source">
+<pre>
+{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getStatus is successful&quot;,
+ &quot;instances&quot;:[
+  {&quot;instance&quot;:&quot;2012-05-07T05:02Z&quot;,&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;},
+  {&quot;instance&quot;:&quot;2012-05-07T05:07Z&quot;,&quot;status&quot;:&quot;RUNNING&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;},
+  {&quot;instance&quot;:&quot;2010-01-02T11:05Z&quot;,&quot;status&quot;:&quot;WAITING&quot;}]}
+
+</pre></div>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -status</p>
+<p>Optional Args : -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -colo &lt;&lt;colo&gt;&gt; -filterBy 
&lt;&lt;field1:value1,field2:value2&gt;&gt; -lifecycle 
&lt;&lt;lifecycles&gt;&gt; -orderBy field -sortOrder &lt;&lt;sortOrder&gt;&gt; 
-offset 0 -numResults 10</p>
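+<p>For example, a sketch restricting the window and page size (the process name 
+and values are illustrative):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -status \
+    -start &quot;2012-05-07T00:00Z&quot; -end &quot;2012-05-08T00:00Z&quot; -offset 0 -numResults 10
+
+</pre></div>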
+<p><a href="./Restapi/InstanceStatus.html"> Optional params described 
here.</a></p></div>
+<div class="section">
+<h4>List<a name="List"></a></h4>
+<p>The list option via CLI can be used to get a single instance or multiple 
+instances. If the instance is not yet materialized but is within the process 
+validity range, WAITING is returned as the state. The instance time is also 
+returned. The log location gives the oozie workflow URL. If the instance is in 
+the WAITING state, missing dependencies are listed.</p>
+<p>Example: Suppose a process has 3 instances: one has succeeded, one is in the 
+running state and the other is waiting. The expected output is:</p>
+<div class="source">
+<pre>
+{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getStatus is successful&quot;,
+ &quot;instances&quot;:[
+  {&quot;instance&quot;:&quot;2012-05-07T05:02Z&quot;,&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;},
+  {&quot;instance&quot;:&quot;2012-05-07T05:07Z&quot;,&quot;status&quot;:&quot;RUNNING&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;},
+  {&quot;instance&quot;:&quot;2010-01-02T11:05Z&quot;,&quot;status&quot;:&quot;WAITING&quot;}]}
+
+</pre></div>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -list</p>
+<p>Optional Args : -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -colo &lt;&lt;colo&gt;&gt; -lifecycle 
&lt;&lt;lifecycles&gt;&gt; -filterBy 
&lt;&lt;field1:value1,field2:value2&gt;&gt; -orderBy field -sortOrder 
&lt;&lt;sortOrder&gt;&gt; -offset 0 -numResults 10</p>
+<p><a href="./Restapi/InstanceList.html">Optional params described 
here.</a></p></div>
+<div class="section">
+<h4>Summary<a name="Summary"></a></h4>
+<p>The summary option via CLI can be used to get the consolidated status of the 
+instances within the specified time period. Each status, along with the 
+corresponding instance count, is listed for each of the applicable colos. The 
+unscheduled instances within the specified time period are included as 
+UNSCHEDULED in the output to provide more clarity.</p>
+<p>Example: Suppose a process has 3 instances: one has succeeded, one is in the 
+running state and the other is waiting. The expected output is:</p>
+<div class="source">
+<pre>
+{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getSummary is successful&quot;,
+ &quot;cluster&quot;: &lt;&lt;name&gt;&gt; [{&quot;SUCCEEDED&quot;:&quot;1&quot;}, {&quot;WAITING&quot;:&quot;1&quot;}, {&quot;RUNNING&quot;:&quot;1&quot;}]}
+
+</pre></div>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -summary</p>
+<p>Optional Args : -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -colo &lt;&lt;colo&gt;&gt; -lifecycle 
&lt;&lt;lifecycles&gt;&gt;</p>
+<p><a href="./Restapi/InstanceSummary.html">Optional params described 
here.</a></p></div>
+<div class="section">
+<h4>Running<a name="Running"></a></h4>
+<p>Running option provides all the running instances of the mentioned 
process.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -running</p>
+<p>Optional Args : -colo &lt;&lt;colo&gt;&gt; -lifecycle 
&lt;&lt;lifecycles&gt;&gt; -filterBy 
&lt;&lt;field1:value1,field2:value2&gt;&gt; -orderBy &lt;&lt;field&gt;&gt; 
-sortOrder &lt;&lt;sortOrder&gt;&gt; -offset 0 -numResults 10</p>
+<p><a href="./Restapi/InstanceRunning.html">Optional params described 
here.</a></p></div>
+<div class="section">
+<h4>FeedInstanceListing<a name="FeedInstanceListing"></a></h4>
+<p>Get falcon feed instance availability.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -entity feed -name 
&lt;&lt;name&gt;&gt; -listing</p>
+<p>Optional Args : -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -colo &lt;&lt;colo&gt;&gt;</p>
+<p><a href="./Restapi/FeedInstanceListing.html">Optional params described 
here.</a></p></div>
+<div class="section">
+<h4>Logs<a name="Logs"></a></h4>
+<p>Get logs for instance actions.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -logs</p>
+<p>Optional Args : -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -runid &lt;&lt;runid&gt;&gt; -colo 
&lt;&lt;colo&gt;&gt; -lifecycle &lt;&lt;lifecycles&gt;&gt; -filterBy 
&lt;&lt;field1:value1,field2:value2&gt;&gt; -orderBy field -sortOrder 
&lt;&lt;sortOrder&gt;&gt; -offset 0 -numResults 10</p>
+<p><a href="./Restapi/InstanceLogs.html">Optional params described 
here.</a></p></div>
+<div class="section">
+<h4>LifeCycle<a name="LifeCycle"></a></h4>
+<p>Describes the list of life cycles of an entity. For a feed it can be 
+replication/retention, and for a process it can be execution. This can be used 
+with the instance management options. Default values are replication for feed 
+and execution for process.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -status -lifecycle &lt;&lt;lifecycletype&gt;&gt; 
-start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
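+<p>A hedged example (the feed name and dates are illustrative) that checks the 
+status of the retention life cycle for a feed:</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon instance -type feed -name sampleFeed -status \
+    -lifecycle retention -start &quot;2012-01-01T00:00Z&quot; -end &quot;2012-01-02T00:00Z&quot;
+
+</pre></div>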
+<div class="section">
+<h4>Params<a name="Params"></a></h4>
+<p>Displays the workflow params of a given instance, where the start time is 
+taken as the nominal time of that instance.</p>
+<p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; 
-name &lt;&lt;name&gt;&gt; -params -start 
&quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div>
+<div class="section">
+<h3>Metadata Lineage Options<a name="Metadata_Lineage_Options"></a></h3></div>
+<div class="section">
+<h4>Vertex<a name="Vertex"></a></h4>
+<p>Get the vertex with the specified id.</p>
+<p>Usage: $FALCON_HOME/bin/falcon metadata -vertex -id &lt;&lt;id&gt;&gt;</p>
+<p>Example: $FALCON_HOME/bin/falcon metadata -vertex -id 4</p></div>
+<div class="section">
+<h4>Vertices<a name="Vertices"></a></h4>
+<p>Get all vertices for a key index given the specified value.</p>
+<p>Usage: $FALCON_HOME/bin/falcon metadata -vertices -key &lt;&lt;key&gt;&gt; 
-value &lt;&lt;value&gt;&gt;</p>
+<p>Example: $FALCON_HOME/bin/falcon metadata -vertices -key type -value 
feed-instance</p></div>
+<div class="section">
+<h4>Vertex Edges<a name="Vertex_Edges"></a></h4>
+<p>Get the adjacent vertices or edges of the vertex with the specified 
direction.</p>
+<p>Usage: $FALCON_HOME/bin/falcon metadata -edges -id 
&lt;&lt;vertex-id&gt;&gt; -direction &lt;&lt;direction&gt;&gt;</p>
+<p>Example: $FALCON_HOME/bin/falcon metadata -edges -id 4 -direction both</p>
+<p>Example: $FALCON_HOME/bin/falcon metadata -edges -id 4 -direction inE</p>
+<div class="section">
+<h4>Edge<a name="Edge"></a></h4>
+<p>Get the edge with the specified id.</p>
+<p>Usage: $FALCON_HOME/bin/falcon metadata -edge -id &lt;&lt;id&gt;&gt;</p>
+<p>Example: $FALCON_HOME/bin/falcon metadata -edge -id Q9n-Q-5g</p></div>
+<div class="section">
+<h3>Metadata Discovery Options<a 
name="Metadata_Discovery_Options"></a></h3></div>
+<div class="section">
+<h4>List<a name="List"></a></h4>
+<p>Lists all dimensions of a given type. If the user provides the optional param 
+cluster, only the dimensions related to that cluster are listed. Usage: 
+$FALCON_HOME/bin/falcon metadata -list -type 
+[cluster_entity|feed_entity|process_entity|user|colo|tags|groups|pipelines]</p>
+<p>Optional Args : -cluster &lt;&lt;cluster name&gt;&gt;</p>
+<p>Example: $FALCON_HOME/bin/falcon metadata -list -type process_entity 
+-cluster primary-cluster</p>
+<p>Example: $FALCON_HOME/bin/falcon metadata -list -type tags</p>
+<div class="section">
+<h4>Relations<a name="Relations"></a></h4>
+<p>Lists all dimensions related to the specified dimension, identified by 
+dimension-type and dimension-name. Usage: $FALCON_HOME/bin/falcon metadata 
+-relations -type 
+[cluster_entity|feed_entity|process_entity|user|colo|tags|groups|pipelines] 
+-name &lt;&lt;Dimension Name&gt;&gt;</p>
+<p>Example: $FALCON_HOME/bin/falcon metadata -relations -type process_entity 
-name sample-process</p></div>
+<div class="section">
+<h3>Admin Options<a name="Admin_Options"></a></h3></div>
+<div class="section">
+<h4>Help<a name="Help"></a></h4>
+<p>Usage: $FALCON_HOME/bin/falcon admin -help</p></div>
+<div class="section">
+<h4>Version<a name="Version"></a></h4>
+<p>Version returns the current version of Falcon installed. Usage: 
$FALCON_HOME/bin/falcon admin -version</p></div>
+<div class="section">
+<h4>Status<a name="Status"></a></h4>
+<p>Status returns the current state of Falcon (running or stopped). Usage: 
$FALCON_HOME/bin/falcon admin -status</p></div>
+<div class="section">
+<h3>Recipe Options<a name="Recipe_Options"></a></h3></div>
+<div class="section">
+<h4>Submit Recipe<a name="Submit_Recipe"></a></h4>
+<p>Submit the specified recipe.</p>
+<p>Usage: $FALCON_HOME/bin/falcon recipe -name &lt;name&gt;, where &lt;name&gt; 
+is the name of the recipe. The user should have defined 
+&lt;name&gt;-template.xml and &lt;name&gt;.properties in the path specified by 
+falcon.recipe.path in the client.properties file. The falcon.home path is used 
+if it is not specified in client.properties. If it is not specified in 
+client.properties and the files also cannot be found at falcon.home, Falcon CLI 
+will fail.</p>
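+<p>A minimal sketch of the relevant client.properties entry and the expected 
+files (the directory path is an illustrative placeholder):</p>
+<div class="source">
+<pre>
+# conf/client.properties
+falcon.recipe.path=/apps/falcon/recipes
+
+# expected under /apps/falcon/recipes for a recipe named hdfs-replication:
+#   hdfs-replication-template.xml
+#   hdfs-replication.properties
+</pre></div>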
+<p>Optional Args : -tool &lt;recipeToolClassName&gt; Falcon provides a base tool 
+that recipes can override. If this option is not specified, the default 
+RecipeTool is used. This option is required if the user defines their own recipe 
+tool class.</p>
+<p>Example: $FALCON_HOME/bin/falcon recipe -name hdfs-replication</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2014
+                        <a href="http://www.apache.org">Apache Software Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

Added: incubator/falcon/site/0.6-incubating/FalconDocumentation.html
URL: 
http://svn.apache.org/viewvc/incubator/falcon/site/0.6-incubating/FalconDocumentation.html?rev=1643497&view=auto
==============================================================================
--- incubator/falcon/site/0.6-incubating/FalconDocumentation.html (added)
+++ incubator/falcon/site/0.6-incubating/FalconDocumentation.html Sat Dec  6 
06:11:41 2014
@@ -0,0 +1,632 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2014-12-05
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20141205" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Contents</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+    
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                                  <a href="../../index.html" 
id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Falcon" width="200px" 
height="45px"/>
+                </a>
+                      </div>
+        <div class="pull-right">                  <a 
href="http://incubator.apache.org"; id="bannerRight">
+                                                                               
                 <img src="images/apache-incubator-logo.png"  alt="Apache 
Incubator"/>
+                </a>
+      </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Home">
+        Home</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Contents</li>
+        
+                
+                    
+      
+                                              
+    <li class="pull-right">              <a 
href="http://s.apache.org/falcon-0.5-release-notes"; class="externalLink" 
title="Released: 2014-09-22">
+        Released: 2014-09-22</a>
+  </li>
+
+        <li class="divider pull-right">|</li>
+      
+    <li class="pull-right">              <a 
href="http://www.apache.org/dist/incubator/falcon"; class="externalLink" 
title="0.5-incubating">
+        0.5-incubating</a>
+  </li>
+
+                        </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h3>Contents<a name="Contents"></a></h3>
+<p></p>
+<ul>
+<li><a href="#Architecture">Architecture</a></li>
+<li><a href="#Control_flow">Control flow</a></li>
+<li><a href="#Modes_Of_Deployment">Modes Of Deployment</a></li>
+<li><a href="#Entity_Management_actions">Entity Management actions</a></li>
+<li><a href="#Instance_Management_actions">Instance Management actions</a></li>
+<li><a href="#Retention">Retention</a></li>
+<li><a href="#Replication">Replication</a></li>
+<li><a href="#Cross_entity_validations">Cross entity validations</a></li>
+<li><a href="#Updating_process_and_feed_definition">Updating process and feed 
definition</a></li>
+<li><a href="#Handling_late_input_data">Handling late input data</a></li>
+<li><a href="#Idempotency">Idempotency</a></li>
+<li><a href="#Falcon_EL_Expressions">Falcon EL Expressions</a></li>
+<li><a href="#Lineage">Lineage</a></li>
+<li><a href="#Security">Security</a></li>
+<li><a href="#Recipes">Recipes</a></li>
+<li><a href="#Monitoring">Monitoring</a></li>
+<li><a href="#Backwards_Compatibility">Backwards Compatibility 
Instructions</a></li></ul></div>
+<div class="section">
+<h3>Architecture<a name="Architecture"></a></h3></div>
+<div class="section">
+<h4>Introduction<a name="Introduction"></a></h4>
+<p>Falcon is a feed and process management platform over hadoop. Falcon 
+essentially transforms the user's feed and process configurations into repeated 
+actions through a standard workflow engine. Falcon by itself doesn't do any 
+heavy lifting: all the functions and workflow state management requirements are 
+delegated to the workflow scheduler. The only things that Falcon maintains are 
+the dependencies and relationships between these entities. This is adequate to 
+provide an integrated and seamless experience to developers using the falcon 
+platform.</p></div>
+<div class="section">
+<h4>Falcon Architecture - Overview<a 
name="Falcon_Architecture_-_Overview"></a></h4>
+<p><img src="Architecture.png" alt="" /></p></div>
+<div class="section">
+<h4>Scheduler<a name="Scheduler"></a></h4>
+<p>Falcon has picked Oozie as the default scheduler; however, the system is open 
+for integration with other schedulers. A lot of the data processing in hadoop 
+requires scheduling based on both data availability and time. Oozie currently 
+supports these capabilities off the shelf, hence the choice.</p></div>
+<div class="section">
+<h4>Control flow<a name="Control_flow"></a></h4>
+<p>Though the actual responsibility for the workflow lies with the scheduler 
+(Oozie), Falcon remains in the execution path by subscribing to messages that 
+each of the workflows may generate. When Falcon generates a workflow in Oozie, 
+it does so after instrumenting the workflow with additional steps, which include 
+messaging via JMS. The Falcon system itself subscribes to these control messages 
+and can perform actions such as retries, handling late input arrival, etc.</p></div>
+<div class="section">
+<h5>Feed Schedule flow<a name="Feed_Schedule_flow"></a></h5>
+<p><img src="FeedSchedule.png" alt="" /></p></div>
+<div class="section">
+<h5>Process Schedule flow<a name="Process_Schedule_flow"></a></h5>
+<p><img src="ProcessSchedule.png" alt="" /></p></div>
+<div class="section">
+<h3>Modes Of Deployment<a name="Modes_Of_Deployment"></a></h3>
+<p>A Falcon set-up has two basic components: Falcon Prism and Falcon Server. As 
+the name suggests, Falcon Prism splits the requests it receives across the 
+Falcon Servers. More details below:</p></div>
+<div class="section">
+<h4>Stand Alone Mode<a name="Stand_Alone_Mode"></a></h4>
+<p>Stand alone mode is useful when the hadoop jobs and the relevant data 
+processing involve only one hadoop cluster. In this mode there is a single 
+Falcon server that communicates with oozie to schedule jobs on Hadoop. All 
+process/feed requests like submit, schedule, suspend and kill are sent to this 
+server only. For running in this mode one should use the falcon build for 
+standalone mode, or build using the standalone option if building from source 
+code.</p></div>
+<div class="section">
+<h4>Distributed Mode<a name="Distributed_Mode"></a></h4>
+<p>Distributed mode is the mode you will likely be using most of the time. It is 
+for organisations which have multiple instances of hadoop clusters, and multiple 
+workflow schedulers to handle them. Here we have 2 components: Prism and Server. 
+Both Prism and Server have their own setup (runtime and startup properties) and 
+their own config locations. In this mode Prism acts as a contact point for the 
+Falcon servers. Below are the requests that can be sent to prism and server in 
+this mode:</p>
+<ul>
+<li>Prism: submit, schedule, submitAndSchedule, suspend, resume, kill, instance 
+management</li>
+<li>Server: schedule, suspend, resume, instance management</li></ul>
+<p>As observed above, submit and kill are kept exclusively as Prism operations 
+to keep all the config stores in sync and to support the feature of idempotency. 
+A request may also be sent from prism but directed to a specific server using 
+the &quot;-colo&quot; option from the CLI, or by appending the same to the web 
+request if using the API.</p>
+<p>When a cluster is submitted it is by default sent to all the servers 
+configured in the prism. When a feed is submitted/scheduled, the request is only 
+sent to the servers specified in the feed/process definitions. Servers are 
+referenced in the feed/process via CLUSTER tags in the xml definition.</p>
+<p>Communication between the prism and falcon server (for the submit/update 
+entity functions) is secured over https using client-certificate based auth. The 
+prism server needs to present a valid client certificate for the falcon server 
+to accept the action.</p>
+<p>The startup property file on both the falcon &amp; prism servers needs the 
+following configuration if TLS is enabled:</p>
+<ul>
+<li>keystore.file</li>
+<li>keystore.password</li></ul></div>
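+<p>A minimal sketch of the corresponding startup.properties entries (the path 
+and password values are illustrative placeholders):</p>
+<div class="source">
+<pre>
+# startup.properties -- on both prism and falcon server, when TLS is enabled
+keystore.file=/etc/falcon/conf/prism.keystore
+keystore.password=falcon-keystore-password
+
+</pre></div>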
+<div class="section">
+<h5>Prism Setup<a name="Prism_Setup"></a></h5>
+<p><img src="PrismSetup.png" alt="" /></p></div>
+<div class="section">
+<h4>Configuration Store<a name="Configuration_Store"></a></h4>
+<p>The configuration store is a file system based store that the Falcon system 
+maintains and where the entity definitions are stored. The file system used for 
+the configuration store can be either a local file system or HDFS. It is 
+recommended that the store be maintained outside of the system where Falcon is 
+deployed, to handle issues relating to disk failures or other permanent failures 
+of that system. The configuration store also maintains an archive location where 
+prior versions of the configuration or deleted configurations are kept. They are 
+never accessed by the Falcon system and merely serve to track historical changes 
+to the entity definitions.</p></div>
+<div class="section">
+<h4>Atomic Actions<a name="Atomic_Actions"></a></h4>
+<p>Oftentimes, when Falcon performs entity management actions, it may need to do 
+several individual actions. If one of the actions were to fail, the system could 
+be left in an inconsistent state. To avoid this, all individual operations 
+performed are recorded in a transaction journal. This journal is then used to 
+undo the overall user action. In some cases, it is not possible to undo the 
+action; in such cases, Falcon attempts to keep the system in a consistent 
+state.</p></div>
+<div class="section">
+<h4>Storage<a name="Storage"></a></h4>
+<p>Falcon introduces a new abstraction to encapsulate the storage for a given 
+feed, which can be expressed either as a path on the file system (File System 
+Storage) or as a table in a catalog such as Hive (Catalog Storage).</p>
+<div class="source">
+<pre>
+    &lt;xs:choice minOccurs=&quot;1&quot; maxOccurs=&quot;1&quot;&gt;
+        &lt;xs:element type=&quot;locations&quot; 
name=&quot;locations&quot;/&gt;
+        &lt;xs:element type=&quot;catalog-table&quot; 
name=&quot;table&quot;/&gt;
+    &lt;/xs:choice&gt;
+
+</pre></div>
+<p>A feed should contain one of the two storage options: locations on a file 
+system, or a table in a catalog.</p></div>
+<div class="section">
+<h5>File System Storage<a name="File_System_Storage"></a></h5>
+<p>This is expressed as a location on the file system. The location specifies 
+where the feed is available on this cluster. A location tag specifies the type 
+of location, like data, meta or stats, and the corresponding paths for them. A 
+feed should at least define the location for type data, which specifies the 
+HDFS path pattern where the feed is generated periodically, e.g. 
+type=&quot;data&quot; 
+path=&quot;/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic&quot;. The 
+granularity of the date pattern in the path should be at least that of the 
+frequency of the feed.</p>
+<div class="source">
+<pre>
+ &lt;location type=&quot;data&quot; path=&quot;/projects/falcon/clicks&quot; 
/&gt;
+ &lt;location type=&quot;stats&quot; 
path=&quot;/projects/falcon/clicksStats&quot; /&gt;
+ &lt;location type=&quot;meta&quot; 
path=&quot;/projects/falcon/clicksMetaData&quot; /&gt;
+
+</pre></div></div>
+<div class="section">
+<h5>Catalog Storage (Table)<a name="Catalog_Storage_Table"></a></h5>
+<p>A table tag specifies the table URI in the catalog registry as:</p>
+<div class="source">
+<pre>
+catalog:$database-name:$table-name#(partition-key=partition-value;)*
+
+</pre></div>
+<p>This is modeled as a URI (similar to an ISBN URI). It does not have any 
+reference to Hive or HCatalog. It's quite generic, so it can be tied to other 
+implementations of a catalog registry. The catalog implementation specified in 
+the startup config provides the implementation for the catalog URI.</p>
+<p>The top-level partition has to be a dated pattern, and the granularity of the 
+date pattern should be at least that of the frequency of the feed.</p>
+<p>Examples:</p>
+<div class="source">
+<pre>
+&lt;table 
uri=&quot;catalog:default:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR};region=${region}&quot;
 /&gt;
+&lt;table 
uri=&quot;catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;
 /&gt;
+&lt;table 
uri=&quot;catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;
 /&gt;
+
+</pre></div></div>
+<div class="section">
+<h3>Entity Management actions<a name="Entity_Management_actions"></a></h3>
+<p>All of the following operations can also be done using <a 
+href="./Restapi/ResourceList.html">Falcon's RESTful API</a>.</p></div>
+<div class="section">
+<h4>Submit<a name="Submit"></a></h4>
+<p>The entity submit action allows a new cluster/feed/process to be set up 
+within Falcon. A submitted entity is not scheduled, meaning it simply resides in 
+the configuration store within Falcon. Besides validating against the schema for 
+the corresponding entity being added, the Falcon system also performs 
+inter-field validations within the configuration file and validations across 
+dependent entities.</p></div>
+<div class="section">
+<h4>List<a name="List"></a></h4>
+<p>List all the entities within the falcon config store for the entity type 
being requested. This will include both scheduled and submitted entity 
configurations.</p></div>
+<div class="section">
+<h4>Dependency<a name="Dependency"></a></h4>
+<p>Returns the dependencies of the requested entity. The dependency list 
+includes both forward and backward dependencies (depends on &amp; is dependent 
+on). For example, a feed would show the processes that are dependent on the feed 
+and the clusters that it depends on.</p></div>
+<div class="section">
+<h4>Schedule<a name="Schedule"></a></h4>
+<p>Feeds or processes that are already submitted and present in the config store 
+can be scheduled. Upon schedule, the Falcon system wraps the required repeatable 
+action as a bundle of oozie coordinators and executes them on the Oozie 
+scheduler. (It is possible to extend Falcon to use an alternate workflow engine 
+other than Oozie.) Falcon overrides the workflow instance's external id in Oozie 
+to reflect the process/feed and the nominal time. This external id can then be 
+used for instance management functions.</p>
+<p>The schedule operation copies the user-specified workflow and library to a 
+staging path, and the scheduler references the workflow and lib from the staging 
+path.</p></div>
+<div class="section">
+<h4>Suspend<a name="Suspend"></a></h4>
+<p>This action is applicable only to a scheduled entity. It triggers suspend on 
+the oozie bundle that was scheduled earlier through the schedule function. No 
+further instances are executed on a suspended process/feed.</p></div>
+<div class="section">
+<h4>Resume<a name="Resume"></a></h4>
+<p>Puts a suspended process/feed back to active, which in turn resumes the 
+applicable oozie bundle.</p></div>
+<div class="section">
+<h4>Status<a name="Status"></a></h4>
+<p>Gets the current status of the entity.</p></div>
+<div class="section">
+<h4>Definition<a name="Definition"></a></h4>
+<p>Gets the current entity definition as stored in the configuration store. 
+Please note that user documentation in the entity will not be retained.</p></div>
+<div class="section">
+<h4>Delete<a name="Delete"></a></h4>
+<p>Delete operation on the entity removes any scheduled activity on the 
workflow engine, besides removing the entity from the falcon configuration 
store. Delete operation on an entity would only succeed if there are no 
dependent entities on the deleted entity.</p></div>
+<div class="section">
+<h4>Update<a name="Update"></a></h4>
+<p>The update operation allows an already submitted/scheduled entity to be 
+updated. Cluster update is currently not allowed. A feed update can cause a 
+cascading update to all the processes already scheduled. A process update 
+triggers an update in falcon if the entity is updated or the user-specified 
+workflow/lib is updated. The following set of actions is performed in Oozie to 
+realize an update:</p>
+<ul>
+<li>Suspend the previously scheduled Oozie coordinator. This is to prevent any 
new action from being triggered.</li>
+<li>Update the coordinator to set the end time to &quot;now&quot;</li>
+<li>Resume the suspended coordinators</li>
+<li>Schedule as per the new process/feed definition with the start time as 
&quot;now&quot;</li></ul>
+<p>Update optionally takes an effective time as a parameter, which is used as 
+the end time of the previously scheduled coordinator, so the updated 
+configuration will be effective from the given timestamp.</p></div>
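+<p>A hedged example of an update with an effective time, mirroring the CLI usage 
+shown earlier (the entity name and timestamp are illustrative):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -type process -name sampleProcess -update \
+    -effective &quot;2014-05-05T05:00Z&quot; -file /process/definition.xml
+
+</pre></div>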
+<div class="section">
+<h3>Instance Management actions<a name="Instance_Management_actions"></a></h3>
+<p>The Instance Manager gives the user the option to control individual 
+instances of a process based on their instance start time (the start time of 
+that instance). The start time needs to be given in standard TZ format. 
+Example: 01 Jan 2012 01:00 =&gt; 2012-01-01T01:00Z</p>
+<p>All the instance management operations (except running) allow a single 
+instance or a list of instances within a date range to be acted on. Make sure 
+the dates are valid, i.e. within the start and end time of the process 
+itself.</p>
+<p>For every query in instance management, the process name is a compulsory 
+parameter.</p>
+<p>Parameters -start and -end are used to specify the date range within which 
+you want the instances to be operated upon.</p>
+<p>-start: using only &quot;-start&quot; without &quot;-end&quot; will conduct 
+the desired operation only on the single instance given by the start date.</p>
+<p>-end: &quot;-end&quot; can only be used along with &quot;-start&quot;. It 
+corresponds to the end date up to which instances need to be operated upon.</p>
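+<p>A hedged illustration of the two forms (the process name and dates are 
+illustrative):</p>
+<div class="source">
+<pre>
+# acts on the single instance with nominal time 2012-01-01T01:00Z
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -status \
+    -start &quot;2012-01-01T01:00Z&quot;
+
+# acts on all instances in the given date range
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -status \
+    -start &quot;2012-01-01T01:00Z&quot; -end &quot;2012-01-02T01:00Z&quot;
+</pre></div>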
+<p></p>
+<ul>
+<li>1. <b>status</b>: the -status option via CLI can be used to get the status 
+of a single instance or multiple instances. If the instance is not yet 
+materialized but is within the process validity range, WAITING is returned as 
+the state. Along with the status of the instance, the log location is also 
+returned.</li></ul>
+<p></p>
+<ul>
+<li>2. <b>running</b>: -running returns all the running instances of the 
+process. It does not take any start or end dates but simply returns all the 
+instances in state RUNNING at that given time.</li></ul>
+<p></p>
+<ul>
+<li>3. <b>rerun</b>: -rerun is the option that you will use most often from 
+instance management. As the name suggests, this option is used to rerun a 
+particular instance or instances of the process. The rerun option reruns the 
+parent workflow for the instance, which in turn reruns all the sub-workflows for 
+it. This option is valid for any instance in a terminal state, i.e. KILLED, 
+SUCCEEDED or FAILED. The user can also set properties in the request to choose 
+what types of actions should be rerun, e.g. only failed, run all, etc. These 
+properties depend on the workflow engine being used along with 
+falcon.</li></ul>
+<p></p>
+<ul>
+<li>4. <b>suspend</b>: -suspend is used to suspend an instance or instances of 
+the given process. This option pauses the parent workflow in the state it was in 
+at the time of execution of this command. This command is similar to the SUSPEND 
+process command in functionality, the only difference being that SUSPEND process 
+suspends all instances whereas suspend instance suspends only that instance or 
+the instances in the range.</li></ul>
+<p></p>
+<ul>
+<li>5. <b>resume</b>: the -resume option is used to resume any instance that is 
+in the suspended state. (Note: due to a bug in oozie, the resume option in some 
+cases may not actually resume the suspended instance/instances.)</li>
+<li>6. <b>kill</b>: the -kill option can be used to kill an instance or multiple 
+instances.</li></ul>
+<p></p>
+<ul>
+<li>7. <b>summary</b>: the -summary option via CLI can be used to get the 
+consolidated status of the instances within the specified time period. Each 
+status, along with the corresponding instance count, is listed for each of the 
+applicable colos.</li></ul>
+<p>In all the cases where your request is syntactically correct but logically 
+not, the instance/instances are returned with the same status as before. 
+Example: trying to resume a KILLED / SUCCEEDED instance will return the instance 
+with status KILLED / SUCCEEDED, without actually performing any operation. This 
+is because only an instance in the SUSPENDED state can be resumed. The same 
+holds for rerunning an instance in a SUSPENDED or RUNNING state, etc.</p></div>
+<div class="section">
+<h3>Retention<a name="Retention"></a></h3>
+<p>In coherence with its feed lifecycle management philosophy, Falcon allows the 
+user to retain data in the system for a specific period of time for a scheduled 
+feed. The user can specify the retention period in the respective feed/data xml 
+in the following manner for each cluster the feed can belong to:</p>
+<div class="source">
+<pre>
+&lt;clusters&gt;
+        &lt;cluster name=&quot;corp&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2012-01-30T00:00Z&quot; 
end=&quot;2013-03-31T23:59Z&quot;
+                      timezone=&quot;UTC&quot; /&gt;
+            &lt;retention limit=&quot;hours(10)&quot; 
action=&quot;delete&quot; /&gt; 
+        &lt;/cluster&gt;
+ &lt;/clusters&gt; 
+
+</pre></div>
+<p>The 'limit' attribute can be specified in units of minutes/hours/days/months, 
+and a corresponding numeric value can be attached to it. It essentially 
+instructs the system to retain data from the current moment back to the time 
+specified by the attribute. Any data beyond the limit (past/future) is erased 
+from the system.</p>
+<p>With the integration of Hive, Falcon also provides retention for tables in 
Hive catalog.</p></div>
+<div class="section">
+<h4>Example:<a name="Example:"></a></h4>
+<p>If the retention period is 10 hours, and the policy kicks in at time 't', the 
+data retained by the system is essentially that in the range [t-10h, t]. Any 
+data before t-10h and after t is removed from the system.</p>
+<p>The 'action' attribute can take the values DELETE/ARCHIVE. Based on the tag 
+value, the data eligible for removal is either deleted or archived.</p></div>
+<div class="section">
+<h4>NOTE: Falcon 0.1/0.2 releases support Delete operation only<a 
name="NOTE:_Falcon_0.10.2_releases_support_Delete_operation_only"></a></h4></div>
+<div class="section">
+<h4>When does retention policy come into play, aka when is retention really 
performed?<a 
name="When_does_retention_policy_come_into_play_aka_when_is_retention_really_performed"></a></h4>
+<p>Retention policy in Falcon kicks off on the basis of the time value 
specified by the user. Here are the basic rules:</p>
+<p></p>
+<ul>
+<li>If the retention policy specified is less than 24 hours: In this event, 
the retention policy automatically kicks off every 6 hours.</li>
+<li>If the retention policy specified is more than 24 hours: In this event, 
the retention policy automatically kicks off every 24 hours.</li>
+<li>As soon as a feed is successfully scheduled: the retention policy is 
triggered immediately regardless of the current timestamp/state of the 
system.</li></ul>
+<p>Relation between feed path and retention policy: Retention policy for a 
particular scheduled feed applies only to the eligible feed path specified in 
the feed xml. Any other paths that do not conform to the specified feed path 
are left unaffected by the retention policy.</p></div>
+<div class="section">
+<h3>Replication<a name="Replication"></a></h3>
+<p>Falcon's feed lifecycle management also supports Feed replication across 
different clusters out-of-the-box. Multiple source clusters and target clusters 
can be defined in feed definition. Falcon replicates the data using hadoop's 
distcp version 2 across different clusters whenever a feed is scheduled.</p>
+<p>The frequency at which the data is replicated is governed by the frequency 
+specified in the feed definition. Ideally, the feed's data path should have the 
+same granularity as the frequency of the feed, i.e. if the frequency of the feed 
+is hours(3), then the data path should be to the level 
+/${YEAR}/${MONTH}/${DAY}/${HOUR}.</p>
+<div class="source">
+<pre>
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;sourceCluster1&quot; type=&quot;source&quot; 
partition=&quot;${cluster.name}&quot; delay=&quot;minutes(40)&quot;&gt;
+            &lt;validity start=&quot;2021-11-01T00:00Z&quot; 
end=&quot;2021-12-31T00:00Z&quot;/&gt;
+        &lt;/cluster&gt;
+        &lt;cluster name=&quot;sourceCluster2&quot; type=&quot;source&quot; 
partition=&quot;COUNTRY/${cluster.name}&quot;&gt;
+            &lt;validity start=&quot;2021-11-01T00:00Z&quot; 
end=&quot;2021-12-31T00:00Z&quot;/&gt;
+        &lt;/cluster&gt;
+        &lt;cluster name=&quot;backupCluster&quot; type=&quot;target&quot;&gt;
+            &lt;validity start=&quot;2011-11-01T00:00Z&quot; 
end=&quot;2011-12-31T00:00Z&quot;/&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+</pre></div>
+<p>If more than one source cluster is defined, then a partition expression is 
+compulsory; a partition can also have a constant. The expression is required to 
+avoid copying data from different source locations to the same target location. 
+Also, only the data in the partition is considered for replication, if present. 
+The number of partitions defined in the cluster should be less than or equal to 
+the number of partitions declared in the feed definition.</p>
+<p>Falcon uses a pull-based replication mechanism, meaning that in every target 
+cluster, for a given source cluster, a coordinator is scheduled which pulls the 
+data from the source cluster using distcp. So in the above example, 2 
+coordinators are scheduled in backupCluster, one which pulls the data from 
+sourceCluster1 and another from sourceCluster2. Also, for every feed instance 
+which is replicated, Falcon sends a JMS message on success or failure of the 
+replication instance.</p>
+<p>Replication can be scheduled with a past date; the time frame considered for 
+replication is the minimum overlapping window of the start and end times of the 
+source and target clusters. E.g. if s1 and e1 are the start and end times of the 
+source cluster respectively, and s2 and e2 those of the target cluster, then the 
+coordinator is scheduled in the target cluster with start time max(s1,s2) and 
+end time min(e1,e2).</p>
+<p>A feed can also optionally specify the delay for replication instances in the 
+cluster tag; the delay governs the replication instance delays. If the frequency 
+of the feed is hours(2) and the delay is hours(1), then the replication instance 
+will run every 2 hours and replicate data with an offset of 1 hour, i.e. at 
+09:00 UTC the feed instance eligible for replication is 08:00, and at 11:00 UTC 
+the feed instance of 10:00 UTC is eligible, and so on.</p></div>
+<div class="section">
+<h4>Where is the feed path defined for File System Storage?<a 
name="Where_is_the_feed_path_defined_for_File_System_Storage"></a></h4>
+<p>It's defined in the feed xml within the location tag.</p>
+<p><b>Example:</b></p>
+<div class="source">
+<pre>
+&lt;locations&gt;
+        &lt;location type=&quot;data&quot; 
path=&quot;/retention/testFolders/${YEAR}-${MONTH}-${DAY}&quot; /&gt;
+&lt;/locations&gt;
+
+</pre></div>
+<p>Now, if the above path contains folders in the following fashion:</p>
+<div class="source">
+<pre>
+/retention/testFolders/${YEAR}-${MONTH}-${DAY}
+/retention/testFolders/${YEAR}-${MONTH}/someFolder
+
+</pre></div>
+<p>The feed retention policy would only act on the former and not the 
latter.</p>
+<p>Users may choose to override the feed path specific to a cluster, so every 
cluster may have a different feed path. <b>Example:</b></p>
+<div class="source">
+<pre>
+&lt;clusters&gt;
+        &lt;cluster name=&quot;testCluster&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2011-11-01T00:00Z&quot; 
end=&quot;2011-12-31T00:00Z&quot;/&gt;
+                       &lt;locations&gt;
+                       &lt;location type=&quot;data&quot; 
path=&quot;/projects/falcon/clicks/${YEAR}-${MONTH}-${DAY}&quot; /&gt;
+                       &lt;location type=&quot;stats&quot; 
path=&quot;/projects/falcon/clicksStats/${YEAR}-${MONTH}-${DAY}&quot; /&gt;
+                       &lt;location type=&quot;meta&quot; 
path=&quot;/projects/falcon/clicksMetaData/${YEAR}-${MONTH}-${DAY}&quot; /&gt;
+               &lt;/locations&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Hive Table Replication<a name="Hive_Table_Replication"></a></h4>
+<p>With the integration of Hive, Falcon adds table replication of Hive catalog 
tables. Replication will be triggered for a partition when the partition is 
complete at the source.</p>
+<p></p>
+<ul>
+<li>Falcon will use the HCatalog (Hive) API to export the data for a given table 
+and partition, which will result in a data collection that includes metadata on 
+the data's storage format, the schema, how the data is sorted, what table the 
+data came from, and values of any partition keys from that table.</li>
+<li>Falcon will use the distcp tool to copy the exported data collection into 
+the secondary cluster, into a staging directory used by Falcon.</li>
+<li>Falcon will then import the data into HCatalog (Hive) using the HCatalog 
+(Hive) API. If the specified table does not yet exist, Falcon will create it, 
+using the information in the imported metadata to set defaults for the table 
+such as schema, storage format, etc.</li>
+<li>The partition is not complete and hence not visible to users until all the 
+data is committed on the secondary cluster (no dirty reads).</li></ul></div>
+<div class="section">
+<h4>Archival as Replication<a name="Archival_as_Replication"></a></h4>
+<p>Falcon allows users to archive data from on-premises to the cloud, either 
+Azure WASB or S3. It uses the underlying replication for archiving data from 
+source to target. The archival URI is specified as the overridden location for 
+the target cluster.</p>
+<p><b>Example:</b></p>
+<div class="source">
+<pre>
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;on-premise-cluster&quot; 
type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2021-11-01T00:00Z&quot; 
end=&quot;2021-12-31T00:00Z&quot;/&gt;
+        &lt;/cluster&gt;
+        &lt;cluster name=&quot;cloud-cluster&quot; type=&quot;target&quot;&gt;
+            &lt;validity start=&quot;2011-11-01T00:00Z&quot; 
end=&quot;2011-12-31T00:00Z&quot;/&gt;
+            &lt;locations&gt;
+                &lt;location type=&quot;data&quot;
+                          
path=&quot;wasb://t...@blah.blob.core.windows.net/data/${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;/&gt;
+            &lt;/locations&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Relation between feed's retention limit and feed's late arrival cut off 
period:<a 
name="Relation_between_feeds_retention_limit_and_feeds_late_arrival_cut_off_period:"></a></h4>
+<p>For obvious reasons, Falcon has an external validation that ensures that the 
+user always specifies the feed retention limit to be more than the feed's 
+allowed late arrival period. If this rule is violated by the user, the feed 
+submission call itself throws back an error.</p></div>
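+<p>A hedged feed xml fragment illustrating the rule (the element names follow 
+the feed schema used elsewhere in these docs; the values are illustrative): the 
+retention limit of hours(10) exceeds the late arrival cut-off of hours(6), so 
+submission passes the validation.</p>
+<div class="source">
+<pre>
+&lt;late-arrival cut-off=&quot;hours(6)&quot;/&gt;
+...
+&lt;retention limit=&quot;hours(10)&quot; action=&quot;delete&quot;/&gt;
+
+</pre></div>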
+<div class="section">
+<h3>Cross entity validations<a name="Cross_entity_validations"></a></h3></div>
+<div class="section">
+<h4>Entity Dependencies in a nutshell<a 
name="Entity_Dependencies_in_a_nutshell"></a></h4>
+<p><img src="EntityDependency.png" alt="" /></p>
+<p>The above schematic shows the dependencies between entities in Falcon. The 
+arrow in the above diagram points from a dependency to the dependent.</p>
+<p>Let's just get one simple rule stated here, which we will keep referring to 
+time and again while talking about entities: a dependency in the system cannot 
+be removed unless all its dependents are removed first. This holds true for all 
+transitive dependencies as well.</p>
+<p>Now, let's follow it up with a simple illustration of a Falcon job:</p>
+<p>Let's consider a process P that refers to feed F1 as an input feed, and 
+generates feed F2 as an output feed. These feeds/processes are supposed to be 
+associated with a cluster C1.</p>
+<p>The order of submission of this job would be the following:</p>
+<p>C1-&gt;F1/F2(in any order)-&gt;P</p>
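+<p>A hedged sketch of that submission order using the CLI (the file paths are 
+illustrative):</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file C1.xml
+$FALCON_HOME/bin/falcon entity -submit -type feed    -file F1.xml
+$FALCON_HOME/bin/falcon entity -submit -type feed    -file F2.xml
+$FALCON_HOME/bin/falcon entity -submit -type process -file P.xml
+
+</pre></div>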
+<p>The order of removal of this job from the system is in the exact opposite 
order, i.e.:</p>
+<p>P-&gt;F1/F2(in any order)-&gt;C1</p>
+<p>Please note that there might be multiple processes referring to a particular 
+feed, or a single feed belonging to multiple clusters. In that event, none of 
+the dependencies can be removed unless ALL of their dependents are removed 
+first. Attempting to do so will result in an error message and a 400 Bad 
+Request response.</p></div>
+<div class="section">
+<h4>Other cross validations between entities in Falcon system<a 
name="Other_cross_validations_between_entities_in_Falcon_system"></a></h4>
+<p><b>Cluster-Feed Cross validations:</b></p>
+<p></p>
+<ul>
+<li>The cluster(s) referenced by a feed (inside the &lt;clusters&gt; tag) should be present in the system at the time of submission. Any exception to this results in a feed submission failure. Note that a feed might refer to more than a single cluster; the identifier for the same is the 'name' attribute of the individual cluster.</li></ul>
+<p><b>Example:</b></p>
+<p><b>Feed XML:</b></p>
+<div class="source">
+<pre>
+   &lt;clusters&gt;
+        &lt;cluster name=&quot;corp&quot; type=&quot;source&quot;&gt;
+            &lt;validity start=&quot;2009-01-01T00:00Z&quot; 
end=&quot;2012-12-31T23:59Z&quot;
+                      timezone=&quot;UTC&quot; /&gt;
+            &lt;retention limit=&quot;months(6)&quot; 
action=&quot;delete&quot; /&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+
+</pre></div>
+<p><b>Cluster corp's XML:</b></p>
+<div class="source">
+<pre>
+&lt;cluster colo=&quot;gs&quot; description=&quot;&quot; name=&quot;corp&quot; 
xmlns=&quot;uri:falcon:cluster:0.1&quot; 
xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+
+</pre></div>
+<p><b>Cluster-Process Cross validations:</b></p>
+<p></p>
+<ul>
+<li>In a similar relationship to that of feed and cluster, a process also refers to the relevant cluster by the 'name' attribute. Any exception results in a process submission failure.</li></ul></div>
+<div class="section">
+<h4>Example:<a name="Example:"></a></h4></div>
+<div class="section">
+<h4>Process XML:<a name="Process_XML:"></a></h4>
+<div class="source">
+<pre>
+&lt;process name=&quot;agregator-coord16&quot;&gt;
+    &lt;cluster name=&quot;corp&quot;/&gt;....
+
+</pre></div></div>
+<div class="section">
+<h4>Cluster corp's XML:<a name="Cluster_corps_XML:"></a></h4>
+<div class="source">
+<pre>
+&lt;cluster colo=&quot;gs&quot; description=&quot;&quot; name=&quot;corp&quot; 
xmlns=&quot;uri:falcon:cluster:0.1&quot; 
xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+
+</pre></div>
+<p><b>Feed-Process Cross Validations:</b></p>
+<p>1. The process &lt;input&gt; and feeds designated as input feeds for the 
job:</p>
+<p>For every feed referenced in the &lt;input&gt; tag in a process definition, the following rules are applied when the process is due for submission:</p>
+<p></p>
+<ul>
+<li>The feed referenced by the 'feed' attribute in the input tag should be present in the system. The corresponding attribute in the feed definition is the 'name' attribute in the &lt;feed&gt; tag.</li></ul>
+<p><b>Example:</b></p>
+<p><b>Process xml:</b></p>
+<div class="source">
+<pre>
+&lt;input end-instance=&quot;now(0,20)&quot; 
start-instance=&quot;now(0,-60)&quot;
+feed=&quot;raaw-logs16&quot; name=&quot;inputData&quot;/&gt;
+
+</pre></div>
+<p><b>Feed xml:</b></p>
+<div class="source">
+<pre>
+&lt;feed description=&quot;clicks log&quot; name=&quot;raaw-logs16&quot;....
+
+</pre></div>
+<p></p>
+<ul>
+<li>The time interpretation for the corresponding tags indicating the start and end instances for a particular input feed in the process xml should lie well within the time span of the period specified in the &lt;validity&gt; tag of the particular feed.</li></ul>
+<p><b>Example:</b></p>
+<p>1. In the following scenario, process submission will result in an 
error:</p>
+<p><b>Process XML:</b></p>
+<div class="source">
+<pre>
+&lt;input end-instance=&quot;now(0,20)&quot; 
start-instance=&quot;now(0,-60)&quot;
+   feed=&quot;raaw-logs16&quot; name=&quot;inputData&quot;/&gt;
+
+</pre></div>
+<p><b>Feed XML:</b></p>
+<div class="source">
+<pre>
+&lt;validity start=&quot;2009-01-01T00:00Z&quot; 
end=&quot;2009-12-31T23:59Z&quot;.....
+
+</pre></div>
+<p>Explanation: The process input for the feed spans an 80 minute interval, [-60m, +20m], around the instance timestamp (which let's assume is 'today', as per the 'now' directive). However, the feed validity is a 1 year period in 2009, which makes the reference anachronistic.</p>
+<p>2. The following example would work just fine:</p>
+<p><b>Process XML:</b></p>
+<div class="source">
+<pre>
+&lt;input end-instance=&quot;now(0,20)&quot; 
start-instance=&quot;now(0,-60)&quot;
+   feed=&quot;raaw-logs16&quot; name=&quot;inputData&quot;/&gt;
+
+</pre></div>
+<p><b>Feed XML:</b></p>
+<div class="source">
+<pre>
+&lt;validity start=&quot;2009-01-01T00:00Z&quot; end=&quot;2012-12-31T23:59Z&quot; .......
+
+</pre></div>
+<p>This works since, at the time of writing this document (03/03/2012), the feed validity encapsulates the process input's start and end instances.</p>
+<p>Failure to follow any of the above rules would result in a process 
submission failure.</p>
+<p><b>NOTE:</b> Even though the above check ensures that the timelines are not anachronistic, if the input data is not present in the system for the specified time period, the process can still be submitted and scheduled, but all instances created will remain in a WAITING state until data is actually available in the cluster.</p></div>
+<div class="section">
+<h3>Updating process and feed definition<a 
name="Updating_process_and_feed_definition"></a></h3>
+<p>Any changes to a feed/process can be made by updating its definition. After the update, any new workflows scheduled after the update call will pick up the changes. The feed/process name and start time can't be updated. Updating a process updates the corresponding workflow in the workflow engine. Updating a feed updates the feed workflows, like retention and replication, and also updates the processes that reference the feed.</p>
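+<p>A minimal sketch of an update with the Falcon CLI (the entity and file names are illustrative):</p>
+<div class="source">
+<pre>
+falcon entity -type process -name sampleProcess -update -file updated-process.xml
+
+</pre></div></div>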
+<div class="section">
+<h3>Handling late input data<a name="Handling_late_input_data"></a></h3>
+<p>The Falcon system can handle late arrival of input data and appropriately re-trigger processing for the affected instance. From the perspective of late handling, two main configuration parameters are central: the late-arrival cut-off and the late-inputs section in the feed and process entity definitions. These configurations govern how and when the late processing happens. In the current implementation (Oozie based) the late handling is very simple and basic: the Falcon system looks at all dependent input feeds for a process and computes the max late cut-off period. It then uses a scheduled messaging framework, like the one available in Apache ActiveMQ or Java's DelayQueue, to schedule a message with that cut-off period. After the cut-off period the message is dequeued and Falcon checks for changes in the feed data, which is recorded in HDFS in a latedata file by Falcon's &quot;record-size&quot; action; if it detects any changes, the workflow is rerun with the new set of feed data.</p>
+<p><b>Example:</b> The late rerun policy can be configured in the process definition. Falcon supports 3 policies: periodic, exp-backoff and final. Delay specifies how often the feed data should be checked for changes; one also needs to explicitly set, in late-input, the feed names that need to be checked for late data.</p>
+<div class="source">
+<pre>
+  &lt;late-process policy=&quot;exp-backoff&quot; 
delay=&quot;hours(1)&quot;&gt;
+        &lt;late-input input=&quot;impression&quot; 
workflow-path=&quot;hdfs://impression/late/workflow&quot; /&gt;
+        &lt;late-input input=&quot;clicks&quot; 
workflow-path=&quot;hdfs://clicks/late/workflow&quot; /&gt;
+   &lt;/late-process&gt;
+
+</pre></div>
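+<p>The late-arrival cut-off itself is declared on the feed side. A minimal sketch for the 'impression' feed referenced above (the eight-hour value is illustrative):</p>
+<div class="source">
+<pre>
+&lt;feed name=&quot;impression&quot; ...&gt;
+    &lt;late-arrival cut-off=&quot;hours(8)&quot;/&gt;
+    ...
+&lt;/feed&gt;
+
+</pre></div>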
+<p><b>NOTE:</b> Feeds configured with table storage do not support late input data handling at this point. This will be made available in the near future.</p></div>
+<div class="section">
+<h3>Idempotency<a name="Idempotency"></a></h3>
+<p>All operations in Falcon are idempotent: if you make the same request to the Falcon server / prism again, you will get a SUCCESSFUL response if the request was SUCCESSFUL on the first attempt. For example, you submit a new process / feed and get a SUCCESSFUL message in return. If you now run the same command / API request on the same entity, you will again get a SUCCESSFUL message. The same holds for other operations like schedule, kill, suspend and resume. Idempotency also takes care of the condition where a request is sent through prism and fails on one or more servers. For example, say prism is configured to send requests to 3 servers. First a user sends a request to SUBMIT a process on all 3 of them and receives a SUCCESSFUL response from each. Then, due to some issue, one of the servers goes down, and the user sends a request to schedule the submitted process. This time the user receives a response with PARTIAL status and a FAILURE message from the server that has gone down. On checking, the user will find that the process has been started and is running on the 2 SUCCESSFUL servers. Once the issue with the failed server is resolved and it is brought back up, sending the SCHEDULE request again through prism will result in a SUCCESSFUL response from prism as well as all three servers, but this time the process will be SCHEDULED only on the server that had failed earlier; the other two will keep running as before.</p>
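+<p>A sketch of idempotent behaviour at the CLI (the entity and file names are illustrative):</p>
+<div class="source">
+<pre>
+# first submission returns SUCCESSFUL
+falcon entity -type process -submit -file process.xml
+# re-running the identical request also returns SUCCESSFUL
+falcon entity -type process -submit -file process.xml
+
+</pre></div></div>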
+<div class="section">
+<h3>Falcon EL Expressions<a name="Falcon_EL_Expressions"></a></h3>
+<p>The Falcon expression language can be used in a process definition to give the start and end instances for various feeds.</p>
+<p>Before going into how to use Falcon EL expressions, it is necessary to understand what instance and instance start time refer to with respect to Falcon.</p>
+<p>Let's consider a part of a process definition below:</p>
+<div class="source">
+<pre>
+&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; 
standalone=&quot;yes&quot;?&gt;
+&lt;process name=&quot;testProcess&quot;&gt;
+    &lt;clusters&gt;
+        &lt;cluster name=&quot;corp&quot;&gt;
+            &lt;validity start=&quot;2010-01-02T01:00Z&quot; 
end=&quot;2011-01-03T03:00Z&quot; /&gt;
+        &lt;/cluster&gt;
+    &lt;/clusters&gt;
+   &lt;parallel&gt;2&lt;/parallel&gt;
+   &lt;order&gt;LIFO&lt;/order&gt;
+   &lt;timeout&gt;hours(3)&lt;/timeout&gt;
+   &lt;frequency&gt;minutes(30)&lt;/frequency&gt;
+
+  &lt;inputs&gt;
+ &lt;input end-instance=&quot;now(0,20)&quot; 
start-instance=&quot;now(0,-60)&quot;
+                       feed=&quot;input-log&quot; 
name=&quot;inputData&quot;/&gt;
+ &lt;/inputs&gt;
+&lt;outputs&gt;
+       &lt;output instance=&quot;now(0,0)&quot; feed=&quot;output-log&quot;
+               name=&quot;outputData&quot; /&gt;
+&lt;/outputs&gt;
+...
+...
+...
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>The above definition says that the process will start on the 2nd of Jan 2010 at 1 am and will end on the 3rd of Jan 2011 at 3 am on cluster corp. Also, the process will start a user-defined workflow (which we will call an instance) every 30 mins.</p>
+<p>This means that, starting at 2010-01-02T01:00Z, an instance will start every 30 mins and will run the user-defined workflow. Now if this workflow needs some input data and produces some output, the user needs to give that in the &lt;inputs&gt; and &lt;outputs&gt; tags. Since the inputs that the process takes can be distributed over a wide range, we set the limits by giving &quot;start&quot; and &quot;end&quot; instances for the input. The output is only one location, so only a single instance is given. The timeout specifies how long a given instance should wait for input data before being terminated by the workflow engine.</p>
+<p>Coming back to instance start time: since an instance will start every 30 mins starting 2010-01-02T01:00Z, the time it is scheduled to start is called its instance time. For example, the first few instance times for the above example are:</p>
+<div class="source">
+<pre>
+Instance Number    Instance Start Time
+1                  2010-01-02T01:00Z
+2                  2010-01-02T01:30Z
+3                  2010-01-02T02:00Z
+4                  2010-01-02T02:30Z
+.                  .
+.                  .
+</pre></div>
+<p>Now let's look at how to use the expression language. The only thing to keep in mind is that all EL evaluations are done based on the start time of that instance, and every instance will have different inputs / outputs based on the feed instances given in the process definition.</p>
+<p>The parameters in the various ELs can be positive, zero or negative. Positive values indicate that many units in the future, zero means the base time the EL has been resolved to, and negative values indicate the corresponding units in the past.</p>
+<p><b><i>Note: if no instance is created at the resolved time, then the 
instance immediately before it is considered.</i></b></p>
+<p>Falcon currently supports the following ELs (a worked summary of how they resolve follows the list):</p>
+<p></p>
+<ul>
+<li>1. <b>now(hours,minutes)</b>: now refers to the instance start time. The hours and minutes given are relative to the start time of the instance. For example, now(-2,40) corresponds to the feed instance at -2 hr and +40 minutes, i.e. the feed instance 80 mins before the instance start time. If the user had given now(0,-80) it would correspond to the same instance.</li>
+<li>2. <b>today(hours,minutes)</b>: the hours and minutes given in this EL correspond to an instance relative to the start of the day of the instance start time. I.e. if the instance start is at 2010-01-02T01:30Z, then today(-3,-20) will mean the instance created at 2010-01-01T20:40Z and today(3,20) will correspond to 2010-01-02T03:20Z.</li></ul>
+<p></p>
+<ul>
+<li>3. <b>yesterday(hours,minutes)</b>: As the name suggests, the EL yesterday picks up feed instances with respect to the start of day yesterday. Hours and minutes are added to the 00 hours starting yesterday. Example: yesterday(24,30) will actually correspond to 00:30 am of today; for 2010-01-02T01:30Z this would mean the 2010-01-02T00:30Z feed.</li></ul>
+<p></p>
+<ul>
+<li>4. <b>currentMonth(day,hour,minute)</b>: currentMonth takes the reference to the start of the month with respect to the instance start time. One thing to keep in mind is that the day is added to the first day of the month, so the value of day is the number of days you want to add to the first day of the month. For example: for instance start time 2010-01-12T01:30Z, the EL currentMonth(3,2,40) will correspond to the feed created at 2010-01-04T02:40Z and currentMonth(0,0,0) will mean 2010-01-01T00:00Z.</li></ul>
+<p></p>
+<ul>
+<li>5. <b>lastMonth(day,hour,minute)</b>: The parameters for lastMonth are the same as for currentMonth, the only difference being that the reference is shifted one month back. For instance start 2010-01-12T01:30Z, lastMonth(2,3,30) will correspond to the feed instance at 2009-12-03T03:30Z.</li></ul>
+<p></p>
+<ul>
+<li>6. <b>currentYear(month,day,hour,minute)</b>: The month, day, hour and minutes in the parameters are added with reference to the start of the year of the instance start time. For our example start time 2010-01-02T00:30Z, the reference goes back to 2010-01-01T00:00Z. Also, similar to days, months are added to the 1st month, that is Jan. So currentYear(0,2,2,20) will mean 2010-01-03T02:20Z while currentYear(11,2,2,20) will mean 2010-12-03T02:20Z.</li></ul>
+<p></p>
+<ul>
+<li>7. <b>lastYear(month,day,hour,minute)</b>: This is exactly similar to currentYear in usage, the only difference being that the start reference is taken to the start of the previous year. For example: lastYear(4,2,2,20) will correspond to the feed instance created at 2009-05-03T02:20Z and lastYear(12,2,2,20) will correspond to the feed at 2010-01-03T02:20Z.</li></ul>
+<p></p>
+<ul>
+<li>8. <b>latest(number of latest instance)</b>: This will simply make your input consider the given number of latest available instances of the feed. For example: latest(0) will consider the last available instance of the feed, whereas latest(-1) will consider the second last available instance and latest(-3) will consider the 4th last available instance.</li></ul>
+<p></p>
+<ul>
+<li>9. <b>currentWeek(weekDayName,hour,minute)</b>: This is similar to 
currentMonth in the sense that it returns a relative time with respect to the 
instance start time, considering the day name provided as input as the start of 
the week. The day names can be one of SUN, MON, TUE, WED, THU, FRI, 
SAT.</li></ul>
+<p></p>
+<ul>
+<li>10. <b>lastWeek(weekDayName,hour,minute)</b>: This is typically 7 days less than what currentWeek returns for similar parameters.</li></ul>
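+<p>As a worked summary, the sketch below shows how a few of the ELs above resolve for an instance with start time 2010-01-02T01:30Z (the values are derived from the examples given above):</p>
+<div class="source">
+<pre>
+EL expression            Resolves to
+now(0,-60)               2010-01-02T00:30Z
+today(-3,-20)            2010-01-01T20:40Z
+yesterday(24,30)         2010-01-02T00:30Z
+currentMonth(0,0,0)      2010-01-01T00:00Z
+currentYear(0,2,2,20)    2010-01-03T02:20Z
+lastYear(12,2,2,20)      2010-01-03T02:20Z
+
+</pre></div></div>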
+<div class="section">
+<h3>Lineage<a name="Lineage"></a></h3>
+<p>Falcon adds the ability to capture lineage for both entities and their associated instances. It also captures the metadata tags associated with each of the entities as relationships. The following relationships are captured:</p>
+<p></p>
+<ul>
+<li>owner of entities - User</li>
+<li>data classification tags</li>
+<li>groups defined in feeds</li>
+<li>Relationships between entities
+<ul>
+<li>Clusters associated with Feed and Process entity</li>
+<li>Input and Output feeds for a Process</li></ul></li>
+<li>Instances refer to corresponding entities</li></ul>
+<p>Lineage is exposed in 3 ways:</p>
+<p></p>
+<ul>
+<li>REST API</li>
+<li>CLI</li>
+<li>Dashboard - Interactive lineage for Process instances</li></ul>
+<p>This feature is enabled by default, but can be disabled by removing the following from the startup configuration:</p>
+<div class="source">
+<pre>
+config name: *.application.services
+config value: org.apache.falcon.metadata.MetadataMappingService
+
+</pre></div>
+<p>Lineage is only captured for Process executions. A future release will 
capture lineage for lifecycle policies such as replication and 
retention.</p></div>
+<div class="section">
+<h3>Security<a name="Security"></a></h3>
+<p>Security is detailed in <a href="./Security.html">Security</a>.</p></div>
+<div class="section">
+<h3>Recipes<a name="Recipes"></a></h3>
+<p>Recipes are detailed in <a href="./Recipes.html">Recipes</a>.</p></div>
+<div class="section">
+<h3>Monitoring<a name="Monitoring"></a></h3>
+<p>Monitoring and Operationalizing Falcon is detailed in <a 
href="./Operability.html">Operability</a>.</p></div>
+<div class="section">
+<h3>Backwards Compatibility<a name="Backwards_Compatibility"></a></h3>
+<p>Backwards compatibility instructions are <a 
href="./Compatibility.html">detailed here.</a></p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2014
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

Added: incubator/falcon/site/0.6-incubating/FeedSchedule.png
URL: 
http://svn.apache.org/viewvc/incubator/falcon/site/0.6-incubating/FeedSchedule.png?rev=1643497&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/falcon/site/0.6-incubating/FeedSchedule.png
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

