http://git-wip-us.apache.org/repos/asf/falcon/blob/91c68bea/content/0.11/EntitySpecification.html
----------------------------------------------------------------------
diff --git a/content/0.11/EntitySpecification.html 
b/content/0.11/EntitySpecification.html
new file mode 100644
index 0000000..ef3e501
--- /dev/null
+++ b/content/0.11/EntitySpecification.html
@@ -0,0 +1,1040 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2018-03-12
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20180312" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Contents</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Contents</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2018-03-12</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.11</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h3>Contents<a name="Contents"></a></h3>
+<p></p>
+<ul>
+<li><a href="#Cluster_Specification">Cluster Specification</a></li>
+<li><a href="#Feed_Specification">Feed Specification</a></li>
+<li><a href="#Process_Specification">Process Specification</a></li></ul></div>
+<div class="section">
+<h3>Cluster Specification<a name="Cluster_Specification"></a></h3>
+<p>The cluster XSD specification is available here. A cluster contains the different interfaces used by Falcon, such as readonly, write, workflow and messaging. A cluster is referenced by name from the feeds and processes that are on-boarded to Falcon.</p>
+<p>Following are the tags defined in a cluster.xml:</p>
+<div class="source">
+<pre>
+&lt;cluster colo=&quot;gs&quot; description=&quot;&quot; name=&quot;corp&quot; 
xmlns=&quot;uri:falcon:cluster:0.1&quot;
+ xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+
+</pre></div>
+<p>The colo specifies the colo to which this cluster belongs, and name is the name of the cluster, which has to be unique.</p></div>
+<div class="section">
+<h4>Interfaces<a name="Interfaces"></a></h4>
+<p>A cluster has various interfaces as described below:</p>
+<div class="source">
+<pre>
+    &lt;interface type=&quot;readonly&quot; 
endpoint=&quot;hftp://localhost:50010&quot; version=&quot;0.20.2&quot; /&gt;
+
+</pre></div>
+<p>A readonly interface specifies the endpoint for Hadoop's HFTP protocol; this is used in the context of feed replication.</p>
+<div class="source">
+<pre>
+&lt;interface type=&quot;write&quot; 
endpoint=&quot;hdfs://localhost:8020&quot; version=&quot;0.20.2&quot; /&gt;
+
+</pre></div>
+<p>A write interface specifies the interface to write to HDFS; its endpoint is the value of fs.defaultFS. Falcon uses this interface to write system data to HDFS, and feeds referencing this cluster are written to HDFS using the same write interface.</p>
+<div class="source">
+<pre>
+&lt;interface type=&quot;execute&quot; endpoint=&quot;localhost:8021&quot; 
version=&quot;0.20.2&quot; /&gt;
+
+</pre></div>
+<p>An execute interface specifies the interface for the job tracker; its endpoint is the value of mapreduce.jobtracker.address. Falcon uses this interface to submit the processes as jobs to the JobTracker defined here.</p>
+<div class="source">
+<pre>
+&lt;interface type=&quot;workflow&quot; 
endpoint=&quot;http://localhost:11000/oozie/&quot; version=&quot;4.0&quot; /&gt;
+
+</pre></div>
+<p>A workflow interface specifies the interface for the workflow engine; an example of its endpoint is the value of OOZIE_URL. Falcon uses this interface to schedule the processes referencing this cluster on the workflow engine defined here.</p>
+<div class="source">
+<pre>
+&lt;interface type=&quot;registry&quot; 
endpoint=&quot;thrift://localhost:9083&quot; version=&quot;0.11.0&quot; /&gt;
+
+</pre></div>
+<p>A registry interface specifies the interface for a metadata catalog, such as the Hive Metastore (or HCatalog). Falcon uses this interface to register/de-register partitions for a given database and table, and also uses this information to schedule data availability events based on partitions in the workflow engine. Although the Hive metastore supports both RPC and HTTP, Falcon comes with an implementation for RPC over thrift. For Hive HA mode, make sure the URIs are separated with a comma and that the protocol &quot;thrift://&quot; appears only once, at the beginning. See below for an example of Hive HA mode:</p>
+<div class="source">
+<pre>
+&lt;interface type=&quot;registry&quot; 
endpoint=&quot;thrift://c6402.ambari.apache.org:9083,c6403.ambari.apache.org:9083&quot;
 version=&quot;0.11.0&quot; /&gt;
+
+</pre></div>
+<div class="source">
+<pre>
+&lt;interface type=&quot;messaging&quot; 
endpoint=&quot;tcp://localhost:61616?daemon=true&quot; 
version=&quot;5.4.6&quot; /&gt;
+
+</pre></div>
+<p>A messaging interface specifies the interface for sending feed availability messages; its endpoint is the broker URL with a TCP address.</p></div>
+<div class="section">
+<h4>Locations<a name="Locations"></a></h4>
+<p>A cluster has a list of locations defined:</p>
+<div class="source">
+<pre>
+&lt;location name=&quot;staging&quot; 
path=&quot;/projects/falcon/staging&quot; /&gt;
+&lt;location name=&quot;working&quot; 
path=&quot;/projects/falcon/working&quot; /&gt; &lt;!--optional--&gt;
+
+</pre></div>
+<p>Location has a name and a path; name is the type of location. Allowed values of name are staging, temp and working. Path is the HDFS path for each location. Falcon uses these locations to do intermediate processing of entities in HDFS and hence Falcon should have read/write/execute permission on them. These locations MUST be created prior to submitting a cluster entity to Falcon. <b>staging</b> is a mandatory location and should have 777 permissions; the parent dirs must have execute permissions so multiple users can write to this location. <b>working</b> is an optional location and must have 755 permissions. If <b>working</b> is not specified, Falcon creates a sub-directory in the <b>staging</b> location with 755 perms. The parent dir for <b>working</b> must have execute permissions so multiple users can read from this location.</p>
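+<p>For reference, a complete locations block covering all three allowed names might look like the following (paths are illustrative):</p>
+<div class="source">
+<pre>
+&lt;locations&gt;
+    &lt;location name=&quot;staging&quot; path=&quot;/projects/falcon/staging&quot; /&gt;
+    &lt;location name=&quot;temp&quot; path=&quot;/tmp&quot; /&gt;
+    &lt;location name=&quot;working&quot; path=&quot;/projects/falcon/working&quot; /&gt;
+&lt;/locations&gt;
+
+</pre></div></div>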
+<div class="section">
+<h4>ACL<a name="ACL"></a></h4>
+<p>A cluster has an ACL (Access Control List), which is useful for implementing permission requirements and provides a way to set different permissions for specific users or named groups.</p>
+<div class="source">
+<pre>
+    &lt;ACL owner=&quot;test-user&quot; group=&quot;test-group&quot; 
permission=&quot;*&quot;/&gt;
+
+</pre></div>
+<p>ACL indicates the Access Control List for this cluster. owner is the owner of this entity, group is the group which has access to read, and permission indicates the permission.</p></div>
+<div class="section">
+<h4>Custom Properties<a name="Custom_Properties"></a></h4>
+<p>A cluster has a list of properties: A key-value pair, which are propagated 
to the workflow engine.</p>
+<div class="source">
+<pre>
+&lt;property name=&quot;brokerImplClass&quot; 
value=&quot;org.apache.activemq.ActiveMQConnectionFactory&quot; /&gt;
+
+</pre></div>
+<p>Ideally, the JMS implementation class name of the messaging engine (brokerImplClass) should be defined here.</p></div>
+<div class="section">
+<h3>Datasource Specification<a name="Datasource_Specification"></a></h3>
+<p>The datasource entity contains the connection information required to connect to a data source such as a MySQL database. The datasource XSD specification is available here. A datasource contains read and write interfaces which are used by Falcon to import or export data from or to datasources respectively. A datasource is referenced by name from the feeds that are on-boarded to Falcon.</p>
+<p>Following are the tags defined in a datasource.xml:</p>
+<div class="source">
+<pre>
+&lt;datasource colo=&quot;west-coast&quot; description=&quot;Customer database 
on west coast&quot; type=&quot;mysql&quot;
+ name=&quot;test-hsql-db&quot; xmlns=&quot;uri:falcon:datasource:0.1&quot; 
xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Datasource Types<a name="Datasource_Types"></a></h4>
+<p>Falcon currently supports relational databases as data sources (both source and target). The following relational databases are supported:</p>
+<ul>
+<li>MySQL, HSQL, Postgres, Oracle, Teradata, Netezza, DB2</li>
+<li>Generic - a generic JDBC data source. This requires specifying a driver class name and jar file in the datasource entity specification. Please see the samples in the examples dir and the sketch below.</li></ul>
+<p>The colo specifies the colo to which the datasource belongs, and name is the name of the datasource, which has to be unique.</p>
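+<p>For the generic type, the driver class name and jar are supplied through a driver element; the following is a minimal sketch (the class name, jar path and exact element names are illustrative, check the datasource XSD and the bundled examples):</p>
+<div class="source">
+<pre>
+&lt;driver&gt;
+    &lt;clazz&gt;com.mysql.jdbc.Driver&lt;/clazz&gt;
+    &lt;jar&gt;/user/falcon/lib/mysql-connector-java.jar&lt;/jar&gt;
+&lt;/driver&gt;
+
+</pre></div></div>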
+<div class="section">
+<h4>Interfaces<a name="Interfaces"></a></h4>
+<p>A datasource has two interfaces as described below:</p>
+<div class="source">
+<pre>
+    &lt;interface type=&quot;readonly&quot; 
endpoint=&quot;jdbc:hsqldb:localhost/db&quot;/&gt;
+
+</pre></div>
+<p>A readonly interface specifies the endpoint and protocol to connect to a 
datasource. This would be used in the context of import from datasource into 
HDFS.</p>
+<div class="source">
+<pre>
+&lt;interface type=&quot;write&quot; endpoint=&quot;jdbc:hsqldb:localhost/db1&quot;/&gt;
+
+</pre></div>
+<p>A write interface specifies the endpoint and protocol to write to the datasource. Falcon uses this interface to export data from HDFS to the datasource.</p>
+<div class="source">
+<pre>
+&lt;credential type=&quot;password-text&quot;&gt;
+    &lt;userName&gt;SA&lt;/userName&gt;
+    &lt;passwordText&gt;&lt;/passwordText&gt;
+&lt;/credential&gt;
+
+</pre></div>
+<p>A credential is associated with an interface (read or write) providing user 
name and password to authenticate to the datasource.</p>
+<div class="source">
+<pre>
+&lt;credential type=&quot;password-file&quot;&gt;
+     &lt;userName&gt;SA&lt;/userName&gt;
+     &lt;passwordFile&gt;hdfs-file-path&lt;/passwordFile&gt;
+&lt;/credential&gt;
+
+</pre></div>
+<p>The credential can be specified via a password file present in the HDFS. 
This file should only be accessible by the user.</p></div>
+<div class="section">
+<h3>Feed Specification<a name="Feed_Specification"></a></h3>
+<p>The Feed XSD specification is available here. A feed defines various attributes such as the feed location, frequency, late-arrival handling and retention policies. A feed can be scheduled on a cluster; once a feed is scheduled, its retention and replication processes are triggered on that cluster.</p>
+<div class="source">
+<pre>
+&lt;feed description=&quot;clicks log&quot; name=&quot;clicks&quot; 
xmlns=&quot;uri:falcon:feed:0.1&quot;
+xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
+
+</pre></div>
+<p>A feed should have a unique name and this name is referenced by processes 
as input or output feed.</p></div>
+<div class="section">
+<h4>Storage<a name="Storage"></a></h4>
+<p>Falcon introduces a new abstraction to encapsulate the storage for a given feed, which can be expressed either as a path on the file system (File System Storage) or as a table in a catalog such as Hive (Catalog Storage).</p>
+<div class="source">
+<pre>
+    &lt;xs:choice minOccurs=&quot;1&quot; maxOccurs=&quot;1&quot;&gt;
+        &lt;xs:element type=&quot;locations&quot; 
name=&quot;locations&quot;/&gt;
+        &lt;xs:element type=&quot;catalog-table&quot; 
name=&quot;table&quot;/&gt;
+    &lt;/xs:choice&gt;
+
+</pre></div>
+<p>A feed should contain one of the two storage options: locations on the file system, or a table in a catalog.</p></div>
+<div class="section">
+<h5>File System Storage<a name="File_System_Storage"></a></h5>
+<div class="source">
+<pre>
+        &lt;clusters&gt;
+        &lt;cluster name=&quot;test-cluster&quot;&gt;
+            &lt;validity start=&quot;2012-07-20T03:00Z&quot; 
end=&quot;2099-07-16T00:00Z&quot;/&gt;
+            &lt;retention limit=&quot;days(10)&quot; 
action=&quot;delete&quot;/&gt;
+            &lt;sla slaLow=&quot;hours(3)&quot; 
slaHigh=&quot;hours(4)&quot;/&gt;
+            &lt;locations&gt;
+                &lt;location type=&quot;data&quot; 
path=&quot;/hdfsDataLocation/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}&quot;/&gt;
+                &lt;location type=&quot;stats&quot; 
path=&quot;/projects/falcon/clicksStats&quot; /&gt;
+                &lt;location type=&quot;meta&quot; 
path=&quot;/projects/falcon/clicksMetaData&quot; /&gt;
+            &lt;/locations&gt;
+        &lt;/cluster&gt;
+..... more clusters &lt;/clusters&gt;
+
+</pre></div>
+<p>A feed references a cluster by its name; before submitting a feed, all the referenced clusters should be submitted to Falcon. type: specifies whether the referenced cluster should be treated as a source or target for the feed. A feed can have multiple source and target clusters. If the type of a cluster is not specified then the cluster is not considered for replication. Validity of a feed on a cluster specifies the duration for which this feed is valid on that cluster. Retention specifies how long the feed is retained on this cluster and the action to be taken on the feed after the expiry of the retention period. The retention limit is specified by the expression frequency(times), ex: if the feed should be retained for at least 6 hours then retention's limit=&quot;hours(6)&quot;. The field partitionExp contains partition tags. The number of partition tags has to be equal to the number of partitions specified in the feed schema. A partition tag can be a wildcard (*), a static string or an expression; at least one of the strings has to be an expression. sla specifies the sla for the feed on this cluster. This is an optional parameter and the sla can be the same as or different from the global sla tag (mentioned outside the clusters tag). This tag provides the user the flexibility to have a different sla for different clusters, e.g. in case of replication. If this attribute is missing then the default global sla is picked from the feed definition. Location specifies where the feed is available on this cluster. This is an optional parameter and the path can be the same as or different from the global locations tag value (mentioned outside the clusters tag). This tag provides the user the flexibility to have the feed at different locations on different clusters. If this attribute is missing then the default global location is picked from the feed definition. Also, the individual location tags data, stats and meta are optional.</p></div>
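+<p>Putting the type and partition attributes described above together, a source cluster entry in a feed might look like the following sketch (the partition expression and attribute placement are illustrative):</p>
+<div class="source">
+<pre>
+&lt;cluster name=&quot;test-cluster&quot; type=&quot;source&quot; partition=&quot;${cluster.colo}&quot;&gt;
+    &lt;validity start=&quot;2012-07-20T03:00Z&quot; end=&quot;2099-07-16T00:00Z&quot;/&gt;
+    &lt;retention limit=&quot;hours(6)&quot; action=&quot;delete&quot;/&gt;
+&lt;/cluster&gt;
+
+</pre></div>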
+<div class="source">
+<pre>
+ &lt;location type=&quot;data&quot; path=&quot;/projects/falcon/clicks&quot; 
/&gt;
+ &lt;location type=&quot;stats&quot; 
path=&quot;/projects/falcon/clicksStats&quot; /&gt;
+ &lt;location type=&quot;meta&quot; 
path=&quot;/projects/falcon/clicksMetaData&quot; /&gt;
+
+</pre></div>
+<p>A location tag specifies the type of location (data, meta or stats) and the corresponding path for it. A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is generated periodically. ex: type=&quot;data&quot; path=&quot;/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic&quot;. The granularity of the date pattern in the path should be at least that of the frequency of the feed. The other location types which are supported are the stats and meta paths; if a process references a feed then the meta and stats paths are available as properties in the process.</p></div>
+<div class="section">
+<h5>Catalog Storage (Table)<a name="Catalog_Storage_Table"></a></h5>
+<p>A table tag specifies the table URI in the catalog registry as:</p>
+<div class="source">
+<pre>
+catalog:$database-name:$table-name#(partition-key=partition-value);(partition-key=partition-value);*
+
+</pre></div>
+<p>This is modeled as a URI (similar to an ISBN URI). It does not have any reference to Hive or HCatalog. It's quite generic, so it can be tied to other implementations of a catalog registry. The catalog implementation specified in the startup config provides the implementation for the catalog URI.</p>
+<p>Top-level partition has to be a dated pattern and the granularity of date 
pattern should be at least that of a frequency of a feed.</p>
+<div class="source">
+<pre>
+    &lt;xs:complexType name=&quot;catalog-table&quot;&gt;
+        &lt;xs:annotation&gt;
+            &lt;xs:documentation&gt;
+                catalog specifies the uri of a Hive table along with the 
partition spec.
+                
uri=&quot;catalog:$database:$table#(partition-key=partition-value);+&quot;
+                Example: catalog:logs-db:clicks#ds=${YEAR}-${MONTH}-${DAY}
+            &lt;/xs:documentation&gt;
+        &lt;/xs:annotation&gt;
+        &lt;xs:attribute type=&quot;xs:string&quot; name=&quot;uri&quot; 
use=&quot;required&quot;/&gt;
+    &lt;/xs:complexType&gt;
+
+</pre></div>
+<p>Examples:</p>
+<div class="source">
+<pre>
+&lt;table 
uri=&quot;catalog:default:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR};region=${region}&quot;
 /&gt;
+&lt;table 
uri=&quot;catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;
 /&gt;
+&lt;table 
uri=&quot;catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;
 /&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Partitions<a name="Partitions"></a></h4>
+<div class="source">
+<pre>
+   &lt;partitions&gt;
+        &lt;partition name=&quot;country&quot; /&gt;
+        &lt;partition name=&quot;cluster&quot; /&gt;
+    &lt;/partitions&gt;
+
+</pre></div>
+<p>A feed can define multiple partitions. If a referenced cluster defines partitions then the number of partitions in the feed has to be equal to or more than the cluster partitions.</p>
+<p><b>Note:</b> This will only apply for FileSystem storage but not Table 
storage as partitions are defined and maintained in Hive (HCatalog) 
registry.</p></div>
+<div class="section">
+<h4>Groups<a name="Groups"></a></h4>
+<div class="source">
+<pre>
+    &lt;groups&gt;online,bi&lt;/groups&gt;
+
+</pre></div>
+<p>A feed specifies a list of comma separated groups. A group is a logical grouping of feeds, and a group is said to be available if all the feeds belonging to that group are available. The frequency of all the feeds which belong to the same group must be the same.</p></div>
+<div class="section">
+<h4>Availability Flags<a name="Availability_Flags"></a></h4>
+<div class="source">
+<pre>
+    &lt;availabilityFlag&gt;_SUCCESS&lt;/availabilityFlag&gt;
+
+</pre></div>
+<p>An availabilityFlag specifies the name of a file which, when present/created in a feed's data directory, marks the feed as available. ex: _SUCCESS. If this element is omitted then Falcon considers the presence of the feed's data directory as feed availability.</p></div>
+<div class="section">
+<h4>Frequency<a name="Frequency"></a></h4>
+<div class="source">
+<pre>
+    &lt;frequency&gt;minutes(20)&lt;/frequency&gt;
+
+</pre></div>
+<p>A feed has a frequency which specifies how often this feed is generated. ex: it can be generated every hour, every 5 minutes, daily, weekly etc. Valid frequency types for a feed are minutes, hours, days and months. The values can be negative, zero or positive.</p>
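+<p>For example, the following frequencies express hourly, weekly and monthly generation respectively:</p>
+<div class="source">
+<pre>
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;frequency&gt;days(7)&lt;/frequency&gt;
+    &lt;frequency&gt;months(1)&lt;/frequency&gt;
+
+</pre></div></div>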
+<div class="section">
+<h4>SLA<a name="SLA"></a></h4>
+<div class="source">
+<pre>
+    &lt;sla slaLow=&quot;hours(40)&quot; slaHigh=&quot;hours(44)&quot; /&gt;
+
+</pre></div>
+<p>A feed can have SLA and each SLA has two properties - slaLow and slaHigh. 
Both slaLow and slaHigh are written using expressions like frequency. slaLow is 
intended to serve for alerting for feed instances which are in danger of 
missing their availability SLAs. slaHigh is intended to serve for reporting the 
feeds which missed their SLAs. SLAs are relative to feed instance 
time.</p></div>
+<div class="section">
+<h4>Import<a name="Import"></a></h4>
+<div class="source">
+<pre>
+&lt;import&gt;
+    &lt;source name=&quot;test-hsql-db&quot; tableName=&quot;customer&quot;&gt;
+        &lt;extract type=&quot;full&quot;&gt;
+            &lt;mergepolicy&gt;snapshot&lt;/mergepolicy&gt;
+         &lt;/extract&gt;
+         &lt;fields&gt;
+            &lt;includes&gt;
+                &lt;field&gt;id&lt;/field&gt;
+                &lt;field&gt;name&lt;/field&gt;
+            &lt;/includes&gt;
+         &lt;/fields&gt;
+    &lt;/source&gt;
+    &lt;arguments&gt;
+        &lt;argument name=&quot;--split-by&quot; value=&quot;id&quot;/&gt;
+        &lt;argument name=&quot;--num-mappers&quot; value=&quot;2&quot;/&gt;
+    &lt;/arguments&gt;
+&lt;/import&gt;
+
+</pre></div>
+<p>A feed can have an import policy associated with it. The source name specifies the datasource entity from which the data will be imported to HDFS. The tableName specifies the table or topic to be imported from the datasource. The extract type specifies the pull mechanism (full or incremental extract); the full extract method extracts all the data from the datasource, while the incremental extraction feature is still in progress. The mergepolicy determines how the data is to be laid out on HDFS. The snapshot layout creates a snapshot of the data on HDFS using the feed's location specification. Fields is used to specify the projection columns. Underneath, feed import from a database uses Sqoop, and any advanced Sqoop options can be specified via the arguments.</p>
+<p>The feed's data storage location should include some combination of time partitions if an import policy is associated with it. Please see ImportExport documentation for more details.</p></div>
+<div class="section">
+<h4>Late Arrival<a name="Late_Arrival"></a></h4>
+<div class="source">
+<pre>
+    &lt;late-arrival cut-off=&quot;hours(6)&quot; /&gt;
+
+</pre></div>
+<p>A late-arrival specifies the cut-off period till which the feed is expected to arrive late, and should be honored by processes referring to it as an input feed by rerunning the instances in case the data arrives late within the cut-off period. The cut-off period is specified by the expression frequency(times), ex: if the feed can arrive late by up to 8 hours then late-arrival's cut-off=&quot;hours(8)&quot;.</p>
+<p><b>Note:</b> This will only apply for FileSystem storage but not Table 
storage until a future time.</p></div>
+<div class="section">
+<h4>Email Notification<a name="Email_Notification"></a></h4>
+<div class="source">
+<pre>
+    &lt;notification type=&quot;email&quot; to=&quot;b...@xyz.com&quot;/&gt;
+
+</pre></div>
+<p>Specifying the notification element with &quot;type&quot; property allows 
users to receive email notification when a scheduled feed instance completes. 
Multiple recipients of an email can be provided as comma separated addresses 
with &quot;to&quot; property. To send email notification ensure that SMTP 
parameters are defined in Falcon startup.properties. Refer to <a 
href="./FalconEmailNotification.html">Falcon Email Notification</a> for more 
details.</p></div>
+<div class="section">
+<h4>ACL<a name="ACL"></a></h4>
+<p>A feed has an ACL (Access Control List), which is useful for implementing permission requirements and provides a way to set different permissions for specific users or named groups.</p>
+<div class="source">
+<pre>
+    &lt;ACL owner=&quot;test-user&quot; group=&quot;test-group&quot; 
permission=&quot;*&quot;/&gt;
+
+</pre></div>
+<p>ACL indicates the Access Control List for this feed. owner is the owner of this entity, group is the group which has access to read, and permission indicates the permission.</p></div>
+<div class="section">
+<h4>Custom Properties<a name="Custom_Properties"></a></h4>
+<div class="source">
+<pre>
+    &lt;properties&gt;
+        &lt;property name=&quot;tmpFeedPath&quot; 
value=&quot;tmpFeedPathValue&quot; /&gt;
+        &lt;property name=&quot;field2&quot; value=&quot;value2&quot; /&gt;
+        &lt;property name=&quot;queueName&quot; 
value=&quot;hadoopQueue&quot;/&gt;
+        &lt;property name=&quot;jobPriority&quot; 
value=&quot;VERY_HIGH&quot;/&gt;
+        &lt;property name=&quot;timeout&quot; value=&quot;hours(1)&quot;/&gt;
+        &lt;property name=&quot;parallel&quot; value=&quot;3&quot;/&gt;
+        &lt;property name=&quot;maxMaps&quot; value=&quot;8&quot;/&gt;
+        &lt;property name=&quot;mapBandwidth&quot; value=&quot;1&quot;/&gt;
+        &lt;property name=&quot;overwrite&quot; value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;ignoreErrors&quot; value=&quot;false&quot;/&gt;
+        &lt;property name=&quot;skipChecksum&quot; value=&quot;false&quot;/&gt;
+        &lt;property name=&quot;removeDeletedFiles&quot; 
value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;preserveBlockSize&quot; 
value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;preserveReplicationNumber&quot; 
value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;preservePermission&quot; 
value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;preserveUser&quot; value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;preserveGroup&quot; 
value=&quot;false&quot;/&gt;
+        &lt;property name=&quot;preserveChecksumType&quot; 
value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;preserveAcl&quot; value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;preserveXattr&quot; value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;preserveTimes&quot; value=&quot;true&quot;/&gt;
+        &lt;property name=&quot;tdeEncryptionEnabled&quot; 
value=&quot;false&quot;/&gt;
+        &lt;property name=&quot;order&quot; value=&quot;LIFO&quot;/&gt;
+    &lt;/properties&gt;
+
+</pre></div>
+<p>These are key-value pairs which are propagated to the workflow engine. &quot;queueName&quot; and &quot;jobPriority&quot; are special properties available to the user to specify the Hadoop job queue and priority; the same values are used by Falcon's launcher job. &quot;timeout&quot;, &quot;parallel&quot; and &quot;order&quot; are other special properties: timeout decides a replication instance's timeout value while waiting for the feed instance, parallel decides the number of concurrent replication instances that can run at any given time, and order decides the execution order for replication instances, like FIFO, LIFO and LAST_ONLY. <a href="./DistCp.html">DistCp</a> options can be passed as custom properties, which will be propagated to the <a href="./DistCp.html">DistCp</a> tool. &quot;maxMaps&quot; represents the maximum number of maps used during replication. &quot;mapBandwidth&quot; represents the bandwidth in MB/s used by each mapper during replication. &quot;overwrite&quot; overwrites the destination during replication. &quot;ignoreErrors&quot; ignores failures without causing the job to fail during replication. &quot;skipChecksum&quot; bypasses checksum verification during replication. &quot;removeDeletedFiles&quot; deletes the files existing in the destination but not in the source during replication. &quot;preserveBlockSize&quot; preserves block size during replication. &quot;preserveReplicationNumber&quot; preserves the replication number during replication. &quot;preservePermission&quot; preserves permissions during replication. &quot;preserveUser&quot; preserves the user during replication. &quot;preserveGroup&quot; preserves the group during replication. &quot;preserveChecksumType&quot; preserves the checksum type during replication. &quot;preserveAcl&quot; preserves ACLs during replication. &quot;preserveXattr&quot; preserves extended attributes during replication. &quot;preserveTimes&quot; preserves access and modification times during replication. &quot;tdeEncryptionEnabled&quot; indicates whether TDE is enabled.</p></div>
+<div class="section">
+<h4>Lifecycle<a name="Lifecycle"></a></h4>
+<div class="source">
+<pre>
+
+&lt;lifecycle&gt;
+    &lt;retention-stage&gt;
+        &lt;frequency&gt;hours(10)&lt;/frequency&gt;
+        &lt;queue&gt;reports&lt;/queue&gt;
+        &lt;priority&gt;NORMAL&lt;/priority&gt;
+        &lt;properties&gt;
+            &lt;property 
name=&quot;retention.policy.agebaseddelete.limit&quot; 
value=&quot;hours(9)&quot;&gt;&lt;/property&gt;
+        &lt;/properties&gt;
+    &lt;/retention-stage&gt;
+&lt;/lifecycle&gt;
+
+
+</pre></div>
+<p>The lifecycle tag is the new way to define the various stages of a feed's lifecycle. In the example above we have defined a retention-stage using the lifecycle tag. You may define lifecycle at the global level, at a cluster level, or both. Cluster level configuration takes precedence, and Falcon falls back to the global definition if the cluster level specification is missing.</p>
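+<p>A cluster level override would sit inside the feed's cluster element; a minimal sketch, assuming the same retention-stage elements as the global example above:</p>
+<div class="source">
+<pre>
+&lt;clusters&gt;
+    &lt;cluster name=&quot;test-cluster&quot;&gt;
+        &lt;validity start=&quot;2012-07-20T03:00Z&quot; end=&quot;2099-07-16T00:00Z&quot;/&gt;
+        &lt;lifecycle&gt;
+            &lt;retention-stage&gt;
+                &lt;frequency&gt;hours(6)&lt;/frequency&gt;
+                &lt;properties&gt;
+                    &lt;property name=&quot;retention.policy.agebaseddelete.limit&quot; value=&quot;hours(6)&quot;/&gt;
+                &lt;/properties&gt;
+            &lt;/retention-stage&gt;
+        &lt;/lifecycle&gt;
+    &lt;/cluster&gt;
+&lt;/clusters&gt;
+
+</pre></div>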
+</div>
+<div class="section">
+<h5>Retention Stage<a name="Retention_Stage"></a></h5>
+<p>As of now there are two ways to specify retention. One is through the &lt;retention&gt; tag in the cluster, and the other is the new way, through the &lt;retention-stage&gt; tag in the &lt;lifecycle&gt; tag. If both are defined for a feed, then the lifecycle tag will be considered effective and Falcon will ignore the &lt;retention&gt; tag in the cluster. If there is an invalid configuration of retention-stage in the lifecycle tag, then Falcon will <b>NOT</b> fall back to the retention tag even if it is defined, and will throw a validation error.</p>
+<p>In this new method of defining retention you can specify the frequency at which the retention should occur, and you can also define the queue and priority parameters for retention jobs. The default behavior of retention-stage is the same as the existing one, which is to delete all instances corresponding to an instance-time earlier than the duration provided in &quot;retention.policy.agebaseddelete.limit&quot;.</p>
+<p>The property &quot;retention.policy.agebaseddelete.limit&quot; is mandatory and must contain a valid duration, e.g. &quot;hours(1)&quot;. Retention frequency is not a mandatory parameter; if the user doesn't specify the frequency in the retention stage then it doesn't fall back to the old retention policy frequency. Its default value is set to 6 hours if the feed frequency is less than 6 hours, else it is set to the feed frequency, as retention shouldn't be more frequent than data availability to avoid wastage of compute resources.</p>
+<p>In future, we will allow more customisation like customising how to choose 
instances to be deleted through this method.</p></div>
+<div class="section">
+<h3>Process Specification<a name="Process_Specification"></a></h3>
+<p>A process defines the configuration for a workflow. A workflow is a directed acyclic graph (DAG) which defines the job for the workflow engine. A process definition defines the configurations required to run the workflow job. For example, a process defines the frequency at which the workflow should run, the clusters on which the workflow should run, the inputs and outputs for the workflow, how workflow failures should be handled, how late inputs should be handled and so on.</p>
+<p>The different details of process are:</p></div>
+<div class="section">
+<h4>Name<a name="Name"></a></h4>
+<p>Each process is identified with a unique name. Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+&lt;/process&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Tags<a name="Tags"></a></h4>
+<p>An optional list of comma separated tags which are used for classification 
of processes. Syntax:</p>
+<div class="source">
+<pre>
+...
+    &lt;tags&gt;consumer=consu...@xyz.com, owner=produ...@xyz.com, 
department=forecasting&lt;/tags&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Pipelines<a name="Pipelines"></a></h4>
+<p>An optional list of comma separated word strings, specifies the data 
processing pipeline(s) to which this process belongs. Only letters, numbers and 
underscore are allowed for pipeline string. Syntax:</p>
+<div class="source">
+<pre>
+...
+    &lt;pipelines&gt;test_Pipeline, dataReplication, 
clickStream_pipeline&lt;/pipelines&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Cluster<a name="Cluster"></a></h4>
+<p>The cluster on which the workflow should run. A process should contain one or more clusters. The cluster definition for the cluster name gives the end points for workflow execution, name node, job tracker, messaging and so on. Each cluster in turn has a validity mentioned, which tells the times between which the job should run on that specified cluster. Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+   &lt;clusters&gt;
+        &lt;cluster name=&quot;test-cluster1&quot;&gt;
+            &lt;validity start=&quot;2012-12-21T08:15Z&quot; 
end=&quot;2100-01-01T00:00Z&quot;/&gt;
+        &lt;/cluster&gt;
+        &lt;cluster name=&quot;test-cluster2&quot;&gt;
+            &lt;validity start=&quot;2012-12-21T08:15Z&quot; 
end=&quot;2100-01-01T00:00Z&quot;/&gt;
+        &lt;/cluster&gt;
+       ....
+       ....
+    &lt;/clusters&gt;
+
+...
+&lt;/process&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Parallel<a name="Parallel"></a></h4>
+<p>Parallel defines how many instances of the workflow can run concurrently. 
It should be a positive integer &gt; 0. For example, parallel of 1 ensures that 
only one instance of the workflow can run at a time. The next instance will 
start only after the running instance completes. Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+   &lt;parallel&gt;[parallel]&lt;/parallel&gt;
+...
+&lt;/process&gt;
+
+</pre></div></div>
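+<p>For example, allowing three instances of the workflow to run concurrently:</p>
+<div class="source">
+<pre>
+   &lt;parallel&gt;3&lt;/parallel&gt;
+
+</pre></div>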
+<div class="section">
+<h4>Order<a name="Order"></a></h4>
+<p>Order defines the order in which the ready instances are picked up. The possible values are FIFO (First In First Out), LIFO (Last In First Out), and LAST_ONLY (Last Only). Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+   &lt;order&gt;[order]&lt;/order&gt;
+...
+&lt;/process&gt;
+
+</pre></div></div>
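+<p>For example, to pick up the most recently ready instance first:</p>
+<div class="source">
+<pre>
+   &lt;order&gt;LIFO&lt;/order&gt;
+
+</pre></div>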
+<div class="section">
+<h4>Timeout<a name="Timeout"></a></h4>
+<p>An optional timeout specifies the maximum time an instance waits for a dataset before being killed by the workflow engine; a timeout is specified like a frequency. If a timeout is not specified, Falcon computes a default timeout for a process based on its frequency, which is six times the frequency of the process, or 30 minutes if the computed timeout is less than 30 minutes. For example, a process with frequency minutes(30) gets a default timeout of minutes(180).</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+   &lt;timeout&gt;[timeunit]([frequency])&lt;/timeout&gt;
+...
+&lt;/process&gt;
+
+</pre></div></div>
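+<p>For example, to make instances wait at most two hours for their inputs:</p>
+<div class="source">
+<pre>
+   &lt;timeout&gt;hours(2)&lt;/timeout&gt;
+
+</pre></div>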
+<div class="section">
+<h4>Frequency<a name="Frequency"></a></h4>
+<p>Frequency defines how frequently the workflow job should run. For example, 
hours(1) defines the frequency as hourly, days(7) defines weekly frequency. The 
values for timeunit can be minutes/hours/days/months and the frequency number 
should be a positive integer &gt; 0.  Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+   &lt;frequency&gt;[timeunit]([frequency])&lt;/frequency&gt;
+...
+&lt;/process&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>SLA<a name="SLA"></a></h4>
+<div class="source">
+<pre>
+    &lt;sla shouldStartIn=&quot;hours(2)&quot; 
shouldEndIn=&quot;hours(4)&quot;/&gt;
+
+</pre></div>
+<p>A process can have SLA which is defined by 2 optional attributes - 
shouldStartIn and shouldEndIn. All the attributes are written using expressions 
like frequency. shouldStartIn is the time by which the process should have 
started. shouldEndIn is the time by which the process should have 
finished.</p></div>
+<div class="section">
+<h4>Validity<a name="Validity"></a></h4>
+<p>Validity defines how long the workflow should run. It has 3 components - 
start time, end time and timezone. Start time and end time are timestamps 
defined in yyyy-MM-dd'T'HH:mm'Z' format and should always be in UTC. Timezone 
is used to compute the next instances starting from start time. The workflow 
will start at start time and end before end time specified on a given cluster. 
So, there will not be a workflow instance at end time. Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+   &lt;validity start=[start time] end=[end time] timezone=[timezone]/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>Examples:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;frequency&gt;days(1)&lt;/frequency&gt;
+    &lt;validity start=&quot;2012-01-01T00:40Z&quot; 
end=&quot;2012-04-01T00:00Z&quot; timezone=&quot;UTC&quot;/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>The daily workflow will start on Jan 1st 2012 at 00:40 UTC, it will run once a day at 00:40 UTC, and the last instance will run on March 31st 2012 at 00:40 UTC.</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;validity start=&quot;2012-03-11T08:40Z&quot; 
end=&quot;2012-03-12T08:00Z&quot; timezone=&quot;PST8PDT&quot;/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>The hourly workflow will start on March 11th 2012 at 00:40 PST, the next 
instances will be at 01:40 PST, 03:40 PDT, 04:40 PDT and so on till 23:40 PDT. 
So, there will be just 23 instances of the workflow for March 11th 2012 because 
of DST switch.</p></div>
+<div class="section">
+<h4>Inputs<a name="Inputs"></a></h4>
+<p>Inputs define the input data for the workflow. The workflow job will start 
executing only after the schedule time and when all the inputs are available. 
There can be 0 or more inputs and each of the input maps to a feed. The path 
and frequency of input data is picked up from feed definition. Each input 
should also define start and end instances in terms of <a 
href="./FalconDocumentation.html">EL expressions</a> and can optionally specify 
specific partition of input that the workflow requires. The components in 
partition should be subset of partitions defined in the feed.</p>
+<p>For each input, Falcon will create a property with the input name that 
contains the comma separated list of input paths. This property can be used in 
workflow actions like pig scripts and so on.</p>
+<p>Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+    &lt;inputs&gt;
+        &lt;input name=[input name] feed=[feed name] start=[start el] end=[end 
el] partition=[partition]/&gt;
+        ...
+    &lt;/inputs&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>Example:</p>
+<div class="source">
+<pre>
+&lt;feed name=&quot;feed1&quot;&gt;
+...
+    &lt;partition name=&quot;isFraud&quot;/&gt;
+    &lt;partition name=&quot;country&quot;/&gt;
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;locations&gt;
+        &lt;location type=&quot;data&quot; 
path=&quot;/projects/bootcamp/feed1/${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;/&gt;
+        ...
+    &lt;/locations&gt;
+...
+&lt;/feed&gt;
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;inputs&gt;
+        &lt;input name=&quot;input1&quot; feed=&quot;feed1&quot; 
start=&quot;today(0,0)&quot; end=&quot;today(1,0)&quot; 
partition=&quot;*/US&quot;/&gt;
+        ...
+    &lt;/inputs&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>The input for the workflow is an hourly feed and takes the 0th and 1st hour data of today (the day when the workflow runs). If the workflow is running for 2012-03-01T06:40Z, the inputs are /projects/bootcamp/feed1/2012-03-01-00/*/US and /projects/bootcamp/feed1/2012-03-01-01/*/US. The property for this input is input1=/projects/bootcamp/feed1/2012-03-01-00/*/US,/projects/bootcamp/feed1/2012-03-01-01/*/US</p>
+<p>Also, feeds with Hive table storage can be used as inputs to a process. 
Several parameters from inputs are passed as params to the user workflow or pig 
script.</p>
+<div class="source">
+<pre>
+    ${wf:conf('falcon_input_database')} - database name associated with the 
feed for a given input
+    ${wf:conf('falcon_input_table')} - table name associated with the feed for 
a given input
+    ${wf:conf('falcon_input_catalog_url')} - Hive metastore URI for this input 
feed
+    ${wf:conf('falcon_input_partition_filter_pig')} - value of 
${coord:dataInPartitionFilter('$input', 'pig')}
+    ${wf:conf('falcon_input_partition_filter_hive')} - value of 
${coord:dataInPartitionFilter('$input', 'hive')}
+    ${wf:conf('falcon_input_partition_filter_java')} - value of 
${coord:dataInPartitionFilter('$input', 'java')}
+
+</pre></div>
+<p><b>NOTE:</b> input is the name of the input configured in the process, 
which is input.getName().</p>
+<div class="source">
+<pre>&lt;input name=&quot;input&quot; feed=&quot;clicks-raw-table&quot; 
start=&quot;yesterday(0,0)&quot; end=&quot;yesterday(20,0)&quot;/&gt;
+</pre></div>
+<p>Example workflow configuration:</p>
+<div class="source">
+<pre>
+&lt;configuration&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_database&lt;/name&gt;
+    &lt;value&gt;falcon_db&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_table&lt;/name&gt;
+    &lt;value&gt;input_table&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_catalog_url&lt;/name&gt;
+    &lt;value&gt;thrift://localhost:29083&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_storage_type&lt;/name&gt;
+    &lt;value&gt;TABLE&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;feedInstancePaths&lt;/name&gt;
+    
&lt;value&gt;hcat://localhost:29083/falcon_db/output_table/ds=2012-04-21-00&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_partition_filter_java&lt;/name&gt;
+    &lt;value&gt;(ds='2012-04-21-00')&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_partition_filter_hive&lt;/name&gt;
+    &lt;value&gt;(ds='2012-04-21-00')&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_partition_filter_pig&lt;/name&gt;
+    &lt;value&gt;(ds=='2012-04-21-00')&lt;/value&gt;
+  &lt;/property&gt;
+  ...
+&lt;/configuration&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Optional Inputs<a name="Optional_Inputs"></a></h4>
+<p>The user can mark one or more inputs as optional. In such cases the job does not wait on those inputs which are marked as optional; if they are present it considers them, otherwise it continues with the mandatory ones. If some instances of the optional feed are present for the given data window, those are considered and passed on to the process. While checking for the presence of a feed instance, Falcon looks for <b><i>availabilityFlag</i></b> in the directory, if specified in the feed definition. If no <b><i>availabilityFlag</i></b> is specified, presence of the instance directory is treated as an indication of availability of data. Example:</p>
+<div class="source">
+<pre>
+&lt;feed name=&quot;feed1&quot;&gt;
+...
+    &lt;partition name=&quot;isFraud&quot;/&gt;
+    &lt;partition name=&quot;country&quot;/&gt;
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;locations&gt;
+        &lt;location type=&quot;data&quot; 
path=&quot;/projects/bootcamp/feed1/${YEAR}-${MONTH}-${DAY}-${HOUR}&quot;/&gt;
+        ...
+    &lt;/locations&gt;
+...
+&lt;/feed&gt;
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;inputs&gt;
+        &lt;input name=&quot;input1&quot; feed=&quot;feed1&quot; 
start=&quot;today(0,0)&quot; end=&quot;today(1,0)&quot; 
partition=&quot;*/US&quot;/&gt;
+        &lt;input name=&quot;input2&quot; feed=&quot;feed2&quot; 
start=&quot;today(0,0)&quot; end=&quot;today(1,0)&quot; 
partition=&quot;*/UK&quot; optional=&quot;true&quot; /&gt;
+        ...
+    &lt;/inputs&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p><b>Note:</b> This is only supported for FileSystem storage but not Table 
storage at this point.</p></div>
+<div class="section">
+<h4>Outputs<a name="Outputs"></a></h4>
+<p>Outputs define the output data that is generated by the workflow. A process 
can define 0 or more outputs. Each output is mapped to a feed and the output 
path is picked up from feed definition. The output instance that should be 
generated is specified in terms of <a href="./FalconDocumentation.html">EL 
expression</a>.</p>
+<p>For each output, Falcon creates a property with the output name that contains the path of the output data. This can be used in the workflow to store data at that path. Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+    &lt;outputs&gt;
+        &lt;output name=[output name] feed=[feed name] instance=[instance el]/&gt;
+        ...
+    &lt;/outputs&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>Example:</p>
+<div class="source">
+<pre>
+&lt;feed name=&quot;feed2&quot;&gt;
+...
+    &lt;frequency&gt;days(1)&lt;/frequency&gt;
+    &lt;locations&gt;
+        &lt;location type=&quot;data&quot; 
path=&quot;/projects/bootcamp/feed2/${YEAR}-${MONTH}-${DAY}&quot;/&gt;
+        ...
+    &lt;/locations&gt;
+...
+&lt;/feed&gt;
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;outputs&gt;
+        &lt;output name=&quot;output1&quot; feed=&quot;feed2&quot; 
instance=&quot;today(0,0)&quot;/&gt;
+        ...
+    &lt;/outputs&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>The output of the workflow is feed instance for today. If the workflow is 
running for 2012-03-01T06:40Z, the workflow generates output 
/projects/bootcamp/feed2/2012-03-01. The property for this output that is 
available for workflow is: output1=/projects/bootcamp/feed2/2012-03-01</p>
+<p>Also, feeds with Hive table storage can be used as outputs to a process. 
Several parameters from outputs are passed as params to the user workflow or 
pig script.</p>
+<div class="source">
+<pre>
+    ${wf:conf('falcon_output_database')} - database name associated with the 
feed for a given output
+    ${wf:conf('falcon_output_table')} - table name associated with the feed 
for a given output
+    ${wf:conf('falcon_output_catalog_url')} - Hive metastore URI for the given 
output feed
+    ${wf:conf('falcon_output_dataout_partitions')} - value of 
${coord:dataOutPartitions('$output')}
+
+</pre></div>
+<p><b>NOTE:</b> output is the name of the output configured in the process, 
which is output.getName().</p>
+<div class="source">
+<pre>&lt;output name=&quot;output&quot; feed=&quot;clicks-summary-table&quot; 
instance=&quot;today(0,0)&quot;/&gt;
+</pre></div>
+<p>Example workflow configuration:</p>
+<div class="source">
+<pre>
+&lt;configuration&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_database&lt;/name&gt;
+    &lt;value&gt;falcon_db&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_table&lt;/name&gt;
+    &lt;value&gt;output_table&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_catalog_url&lt;/name&gt;
+    &lt;value&gt;thrift://localhost:29083&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_storage_type&lt;/name&gt;
+    &lt;value&gt;TABLE&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;feedInstancePaths&lt;/name&gt;
+    
&lt;value&gt;hcat://localhost:29083/falcon_db/output_table/ds=2012-04-21-00&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_dataout_partitions&lt;/name&gt;
+    &lt;value&gt;'ds=2012-04-21-00'&lt;/value&gt;
+  &lt;/property&gt;
+  ....
+&lt;/configuration&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Custom Properties<a name="Custom_Properties"></a></h4>
+<p>The properties are key value pairs that are passed to the workflow. These 
properties are optional and can be used in workflow to parameterize the 
workflow. Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+    &lt;properties&gt;
+        &lt;property name=[key] value=[value]/&gt;
+        ...
+    &lt;/properties&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>The following are some special properties which, when present, are used by Falcon's launcher job; the same properties are also available in the workflow, which can be used to propagate them to a pig or M/R job.</p>
+<div class="source">
+<pre>
+        &lt;property name=&quot;queueName&quot; 
value=&quot;hadoopQueue&quot;/&gt;
+        &lt;property name=&quot;jobPriority&quot; 
value=&quot;VERY_HIGH&quot;/&gt;
+        &lt;!-- This property is used to turn off JMS notifications for this 
process. JMS notifications are enabled by default. --&gt;
+        &lt;property name=&quot;userJMSNotificationEnabled&quot; 
value=&quot;false&quot;/&gt;
+
+</pre></div></div>
+<div class="section">
+<h4>Workflow<a name="Workflow"></a></h4>
+<p>The workflow defines the workflow engine that should be used and the path to the workflow on HDFS. Libraries required can be specified using the lib attribute in the workflow element as comma separated HDFS paths. The workflow definition on HDFS contains the actual job that should run and it should conform to the workflow specification of the engine specified. The libraries required by the workflow should be in the lib folder inside the workflow path.</p>
+<p>The properties defined in the cluster and cluster properties (nameNode and jobTracker) will also be available to the workflow.</p>
+<p>There are 4 engines supported today.</p></div>
+<div class="section">
+<h5>Oozie<a name="Oozie"></a></h5>
+<p>As part of oozie workflow engine support, users can embed an oozie workflow. Refer to the oozie <a class="externalLink" href="http://oozie.apache.org/docs/4.2.0/DG_Overview.html">workflow overview</a> and <a class="externalLink" href="http://oozie.apache.org/docs/4.2.0/WorkflowFunctionalSpec.html">workflow specification</a> for details.</p>
+<p>Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+    &lt;workflow engine=[workflow engine] path=[workflow path] lib=[comma 
separated lib paths]/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>Example:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;workflow engine=&quot;oozie&quot; 
path=&quot;/projects/bootcamp/workflow&quot;/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>This defines the workflow engine to be oozie and the workflow xml is 
defined at /projects/bootcamp/workflow/workflow.xml. The libraries are at 
/projects/bootcamp/workflow/lib. Libraries path can be overridden using lib 
attribute. e.g.: 
lib=&quot;/projects/bootcamp/wf/libs,/projects/bootcamp/oozie/libs&quot; in the 
workflow element.</p></div>
+<div class="section">
+<h5>Pig<a name="Pig"></a></h5>
+<p>Falcon also adds the Pig engine which enables users to embed a Pig script 
as a process.</p>
+<p>Example:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;workflow engine=&quot;pig&quot; 
path=&quot;/projects/bootcamp/pig.script&quot; 
lib=&quot;/projects/bootcamp/wf/libs,/projects/bootcamp/pig/libs&quot;/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>This defines the workflow engine to be pig and the pig script is defined at 
/projects/bootcamp/pig.script.</p>
+<p>Feeds with Hive table storage will send one more parameter apart from the 
general ones:</p>
+<div class="source">
+<pre>$input_filter
+</pre></div></div>
+<div class="section">
+<h5>Hive<a name="Hive"></a></h5>
+<p>Falcon also adds the Hive engine as part of Hive Integration which enables 
users to embed a Hive script as a process. This would enable users to create 
materialized queries in a declarative way.</p>
+<p>Example:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;workflow engine=&quot;hive&quot; 
path=&quot;/projects/bootcamp/hive-script.hql&quot;/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>This defines the workflow engine to be hive and the hive script is defined 
at /projects/bootcamp/hive-script.hql.</p>
+<p>Feeds with Hive table storage will send one more parameter apart from the 
general ones:</p>
+<div class="source">
+<pre>$input_filter
+</pre></div></div>
+<div class="section">
+<h5>Spark<a name="Spark"></a></h5>
+<p>Falcon also adds the Spark engine as part of Spark Integration, which enables users to run a Java/Python Spark application as a process. When the &quot;spark&quot; workflow engine is mentioned, Spark related parameters must be provided through &lt;spark-attributes&gt;. Examples:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;spark-process&quot;&gt;
+...
+    &lt;workflow engine=&quot;spark&quot; 
path=&quot;/resources/action&quot;&gt;
+    &lt;spark-attributes&gt;
+          &lt;master&gt;local&lt;/master&gt;
+          &lt;name&gt;Spark WordCount&lt;/name&gt;
+          &lt;class&gt;org.examples.WordCount&lt;/class&gt;
+          &lt;jar&gt;/resources/action/lib/spark-application.jar&lt;/jar&gt;
+          &lt;spark-opts&gt;--num-executors 1 --driver-memory 
512m&lt;/spark-opts&gt;
+    &lt;/spark-attributes&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>This defines the workflow engine to be spark, and the Java/Python Spark application that needs to be executed must be defined with the &quot;jar&quot; option. There is flexibility to override the Spark master through the process entity to either &quot;yarn-client&quot; or &quot;yarn-cluster&quot;, if a spark interface is already defined in the cluster entity. Input and output data for the Spark application will be set as arguments when the Spark workflow is generated, if input and output feed entities are defined in the process entity. In the set of arguments, the first argument will always correspond to the input feed, the second argument will always correspond to the output feed, and then the user's provided arguments will be set.</p>
+<p>For running a Spark SQL process entity that reads and writes data stored in Hive, the datanucleus jars under the $HIVE_HOME/lib directory and hive-site.xml under the $SPARK_HOME/conf/ directory need to be available on the driver and all executors launched by the YARN cluster. A convenient way to do this is to add them through the --jars and --files options of the spark-opts attribute. Example:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;spark-process&quot;&gt;
+...
+    &lt;workflow engine=&quot;spark&quot; 
path=&quot;/resources/action&quot;&gt;
+    &lt;spark-attributes&gt;
+        &lt;master&gt;local&lt;/master&gt;
+        &lt;name&gt;Spark SQL&lt;/name&gt;
+        &lt;class&gt;org.examples.SparkSQLProcessTable&lt;/class&gt;
+        &lt;jar&gt;/resources/action/lib/spark-application.jar&lt;/jar&gt;
+        &lt;spark-opts&gt;--num-executors 1 --driver-memory 512m --jars 
/usr/local/hive/lib/datanucleus-rdbms.jar,/usr/local/hive/lib/datanucleus-core.jar,/usr/local/hive/lib/datanucleus-api-jdo.jar
 --files /usr/local/spark/conf/hive-site.xml&lt;/spark-opts&gt;
+    &lt;/spark-attributes&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>If input and output feed entities are defined in the process entity, the input and output of the Spark SQL application are passed as arguments when the Spark workflow is generated. If the input feed is of table type, the input table partition, table name and database name are set as input arguments. If the output feed is of table type, the output table partition, table name and database name are set as output arguments. After the input and output arguments, any user-provided arguments follow.</p>
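+<p>As an illustration only (the values below are placeholders describing the order, not literal strings passed by Falcon), an application with one table input and one table output would see an argument list of the form:</p>
+<div class="source">
+<pre>
+&lt;input partition&gt; &lt;input table name&gt; &lt;input database name&gt; &lt;output partition&gt; &lt;output table name&gt; &lt;output database name&gt; &lt;user-provided arguments&gt;
+
+</pre></div></div>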
+<div class="section">
+<h4>Retry<a name="Retry"></a></h4>
+<p>The retry policy defines how workflow failures should be handled. Three retry policies are defined: periodic, exp-backoff (exponential backoff) and final. Depending on the delay and number of attempts, the workflow is re-tried after specific intervals. If the user sets the onTimeout attribute to &quot;true&quot;, retries will also happen for TIMED_OUT instances. Syntax:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+    &lt;retry policy=[retry policy] delay=[retry delay] attempts=[retry 
attempts] onTimeout=[retry onTimeout]/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>Examples:</p>
+<div class="source">
+<pre>
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;retry policy=&quot;periodic&quot; delay=&quot;minutes(10)&quot; 
attempts=&quot;3&quot; onTimeout=&quot;true&quot;/&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>In the example above, the workflow is re-tried after 10 mins, 20 mins and 30 mins. With exponential backoff, the workflow would be re-tried after 10 mins, 20 mins and 40 mins.</p>
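+<p>For comparison, a sketch of the same retry configured with exponential backoff; the attribute values are illustrative:</p>
+<div class="source">
+<pre>
+    &lt;retry policy=&quot;exp-backoff&quot; delay=&quot;minutes(10)&quot; attempts=&quot;3&quot; onTimeout=&quot;false&quot;/&gt;
+
+</pre></div>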
+<p><b>NOTE :</b> If the user does a manual rerun with the -force option (using the instance rerun API), the runId is reset and the user might see more Falcon system retries than configured in the process definition.</p>
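+<p>For example, a forced rerun issued through the CLI might look like the following; the entity name and instance window are placeholders, and the exact option set may vary by release:</p>
+<div class="source">
+<pre>
+$FALCON_HOME/bin/falcon instance -type process -name sample-process -rerun -start &quot;2018-01-01T00:00Z&quot; -end &quot;2018-01-01T01:00Z&quot; -force
+
+</pre></div>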
+<p>To enable retries for feed instances, the user has to set the following properties in runtime.properties:</p>
+<div class="source">
+<pre>
+falcon.retry.policy=periodic
+falcon.retry.delay=minutes(30)
+falcon.retry.attempts=3
+falcon.retry.onTimeout=false
+&lt;verbatim&gt;
+---+++ Late data
+Late data handling defines how the late data should be handled. Each feed is 
defined with a late cut-off value which specifies the time till which late data 
is valid. For example, late cut-off of hours(6) means that data for nth hour 
can get delayed by upto 6 hours. Late data specification in process defines how 
this late data is handled.
+
+Late data policy defines how frequently check is done to detect late data. The 
policies supported are: backoff, exp-backoff(exponention backoff) and final(at 
feed's late cut-off). The policy along with delay defines the interval at which 
late data check is done.
+
+Late input specification for each input defines the workflow that should run 
when late data is detected for that input. 
+
+Syntax:
+&lt;verbatim&gt;
+&lt;process name=&quot;[process name]&quot;&gt;
+...
+    &lt;late-process policy=[late handling policy] delay=[delay]&gt;
+        &lt;late-input input=[input name] workflow-path=[workflow path]/&gt;
+        ...
+    &lt;/late-process&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>Example:</p>
+<div class="source">
+<pre>
+&lt;feed name=&quot;feed1&quot;&gt;
+...
+    &lt;frequency&gt;hours(1)&lt;/frequency&gt;
+    &lt;late-arrival cut-off=&quot;hours(6)&quot;/&gt;
+...
+&lt;/feed&gt;
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;inputs&gt;
+        &lt;input name=&quot;input1&quot; feed=&quot;feed1&quot; 
start=&quot;today(0,0)&quot; end=&quot;today(1,0)&quot;/&gt;
+        ...
+    &lt;/inputs&gt;
+    &lt;late-process policy=&quot;final&quot;&gt;
+        &lt;late-input input=&quot;input1&quot; 
workflow-path=&quot;/projects/bootcamp/workflow/lateinput1&quot; /&gt;
+        ...
+    &lt;/late-process&gt;
+...
+&lt;/process&gt;
+
+</pre></div>
+<p>This late handling specifies that late data detection should run at the feed's late cut-off, which is 6 hours in this case. If there is late data, Falcon runs the workflow specified at /projects/bootcamp/workflow/lateinput1/workflow.xml.</p>
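+<p>For comparison, a sketch of the same late handling configured to check on an exponential-backoff schedule instead of waiting for the feed's cut-off; the delay value is illustrative:</p>
+<div class="source">
+<pre>
+    &lt;late-process policy=&quot;exp-backoff&quot; delay=&quot;hours(1)&quot;&gt;
+        &lt;late-input input=&quot;input1&quot; workflow-path=&quot;/projects/bootcamp/workflow/lateinput1&quot; /&gt;
+    &lt;/late-process&gt;
+
+</pre></div>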
+<p><b>Note:</b> This is currently supported for FileSystem storage only, not for Table storage.</p></div>
+<div class="section">
+<h4>Email Notification<a name="Email_Notification"></a></h4>
+<div class="source">
+<pre>
+    &lt;notification type=&quot;email&quot; to=&quot;bob@xyz.com&quot;/&gt;
+
+</pre></div>
+<p>Specifying the notification element with the &quot;type&quot; property allows users to receive an email notification when a scheduled process instance completes. Multiple recipients can be provided as comma-separated addresses in the &quot;to&quot; property. To send email notifications, ensure that the SMTP parameters are defined in the Falcon startup.properties. Refer to <a href="./FalconEmailNotification.html">Falcon Email Notification</a> for more details.</p>
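+<p>For example, a notification with two recipients (both addresses are placeholders) would look like:</p>
+<div class="source">
+<pre>
+    &lt;notification type=&quot;email&quot; to=&quot;bob@xyz.com,alice@xyz.com&quot;/&gt;
+
+</pre></div></div>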
+<div class="section">
+<h4>ACL<a name="ACL"></a></h4>
+<p>A process has an ACL (Access Control List) that is useful for implementing permission requirements; it provides a way to set different permissions for specific users or named groups.</p>
+<div class="source">
+<pre>
+    &lt;ACL owner=&quot;test-user&quot; group=&quot;test-group&quot; 
permission=&quot;*&quot;/&gt;
+
+</pre></div>
+<p>ACL indicates the Access Control List for this process. owner is the owner of this entity, group is the group that has read access, and permission indicates the permissions.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2018
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/91c68bea/content/0.11/Extensions.html
----------------------------------------------------------------------
diff --git a/content/0.11/Extensions.html b/content/0.11/Extensions.html
new file mode 100644
index 0000000..ea7a34d
--- /dev/null
+++ b/content/0.11/Extensions.html
@@ -0,0 +1,143 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2018-03-12
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20180312" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Falcon Extensions</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Falcon Extensions</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2018-03-12</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.11</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h2>Falcon Extensions<a name="Falcon_Extensions"></a></h2></div>
+<div class="section">
+<h3>Overview<a name="Overview"></a></h3>
+<p>A Falcon extension is a static process template with a parameterized workflow that realizes a specific use case and enables non-programmers to capture and re-use very complex business logic. Extensions are defined in server space. The objective of an extension is to solve a standard data management function that can be invoked as a tool through the standard Falcon features (REST API, CLI and UI access).</p>
+<p>For example:</p>
+<p></p>
+<ul>
+<li>Replicating directories from one HDFS cluster to another (not timed 
partitions)</li>
+<li>Replicating hive metadata (database, table, views, etc.)</li>
+<li>Replicating between HDFS and Hive - either way</li>
+<li>Data masking etc.</li></ul></div>
+<div class="section">
+<h3>Proposal<a name="Proposal"></a></h3>
+<p>Falcon provides a Process abstraction that encapsulates the configuration for a user workflow with scheduling controls. Every extension can be modeled within Falcon as a Process and its dependent feeds, which executes the user workflow periodically. The process and its associated workflow are parameterized. The user provides properties as &lt;name, value&gt; pairs that are substituted by Falcon before scheduling. Falcon translates each extension into a process entity by replacing the parameters in the workflow definition.</p></div>
+<div class="section">
+<h3>Falcon extension artifacts to manage extensions<a 
name="Falcon_extension_artifacts_to_manage_extensions"></a></h3>
+<p>Extension artifacts are published in addons/extensions. Artifacts are expected to be installed on HDFS at the &quot;extension.store.uri&quot; path defined in the startup properties. Each extension is expected to have the artifacts below (see the layout sketch after this list):</p>
+<ul>
+<li>json file under META directory lists all the required and optional 
parameters/arguments for scheduling extension job</li>
+<li>process entity template to be scheduled under resources directory</li>
+<li>parameterized workflow under resources directory</li>
+<li>required libs under the libs directory</li>
+<li>README describing the functionality achieved by extension</li></ul>
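+<p>For illustration only, a hypothetical layout of a single extension under the &quot;extension.store.uri&quot; path; the extension and file names are placeholders, not the names of the packaged artifacts:</p>
+<div class="source">
+<pre>
+&lt;extension.store.uri&gt;/sample-extension/META/sample-extension-properties.json
+&lt;extension.store.uri&gt;/sample-extension/resources/sample-extension-template.xml
+&lt;extension.store.uri&gt;/sample-extension/resources/sample-extension-workflow.xml
+&lt;extension.store.uri&gt;/sample-extension/libs/&lt;required jars&gt;
+&lt;extension.store.uri&gt;/sample-extension/README
+
+</pre></div>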
+<p>REST API and CLI support has been added for extension artifact management on HDFS. Please refer to <a href="./Falconcli/FalconCLI.html">Falcon CLI</a> and <a href="./Restapi/ResourceList.html">REST API</a> for more details.</p></div>
+<div class="section">
+<h3>CLI and REST API support<a name="CLI_and_REST_API_support"></a></h3>
+<p>REST API and CLI support has been added to manage extension jobs and instances.</p>
+<p>Please refer to <a href="./Falconcli/FalconCLI.html">Falcon CLI</a> and <a href="./Restapi/ResourceList.html">REST API</a> for more details on using the CLI and REST APIs for extension job and instance management.</p></div>
+<div class="section">
+<h3>Metrics<a name="Metrics"></a></h3>
+<p>The HDFS mirroring and Hive mirroring extensions capture replication metrics such as TIMETAKEN, BYTESCOPIED and COPY (number of files copied) for each instance and populate them in the GraphDB.</p></div>
+<div class="section">
+<h3>Sample extensions<a name="Sample_extensions"></a></h3>
+<p>Sample extensions are published in addons/extensions.</p></div>
+<div class="section">
+<h3>Types of extensions<a name="Types_of_extensions"></a></h3>
+<p></p>
+<ul>
+<li><a href="./HDFSMirroring.html">HDFS mirroring extension</a></li>
+<li><a href="./HiveMirroring.html">Hive mirroring extension</a></li>
+<li><a href="./HdfsSnapshotMirroring.html">HDFS snapshot based 
mirroring</a></li></ul></div>
+<div class="section">
+<h3>Packaging and installation<a name="Packaging_and_installation"></a></h3>
+<p>This feature is enabled by default but can be disabled by removing the following from the startup properties:</p>
+<div class="source">
+<pre>
+config name: *.application.services
+config value: org.apache.falcon.extensions.ExtensionService
+
+</pre></div>
+<p><a href="./ExtensionService.html">ExtensionService</a> should be added 
before <a href="./ConfigurationStore.html">ConfigurationStore</a> in startup 
properties for application services configuration. For manual installation user 
is expected to update &quot;extension.store.uri&quot; property defined in 
startup properties with HDFS path where the extension artifacts will be copied 
to. Extension artifacts in addons/extensions are packaged in falcon. For manual 
installation once the Falcon Server is setup user is expected to copy the 
extension artifacts under {falcon-server-dir}/extensions to HDFS at 
&quot;extension.store.uri&quot; path defined in startup properties and then 
restart Falcon.</p></div>
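+<p>For example, assuming &quot;extension.store.uri&quot; points to the placeholder path /apps/falcon/extensions, the copy step could be done with standard HDFS commands:</p>
+<div class="source">
+<pre>
+hadoop fs -mkdir -p /apps/falcon/extensions
+hadoop fs -copyFromLocal {falcon-server-dir}/extensions/* /apps/falcon/extensions
+
+</pre></div></div>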
+<div class="section">
+<h3>Migration<a name="Migration"></a></h3>
+<p>The Recipes framework and the HDFS mirroring capability were added in the Apache Falcon 0.6.0 release as client-side logic. With the 0.10 release they were moved to the server side and renamed server-side extensions. Client-side recipes only had CLI support and required certain preparatory steps to work; this is no longer needed from the 0.10 release onwards, as new CLI and REST API support has been provided.</p>
+<p>Migrating to release 0.10 and above is not backward compatible for Recipes: the old Recipe setup and CLIs will not work. For a manual installation, the user is expected to copy the extension artifacts to HDFS; please refer to the &quot;Packaging and installation&quot; section above for more details. Please refer to <a href="./Falconcli/FalconCLI.html">Falcon CLI</a> and <a href="./Restapi/ResourceList.html">REST API</a> for more details on using the CLI and REST APIs for extension job and instance management.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2018
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/falcon/blob/91c68bea/content/0.11/FalconDatabase.html
----------------------------------------------------------------------
diff --git a/content/0.11/FalconDatabase.html b/content/0.11/FalconDatabase.html
new file mode 100644
index 0000000..9007309
--- /dev/null
+++ b/content/0.11/FalconDatabase.html
@@ -0,0 +1,160 @@
+<!DOCTYPE html>
+<!--
+ | Generated by Apache Maven Doxia at 2018-03-12
+ | Rendered using Apache Maven Fluido Skin 1.3.0
+-->
+<html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <meta name="Date-Revision-yyyymmdd" content="20180312" />
+    <meta http-equiv="Content-Language" content="en" />
+    <title>Falcon - Configuring the state store for Falcon</title>
+    <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
+    <link rel="stylesheet" href="./css/site.css" />
+    <link rel="stylesheet" href="./css/print.css" media="print" />
+
+      
+    <script type="text/javascript" 
src="./js/apache-maven-fluido-1.3.0.min.js"></script>
+
+                          
+        
+<script type="text/javascript">$( document ).ready( function() { $( 
'.carousel' ).carousel( { interval: 3500 } ) } );</script>
+          
+            </head>
+        <body class="topBarDisabled">
+          
+                        
+                    
+    
+        <div class="container">
+          <div id="banner">
+        <div class="pull-left">
+                                <div id="bannerLeft">
+                                                                               
                 <img src="images/falcon-logo.png"  alt="Apache Falcon" 
width="200px" height="45px"/>
+                </div>
+                      </div>
+        <div class="pull-right">  </div>
+        <div class="clear"><hr/></div>
+      </div>
+
+      <div id="breadcrumbs">
+        <ul class="breadcrumb">
+                
+                    
+                              <li class="">
+                    <a href="index.html" title="Falcon">
+        Falcon</a>
+        </li>
+      <li class="divider ">/</li>
+        <li class="">Configuring the state store for Falcon</li>
+        
+                
+                    
+                  <li id="publishDate" class="pull-right">Last Published: 
2018-03-12</li> <li class="divider pull-right">|</li>
+              <li id="projectVersion" class="pull-right">Version: 0.11</li>
+            
+                            </ul>
+      </div>
+
+      
+                
+        <div id="bodyColumn" >
+                                  
+            <div class="section">
+<h4>Configuring the state store for Falcon<a 
name="Configuring_the_state_store_for_Falcon"></a></h4>
+<p>You can configure the state store by making changes to <b><i>$FALCON_HOME/conf/statestore.properties</i></b> as follows. You will need to restart the Falcon Server for the changes to take effect.</p>
+<p>The Falcon Server needs to maintain the state of entities and instances in a persistent store for the system to be recoverable. Since Prism only federates, it does not need to maintain any state information. The following properties need to be set in the statestore.properties of each Falcon Server:</p>
+<div class="source">
+<pre>
+######### StateStore Properties #####
+*.falcon.state.store.impl=org.apache.falcon.state.store.jdbc.JDBCStateStore
+*.falcon.statestore.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
+*.falcon.statestore.jdbc.url=jdbc:derby:data/falcon.db
+# StateStore credentials file where username,password and other properties can 
be stored securely.
+# Set this credentials file permission 400 ;the user who starts falcon should 
only have read permission.
+# Give Absolute path to credentials file along with file name or put in 
classpath with file name statestore.credentials.
+# Credentials file should be present either in given location or class path, 
otherwise falcon won't start.
+*.falcon.statestore.credentials.file=
+*.falcon.statestore.jdbc.username=sa
+*.falcon.statestore.jdbc.password=
+*.falcon.statestore.connection.data.source=org.apache.commons.dbcp.BasicDataSource
+# Maximum number of active connections that can be allocated from this pool at 
the same time.
+*.falcon.statestore.pool.max.active.conn=10
+*.falcon.statestore.connection.properties=
+# Indicates the interval (in milliseconds) between eviction runs.
+*.falcon.statestore.validate.db.connection.eviction.interval=300000
+## The number of objects to examine during each run of the idle object evictor 
thread.
+*.falcon.statestore.validate.db.connection.eviction.num=10
+## Creates Falcon DB.
+## If set to true, Falcon creates the DB schema if it does not exist. If the 
DB schema exists is a NOP.
+## If set to false, Falcon does not create the DB schema. If the DB schema 
does not exist it fails start up.
+*.falcon.statestore.create.db.schema=true
+
+</pre></div>
+<p>The <i>*.falcon.statestore.jdbc.url</i> property in statestore.properties determines the DB and data location. All other properties are common across RDBMSs.</p>
+<p><b>NOTE : Although multiple Falcon Servers can share a DB (not applicable 
for Derby DB), it is recommended that you have different DBs for different 
Falcon Servers for better performance.</b></p>
+<p>You will need to create the state DB and tables before starting the Falcon Server. A tool to create the tables comes bundled with the Falcon installation: you can use the <i>falcon-db.sh</i> script to create the tables in the DB. The script needs to be run only for Falcon Servers and can be run by any user that has execute permission on it. The script picks up the DB connection details from <b><i>$FALCON_HOME/conf/statestore.properties</i></b>. Ensure that you have granted the right privileges to the user mentioned in statestore.properties, so the tables can be created.</p>
+<p>You can use the help command to get details on the sub-commands 
supported:</p>
+<div class="source">
+<pre>
+./bin/falcon-db.sh help
+usage:
+      Falcon DB initialization tool currently supports Derby DB/ Mysql
+
+      falcondb help : Display usage for all commands or specified command
+
+      falcondb version : Show Falcon DB version information
+
+      falcondb create &lt;OPTIONS&gt; : Create Falcon DB schema
+                      -run             Confirmation option regarding DB schema 
creation/upgrade
+                      -sqlfile &lt;arg&gt;   Generate SQL script instead of 
creating/upgrading the DB
+                                       schema
+
+      falcondb upgrade &lt;OPTIONS&gt; : Upgrade Falcon DB schema
+                       -run             Confirmation option regarding DB 
schema creation/upgrade
+                       -sqlfile &lt;arg&gt;   Generate SQL script instead of 
creating/upgrading the DB
+                                        schema
+
+
+</pre></div>
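+<p>For example, a sketch of creating the schema directly against the configured database, or generating the SQL script for review instead; the SQL file path is a placeholder:</p>
+<div class="source">
+<pre>
+./bin/falcon-db.sh create -run
+./bin/falcon-db.sh create -sqlfile /tmp/falcon-statestore.sql
+
+</pre></div>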
+<p>Currently, MySQL, PostgreSQL and Derby are supported as state stores. Support may be extended to other DBs in the future. Falcon has been tested against MySQL v5.5 and PostgreSQL v9.5. If you are using MySQL, ensure you also copy mysql-connector-java-&lt;version&gt;.jar under <b><i>$FALCON_HOME/server/webapp/falcon/WEB-INF/lib</i></b> and <b><i>$FALCON_HOME/client/lib</i></b>.</p></div>
+<div class="section">
+<h5>Using Derby as the State Store<a 
name="Using_Derby_as_the_State_Store"></a></h5>
+<p>Using Derby is ideal for QA and staging setups. Falcon comes bundled with a Derby connector, and no explicit setup is required (although you can set it up) in terms of creating the DB or tables. For example,</p>
+<div class="source">
+<pre> *.falcon.statestore.jdbc.url=jdbc:derby:data/falcon.db;create=true 
+</pre></div>
+<p>tells Falcon to use the Derby JDBC connector with data directory $FALCON_HOME/data/ and DB name 'falcon'. If <i>create=true</i> is specified, you do not need to create the DB up front; the database is created if it does not exist.</p></div>
+<div class="section">
+<h5>Using MySQL as the State Store<a 
name="Using_MySQL_as_the_State_Store"></a></h5>
+<p>The jdbc.url property in statestore.properties determines the DB and data 
location. For example,</p>
+<div class="source">
+<pre> *.falcon.statestore.jdbc.url=jdbc:mysql://localhost:3306/falcon 
+</pre></div>
+<p>tells Falcon to use the MySQL JDBC connector for a database named 'falcon' that is accessible at localhost:3306.</p>
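+<p>For reference, a minimal sketch of the MySQL-related entries in statestore.properties, assuming a database named 'falcon' on localhost and the standard MySQL Connector/J driver class; the credentials are placeholders:</p>
+<div class="source">
+<pre>
+*.falcon.statestore.jdbc.driver=com.mysql.jdbc.Driver
+*.falcon.statestore.jdbc.url=jdbc:mysql://localhost:3306/falcon
+*.falcon.statestore.jdbc.username=falcon_user
+*.falcon.statestore.jdbc.password=falcon_password
+
+</pre></div>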
+<p>Note: In production, where falcon.statestore.create.db.schema is set to false, the schema has to be created manually before the first start.</p></div>
+                  </div>
+          </div>
+
+    <hr/>
+
+    <footer>
+            <div class="container">
+              <div class="row span12">Copyright &copy;                    
2013-2018
+                        <a href="http://www.apache.org";>Apache Software 
Foundation</a>.
+            All Rights Reserved.      
+                    
+      </div>
+
+                          
+                <p id="poweredBy" class="pull-right">
+                          <a href="http://maven.apache.org/"; title="Built by 
Maven" class="poweredBy">
+        <img class="builtBy" alt="Built by Maven" 
src="./images/logos/maven-feather.png" />
+      </a>
+              </p>
+        
+                </div>
+    </footer>
+  </body>
+</html>

Reply via email to