Added: falcon/trunk/releases/0.8/src/site/twiki/FalconDocumentation.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/FalconDocumentation.twiki?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/FalconDocumentation.twiki (added)
+++ falcon/trunk/releases/0.8/src/site/twiki/FalconDocumentation.twiki Tue Nov 
17 01:22:04 2015
@@ -0,0 +1,766 @@
+---++ Contents
+   * <a href="#Architecture">Architecture</a>
+   * <a href="#Control_flow">Control flow</a>
+   * <a href="#Modes_Of_Deployment">Modes Of Deployment</a>
+   * <a href="#Entity_Management_actions">Entity Management actions</a>
+   * <a href="#Instance_Management_actions">Instance Management actions</a>
+   * <a href="#Retention">Retention</a>
+   * <a href="#Replication">Replication</a>
+   * <a href="#Cross_entity_validations">Cross entity validations</a>
+   * <a href="#Updating_process_and_feed_definition">Updating process and feed 
definition</a>
+   * <a href="#Handling_late_input_data">Handling late input data</a>
+   * <a href="#Idempotency">Idempotency</a>
+   * <a href="#Falcon_EL_Expressions">Falcon EL Expressions</a>
+   * <a href="#Lineage">Lineage</a>
+   * <a href="#Security">Security</a>
+   * <a href="#Recipes">Recipes</a>
+   * <a href="#Monitoring">Monitoring</a>
+   * <a href="#Email_Notification">Email Notification</a>
+   * <a href="#Backwards_Compatibility">Backwards Compatibility 
Instructions</a>
+   * <a href="#Proxyuser_support">Proxyuser support</a>
+
+---++ Architecture
+
+---+++ Introduction
+Falcon is a feed and process management platform over Hadoop. Falcon essentially transforms users' feed
+and process configurations into repeated actions through a standard workflow engine. Falcon by itself
+doesn't do any heavy lifting. All the functions and workflow state management requirements are delegated
+to the workflow scheduler. The only things that Falcon maintains are the dependencies and relationships between
+these entities. This is adequate to provide an integrated and seamless experience to developers using
+the falcon platform.
+
+---+++ Falcon Architecture - Overview
+<img src="Architecture.png" height="400" width="600" />
+
+---+++ Scheduler
+Falcon uses Oozie as its default scheduler. However, the system is open to integration with
+other schedulers. Much of the data processing in Hadoop requires scheduling based on both data availability
+and time. Oozie supports these capabilities off the shelf, hence the choice.
+
+---+++ Control flow
+Though the actual responsibility of the workflow is with the scheduler (Oozie), Falcon remains in the
+execution path by subscribing to messages that each of the workflows may generate. When Falcon generates a
+workflow in Oozie, it does so after instrumenting the workflow with additional steps, which include messaging
+via JMS. The Falcon system itself subscribes to these control messages and can perform actions such as retries,
+handling late input arrival, etc.
+
+
+---++++ Feed Schedule flow
+<img src="FeedSchedule.png" height="400" width="600" />
+
+---++++ Process Schedule flow
+<img src="ProcessSchedule.png" height="400" width="600" />
+
+
+
+---++ Modes Of Deployment
+A Falcon setup has two basic components: Falcon Prism and Falcon Server.
+As the name suggests, Falcon Prism splits the requests it receives across the Falcon servers. More details below:
+
+---+++ Stand Alone Mode
+Standalone mode is useful when the hadoop jobs and relevant data processing involve only one hadoop cluster.
+In this mode there is a single Falcon server that contacts Oozie to schedule jobs on Hadoop.
+All the process/feed requests like submit, schedule, suspend, kill etc. are sent to this server.
+To run Falcon in this mode, one should use a Falcon package built with the standalone option.
+
+---+++ Distributed Mode
+Distributed mode is for multiple (colos) instances of hadoop clusters, and 
multiple workflow schedulers to handle them.
+In this mode falcon has 2 components: Prism and Server(s).
+Both Prism and servers have their own setup (runtime and startup properties) 
and their own config locations.
+In this mode Prism acts as a contact point for Falcon servers.
+While all commands are available through Prism, only read and instance api's 
are available through Server.
+Below are the requests that can be sent to each of these:
+
+ Prism: submit, schedule, submitAndSchedule, Suspend, Resume, Kill, instance 
management
+ Server: schedule, suspend, resume, instance management
+ 
+As observed above submit and kill are kept exclusively as Prism operations to 
keep all the config stores in sync and to support feature of idempotency.
+Request may also be sent from prism but directed to a specific server using 
the option "-colo" from CLI or append the same in web request, if using API.
+
+When a cluster is submitted, it is by default sent to all the servers configured in the prism.
+When a feed is submitted / scheduled, the request is only sent to the servers specified in the feed / process definitions. Servers are mentioned in the feed / process via CLUSTER tags in the xml definition.
+
+Communication between the prism and a falcon server (for the submit/update entity functions) is secured over https:// using client-certificate based auth. The prism
+needs to present a valid client certificate for the falcon server to accept the action.
+
+The startup property file in both the falcon and prism servers needs to be configured with the following properties if TLS is enabled:
+* keystore.file
+* keystore.password
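+
+As an illustration only, these might appear in startup.properties roughly as follows (the path and password are placeholders; the *. prefix is assumed from the property convention used elsewhere in these docs):
+<verbatim>
+*.keystore.file=/path/to/falcon.jks
+*.keystore.password=changeit
+</verbatim>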
+
+---++++ Prism Setup
+<img src="PrismSetup.png" height="400" width="600" />
+ 
+---+++ Configuration Store
+The configuration store is a file-system-based store in which the Falcon system maintains the entity definitions.
+The file system used for the configuration store can be either a local file system or HDFS.
+It is recommended that the store be maintained outside of the system where Falcon is deployed. This is needed
+for handling issues relating to disk failures or other permanent failures of the system where Falcon is deployed.
+The configuration store also maintains an archive location where prior versions of the configuration or deleted
+configurations are kept. They are never accessed by the Falcon system and merely serve to track
+historical changes to the entity definitions.
+
+---+++ Atomic Actions
+Often, when Falcon performs entity management actions, it may need to do several individual actions.
+If one of the actions were to fail, the system could be left in an inconsistent state. To avoid this, all
+individual operations performed are recorded into a transaction journal. This journal is then used to undo
+the overall user action. In some cases, it is not possible to undo the action. In such cases, Falcon attempts
+to keep the system in a consistent state.
+
+---+++ Storage
+Falcon introduces a new abstraction to encapsulate the storage for a given feed, which can be expressed
+either as a path on the file system (File System Storage) or as a table in a catalog such as Hive (Catalog Storage).
+
+<verbatim>
+    <xs:choice minOccurs="1" maxOccurs="1">
+        <xs:element type="locations" name="locations"/>
+        <xs:element type="catalog-table" name="table"/>
+    </xs:choice>
+</verbatim>
+
+A feed should contain one of the two storage options: locations on the file system, or a table in a catalog.
+
+---++++ File System Storage
+
+This is expressed as a location on the file system. A location specifies where the feed is available on this cluster.
+A location tag specifies the type of location, like data, meta or stats, and the corresponding path for it.
+A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is
+generated periodically, e.g. type="data" path="/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic".
+The granularity of the date pattern in the path should be at least that of the frequency of the feed.
+
+<verbatim>
+ <location type="data" path="/projects/falcon/clicks" />
+ <location type="stats" path="/projects/falcon/clicksStats" />
+ <location type="meta" path="/projects/falcon/clicksMetaData" />
+</verbatim>
+
+---++++ Catalog Storage (Table)
+
+A table tag specifies the table URI in the catalog registry as:
+<verbatim>
+catalog:$database-name:$table-name#partition-key=partition-value;partition-key=partition-value;...
+</verbatim>
+
+This is modeled as a URI (similar to an ISBN URI). It does not have any reference to Hive or HCatalog. It's quite
+generic, so it can be tied to other implementations of a catalog registry. The catalog implementation specified
+in the startup config provides the implementation for the catalog URI.
+
+The top-level partition has to be a dated pattern and the granularity of the date pattern should be at least that
+of the frequency of the feed.
+
+Examples:
+<verbatim>
+<table 
uri="catalog:default:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR};region=${region}"
 />
+<table 
uri="catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+<table 
uri="catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+</verbatim>
+
+
+---++ Entity Management actions
+All the following operations can also be performed using [[restapi/ResourceList][Falcon's RESTful API]].
+
+---+++ Submit
+The entity submit action allows a new cluster/feed/process to be set up within Falcon. A submitted entity is not
+scheduled, meaning it simply resides in the configuration store within Falcon. Besides validating against
+the schema for the corresponding entity type, the Falcon system also performs inter-field
+validations within the configuration file and validations across dependent entities.
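+
+An illustrative CLI call for submitting an entity, in the same style as the other CLI examples in these docs (the file path is a placeholder):
+<verbatim>
+$FALCON_HOME/bin/falcon entity -submit -type feed -file /path/to/feed.xml
+</verbatim>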
+
+---+++ List
+List all the entities within the falcon config store for the entity type being 
requested. This will include
+both scheduled and submitted entity configurations.
+
+---+++ Dependency
+Returns the dependencies of the requested entity. The dependency list includes both forward and backward
+dependencies (depends on & is dependent on). For example, a feed would show the processes that are dependent on the
+feed and the clusters that it depends on.
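+
+Illustrative CLI calls for the List and Dependency actions above (entity names are placeholders; exact options may vary by release):
+<verbatim>
+$FALCON_HOME/bin/falcon entity -type feed -list
+$FALCON_HOME/bin/falcon entity -type feed -name sample-feed -dependency
+</verbatim>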
+
+---+++ Schedule
+Feeds or Processes that are already submitted and present in the config store 
can be scheduled. Upon schedule,
+Falcon system wraps the required repeatable action as a bundle of oozie 
coordinators and executes them on the
+Oozie scheduler. (It is possible to extend Falcon to use an alternate workflow 
engine other than Oozie).
+Falcon overrides the workflow instance's external id in Oozie to reflect the 
process/feed and the nominal
+time. This external Id can then be used for instance management functions.
+
+The schedule copies the user specified workflow and library to a staging path, 
and the scheduler references the workflow
+and lib from the staging path.
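+
+An illustrative CLI call for scheduling a submitted entity (the entity name is a placeholder):
+<verbatim>
+$FALCON_HOME/bin/falcon entity -type process -name sample-process -schedule
+</verbatim>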
+
+---+++ Suspend
+This action is applicable only to a scheduled entity. It triggers suspend on the oozie bundle that was
+scheduled earlier through the schedule function. No further instances are executed on a suspended process/feed.
+
+---+++ Resume
+Puts a suspended process/feed back to active, which in turn resumes the applicable oozie bundle.
+
+---+++ Status
+Gets the current status of the entity.
+
+---+++ Definition
+Gets the current entity definition as stored in the configuration store. Please note that user documentation
+(comments) in the entity will not be retained.
+
+---+++ Delete
+The delete operation on an entity removes any scheduled activity on the workflow engine, besides removing the
+entity from the falcon configuration store. A delete operation on an entity will only succeed if there are
+no other entities that depend on the deleted entity.
+
+---+++ Update
+The update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently
+not allowed. A feed update can cause a cascading update to all the processes already scheduled for that feed. A process
+update triggers an update of its scheduled workflow in falcon. The following set of actions is performed in the scheduler to realize an update:
+   * Update the old scheduled entity to set the end time to "now"
+   * Schedule as per the new process/feed definition with the start time as "now"
+
+---++ Instance Management actions
+
+The Instance Manager gives the user the option to control individual instances of a process based on their instance start time (the start time of that instance). The start time needs to be given in standard TZ format. Example: 01 Jan 2012 01:00 => 2012-01-01T01:00Z
+
+All the instance management operations (except running) allow a single instance or a list of instances within a date range to be acted on. Make sure the dates are valid, i.e. within the start and end time of the process itself.
+
+For every query in instance management the process name is a compulsory parameter.
+
+The parameters -start and -end are used to specify the date range within which you want the instances to be operated upon.
+
+-start: using only "-start" without "-end" will conduct the desired operation only on the single instance given by the date passed with -start.
+
+-end: "-end" can only be used along with "-start". It corresponds to the end date till which instances need to be operated upon.
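+
+An illustrative CLI call showing -start and -end in use for an instance status query (the process name and dates are placeholders):
+<verbatim>
+$FALCON_HOME/bin/falcon instance -type process -name sample-process -status -start "2012-01-01T01:00Z" -end "2012-01-01T05:00Z"
+</verbatim>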
+
+   * 1. *status*: the -status option via CLI can be used to get the status of a single instance or of multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status of the instance, the log location is also returned.
+
+   * 2. *running*: -running returns all the running instances of the process. It does not take any start or end dates but simply returns all the instances in state RUNNING at that given time.
+
+   * 3. *rerun*: -rerun is the option that you will use most often from instance management. As the name suggests, this option is used to rerun a particular instance or instances of the process. The rerun option reruns the parent workflow for the instance, which in turn reruns all the sub-workflows for it. This option is valid for any instance in a terminal state, i.e. KILLED, SUCCEEDED or FAILED. The user can also set properties in the request that control which types of actions should be rerun, e.g. only failed ones, all of them, etc. These properties are dependent on the workflow engine being used along with falcon. (A CLI sketch follows this list.)
+
+   * 4. *suspend*: -suspend is used to suspend an instance or instances of the given process. This option pauses the parent workflow at the state it was in at the time this command was executed. This command is similar to the SUSPEND process command in functionality, the only difference being that SUSPEND process suspends all instances whereas suspend instance suspends only that instance or the instances in the range.
+
+   * 5. *resume*: the -resume option is used to resume any instance that is in the suspended state. (Note: due to a bug in oozie the -resume option in some cases may not actually resume the suspended instance/instances.)
+   * 6. *kill*: the -kill option can be used to kill an instance or multiple instances.
+
+   * 7. *summary*: the -summary option via CLI can be used to get the consolidated status of the instances between the specified time period. Each status, along with the corresponding instance count, is listed for each of the applicable colos.
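+
+An illustrative CLI call for rerunning instances of a process within a time window (the process name and dates are placeholders):
+<verbatim>
+$FALCON_HOME/bin/falcon instance -type process -name sample-process -rerun -start "2012-01-01T01:00Z" -end "2012-01-01T02:00Z"
+</verbatim>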
+
+
+In all the cases where your request is syntactically correct but logically not, the instance / instances are returned with the same status as earlier. Example: trying to resume a KILLED / SUCCEEDED instance will return the instance with status KILLED / SUCCEEDED, without actually performing any operation. This is because only an instance in the SUSPENDED state can be resumed. The same holds for rerunning a SUSPENDED or RUNNING instance, etc.
+
+---++ Retention
+In coherence with its feed lifecycle management philosophy, Falcon allows the user to retain data in the system
+for a specific period of time for a scheduled feed. The user can specify the retention period in the respective
+feed/data xml in the following manner for each cluster the feed can belong to:
+<verbatim>
+<clusters>
+        <cluster name="corp" type="source">
+            <validity start="2012-01-30T00:00Z" end="2013-03-31T23:59Z"
+                      timezone="UTC" />
+            <retention limit="hours(10)" action="delete" /> 
+        </cluster>
+ </clusters> 
+</verbatim>
+
+The 'limit' attribute can be specified in units of minutes/hours/days/months, and a corresponding numeric value can
+be attached to it. It essentially instructs the system to retain data up to the time specified
+in the attribute, spanning backwards in time from now. Any data older than that is erased from the system.
+
+With the integration of Hive, Falcon also provides retention for tables in 
Hive catalog.
+
+---+++ Example:
+If the retention period is 10 hours, and the policy kicks in at time 't', the data retained by the system is essentially the
+data with timestamps after or equal to t-10h. Any data before t-10h is removed from the system.
+
+The 'action' attribute can attain values of DELETE/ARCHIVE. Based upon the tag 
value, the data eligible for removal is
+either deleted/archived.
+
+---+++ NOTE: Falcon 0.1/0.2 releases support Delete operation only
+
+---+++ When does retention policy come into play, aka when is retention really 
performed?
+
+Retention policy in Falcon kicks off on the basis of the time value specified 
by the user. Here are the basic rules:
+
+   * If the retention policy specified is less than 24 hours: In this event, 
the retention policy automatically kicks off every 6 hours.
+   * If the retention policy specified is more than 24 hours: In this event, 
the retention policy automatically kicks off every 24 hours.
+   * As soon as a feed is successfully scheduled: the retention policy is 
triggered immediately regardless of the current timestamp/state of the system.
+
+Relation between feed path and retention policy: Retention policy for a 
particular scheduled feed applies only to the eligible feed path
+specified in the feed xml. Any other paths that do not conform to the 
specified feed path are left unaffected by the retention policy.
+
+---++ Replication
+Falcon's feed lifecycle management also supports feed replication across different clusters out-of-the-box.
+Multiple source clusters and target clusters can be defined in the feed definition. Falcon replicates the data using
+hadoop's distcp version 2 across different clusters whenever a feed is scheduled.
+
+The frequency at which the data is replicated is governed by the frequency specified in the feed definition.
+Ideally, the feed's data path should have the same granularity as the frequency of the feed, i.e. if the frequency of the feed is hours(3), then the data path should go down to the level /${YEAR}/${MONTH}/${DAY}/${HOUR}.
+<verbatim>
+    <clusters>
+        <cluster name="sourceCluster1" type="source" 
partition="${cluster.name}" delay="minutes(40)">
+            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
+        </cluster>
+        <cluster name="sourceCluster2" type="source" 
partition="COUNTRY/${cluster.name}">
+            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
+        </cluster>
+        <cluster name="backupCluster" type="target">
+            <validity start="2011-11-01T00:00Z" end="2011-12-31T00:00Z"/>
+        </cluster>
+    </clusters>
+</verbatim>
+
+If more than one source cluster is defined, then a partition expression is compulsory; a partition can also contain a constant.
+The expression is required to avoid copying data from different source locations to the same target location;
+also, only the data in the partition is considered for replication if it is present. The number of partitions defined in the
+cluster should be less than or equal to the number of partitions declared in the feed definition.
+
+Falcon uses a pull-based replication mechanism, meaning that in every target cluster, for a given source cluster,
+a coordinator is scheduled which pulls the data from the source cluster using distcp. So in the above example,
+2 coordinators are scheduled in backupCluster, one which pulls the data from sourceCluster1 and another
+from sourceCluster2. Also, for every feed instance which is replicated Falcon sends a JMS message on success or
+failure of the replication instance.
+
+Replication can be scheduled with a past date; the time frame considered for replication is the minimum
+overlapping window of the start and end times of the source and target clusters. E.g. if s1 and e1 are the start and end times
+of the source cluster respectively, and s2 and e2 those of the target cluster, then the coordinator is scheduled in the
+target cluster with start time max(s1,s2) and end time min(e1,e2).
+
+A feed can also optionally specify a delay for replication instances in the cluster tag; the delay governs when each
+replication instance runs. If the frequency of the feed is hours(2) and the delay is hours(1), then the replication
+instance will run every 2 hours and replicate data with an offset of 1 hour, i.e. at 09:00 UTC the feed instance which
+is eligible for replication is the 08:00 instance, at 11:00 UTC the 10:00 UTC feed instance is eligible, and so on.
+
+If it is required to capture feed replication metrics like TIMETAKEN, COPY and BYTESCOPIED, set the parameter "job.counter" to "true"
+in the feed entity's properties section. Metrics captured from the instance will be populated to the GraphDB for display on the UI.
+
+*Example:*
+<verbatim>
+<properties>
+        <property name="job.counter" value="true" />
+</properties>
+</verbatim>
+
+---+++ Where is the feed path defined for File System Storage?
+
+It's defined in the feed xml within the location tag.
+
+*Example:*
+<verbatim>
+<locations>
+        <location type="data" 
path="/retention/testFolders/${YEAR}-${MONTH}-${DAY}" />
+</locations>
+</verbatim>
+
+Now, if the above path contains folders in the following fashion:
+
+/retention/testFolders/${YEAR}-${MONTH}-${DAY}
+/retention/testFolders/${YEAR}-${MONTH}/someFolder
+
+The feed retention policy would only act on the former and not the latter.
+
+Users may choose to override the feed path specific to a cluster, so every 
cluster
+may have a different feed path.
+*Example:*
+<verbatim>
+<clusters>
+        <cluster name="testCluster" type="source">
+            <validity start="2011-11-01T00:00Z" end="2011-12-31T00:00Z"/>
+                       <locations>
+                       <location type="data" 
path="/projects/falcon/clicks/${YEAR}-${MONTH}-${DAY}" />
+                       <location type="stats" 
path="/projects/falcon/clicksStats/${YEAR}-${MONTH}-${DAY}" />
+                       <location type="meta" 
path="/projects/falcon/clicksMetaData/${YEAR}-${MONTH}-${DAY}" />
+               </locations>
+        </cluster>
+    </clusters>
+</verbatim>
+
+---+++ Hive Table Replication
+
+With the integration of Hive, Falcon adds table replication of Hive catalog 
tables. Replication will be triggered
+for a partition when the partition is complete at the source.
+
+   * Falcon will use HCatalog (Hive) API to export the data for a given table 
and the partition,
+which will result in a data collection that includes metadata on the data's 
storage format, the schema,
+how the data is sorted, what table the data came from, and values of any 
partition keys from that table.
+   * Falcon will use the distcp tool to copy the exported data collection into a staging
+directory used by Falcon on the secondary cluster.
+   * Falcon will then import the data into HCatalog (Hive) using the HCatalog 
(Hive) API. If the specified table does
+not yet exist, Falcon will create it, using the information in the imported 
metadata to set defaults for the table
+such as schema, storage format, etc.
+   * The partition is not complete and hence not visible to users until all the data is committed on the secondary
+cluster (no dirty reads).
+
+
+---+++ Archival as Replication
+
+Falcon allows users to archive data from on-premise to cloud, either Azure WASB or S3.
+It uses the underlying replication for archiving data from source to target. The archival URI is
+specified as the overridden location for the target cluster.
+
+*Example:*
+<verbatim>
+    <clusters>
+        <cluster name="on-premise-cluster" type="source">
+            <validity start="2021-11-01T00:00Z" end="2021-12-31T00:00Z"/>
+        </cluster>
+        <cluster name="cloud-cluster" type="target">
+            <validity start="2011-11-01T00:00Z" end="2011-12-31T00:00Z"/>
+            <locations>
+                <location type="data"
+                          path="wasb://<container>@<account>.blob.core.windows.net/data/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
+            </locations>
+        </cluster>
+    </clusters>
+</verbatim>
+
+---+++ Relation between feed's retention limit and feed's late arrival cut off 
period:
+
+For obvious reasons, Falcon has a validation that ensures that the user
+always specifies a feed retention limit greater than the feed's allowed late arrival period.
+If this rule is violated by the user, the feed submission call itself throws back an error.
+
+
+---++ Cross entity validations
+
+
+---+++ Entity Dependencies in a nutshell
+<img src="EntityDependency.png" height="50" width="300" />
+
+
+The above schematic shows the dependencies between entities in Falcon. The arrow in the above diagram
+points from a dependency to its dependent.
+
+
+Let's state one simple rule here, which we will keep referring to time and again while
+talking about entities: a dependency in the system cannot be removed unless all its dependents are
+removed first. This also holds true for all transitive dependencies.
+
+Now, let's follow it up with a simple illustration of a Falcon job:
+
+Let's consider a process P that refers to feed F1 as an input feed and generates feed F2 as an
+output feed. These feeds/processes are supposed to be associated with a cluster C1.
+
+The order of submission of this job would be in the following order:
+
+C1->F1/F2(in any order)->P
+
+The order of removal of this job from the system is in the exact opposite 
order, i.e.:
+
+P->F1/F2(in any order)->C1
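+
+An illustrative sequence of CLI calls reflecting this ordering (file paths and entity names are placeholders):
+<verbatim>
+# submission: cluster first, then feeds, then the process
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file C1.xml
+$FALCON_HOME/bin/falcon entity -submit -type feed -file F1.xml
+$FALCON_HOME/bin/falcon entity -submit -type feed -file F2.xml
+$FALCON_HOME/bin/falcon entity -submit -type process -file P.xml
+
+# removal: exact opposite order
+$FALCON_HOME/bin/falcon entity -type process -name P -delete
+$FALCON_HOME/bin/falcon entity -type feed -name F1 -delete
+$FALCON_HOME/bin/falcon entity -type feed -name F2 -delete
+$FALCON_HOME/bin/falcon entity -type cluster -name C1 -delete
+</verbatim>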
+
+Please note that there might be multiple processes referring to a particular feed, or a single feed belonging
+to multiple clusters. In that event, none of the dependencies can be removed unless ALL of their dependents
+are removed first. Attempting to do so will result in an error message and a 400 Bad Request response.
+
+
+---+++ Other cross validations between entities in Falcon system
+
+*Cluster-Feed Cross validations:*
+
+   * The cluster(s) referenced by a feed (inside the <clusters> tag) should be present in the system at the time
+of submission. Any exception to this results in a feed submission failure. Note that a feed might be referring
+to more than a single cluster. The identifier for the same is the 'name' attribute of the individual cluster.
+
+*Example:*
+
+*Feed XML:*
+   
+<verbatim>
+   <clusters>
+        <cluster name="corp" type="source">
+            <validity start="2009-01-01T00:00Z" end="2012-12-31T23:59Z"
+                      timezone="UTC" />
+            <retention limit="months(6)" action="delete" />
+        </cluster>
+    </clusters>
+</verbatim>
+
+*Cluster corp's XML:*
+
+<verbatim>
+<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+</verbatim>
+
+*Cluster-Process Cross validations:*
+
+
+   * In a similar relationship to that of a feed and a cluster, a process also refers to the relevant cluster by the
+'name' attribute. Any exception results in a process submission failure.
+
+
+---+++ Example:
+---+++ Process XML:
+<verbatim>
+<process name="agregator-coord16">
+    <cluster name="corp"/>....
+</verbatim>
+---+++ Cluster corp's XML:
+<verbatim>
+<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+</verbatim>
+
+*Feed-Process Cross Validations:*
+
+
+1. The process <input> and feeds designated as input feeds for the job:
+
+ For every feed referenced in the <input> tag in a process definition, the following rules are applied
+when the process is due for submission:
+
+   * The feed referenced by the 'feed' attribute in the input tag should be present in
+the system. The corresponding attribute in the feed definition is the 'name' attribute in the <feed> tag.
+
+*Example:*
+
+*Process xml:*
+
+<verbatim>
+<input end-instance="now(0,20)" start-instance="now(0,-60)"
+feed="raw-logs16" name="inputData"/>
+</verbatim>
+
+*Feed xml:*
+<verbatim>
+<feed description="clicks log" name="raw-logs16"....
+</verbatim>
+
+
+   * The time interpretation of the corresponding tags indicating the start and end instances for a
+particular input feed in the process xml should lie well within the time span of the period specified in the
+<validity> tag of the particular feed.
+
+*Example:*
+
+1. In the following scenario, process submission will result in an error:
+
+*Process XML:*
+<verbatim>
+<input end-instance="now(0,20)" start-instance="now(0,-60)"
+   feed="raw-logs16" name="inputData"/>
+</verbatim>
+*Feed XML:*
+<verbatim>
+<validity start="2009-01-01T00:00Z" end="2009-12-31T23:59Z".....
+</verbatim>
+Explanation: The process timelines for the feed range over the interval [-60m, +20m] relative to
+the current timestamp (which, let's assume, is 'today' as per the 'now' directive). However, the feed validity
+is a one-year period in 2009, which makes it anachronistic.
+
+2. The following example would work just fine:
+
+*Process XML:*
+<verbatim>
+<input end-instance="now(0,20)" start-instance="now(0,-60)"
+   feed="raw-logs16" name="inputData"/>
+</verbatim>
+*Feed XML:*
+<verbatim>
+<validity start="2009-01-01T00:00Z" end="2012-12-31T23:59Z" .......
+</verbatim>
+since at the time of writing this document (03/03/2012), the feed validity encapsulates the process
+input's start and end instances.
+
+
+Failure to follow any of the above rules would result in a process submission 
failure.
+
+*NOTE:* Even though the above check ensures that the timelines are not anachronistic, if the input data is not
+present in the system for the specified time period, the process can still be submitted and scheduled, but all instances
+created will remain in a WAITING state unless data is actually provided in the cluster.
+
+
+
+---++ Updating process and feed definition
+Any changes in a feed/process can be done by updating its definition. After the update, any new workflows which are to be scheduled after the update call will pick up the new changes. The feed/process name and start time can't be updated. Updating a process triggers an update of the corresponding workflow in the workflow engine. Updating a feed updates the feed's workflows, like retention, replication etc., and also updates the processes that reference the feed.
+
+
+---++ Handling late input data
+The Falcon system can handle late arrival of input data and appropriately re-trigger processing for the affected
+instance. From the perspective of late handling, there are two main configuration parameters, the late-arrival cut-off
+and the late-inputs section in the feed and process entity definitions, that are central. These configurations govern
+how and when the late processing happens. In the current implementation (oozie based) the late handling is very
+simple and basic. The falcon system looks at all dependent input feeds for a process and computes the max late
+cut-off period. It then uses a scheduled messaging framework, like the one available in Apache ActiveMQ or Java's !DelayQueue, to schedule a message with the
+cut-off period. After the cut-off period the message is dequeued and Falcon checks for changes in the feed data, which is recorded in HDFS in a latedata file
+by falcon's "record-size" action. If it detects any changes, the workflow will be rerun with the new set of feed data.
+
+*Example:*
+For a process entity, the late rerun policy can be configured in the process definition.
+Falcon supports 3 policies: periodic, exp-backoff and final.
+Delay specifies how often the feed data should be checked for changes; one also needs to
+explicitly set, in late-input, the feed names which need to be checked for late data.
+<verbatim>
+  <late-process policy="exp-backoff" delay="hours(1)">
+        <late-input input="impression" 
workflow-path="hdfs://impression/late/workflow" />
+        <late-input input="clicks" workflow-path="hdfs://clicks/late/workflow" 
/>
+   </late-process>
+</verbatim>
+
+*NOTE:* Feeds configured with table storage do not support late input data handling at this point. This will be
+made available in the near future.
+
+For a feed entity replication job, the default late data handling policy can 
be configured in the runtime.properties file.
+Since these properties are runtime.properties, they will take effect for all 
replication jobs completed subsequent to the change.
+<verbatim>
+  # Default configs to handle replication for late arriving feeds.
+  *.feed.late.allowed=true
+  *.feed.late.frequency=hours(3)
+  *.feed.late.policy=exp-backoff
+</verbatim>
+
+
+---++ Idempotency
+All the operations in Falcon are idempotent. That is, if you make the same request to the falcon server / prism again, you will get a SUCCESSFUL return if it was SUCCESSFUL in the first attempt. For example, you submit a new process / feed and get a SUCCESSFUL message in return. Now if you run the same command / api request on the same entity you will again get a SUCCESSFUL message. The same is true for other operations like schedule, kill, suspend and resume.
+Idempotency also takes care of the condition when a request is sent through prism and fails on one or more servers. For example, prism is configured to send requests to 3 servers. First the user sends a request to SUBMIT a process on all 3 of them, and receives a SUCCESSFUL response from all of them. Then, due to some issue, one of the servers goes down, and the user sends a request to schedule the submitted process. This time he will receive a response with PARTIAL status and a FAILURE message from the server that has gone down. If the user checks, he will find the process has been started and is running on the 2 SUCCESSFUL servers. Now the issue with the server is figured out and it is brought up. Sending the SCHEDULE request again through prism will result in a SUCCESSFUL response from prism as well as all three servers, but this time the PROCESS will be SCHEDULED only on the server which had failed earlier; the other two will keep running as before.
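+
+An illustrative example of this behavior from the CLI (the file path is a placeholder): repeating a submit on an entity that was already submitted successfully returns SUCCESSFUL again rather than an error.
+<verbatim>
+$FALCON_HOME/bin/falcon entity -submit -type process -file /path/to/process.xml
+$FALCON_HOME/bin/falcon entity -submit -type process -file /path/to/process.xml
+</verbatim>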
+ 
+
+---++ Falcon EL Expressions
+
+
+The Falcon expression language can be used in a process definition for giving the start and end instances for various feeds.
+
+Before going into how to use falcon EL expressions, it is necessary to understand what instance and instance start time refer to with respect to Falcon.
+
+Let's consider a part of a process definition below:
+
+<verbatim>
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<process name="testProcess">
+    <clusters>
+        <cluster name="corp">
+            <validity start="2010-01-02T01:00Z" end="2011-01-03T03:00Z" />
+        </cluster>
+    </clusters>
+   <parallel>2</parallel>
+   <order>LIFO</order>
+   <timeout>hours(3)</timeout>
+   <frequency>minutes(30)</frequency>
+
+  <inputs>
+ <input end-instance="now(0,20)" start-instance="now(0,-60)"
+                       feed="input-log" name="inputData"/>
+ </inputs>
+<outputs>
+       <output instance="now(0,0)" feed="output-log"
+               name="outputData" />
+</outputs>
+...
+...
+...
+...
+</process>
+</verbatim>
+
+
+The above definition says that the process will start on the 2nd of Jan 2010 at 1 am and will end on the 3rd of Jan 2011 at 3 am on cluster corp. Also, the process will start a user-defined workflow (which we will call an instance) every 30 mins.
+
+This means that starting 2010-01-02T01:00Z, every 30 mins an instance will start and will run the user-defined workflow. Now if this workflow needs some input data and produces some output, the user needs to give that in the <inputs> and <outputs> tags.
+Since the inputs that the process takes can be distributed over a wide range, we bound them by giving a "start" and "end" instance for the input. The output is only one location, so only a single instance is given.
+The timeout specifies how long a given instance should wait for input data before being terminated by the workflow engine.
+
+Coming back to instance start time: since an instance will start every 30 mins starting 2010-01-02T01:00Z, the time it is scheduled to start is called its instance time. For example, the first few instance times for the above example are:
+
+
+<pre>
+Instance Number      instance start Time
+1                    2010-01-02T01:00Z
+2                    2010-01-02T01:30Z
+3                    2010-01-02T02:00Z
+4                    2010-01-02T02:30Z
+.                            .
+.                            .
+</pre>
+
+Now let's go into how to use the expression language. The only thing to keep in mind is that all EL evaluations are done based on the start time of that instance, and every instance will have different inputs / outputs based on the feed instances given in the process definition.
+
+All the parameters in the various ELs can be positive, zero or negative values. Positive values indicate so many units in the future, zero means the base time the EL has been resolved to, and negative values indicate the corresponding units in the past.
+
+__Note: if no instance is created at the resolved time, then the instance 
immediately before it is considered.__
+
+Falcon currently supports the following ELs:
+
+
+   * 1. *now(hours,minutes)*: now refers to the instance start time. Hours and minutes given are in reference to the start time of the instance. For example, now(-2,40) corresponds to the feed instance at -2 hr and +40 minutes, i.e. the feed instance 80 mins before the instance start time. If the user had given now(0,-80) it would correspond to the same.
+   * 2. *today(hours,minutes)*: hours and minutes given in this EL correspond to the instance from the start of the day of the instance start time. I.e. if the instance start is at 2010-01-02T01:30Z, then today(-3,-20) will mean the instance created at 2010-01-01T20:40Z and today(3,20) will correspond to 2010-01-02T03:20Z.
+
+   * 3. *yesterday(hours,minutes)*: as the name suggests, the EL yesterday picks up feed instances with respect to the start of the day yesterday. Hours and minutes are added to the 00 hours starting yesterday. Example: yesterday(24,30) will actually correspond to 00:30 am of today; for 2010-01-02T01:30Z this would mean the 2010-01-02T00:30Z feed.
+
+   * 4. *currentMonth(day,hour,minute)*: currentMonth takes its reference from the start of the month with respect to the instance start time. One thing to keep in mind is that day is added to the first day of the month, so the value of day is the number of days you want to add to the first day of the month. For example: for instance start time 2010-01-12T01:30Z, the EL currentMonth(3,2,40) will correspond to the feed created at 2010-01-04T02:40Z and currentMonth(0,0,0) will mean 2010-01-01T00:00Z.
+
+   * 5. *lastMonth(day,hour,minute)*: the parameters for lastMonth are the same as for currentMonth, the only difference being that the reference is shifted one month back. For instance start 2010-01-12T01:30Z, lastMonth(2,3,30) will correspond to the feed instance at 2009-12-03T03:30Z.
+
+   * 6. *currentYear(month,day,hour,minute)*: the month, day, hour and minutes in the parameters are added with reference to the start of the year of the instance start time. For our example start time 2010-01-02T01:30Z, the reference goes back to 2010-01-01T00:00Z. Also, similar to days, months are added to the 1st month, that is Jan. So currentYear(0,2,2,20) will mean 2010-01-03T02:20Z while currentYear(11,2,2,20) will mean 2010-12-03T02:20Z.
+
+
+   * 7. *lastYear(month,day,hour,minute)*: this is exactly similar to currentYear in usage, the only difference being that the start reference is taken to the start of the previous year. For example: lastYear(4,2,2,20) will correspond to the feed instance created at 2009-05-03T02:20Z and lastYear(12,2,2,20) will correspond to the feed at 2010-01-03T02:20Z.
+
+   * 8. *latest(number of latest instance)*: this will simply make your input consider the latest available instances of the feed given as parameter. For example: latest(0) will consider the last available instance of the feed, whereas latest(-1) will consider the second last available feed instance and latest(-3) will consider the 4th last available feed instance.
+   
+   * 9. *currentWeek(weekDayName,hour,minute)*: this is similar to currentMonth in the sense that it returns a relative time with respect to the instance start time, considering the day name provided as input as the start of the week. The day names can be one of SUN, MON, TUE, WED, THU, FRI, SAT.
+
+   * 10. *lastWeek(weekDayName,hour,minute)*: This is typically 7 days less 
than what the currentWeek returns for similar parameters.
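+
+To tie these ELs to a concrete case, the table below (derived from the examples above) shows how a few expressions resolve for an instance whose start time is 2010-01-02T01:30Z:
+<pre>
+EL expression            resolves to
+now(0,-60)               2010-01-02T00:30Z
+now(0,20)                2010-01-02T01:50Z
+today(-3,-20)            2010-01-01T20:40Z
+yesterday(24,30)         2010-01-02T00:30Z
+currentMonth(0,0,0)      2010-01-01T00:00Z
+currentYear(0,2,2,20)    2010-01-03T02:20Z
+latest(0)                latest available feed instance
+</pre>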
+
+
+---++ Lineage
+
+Falcon adds the ability to capture lineage for both entities and their associated instances. It
+also captures the metadata tags associated with each of the entities as relationships. The
+following relationships are captured:
+
+   * owner of entities - User
+   * data classification tags
+   * groups defined in feeds
+   * Relationships between entities
+      * Clusters associated with Feed and Process entity
+      * Input and Output feeds for a Process
+   * Instances refer to corresponding entities
+
+Lineage is exposed in 3 ways:
+
+   * REST API
+   * CLI
+   * Dashboard - Interactive lineage for Process instances
+
+This feature is enabled by default but can be disabled by removing the following from startup.properties:
+<verbatim>
+config name: *.application.services
+config value: org.apache.falcon.metadata.MetadataMappingService
+</verbatim>
+
+Lineage is only captured for Process executions. A future release will capture 
lineage for
+lifecycle policies such as replication and retention.
+
+---++Security
+
+Security is detailed in [[Security][Security]].
+
+---++ Recipes
+
+Recipes are detailed in [[Recipes][Recipes]].
+
+---++ Monitoring
+
+Monitoring and Operationalizing Falcon is detailed in 
[[Operability][Operability]].
+
+---++ Email Notification
+Notification for instance completion in Falcon is defined in 
[[FalconEmailNotification][Falcon Email Notification]].
+
+---++ Backwards Compatibility
+
+Backwards compatibility instructions are [[Compatibility][detailed here.]]
+
+---++ Proxyuser support
+Falcon supports impersonation or proxyuser functionality (identical to Hadoop 
proxyuser capabilities and conceptually
+similar to Unix 'sudo').
+
+Proxyuser enables Falcon clients to submit entities on behalf of other users. 
Falcon will utilize Hadoop core's hadoop-auth
+module to implement this functionality.
+
+Because proxyuser is a powerful capability, Falcon provides the following 
restriction capabilities (similar to Hadoop):
+
+   * Proxyuser is an explicit configuration on a per proxyuser basis.
+   * A proxyuser user can be restricted to impersonate other users from a set 
of hosts.
+   * A proxyuser user can be restricted to impersonate users belonging to a 
set of groups.
+
+There are 2 configuration properties needed in runtime properties to set up a 
proxyuser:
+   * falcon.service.ProxyUserService.proxyuser.#USER#.hosts: hosts from where 
the user #USER# can impersonate other users.
+   * falcon.service.ProxyUserService.proxyuser.#USER#.groups: groups the users 
being impersonated by user #USER# must belong to.
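+
+A minimal sketch of these two properties in runtime.properties for a hypothetical proxying user "falconadmin" (the user, hosts and groups are placeholders; the *. prefix is assumed from the property convention used elsewhere in these docs):
+<verbatim>
+*.falcon.service.ProxyUserService.proxyuser.falconadmin.hosts=gateway1.example.com,gateway2.example.com
+*.falcon.service.ProxyUserService.proxyuser.falconadmin.groups=etl,analytics
+</verbatim>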
+
+If these configurations are not present, impersonation will not be allowed and the connection will fail. If more lax security is preferred,
+the wildcard value * may be used to allow impersonation from any host or of any user, although this is recommended only for testing/development.
+
+The -doAs option via the CLI, or the doAs query parameter if using the API, can be appended to enable impersonation.
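+
+An illustrative CLI call using -doAs (the entity name and user are placeholders):
+<verbatim>
+$FALCON_HOME/bin/falcon entity -type process -name sample-process -status -doAs joe
+</verbatim>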
+
+
+

Added: falcon/trunk/releases/0.8/src/site/twiki/FalconEmailNotification.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/FalconEmailNotification.twiki?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/FalconEmailNotification.twiki 
(added)
+++ falcon/trunk/releases/0.8/src/site/twiki/FalconEmailNotification.twiki Tue 
Nov 17 01:22:04 2015
@@ -0,0 +1,29 @@
+---++Falcon Email Notification
+
+Falcon Email notification allows sending email notifications when scheduled 
feed/process instances complete.
+Email notification in feed/process entity can be defined as follows:
+<verbatim>
+<process name="[process name]">
+    ...
+    <notification type="email" to="[email protected],[email protected]"/>
+    ...
+</process>
+</verbatim>
+
+   *  *type*    - specifies the type of notification. *Note:* Currently only the "email" notification type is supported.
+   *  *to*  - specifies the address to send notifications to; multiple 
recipients may be provided as a comma-separated list.
+
+
+Falcon email notification requires some SMTP server configuration to be 
defined in startup.properties. Following are the values
+it looks for:
+   * *falcon.email.smtp.host*   - The host where the email action may find the 
SMTP server (localhost by default).
+   * *falcon.email.smtp.port*   - The port to connect to for the SMTP server 
(25 by default).
+   * *falcon.email.from.address*    - The from address to be used for mailing 
all emails (falcon@localhost by default).
+   * *falcon.email.smtp.auth*   - Boolean property that specifies if 
authentication is to be done or not. (false by default).
+   * *falcon.email.smtp.user*   - If authentication is enabled, the username 
to login as (empty by default).
+   * *falcon.email.smtp.password*   - If authentication is enabled, the 
username's password (empty by default).
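+
+As an illustration, these properties might appear in startup.properties roughly as below, using the defaults mentioned above (the *. prefix is assumed from the property convention used elsewhere in these docs):
+<verbatim>
+*.falcon.email.smtp.host=localhost
+*.falcon.email.smtp.port=25
+*.falcon.email.from.address=falcon@localhost
+*.falcon.email.smtp.auth=false
+</verbatim>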
+
+
+
+Also ensure that email notification plugin is enabled in startup.properties to 
send email notifications:
+   * *monitoring.plugins*   - 
org.apache.falcon.plugin.EmailNotificationPlugin,org.apache.falcon.plugin.DefaultMonitoringPlugin
\ No newline at end of file

Added: falcon/trunk/releases/0.8/src/site/twiki/HDFSDR.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/HDFSDR.twiki?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/HDFSDR.twiki (added)
+++ falcon/trunk/releases/0.8/src/site/twiki/HDFSDR.twiki Tue Nov 17 01:22:04 
2015
@@ -0,0 +1,21 @@
+---+ HDFS DR Recipe
+---++ Overview
+Falcon supports HDFS DR recipe to replicate data from source cluster to 
destination cluster.
+
+---++ Usage
+---+++ Setup cluster definition.
+   <verbatim>
+    $FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml
+   </verbatim>
+
+---+++ Update recipes properties
+   Update recipe properties file in addons/recipes/hdfs-replication with 
required attributes for replicating
+   data from source cluster to destination cluster.
+
+---+++ Submit HDFS DR recipe
+   <verbatim>
+    $FALCON_HOME/bin/falcon recipe -name hdfs-replication -operation 
HDFS_REPLICATION
+   </verbatim>
+
+Recipe templates for HDFS DR are available in addons/recipes/hdfs-replication. Copy them to the
+recipe path (*falcon.recipe.path=<recipe directory path>*) specified in client.properties.

Added: falcon/trunk/releases/0.8/src/site/twiki/HiveDR.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/HiveDR.twiki?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/HiveDR.twiki (added)
+++ falcon/trunk/releases/0.8/src/site/twiki/HiveDR.twiki Tue Nov 17 01:22:04 
2015
@@ -0,0 +1,64 @@
+---+Hive Disaster Recovery
+
+
+---++Overview
+Falcon provides a feature to replicate Hive metadata and data events from a source cluster
+to a destination cluster. This is supported for secure and unsecure clusters through Falcon Recipes.
+
+
+---++Prerequisites
+Following are the prerequisites to use Hive DR:
+
+   * *Hive 1.2.0+*
+   * *Oozie 4.2.0+*
+
+*Note:* Set following properties in hive-site.xml for replicating the Hive 
events on source and destination Hive cluster:
+<verbatim>
+    <property>
+        <name>hive.metastore.event.listeners</name>
+        <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
+        <description>event listeners that are notified of any metastore 
changes</description>
+    </property>
+
+    <property>
+        <name>hive.metastore.dml.events</name>
+        <value>true</value>
+    </property>
+</verbatim>
+
+---++ Usage
+---+++ Bootstrap
+   Perform an initial bootstrap of tables and databases from the source cluster to the destination cluster.
+   * *Database Bootstrap*
+     For bootstrapping DB replication, the destination DB should be created first. This step is expected,
+     since DB replication definitions can be set up by users only on pre-existing DBs. Second, export all tables in
+     the source db and import them in the destination db, as described in Table Bootstrap.
+
+   * *Table Bootstrap*
+     For bootstrapping table replication, essentially after having turned on 
the !DbNotificationListener
+     on the source db, perform an Export of the table, distcp the Export over 
to the destination
+     warehouse and do an Import over there. Check the following 
[[https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport][Hive
 Export-Import]] for syntax details
+     and examples.
+     This will set up the destination table so that the events on the source cluster that modify the table
+     will then be replicated. (See the sketch below.)
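+
+An illustrative sketch of the table bootstrap steps using the Hive CLI and distcp (the database, table, paths and cluster hostnames are placeholders; see the linked Hive Export-Import page for the authoritative syntax):
+<verbatim>
+# on the source cluster: export the table (data + metadata)
+hive -e "EXPORT TABLE src_db.customer_raw TO '/tmp/exports/customer_raw';"
+
+# copy the exported collection to the destination cluster's staging area
+hadoop distcp hdfs://source-nn:8020/tmp/exports/customer_raw hdfs://dest-nn:8020/tmp/exports/customer_raw
+
+# on the destination cluster: import the table
+hive -e "IMPORT TABLE dest_db.customer_raw FROM '/tmp/exports/customer_raw';"
+</verbatim>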
+
+---+++ Setup cluster definition
+   <verbatim>
+    $FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml
+   </verbatim>
+
+---+++ Update recipes properties
+   Update recipe properties file in addons/recipes/hive-disaster-recovery with 
required attributes for replicating
+   Hive data and metadata from source cluster to destination cluster.
+
+---+++ Submit Hive DR recipe
+   <verbatim>
+    $FALCON_HOME/bin/falcon recipe -name hive-disaster-recovery -operation 
HIVE_DISASTER_RECOVERY
+   </verbatim>
+
+
+Recipe templates for Hive DR are available in addons/recipes/hive-disaster-recovery. Copy them to the
+recipe path (*falcon.recipe.path=<recipe directory path>*) specified in client.properties.
+
+*Note:* If kerberos security is enabled on cluster, use the secure templates 
for Hive DR from
+        addons/recipes/hive-disaster-recovery

Added: falcon/trunk/releases/0.8/src/site/twiki/HiveIntegration.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/HiveIntegration.twiki?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/HiveIntegration.twiki (added)
+++ falcon/trunk/releases/0.8/src/site/twiki/HiveIntegration.twiki Tue Nov 17 
01:22:04 2015
@@ -0,0 +1,372 @@
+---+ Hive Integration
+
+---++ Overview
+Falcon provides data management functions for feeds declaratively. It allows 
users to represent feed locations as
+time-based partition directories on HDFS containing files.
+
+Hive provides a simple and familiar database-like tabular model of data management to its users,
+which is backed by HDFS. It supports two classes of tables, managed tables and external tables.
+
+Falcon allows users to represent feed locations as Hive tables. Falcon supports both managed and external tables
+and provides data management services for tables such as replication, eviction, archival, etc. Falcon will notify
+HCatalog as a side effect of either acquiring, replicating or evicting a data set instance and adds the
+missing capability of HCatalog table replication.
+
+In the near future, Falcon will allow users to express pipeline processing in 
Hive scripts
+apart from Pig and Oozie workflows.
+
+
+---++ Assumptions
+   * Date is a mandatory first-level partition for Hive tables
+      * Data availability triggers are based on date pattern in Oozie
+   * Tables must be created in Hive prior to adding them as a Feed in Falcon.
+      * Duplicating this in Falcon would create confusion about the real source of truth. Also, propagating schema changes
+    between systems is a hard problem.
+   * Falcon does not know about the encoding of the data and data should be in 
HCatalog supported format.
+
+---++ Configuration
+Falcon provides a system level option to enable Hive integration. Falcon must 
be configured with an implementation
+for the catalog registry. The default implementation for Hive is shipped with 
Falcon.
+
+<verbatim>
+catalog.service.impl=org.apache.falcon.catalog.HiveCatalogService
+</verbatim>
+
+
+---++ Incompatible changes
+Falcon depends heavily on data-availability triggers for scheduling Falcon 
workflows. Oozie must support
+data-availability triggers based on HCatalog partition availability. This is 
only available in oozie 4.x.
+
+Hence, Falcon for Hive support needs Oozie 4.x.
+
+
+---++ Oozie Shared Library setup
+Falcon post Hive integration depends heavily on the 
[[http://oozie.apache.org/docs/4.0.1/WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3][shared
 library feature of Oozie]].
+Since the sheer number of jars for HCatalog, Pig and Hive runs into the many tens, it's quite daunting to
+redistribute the dependent jars from Falcon.
+
+[[http://oozie.apache.org/docs/4.0.1/DG_QuickStart.html#Oozie_Share_Lib_Installation][This
 is a one time effort in Oozie setup and is quite straightforward.]]
+
+
+---++ Approach
+
+---+++ Entity Changes
+
+   * Cluster DSL will have an additional registry-interface section, 
specifying the endpoint for the
+HCatalog server. If this is absent, no HCatalog publication will be done from 
Falcon for this cluster.
+      <verbatim>thrift://hcatalog-server:port</verbatim>
+   * Feed DSL will allow users to specify the URI (location) for HCatalog 
tables as:
+      
<verbatim>catalog:database_name:table_name#partitions(key=value?)*</verbatim>
+   * Failure to publish to HCatalog will be retried (configurable # of retries) with back off. Permanent failures
+   after all the retries are exhausted will fail the Falcon workflow
+
+---+++ Eviction
+
+   * Falcon will construct DDL statements to filter candidate partitions eligible for eviction
+   * Falcon will construct DDL statements to drop the eligible partitions
+   * Additionally, Falcon will nuke the data on HDFS for external tables
+
+
+---+++ Replication
+
+   * Falcon will use HCatalog (Hive) API to export the data for a given table 
and the partition,
+which will result in a data collection that includes metadata on the data's 
storage format, the schema,
+how the data is sorted, what table the data came from, and values of any 
partition keys from that table.
+   * Falcon will use the distcp tool to copy the exported data collection into a staging
+directory used by Falcon on the secondary cluster.
+   * Falcon will then import the data into HCatalog (Hive) using the HCatalog 
(Hive) API. If the specified table does
+not yet exist, Falcon will create it, using the information in the imported 
metadata to set defaults for the
+table such as schema, storage format, etc.
+   * The partition is not complete and hence not visible to users until all the data is committed on the secondary
+cluster (no dirty reads).
+   * The data collection is staged by Falcon and retries for copy continue from where it left off.
+   * Failure to register with Hive will be retried. After all the attempts are exhausted,
+the data will be cleaned up by Falcon.
+
+
+---+++ Security
+The user owns all data managed by Falcon. Falcon runs as the user who 
submitted the feed. Falcon will authenticate
+with HCatalog as the end user who owns the entity and the data.
+
+For Hive managed tables, the table may be owned by the end user or "hive". For "hive"-owned tables,
+the user will have to configure the feed as "hive".
+
+
+---++ Load on HCatalog from Falcon
+It generally depends on the frequency of the feeds configured in Falcon and 
how often data is ingested, replicated,
+or processed.
+
+
+---++ User Impact
+   * There should not be any impact to users due to this integration
+   * Falcon will be fully backwards compatible
+   * Users have a choice to either continue using file-based storage on HDFS, as they do today, or use HCatalog for
+accessing the data in tables
+
+
+---++ Known Limitations
+
+---+++ Oozie
+
+   * Falcon with Hadoop 1.x requires the guava jars to be copied manually to the Oozie sharelib; Hadoop 2.x ships these.
+   * The hcatalog-pig-adapter jar needs to be copied manually to the Oozie sharelib, for example:
+<verbatim>
+bin/hadoop dfs -copyFromLocal 
$LFS/share/lib/hcatalog/hcatalog-pig-adapter-0.5.0-incubating.jar 
share/lib/hcatalog
+</verbatim>
+   * Oozie 4.x with Hadoop-2.x:
+Replication jobs are submitted to Oozie on the destination cluster, and Oozie runs a table export job
+on the ResourceManager of the source cluster. The Oozie server on the target cluster must be configured with the
+source cluster's hadoop configs, else jobs fail on both secure and non-secure clusters with errors such as:
+<verbatim>
+org.apache.hadoop.security.token.SecretManager$InvalidToken: Password not 
found for ApplicationAttempt appattempt_1395965672651_0010_000002
+</verbatim>
+
+Make sure all Oozie servers that Falcon talks to have the hadoop configs configured in oozie-site.xml:
+<verbatim>
+<property>
+      <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
+      
<value>*=/etc/hadoop/conf,arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,arpit-new-falcon-1.cs1cloud.internal:8032=/etc/hadoop-1,arpit-new-falcon-2.cs1cloud.internal:8020=/etc/hadoop-2,arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2,arpit-new-falcon-5.cs1cloud.internal:8020=/etc/hadoop-3,arpit-new-falcon-5.cs1cloud.internal:8032=/etc/hadoop-3</value>
+      <description>
+          Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the 
HOST:PORT of
+          the Hadoop service (JobTracker, HDFS). The wildcard '*' 
configuration is
+          used when there is no exact match for an authority. The 
HADOOP_CONF_DIR contains
+          the relevant Hadoop *-site.xml files. If the path is relative is 
looked within
+          the Oozie configuration directory; though the path can be absolute 
(i.e. to point
+          to Hadoop client conf/ directories in the local filesystem.
+      </description>
+    </property>
+</verbatim>
+
+---+++ Hive
+
+   * Dated Partitions
+Falcon does not work well when a table partition contains multiple dated columns. Falcon only works
+with a single dated partition. This is being tracked in FALCON-357, which is a limitation in Oozie.
+<verbatim>
+catalog:default:table4#year=${YEAR};month=${MONTH};day=${DAY};hour=${HOUR};minute=${MINUTE}
+</verbatim>
+
+   * [[https://issues.apache.org/jira/browse/HIVE-5550][Hive table import 
fails for tables created with default text and sequence file formats using 
HCatalog API]]
+For some arcane reason, Hive substitutes the output format for text and sequence files with a Hive-prefixed class.
+Hive table import then fails, since it compares the input and output formats of the source table against the target
+and they differ. For example, if a table was created without specifying the file format, it defaults to:
+<verbatim>
+fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
+</verbatim>
+
+But when Hive fetches the table from the metastore, it replaces the output format with
+org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, and the comparison between the source and target
+tables fails:
+<verbatim>
+org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable
+      // check IF/OF/Serde
+      String existingifc = table.getInputFormatClass().getName();
+      String importedifc = tableDesc.getInputFormat();
+      String existingofc = table.getOutputFormatClass().getName();
+      String importedofc = tableDesc.getOutputFormat();
+      if ((!existingifc.equals(importedifc))
+          || (!existingofc.equals(importedofc))) {
+        throw new SemanticException(
+            ErrorMsg.INCOMPATIBLE_SCHEMA
+                .getMsg(" Table inputformat/outputformats do not match"));
+      }
+</verbatim>
+The above is not an issue with Hive 0.13.
+
+---++ Hive Examples
+Following is an example entity configuration for lifecycle management 
functions for tables in Hive.
+
+---+++ Hive Table Lifecycle Management - Replication and Retention
+
+---++++ Primary Cluster
+
+<verbatim>
+<?xml version="1.0"?>
+<!--
+    Primary cluster configuration for demo vm
+  -->
+<cluster colo="west-coast" description="Primary Cluster"
+         name="primary-cluster"
+         xmlns="uri:falcon:cluster:0.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+    <interfaces>
+        <interface type="readonly" endpoint="hftp://localhost:10070";
+                   version="1.1.1" />
+        <interface type="write" endpoint="hdfs://localhost:10020"
+                   version="1.1.1" />
+        <interface type="execute" endpoint="localhost:10300"
+                   version="1.1.1" />
+        <interface type="workflow" endpoint="http://localhost:11010/oozie/";
+                   version="4.0.1" />
+        <interface type="registry" endpoint="thrift://localhost:19083"
+                   version="0.11.0" />
+        <interface type="messaging" 
endpoint="tcp://localhost:61616?daemon=true"
+                   version="5.4.3" />
+    </interfaces>
+    <locations>
+        <location name="staging" path="/apps/falcon/staging" />
+        <location name="temp" path="/tmp" />
+        <location name="working" path="/apps/falcon/working" />
+    </locations>
+</cluster>
+</verbatim>
+
+---++++ BCP Cluster
+
+<verbatim>
+<?xml version="1.0"?>
+<!--
+    BCP cluster configuration for demo vm
+  -->
+<cluster colo="east-coast" description="BCP Cluster"
+         name="bcp-cluster"
+         xmlns="uri:falcon:cluster:0.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+    <interfaces>
+        <interface type="readonly" endpoint="hftp://localhost:20070";
+                   version="1.1.1" />
+        <interface type="write" endpoint="hdfs://localhost:20020"
+                   version="1.1.1" />
+        <interface type="execute" endpoint="localhost:20300"
+                   version="1.1.1" />
+        <interface type="workflow" endpoint="http://localhost:11020/oozie/";
+                   version="4.0.1" />
+        <interface type="registry" endpoint="thrift://localhost:29083"
+                   version="0.11.0" />
+        <interface type="messaging" 
endpoint="tcp://localhost:61616?daemon=true"
+                   version="5.4.3" />
+    </interfaces>
+    <locations>
+        <location name="staging" path="/apps/falcon/staging" />
+        <location name="temp" path="/tmp" />
+        <location name="working" path="/apps/falcon/working" />
+    </locations>
+</cluster>
+</verbatim>
+
+---++++ Feed with replication and eviction policy
+
+<verbatim>
+<?xml version="1.0"?>
+<!--
+    Replicating Hourly customer table from primary to secondary cluster.
+  -->
+<feed description="Replicating customer table feed" 
name="customer-table-replicating-feed"
+      xmlns="uri:falcon:feed:0.1">
+    <frequency>hours(1)</frequency>
+    <timezone>UTC</timezone>
+
+    <clusters>
+        <cluster name="primary-cluster" type="source">
+            <validity start="2013-09-24T00:00Z" end="2013-10-26T00:00Z"/>
+            <retention limit="hours(2)" action="delete"/>
+        </cluster>
+        <cluster name="bcp-cluster" type="target">
+            <validity start="2013-09-24T00:00Z" end="2013-10-26T00:00Z"/>
+            <retention limit="days(30)" action="delete"/>
+
+            <table 
uri="catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+        </cluster>
+    </clusters>
+
+    <table 
uri="catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+
+    <ACL owner="seetharam" group="users" permission="0755"/>
+    <schema location="" provider="hcatalog"/>
+</feed>
+</verbatim>
+
+
+---+++ Hive Table used in Processing Pipelines
+
+---++++ Primary Cluster
+The cluster definition from the lifecycle example can be used.
+
+---++++ Input Feed
+
+<verbatim>
+<?xml version="1.0"?>
+<feed description="clicks log table " name="input-table" 
xmlns="uri:falcon:feed:0.1">
+    <groups>online,bi</groups>
+    <frequency>hours(1)</frequency>
+    <timezone>UTC</timezone>
+
+    <clusters>
+        <cluster name="##cluster##" type="source">
+            <validity start="2010-01-01T00:00Z" end="2012-04-21T00:00Z"/>
+            <retention limit="hours(24)" action="delete"/>
+        </cluster>
+    </clusters>
+
+    <table 
uri="catalog:falcon_db:input_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+
+    <ACL owner="testuser" group="group" permission="0x755"/>
+    <schema location="/schema/clicks" provider="protobuf"/>
+</feed>
+</verbatim>
+
+
+---++++ Output Feed
+
+<verbatim>
+<?xml version="1.0"?>
+<feed description="clicks log identity table" name="output-table" 
xmlns="uri:falcon:feed:0.1">
+    <groups>online,bi</groups>
+    <frequency>hours(1)</frequency>
+    <timezone>UTC</timezone>
+
+    <clusters>
+        <cluster name="##cluster##" type="source">
+            <validity start="2010-01-01T00:00Z" end="2012-04-21T00:00Z"/>
+            <retention limit="hours(24)" action="delete"/>
+        </cluster>
+    </clusters>
+
+    <table 
uri="catalog:falcon_db:output_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+
+    <ACL owner="testuser" group="group" permission="0x755"/>
+    <schema location="/schema/clicks" provider="protobuf"/>
+</feed>
+</verbatim>
+
+
+---++++ Process
+
+<verbatim>
+<?xml version="1.0"?>
+<process name="##processName##" xmlns="uri:falcon:process:0.1">
+    <clusters>
+        <cluster name="##cluster##">
+            <validity end="2012-04-22T00:00Z" start="2012-04-21T00:00Z"/>
+        </cluster>
+    </clusters>
+
+    <parallel>1</parallel>
+    <order>FIFO</order>
+    <frequency>days(1)</frequency>
+    <timezone>UTC</timezone>
+
+    <inputs>
+        <input end="today(0,0)" start="today(0,0)" feed="input-table" 
name="input"/>
+    </inputs>
+
+    <outputs>
+        <output instance="now(0,0)" feed="output-table" name="output"/>
+    </outputs>
+
+    <properties>
+        <property name="blah" value="blah"/>
+    </properties>
+
+    <workflow engine="pig" path="/falcon/test/apps/pig/table-id.pig"/>
+
+    <retry policy="periodic" delay="minutes(10)" attempts="3"/>
+</process>
+</verbatim>
+
+
+---++++ Pig Script
+
+<verbatim>
+A = load '$input_database.$input_table' using 
org.apache.hcatalog.pig.HCatLoader();
+B = FILTER A BY $input_filter;
+C = foreach B generate id, value;
+store C into '$output_database.$output_table' USING 
org.apache.hcatalog.pig.HCatStorer('$output_dataout_partitions');
+</verbatim>

Added: falcon/trunk/releases/0.8/src/site/twiki/InstallationSteps.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/InstallationSteps.twiki?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/InstallationSteps.twiki (added)
+++ falcon/trunk/releases/0.8/src/site/twiki/InstallationSteps.twiki Tue Nov 17 
01:22:04 2015
@@ -0,0 +1,87 @@
+---+Building & Installing Falcon
+
+
+---++Building Falcon
+
+---+++Prerequisites
+
+   * JDK 1.7
+   * Maven 3.2.x
+
+
+
+---+++Step 1 - Clone the Falcon repository
+
+<verbatim>
+$git clone https://git-wip-us.apache.org/repos/asf/falcon.git falcon
+</verbatim>
+
+
+---+++Step 2 - Build Falcon
+
+<verbatim>
+$cd falcon
+$export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify" && mvn clean 
install
+</verbatim>
+It builds and installs the package into the local repository, for use as a 
dependency in other projects locally.
+
+Optionally, -Dhadoop.version=<<hadoop.version>> can be appended to build for a specific version of Hadoop.
+
+*NOTE:*
+   * Falcon drops support for Hadoop-1 and only supports Hadoop-2 from Falcon 0.6 onwards.
+   * Optionally, -Doozie.version=<<oozie version>> can be appended to build with a specific version of Oozie. Oozie versions >= 4 are supported.
+   * Falcon builds with JDK 1.7 using the -noverify option.
+   * To compile Falcon with Hive Replication, "-P hadoop-2,hivedr" can optionally be appended. For this, Hive >= 1.2.0 and Oozie >= 4.2.0 should be available.
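+
+For example, a build that picks specific Hadoop and Oozie versions and enables Hive replication might look like the
+following; the version numbers here are placeholders, subject to the constraints above, not recommendations:
+<verbatim>
+$export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify"
+$mvn clean install -Dhadoop.version=2.6.0 -Doozie.version=4.2.0 -P hadoop-2,hivedr
+</verbatim>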
+
+
+
+---+++Step 3 - Package and Deploy Falcon
+
+Once the build successfully completes, artifacts can be packaged for 
deployment using the assembly plugin. The Assembly
+Plugin for Maven is primarily intended to allow users to aggregate the project 
output along with its dependencies,
+modules, site documentation, and other files into a single distributable 
archive. There are two basic ways in which you
+can deploy Falcon - Embedded mode (also known as Stand Alone Mode) and Distributed mode. Your next steps will vary based
+on the mode in which you want to deploy Falcon.
+
+*NOTE* : Falcon extends Oozie (particularly its EL extensions), hence the need for Falcon to build and
+re-package Oozie so that Falcon users can work with the right Oozie setup. Though Oozie is packaged by Falcon, it
+needs to be deployed separately by the administrator and is not auto-deployed along with Falcon.
+
+
+---++++Embedded/Stand Alone Mode
+Embedded mode is useful when the Hadoop jobs and relevant data processing 
involve only one Hadoop cluster. In this mode
+ there is a single Falcon server that contacts the scheduler to schedule jobs 
on Hadoop. All the process/feed requests
+ like submit, schedule, suspend, kill etc. are sent to this server. To run Falcon in this mode, use a Falcon
+ package that has been built with the standalone option. You can find the instructions for Embedded mode setup
+ [[Embedded-mode][here]].
+
+
+---++++Distributed Mode
+Distributed mode is for multiple (colos) instances of Hadoop clusters, and 
multiple workflow schedulers to handle them.
+In this mode Falcon has 2 components: Prism and Server(s). Both Prism and Server(s) have their own config
+locations (startup and runtime properties). In this mode Prism acts as the contact point for the Falcon servers. While
+ all commands are available through Prism, only read and instance APIs are available through the Server. You can find
+ the instructions for Distributed Mode setup [[Distributed-mode][here]].
+
+
+
+---+++Preparing Oozie and Falcon packages for deployment
+<verbatim>
+$cd <<project home>>
+$src/bin/package.sh <<hadoop-version>> <<oozie-version>>
+
+>> ex. src/bin/package.sh 1.1.2 4.0.1 or src/bin/package.sh 0.20.2-cdh3u5 4.0.1
+>> ex. src/bin/package.sh 2.5.0 4.0.0
+>> Falcon package is available in <<falcon 
home>>/target/apache-falcon-<<version>>-bin.tar.gz
+>> Oozie package is available in <<falcon 
home>>/target/oozie-4.0.1-distro.tar.gz
+</verbatim>
+
+*NOTE:* If you have a separate Apache Oozie installation, you will need to 
follow some additional steps:
+   1. Once you have setup the Falcon Server, copy libraries under 
{falcon-server-dir}/oozie/libext/ to {oozie-install-dir}/libext.
+   1. Modify Oozie's configuration file. Copy all Falcon related properties 
from {falcon-server-dir}/oozie/conf/oozie-site.xml to 
{oozie-install-dir}/conf/oozie-site.xml
+   1. Restart oozie:
+      1. cd {oozie-install-dir}
+      1. sudo -u oozie ./bin/oozie-stop.sh
+      1. sudo -u oozie ./bin/oozie-setup.sh prepare-war
+      1. sudo -u oozie ./bin/oozie-start.sh
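+
+A sketch of these steps, assuming Oozie is installed under /usr/lib/oozie and the Falcon server under
+/usr/lib/falcon; both paths are assumptions, substitute your actual install directories:
+<verbatim>
+$cp /usr/lib/falcon/oozie/libext/* /usr/lib/oozie/libext/
+# Merge the Falcon related properties from /usr/lib/falcon/oozie/conf/oozie-site.xml
+# into /usr/lib/oozie/conf/oozie-site.xml, then restart Oozie:
+$cd /usr/lib/oozie
+$sudo -u oozie ./bin/oozie-stop.sh
+$sudo -u oozie ./bin/oozie-setup.sh prepare-war
+$sudo -u oozie ./bin/oozie-start.sh
+</verbatim>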

Added: falcon/trunk/releases/0.8/src/site/twiki/LICENSE.txt
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/LICENSE.txt?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/LICENSE.txt (added)
+++ falcon/trunk/releases/0.8/src/site/twiki/LICENSE.txt Tue Nov 17 01:22:04 
2015
@@ -0,0 +1,3 @@
+All files in this directory and subdirectories are under Apache License 
Version 2.0.
+The reason is that the Maven Doxia plugin that converts twiki to html does not have
+a commenting-out feature.

Added: falcon/trunk/releases/0.8/src/site/twiki/MigrationInstructions.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/MigrationInstructions.twiki?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/MigrationInstructions.twiki (added)
+++ falcon/trunk/releases/0.8/src/site/twiki/MigrationInstructions.twiki Tue 
Nov 17 01:22:04 2015
@@ -0,0 +1,15 @@
+---+ Migration Instructions
+
+---++ Migrate from 0.5-incubating to 0.6-incubating
+
+This is a placeholder wiki for migration instructions from falcon 
0.5-incubating to 0.6-incubating.
+
+---+++ Update Entities
+
+---+++ Change cluster dir permissions
+
+---+++ Enable/Disable TLS
+
+---+++ Authorization
+
+

Added: falcon/trunk/releases/0.8/src/site/twiki/OnBoarding.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.8/src/site/twiki/OnBoarding.twiki?rev=1714717&view=auto
==============================================================================
--- falcon/trunk/releases/0.8/src/site/twiki/OnBoarding.twiki (added)
+++ falcon/trunk/releases/0.8/src/site/twiki/OnBoarding.twiki Tue Nov 17 
01:22:04 2015
@@ -0,0 +1,269 @@
+---++ Contents
+   * <a href="#Onboarding Steps">Onboarding Steps</a>
+   * <a href="#Sample Pipeline">Sample Pipeline</a>
+   * [[HiveIntegration][Hive Examples]]
+
+---+++ Onboarding Steps
+   * Create cluster definition for the cluster, specifying name node, job 
tracker, workflow engine endpoint, messaging endpoint. Refer to 
[[EntitySpecification][cluster definition]] for details.
+   * Create Feed definitions for each of the input and output specifying 
frequency, data path, ownership. Refer to [[EntitySpecification][feed 
definition]] for details.
+   * Create Process definition for your job. Process defines configuration for 
the workflow job. Important attributes are frequency, inputs/outputs and 
workflow path. Refer to [[EntitySpecification][process definition]] for process 
details.
+   * Define the workflow for your job using the workflow engine (only Oozie is supported as of now). Refer to the [[http://oozie.apache.org/docs/3.1.3-incubating/WorkflowFunctionalSpec.html][Oozie Workflow Specification]]. The libraries required for the workflow should be available in the lib folder in the workflow path.
+   * Set-up workflow definition, libraries and referenced scripts on hadoop. 
+   * Submit the cluster definition
+   * Submit and schedule the feed and process definitions (a CLI sketch is shown below)
+   
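+
+A minimal sketch of the last two steps using the Falcon CLI, with the entities from the sample pipeline below;
+the entity file paths are placeholders:
+<verbatim>
+# Submit the cluster first; feeds and processes reference it.
+bin/falcon entity -type cluster -submit -file /path/to/cluster.xml
+
+# Submit and schedule the feeds and the process.
+bin/falcon entity -type feed -submitAndSchedule -file /path/to/SampleInput.xml
+bin/falcon entity -type feed -submitAndSchedule -file /path/to/SampleOutput.xml
+bin/falcon entity -type process -submitAndSchedule -file /path/to/SampleProcess.xml
+</verbatim>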
+
+---+++ Sample Pipeline
+---++++ Cluster   
+Cluster definition that contains end points for name node, job tracker, oozie 
and jms server:
+The cluster locations MUST be created prior to submitting a cluster entity to Falcon:
+   * *staging* must have 777 permissions and the parent dirs must have execute permissions
+   * *working* must have 755 permissions and the parent dirs must have execute permissions
+A command sketch for creating these locations follows the cluster definition below.
+
+<verbatim>
+<?xml version="1.0"?>
+<!--
+    Cluster configuration
+  -->
+<cluster colo="ua2" description="" name="corp" xmlns="uri:falcon:cluster:0.1"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>    
+    <interfaces>
+        <interface type="readonly" endpoint="hftp://name-node.com:50070"; 
version="2.5.0" />
+
+        <interface type="write" endpoint="hdfs://name-node.com:54310" 
version="2.5.0" />
+
+        <interface type="execute" endpoint="job-tracker:54311" version="2.5.0" 
/>
+
+        <interface type="workflow" endpoint="http://oozie.com:11000/oozie/"; 
version="4.0.1" />
+
+        <interface type="messaging" 
endpoint="tcp://jms-server.com:61616?daemon=true" version="5.1.6" />
+    </interfaces>
+
+    <locations>
+        <location name="staging" path="/projects/falcon/staging" />
+        <location name="temp" path="/tmp" />
+        <location name="working" path="/projects/falcon/working" />
+    </locations>
+</cluster>
+</verbatim>
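+
+Before submitting this cluster entity, create the staging and working locations with the permissions noted above.
+A minimal sketch using the paths from this example, run as a user with the required HDFS privileges:
+<verbatim>
+hadoop fs -mkdir -p /projects/falcon/staging /projects/falcon/working
+hadoop fs -chmod 777 /projects/falcon/staging
+hadoop fs -chmod 755 /projects/falcon/working
+</verbatim>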
+   
+---++++ Input Feed
+Hourly feed that defines feed path, frequency, ownership and validity:
+<verbatim>
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+    Hourly sample input data
+  -->
+
+<feed description="sample input data" name="SampleInput" 
xmlns="uri:falcon:feed:0.1"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+    <groups>group</groups>
+
+    <frequency>hours(1)</frequency>
+
+    <late-arrival cut-off="hours(6)" />
+
+    <clusters>
+        <cluster name="corp" type="source">
+            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" 
timezone="UTC" />
+            <retention limit="months(24)" action="delete" />
+        </cluster>
+    </clusters>
+
+    <locations>
+        <location type="data" 
path="/projects/bootcamp/data/${YEAR}-${MONTH}-${DAY}-${HOUR}/SampleInput" />
+        <location type="stats" path="/projects/bootcamp/stats/SampleInput" />
+        <location type="meta" path="/projects/bootcamp/meta/SampleInput" />
+    </locations>
+
+    <ACL owner="suser" group="users" permission="0755" />
+
+    <schema location="/none" provider="none" />
+</feed>
+</verbatim>
+
+---++++ Output Feed
+Daily feed that defines feed path, frequency, ownership and validity:
+<verbatim>
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+    Daily sample output data
+  -->
+
+<feed description="sample output data" name="SampleOutput" 
xmlns="uri:falcon:feed:0.1"
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+    <groups>group</groups>
+
+    <frequency>days(1)</frequency>
+
+    <late-arrival cut-off="hours(6)" />
+
+    <clusters>
+        <cluster name="corp" type="source">
+            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" 
timezone="UTC" />
+            <retention limit="months(24)" action="delete" />
+        </cluster>
+    </clusters>
+
+    <locations>
+        <location type="data" 
path="/projects/bootcamp/output/${YEAR}-${MONTH}-${DAY}/SampleOutput" />
+        <location type="stats" path="/projects/bootcamp/stats/SampleOutput" />
+        <location type="meta" path="/projects/bootcamp/meta/SampleOutput" />
+    </locations>
+
+    <ACL owner="suser" group="users" permission="0755" />
+
+    <schema location="/none" provider="none" />
+</feed>
+</verbatim>
+
+---++++ Process
+Sample process which runs daily at the 6th hour on the corp cluster. It takes one input - !SampleInput for the
previous day (24 instances). It generates one output - !SampleOutput for the previous day. The workflow is defined at 
/projects/bootcamp/workflow/workflow.xml. Any libraries available for the 
workflow should be at /projects/bootcamp/workflow/lib. The process also defines 
properties queueName, ssh.host, and fileTimestamp which are passed to the 
workflow. In addition, Falcon exposes the following properties to the workflow: 
nameNode, jobTracker(hadoop properties), input and output(Input/Output 
properties).
+
+<verbatim>
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+    Daily sample process. Runs at 6th hour every day. Input - last day's 
hourly data. Generates output for yesterday
+ -->
+<process name="SampleProcess">
+    <cluster name="corp" />
+
+    <frequency>days(1)</frequency>
+
+    <validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z" timezone="UTC" 
/>
+
+    <inputs>
+        <input name="input" feed="SampleInput" start="yesterday(0,0)" 
end="today(-1,0)" />
+    </inputs>
+
+    <outputs>
+            <output name="output" feed="SampleOutput" 
instance="yesterday(0,0)" />
+    </outputs>
+
+    <properties>
+        <property name="queueName" value="reports" />
+        <property name="ssh.host" value="host.com" />
+        <property name="fileTimestamp" 
value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}" />
+    </properties>
+
+    <workflow engine="oozie" path="/projects/bootcamp/workflow" />
+
+    <retry policy="periodic" delay="minutes(5)" attempts="3" />
+    
+    <late-process policy="exp-backoff" delay="hours(1)">
+        <late-input input="input" 
workflow-path="/projects/bootcamp/workflow/lateinput" />
+    </late-process>
+</process>
+</verbatim>
+
+---++++ Oozie Workflow
+The sample user workflow contains 3 actions:
+   * Pig action - Executes pig script /projects/bootcamp/workflow/script.pig
+   * concatenator - Java action that concatenates part files and generates a 
single file
+   * file upload - ssh action that gets the concatenated file from hadoop and 
sends the file to a remote host
+   
+<verbatim>
+<workflow-app xmlns="uri:oozie:workflow:0.2" name="sample-wf">
+        <start to="pig" />
+
+        <action name="pig">
+                <pig>
+                        <job-tracker>${jobTracker}</job-tracker>
+                        <name-node>${nameNode}</name-node>
+                        <prepare>
+                                <delete path="${output}"/>
+                        </prepare>
+                        <configuration>
+                                <property>
+                                        <name>mapred.job.queue.name</name>
+                                        <value>${queueName}</value>
+                                </property>
+                                <property>
+                                        
<name>mapreduce.fileoutputcommitter.marksuccessfuljobs</name>
+                                        <value>true</value>
+                                </property>
+                        </configuration>
+                        
<script>${nameNode}/projects/bootcamp/workflow/script.pig</script>
+                        <param>input=${input}</param>
+                        <param>output=${output}</param>
+                        <file>lib/dependent.jar</file>
+                </pig>
+                <ok to="concatenator" />
+                <error to="fail" />
+        </action>
+
+        <action name="concatenator">
+                <java>
+                        <job-tracker>${jobTracker}</job-tracker>
+                        <name-node>${nameNode}</name-node>
+                        <prepare>
+                                <delete 
path="${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv"/>
+                        </prepare>
+                        <configuration>
+                                <property>
+                                        <name>mapred.job.queue.name</name>
+                                        <value>${queueName}</value>
+                                </property>
+                        </configuration>
+                        <main-class>com.wf.Concatenator</main-class>
+                        <arg>${output}</arg>
+                        
<arg>${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv</arg>
+                </java>
+                <ok to="fileupload" />
+                <error to="fail"/>
+        </action>
+                        
+        <action name="fileupload">
+                <ssh>
+                        <host>localhost</host>
+                        <command>/tmp/fileupload.sh</command>
+                        
<args>${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv</args>
+                        <args>${wf:conf("ssh.host")}</args>
+                        <capture-output/>
+                </ssh>
+                <ok to="fileUploadDecision" />
+                <error to="fail"/>
+        </action>
+
+        <decision name="fileUploadDecision">
+                <switch>
+                        <case to="end">
+                                ${wf:actionData('fileupload')['output'] == '0'}
+                        </case>
+                        <default to="fail"/>
+                </switch>
+        </decision>
+
+        <kill name="fail">
+                <message>Workflow failed, error 
message[${wf:errorMessage(wf:lastErrorNode())}]</message>
+        </kill>
+
+        <end name="end" />
+</workflow-app>
+</verbatim>
+
+---++++ File Upload Script
+The script gets the file from hadoop, rsyncs it to /tmp on the remote host and deletes the file from hadoop:
+<verbatim>
+#!/bin/bash
+
+trap 'echo "output=$?"; exit $?' ERR INT TERM
+
+echo "Arguments: $@"
+# The ssh action passes the HDFS file path as the first argument and the destination host as the second.
+SRCFILE=$1
+DESTHOST=$2
+
+FILENAME=`basename $SRCFILE`
+rm -f /tmp/$FILENAME
+hadoop fs -copyToLocal $SRCFILE /tmp/
+echo "Copied $SRCFILE to /tmp"
+
+rsync -ztv --rsh=ssh --stats /tmp/$FILENAME $DESTHOST:/tmp
+echo "rsynced $FILENAME to $DESTUSER@$DESTHOST:$DESTFILE"
+
+hadoop fs -rmr $SRCFILE
+echo "Deleted $SRCFILE"
+
+rm -f /tmp/$FILENAME
+echo "output=0"
+</verbatim>

