Added: falcon/trunk/releases/0.9/src/site/twiki/EntitySpecification.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.9/src/site/twiki/EntitySpecification.twiki?rev=1730446&view=auto
==============================================================================
--- falcon/trunk/releases/0.9/src/site/twiki/EntitySpecification.twiki (added)
+++ falcon/trunk/releases/0.9/src/site/twiki/EntitySpecification.twiki Mon Feb 
15 05:08:31 2016
@@ -0,0 +1,985 @@
+---++ Contents
+   * <a href="#Cluster_Specification">Cluster Specification</a>
+   * <a href="#Feed_Specification">Feed Specification</a>
+   * <a href="#Process_Specification">Process Specification</a>
+   
+---++ Cluster Specification
+The cluster XSD specification is available here:
+A cluster contains different interfaces which are used by Falcon like 
readonly, write, workflow and messaging.
+A cluster is referenced by feeds and processes which are on-boarded to Falcon 
by its name.
+
+Following are the tags defined in a cluster.xml:
+<verbatim>
+<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+</verbatim>
+The colo specifies the colo to which this cluster belongs, and name is the name of the cluster, which has to
+be unique.
+
+
+---+++ Interfaces
+
+A cluster has various interfaces as described below:
+<verbatim>
+    <interface type="readonly" endpoint="hftp://localhost:50010"; 
version="0.20.2" />
+</verbatim>
+A readonly interface specifies the endpoint for Hadoop's HFTP protocol, 
+this would be used in the context of feed replication.
+
+<verbatim>
+<interface type="write" endpoint="hdfs://localhost:8020" version="0.20.2" />
+</verbatim>
+A write interface specifies the interface to write to hdfs; its endpoint is the value of fs.defaultFS.
+Falcon uses this interface to write system data to hdfs, and feeds referencing this cluster are written to hdfs
+using the same write interface.
+
+<verbatim>
+<interface type="execute" endpoint="localhost:8021" version="0.20.2" />
+</verbatim>
+An execute interface specifies the interface for the job tracker; its endpoint is the value of mapreduce.jobtracker.address.
+Falcon uses this interface to submit the processes as jobs on the !JobTracker defined here.
+
+<verbatim>
+<interface type="workflow" endpoint="http://localhost:11000/oozie/"; 
version="4.0" />
+</verbatim>
+A workflow interface specifies the interface for workflow engine, example of 
its endpoint is the value for OOZIE_URL.
+Falcon uses this interface to schedule the processes referencing this cluster 
on workflow engine defined here.
+
+<verbatim>
+<interface type="registry" endpoint="thrift://localhost:9083" version="0.11.0" 
/>
+</verbatim>
+A registry interface specifies the interface for the metadata catalog, such as Hive Metastore (or HCatalog).
+Falcon uses this interface to register/de-register partitions for a given database and table. It also
+uses this information to schedule data availability events based on partitions in the workflow engine.
+Although the Hive metastore supports both RPC and HTTP, Falcon comes with an implementation for RPC over thrift.
+
+<verbatim>
+<interface type="messaging" endpoint="tcp://localhost:61616?daemon=true" 
version="5.4.6" />
+</verbatim>
+A messaging interface specifies the interface for sending feed availability messages; its endpoint is the broker URL with a TCP address.
+
+---+++ Locations
+
+A cluster has a list of locations defined:
+<verbatim>
+<location name="staging" path="/projects/falcon/staging" />
+<location name="working" path="/projects/falcon/working" /> <!--optional-->
+</verbatim>
+Location has a name and a path; name is the type of location. Allowed values of name are staging, temp and working.
+Path is the hdfs path for each location.
+Falcon would use these locations to do intermediate processing of entities in hdfs and hence Falcon
+should have read/write/execute permission on these locations.
+These locations MUST be created prior to submitting a cluster entity to Falcon.
+*staging* should have 777 permissions and is a mandatory location. The parent dirs must have execute permissions so multiple
+users can write to this location. *working* must have 755 permissions and is an optional location.
+If *working* is not specified, falcon creates a sub directory in the *staging* location with 755 perms.
+The parent dir for *working* must have execute permissions so multiple
+users can read from this location.
+
+---+++ ACL
+
+A cluster has an ACL (Access Control List), useful for implementing permission requirements
+and providing a way to set different permissions for specific users or named groups.
+<verbatim>
+    <ACL owner="test-user" group="test-group" permission="*"/>
+</verbatim>
+ACL indicates the Access Control List for this cluster.
+owner is the owner of this entity.
+group is the group which has read access.
+permission indicates the permission.
+
+---+++ Custom Properties
+
+A cluster has a list of properties:
+these are key-value pairs which are propagated to the workflow engine.
+<verbatim>
+<property name="brokerImplClass" 
value="org.apache.activemq.ActiveMQConnectionFactory" />
+</verbatim>
+Ideally JMS impl class name of messaging engine (brokerImplClass) 
+should be defined here.
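+
+Putting the snippets above together, a complete cluster definition could look like the following sketch. The endpoints, paths, ACL values and property are the illustrative values used earlier in this section; the element order shown here is only a sketch, so consult the cluster XSD for the authoritative structure.
+<verbatim>
+<cluster colo="gs" description="" name="corp" xmlns="uri:falcon:cluster:0.1">
+    <interfaces>
+        <interface type="readonly" endpoint="hftp://localhost:50010" version="0.20.2"/>
+        <interface type="write" endpoint="hdfs://localhost:8020" version="0.20.2"/>
+        <interface type="execute" endpoint="localhost:8021" version="0.20.2"/>
+        <interface type="workflow" endpoint="http://localhost:11000/oozie/" version="4.0"/>
+        <interface type="registry" endpoint="thrift://localhost:9083" version="0.11.0"/>
+        <interface type="messaging" endpoint="tcp://localhost:61616?daemon=true" version="5.4.6"/>
+    </interfaces>
+    <locations>
+        <location name="staging" path="/projects/falcon/staging"/>
+        <location name="working" path="/projects/falcon/working"/>
+    </locations>
+    <ACL owner="test-user" group="test-group" permission="*"/>
+    <properties>
+        <property name="brokerImplClass" value="org.apache.activemq.ActiveMQConnectionFactory"/>
+    </properties>
+</cluster>
+</verbatim>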
+
+---++ Datasource Specification
+
+The datasource entity contains the connection information required to connect to a data source, like a MySQL database.
+The datasource XSD specification is available here:
+A datasource contains read and write interfaces which are used by Falcon to import or export data from or to
+datasources respectively. A datasource is referenced by feeds which are on-boarded to Falcon by its name.
+
+Following are the tags defined in a datasource.xml:
+
+<verbatim>
+<datasource colo="west-coast" description="Customer database on west coast" 
type="mysql"
+ name="test-hsql-db" xmlns="uri:falcon:datasource:0.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+</verbatim>
+
+The colo specifies the colo to which the datasource belongs, and name is the name of the datasource, which has to
+be unique.
+
+---+++ Interfaces
+
+A datasource has two interfaces as described below:
+<verbatim>
+    <interface type="readonly" endpoint="jdbc:hsqldb:localhost/db"/>
+</verbatim>
+
+A readonly interface specifies the endpoint and protocol to connect to a datasource.
+This is used in the context of importing data from the datasource into HDFS.
+
+<verbatim>
+<interface type="write" endpoint="jdbc:hsqldb:localhost/db1">
+</verbatim>
+
+A write interface specifies the endpoint and protocol to write to the datasource.
+Falcon uses this interface to export data from hdfs to the datasource.
+
+<verbatim>
+<credential type="password-text">
+    <userName>SA</userName>
+    <passwordText></passwordText>
+</credential>
+</verbatim>
+
+
+A credential is associated with an interface (read or write) providing user 
name and password to authenticate
+to the datasource.
+
+<verbatim>
+<credential type="password-text">
+     <userName>SA</userName>
+     <passwordFile>hdfs-file-path</passwordText>
+</credential>
+</verbatim>
+
+The credential can be specified via a password file present in the HDFS. This 
file should only be accessible by
+the user.
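+
+A partial end-to-end sketch combining the interfaces and credentials shown above. The colo, name, endpoints and credential values are the illustrative ones from this section; consult the datasource XSD for the authoritative structure and any additional elements it may require (for example a driver section).
+<verbatim>
+<datasource colo="west-coast" description="Customer database on west coast" type="mysql"
+            name="test-hsql-db" xmlns="uri:falcon:datasource:0.1">
+    <interfaces>
+        <interface type="readonly" endpoint="jdbc:hsqldb:localhost/db">
+            <credential type="password-text">
+                <userName>SA</userName>
+                <passwordText></passwordText>
+            </credential>
+        </interface>
+        <interface type="write" endpoint="jdbc:hsqldb:localhost/db1">
+            <credential type="password-text">
+                <userName>SA</userName>
+                <passwordText></passwordText>
+            </credential>
+        </interface>
+    </interfaces>
+</datasource>
+</verbatim>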
+
+---++ Feed Specification
+The Feed XSD specification is available here.
+A Feed defines various attributes of a feed like feed location, frequency, late-arrival handling and retention policies.
+A feed can be scheduled on a cluster; once a feed is scheduled, its retention and replication processes are triggered on the given cluster.
+<verbatim>
+<feed description="clicks log" name="clicks" xmlns="uri:falcon:feed:0.1"
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
+</verbatim>
+A feed should have a unique name and this name is referenced by processes as 
input or output feed.
+
+---+++ Storage
+Falcon introduces a new abstraction to encapsulate the storage for a given feed, which can be expressed either as
+a path on the file system (File System Storage) or as a table in a catalog such as Hive (Catalog Storage).
+
+<verbatim>
+    <xs:choice minOccurs="1" maxOccurs="1">
+        <xs:element type="locations" name="locations"/>
+        <xs:element type="catalog-table" name="table"/>
+    </xs:choice>
+</verbatim>
+
+A feed should contain one of the two storage options: locations on the file system or a table in a catalog.
+
+---++++ File System Storage
+
+<verbatim>
+        <clusters>
+        <cluster name="test-cluster">
+            <validity start="2012-07-20T03:00Z" end="2099-07-16T00:00Z"/>
+            <retention limit="days(10)" action="delete"/>
+            <sla slaLow="hours(3)" slaHigh="hours(4)"/>
+            <locations>
+                <location type="data" 
path="/hdfsDataLocation/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}"/>
+                <location type="stats" path="/projects/falcon/clicksStats" />
+                <location type="meta" path="/projects/falcon/clicksMetaData" />
+            </locations>
+        </cluster>
+..... more clusters </clusters>
+</verbatim>
+A feed references a cluster by its name; before submitting a feed, all the referenced clusters should be submitted to Falcon.
+type: specifies whether the referenced cluster should be treated as a source or target for a feed. A feed can have multiple source and target clusters. If the type of cluster is not specified then the cluster is not considered for replication.
+Validity of a feed on a cluster specifies the duration for which this feed is valid on this cluster.
+Retention specifies how long the feed is retained on this cluster and the action to be taken on the feed after the expiry of the retention period.
+The retention limit is specified by the expression frequency(times), ex: if a feed should be retained for at least 6 hours then retention's limit="hours(6)".
+The field partitionExp contains partition tags. The number of partition tags has to be equal to the number of partitions specified in the feed schema. A partition tag can be a wildcard(*), a static string or an expression. At least one of the strings has to be an expression.
+sla specifies the sla for the feed on this cluster. This is an optional parameter and the sla can be the same as or different from the global sla tag (mentioned outside the clusters tag). This tag provides the user the flexibility to have different slas for different clusters, e.g. in case of replication. If this attribute is missing then the default global sla is picked from the feed definition.
+Location specifies where the feed is available on this cluster. This is an optional parameter and the path can be the same as or different from the global locations tag value (mentioned outside the clusters tag). This tag provides the user the flexibility to have the feed at different locations on different clusters. If this attribute is missing then the default global location is picked from the feed definition. Also, the individual location tags data, stats, meta are optional.
+<verbatim>
+ <location type="data" path="/projects/falcon/clicks" />
+ <location type="stats" path="/projects/falcon/clicksStats" />
+ <location type="meta" path="/projects/falcon/clicksMetaData" />
+</verbatim>
+A location tag specifies the type of location, like data, meta, stats, and the corresponding paths for them.
+A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is generated
+periodically. ex: type="data" path="/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic"
+The granularity of the date pattern in the path should be at least that of the frequency of the feed.
+Other location types which are supported are stats and meta paths; if a process references a feed then the meta and stats
+paths are available as a property in the process.
+
+---++++ Catalog Storage (Table)
+
+A table tag specifies the table URI in the catalog registry as:
+<verbatim>
+catalog:$database-name:$table-name#partition-key=partition-value;partition-key=partition-value;*
+</verbatim>
+
+This is modeled as a URI (similar to an ISBN URI). It does not have any reference to Hive or HCatalog. It's quite
+generic so it can be tied to other implementations of a catalog registry. The catalog implementation specified
+in the startup config provides the implementation for the catalog URI.
+
+The top-level partition has to be a dated pattern and the granularity of the date pattern should be at least that
+of the frequency of the feed.
+
+<verbatim>
+    <xs:complexType name="catalog-table">
+        <xs:annotation>
+            <xs:documentation>
+                catalog specifies the uri of a Hive table along with the 
partition spec.
+                
uri="catalog:$database:$table#(partition-key=partition-value);+"
+                Example: catalog:logs-db:clicks#ds=${YEAR}-${MONTH}-${DAY}
+            </xs:documentation>
+        </xs:annotation>
+        <xs:attribute type="xs:string" name="uri" use="required"/>
+    </xs:complexType>
+</verbatim>
+
+Examples:
+<verbatim>
+<table 
uri="catalog:default:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR};region=${region}"
 />
+<table 
uri="catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+<table 
uri="catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
+</verbatim>
+
+---+++ Partitions
+
+<verbatim>
+   <partitions>
+        <partition name="country" />
+        <partition name="cluster" />
+    </partitions>
+</verbatim>
+A feed can define multiple partitions; if a referenced cluster defines partitions then the number of partitions in the feed has to be equal to or more than the cluster partitions.
+
+*Note:* This will only apply for !FileSystem storage but not Table storage as 
partitions are defined and maintained in
+Hive (HCatalog) registry.
+
+---+++ Groups
+
+<verbatim>
+    <groups>online,bi</groups>
+</verbatim>
+A feed specifies a list of comma separated groups. A group is a logical grouping of feeds, and a group is said to be
+available if all the feeds belonging to the group are available. The frequency of all the feeds which belong to the same group
+must be the same.
+
+---+++ Availability Flags
+
+<verbatim>
+    <availabilityFlag>_SUCCESS</availabilityFlag>
+</verbatim>
+An availabilityFlag specifies the name of a file which, when present/created in a feed's data directory,
+marks the feed as available, ex: _SUCCESS. If this element is omitted then Falcon would consider the presence of the feed's
+data directory as feed availability.
+
+---+++ Frequency
+
+<verbatim>
+    <frequency>minutes(20)</frequency>
+</verbatim>
+A feed has a frequency which specifies how often this feed is generated,
+ex: it can be generated every hour, every 5 minutes, daily, weekly etc.
+Valid frequency types for a feed are minutes, hours, days and months. The values can be negative, zero or positive.
+
+---+++ SLA
+<verbatim>
+    <sla slaLow="hours(40)" slaHigh="hours(44)" />
+</verbatim>
+
+A feed can have SLA and each SLA has two properties - slaLow and slaHigh. Both 
slaLow and slaHigh are written using
+expressions like frequency. slaLow is intended to serve for alerting for feed 
instances which are in danger of missing their
+availability SLAs. slaHigh is intended to serve for reporting the feeds which 
missed their SLAs. SLAs are relative to
+feed instance time.
+
+---+++ Import
+
+<verbatim>
+<import>
+    <source name="test-hsql-db" tableName="customer">
+        <extract type="full">
+            <mergepolicy>snapshot</mergepolicy>
+         </extract>
+         <fields>
+            <includes>
+                <field>id</field>
+                <field>name</field>
+            </includes>
+         </fields>
+    </source>
+    <arguments>
+        <argument name="--split-by" value="id"/>
+        <argument name="--num-mappers" value="2"/>
+    </arguments>
+</import>
+</verbatim>
+
+A feed can have an import policy associated with it. The source name specifies the datasource reference to the
+datasource entity from which the data will be imported to HDFS. The tableName specifies the table or topic to be
+imported from the datasource. The extract type specifies the pull mechanism (full or
+incremental extract). The full extract method extracts all the data from the datasource. The incremental extraction
+method feature implementation is in progress. The mergepolicy determines how the data is to be laid out on HDFS.
+The snapshot layout creates a snapshot of the data on HDFS using the feed's location specification. Fields is used
+to specify the projection columns. Feed import from a database underneath uses Sqoop to achieve the task. Any advanced
+Sqoop options can be specified via the arguments.
+
+---+++ Late Arrival
+
+<verbatim>
+    <late-arrival cut-off="hours(6)" />
+</verbatim>
+A late-arrival specifies the cut-off period till which the feed is expected to arrive late and should be honored by processes referring to it as an input feed, by rerunning the instances in case the data arrives late within the cut-off period.
+The cut-off period is specified by the expression frequency(times), ex: if the feed can arrive late
+up to 8 hours then late-arrival's cut-off="hours(8)"
+
+*Note:* This will only apply for !FileSystem storage but not Table storage 
until a future time.
+
+
+---+++ Email Notification
+
+<verbatim>
+    <notification type="email" to="[email protected]"/>
+</verbatim>
+Specifying the notification element with "type" property allows users to 
receive email notification when a scheduled feed instance completes.
+Multiple recipients of an email can be provided as comma separated addresses 
with "to" property.
+To send email notification ensure that SMTP parameters are defined in Falcon 
startup.properties.
+Refer to [[FalconEmailNotification][Falcon Email Notification]] for more 
details.
+
+
+---+++ ACL
+
+A feed has an ACL (Access Control List), useful for implementing permission requirements
+and providing a way to set different permissions for specific users or named groups.
+<verbatim>
+    <ACL owner="test-user" group="test-group" permission="*"/>
+</verbatim>
+ACL indicates the Access Control List for this feed.
+owner is the owner of this entity.
+group is the group which has read access.
+permission indicates the permission.
+
+---+++ Custom Properties
+
+<verbatim>
+    <properties>
+        <property name="tmpFeedPath" value="tmpFeedPathValue" />
+        <property name="field2" value="value2" />
+        <property name="queueName" value="hadoopQueue"/>
+        <property name="jobPriority" value="VERY_HIGH"/>
+        <property name="timeout" value="hours(1)"/>
+        <property name="parallel" value="3"/>
+        <property name="maxMaps" value="8"/>
+        <property name="mapBandwidth" value="1"/>
+        <property name="overwrite" value="true"/>
+        <property name="ignoreErrors" value="false"/>
+        <property name="skipChecksum" value="false"/>
+        <property name="removeDeletedFiles" value="true"/>
+        <property name="preserveBlockSize" value="true"/>
+        <property name="preserveReplicationNumber" value="true"/>
+        <property name="preservePermission" value="true"/>
+        <property name="order" value="LIFO"/>
+    </properties>
+</verbatim>
+These are key-value pairs which are propagated to the workflow engine. "queueName" and "jobPriority" are special properties
+available to the user to specify the Hadoop job queue and priority; the same values are used by Falcon's launcher job.
+"timeout", "parallel" and "order" are other special properties: "timeout" decides the replication instance's timeout value while
+waiting for the feed instance, "parallel" decides the number of concurrent replication instances that can run at any given time, and
+"order" decides the execution order for replication instances, like FIFO, LIFO and LAST_ONLY.
+DistCp options can be passed as custom properties, which will be propagated to the DistCp tool. "maxMaps" represents
+the maximum number of maps used during replication. "mapBandwidth" represents the bandwidth in MB/s
+used by each mapper during replication. "overwrite" represents overwriting the destination during replication.
+"ignoreErrors" represents ignoring failures without causing the job to fail during replication. "skipChecksum" represents
+bypassing checksum verification during replication. "removeDeletedFiles" represents deleting the files existing in the
+destination but not in the source during replication. "preserveBlockSize" represents preserving the block size during
+replication. "preserveReplicationNumber" represents preserving the replication number during replication.
+"preservePermission" represents preserving permissions during replication.
+
+
+---+++ Lifecycle
+<verbatim>
+
+<lifecycle>
+    <retention-stage>
+        <frequency>hours(10)</frequency>
+        <queue>reports</queue>
+        <priority>NORMAL</priority>
+        <properties>
+            <property name="retention.policy.agebaseddelete.limit" 
value="hours(9)"></property>
+        </properties>
+    </retention-stage>
+</lifecycle>
+
+</verbatim>
+
+The lifecycle tag is the new way to define various stages of a feed's lifecycle. In the example above we have defined a
+retention-stage using the lifecycle tag. You may define lifecycle at the global level or the cluster level or both. Cluster-level
+configuration takes precedence and Falcon falls back to the global definition if the cluster-level specification is missing.
+
+
+---++++ Retention Stage
+As of now there are two ways to specify retention. One is through the <retention> tag in the cluster, and the other is the
+new way through the <retention-stage> tag inside the <lifecycle> tag. If both are defined for a feed, then the lifecycle tag will be
+considered effective and Falcon will ignore the <retention> tag in the cluster. If there is an invalid configuration of
+the retention-stage in the lifecycle tag, then Falcon will *NOT* fall back to the retention tag even if it is defined and will
+throw a validation error.
+
+In this new method of defining retention you can specify the frequency at which the retention should occur; you can
+also define the queue and priority parameters for retention jobs. The default behavior of the retention-stage is the same as
+the existing one, which is to delete all instances corresponding to an instance-time earlier than the duration provided in
+"retention.policy.agebaseddelete.limit".
+
+Property "retention.policy.agebaseddelete.limit" is a mandatory property and must contain a valid duration, e.g. "hours(1)".
+Retention frequency is not a mandatory parameter. If the user doesn't specify the frequency in the retention stage then
+it doesn't fall back to the old retention policy frequency. Its default value is set to 6 hours if the feed frequency is less
+than 6 hours, else it is set to the feed frequency, as retention shouldn't be more frequent than data availability, to avoid
+wastage of compute resources.
+
+In the future, we will allow more customisation, like customising how to choose the instances to be deleted, through this method.
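+
+Tying the feed elements described in this section together, a rough sketch of a complete file-system feed follows. The names, paths, frequency and retention values are the illustrative ones used above; the ACL and schema elements required by the feed XSD are shown with placeholder values, and the exact element order should be verified against the XSD.
+<verbatim>
+<feed description="clicks log" name="clicks" xmlns="uri:falcon:feed:0.1">
+    <groups>online,bi</groups>
+    <availabilityFlag>_SUCCESS</availabilityFlag>
+    <frequency>minutes(20)</frequency>
+    <late-arrival cut-off="hours(6)"/>
+    <clusters>
+        <cluster name="test-cluster" type="source">
+            <validity start="2012-07-20T03:00Z" end="2099-07-16T00:00Z"/>
+            <retention limit="days(10)" action="delete"/>
+        </cluster>
+    </clusters>
+    <locations>
+        <location type="data" path="/projects/falcon/clicks/${YEAR}/${MONTH}/${DAY}/${HOUR}/${MINUTE}"/>
+        <location type="stats" path="/projects/falcon/clicksStats"/>
+        <location type="meta" path="/projects/falcon/clicksMetaData"/>
+    </locations>
+    <ACL owner="test-user" group="test-group" permission="*"/>
+    <schema location="/schemas/clicks" provider="protobuf"/>
+</feed>
+</verbatim>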
+
+
+
+---++ Process Specification
+A process defines configuration for a workflow. A workflow is a directed acyclic graph (DAG) which defines the job for the workflow engine. A process definition defines the configurations required to run the workflow job. For example, a process defines the frequency at which the workflow should run, the clusters on which the workflow should run, the inputs and outputs for the workflow, how workflow failures should be handled, how late inputs should be handled, and so on.
+
+The different details of process are:
+---+++ Name
+Each process is identified with a unique name.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+</process>
+</verbatim>
+
+---+++ Tags
+An optional list of comma separated tags which are used for classification of 
processes.
+Syntax:
+<verbatim>
+...
+    <tags>consumer=consumer@xyz.com, owner=producer@xyz.com, department=forecasting</tags>
+</verbatim>
+
+---+++ Pipelines
+An optional list of comma separated word strings which specifies the data processing pipeline(s) to which this process belongs.
+Only letters, numbers and underscores are allowed in a pipeline string.
+Syntax:
+<verbatim>
+...
+    <pipelines>test_Pipeline, dataReplication, clickStream_pipeline</pipelines>
+</verbatim>
+
+---+++ Cluster
+The cluster on which the workflow should run. A process should contain one or more clusters. The cluster definition for the cluster name gives the end points for workflow execution, name node, job tracker, messaging and so on. Each cluster in turn has validity mentioned, which tells the times between which the job should run on that specified cluster.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <clusters>
+        <cluster name="test-cluster1">
+            <validity start="2012-12-21T08:15Z" end="2100-01-01T00:00Z"/>
+        </cluster>
+        <cluster name="test-cluster2">
+            <validity start="2012-12-21T08:15Z" end="2100-01-01T00:00Z"/>
+        </cluster>
+       ....
+       ....
+    </clusters>
+
+...
+</process>
+</verbatim>
+
+---+++ Parallel
+Parallel defines how many instances of the workflow can run concurrently. It 
should be a positive integer > 0.
+For example, parallel of 1 ensures that only one instance of the workflow can 
run at a time. The next instance will start only after the running instance 
completes.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <parallel>[parallel]</parallel>
+...
+</process>
+</verbatim>
+
+---+++ Order
+Order defines the order in which the ready instances are picked up. The 
possible values are FIFO(First In First Out), LIFO(Last In First Out), and 
ONLYLAST(Last Only).
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <order>[order]</order>
+...
+</process>
+</verbatim>
+
+---+++ Timeout
+An optional Timeout specifies the maximum time an instance waits for a dataset before being killed by the workflow engine; a timeout is specified like frequency.
+If timeout is not specified, Falcon computes a default timeout for a process based on its frequency, which is six times the frequency of the process, or 30 minutes if the computed timeout is less than 30 minutes.
+<verbatim>
+<process name="[process name]">
+...
+   <timeout>[timeunit]([frequency])</timeout>
+...
+</process>
+</verbatim>
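+
+For example, using the syntax above, a process that should wait at most three hours for its inputs could declare (the value is illustrative):
+<verbatim>
+<process name="sample-process">
+...
+   <timeout>hours(3)</timeout>
+...
+</process>
+</verbatim>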
+
+---+++ Frequency
+Frequency defines how frequently the workflow job should run. For example, 
hours(1) defines the frequency as hourly, days(7) defines weekly frequency. The 
values for timeunit can be minutes/hours/days/months and the frequency number 
should be a positive integer > 0. 
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <frequency>[timeunit]([frequency])</frequency>
+...
+</process>
+</verbatim>
+
+---+++ SLA
+<verbatim>
+    <sla shouldStartIn="hours(2)" shouldEndIn="hours(4)"/>
+</verbatim>
+A process can have SLA which is defined by 2 optional attributes - 
shouldStartIn and shouldEndIn. All the attributes
+are written using expressions like frequency. shouldStartIn is the time by 
which the process should have started.
+shouldEndIn is the time by which the process should have finished.
+
+
+---+++ Validity
+Validity defines how long the workflow should run. It has 3 components - start 
time, end time and timezone. Start time and end time are timestamps defined in 
yyyy-MM-dd'T'HH:mm'Z' format and should always be in UTC. Timezone is used to 
compute the next instances starting from start time. The workflow will start at 
start time and end before end time specified on a given cluster. So, there will 
not be a workflow instance at end time.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+   <validity start=[start time] end=[end time] timezone=[timezone]/>
+...
+</process>
+</verbatim>
+
+Examples:
+<verbatim>
+<process name="sample-process">
+...
+    <frequency>days(1)</frequency>
+    <validity start="2012-01-01T00:40Z" end="2012-04-01T00:00" timezone="UTC"/>
+...
+</process>
+</verbatim>
+The daily workflow will start on Jan 1st 2012 at 00:40 UTC, it will run every day at 00:40 UTC and the last instance will be on March 31st 2012 at 00:40 UTC.
+
+<verbatim>
+<process name="sample-process">
+...
+    <frequency>hours(1)</frequency>
+    <validity start="2012-03-11T08:40Z" end="2012-03-12T08:00" 
timezone="PST8PDT"/>
+...
+</process>
+</verbatim>
+The hourly workflow will start on March 11th 2012 at 00:40 PST, the next 
instances will be at 01:40 PST, 03:40 PDT, 04:40 PDT and so on till 23:40 PDT. 
So, there will be just 23 instances of the workflow for March 11th 2012 because 
of DST switch.
+
+---+++ Inputs
+Inputs define the input data for the workflow. The workflow job will start executing only after the schedule time and when all the inputs are available. There can be 0 or more inputs and each of the inputs maps to a feed. The path and frequency of input data is picked up from the feed definition. Each input should also define start and end instances in terms of [[FalconDocumentation][EL expressions]] and can optionally specify the specific partition of the input that the workflow requires. The components in the partition should be a subset of the partitions defined in the feed.
+
+For each input, Falcon will create a property with the input name that 
contains the comma separated list of input paths. This property can be used in 
workflow actions like pig scripts and so on.
+
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <inputs>
+        <input name=[input name] feed=[feed name] start=[start el] end=[end 
el] partition=[partition]/>
+        ...
+    </inputs>
+...
+</process>
+</verbatim>
+
+Example:
+<verbatim>
+<feed name="feed1">
+...
+    <partition name="isFraud"/>
+    <partition name="country"/>
+    <frequency>hours(1)</frequency>
+    <locations>
+        <location type="data" 
path="/projects/bootcamp/feed1/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
+        ...
+    </locations>
+...
+</feed>
+<process name="sample-process">
+...
+    <inputs>
+        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)" 
partition="*/US"/>
+        ...
+    </inputs>
+...
+</process>
+</verbatim>
+The input for the workflow is an hourly feed and takes the 0th and 1st hour data of today (the day when the workflow runs).
+If the workflow is running for 2012-03-01T06:40Z, the inputs are /projects/bootcamp/feed1/2012-03-01-00/*/US and
+/projects/bootcamp/feed1/2012-03-01-01/*/US. The property for this input is
+input1=/projects/bootcamp/feed1/2012-03-01-00/*/US,/projects/bootcamp/feed1/2012-03-01-01/*/US
+
+Also, feeds with Hive table storage can be used as inputs to a process. 
Several parameters from inputs are passed as
+params to the user workflow or pig script.
+
+<verbatim>
+    ${wf:conf('falcon_input_database')} - database name associated with the 
feed for a given input
+    ${wf:conf('falcon_input_table')} - table name associated with the feed for 
a given input
+    ${wf:conf('falcon_input_catalog_url')} - Hive metastore URI for this input 
feed
+    ${wf:conf('falcon_input_partition_filter_pig')} - value of 
${coord:dataInPartitionFilter('$input', 'pig')}
+    ${wf:conf('falcon_input_partition_filter_hive')} - value of 
${coord:dataInPartitionFilter('$input', 'hive')}
+    ${wf:conf('falcon_input_partition_filter_java')} - value of 
${coord:dataInPartitionFilter('$input', 'java')}
+</verbatim>
+
+*NOTE:* input is the name of the input configured in the process, which is 
input.getName().
+<verbatim><input name="input" feed="clicks-raw-table" start="yesterday(0,0)" 
end="yesterday(20,0)"/></verbatim>
+
+Example workflow configuration:
+
+<verbatim>
+<configuration>
+  <property>
+    <name>falcon_input_database</name>
+    <value>falcon_db</value>
+  </property>
+  <property>
+    <name>falcon_input_table</name>
+    <value>input_table</value>
+  </property>
+  <property>
+    <name>falcon_input_catalog_url</name>
+    <value>thrift://localhost:29083</value>
+  </property>
+  <property>
+    <name>falcon_input_storage_type</name>
+    <value>TABLE</value>
+  </property>
+  <property>
+    <name>feedInstancePaths</name>
+    
<value>hcat://localhost:29083/falcon_db/output_table/ds=2012-04-21-00</value>
+  </property>
+  <property>
+    <name>falcon_input_partition_filter_java</name>
+    <value>(ds='2012-04-21-00')</value>
+  </property>
+  <property>
+    <name>falcon_input_partition_filter_hive</name>
+    <value>(ds='2012-04-21-00')</value>
+  </property>
+  <property>
+    <name>falcon_input_partition_filter_pig</name>
+    <value>(ds=='2012-04-21-00')</value>
+  </property>
+  ...
+</configuration>
+</verbatim>
+
+
+---+++ Optional Inputs
+Users can mention one or more inputs as optional inputs. In such cases the job does not wait on those inputs which are
+mentioned as optional. If they are present it considers them, otherwise it continues with the compulsory ones.
+Example:
+<verbatim>
+<feed name="feed1">
+...
+    <partition name="isFraud"/>
+    <partition name="country"/>
+    <frequency>hours(1)</frequency>
+    <locations>
+        <location type="data" 
path="/projects/bootcamp/feed1/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
+        ...
+    </locations>
+...
+</feed>
+<process name="sample-process">
+...
+    <inputs>
+        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)" 
partition="*/US"/>
+        <input name="input2" feed="feed2" start="today(0,0)" end="today(1,0)" 
partition="*/UK" optional="true" />
+        ...
+    </inputs>
+...
+</process>
+</verbatim>
+
+*Note:* This is only supported for !FileSystem storage but not Table storage 
at this point.
+
+
+---+++ Outputs
+Outputs define the output data that is generated by the workflow. A process can define 0 or more outputs. Each output is mapped to a feed and the output path is picked up from the feed definition. The output instance that should be generated is specified in terms of an [[FalconDocumentation][EL expression]].
+
+For each output, Falcon creates a property with the output name that contains the path of the output data. This can be used in workflows to store data in that path.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <outputs>
+        <output name=[output name] feed=[feed name] instance=[instance el]/>
+        ...
+    </outputs>
+...
+</process>
+</verbatim>
+
+Example:
+<verbatim>
+<feed name="feed2">
+...
+    <frequency>days(1)</frequency>
+    <locations>
+        <location type="data" 
path="/projects/bootcamp/feed2/${YEAR}-${MONTH}-${DAY}"/>
+        ...
+    </locations>
+...
+</feed>
+<process name="sample-process">
+...
+    <outputs>
+        <output name="output1" feed="feed2" instance="today(0,0)"/>
+        ...
+    </outputs>
+...
+</process>
+</verbatim>
+The output of the workflow is the feed instance for today. If the workflow is running for 2012-03-01T06:40Z,
+the workflow generates the output /projects/bootcamp/feed2/2012-03-01. The property for this output that is available
+for the workflow is: output1=/projects/bootcamp/feed2/2012-03-01
+
+Also, feeds with Hive table storage can be used as outputs to a process. 
Several parameters from outputs are passed as
+params to the user workflow or pig script.
+<verbatim>
+    ${wf:conf('falcon_output_database')} - database name associated with the 
feed for a given output
+    ${wf:conf('falcon_output_table')} - table name associated with the feed 
for a given output
+    ${wf:conf('falcon_output_catalog_url')} - Hive metastore URI for the given 
output feed
+    ${wf:conf('falcon_output_dataout_partitions')} - value of 
${coord:dataOutPartitions('$output')}
+</verbatim>
+
+*NOTE:* output is the name of the output configured in the process, which is 
output.getName().
+<verbatim><output name="output" feed="clicks-summary-table" 
instance="today(0,0)"/></verbatim>
+
+Example workflow configuration:
+
+<verbatim>
+<configuration>
+  <property>
+    <name>falcon_output_database</name>
+    <value>falcon_db</value>
+  </property>
+  <property>
+    <name>falcon_output_table</name>
+    <value>output_table</value>
+  </property>
+  <property>
+    <name>falcon_output_catalog_url</name>
+    <value>thrift://localhost:29083</value>
+  </property>
+  <property>
+    <name>falcon_output_storage_type</name>
+    <value>TABLE</value>
+  </property>
+  <property>
+    <name>feedInstancePaths</name>
+    
<value>hcat://localhost:29083/falcon_db/output_table/ds=2012-04-21-00</value>
+  </property>
+  <property>
+    <name>falcon_output_dataout_partitions</name>
+    <value>'ds=2012-04-21-00'</value>
+  </property>
+  ....
+</configuration>
+</verbatim>
+
+---+++ Custom Properties
+The properties are key value pairs that are passed to the workflow. These 
properties are optional and can be used
+in workflow to parameterize the workflow.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <properties>
+        <property name=[key] value=[value]/>
+        ...
+    </properties>
+...
+</process>
+</verbatim>
+
+The following are some special properties which, when present, are used by Falcon's launcher job; the same properties are also available in the workflow and can be propagated to the pig or M/R job.
+<verbatim>
+        <property name="queueName" value="hadoopQueue"/>
+        <property name="jobPriority" value="VERY_HIGH"/>
+        <!-- This property is used to turn off JMS notifications for this 
process. JMS notifications are enabled by default. -->
+        <property name="userJMSNotificationEnabled" value="false"/>
+</verbatim>
+
+---+++ Workflow
+
+The workflow defines the workflow engine that should be used and the path to the workflow on hdfs.
+The workflow definition on hdfs contains the actual job that should run and it should conform to
+the workflow specification of the engine specified. The libraries required by the workflow should
+be in the lib folder inside the workflow path.
+
+The properties defined in the cluster and the cluster properties (nameNode and jobTracker) will also
+be available to the workflow.
+
+There are 3 engines supported today.
+
+---++++ Oozie
+
+As part of oozie workflow engine support, users can embed an oozie workflow.
+Refer to oozie [[http://oozie.apache.org/docs/4.0.1/DG_Overview.html][workflow 
overview]] and
+[[http://oozie.apache.org/docs/4.0.1/WorkflowFunctionalSpec.html][workflow 
specification]] for details.
+
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <workflow engine=[workflow engine] path=[workflow path]/>
+...
+</process>
+</verbatim>
+
+Example:
+<verbatim>
+<process name="sample-process">
+...
+    <workflow engine="oozie" path="/projects/bootcamp/workflow"/>
+...
+</process>
+</verbatim>
+
+This defines the workflow engine to be oozie and the workflow xml is defined at
+/projects/bootcamp/workflow/workflow.xml. The libraries are at 
/projects/bootcamp/workflow/lib.
+
+---++++ Pig
+
+Falcon also adds the Pig engine which enables users to embed a Pig script as a 
process.
+
+Example:
+<verbatim>
+<process name="sample-process">
+...
+    <workflow engine="pig" path="/projects/bootcamp/pig.script"/>
+...
+</process>
+</verbatim>
+
+This defines the workflow engine to be pig and the pig script is defined at
+/projects/bootcamp/pig.script.
+
+Feeds with Hive table storage will send one more parameter apart from the 
general ones:
+<verbatim>$input_filter</verbatim>
+
+---++++ Hive
+
+Falcon also adds the Hive engine as part of Hive Integration which enables 
users to embed a Hive script as a process.
+This would enable users to create materialized queries in a declarative way.
+
+Example:
+<verbatim>
+<process name="sample-process">
+...
+    <workflow engine="hive" path="/projects/bootcamp/hive-script.hql"/>
+...
+</process>
+</verbatim>
+
+This defines the workflow engine to be hive and the hive script is defined at
+/projects/bootcamp/hive-script.hql.
+
+Feeds with Hive table storage will send one more parameter apart from the 
general ones:
+<verbatim>$input_filter</verbatim>
+
+---+++ Retry
+Retry policy defines how the workflow failures should be handled. Three retry 
policies are defined: periodic, exp-backoff(exponential backoff) and final. 
Depending on the delay and number of attempts, the workflow is re-tried after 
specific intervals.
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <retry policy=[retry policy] delay=[retry delay] attempts=[retry 
attempts]/>
+...
+</process>
+</verbatim>
+
+Examples:
+<verbatim>
+<process name="sample-process">
+...
+    <retry policy="periodic" delay="minutes(10)" attempts="3"/>
+...
+</process>
+</verbatim>
+The workflow is re-tried after 10 mins, 20 mins and 30 mins. With exponential 
backoff, the workflow will be re-tried after 10 mins, 20 mins and 40 mins.
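+
+For comparison, the exponential backoff behaviour described above would be requested with the exp-backoff policy, reusing the same delay and attempts:
+<verbatim>
+<process name="sample-process">
+...
+    <retry policy="exp-backoff" delay="minutes(10)" attempts="3"/>
+...
+</process>
+</verbatim>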
+
+---+++ Late data
+Late data handling defines how the late data should be handled. Each feed is defined with a late cut-off value which specifies the time till which late data is valid. For example, a late cut-off of hours(6) means that data for the nth hour can get delayed by up to 6 hours. The late data specification in the process defines how this late data is handled.
+
+The late data policy defines how frequently a check is done to detect late data. The policies supported are: backoff, exp-backoff (exponential backoff) and final (at the feed's late cut-off). The policy along with the delay defines the interval at which the late data check is done.
+
+Late input specification for each input defines the workflow that should run 
when late data is detected for that input. 
+
+Syntax:
+<verbatim>
+<process name="[process name]">
+...
+    <late-process policy=[late handling policy] delay=[delay]>
+        <late-input input=[input name] workflow-path=[workflow path]/>
+        ...
+    </late-process>
+...
+</process>
+</verbatim>
+
+Example:
+<verbatim>
+<feed name="feed1">
+...
+    <frequency>hours(1)</frequency>
+    <late-arrival cut-off="hours(6)"/>
+...
+</feed>
+<process name="sample-process">
+...
+    <inputs>
+        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)"/>
+        ...
+    </inputs>
+    <late-process policy="final">
+        <late-input input="input1" 
workflow-path="/projects/bootcamp/workflow/lateinput1" />
+        ...
+    </late-process>
+...
+</process>
+</verbatim>
+This late handling specifies that late data detection should run at the feed's late cut-off, which is 6 hours in this case. If there is late data, Falcon would run the workflow specified at /projects/bootcamp/workflow/lateinput1/workflow.xml
+
+*Note:* This is only supported for !FileSystem storage but not Table storage 
at this point.
+
+---+++ Email Notification
+
+<verbatim>
+    <notification type="email" to="bob@@xyz.com"/>
+</verbatim>
+Specifying the notification element with "type" property allows users to 
receive email notification when a scheduled process instance completes.
+Multiple recipients of an email can be provided as comma separated addresses 
with "to" property.
+To send email notification ensure that SMTP parameters are defined in Falcon 
startup.properties.
+Refer to [[FalconEmailNotification][Falcon Email Notification]] for more 
details.
+
+---+++ ACL
+
+A process has an ACL (Access Control List), useful for implementing permission requirements
+and providing a way to set different permissions for specific users or named groups.
+<verbatim>
+    <ACL owner="test-user" group="test-group" permission="*"/>
+</verbatim>
+ACL indicates the Access Control List for this process.
+owner is the owner of this entity.
+group is the group which has read access.
+permission indicates the permission.
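+
+Putting the process elements described above together, a rough sketch of a complete process definition follows. The entity names, paths and values are the illustrative ones used in the earlier examples; verify the exact element order against the process XSD.
+<verbatim>
+<process name="sample-process" xmlns="uri:falcon:process:0.1">
+    <clusters>
+        <cluster name="test-cluster1">
+            <validity start="2012-12-21T08:15Z" end="2100-01-01T00:00Z"/>
+        </cluster>
+    </clusters>
+    <parallel>1</parallel>
+    <order>FIFO</order>
+    <frequency>days(1)</frequency>
+    <inputs>
+        <input name="input1" feed="feed1" start="today(0,0)" end="today(1,0)"/>
+    </inputs>
+    <outputs>
+        <output name="output1" feed="feed2" instance="today(0,0)"/>
+    </outputs>
+    <workflow engine="oozie" path="/projects/bootcamp/workflow"/>
+    <retry policy="periodic" delay="minutes(10)" attempts="3"/>
+    <late-process policy="final">
+        <late-input input="input1" workflow-path="/projects/bootcamp/workflow/lateinput1"/>
+    </late-process>
+    <ACL owner="test-user" group="test-group" permission="*"/>
+</process>
+</verbatim>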
+

Added: falcon/trunk/releases/0.9/src/site/twiki/FalconCLI.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/releases/0.9/src/site/twiki/FalconCLI.twiki?rev=1730446&view=auto
==============================================================================
--- falcon/trunk/releases/0.9/src/site/twiki/FalconCLI.twiki (added)
+++ falcon/trunk/releases/0.9/src/site/twiki/FalconCLI.twiki Mon Feb 15 
05:08:31 2016
@@ -0,0 +1,534 @@
+---+FalconCLI
+
+FalconCLI is an interface between the user and Falcon. It is a command line utility provided by Falcon. FalconCLI supports Entity Management, Instance Management and Admin operations. There is a set of web services that are used by FalconCLI to interact with Falcon.
+
+---++Common CLI Options
+
+---+++Falcon URL
+
+An optional -url option indicating the URL of the Falcon system to run the command against can be provided. If not mentioned it will be picked from the system environment variable FALCON_URL. If FALCON_URL is not set then it will be picked from the client.properties file. If the option is not
+provided and also not set in client.properties, Falcon CLI will fail.
+
+---+++Proxy user support
+
+The -doAs option allows the current user to impersonate other users when interacting with the Falcon system. The current user must be configured as a proxyuser in the Falcon system. The proxyuser configuration may restrict from
+which hosts a user may impersonate users, as well as which groups of users can be impersonated.
+
+<a href="./FalconDocumentation.html#Proxyuser_support">Proxyuser support 
described here.</a>
+
+---+++Debug Mode
+
+If you export FALCON_DEBUG=true then the Falcon CLI will output the Web Services API details used by any commands you execute. This is useful for debugging purposes or to see how the Falcon CLI works with the WS API.
+Alternately, you can specify '-debug' through the CLI arguments to get the debug statements.
+Example:
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml -debug
+
+---++Entity Management Operations
+
+---+++Submit
+
+Submit option is used to set up entity definition.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -submit -type [cluster|datasource|feed|process] 
-file <entity-definition.xml>
+
+Example: 
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml
+
+Note: The url option in the above and all subsequent commands is optional. If 
not mentioned it will be picked from client.properties file. If the option is 
not provided and also not set in client.properties, Falcon CLI will fail.
+
+---+++Schedule
+
+Once submitted, an entity can be scheduled using the schedule option. Only process and feed entities can be scheduled.
+
+Usage:
+$FALCON_HOME/bin/falcon entity  -type [process|feed] -name <<name>> -schedule
+
+Optional Arg : -skipDryRun. When this argument is specified, Falcon skips 
oozie dryrun.
+
+Example:
+$FALCON_HOME/bin/falcon entity  -type process -name sampleProcess -schedule
+
+---+++Suspend
+
+Suspend on an entity results in suspension of the oozie bundle that was 
scheduled earlier through the schedule function. No further instances are 
executed on a suspended entity. Only schedule-able entities(process/feed) can 
be suspended.
+
+Usage:
+$FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -suspend
+
+---+++Resume
+
+Puts a suspended process/feed back to active, which in turn resumes applicable 
oozie bundle.
+
+Usage:
+ $FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -resume
+
+---+++Delete
+
+Delete removes the submitted entity definition for the specified entity and puts it into the archive.
+
+Usage:
+$FALCON_HOME/bin/falcon entity  -type [cluster|datasource|feed|process] -name 
<<name>> -delete
+
+---+++List
+
+Entities of a particular type can be listed with list sub-command.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -list
+
+Optional Args : -fields <<field1,field2>>
+-type <<[cluster|datasource|feed|process],[cluster|datasource|feed|process]>>
+-nameseq <<namesubsequence>> -tagkeys <<tagkeyword1,tagkeyword2>>
+-filterBy <<field1:value1,field2:value2>> -tags 
<<tagkey=tagvalue,tagkey=tagvalue>>
+-orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/EntityList.html">Optional params described here.</a>
+
+
+---+++Summary
+
+Summary of entities of a particular type and a cluster will be listed. The entity summary has the N most recent instances of each entity.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [feed|process] -summary
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" 
-fields <<field1,field2>>
+-filterBy <<field1:value1,field2:value2>> -tags 
<<tagkey=tagvalue,tagkey=tagvalue>>
+-orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10 
-numInstances 7
+
+<a href="./Restapi/EntitySummary.html">Optional params described here.</a>
+
+---+++Update
+
+Update operation allows an already submitted/scheduled entity to be updated. 
Cluster and datasource updates are
+currently not allowed.
+
+Usage:
+$FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -update 
-file <<path_to_file>>
+
+Optional Arg : -skipDryRun. When this argument is specified, Falcon skips 
oozie dryrun.
+
+Example:
+$FALCON_HOME/bin/falcon entity -type process -name HourlyReportsGenerator 
-update -file /process/definition.xml
+
+---+++Touch
+
+Force Update operation allows an already submitted/scheduled entity to be 
updated.
+
+Usage:
+$FALCON_HOME/bin/falcon entity  -type [feed|process] -name <<name>> -touch
+
+Optional Arg : -skipDryRun. When this argument is specified, Falcon skips 
oozie dryrun.
+
+---+++Status
+
+Status returns the current status of the entity.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name 
<<name>> -status
+
+---+++Dependency
+
+With the use of the dependency option, we can list all the entities on which the specified entity is dependent.
+For example for a feed, dependency returns the cluster name, and for a process it returns all the input feeds,
+output feeds and cluster names.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name 
<<name>> -dependency
+
+---+++Definition
+
+Definition option returns the entity definition submitted earlier during 
submit step.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name 
<<name>> -definition
+
+
+---+++Lookup
+
+The lookup option tells you which feed a given path belongs to. This can be useful in several scenarios, e.g. generally you would want to have a single definition for common feeds like metadata with the same location,
+otherwise it can result in a problem (different retention durations can result in surprises for one team). If you want to check if there are multiple definitions of the same metadata then you can pick
+an instance of that and run it through the lookup command like below.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type feed -lookup -path 
/data/projects/my-hourly/2014/10/10/23/
+
+If you have multiple feeds with location as 
/data/projects/my-hourly/${YEAR}/${MONTH}/${DAY}/${HOUR} then this command will 
return all of them.
+
+
+---+++SLAAlert
+<verbatim>
+Since: 0.8
+</verbatim>
+
+This command lists all the feed instances which have missed their SLA and are still not available. If a feed instance missed
+its SLA but is now available, then it will not be reported in the results. The purpose of this API is alerting and hence it
+doesn't return feed instances which missed their SLA but are available, as they don't require any action.
+
+* Currently sla monitoring is supported only for feeds.
+
+* Option end is optional and will default to current time if missing.
+
+* Option name is optional, if provided only instances of that feed will be 
considered.
+
+Usage:
+
+*Example 1*
+
+*$FALCON_HOME/bin/falcon entity -type feed -start 2014-09-05T00:00Z -slaAlert  
-end 2016-05-03T00:00Z -colo local*
+
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T11:59Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:00Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:01Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:02Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:03Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:04Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:05Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:06Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:07Z, tags: 
Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:08Z, tags: 
Missed SLA Low
+
+
+Response: default/Success!
+
+Request Id: default/216978070@qtp-830047511-4 - 
f5a6c129-ab42-4feb-a2bf-c3baed356248
+
+*Example 2*
+
+*$FALCON_HOME/bin/falcon entity -type feed -start 2014-09-05T00:00Z -slaAlert  
-end 2016-05-03T00:00Z -colo local -name in*
+
+name: in, type: FEED, cluster: local, instanceTime: 2015-09-26T06:00Z, tags: 
Missed SLA High
+
+Response: default/Success!
+
+Request Id: default/1580107885@qtp-830047511-7 - 
f16cbc51-5070-4551-ad25-28f75e5e4cf2
+
+
+---++Instance Management Options
+
+---+++Kill
+
+Kill sub-command is used to kill all the instances of the specified process 
whose nominal time is between the given start time and end time.
+
+Note:
+1. The start time and end time need to be specified in TZ format.
+Example:   01 Jan 2012 01:00  => 2012-01-01T01:00Z
+
+2. Process name is a compulsory parameter for each instance management command.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -kill 
-start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+
+---+++Suspend
+
+Suspend is used to suspend an instance or instances for the given process. This option pauses the parent workflow at the state which it was in at the time of execution of this command.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> 
-suspend -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+
+---+++Continue
+
+Continue option is used to continue the failed workflow instance. This option 
is valid only for process instances in terminal state, i.e. KILLED or FAILED.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> 
-continue -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+
+---+++Rerun
+
+Rerun option is used to rerun instances of a given process. On issuing a 
rerun, by default the execution resumes from the last failed node in the 
workflow. This option is valid only for process instances in terminal state, 
i.e. SUCCEEDED, KILLED or FAILED.
+If one wants to forcefully rerun the entire workflow, -force should be passed 
along with -rerun
+Additionally, you can also specify properties to override via a properties 
file and this will be prioritized over force option in case of contradiction.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -rerun 
-start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" [-force] [-file 
<<properties file>>]
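+
+Example (hypothetical instance range; -force requests a full workflow rerun as described above):
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -rerun -start "2012-01-01T01:00Z" -end "2012-01-01T02:00Z" -force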
+
+---+++Resume
+
+Resume option is used to resume any instance that  is in suspended state.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -resume 
-start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+
+---+++Status
+
+The status option via CLI can be used to get the status of a single or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status of the instance, the instance time is also returned. The log location gives the oozie workflow url.
+If the instance is in WAITING state, missing dependencies are listed.
+The job urls are populated for all actions of the user workflow and for non-succeeded actions of the main workflow. The user then need not go to the underlying scheduler to get the job urls when needed to debug an issue in the job.
+
+Example : Suppose a process has 3 instances, one has succeeded, one is in running state and the other one is waiting, the expected output is:
+
+{"status":"SUCCEEDED","message":"getStatus is successful","instances":[{"instance":"2012-05-07T05:02Z","status":"SUCCEEDED","logFile":"http://oozie-dashboard-url"},{"instance":"2012-05-07T05:07Z","status":"RUNNING","logFile":"http://oozie-dashboard-url"}, {"instance":"2010-01-02T11:05Z","status":"WAITING"}]}
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" 
-colo <<colo>>
+-filterBy <<field1:value1,field2:value2>> -lifecycle <<lifecycles>>
+-orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/InstanceStatus.html"> Optional params described here.</a>
+
+---+++List
+
+The list option via the CLI can be used to get a single instance or multiple instances. If an instance is not yet materialized but is within the process validity range, WAITING is returned as its state. The instance time is also returned, and the log location gives the Oozie workflow URL.
+If an instance is in the WAITING state, its missing dependencies are listed.
+
+Example : Suppose a process has 3 instances: one has succeeded, one is running and the other is waiting. The expected output is:
+
+{"status":"SUCCEEDED","message":"getStatus is 
successful","instances":[{"instance":"2012-05-07T05:02Z","status":"SUCCEEDED","logFile":"http://oozie-dashboard-url"},{"instance":"2012-05-07T05:07Z","status":"RUNNING","logFile":"http://oozie-dashboard-url"},
 {"instance":"2010-01-02T11:05Z","status":"WAITING"}]}
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -list
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+-colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/InstanceList.html">Optional params described here.</a>
+
+---+++Summary
+
+The summary option via the CLI can be used to get the consolidated status of the instances in the specified time period.
+Each status, along with the corresponding instance count, is listed for each applicable colo.
+Unscheduled instances in the specified time period are reported as UNSCHEDULED in the output to provide more clarity.
+
+Example : Suppose a process has 3 instances: one has succeeded, one is running and the other is waiting. The expected output is:
+
+{"status":"SUCCEEDED","message":"getSummary is successful", 
instancesSummary:[{"cluster": <<name>> "map":[{"SUCCEEDED":"1"}, 
{"WAITING":"1"}, {"RUNNING":"1"}]}]}
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -summary
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" 
-colo <<colo>>
+-filterBy <<field1:value1,field2:value2>> -lifecycle <<lifecycles>>
+-orderBy field -sortOrder <<sortOrder>>
+
+<a href="./Restapi/InstanceSummary.html">Optional params described here.</a>
+
+---+++Running
+
+The running option lists all the running instances of the given process.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -running
+
+Optional Args : -colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/InstanceRunning.html">Optional params described here.</a>
+
+---+++FeedInstanceListing
+
+Gets the availability of Falcon feed instances.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type feed -name <<name>> -listing
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+-colo <<colo>>
+
+<a href="./Restapi/FeedInstanceListing.html">Optional params described 
here.</a>
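+
+Example (hypothetical feed name and placeholder times):
+$FALCON_HOME/bin/falcon instance -type feed -name sample-feed -listing -start "2012-05-07T00:00Z" -end "2012-05-07T06:00Z"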
+
+---+++Logs
+
+Gets the logs for instance actions.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -logs
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" 
-runid <<runid>>
+-colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
+<a href="./Restapi/InstanceLogs.html">Optional params described here.</a>
+
+---+++LifeCycle
+
+Describes the list of life cycles of an entity: for a feed it can be replication/retention, and for a process it can be execution.
+This can be used with the instance management options. The default values are replication for feeds and execution for processes.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status -lifecycle <<lifecycletype>> -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
+
+---+++Triage
+
+Given a feed/process instance, this command traces its ancestors to find which of them have failed. It is useful when a lot of instances in a pipeline are failing, as it helps find the root cause of the pipeline being stuck.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -triage -type <<feed/process>> -name <<name>> -start "yyyy-MM-dd'T'HH:mm'Z'"
+
+---+++Params
+
+Displays the workflow params of a given instance. The start time is treated as the nominal time of that instance; the end time is not considered.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -params -start "yyyy-MM-dd'T'HH:mm'Z'"
+
+
+
+---+++Dependency
+Displays the instances that are dependent on the given instance. For example, for a given process instance it will list all the input feed instances (if any) and the output feed instances (if any).
+
+An example use case of this command is as follows:
+Suppose you find out that the data in a feed instance was incorrect and you need to figure out which process instances consumed this feed instance, so that you can reprocess them after correcting the feed instance. You can give the feed instance, and this command will tell you which process instance produced this feed and which process instances consumed it.
+
+NOTE:
+1. instanceTime must be a valid instance time, e.g. the instanceTime of a feed should be within its validity range on the applicable clusters, and it should be in the range of instances produced by the producer process (if any).
+
+2. For processes with inputs like latest(), which vary with time, the results are not guaranteed to be correct.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -dependency -instanceTime "yyyy-MM-dd'T'HH:mm'Z'"
+
+For example:
+$FALCON_HOME/bin/falcon instance -dependency -type feed -name out -instanceTime 2014-12-15T00:00Z
+name: producer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:00Z, tags: Output
+name: consumer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:03Z, tags: Input
+name: consumer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:04Z, tags: Input
+name: consumer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:02Z, tags: Input
+name: consumer, type: PROCESS, cluster: local, instanceTime: 2014-12-15T00:05Z, tags: Input
+
+
+Response: default/Success!
+
+Request Id: default/1125035965@qtp-503156953-7 - 
447be0ad-1d38-4dce-b438-20f3de69b172
+
+
+<a href="./Restapi/InstanceDependency.html">Optional params described here.</a>
+
+---++ Metadata Lineage Options
+
+---+++Lineage
+
+Returns the relationship between processes and feeds in a given pipeline in <a href="http://www.graphviz.org/content/dot-language">dot</a> format.
+You can use the output to view a graphical representation of the DAG using an online graphviz viewer like <a href="http://graphviz-dev.appspot.com/">this</a>.
+
+
+Usage:
+
+$FALCON_HOME/bin/falcon metadata -lineage -pipeline my-pipeline
+
+pipeline is a mandatory option.
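+
+For example, you can save the output to a file and render it locally (this assumes Graphviz is installed and uses a placeholder pipeline name):
+$FALCON_HOME/bin/falcon metadata -lineage -pipeline sample-pipeline > pipeline.dot
+dot -Tpng pipeline.dot -o pipeline.png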
+
+
+
+---+++ Vertex
+
+Get the vertex with the specified id.
+
+Usage:
+$FALCON_HOME/bin/falcon metadata -vertex -id <<id>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -vertex -id 4
+
+---+++ Vertices
+
+Get all vertices for a key index given the specified value.
+
+Usage:
+$FALCON_HOME/bin/falcon metadata -vertices -key <<key>> -value <<value>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -vertices -key type -value feed-instance
+
+---+++ Vertex Edges
+
+Get the adjacent vertices or edges of the vertex with the specified direction.
+
+Usage:
+$FALCON_HOME/bin/falcon metadata -edges -id <<vertex-id>> -direction <<direction>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -edges -id 4 -direction both
+$FALCON_HOME/bin/falcon metadata -edges -id 4 -direction inE
+
+---+++ Edge
+
+Get the edge with the specified id.
+
+Usage:
+$FALCON_HOME/bin/falcon metadata -edge -id <<id>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -edge -id Q9n-Q-5g
+
+---++ Metadata Discovery Options
+
+---+++ List
+
+Lists all dimensions of the given type. If the user provides the optional cluster param, only the dimensions related to that cluster are listed.
+Usage:
+$FALCON_HOME/bin/falcon metadata -list -type [cluster_entity|datasource_entity|feed_entity|process_entity|user|colo|tags|groups|pipelines|replication_metrics]
+
+Optional Args : -cluster <<cluster name>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -list -type process_entity -cluster primary-cluster
+$FALCON_HOME/bin/falcon metadata -list -type tags
+
+
+To display replication metrics from a recipe-based replication process or from feed replication:
+Usage:
+$FALCON_HOME/bin/falcon metadata -list -type replication_metrics -process/-feed <entity name>
+Optional Args : -numResults <<value>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -list -type replication_metrics -process hdfs-replication
+$FALCON_HOME/bin/falcon metadata -list -type replication_metrics -feed fs-replication
+
+
+---+++ Relations
+
+Lists all dimensions related to the specified dimension, identified by dimension-type and dimension-name.
+Usage:
+$FALCON_HOME/bin/falcon metadata -relations -type [cluster_entity|feed_entity|process_entity|user|colo|tags|groups|pipelines] -name <<Dimension Name>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -relations -type process_entity -name sample-process
+
+
+---++Admin Options
+
+---+++Help
+
+Usage:
+$FALCON_HOME/bin/falcon admin -help
+
+---+++Version
+
+Version returns the current version of Falcon installed.
+Usage:
+$FALCON_HOME/bin/falcon admin -version
+
+---+++Status
+
+Status returns the current state of Falcon (running or stopped).
+Usage:
+$FALCON_HOME/bin/falcon admin -status
+
+
+---++ Recipe Options
+
+---+++ Submit Recipe
+
+Submit the specified recipe.
+
+Usage:
+$FALCON_HOME/bin/falcon recipe -name <name>
+Name of the recipe. The user should have defined <name>-template.xml and <name>.properties in the path specified by falcon.recipe.path in the client.properties file. If falcon.recipe.path is not specified in client.properties, the falcon.home path is used instead.
+If the path is not specified in client.properties and the files also cannot be found at falcon.home, the Falcon CLI will fail.
+
+Optional Args : -tool <recipeToolClassName>
+Falcon provides a base tool that recipes can override. If this option is not specified, the default recipe tool (RecipeTool) is used. This option is required if the user defines their own recipe tool class.
+
+Example:
+$FALCON_HOME/bin/falcon recipe -name hdfs-replication
+

