Added: falcon/trunk/general/src/site/twiki/HiveDR.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/HiveDR.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/HiveDR.twiki (added)
+++ falcon/trunk/general/src/site/twiki/HiveDR.twiki Mon Feb 15 05:48:00 2016
@@ -0,0 +1,74 @@
+---+Hive Disaster Recovery
+
+
+---++Overview
+Falcon provides a feature to replicate Hive metadata and data events from a source cluster
+to a destination cluster. This is supported for both secure and unsecure clusters through Falcon Recipes.
+
+
+---++Prerequisites
+Following are the prerequisites to use Hive DR:
+
+   * *Hive 1.2.0+*
+   * *Oozie 4.2.0+*
+
+*Note:* Set the following properties in hive-site.xml to enable replication of Hive events on both the source and destination Hive clusters:
+<verbatim>
+    <property>
+        <name>hive.metastore.event.listeners</name>
+        <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
+        <description>event listeners that are notified of any metastore 
changes</description>
+    </property>
+
+    <property>
+        <name>hive.metastore.dml.events</name>
+        <value>true</value>
+    </property>
+</verbatim>
+
+---++ Usage
+---+++ Bootstrap
+   Perform an initial bootstrap of tables and databases from the source cluster to the destination cluster.
+   * *Database Bootstrap*
+     For bootstrapping DB replication, the destination DB should be created first. This step is expected,
+     since DB replication definitions can be set up by users only on pre-existing DBs. Second, export all tables in
+     the source DB and import them in the destination DB, as described in Table bootstrap.
+
+   * *Table Bootstrap*
+     For bootstrapping table replication, after having turned on the !DbNotificationListener
+     on the source DB, perform an export of the table, distcp the export over to the destination
+     warehouse and do an import over there, as shown in the sketch after this list. See
+     [[https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport][Hive Export-Import]] for syntax details
+     and examples.
+     This sets up the destination table so that the events on the source cluster that modify the table
+     will then be replicated.
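+
+   The following is a minimal bootstrap sketch, not the only possible sequence; the database name "salesdb",
+   table name "sales", staging path and !NameNode endpoints below are placeholders for illustration:
+   <verbatim>
+    ## on the destination cluster: create the pre-existing database required for DB replication (placeholder name)
+    $hive -e "CREATE DATABASE IF NOT EXISTS salesdb"
+
+    ## on the source cluster: export the table to a staging directory (placeholder path)
+    $hive -e "USE salesdb; EXPORT TABLE sales TO '/tmp/bootstrap/sales'"
+
+    ## copy the exported data and metadata over to the destination warehouse (placeholder endpoints)
+    $hadoop distcp hdfs://source-nn:8020/tmp/bootstrap/sales hdfs://dest-nn:8020/tmp/bootstrap/sales
+
+    ## on the destination cluster: import the table
+    $hive -e "USE salesdb; IMPORT TABLE sales FROM '/tmp/bootstrap/sales'"
+   </verbatim>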
+
+---+++ Setup cluster definition
+   <verbatim>
+    $FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml
+   </verbatim>
+
+---+++ Update recipes properties
+   Copy the Hive DR recipe properties, workflow and template files from $FALCON_HOME/data-mirroring/hive-disaster-recovery to an accessible
+   directory path or to the recipe directory path (*falcon.recipe.path=<recipe directory path>*). *"falcon.recipe.path"* must be specified
+   in the Falcon conf client.properties. Then update the copied recipe properties file with the attributes required to replicate metadata and data from the source cluster to
+   the destination cluster for Hive DR.
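+
+   For example, a hypothetical client.properties entry (the directory path is a placeholder):
+   <verbatim>
+    ## the recipe directory path is a placeholder; point it at the copied recipe files
+    falcon.recipe.path=/apps/falcon/recipes/hive-disaster-recovery
+   </verbatim>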
+
+---+++ Submit Hive DR recipe
+   After updating the recipe properties file with the required attributes in the directory path or in falcon.recipe.path,
+   there are two ways of submitting the Hive DR recipe:
+
+   * 1. Specify the Falcon recipe properties file on the recipe command line.
+   <verbatim>
+       $FALCON_HOME/bin/falcon recipe -name hive-disaster-recovery -operation 
HIVE_DISASTER_RECOVERY
+       -properties /cluster/hive-disaster-recovery.properties
+   </verbatim>
+
+   * 2. Use the Falcon recipe path specified in the Falcon conf client.properties.
+   <verbatim>
+       $FALCON_HOME/bin/falcon recipe -name hive-disaster-recovery -operation 
HIVE_DISASTER_RECOVERY
+   </verbatim>
+
+
+*Note:*
+   * The recipe properties file, workflow file and template file names must match the recipe name, must be unique, and must be in the same directory.
+   * If Kerberos security is enabled on the cluster, use the secure templates for Hive DR from $FALCON_HOME/data-mirroring/hive-disaster-recovery.

Added: falcon/trunk/general/src/site/twiki/ImportExport.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/ImportExport.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/ImportExport.twiki (added)
+++ falcon/trunk/general/src/site/twiki/ImportExport.twiki Mon Feb 15 05:48:00 
2016
@@ -0,0 +1,236 @@
+---+Falcon Data Import and Export
+
+
+---++Overview
+
+Falcon provides constructs to periodically bring raw data from external data sources (like databases, drop boxes etc.)
+onto Hadoop and to push derived data computed on Hadoop onto external data sources.
+
+As of this release, Falcon only supports relational databases (e.g. Oracle, MySQL) via JDBC as external data sources.
+Future releases will add support for other external data sources.
+
+
+---++Prerequisites
+
+Following are the prerequisites to import data from and export data to external databases.
+
+   * *Sqoop 1.4.6+*
+   * *Oozie 4.2.0+*
+   * *Appropriate database connector*
+
+
+*Note:* Falcon uses Sqoop for import/export operations. Sqoop requires the appropriate database driver to connect to
+the relational database. Please refer to the Sqoop documentation for any Sqoop-related questions. Please make sure
+the database driver jar is copied into the Oozie share lib for Sqoop.
+
+<verbatim>
+For example, in order to import and export with MySQL, please make sure the latest MySQL connector
+mysql-connector-java-5.1.31.jar is copied into Oozie's Sqoop share lib
+
+/user/oozie/share/lib/{lib-dir}/sqoop/mysql-connector-java-5.1.31.jar
+
+where the {lib-dir} value varies across Oozie deployments.
+
+</verbatim>
+
+---++ Usage
+---+++ Entity Definition and Setup
+   * *Datasource Entity*
+      The Datasource entity abstracts connection and credential details for external data sources. The Datasource entity
+      supports read and write interfaces with specific credentials. The default credential will be used if the read
+      or write interface does not have its own credentials. In general, the Datasource entity will be defined by a
+      system administrator. Please refer to the datasource XSD for more details.
+
+      The following example defines a Datasource entity for a MySQL database. The import operation will use
+      the read interface with url "jdbc:mysql://dbhost/test", user name "import_usr" and password text "sqoop".
+      Whereas the export operation will use the write interface with url "jdbc:mysql://dbhost/test", user
+      name "export_usr" and the password specified in an HDFS file at the location "/user/ambari-qa/password-store/password_write_user".
+
+      The default credential specifies the password using password text and will be used if either the read or write interface
+      does not provide its own credentials.
+
+      The available read and write interfaces enable database administrators 
to segregate read and write workloads.
+
+      <verbatim>
+
+      File: mysql-database.xml
+
+      <?xml version="1.0" encoding="UTF-8"?>
+      <datasource colo="west-coast" description="MySQL database on west coast" 
type="mysql" name="mysql-db" xmlns="uri:falcon:datasource:0.1">
+          <tags>[email protected], 
[email protected]</tags>
+          <interfaces>
+              <!-- ***** read interface ***** -->
+              <interface type="readonly" endpoint="jdbc:mysql://dbhost/test">
+                  <credential type="password-text">
+                      <userName>import_usr</userName>
+                      <passwordText>sqoop</passwordText>
+                  </credential>
+              </interface>
+
+              <!-- ***** write interface ***** -->
+              <interface type="write"  endpoint="jdbc:mysql://dbhost/test">
+                  <credential type="password-file">
+                      <userName>export_usr</userName>
+                      
<passwordFile>/user/ambari-qa/password-store/password_write_user</passwordFile>
+                  </credential>
+              </interface>
+
+              <!-- *** default credential *** -->
+              <credential type="password-text">
+                <userName>sqoop2_user</userName>
+                <passwordText>sqoop</passwordText>
+              </credential>
+
+          </interfaces>
+
+          <driver>
+              <clazz>com.mysql.jdbc.Driver</clazz>
+              <jar>/user/oozie/share/lib/lib_20150721010816/sqoop/mysql-connector-java-5.1.31.jar</jar>
+          </driver>
+      </datasource>
+      </verbatim>
+
+   * *Feed Entity*
+      The Feed entity now enables users to define IMPORT and EXPORT policies in addition to RETENTION and REPLICATION.
+      The IMPORT and EXPORT policies will refer to an already defined Datasource entity for connection and credential
+      details and take a table name from the policy to operate on. Please refer to the feed entity XSD for details.
+
+      The following example defines a Feed entity with IMPORT and EXPORT policies. Both the IMPORT and EXPORT operations
+      refer to the datasource entity "mysql-db". The IMPORT operation will use the read interface and credentials while
+      the EXPORT operation will use the write interface and credentials. A feed instance is created every hour
+      since the frequency of the Feed is hours(1), and the Feed instances are deleted after 90 days because of the
+      retention policy.
+
+
+      <verbatim>
+
+      File: customer_email_feed.xml
+
+      <?xml version="1.0" encoding="UTF-8"?>
+      <!--
+       A feed representing Hourly customer email data retained for 90 days
+       -->
+      <feed description="Raw customer email feed" name="customer_feed" 
xmlns="uri:falcon:feed:0.1">
+          <tags>externalSystem=USWestEmailServers,classification=secure</tags>
+          <groups>DataImportPipeline</groups>
+          <frequency>hours(1)</frequency>
+          <late-arrival cut-off="hours(4)"/>
+          <clusters>
+              <cluster name="primaryCluster" type="source">
+                  <validity start="2015-12-15T00:00Z" end="2016-03-31T00:00Z"/>
+                  <retention limit="days(90)" action="delete"/>
+                  <import>
+                      <source name="mysql-db" tableName="simple">
+                          <extract type="full">
+                              <mergepolicy>snapshot</mergepolicy>
+                          </extract>
+                          <fields>
+                              <includes>
+                                  <field>id</field>
+                                  <field>name</field>
+                              </includes>
+                          </fields>
+                      </source>
+                      <arguments>
+                          <argument name="--split-by" value="id"/>
+                          <argument name="--num-mappers" value="2"/>
+                      </arguments>
+                  </import>
+                  <export>
+                        <target name="mysql-db" tableName="simple_export">
+                            <load type="insert"/>
+                            <fields>
+                              <includes>
+                                <field>id</field>
+                                <field>name</field>
+                              </includes>
+                            </fields>
+                        </target>
+                        <arguments>
+                             <argument name="--update-key" value="id"/>
+                        </arguments>
+                    </export>
+              </cluster>
+          </clusters>
+
+          <locations>
+              <location type="data" 
path="/user/ambari-qa/falcon/demo/primary/importfeed/${YEAR}-${MONTH}-${DAY}-${HOUR}-${MINUTE}"/>
+              <location type="stats" path="/none"/>
+              <location type="meta" path="/none"/>
+          </locations>
+
+          <ACL owner="ambari-qa" group="users" permission="0755"/>
+          <schema location="/none" provider="none"/>
+
+      </feed>
+      </verbatim>
+
+   * *Import policy*
+     The import policy uses the datasource entity specified in the "source" to 
connect to the database. The tableName
+     specified should exist in the source datasource.
+
+     The extraction type specifies whether to pull data from the external datasource in "full" every time or "incrementally".
+     The mergepolicy specifies how to organize the data on Hadoop (snapshot or append, i.e. time-series partitions).
+     The valid combinations are:
+      * [full,snapshot] - data is extracted in full and dumped into the feed instance location.
+      * [incremental, append] - data is extracted incrementally using the key specified in the *deltacolumn*
+        and added as a partition to the feed instance location.
+      * [incremental, snapshot] - data is extracted incrementally and merged with the already existing data on Hadoop to
+        produce one latest feed instance. *This feature is not supported currently*. The use case for this feature is
+        to efficiently import very large dimension tables that have updates and inserts onto Hadoop and make them available
+        as a snapshot with the latest updates to consumers.
+
+      The following example defines an incremental extraction with append 
organization:
+
+      <verbatim>
+           <import> 
+                <source name="mysql-db" tableName="simple">
+                    <extract type="incremental">
+                        <deltacolumn>modified_time</deltacolumn>
+                        <mergepolicy>append</mergepolicy>
+                    </extract>  
+                    <fields>
+                        <includes>
+                            <field>id</field>
+                            <field>name</field>
+                        </includes>
+                    </fields>
+                </source>
+                <arguments>
+                    <argument name="--split-by" value="id"/>
+                    <argument name="--num-mappers" value="2"/>
+                </arguments>
+            </import>
+       </verbatim>
+
+      
+     The fields option enables users to control which fields get imported. By default, all fields get imported. The "includes" option
+     brings in only the specified fields. The "excludes" option brings in all the fields other than those specified.
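+
+     For example, a sketch of the "excludes" variant (the "ssn" field name is a placeholder):
+      <verbatim>
+           <!-- import every field except the placeholder "ssn" column -->
+           <fields>
+               <excludes>
+                   <field>ssn</field>
+               </excludes>
+           </fields>
+      </verbatim>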
+
+     The arguments section enables users to pass in any extra arguments needed for fine control over the underlying implementation --
+     in this case, Sqoop.
+
+   * *Export policy*
+
+     The export policy, like import, uses the datasource for connecting to the database. The load type specifies whether to insert
+     or only update data in the external table. The fields option behaves the same way as in the import policy.
+     The tableName specified should exist in the external datasource.
+
+---+++ Operation
+   Once the Datasource and Feed entities with import and export policies are defined, users can submit and schedule
+   the import and export operations via the CLI and REST API as below:
+
+   <verbatim>
+
+    ## submit the mysql-db datasource defined in the file mysql_datasource.xml
+    falcon entity -submit -type datasource -file mysql_datasource.xml
+
+    ## submit the customer_feed specified in the customer_email_feed.xml
+    falcon entity -submit -type feed -file customer_email_feed.xml
+
+    ## schedule the customer_feed
+    falcon entity -schedule -type feed -name customer_feed
+
+   </verbatim>
+
+   Falcon will create the corresponding Oozie bundles with coordinators and workflows for the import and export operations.

Modified: falcon/trunk/general/src/site/twiki/InstallationSteps.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/InstallationSteps.twiki?rev=1730449&r1=1730448&r2=1730449&view=diff
==============================================================================
--- falcon/trunk/general/src/site/twiki/InstallationSteps.twiki (original)
+++ falcon/trunk/general/src/site/twiki/InstallationSteps.twiki Mon Feb 15 
05:48:00 2016
@@ -1,281 +1,75 @@
----++ Building & Installing Falcon
+---+Building & Installing Falcon
 
 
----+++ Building Falcon
+---++Building Falcon
 
-<verbatim>
-You would need the following installed to build Falcon
-
-* JDK 1.7
-* Maven 3.x
-
-git clone https://git-wip-us.apache.org/repos/asf/falcon.git falcon
+---+++Prerequisites
 
-cd falcon
+   * JDK 1.7/1.8
+   * Maven 3.2.x
 
-export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify" && mvn clean 
install
 
-[optionally -Dhadoop.version=<<hadoop.version>> can be appended to build for a 
specific version of hadoop]
-*Note:* Falcon drops support for Hadoop-1 and only supports Hadoop-2 from 
Falcon 0.6 onwards
-[optionally -Doozie.version=<<oozie version>> can be appended to build with a 
specific version of oozie.
-Oozie versions >= 4 are supported]
-Falcon build with JDK 1.7 using -noverify option
 
-</verbatim>
-
-Once the build successfully completes, artifacts can be packaged for 
deployment. The package can be built in embedded or distributed mode.
+---+++Step 1 - Clone the Falcon repository
 
-*Embedded Mode*
 <verbatim>
-
-mvn clean assembly:assembly -DskipTests -DskipCheck=true
-
+$git clone https://git-wip-us.apache.org/repos/asf/falcon.git falcon
 </verbatim>
 
-Tar can be found in {project 
dir}/target/apache-falcon-${project.version}-bin.tar.gz
 
-Tar is structured as follows
+---+++Step 2 - Build Falcon
 
 <verbatim>
-
-|- bin
-   |- falcon
-   |- falcon-start
-   |- falcon-stop
-   |- falcon-config.sh
-   |- service-start.sh
-   |- service-stop.sh
-|- conf
-   |- startup.properties
-   |- runtime.properties
-   |- client.properties
-   |- log4j.xml
-   |- falcon-env.sh
-|- docs
-|- client
-   |- lib (client support libs)
-|- server
-   |- webapp
-      |- falcon.war
-|- hadooplibs
-|- README
-|- NOTICE.txt
-|- LICENSE.txt
-|- DISCLAIMER.txt
-|- CHANGES.txt
+$cd falcon
+$export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify" && mvn clean 
install
 </verbatim>
+It builds and installs the package into the local repository, for use as a 
dependency in other projects locally.
 
-*Distributed Mode*
+[optionally -Dhadoop.version=<<hadoop.version>> can be appended to build for a 
specific version of Hadoop]
 
-<verbatim>
+*NOTE:* Falcon drops support for Hadoop-1 and only supports Hadoop-2 from 
Falcon 0.6 onwards
+[optionally -Doozie.version=<<oozie version>> can be appended to build with a 
specific version of Oozie. Oozie versions
+>= 4 are supported]
+NOTE: Falcon builds with JDK 1.7/1.8 using the -noverify option.
+      To compile Falcon with Hive replication, "-P hadoop-2,hivedr" can optionally be appended. For this, Hive >= 1.2.0
+      and Oozie >= 4.2.0 should be available.
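+
+For example, a full build command that enables Hive replication could look like the following sketch, combining the options above:
+<verbatim>
+$export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify" && mvn clean install -P hadoop-2,hivedr
+</verbatim>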
 
-mvn clean assembly:assembly -DskipTests -DskipCheck=true -Pdistributed,hadoop-2
 
-</verbatim>
 
-Tar can be found in {project 
dir}/target/apache-falcon-distributed-${project.version}-server.tar.gz
+---+++Step 3 - Package and Deploy Falcon
 
-Tar is structured as follows
+Once the build successfully completes, artifacts can be packaged for 
deployment using the assembly plugin. The Assembly
+Plugin for Maven is primarily intended to allow users to aggregate the project 
output along with its dependencies,
+modules, site documentation, and other files into a single distributable 
archive. There are two basic ways in which you
+can deploy Falcon - Embedded mode (also known as Stand Alone Mode) and Distributed mode. Your next steps will vary based
+on the mode in which you want to deploy Falcon.
 
-<verbatim>
+*NOTE* : Oozie is being extended by Falcon (particularly on el-extensions) and 
hence the need for Falcon to build &
+re-package Oozie, so that users of Falcon can work with the right Oozie setup. 
Though Oozie is packaged by Falcon, it
+needs to be deployed separately by the administrator and is not auto-deployed along with Falcon.
 
-|- bin
-   |- falcon
-   |- falcon-start
-   |- falcon-stop
-   |- falcon-config.sh
-   |- service-start.sh
-   |- service-stop.sh
-   |- prism-stop
-   |- prism-start
-|- conf
-   |- startup.properties
-   |- runtime.properties
-   |- client.properties
-   |- log4j.xml
-   |- falcon-env.sh
-|- docs
-|- client
-   |- lib (client support libs)
-|- server
-   |- webapp
-      |- falcon.war
-      |- prism.war
-|- hadooplibs
-|- README
-|- NOTICE.txt
-|- LICENSE.txt
-|- DISCLAIMER.txt
-|- CHANGES.txt
-</verbatim>
 
----+++ Installing & running Falcon
+---++++Embedded/Stand Alone Mode
+Embedded mode is useful when the Hadoop jobs and relevant data processing involve only one Hadoop cluster. In this mode
+ there is a single Falcon server that contacts the scheduler to schedule jobs on Hadoop. All the process/feed requests
+ like submit, schedule, suspend, kill etc. are sent to this server. To run Falcon in this mode, use a
+ Falcon package that has been built with the standalone option. You can find the instructions for Embedded mode setup
+ [[Embedded-mode][here]].
 
-*Installing falcon*
-<verbatim>
-tar -xzvf {falcon package}
-cd falcon-distributed-${project.version} or falcon-${project.version}
-</verbatim>
 
-*Configuring Falcon*
+---++++Distributed Mode
+Distributed mode is for multiple (colos) instances of Hadoop clusters, and multiple workflow schedulers to handle them.
+In this mode Falcon has 2 components: Prism and Server(s). Both Prism and Server(s) have their own config
+locations (startup and runtime properties). In this mode Prism acts as a contact point for the Falcon servers. While
+ all commands are available through Prism, only read and instance APIs are available through the Server. You can find the
+ instructions for Distributed Mode setup [[Distributed-mode][here]].
 
-By default config directory used by falcon is {package dir}/conf. To override 
this set environment variable FALCON_CONF to the path of the conf dir.
 
-falcon-env.sh has been added to the falcon conf. This file can be used to set 
various environment variables that you need for you services.
-In addition you can set any other environment variables you might need. This 
file will be sourced by falcon scripts before any commands are executed. The 
following environment variables are available to set.
 
+---+++Preparing Oozie and Falcon packages for deployment
 <verbatim>
-# The java implementation to use. If JAVA_HOME is not found we expect java and 
jar to be in path
-#export JAVA_HOME=
-
-# any additional java opts you want to set. This will apply to both client and 
server operations
-#export FALCON_OPTS=
-
-# any additional java opts that you want to set for client only
-#export FALCON_CLIENT_OPTS=
-
-# java heap size we want to set for the client. Default is 1024MB
-#export FALCON_CLIENT_HEAP=
-
-# any additional opts you want to set for prism service.
-#export FALCON_PRISM_OPTS=
-
-# java heap size we want to set for the prism service. Default is 1024MB
-#export FALCON_PRISM_HEAP=
-
-# any additional opts you want to set for falcon service.
-#export FALCON_SERVER_OPTS=
-
-# java heap size we want to set for the falcon server. Default is 1024MB
-#export FALCON_SERVER_HEAP=
-
-# What is is considered as falcon home dir. Default is the base location of 
the installed software
-#export FALCON_HOME_DIR=
-
-# Where log files are stored. Default is logs directory under the base install 
location
-#export FALCON_LOG_DIR=
-
-# Where pid files are stored. Default is logs directory under the base install 
location
-#export FALCON_PID_DIR=
-
-# where the falcon active mq data is stored. Default is logs/data directory 
under the base install location
-#export FALCON_DATA_DIR=
-
-# Where do you want to expand the war file. By Default it is in /server/webapp 
dir under the base install dir.
-#export FALCON_EXPANDED_WEBAPP_DIR=
-</verbatim>
-
-*Configuring Monitoring plugin to register catalog partitions*
-Falcon comes with a monitoring plugin that registers catalog partition. This 
comes in really handy during migration from filesystem based feeds to hcatalog 
based feeds.
-This plugin enables the user to de-couple the partition registration and 
assume that all partitions are already on hcatalog even before the migration, 
simplifying the hcatalog migration.
-
-By default this plugin is disabled.
-To enable this plugin and leverage the feature, there are 3 pre-requisites:
-
-<verbatim>
-In {package dir}/conf/startup.properties, add
-*.workflow.execution.listeners=org.apache.falcon.catalog.CatalogPartitionHandler
-
-In the cluster definition, ensure registry endpoint is defined.
-Ex:
-<interface type="registry" endpoint="thrift://localhost:1109" 
version="0.13.3"/>
-
-In the feed definition, ensure the corresponding catalog table is mentioned in 
feed-properties
-Ex:
-<properties>
-    <property name="catalog.table" 
value="catalog:default:in_table#year={YEAR};month={MONTH};day={DAY};hour={HOUR};minute={MINUTE}"/>
-</properties>
-</verbatim>
-
-*NOTE for Mac OS users*
-<verbatim>
-If you are using a Mac OS, you will need to configure the FALCON_SERVER_OPTS 
(explained above).
-
-In  {package dir}/conf/falcon-env.sh uncomment the following line
-#export FALCON_SERVER_OPTS=
-
-and change it to look as below
-export FALCON_SERVER_OPTS="-Djava.awt.headless=true 
-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
-</verbatim>
-
-*Starting Falcon Server*
-<verbatim>
-bin/falcon-start [-port <port>]
-</verbatim>
-
-By default,
-* If falcon.enableTLS is set to true explicitly or not set at all, falcon 
starts at port 15443 on https:// by default.
-* If falcon.enableTLS is set to false explicitly, falcon starts at port 15000 
on http://.
-* To change the port, use -port option.
-   * If falcon.enableTLS is not set explicitly, port that ends with 443 will 
automatically put falcon on https://. Any other port will put falcon on http://.
-* falcon server starts embedded active mq. To control this behaviour, set the 
following system properties using -D option in environment variable FALCON_OPTS:
-   * falcon.embeddedmq=<true/false> - Should server start embedded active mq, 
default true
-   * falcon.embeddedmq.port=<port> - Port for embedded active mq, default 61616
-   * falcon.embeddedmq.data=<path> - Data path for embedded active mq, default 
{package dir}/logs/data
-* falcon server starts with conf from {package dir}/conf. To override this (to 
use the same conf with multiple falcon upgrades), set environment variable 
FALCON_CONF to the path of conf dir
-
-__Adding Extension Libraries__
-Library extensions allows users to add custom libraries to entity lifecycles 
such as feed retention, feed replication and process execution. This is useful 
for usecases such as adding filesystem extensions. To enable this, add the 
following configs to startup.properties:
-*.libext.paths=<paths to be added to all entity lifecycles>
-*.libext.feed.paths=<paths to be added to all feed lifecycles>
-*.libext.feed.retentions.paths=<paths to be added to feed retention workflow>
-*.libext.feed.replication.paths=<paths to be added to feed replication 
workflow>
-*.libext.process.paths=<paths to be added to process workflow>
-
-The configured jars are added to falcon classpath and the corresponding 
workflows
-
-
-*Starting Prism*
-<verbatim>
-bin/prism-start [-port <port>]
-</verbatim>
-
-By default, 
-* prism server starts at port 16443. To change the port, use -port option
-   * falcon.enableTLS can be set to true or false explicitly to enable SSL, if 
not port that end with 443 will automatically put prism on https://
-* prism starts with conf from {package dir}/conf. To override this (to use the 
same conf with multiple prism upgrades), set environment variable FALCON_CONF 
to the path of conf dir
-
-*Using Falcon*
-<verbatim>
-bin/falcon admin -version
-Falcon server build version: 
{Version:"0.3-SNAPSHOT-rd7e2be9afa2a5dc96acd1ec9e325f39c6b2f17f7",Mode:"embedded"}
-
-----
-
-bin/falcon help
-(for more details about falcon cli usage)
-</verbatim>
-
-*Dashboard*
-
-Once falcon / prism is started, you can view the status of falcon entities 
using the Web-based dashboard. The web UI works in both distributed and 
embedded mode. You can open your browser at the corresponding port to use the 
web UI.
-
-Falcon dashboard makes the REST api calls as user "falcon-dashboard". If this 
user does not exist on your falcon and oozie servers, please create the user.
-
-<verbatim>
-## create user.
-[root@falconhost ~] useradd -U -m falcon-dashboard -G users
-
-## verify user is created with membership in correct groups.
-[root@falconhost ~] groups falcon-dashboard
-falcon-dashboard : falcon-dashboard users
-[root@falconhost ~]
-</verbatim>
-
-*Stopping Falcon Server*
-<verbatim>
-bin/falcon-stop
-</verbatim>
-
-*Stopping Prism*
-<verbatim>
-bin/prism-stop
-</verbatim>
-
----+++ Preparing Oozie and Falcon packages for deployment
-<verbatim>
-cd <<project home>>
-src/bin/package.sh <<hadoop-version>> <<oozie-version>>
+$cd <<project home>>
+$src/bin/package.sh <<hadoop-version>> <<oozie-version>>
 
 >> ex. src/bin/package.sh 1.1.2 4.0.1 or src/bin/package.sh 0.20.2-cdh3u5 4.0.1
 >> ex. src/bin/package.sh 2.5.0 4.0.0
@@ -283,40 +77,11 @@ src/bin/package.sh <<hadoop-version>> <<
 >> Oozie package is available in <<falcon 
 >> home>>/target/oozie-4.0.1-distro.tar.gz
 </verbatim>
 
----+++ Running Examples using embedded package
-<verbatim>
-bin/falcon-start
-</verbatim>
-Make sure the hadoop and oozie endpoints are according to your setup in 
examples/entity/filesystem/standalone-cluster.xml
-The cluster locations,staging and working dirs, MUST be created prior to 
submitting a cluster entity to Falcon.
-*staging* must have 777 permissions and the parent dirs must have execute 
permissions
-*working* must have 755 permissions and the parent dirs must have execute 
permissions
-<verbatim>
-bin/falcon entity -submit -type cluster -file 
examples/entity/filesystem/standalone-cluster.xml
-</verbatim>
-Submit input and output feeds:
-<verbatim>
-bin/falcon entity -submit -type feed -file 
examples/entity/filesystem/in-feed.xml
-bin/falcon entity -submit -type feed -file 
examples/entity/filesystem/out-feed.xml
-</verbatim>
-Set-up workflow for the process:
-<verbatim>
-hadoop fs -put examples/app /
-</verbatim>
-Submit and schedule the process:
-<verbatim>
-bin/falcon entity -submitAndSchedule -type process -file 
examples/entity/filesystem/oozie-mr-process.xml
-bin/falcon entity -submitAndSchedule -type process -file 
examples/entity/filesystem/pig-process.xml
-</verbatim>
-Generate input data:
-<verbatim>
-examples/data/generate.sh <<hdfs endpoint>>
-</verbatim>
-Get status of instances:
-<verbatim>
-bin/falcon instance -status -type process -name oozie-mr-process -start 
2013-11-15T00:05Z -end 2013-11-15T01:00Z
-</verbatim>
-
-HCat based example entities are in examples/entity/hcat.
-
-
+*NOTE:* If you have a separate Apache Oozie installation, you will need to follow some additional steps:
+   1. Once you have set up the Falcon Server, copy the libraries under {falcon-server-dir}/oozie/libext/ to {oozie-install-dir}/libext.
+   1. Modify Oozie's configuration file: copy all Falcon-related properties from {falcon-server-dir}/oozie/conf/oozie-site.xml to {oozie-install-dir}/conf/oozie-site.xml.
+   1. Restart Oozie:
+      1. cd {oozie-install-dir}
+      1. sudo -u oozie ./bin/oozie-stop.sh
+      1. sudo -u oozie ./bin/oozie-setup.sh prepare-war
+      1. sudo -u oozie ./bin/oozie-start.sh

Modified: falcon/trunk/general/src/site/twiki/OnBoarding.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/OnBoarding.twiki?rev=1730449&r1=1730448&r2=1730449&view=diff
==============================================================================
--- falcon/trunk/general/src/site/twiki/OnBoarding.twiki (original)
+++ falcon/trunk/general/src/site/twiki/OnBoarding.twiki Mon Feb 15 05:48:00 
2016
@@ -148,7 +148,7 @@ Sample process which runs daily at 6th h
 
     <workflow engine="oozie" path="/projects/bootcamp/workflow" />
 
-    <retry policy="backoff" delay="minutes(5)" attempts="3" />
+    <retry policy="periodic" delay="minutes(5)" attempts="3" />
     
     <late-process policy="exp-backoff" delay="hours(1)">
         <late-input input="input" 
workflow-path="/projects/bootcamp/workflow/lateinput" />

Modified: falcon/trunk/general/src/site/twiki/Operability.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/Operability.twiki?rev=1730449&r1=1730448&r2=1730449&view=diff
==============================================================================
--- falcon/trunk/general/src/site/twiki/Operability.twiki (original)
+++ falcon/trunk/general/src/site/twiki/Operability.twiki Mon Feb 15 05:48:00 
2016
@@ -5,6 +5,12 @@
 Apache Falcon provides various tools to operationalize Falcon consisting of 
Alerts for
 unrecoverable errors, Audits of user actions, Metrics, and Notifications. They 
are detailed below.
 
+---++ Lineage
+
+Currently Lineage has no way to access or restore information about entity instances created during the time lineage
+was disabled. Information about entities, however, is preserved and bootstrapped when lineage is enabled. If you have to
+reset the graph db then you can delete the graph db files as specified in startup.properties and restart Falcon.
+Please note: you will lose all the information about the instances if you delete the graph db.
 
 ---++ Monitoring
 
@@ -12,6 +18,8 @@ Falcon provides monitoring of various ev
 The metric numbers can then be used to monitor performance and health of the 
Falcon system and
 the entire processing pipelines.
 
+Falcon also exposes [[https://github.com/thinkaurelius/titan/wiki/Titan-Performance-and-Monitoring][metrics for titandb]].
+
 Users can view the logs of these events in the metric.log file; by default this file is created
 under the ${user.dir}/logs/ directory. Users may also extend the Falcon monitoring framework to send
 events to systems like Mondemand/lwes by implementing org.apache.falcon.plugin.MonitoringPlugin

Modified: falcon/trunk/general/src/site/twiki/Recipes.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/Recipes.twiki?rev=1730449&r1=1730448&r2=1730449&view=diff
==============================================================================
--- falcon/trunk/general/src/site/twiki/Recipes.twiki (original)
+++ falcon/trunk/general/src/site/twiki/Recipes.twiki Mon Feb 15 05:48:00 2016
@@ -62,6 +62,10 @@ Recipe template will have <workflow name
 and replace it with the property value "hdfs-dr-workflow". Substituted 
template will have <workflow name="hdfs-dr-workflow">
 </verbatim>
 
+---++ Metrics
+HDFS DR and Hive DR recipes will capture replication metrics like TIMETAKEN, BYTESCOPIED, COPY (number of files copied) for an
+instance and populate them in the GraphDB.
+
 ---++ Managing the scheduled recipe process
    * Scheduled recipe process is similar to regular process
       * List : falcon entity -type process -name <recipe-process-name> -list
@@ -72,6 +76,10 @@ and replace it with the property value "
 
    * Sample recipes are published in addons/recipes
 
+---++ Types of recipes
+   * [[HDFSDR][HDFS Recipe]]
+   * [[HiveDR][HiveDR Recipe]]
+
 ---++ Packaging
 
    * There is no packaging for recipes at this time but will be added soon.

Modified: falcon/trunk/general/src/site/twiki/Security.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/Security.twiki?rev=1730449&r1=1730448&r2=1730449&view=diff
==============================================================================
--- falcon/trunk/general/src/site/twiki/Security.twiki (original)
+++ falcon/trunk/general/src/site/twiki/Security.twiki Mon Feb 15 05:48:00 2016
@@ -112,28 +112,28 @@ To authenticate user for REST api calls,
 *operations on Entity Resource*
 
 | *Resource*                                                                   
       | *Description*                      | *Authorization* |
-| [[./restapi/EntityValidate][api/entities/validate/:entity-type]]             
         | Validate the entity                | Owner/Group     |
-| [[./restapi/EntitySubmit][api/entities/submit/:entity-type]]                 
         | Submit the entity                  | Owner/Group     |
-| [[./restapi/EntityUpdate][api/entities/update/:entity-type/:entity-name]]    
         | Update the entity                  | Owner/Group     |
-| 
[[./restapi/EntitySubmitAndSchedule][api/entities/submitAndSchedule/:entity-type]]
    | Submit & Schedule the entity       | Owner/Group     |
-| 
[[./restapi/EntitySchedule][api/entities/schedule/:entity-type/:entity-name]]   
      | Schedule the entity                | Owner/Group     |
-| [[./restapi/EntitySuspend][api/entities/suspend/:entity-type/:entity-name]]  
         | Suspend the entity                 | Owner/Group     |
-| [[./restapi/EntityResume][api/entities/resume/:entity-type/:entity-name]]    
         | Resume the entity                  | Owner/Group     |
-| [[./restapi/EntityDelete][api/entities/delete/:entity-type/:entity-name]]    
         | Delete the entity                  | Owner/Group     |
-| [[./restapi/EntityStatus][api/entities/status/:entity-type/:entity-name]]    
         | Get the status of the entity       | Owner/Group     |
-| 
[[./restapi/EntityDefinition][api/entities/definition/:entity-type/:entity-name]]
     | Get the definition of the entity   | Owner/Group     |
-| [[./restapi/EntityList][api/entities/list/:entity-type?fields=:fields]]      
         | Get the list of entities           | Owner/Group     |
-| 
[[./restapi/EntityDependencies][api/entities/dependencies/:entity-type/:entity-name]]
 | Get the dependencies of the entity | Owner/Group     |
+| [[restapi/EntityValidate][api/entities/validate/:entity-type]]               
       | Validate the entity                | Owner/Group     |
+| [[restapi/EntitySubmit][api/entities/submit/:entity-type]]                   
       | Submit the entity                  | Owner/Group     |
+| [[restapi/EntityUpdate][api/entities/update/:entity-type/:entity-name]]      
       | Update the entity                  | Owner/Group     |
+| 
[[restapi/EntitySubmitAndSchedule][api/entities/submitAndSchedule/:entity-type]]
    | Submit & Schedule the entity       | Owner/Group     |
+| [[restapi/EntitySchedule][api/entities/schedule/:entity-type/:entity-name]]  
       | Schedule the entity                | Owner/Group     |
+| [[restapi/EntitySuspend][api/entities/suspend/:entity-type/:entity-name]]    
       | Suspend the entity                 | Owner/Group     |
+| [[restapi/EntityResume][api/entities/resume/:entity-type/:entity-name]]      
       | Resume the entity                  | Owner/Group     |
+| [[restapi/EntityDelete][api/entities/delete/:entity-type/:entity-name]]      
       | Delete the entity                  | Owner/Group     |
+| [[restapi/EntityStatus][api/entities/status/:entity-type/:entity-name]]      
       | Get the status of the entity       | Owner/Group     |
+| 
[[restapi/EntityDefinition][api/entities/definition/:entity-type/:entity-name]] 
    | Get the definition of the entity   | Owner/Group     |
+| [[restapi/EntityList][api/entities/list/:entity-type?fields=:fields]]        
       | Get the list of entities           | Owner/Group     |
+| 
[[restapi/EntityDependencies][api/entities/dependencies/:entity-type/:entity-name]]
 | Get the dependencies of the entity | Owner/Group     |
 
 *REST Call on Feed and Process Instances*
 
 | *Resource*                                                                  
| *Description*                | *Authorization* |
-| 
[[./restapi/InstanceRunning][api/instance/running/:entity-type/:entity-name]] | 
List of running instances.   | Owner/Group     |
-| [[./restapi/InstanceStatus][api/instance/status/:entity-type/:entity-name]]  
 | Status of a given instance   | Owner/Group     |
-| [[./restapi/InstanceKill][api/instance/kill/:entity-type/:entity-name]]      
 | Kill a given instance        | Owner/Group     |
-| 
[[./restapi/InstanceSuspend][api/instance/suspend/:entity-type/:entity-name]] | 
Suspend a running instance   | Owner/Group     |
-| [[./restapi/InstanceResume][api/instance/resume/:entity-type/:entity-name]]  
 | Resume a given instance      | Owner/Group     |
-| [[./restapi/InstanceRerun][api/instance/rerun/:entity-type/:entity-name]]    
 | Rerun a given instance       | Owner/Group     |
+| [[restapi/InstanceRunning][api/instance/running/:entity-type/:entity-name]] 
| List of running instances.   | Owner/Group     |
+| [[restapi/InstanceStatus][api/instance/status/:entity-type/:entity-name]]   
| Status of a given instance   | Owner/Group     |
+| [[restapi/InstanceKill][api/instance/kill/:entity-type/:entity-name]]       
| Kill a given instance        | Owner/Group     |
+| [[restapi/InstanceSuspend][api/instance/suspend/:entity-type/:entity-name]] 
| Suspend a running instance   | Owner/Group     |
+| [[restapi/InstanceResume][api/instance/resume/:entity-type/:entity-name]]   
| Resume a given instance      | Owner/Group     |
+| [[restapi/InstanceRerun][api/instance/rerun/:entity-type/:entity-name]]     
| Rerun a given instance       | Owner/Group     |
 | [[InstanceLogs][api/instance/logs/:entity-type/:entity-name]]               
| Get logs of a given instance | Owner/Group     |
 
 ---++++ Admin Resources Policy
@@ -142,9 +142,9 @@ Only users belonging to admin users or g
 determined by a static configuration parameter.
 
 | *Resource*                                             | *Description*       
                        | *Authorization*  |
-| [[./restapi/AdminVersion][api/admin/version]]            | Get version of 
the server                   | No restriction   |
-| [[./restapi/AdminStack][api/admin/stack]]                | Get stack of the 
server                     | Admin User/Group |
-| [[./restapi/AdminConfig][api/admin/config/:config-type]] | Get configuration 
information of the server | Admin User/Group |
+| [[restapi/AdminVersion][api/admin/version]]            | Get version of the 
server                   | No restriction   |
+| [[restapi/AdminStack][api/admin/stack]]                | Get stack of the 
server                     | Admin User/Group |
+| [[restapi/AdminConfig][api/admin/config/:config-type]] | Get configuration 
information of the server | Admin User/Group |
 
 
 ---++++ Lineage Resource Policy
@@ -178,6 +178,9 @@ Following is the Server Side Configurati
 # name node principal to talk to config store
 *.dfs.namenode.kerberos.principal=nn/[email protected]
 
+# Indicates how long (in seconds) the Falcon authentication token is valid before it has to be renewed.
+*.falcon.service.authentication.token.validity=86400
+
 ##### SPNEGO Configuration
 
 # Authentication type must be specified: simple|kerberos|<class>

Added: falcon/trunk/general/src/site/twiki/falconcli/CommonCLI.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/CommonCLI.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/CommonCLI.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/CommonCLI.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,21 @@
+---++ Common CLI Options
+
+---+++Falcon URL
+
+An optional -url option indicating the URL of the Falcon system to run the command against can be provided. If it is not mentioned, the URL will be picked from the system environment variable FALCON_URL. If FALCON_URL is not set, it will be picked from the client.properties file. If the option is not
+provided and also not set in client.properties, the Falcon CLI will fail.
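+
+Example (the host and port below are placeholders for your Falcon server):
+$FALCON_HOME/bin/falcon admin -version -url http://falcon-host:15000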
+
+---+++Proxy user support
+
+The -doAs option allows the current user to impersonate other users when interacting with the Falcon system. The current user must be configured as a proxyuser in the Falcon system. The proxyuser configuration may restrict the
+hosts from which a user may impersonate others, as well as the groups to which the impersonated users may belong.
+
+<a href="../FalconDocumentation.html#Proxyuser_support">Proxyuser support 
described here.</a>
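+
+Example (the user name "joe" is a placeholder):
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml -doAs joe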
+
+---+++Debug Mode
+
+If you export FALCON_DEBUG=true then the Falcon CLI will output the Web Services API details used by any commands you execute. This is useful for debugging purposes or to see how the Falcon CLI works with the WS API.
+Alternatively, you can specify '-debug' through the CLI arguments to get the debug statements.
+
+Example:
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml -debug
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/ContinueInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/ContinueInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/ContinueInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/ContinueInstance.twiki Mon 
Feb 15 05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Continue
+
+[[CommonCLI][Common CLI Options]]
+
+The continue option is used to continue a failed workflow instance. This option is valid only for process instances in a terminal state, i.e. KILLED or FAILED.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> 
-continue -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
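+
+Example (the process name and time range are placeholders):
+$FALCON_HOME/bin/falcon instance -type process -name sample-process -continue -start "2016-01-01T00:00Z" -end "2016-01-01T01:00Z"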

Added: falcon/trunk/general/src/site/twiki/falconcli/Definition.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/Definition.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/Definition.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/Definition.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Definition
+
+[[CommonCLI][Common CLI Options]]
+
+The definition option returns the entity definition submitted earlier during the submit step.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name 
<<name>> -definition

Added: falcon/trunk/general/src/site/twiki/falconcli/DeleteEntity.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/DeleteEntity.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/DeleteEntity.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/DeleteEntity.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Delete
+
+[[CommonCLI][Common CLI Options]]
+
+Delete removes the submitted entity definition for the specified entity and puts it into the archive.
+
+Usage:
+$FALCON_HOME/bin/falcon entity  -type [cluster|datasource|feed|process] -name 
<<name>> -delete

Added: falcon/trunk/general/src/site/twiki/falconcli/DependencyEntity.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/DependencyEntity.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/DependencyEntity.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/DependencyEntity.twiki Mon 
Feb 15 05:48:00 2016
@@ -0,0 +1,10 @@
+---+++Dependency
+
+[[CommonCLI][Common CLI Options]]
+
+With the use of the dependency option, we can list all the entities on which the specified entity is dependent.
+For example, for a feed, dependency returns the cluster name, and for a process it returns all the input feeds,
+output feeds and cluster names.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name 
<<name>> -dependency
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/DependencyInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/DependencyInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/DependencyInstance.twiki 
(added)
+++ falcon/trunk/general/src/site/twiki/falconcli/DependencyInstance.twiki Mon 
Feb 15 05:48:00 2016
@@ -0,0 +1,33 @@
+---+++Dependency
+Displays the instances that are dependent on the given instance. For example, for a given process instance it will
+list all the input feed instances (if any) and the output feed instances (if any).
+
+An example use case of this command is as follows:
+Suppose you find out that the data in a feed instance was incorrect and you need to figure out which process instances
+consumed this feed instance so that you can reprocess them after correcting the feed instance. You can give the feed instance
+and it will tell you which process instance produced this feed and which process instances consumed this feed.
+
+NOTE:
+1. instanceTime must be a valid instanceTime, e.g. the instanceTime of a feed should be in its validity range on the applicable clusters,
+ and it should be in the range of instances produced by the producer process (if any).
+
+2. For processes with inputs like latest() which vary with time, the results are not guaranteed to be correct.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> 
-dependency -instanceTime "yyyy-MM-dd'T'HH:mm'Z'"
+
+For example:
+$FALCON_HOME/bin/falcon instance -dependency -type feed -name out 
-instanceTime 2014-12-15T00:00Z
+name: producer, type: PROCESS, cluster: local, instanceTime: 
2014-12-15T00:00Z, tags: Output
+name: consumer, type: PROCESS, cluster: local, instanceTime: 
2014-12-15T00:03Z, tags: Input
+name: consumer, type: PROCESS, cluster: local, instanceTime: 
2014-12-15T00:04Z, tags: Input
+name: consumer, type: PROCESS, cluster: local, instanceTime: 
2014-12-15T00:02Z, tags: Input
+name: consumer, type: PROCESS, cluster: local, instanceTime: 
2014-12-15T00:05Z, tags: Input
+
+
+Response: default/Success!
+
+Request Id: default/1125035965@qtp-503156953-7 - 
447be0ad-1d38-4dce-b438-20f3de69b172
+
+
+<a href="../Restapi/InstanceDependencies.html">Optional params described 
here.</a>
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/EdgeMetadata.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/EdgeMetadata.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/EdgeMetadata.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/EdgeMetadata.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,11 @@
+---+++ Edge
+
+[[CommonCLI][Common CLI Options]]
+
+Get the edge with the specified id.
+
+Usage:
+$FALCON_HOME/bin/falcon metadata -edge -id <<id>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -edge -id Q9n-Q-5g
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/FalconCLI.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/FalconCLI.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/FalconCLI.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/FalconCLI.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,112 @@
+---+FalconCLI
+
+FalconCLI is an interface between the user and Falcon. It is a command line utility provided by Falcon. FalconCLI supports Entity Management, Instance Management and Admin operations. There is a set of web services that are used by FalconCLI to interact with Falcon.
+
+---+++Types of CLI Options
+
+CLI options are classified into:
+
+   * <a href="#Common_CLI_Options">Common CLI Options</a>
+   * <a href="#Entity_Management_Commands">Entity Management Commands</a>
+   * <a href="#Instance_Management_Commands">Instance Management Commands</a>
+   * <a href="#Metadata_Commands">Metadata Commands</a>
+   * <a href="#Admin_Commands">Admin commands</a>
+   * <a href="#Recipe_Commands">Recipe commands</a>
+
+
+
+-----------
+
+---++Common CLI Options
+
+---+++Falcon URL
+
+An optional -url option indicating the URL of the Falcon system to run the command against can be provided. If it is not mentioned, the URL will be picked from the system environment variable FALCON_URL. If FALCON_URL is not set, it will be picked from the client.properties file. If the option is not
+provided and also not set in client.properties, the Falcon CLI will fail.
+
+---+++Proxy user support
+
+The -doAs option allows the current user to impersonate other users when interacting with the Falcon system. The current user must be configured as a proxyuser in the Falcon system. The proxyuser configuration may restrict the
+hosts from which a user may impersonate others, as well as the groups to which the impersonated users may belong.
+
+<a href="../FalconDocumentation.html#Proxyuser_support">Proxyuser support 
described here.</a>
+
+---+++Debug Mode
+
+If you export FALCON_DEBUG=true then the Falcon CLI will output the Web Services API details used by any commands you execute. This is useful for debugging purposes or to see how the Falcon CLI works with the WS API.
+Alternatively, you can specify '-debug' through the CLI arguments to get the debug statements.
+Example:
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml -debug
+
+-----------
+
+---++Entity Management Commands
+
+| *Command*                                      | *Description*                                    |
+| [[Submit]]                                     | Submits the entity definition                    |
+| [[Schedule]]                                   | Schedules the entity                             |
+| [[SuspendEntity][Suspend]]                     | Suspends the scheduled entity                    |
+| [[ResumeEntity][Resume]]                       | Puts a suspended entity back in action           |
+| [[DeleteEntity][Delete]]                       | Removes the submitted entity                     |
+| [[ListEntity][List]]                           | Lists entities of a particular type              |
+| [[SummaryEntity][Summary]]                     | Shows a summary of entities of a type            |
+| [[UpdateEntity][Update]]                       | Updates an already submitted entity              |
+| [[Touch]]                                      | Force-updates an already submitted entity        |
+| [[StatusEntity][Status]]                       | Returns the status of the entity                 |
+| [[DependencyEntity][Dependency]]               | Lists all the entities on which the specified entity is dependent |
+| [[Definition]]                                 | Returns the definition of the entity             |
+| [[Lookup]]                                     | Returns the feed name for a path                 |
+| [[SLAAlert]]                                   | Returns the feed instances which have missed their SLA |
+
+
+-----------
+---++Instance Management Commands
+
+| *Command*                                      | *Description*                                    |
+| [[KillInstance][Kill]]                         | Kills all the instances of the specified process |
+| [[SuspendInstance][Suspend]]                   | Suspends instances of a specified process        |
+| [[ContinueInstance][Continue]]                 | Continues failed workflow instances              |
+| [[RerunInstance][Rerun]]                       | Reruns instances of the specified process        |
+| [[ResumeInstance][Resume]]                     | Resumes instances of the specified process from the suspended state |
+| [[StatusInstance][Status]]                     | Gets the status of instances                     |
+| [[ListInstance][List]]                         | Gets single or multiple instances                |
+| [[SummaryInstance][Summary]]                   | Gets the consolidated status of the instances within the specified time period |
+| [[RunningInstance][Running]]                   | Gets running instances of the mentioned process  |
+| [[FeedInstanceListing]]                        | Gets Falcon feed instance availability           |
+| [[LogsInstance][Logs]]                         | Gets logs for an instance                        |
+| [[LifeCycleInstance][LifeCycle]]               | Describes the list of life cycles of an entity   |
+| [[TriageInstance][Triage]]                     | Traces an entity's ancestors for failures        |
+| [[ParamsInstance][Params]]                     | Displays workflow params                         |
+| [[DependencyInstance][Dependency]]             | Displays the dependent instances                 |
+
+-----------
+
+---++Metadata Commands
+
+| *Command*                                      | *Description*                                     |
+|[[LineageMetadata][Lineage]]                    | Returns the relationship between processes and feeds |
+|[[VertexMetadata][Vertex]]                      | Gets the vertex with the specified id             |
+|[[VerticesMetadata][Vertices]]                  | Gets all vertices for a key                       |
+|[[VertexEdgesMetadata][Vertex Edges]]           | Gets the adjacent vertices or edges of the vertex |
+|[[EdgeMetadata][Edge]]                          | Gets the edge with the specified id               |
+|[[ListMetadata][List]]                          | Returns a list of all dimensions of the given type |
+|[[RelationMetadata][Relations]]                 | Returns all dimensions related to the specified dimension |
+
+-----------
+
+---++Admin Commands
+
+| *Command*                                      | *Description*                                    |
+|[[HelpAdmin][Help]]                             | Returns help options                             |
+|[[VersionAdmin][Version]]                       | Returns the current Falcon version               |
+|[[StatusAdmin][Status]]                         | Returns the status of Falcon                     |
+
+-----------
+
+---++Recipe Commands
+
+| *Command*                                      | *Description*                                    |
+|[[SubmitRecipe][Submit]]                        | Submits the specified recipe                     |
+
+
+

Added: falcon/trunk/general/src/site/twiki/falconcli/FeedInstanceListing.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/FeedInstanceListing.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/FeedInstanceListing.twiki 
(added)
+++ falcon/trunk/general/src/site/twiki/falconcli/FeedInstanceListing.twiki Mon 
Feb 15 05:48:00 2016
@@ -0,0 +1,11 @@
+---+++FeedInstanceListing
+
+Gets Falcon feed instance availability.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type feed -name <<name>> -listing
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -colo <<colo>>
+
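+Example (illustrative; the feed name sampleFeed and the time range are hypothetical):
+$FALCON_HOME/bin/falcon instance -type feed -name sampleFeed -listing -start "2016-01-01T00:00Z" -end "2016-01-02T00:00Z"
+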
+<a href="../Restapi/FeedInstanceListing.html">Optional params described here.</a>

Added: falcon/trunk/general/src/site/twiki/falconcli/HelpAdmin.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/HelpAdmin.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/HelpAdmin.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/HelpAdmin.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,6 @@
+---+++Help
+
+[[CommonCLI][Common CLI Options]]
+
+Usage:
+$FALCON_HOME/bin/falcon admin -help

Added: falcon/trunk/general/src/site/twiki/falconcli/KillInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/KillInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/KillInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/KillInstance.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,14 @@
+---+++Kill
+
+[[CommonCLI][Common CLI Options]]
+
+Kill sub-command is used to kill all the instances of the specified process whose nominal time is between the given start time and end time.
+
+Note:
+1. The start time and end time need to be specified in TZ format.
+Example:   01 Jan 2012 01:00  => 2012-01-01T01:00Z
+
+2. The process name is a compulsory parameter for each instance management command.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -kill -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"

Added: falcon/trunk/general/src/site/twiki/falconcli/LifeCycleInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/LifeCycleInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/LifeCycleInstance.twiki 
(added)
+++ falcon/trunk/general/src/site/twiki/falconcli/LifeCycleInstance.twiki Mon 
Feb 15 05:48:00 2016
@@ -0,0 +1,9 @@
+---+++LifeCycle
+
+[[CommonCLI][Common CLI Options]]
+
+Describes the list of life cycles of an entity; for a feed it can be replication/retention, and for a process it can be execution.
+This can be used with instance management options. The default values are replication for a feed and execution for a process.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status -lifecycle <<lifecycletype>> -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"

Added: falcon/trunk/general/src/site/twiki/falconcli/LineageMetadata.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/LineageMetadata.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/LineageMetadata.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/LineageMetadata.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,12 @@
+---+++Lineage
+
+
+Returns the relationship between processes and feeds in a given pipeline in [[http://www.graphviz.org/content/dot-language/][dot]] format.
+You can use the output to view a graphical representation of the DAG using an online graphviz viewer like [[http://www.webgraphviz.com/][this]].
+
+Usage:
+
+$FALCON_HOME/bin/falcon metadata -lineage -pipeline my-pipeline
+
+pipeline is a mandatory option.
+

Added: falcon/trunk/general/src/site/twiki/falconcli/ListEntity.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/ListEntity.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/ListEntity.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/ListEntity.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,17 @@
+---+++List
+
+[[CommonCLI][Common CLI Options]]
+
+Entities of a particular type can be listed with the list sub-command.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -list
+
+Optional Args : -fields <<field1,field2>>
+-type <<[cluster|datasource|feed|process],[cluster|datasource|feed|process]>>
+-nameseq <<namesubsequence>> -tagkeys <<tagkeyword1,tagkeyword2>>
+-filterBy <<field1:value1,field2:value2>> -tags <<tagkey=tagvalue,tagkey=tagvalue>>
+-orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
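+Example (illustrative; the field names are hypothetical values for -fields):
+$FALCON_HOME/bin/falcon entity -list -type feed -fields status,tags
+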
+<a href="../Restapi/EntityList.html">Optional params described here.</a>
+

Added: falcon/trunk/general/src/site/twiki/falconcli/ListInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/ListInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/ListInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/ListInstance.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,19 @@
+---+++List
+
+[[CommonCLI][Common CLI Options]]
+
+List option via CLI can be used to get single or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. The instance time is also returned. The log location gives the oozie workflow url.
+If the instance is in WAITING state, missing dependencies are listed.
+
+Example : Suppose a process has 3 instances: one has succeeded, one is in the running state, and the other one is waiting. The expected output is:
+
+{"status":"SUCCEEDED","message":"getStatus is 
successful","instances":[{"instance":"2012-05-07T05:02Z","status":"SUCCEEDED","logFile":"http://oozie-dashboard-url"},{"instance":"2012-05-07T05:07Z","status":"RUNNING","logFile":"http://oozie-dashboard-url"},
 {"instance":"2010-01-02T11:05Z","status":"WAITING"}]}
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -list
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
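+Example (illustrative; the process name sampleProcess and the time range are hypothetical):
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -list -start "2012-05-07T00:00Z" -end "2012-05-08T00:00Z"
+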
+<a href="../Restapi/InstanceList.html">Optional params described here.</a>

Added: falcon/trunk/general/src/site/twiki/falconcli/ListMetadata.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/ListMetadata.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/ListMetadata.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/ListMetadata.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,13 @@
+---+++ List
+
+[[CommonCLI][Common CLI Options]]
+
+Lists all dimensions of the given type. If the user provides the optional param cluster, only the dimensions related to that cluster are listed.
+Usage:
+$FALCON_HOME/bin/falcon metadata -list -type [cluster_entity|datasource_entity|feed_entity|process_entity|user|colo|tags|groups|pipelines]
+
+Optional Args : -cluster <<cluster name>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -list -type process_entity -cluster primary-cluster
+$FALCON_HOME/bin/falcon metadata -list -type tags

Added: falcon/trunk/general/src/site/twiki/falconcli/LogsInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/LogsInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/LogsInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/LogsInstance.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,14 @@
+---+++Logs
+
+[[CommonCLI][Common CLI Options]]
+
+Gets logs for instance actions.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -logs
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -runid <<runid>>
+-colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
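+Example (illustrative; the process name sampleProcess, the time range, and the run id are hypothetical):
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -logs -start "2012-05-07T05:02Z" -end "2012-05-07T05:07Z" -runid 0
+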
+<a href="../Restapi/InstanceLogs.html">Optional params described here.</a>

Added: falcon/trunk/general/src/site/twiki/falconcli/Lookup.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/Lookup.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/Lookup.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/Lookup.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,12 @@
+---+++Lookup
+
+[[CommonCLI][Common CLI Options]]
+
+Lookup option tells you which feed a given path belongs to. This can be useful in several scenarios. For example, you would generally want a single definition for common feeds like metadata with the same location;
+otherwise it can result in problems (different retention durations can result in surprises for one team). If you want to check whether there are multiple definitions of the same metadata, you can pick
+an instance of that path and run it through the lookup command as below.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type feed -lookup -path /data/projects/my-hourly/2014/10/10/23/
+
+If you have multiple feeds with location as /data/projects/my-hourly/${YEAR}/${MONTH}/${DAY}/${HOUR} then this command will return all of them.

Added: falcon/trunk/general/src/site/twiki/falconcli/ParamsInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/ParamsInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/ParamsInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/ParamsInstance.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Params
+
+[[CommonCLI][Common CLI Options]]
+
+Displays the workflow params of a given instance. The start time is considered the nominal time of that instance; the end time is not considered.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -params -start "yyyy-MM-dd'T'HH:mm'Z'"

Added: falcon/trunk/general/src/site/twiki/falconcli/RelationMetadata.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/RelationMetadata.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/RelationMetadata.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/RelationMetadata.twiki Mon 
Feb 15 05:48:00 2016
@@ -0,0 +1,10 @@
+---+++ Relations
+
+[[CommonCLI][Common CLI Options]]
+
+Lists all dimensions related to the specified dimension, identified by dimension type and dimension name.
+Usage:
+$FALCON_HOME/bin/falcon metadata -relations -type [cluster_entity|feed_entity|process_entity|user|colo|tags|groups|pipelines] -name <<Dimension Name>>
+
+Example:
+$FALCON_HOME/bin/falcon metadata -relations -type process_entity -name sample-process
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/RerunInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/RerunInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/RerunInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/RerunInstance.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,10 @@
+---+++Rerun
+
+[[CommonCLI][Common CLI Options]]
+
+Rerun option is used to rerun instances of a given process. On issuing a rerun, by default the execution resumes from the last failed node in the workflow. This option is valid only for process instances in a terminal state, i.e. SUCCEEDED, KILLED or FAILED.
+If you want to forcefully rerun the entire workflow, -force should be passed along with -rerun.
+Additionally, you can also specify properties to override via a properties file.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -rerun -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" [-force] [-file <<properties file>>]
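+
+Example (illustrative; a forceful rerun of the hypothetical process sampleProcess):
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -rerun -start "2012-05-07T05:02Z" -end "2012-05-07T05:07Z" -force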

Added: falcon/trunk/general/src/site/twiki/falconcli/ResumeEntity.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/ResumeEntity.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/ResumeEntity.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/ResumeEntity.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Resume
+
+[[CommonCLI][Common CLI Options]]
+
+Puts a suspended process/feed back to active, which in turn resumes the applicable oozie bundle.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [feed|process] -name <<name>> -resume

Added: falcon/trunk/general/src/site/twiki/falconcli/ResumeInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/ResumeInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/ResumeInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/ResumeInstance.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Resume
+
+[[CommonCLI][Common CLI Options]]
+
+Resume option is used to resume any instance that is in the suspended state.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -resume -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"

Added: falcon/trunk/general/src/site/twiki/falconcli/RunningInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/RunningInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/RunningInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/RunningInstance.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,13 @@
+---+++Running
+
+[[CommonCLI][Common CLI Options]]
+
+Running option provides all the running instances of the mentioned process.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -running
+
+Optional Args : -colo <<colo>> -lifecycle <<lifecycles>>
+-filterBy <<field1:value1,field2:value2>> -orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
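+Example (illustrative; the process name sampleProcess is hypothetical):
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -running
+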
+<a href="../Restapi/InstanceRunning.html">Optional params described here.</a>
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/SLAAlert.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/SLAAlert.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/SLAAlert.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/SLAAlert.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,49 @@
+---+++SLAAlert
+
+[[CommonCLI][Common CLI Options]]
+
+<verbatim>
+Since: 0.8
+</verbatim>
+
+This command lists all the feed instances which have missed their SLA and are still not available. If a feed instance missed its
+SLA but is now available, it will not be reported in the results. The purpose of this API is alerting, and hence it
+doesn't return feed instances which missed their SLA but are available, as they don't require any action.
+
+* Currently SLA monitoring is supported only for feeds.
+
+* The end option is optional and will default to the current time if missing.
+
+* The name option is optional; if provided, only instances of that feed will be considered.
+
+Usage:
+
+*Example 1*
+
+*$FALCON_HOME/bin/falcon entity -type feed -start 2014-09-05T00:00Z -slaAlert -end 2016-05-03T00:00Z -colo local*
+
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T11:59Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:00Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:01Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:02Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:03Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:04Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:05Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:06Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:07Z, tags: Missed SLA High
+name: out, type: FEED, cluster: local, instanceTime: 2015-09-26T12:08Z, tags: Missed SLA Low
+
+
+Response: default/Success!
+
+Request Id: default/216978070@qtp-830047511-4 - f5a6c129-ab42-4feb-a2bf-c3baed356248
+
+*Example 2*
+
+*$FALCON_HOME/bin/falcon entity -type feed -start 2014-09-05T00:00Z -slaAlert -end 2016-05-03T00:00Z -colo local -name in*
+
+name: in, type: FEED, cluster: local, instanceTime: 2015-09-26T06:00Z, tags: Missed SLA High
+
+Response: default/Success!
+
+Request Id: default/1580107885@qtp-830047511-7 - f16cbc51-5070-4551-ad25-28f75e5e4cf2

Added: falcon/trunk/general/src/site/twiki/falconcli/Schedule.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/Schedule.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/Schedule.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/Schedule.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,22 @@
+---+++Schedule
+
+[[CommonCLI][Common CLI Options]]
+
+Once submitted, an entity can be scheduled using the schedule option. Only process and feed entities can be scheduled.
+
+Usage:
+$FALCON_HOME/bin/falcon entity  -type [process|feed] -name <<name>> -schedule
+
+Optional Args :
+
+-skipDryRun : When this argument is specified, Falcon skips the oozie dryrun.
+
+-doAs <username>
+
+-properties <<key1:val1,...,keyN:valN>>. Specifying 'falcon.scheduler:native' as a property will schedule the entity on the native scheduler of Falcon. Otherwise, it will default to the engine specified in startup.properties. For details on the native scheduler, refer to [[FalconNativeScheduler][Falcon Native Scheduler]].
+
+Examples:
+
+$FALCON_HOME/bin/falcon entity -type process -name sampleProcess -schedule
+
+$FALCON_HOME/bin/falcon entity -type process -name sampleProcess -schedule -properties falcon.scheduler:native

Added: falcon/trunk/general/src/site/twiki/falconcli/StatusAdmin.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/StatusAdmin.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/StatusAdmin.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/StatusAdmin.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Status
+
+[[CommonCLI][Common CLI Options]]
+
+Status returns the current state of Falcon (running or stopped).
+Usage:
+$FALCON_HOME/bin/falcon admin -status
+

Added: falcon/trunk/general/src/site/twiki/falconcli/StatusEntity.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/StatusEntity.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/StatusEntity.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/StatusEntity.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Status
+
+[[CommonCLI][Common CLI Options]]
+
+Status returns the current status of the entity.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name <<name>> -status
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/StatusInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/StatusInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/StatusInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/StatusInstance.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,20 @@
+---+++Status
+
+[[CommonCLI][Common CLI Options]]
+
+Status option via CLI can be used to get the status of a single or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status, the instance time is also returned. The log location gives the oozie workflow url.
+If the instance is in WAITING state, missing dependencies are listed.
+The job urls are populated for all actions of the user workflow and for non-succeeded actions of the main workflow, so the user need not go to the underlying scheduler to get the job urls when debugging an issue in the job.
+
+Example : Suppose a process has 3 instances: one has succeeded, one is in the running state, and the other one is waiting. The expected output is:
+
+{"status":"SUCCEEDED","message":"getStatus is 
successful","instances":[{"instance":"2012-05-07T05:02Z","status":"SUCCEEDED","logFile":"http://oozie-dashboard-url"},{"instance":"2012-05-07T05:07Z","status":"RUNNING","logFile":"http://oozie-dashboard-url"},
 {"instance":"2010-01-02T11:05Z","status":"WAITING"}]
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -status
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -colo <<colo>>
+-filterBy <<field1:value1,field2:value2>> -lifecycle <<lifecycles>>
+-orderBy field -sortOrder <<sortOrder>> -offset 0 -numResults 10
+
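+Example (illustrative; the process name sampleProcess and the time range are hypothetical):
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -status -start "2012-05-07T00:00Z" -end "2012-05-08T00:00Z"
+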
+<a href="../Restapi/InstanceStatus.html"> Optional params described here.</a>

Added: falcon/trunk/general/src/site/twiki/falconcli/Submit.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/Submit.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/Submit.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/Submit.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,13 @@
+---+++Submit
+
+[[CommonCLI][Common CLI Options]]
+
+Submit option is used to set up an entity definition.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -submit -type [cluster|datasource|feed|process] -file <entity-definition.xml>
+
+Example:
+$FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml
+
+Note: The url option in the above and all subsequent commands is optional. If not mentioned, it will be picked from the client.properties file. If the option is not provided and also not set in client.properties, Falcon CLI will fail.

Added: falcon/trunk/general/src/site/twiki/falconcli/SubmitRecipe.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/SubmitRecipe.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/SubmitRecipe.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/SubmitRecipe.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,17 @@
+---+++ Submit Recipe
+
+[[CommonCLI][Common CLI Options]]
+
+Submit the specified recipe.
+
+Usage:
+$FALCON_HOME/bin/falcon recipe -name <name>
+Name of the recipe. The user should have defined <name>-template.xml and <name>.properties in the path specified by falcon.recipe.path in the client.properties file. The falcon.home path is used if it is not specified in the client.properties file.
+If it is not specified in the client.properties file and the files also cannot be found at falcon.home, Falcon CLI will fail.
+
+Optional Args : -tool <recipeToolClassName>
+Falcon provides a base tool that recipes can override. If this option is not specified, the default recipe tool, RecipeTool, is used. This option is required if the user defines their own recipe tool class.
+
+Example:
+$FALCON_HOME/bin/falcon recipe -name hdfs-replication
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/SummaryEntity.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/SummaryEntity.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/SummaryEntity.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/SummaryEntity.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,14 @@
+---+++Summary
+
+[[CommonCLI][Common CLI Options]]
+
+Lists a summary of entities of a particular type in a cluster. The entity summary includes the N most recent instances of each entity.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [feed|process] -summary
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -fields <<field1,field2>>
+-filterBy <<field1:value1,field2:value2>> -tags <<tagkey=tagvalue,tagkey=tagvalue>>
+-orderBy <<field>> -sortOrder <<sortOrder>> -offset 0 -numResults 10 -numInstances 7
+
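+Example (illustrative; the time range and instance count are hypothetical):
+$FALCON_HOME/bin/falcon entity -type process -summary -start "2012-05-07T00:00Z" -end "2012-05-08T00:00Z" -numInstances 5
+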
+<a href="../Restapi/EntitySummary.html">Optional params described here.</a>
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/SummaryInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/SummaryInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/SummaryInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/SummaryInstance.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,20 @@
+---+++Summary
+
+[[CommonCLI][Common CLI Options]]
+
+Summary option via CLI can be used to get the consolidated status of the instances between the specified time period.
+Each status, along with the corresponding instance count, is listed for each of the applicable colos.
+The unscheduled instances between the specified time period are included as UNSCHEDULED in the output to provide more clarity.
+
+Example : Suppose a process has 3 instances: one has succeeded, one is in the running state, and the other one is waiting. The expected output is:
+
+{"status":"SUCCEEDED","message":"getSummary is successful", 
instancesSummary:[{"cluster": <<name>> "map":[{"SUCCEEDED":"1"}, 
{"WAITING":"1"}, {"RUNNING":"1"}]}]}
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -summary
+
+Optional Args : -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'" -colo <<colo>>
+-filterBy <<field1:value1,field2:value2>> -lifecycle <<lifecycles>>
+-orderBy field -sortOrder <<sortOrder>>
+
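+Example (illustrative; the process name sampleProcess and the time range are hypothetical):
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -summary -start "2012-05-07T00:00Z" -end "2012-05-08T00:00Z"
+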
+<a href="../Restapi/InstanceSummary.html">Optional params described here.</a>

Added: falcon/trunk/general/src/site/twiki/falconcli/SuspendEntity.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/SuspendEntity.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/SuspendEntity.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/SuspendEntity.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Suspend
+
+[[CommonCLI][Common CLI Options]]
+
+Suspend on an entity results in suspension of the oozie bundle that was scheduled earlier through the schedule function. No further instances are executed on a suspended entity. Only schedulable entities (process/feed) can be suspended.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [feed|process] -name <<name>> -suspend
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/SuspendInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/SuspendInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/SuspendInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/SuspendInstance.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,8 @@
+---+++Suspend
+
+[[CommonCLI][Common CLI Options]]
+
+Suspend is used to suspend an instance or instances of the given process. This option pauses the parent workflow at the state it was in at the time of execution of this command.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -type <<feed/process>> -name <<name>> -suspend -start "yyyy-MM-dd'T'HH:mm'Z'" -end "yyyy-MM-dd'T'HH:mm'Z'"
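+
+Example (illustrative; the process name sampleProcess and the time range are hypothetical):
+$FALCON_HOME/bin/falcon instance -type process -name sampleProcess -suspend -start "2012-05-07T05:02Z" -end "2012-05-07T05:07Z"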

Added: falcon/trunk/general/src/site/twiki/falconcli/Touch.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/Touch.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/Touch.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/Touch.twiki Mon Feb 15 
05:48:00 2016
@@ -0,0 +1,10 @@
+---+++Touch
+
+[[CommonCLI][Common CLI Options]]
+
+Force Update operation allows an already submitted/scheduled entity to be updated.
+
+Usage:
+$FALCON_HOME/bin/falcon entity -type [feed|process] -name <<name>> -touch
+
+Optional Arg : -skipDryRun. When this argument is specified, Falcon skips the oozie dryrun.
\ No newline at end of file

Added: falcon/trunk/general/src/site/twiki/falconcli/TriageInstance.twiki
URL: 
http://svn.apache.org/viewvc/falcon/trunk/general/src/site/twiki/falconcli/TriageInstance.twiki?rev=1730449&view=auto
==============================================================================
--- falcon/trunk/general/src/site/twiki/falconcli/TriageInstance.twiki (added)
+++ falcon/trunk/general/src/site/twiki/falconcli/TriageInstance.twiki Mon Feb 
15 05:48:00 2016
@@ -0,0 +1,9 @@
+---+++Triage
+
+[[CommonCLI][Common CLI Options]]
+
+Given a feed/process instance, this command traces its ancestors to find which of them have failed. It is useful when
+a lot of instances are failing in a pipeline, as it helps find the root cause of the pipeline being stuck.
+
+Usage:
+$FALCON_HOME/bin/falcon instance -triage -type <<feed/process>> -name <<name>> -start "yyyy-MM-dd'T'HH:mm'Z'"


