http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/FalconEmailNotification.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/FalconEmailNotification.twiki 
b/trunk/releases/master/src/site/twiki/FalconEmailNotification.twiki
deleted file mode 100644
index 25abdd2..0000000
--- a/trunk/releases/master/src/site/twiki/FalconEmailNotification.twiki
+++ /dev/null
@@ -1,29 +0,0 @@
----++Falcon Email Notification
-
-Falcon email notification allows sending email notifications when scheduled feed/process instances complete.
-Email notification can be defined in a feed/process entity as follows:
-<verbatim>
-<process name="[process name]">
-    ...
-    <notification type="email" to="[email protected],[email protected]"/>
-    ...
-</process>
-</verbatim>
-
-   *  *type*    - specifies the type of notification. *Note:* Currently only the "email" notification type is supported.
-   *  *to*  - specifies the address to send notifications to; multiple 
recipients may be provided as a comma-separated list.
-
-
-Falcon email notification requires some SMTP server configuration to be defined in startup.properties (a sample snippet follows the list below). Following are the values
-it looks for:
-   * *falcon.email.smtp.host*   - The host where the email action may find the 
SMTP server (localhost by default).
-   * *falcon.email.smtp.port*   - The port to connect to for the SMTP server 
(25 by default).
-   * *falcon.email.from.address*    - The from address to be used for mailing 
all emails (falcon@localhost by default).
-   * *falcon.email.smtp.auth*   - Boolean property that specifies if 
authentication is to be done or not. (false by default).
-   * *falcon.email.smtp.user*   - If authentication is enabled, the username 
to login as (empty by default).
-   * *falcon.email.smtp.password*   - If authentication is enabled, the 
username's password (empty by default).
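-
-For reference, a minimal startup.properties snippet using these keys might look like the following. The host and from address here are placeholders, and depending on your deployment the keys may need the same "*." prefix used for other startup.properties entries:
-<verbatim>
-falcon.email.smtp.host=smtp.example.com
-falcon.email.smtp.port=25
-falcon.email.from.address=falcon@localhost
-falcon.email.smtp.auth=false
-</verbatim>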
-
-
-
-Also ensure that the email notification plugin is enabled in startup.properties to send email notifications:
-   * *monitoring.plugins*   - 
org.apache.falcon.plugin.EmailNotificationPlugin,org.apache.falcon.plugin.DefaultMonitoringPlugin
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/FalconNativeScheduler.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/FalconNativeScheduler.twiki 
b/trunk/releases/master/src/site/twiki/FalconNativeScheduler.twiki
deleted file mode 100644
index 9ffc5e9..0000000
--- a/trunk/releases/master/src/site/twiki/FalconNativeScheduler.twiki
+++ /dev/null
@@ -1,213 +0,0 @@
----+ Falcon Native Scheduler
-
----++ Overview
-Falcon has been using Oozie as its scheduling engine. While the use of Oozie works reasonably well, there are scenarios where Oozie scheduling is proving to be a limiting factor. In its current form, Falcon relies on Oozie for both scheduling and workflow execution, which limits scheduling to time-based/cron-based scheduling with additional gating conditions on data availability. This also restricts datasets to being periodic in nature. In order to offer better scheduling capabilities, Falcon comes with its own native scheduler.
-
----++ Capabilities
-The native scheduler will offer the capabilities offered by the Oozie coordinator and more. The native scheduler will be built and released over the next few
-releases of Falcon, giving users an opportunity to use it and provide feedback.
-
-Currently, the native scheduler offers the following capabilities:
-   1. Submit and schedule a Falcon process that runs periodically (without data dependency). It could be a Pig script, an Oozie workflow or a Hive job (all the engine types currently supported).
-   1. Monitor/Query/Modify the scheduled process - all applicable entity APIs and instance APIs should work as they do now.
-
-*NOTE: Execution order is FIFO. LIFO and LAST_ONLY are not supported yet.*
-
-In the near future, Falcon scheduler will provide feature parity with Oozie 
scheduler and in subsequent releases will provide the following features:
-   * Periodic, cron-based, calendar-based scheduling.
-   * Data availability based scheduling.
-   * External trigger/notification based scheduling.
-   * Support for periodic/a-periodic datasets.
-   * Support for optional/mandatory datasets. Option to specify minimum/maximum/exactly-N instances of data to consume.
-   * Handle dependencies across entities during re-run.
-
----++ Configuring Native Scheduler
-You can enable native scheduler by making changes to 
__$FALCON_HOME/conf/startup.properties__ as follows. You will need to restart 
Falcon Server for the changes to take effect.
-<verbatim>
-*.dag.engine.impl=org.apache.falcon.workflow.engine.OozieDAGEngine
-*.application.services=org.apache.falcon.security.AuthenticationInitializationService,\
-                        
org.apache.falcon.workflow.WorkflowJobEndNotificationService, \
-                        org.apache.falcon.service.ProcessSubscriberService,\
-                        org.apache.falcon.service.FeedSLAMonitoringService,\
-                        org.apache.falcon.service.LifecyclePolicyMap,\
-                        
org.apache.falcon.state.store.service.FalconJPAService,\
-                        org.apache.falcon.entity.store.ConfigurationStore,\
-                        org.apache.falcon.rerun.service.RetryService,\
-                        org.apache.falcon.rerun.service.LateRunService,\
-                        org.apache.falcon.metadata.MetadataMappingService,\
-                        org.apache.falcon.service.LogCleanupService,\
-                        org.apache.falcon.service.GroupsService,\
-                        org.apache.falcon.service.ProxyUserService,\
-                        
org.apache.falcon.notification.service.impl.JobCompletionService,\
-                        
org.apache.falcon.notification.service.impl.SchedulerService,\
-                        
org.apache.falcon.notification.service.impl.AlarmService,\
-                        
org.apache.falcon.notification.service.impl.DataAvailabilityService,\
-                        org.apache.falcon.execution.FalconExecutionService
-</verbatim>
-
----+++ Making the Native Scheduler the default scheduler
-To ensure backward compatibility, even when the native scheduler is enabled, the default scheduler is still Oozie. This means that, by default, users will be scheduling entities on the Oozie scheduler. They will need to explicitly specify the scheduler as native if they wish to schedule entities using the native scheduler.
-
-<a href="#Scheduling_new_entities_on_Native_Scheduler">This section</a> has 
more details on how to schedule on either of the schedulers. 
-
-If you wish to make the Falcon Native Scheduler your default scheduler and 
remove Oozie as the scheduler, set the following property in 
__$FALCON_HOME/conf/startup.properties__
-<verbatim>
-## If you wish to use Falcon native scheduler as your default scheduler, set 
the workflow engine to FalconWorkflowEngine instead of OozieWorkflowEngine. ##
-*.workflow.engine.impl=org.apache.falcon.workflow.engine.FalconWorkflowEngine
-</verbatim>
-
----+++ Configuring the state store for Native Scheduler
-You can configure statestore by making changes to 
__$FALCON_HOME/conf/statestore.properties__ as follows. You will need to 
restart Falcon Server for the changes to take effect.
-
-The Falcon Server needs to maintain the state of entities and instances in a persistent store for the system to be recoverable. Since Prism only federates, it does not need to maintain any state information. The following properties need to be set in statestore.properties of Falcon Servers:
-<verbatim>
-######### StateStore Properties #####
-*.falcon.state.store.impl=org.apache.falcon.state.store.jdbc.JDBCStateStore
-*.falcon.statestore.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
-*.falcon.statestore.jdbc.url=jdbc:derby:data/falcon.db
-# StateStore credentials file where username,password and other properties can 
be stored securely.
-# Set this credentials file permission 400 and make sure user who starts 
falcon should only have read permission.
-# Give Absolute path to credentials file along with file name or put in 
classpath with file name statestore.credentials.
-# Credentials file should be present either in given location or class path, 
otherwise falcon won't start.
-*.falcon.statestore.credentials.file=
-*.falcon.statestore.jdbc.username=sa
-*.falcon.statestore.jdbc.password=
-*.falcon.statestore.connection.data.source=org.apache.commons.dbcp.BasicDataSource
-# Maximum number of active connections that can be allocated from this pool at 
the same time.
-*.falcon.statestore.pool.max.active.conn=10
-*.falcon.statestore.connection.properties=
-# Indicates the interval (in milliseconds) between eviction runs.
-*.falcon.statestore.validate.db.connection.eviction.interval=300000
-## The number of objects to examine during each run of the idle object evictor 
thread.
-*.falcon.statestore.validate.db.connection.eviction.num=10
-## Creates Falcon DB.
-## If set to true, it creates the DB schema if it does not exist. If the DB schema exists, this is a NOP.
-## If set to false, it does not create the DB schema. If the DB schema does not exist, startup fails.
-*.falcon.statestore.create.db.schema=true
-</verbatim> 
-
-The _*.falcon.statestore.jdbc.url_ property in statestore.properties 
determines the DB and data location. All other properties are common across 
RDBMS.
-
-*NOTE : Although multiple Falcon Servers can share a DB (not applicable for 
Derby DB), it is recommended that you have different DBs for different Falcon 
Servers for better performance.*
-
-You will need to create the state DB and tables before starting the Falcon 
Server. To create tables, a tool comes bundled with the Falcon installation. 
You can use the _falcon-db.sh_ script to create tables in the DB. The script 
needs to be run only for Falcon Servers and can be run by any user that has 
execute permission on the script. The script picks up the DB connection details 
from __$FALCON_HOME/conf/statestore.properties__. Ensure that you have granted 
the right privileges to the user mentioned in statestore.properties, so that the tables can be created.
-
-You can use the help command to get details on the sub-commands supported:
-<verbatim>
-./bin/falcon-db.sh help
-Hadoop home is set, adding libraries from 
'/Users/pallavi.rao/falcon/hadoop-2.6.0/bin/hadoop classpath' into falcon 
classpath
-usage: 
-      Falcon DB initialization tool currently supports Derby DB/ Mysql
-
-      falcondb help : Display usage for all commands or specified command
-
-      falcondb version : Show Falcon DB version information
-
-      falcondb create <OPTIONS> : Create Falcon DB schema
-                      -run             Confirmation option regarding DB schema 
creation/upgrade
-                      -sqlfile <arg>   Generate SQL script instead of 
creating/upgrading the DB
-                                       schema
-
-      falcondb upgrade <OPTIONS> : Upgrade Falcon DB schema
-                       -run             Confirmation option regarding DB 
schema creation/upgrade
-                       -sqlfile <arg>   Generate SQL script instead of 
creating/upgrading the DB
-                                        schema
-
-</verbatim>
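-
-For example, based on the help output above, a first-time setup would typically create the schema directly, or generate a SQL script for review first; the exact invocation may vary across versions, and the output path below is just a placeholder:
-<verbatim>
-./bin/falcon-db.sh create -run
-
-./bin/falcon-db.sh create -sqlfile /tmp/falcon-create-schema.sql
-</verbatim>
-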
-Currently, MySQL, PostgreSQL and Derby are supported as state stores. We may extend support to other DBs in the future. Falcon has been tested against MySQL v5.5 and PostgreSQL v9.5. If you are using MySQL, ensure you also copy mysql-connector-java-<version>.jar under __$FALCON_HOME/server/webapp/falcon/WEB-INF/lib__ and __$FALCON_HOME/client/lib__.
-
----++++ Using Derby as the State Store
-Using Derby is ideal for a QA and staging setup. Falcon comes bundled with a Derby connector and no explicit setup is required (although you can set it up) in terms of creating the DB or tables.
-For example,
- <verbatim> *.falcon.statestore.jdbc.url=jdbc:derby:data/falcon.db;create=true 
</verbatim>
-
- tells Falcon to use the Derby JDBC connector, with data directory $FALCON_HOME/data/ and DB name 'falcon'. If _create=true_ is specified, you will not need to create a DB up front; a database will be created if it does not exist.
-
----++++ Using MySQL as the State Store
-The jdbc.url property in statestore.properties determines the DB and data 
location.
-For example,
- <verbatim> *.falcon.statestore.jdbc.url=jdbc:mysql://localhost:3306/falcon 
</verbatim>
-
- tells Falcon to use the MySQL JDBC connector, which is accessible at localhost:3306, with DB name 'falcon'.
-
----++ Scheduling new entities on Native Scheduler
-To schedule an entity (currently only process is supported) using the native 
scheduler, you need to specify the scheduler in the schedule command as shown 
below:
-<verbatim>
-$FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule 
-properties falcon.scheduler:native
-</verbatim>
-
-If Oozie is configured as the default scheduler, you can skip the scheduler 
option or explicitly set it to _oozie_, as shown below:
-<verbatim>
-$FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule
-OR
-$FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule 
-properties falcon.scheduler:oozie
-</verbatim>
-
-If the native scheduler is configured as the default scheduler, you can omit the scheduler option, as shown below:
-<verbatim>
-$FALCON_HOME/bin/falcon entity -type process -name <process name> -schedule 
-</verbatim>
-
----++ Migrating entities from Oozie Scheduler to Native Scheduler
-Currently, users will have to delete and re-create entities in order to move across schedulers. Attempting to schedule an already scheduled entity on a different scheduler will result in an error. Note that the history of instances prior to scheduling on the native scheduler will not be available via the instance APIs. However, users can retrieve that information using the metadata APIs. The native scheduler must be enabled before migrating entities to it.
-
-<a href="#Configuring_Native_Scheduler">Configuring Native Scheduler</a> has 
more details on how to enable native scheduler.
-
----+++ Migrating from Oozie to Native Scheduler
-   * Delete the entity (process). 
-<verbatim>$FALCON_HOME/bin/falcon entity -type process -name <process name> 
-delete </verbatim>
-   * Submit the entity (process) with start time from where the Oozie 
scheduler left off. 
-<verbatim>$FALCON_HOME/bin/falcon entity -type process -submit <path to 
process xml> </verbatim>
-   * Schedule the entity on native scheduler. 
-<verbatim> $FALCON_HOME/bin/falcon entity -type process -name <process name> 
-schedule -properties falcon.scheduler:native </verbatim>
-
----+++ Reverting to Oozie from Native Scheduler
-   * Delete the entity (process). 
-<verbatim>$FALCON_HOME/bin/falcon entity -type process -name <process name> 
-delete </verbatim>
-   * Submit the entity (process) with start time from where the Native 
scheduler left off. 
-<verbatim>$FALCON_HOME/bin/falcon entity -type process -submit <path to 
process xml> </verbatim>
-   * Schedule the entity on the default scheduler (Oozie).
- <verbatim> $FALCON_HOME/bin/falcon entity -type process -name <process name> 
-schedule </verbatim>
-
----+++ Differences in API responses between Oozie and Native Scheduler
-Most API responses are similar whether the entity is scheduled via Oozie or 
via Native scheduler. However, there are a few exceptions and those are listed 
below.
----++++ Rerun API
-When a user performs a rerun using the Oozie scheduler, Falcon directly reruns the workflow on Oozie and the instance is moved to 'RUNNING'.
-
-Example response:
-<verbatim>
-$ falcon instance -rerun processMerlinOozie -start 2016-01-08T12:13Z -end 
2016-01-08T12:15Z
-Consolidated Status: SUCCEEDED
-
-Instances:
-Instance               Cluster         SourceCluster           Status          
Start           End             Details                                 Log
------------------------------------------------------------------------------------------------
-2016-01-08T12:13Z      ProcessMultipleClustersTest-corp-9706f068       -       
RUNNING 2016-01-08T13:03Z       2016-01-08T13:03Z       -       
http://8RPCG32.corp.inmobi.com:11000/oozie?job=0001811-160104160825636-oozie-oozi-W
-2016-01-08T12:13Z      ProcessMultipleClustersTest-corp-0b270a1d       -       
RUNNING 2016-01-08T13:03Z       2016-01-08T13:03Z       -       
http://lda01:11000/oozie?job=0002247-160104115615658-oozie-oozi-W
-
-Additional Information:
-Response: ua1/RERUN
-ua2/RERUN
-Request Id: ua1/871377866@qtp-630572412-35 - 
7190c4c8-bacb-4639-8d48-c9e639f544da
-ua2/1554129706@qtp-536122141-13 - bc18127b-1bf8-4ea1-99e6-b1f10ba3a441
-</verbatim>
-
-However, when a user performs a rerun on the native scheduler, the instance is scheduled again. This is done intentionally so as not to violate the limit on the number of instances running in parallel. Hence, the user will see the status of the instance as 'READY'.
-
-Example response:
-<verbatim>
-$ falcon instance -rerun 
ProcessMultipleClustersTest-agregator-coord16-8f55f59b -start 2016-01-08T12:13Z 
-end 2016-01-08T12:15Z
-Consolidated Status: SUCCEEDED
-
-Instances:
-Instance               Cluster         SourceCluster           Status          
Start           End             Details                                 Log
------------------------------------------------------------------------------------------------
-2016-01-08T12:13Z      ProcessMultipleClustersTest-corp-9706f068       -       
READY   2016-01-08T13:03Z       2016-01-08T13:03Z       -       
http://8RPCG32.corp.inmobi.com:11000/oozie?job=0001812-160104160825636-oozie-oozi-W
-
-2016-01-08T12:13Z      ProcessMultipleClustersTest-corp-0b270a1d       -       
READY   2016-01-08T13:03Z       2016-01-08T13:03Z       -       
http://lda01:11000/oozie?job=0002248-160104115615658-oozie-oozi-W
-
-Additional Information:
-Response: ua1/RERUN
-ua2/RERUN
-Request Id: ua1/871377866@qtp-630572412-35 - 
8d118d4d-c0ef-4335-a9af-10364498ec4f
-ua2/1554129706@qtp-536122141-13 - c2a3fc50-8b05-47ce-9c85-ca432b96d923
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/HDFSDR.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/HDFSDR.twiki 
b/trunk/releases/master/src/site/twiki/HDFSDR.twiki
deleted file mode 100644
index 1c1e3f5..0000000
--- a/trunk/releases/master/src/site/twiki/HDFSDR.twiki
+++ /dev/null
@@ -1,34 +0,0 @@
----+ HDFS DR Recipe
----++ Overview
-Falcon supports an HDFS DR recipe to replicate data from a source cluster to a destination cluster.
-
----++ Usage
----+++ Setup cluster definition.
-   <verbatim>
-    $FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml
-   </verbatim>
-
----+++ Update recipes properties
-   Copy the HDFS replication recipe properties, workflow and template files from $FALCON_HOME/data-mirroring/hdfs-replication to an accessible
-   directory path or to the recipe directory path (*falcon.recipe.path=<recipe directory path>*). *"falcon.recipe.path"* must be specified
-   in the Falcon conf client.properties (a sample entry is shown below). Then update the copied recipe properties file with the attributes required to replicate data from the source cluster to
-   the destination cluster for HDFS DR.
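-
-   For example, a minimal entry in __$FALCON_HOME/conf/client.properties__ might look like the following; the directory shown is only a placeholder:
-   <verbatim>
-    falcon.recipe.path=/apps/falcon/recipes/hdfs-replication
-   </verbatim>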
-
----+++ Submit HDFS DR recipe
-
-   After updating the recipe properties file with the required attributes in the directory path or in falcon.recipe.path,
-   there are two ways of submitting the HDFS DR recipe:
-
-   * 1. Specify the Falcon recipe properties file through the recipe command line.
-   <verbatim>
-    $FALCON_HOME/bin/falcon recipe -name hdfs-replication -operation 
HDFS_REPLICATION
-    -properties /cluster/hdfs-replication.properties
-   </verbatim>
-
-   * 2. Use the Falcon recipe path specified in the Falcon conf client.properties.
-   <verbatim>
-    $FALCON_HOME/bin/falcon recipe -name hdfs-replication -operation 
HDFS_REPLICATION
-   </verbatim>
-
-
-*Note:* The recipe properties file, workflow file and template file names must match the recipe name, must be unique, and must be in the same directory.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/HiveDR.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/HiveDR.twiki 
b/trunk/releases/master/src/site/twiki/HiveDR.twiki
deleted file mode 100644
index a8f6aee..0000000
--- a/trunk/releases/master/src/site/twiki/HiveDR.twiki
+++ /dev/null
@@ -1,74 +0,0 @@
----+Hive Disaster Recovery
-
-
----++Overview
-Falcon provides a feature to replicate Hive metadata and data events from a source cluster
-to a destination cluster. This is supported for both secure and unsecure clusters through Falcon Recipes.
-
-
----++Prerequisites
-Following are the prerequisites to use Hive DR:
-
-   * *Hive 1.2.0+*
-   * *Oozie 4.2.0+*
-
-*Note:* Set the following properties in hive-site.xml for replicating the Hive events on the source and destination Hive clusters:
-<verbatim>
-    <property>
-        <name>hive.metastore.event.listeners</name>
-        <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
-        <description>event listeners that are notified of any metastore 
changes</description>
-    </property>
-
-    <property>
-        <name>hive.metastore.dml.events</name>
-        <value>true</value>
-    </property>
-</verbatim>
-
----++ Usage
----+++ Bootstrap
-   Perform an initial bootstrap of the Database and Tables from the source cluster to the destination cluster.
-   * *Database Bootstrap*
-     For bootstrapping DB replication, the destination DB should be created first. This step is expected,
-     since DB replication definitions can be set up by users only on pre-existing DBs. Second, export all tables in
-     the source DB and import them in the destination DB, as described in Table Bootstrap.
-
-   * *Table Bootstrap*
-     For bootstrapping table replication, essentially after having turned on the !DbNotificationListener
-     on the source DB, perform an Export of the table, distcp the Export over to the destination
-     warehouse and do an Import over there (a sketch of these commands follows this list). Check the following [[https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport][Hive Export-Import]] for syntax details
-     and examples.
-     This will set up the destination table so that the events on the source cluster that modify the table
-     will then be replicated.
-
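-A rough sketch of the table bootstrap commands is given below. The table name, staging paths and namenode addresses are placeholders; refer to the Hive Export-Import documentation linked above for the authoritative syntax.
-<verbatim>
--- On the source cluster (Hive CLI / beeline):
-EXPORT TABLE customer_raw TO '/staging/customer_raw_export';
-
-# Copy the exported data to the destination cluster (shell):
-hadoop distcp hdfs://source-nn:8020/staging/customer_raw_export hdfs://dest-nn:8020/staging/customer_raw_export
-
--- On the destination cluster (Hive CLI / beeline):
-IMPORT TABLE customer_raw FROM '/staging/customer_raw_export';
-</verbatim>
-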
----+++ Setup cluster definition
-   <verbatim>
-    $FALCON_HOME/bin/falcon entity -submit -type cluster -file 
/cluster/definition.xml
-   </verbatim>
-
----+++ Update recipes properties
-   Copy the Hive DR recipe properties, workflow and template files from $FALCON_HOME/data-mirroring/hive-disaster-recovery to an accessible
-   directory path or to the recipe directory path (*falcon.recipe.path=<recipe directory path>*). *"falcon.recipe.path"* must be specified
-   in the Falcon conf client.properties. Then update the copied recipe properties file with the attributes required to replicate metadata and data from the source cluster to
-   the destination cluster for Hive DR.
-
----+++ Submit Hive DR recipe
-   After updating the recipe properties file with the required attributes in the directory path or in falcon.recipe.path,
-   there are two ways of submitting the Hive DR recipe:
-
-   * 1. Specify the Falcon recipe properties file through the recipe command line.
-   <verbatim>
-       $FALCON_HOME/bin/falcon recipe -name hive-disaster-recovery -operation 
HIVE_DISASTER_RECOVERY
-       -properties /cluster/hive-disaster-recovery.properties
-   </verbatim>
-
-   * 2. Use the Falcon recipe path specified in the Falcon conf client.properties.
-   <verbatim>
-       $FALCON_HOME/bin/falcon recipe -name hive-disaster-recovery -operation 
HIVE_DISASTER_RECOVERY
-   </verbatim>
-
-
-*Note:*
-   * The recipe properties file, workflow file and template file names must match the recipe name, must be unique, and must be in the same directory.
-   * If Kerberos security is enabled on the cluster, use the secure templates for Hive DR from $FALCON_HOME/data-mirroring/hive-disaster-recovery.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/HiveIntegration.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/HiveIntegration.twiki 
b/trunk/releases/master/src/site/twiki/HiveIntegration.twiki
deleted file mode 100644
index 688305d..0000000
--- a/trunk/releases/master/src/site/twiki/HiveIntegration.twiki
+++ /dev/null
@@ -1,372 +0,0 @@
----+ Hive Integration
-
----++ Overview
-Falcon provides data management functions for feeds declaratively. It allows 
users to represent feed locations as
-time-based partition directories on HDFS containing files.
-
-Hive provides a simple and familiar database-like tabular model of data management to its users,
-which is backed by HDFS. It supports two classes of tables: managed tables and external tables.
-
-Falcon allows users to represent feed locations as Hive tables. Falcon supports both managed and external tables
-and provides data management services for tables such as replication, eviction, archival, etc. Falcon will notify
-HCatalog as a side effect of acquiring, replicating or evicting a data set instance, and adds the
-missing capability of HCatalog table replication.
-
-In the near future, Falcon will allow users to express pipeline processing in 
Hive scripts
-apart from Pig and Oozie workflows.
-
-
----++ Assumptions
-   * Date is a mandatory first-level partition for Hive tables
-      * Data availability triggers are based on date pattern in Oozie
-   * Tables must be created in Hive prior to adding them as a Feed in Falcon.
-      * Duplicating this in Falcon will create confusion on the real source of 
truth. Also propagating schema changes
-    between systems is a hard problem.
-   * Falcon does not know about the encoding of the data; data should be in an HCatalog-supported format.
-
----++ Configuration
-Falcon provides a system level option to enable Hive integration. Falcon must 
be configured with an implementation
-for the catalog registry. The default implementation for Hive is shipped with 
Falcon.
-
-<verbatim>
-catalog.service.impl=org.apache.falcon.catalog.HiveCatalogService
-</verbatim>
-
-
----++ Incompatible changes
-Falcon depends heavily on data-availability triggers for scheduling Falcon workflows. Oozie must support
-data-availability triggers based on HCatalog partition availability. This is only available in Oozie 4.x.
-
-Hence, Falcon's Hive support needs Oozie 4.x.
-
-
----++ Oozie Shared Library setup
-Falcon, post Hive integration, depends heavily on the [[http://oozie.apache.org/docs/4.0.1/WorkflowFunctionalSpec.html#a17_HDFS_Share_Libraries_for_Workflow_Applications_since_Oozie_2.3][shared library feature of Oozie]].
-Since the sheer number of jars for HCatalog, Pig and Hive runs into the many tens, it is quite daunting to
-redistribute the dependent jars from Falcon.
-
-[[http://oozie.apache.org/docs/4.0.1/DG_QuickStart.html#Oozie_Share_Lib_Installation][This
 is a one time effort in Oozie setup and is quite straightforward.]]
-
-
----++ Approach
-
----+++ Entity Changes
-
-   * Cluster DSL will have an additional registry-interface section, 
specifying the endpoint for the
-HCatalog server. If this is absent, no HCatalog publication will be done from 
Falcon for this cluster.
-      <verbatim>thrift://hcatalog-server:port</verbatim>
-   * Feed DSL will allow users to specify the URI (location) for HCatalog tables (a concrete example follows this list) as:
-      <verbatim>catalog:database_name:table_name#partitions(key=value?)*</verbatim>
-   * Failure to publish to HCatalog will be retried (configurable # of retries) with back-off. Permanent failures
-   after all the retries are exhausted will fail the Falcon workflow.
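-
-For instance, a feed backed by an hourly-partitioned Hive table could use a URI like the one below; the database, table and partition key are illustrative, and full feed definitions using this form appear in the Hive Examples section:
-<verbatim>catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}</verbatim>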
-
----+++ Eviction
-
-   * Falcon will construct DDL statements to filter candidate partitions eligible for eviction (see the illustrative DDL after this list)
-   * Falcon will construct DDL statements to drop the eligible partitions
-   * Additionally, Falcon will nuke the data on HDFS for external tables
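-
-Illustratively, the drop statement Falcon generates is equivalent to something like the following; the table name and partition value are hypothetical:
-<verbatim>
-ALTER TABLE customer_raw DROP PARTITION (ds='2013-09-24-00');
-</verbatim>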
-
-
----+++ Replication
-
-   * Falcon will use HCatalog (Hive) API to export the data for a given table 
and the partition,
-which will result in a data collection that includes metadata on the data's 
storage format, the schema,
-how the data is sorted, what table the data came from, and values of any 
partition keys from that table.
-   * Falcon will use the distcp tool to copy the exported data collection to the secondary cluster, into a staging
-directory used by Falcon.
-   * Falcon will then import the data into HCatalog (Hive) using the HCatalog 
(Hive) API. If the specified table does
-not yet exist, Falcon will create it, using the information in the imported 
metadata to set defaults for the
-table such as schema, storage format, etc.
-   * The partition is not complete and hence not visible to users until all the data is committed on the secondary
-cluster (no dirty reads).
-   * The data collection is staged by Falcon and retries for the copy continue from where it left off.
-   * Failure to register with Hive will be retried. After all the attempts are exhausted,
-the data will be cleaned up by Falcon.
-
-
----+++ Security
-The user owns all data managed by Falcon. Falcon runs as the user who 
submitted the feed. Falcon will authenticate
-with HCatalog as the end user who owns the entity and the data.
-
-For Hive managed tables, the table may be owned by the end user or “hive”. 
For “hive” owned tables,
-the user will have to configure the feed as “hive”.
-
-
----++ Load on HCatalog from Falcon
-It generally depends on the frequency of the feeds configured in Falcon and 
how often data is ingested, replicated,
-or processed.
-
-
----++ User Impact
-   * There should not be any impact to users due to this integration
-   * Falcon will be fully backwards compatible 
-   * Users have a choice to either choose storage based on files on HDFS as 
they do today or use HCatalog for
-accessing the data in tables
-
-
----++ Known Limitations
-
----+++ Oozie
-
-   * Falcon with Hadoop 1.x requires copying guava jars manually to sharelib 
in oozie. Hadoop 2.x ships this.
-   * hcatalog-pig-adapter needs to be copied manually to oozie sharelib.
-<verbatim>
-bin/hadoop dfs -copyFromLocal 
$LFS/share/lib/hcatalog/hcatalog-pig-adapter-0.5.0-incubating.jar 
share/lib/hcatalog
-</verbatim>
-   * Oozie 4.x with Hadoop-2.x
-Replication jobs are submitted to Oozie on the destination cluster. Oozie runs a table export job
-on the RM on the source cluster. The Oozie server on the target cluster must be configured with the source Hadoop
-configs, else jobs fail on secure and non-secure clusters with errors such as the one below:
-<verbatim>
-org.apache.hadoop.security.token.SecretManager$InvalidToken: Password not 
found for ApplicationAttempt appattempt_1395965672651_0010_000002
-</verbatim>
-
-Make sure all Oozie servers that Falcon talks to have the Hadoop configs configured in oozie-site.xml:
-<verbatim>
-<property>
-      <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
-      
<value>*=/etc/hadoop/conf,arpit-new-falcon-1.cs1cloud.internal:8020=/etc/hadoop-1,arpit-new-falcon-1.cs1cloud.internal:8032=/etc/hadoop-1,arpit-new-falcon-2.cs1cloud.internal:8020=/etc/hadoop-2,arpit-new-falcon-2.cs1cloud.internal:8032=/etc/hadoop-2,arpit-new-falcon-5.cs1cloud.internal:8020=/etc/hadoop-3,arpit-new-falcon-5.cs1cloud.internal:8032=/etc/hadoop-3</value>
-      <description>
-          Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the 
HOST:PORT of
-          the Hadoop service (JobTracker, HDFS). The wildcard '*' 
configuration is
-          used when there is no exact match for an authority. The 
HADOOP_CONF_DIR contains
-          the relevant Hadoop *-site.xml files. If the path is relative is 
looked within
-          the Oozie configuration directory; though the path can be absolute 
(i.e. to point
-          to Hadoop client conf/ directories in the local filesystem.
-      </description>
-    </property>
-</verbatim>
-
----+++ Hive
-
-   * Dated Partitions
-Falcon does not work well when a table partition contains multiple dated columns. Falcon only works
-with a single dated partition. This is being tracked in FALCON-357, which is a limitation in Oozie.
-<verbatim>
-catalog:default:table4#year=${YEAR};month=${MONTH};day=${DAY};hour=${HOUR};minute=${MINUTE}
-</verbatim>
-
-   * [[https://issues.apache.org/jira/browse/HIVE-5550][Hive table import 
fails for tables created with default text and sequence file formats using 
HCatalog API]]
-For some arcane reason, Hive substitutes the output format for text and sequence to be prefixed with Hive.
-Hive table import fails since it compares the input and output formats of the source table and they are
-different. Say a table was created without specifying the file format; it defaults to:
-<verbatim>
-fileFormat=TextFile, inputformat=org.apache.hadoop.mapred.TextInputFormat, 
outputformat=org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
-</verbatim>
-
-But, when Hive fetches the table from the metastore, it replaces the output format with org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
-and the comparison between the source and target tables fails.
-<verbatim>
-org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer#checkTable
-      // check IF/OF/Serde
-      String existingifc = table.getInputFormatClass().getName();
-      String importedifc = tableDesc.getInputFormat();
-      String existingofc = table.getOutputFormatClass().getName();
-      String importedofc = tableDesc.getOutputFormat();
-      if ((!existingifc.equals(importedifc))
-          || (!existingofc.equals(importedofc))) {
-        throw new SemanticException(
-            ErrorMsg.INCOMPATIBLE_SCHEMA
-                .getMsg(" Table inputformat/outputformats do not match"));
-      }
-</verbatim>
-The above is not an issue with Hive 0.13.
-
----++ Hive Examples
-Following is an example entity configuration for lifecycle management 
functions for tables in Hive.
-
----+++ Hive Table Lifecycle Management - Replication and Retention
-
----++++ Primary Cluster
-
-<verbatim>
-<?xml version="1.0"?>
-<!--
-    Primary cluster configuration for demo vm
-  -->
-<cluster colo="west-coast" description="Primary Cluster"
-         name="primary-cluster"
-         xmlns="uri:falcon:cluster:0.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
-    <interfaces>
-        <interface type="readonly" endpoint="hftp://localhost:10070";
-                   version="1.1.1" />
-        <interface type="write" endpoint="hdfs://localhost:10020"
-                   version="1.1.1" />
-        <interface type="execute" endpoint="localhost:10300"
-                   version="1.1.1" />
-        <interface type="workflow" endpoint="http://localhost:11010/oozie/";
-                   version="4.0.1" />
-        <interface type="registry" endpoint="thrift://localhost:19083"
-                   version="0.11.0" />
-        <interface type="messaging" 
endpoint="tcp://localhost:61616?daemon=true"
-                   version="5.4.3" />
-    </interfaces>
-    <locations>
-        <location name="staging" path="/apps/falcon/staging" />
-        <location name="temp" path="/tmp" />
-        <location name="working" path="/apps/falcon/working" />
-    </locations>
-</cluster>
-</verbatim>
-
----++++ BCP Cluster
-
-<verbatim>
-<?xml version="1.0"?>
-<!--
-    BCP cluster configuration for demo vm
-  -->
-<cluster colo="east-coast" description="BCP Cluster"
-         name="bcp-cluster"
-         xmlns="uri:falcon:cluster:0.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
-    <interfaces>
-        <interface type="readonly" endpoint="hftp://localhost:20070";
-                   version="1.1.1" />
-        <interface type="write" endpoint="hdfs://localhost:20020"
-                   version="1.1.1" />
-        <interface type="execute" endpoint="localhost:20300"
-                   version="1.1.1" />
-        <interface type="workflow" endpoint="http://localhost:11020/oozie/";
-                   version="4.0.1" />
-        <interface type="registry" endpoint="thrift://localhost:29083"
-                   version="0.11.0" />
-        <interface type="messaging" 
endpoint="tcp://localhost:61616?daemon=true"
-                   version="5.4.3" />
-    </interfaces>
-    <locations>
-        <location name="staging" path="/apps/falcon/staging" />
-        <location name="temp" path="/tmp" />
-        <location name="working" path="/apps/falcon/working" />
-    </locations>
-</cluster>
-</verbatim>
-
----++++ Feed with replication and eviction policy
-
-<verbatim>
-<?xml version="1.0"?>
-<!--
-    Replicating Hourly customer table from primary to secondary cluster.
-  -->
-<feed description="Replicating customer table feed" 
name="customer-table-replicating-feed"
-      xmlns="uri:falcon:feed:0.1">
-    <frequency>hours(1)</frequency>
-    <timezone>UTC</timezone>
-
-    <clusters>
-        <cluster name="primary-cluster" type="source">
-            <validity start="2013-09-24T00:00Z" end="2013-10-26T00:00Z"/>
-            <retention limit="hours(2)" action="delete"/>
-        </cluster>
-        <cluster name="bcp-cluster" type="target">
-            <validity start="2013-09-24T00:00Z" end="2013-10-26T00:00Z"/>
-            <retention limit="days(30)" action="delete"/>
-
-            <table 
uri="catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-        </cluster>
-    </clusters>
-
-    <table 
uri="catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-
-    <ACL owner="seetharam" group="users" permission="0755"/>
-    <schema location="" provider="hcatalog"/>
-</feed>
-</verbatim>
-
-
----+++ Hive Table used in Processing Pipelines
-
----++++ Primary Cluster
-The cluster definition from the lifecycle example can be used.
-
----++++ Input Feed
-
-<verbatim>
-<?xml version="1.0"?>
-<feed description="clicks log table " name="input-table" 
xmlns="uri:falcon:feed:0.1">
-    <groups>online,bi</groups>
-    <frequency>hours(1)</frequency>
-    <timezone>UTC</timezone>
-
-    <clusters>
-        <cluster name="##cluster##" type="source">
-            <validity start="2010-01-01T00:00Z" end="2012-04-21T00:00Z"/>
-            <retention limit="hours(24)" action="delete"/>
-        </cluster>
-    </clusters>
-
-    <table 
uri="catalog:falcon_db:input_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-
-    <ACL owner="testuser" group="group" permission="0x755"/>
-    <schema location="/schema/clicks" provider="protobuf"/>
-</feed>
-</verbatim>
-
-
----++++ Output Feed
-
-<verbatim>
-<?xml version="1.0"?>
-<feed description="clicks log identity table" name="output-table" 
xmlns="uri:falcon:feed:0.1">
-    <groups>online,bi</groups>
-    <frequency>hours(1)</frequency>
-    <timezone>UTC</timezone>
-
-    <clusters>
-        <cluster name="##cluster##" type="source">
-            <validity start="2010-01-01T00:00Z" end="2012-04-21T00:00Z"/>
-            <retention limit="hours(24)" action="delete"/>
-        </cluster>
-    </clusters>
-
-    <table 
uri="catalog:falcon_db:output_table#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}" />
-
-    <ACL owner="testuser" group="group" permission="0x755"/>
-    <schema location="/schema/clicks" provider="protobuf"/>
-</feed>
-</verbatim>
-
-
----++++ Process
-
-<verbatim>
-<?xml version="1.0"?>
-<process name="##processName##" xmlns="uri:falcon:process:0.1">
-    <clusters>
-        <cluster name="##cluster##">
-            <validity end="2012-04-22T00:00Z" start="2012-04-21T00:00Z"/>
-        </cluster>
-    </clusters>
-
-    <parallel>1</parallel>
-    <order>FIFO</order>
-    <frequency>days(1)</frequency>
-    <timezone>UTC</timezone>
-
-    <inputs>
-        <input end="today(0,0)" start="today(0,0)" feed="input-table" 
name="input"/>
-    </inputs>
-
-    <outputs>
-        <output instance="now(0,0)" feed="output-table" name="output"/>
-    </outputs>
-
-    <properties>
-        <property name="blah" value="blah"/>
-    </properties>
-
-    <workflow engine="pig" path="/falcon/test/apps/pig/table-id.pig"/>
-
-    <retry policy="periodic" delay="minutes(10)" attempts="3"/>
-</process>
-</verbatim>
-
-
----++++ Pig Script
-
-<verbatim>
-A = load '$input_database.$input_table' using 
org.apache.hcatalog.pig.HCatLoader();
-B = FILTER A BY $input_filter;
-C = foreach B generate id, value;
-store C into '$output_database.$output_table' USING 
org.apache.hcatalog.pig.HCatStorer('$output_dataout_partitions');
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/ImportExport.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/ImportExport.twiki 
b/trunk/releases/master/src/site/twiki/ImportExport.twiki
deleted file mode 100644
index b0ce7ff..0000000
--- a/trunk/releases/master/src/site/twiki/ImportExport.twiki
+++ /dev/null
@@ -1,242 +0,0 @@
----+Falcon Data Import and Export
-
-
----++Overview
-
-Falcon provides constructs to periodically bring raw data from external data 
sources (like databases, drop boxes etc)
-onto Hadoop and push derived data computed on Hadoop onto external data 
sources.
-
-As of this release, Falcon only supports relational databases (e.g. Oracle, MySQL, etc.) via JDBC as an external data source.
-Future releases will add support for other external data sources.
-
-
----++Prerequisites
-
-Following are the prerequisites to import external data from and export to 
databases.
-
-   * *Sqoop 1.4.6+*
-   * *Oozie 4.2.0+*
-   * *Appropriate database connector*
-
-
-*Note:* Falcon uses Sqoop for import/export operations. Sqoop will require an appropriate database driver to connect to
-the relational database. Please refer to the Sqoop documentation for any Sqoop-related questions. Please make sure
-the database driver jar is copied into the Oozie share lib for Sqoop.
-
-<verbatim>
-For example, in order to import and export with MySQL, please make sure the latest MySQL connector
-mysql-connector-java-5.1.31.jar is copied into Oozie's Sqoop share lib:
-
-/user/oozie/share/lib/{lib-dir}/sqoop/mysql-connector-java-5.1.31.jar
-
-where {lib-dir} value varies in oozie deployments.
-
-</verbatim>
-
----++ Usage
----+++ Entity Definition and Setup
-   * *Datasource Entity*
-      Datasource entity abstracts connection and credential details to 
external data sources. The Datasource entity
-      supports read and write interfaces with specific credentials. The 
default credential will be used if the read
-      or write interface does not have its own credentials. In general, the 
Datasource entity will be defined by
-      system administrator. Please refer to datasource XSD for more details.
-
-      The following example defines a Datasource entity for a MySQL database. 
The import operation will use
-      the read interface with url "jdbc:mysql://dbhost/test", user name 
"import_usr" and password text "sqoop".
-      Where as, the export operation will use the write interface with url 
"jdbc:mysql://dbhost/test" with user
-      name "export_usr" and password specified in a HDFS file at the location 
"/user/ambari-qa/password-store/password_write_user".
-
-      The default credential specified will be used if either the read or 
write interface does not provide its own
-      credentials. The default credential specifies the password using the password alias feature available via the hadoop credential
-      functionality. Users can create a password alias using the "hadoop credential -create <alias> -provider
-      <provider-path>" command, where <alias> is a string and <provider-path> is an HDFS jceks file. At runtime,
-      the specified alias will be used to look up the password stored encrypted in the jceks HDFS file specified under
-      the providerPath element.
-
-      The available read and write interfaces enable database administrators 
to segregate read and write workloads.
-
-      <verbatim>
-
-      File: mysql-database.xml
-
-      <?xml version="1.0" encoding="UTF-8"?>
-      <datasource colo="west-coast" description="MySQL database on west coast" 
type="mysql" name="mysql-db" xmlns="uri:falcon:datasource:0.1">
-          <tags>[email protected], 
[email protected]</tags>
-          <interfaces>
-              <!-- ***** read interface ***** -->
-              <interface type="readonly" endpoint="jdbc:mysql://dbhost/test">
-                  <credential type="password-text">
-                      <userName>import_usr</userName>
-                      <passwordText>sqoop</passwordText>
-                  </credential>
-              </interface>
-
-              <!-- ***** write interface ***** -->
-              <interface type="write"  endpoint="jdbc:mysql://dbhost/test">
-                  <credential type="password-file">
-                      <userName>export_usr</userName>
-                      
<passwordFile>/user/ambari-qa/password-store/password_write_user</passwordFile>
-                  </credential>
-              </interface>
-
-              <!-- *** default credential *** -->
-              <credential type="password-alias">
-                <userName>sqoop2_user</userName>
-                <passwordAlias>
-                    <alias>sqoop.password.alias</alias>
-                    
<providerPath>hdfs://namenode:8020/user/ambari-qa/sqoop_password.jceks</providerPath>
-                </passwordAlias>
-              </credential>
-
-          </interfaces>
-
-          <driver>
-              <clazz>com.mysql.jdbc.Driver</clazz>
-              
<jar>/user/oozie/share/lib/lib_20150721010816/sqoop/mysql-connector-java-5.1.31</jar>
-          </driver>
-      </datasource>
-      </verbatim>
-
-   * *Feed  Entity*
-      Feed entity now enables users to define IMPORT and EXPORT policies in 
addition to RETENTION and REPLICATION.
-      The IMPORT and EXPORT policies will refer to an already defined Datasource entity for connection and credential
-      details and take a table name from the policy to operate on. Please refer to the feed entity XSD for details.
-
-      The following example defines a Feed entity with IMPORT and EXPORT 
policies. Both the IMPORT and EXPORT operations
-      refer to a datasource entity "mysql-db". The IMPORT operation will use 
the read interface and credentials while
-      the EXPORT operation will use the write interface and credentials. A feed instance is created every hour
-      since the frequency of the Feed is hours(1), and the Feed instances are deleted after 90 days because of the
-      retention policy.
-
-
-      <verbatim>
-
-      File: customer_email_feed.xml
-
-      <?xml version="1.0" encoding="UTF-8"?>
-      <!--
-       A feed representing Hourly customer email data retained for 90 days
-       -->
-      <feed description="Raw customer email feed" name="customer_feed" 
xmlns="uri:falcon:feed:0.1">
-          <tags>externalSystem=USWestEmailServers,classification=secure</tags>
-          <groups>DataImportPipeline</groups>
-          <frequency>hours(1)</frequency>
-          <late-arrival cut-off="hours(4)"/>
-          <clusters>
-              <cluster name="primaryCluster" type="source">
-                  <validity start="2015-12-15T00:00Z" end="2016-03-31T00:00Z"/>
-                  <retention limit="days(90)" action="delete"/>
-                  <import>
-                      <source name="mysql-db" tableName="simple">
-                          <extract type="full">
-                              <mergepolicy>snapshot</mergepolicy>
-                          </extract>
-                          <fields>
-                              <includes>
-                                  <field>id</field>
-                                  <field>name</field>
-                              </includes>
-                          </fields>
-                      </source>
-                      <arguments>
-                          <argument name="--split-by" value="id"/>
-                          <argument name="--num-mappers" value="2"/>
-                      </arguments>
-                  </import>
-                  <export>
-                        <target name="mysql-db" tableName="simple_export">
-                            <load type="insert"/>
-                            <fields>
-                              <includes>
-                                <field>id</field>
-                                <field>name</field>
-                              </includes>
-                            </fields>
-                        </target>
-                        <arguments>
-                             <argument name="--update-key" value="id"/>
-                        </arguments>
-                    </export>
-              </cluster>
-          </clusters>
-
-          <locations>
-              <location type="data" 
path="/user/ambari-qa/falcon/demo/primary/importfeed/${YEAR}-${MONTH}-${DAY}-${HOUR}-${MINUTE}"/>
-              <location type="stats" path="/none"/>
-              <location type="meta" path="/none"/>
-          </locations>
-
-          <ACL owner="ambari-qa" group="users" permission="0755"/>
-          <schema location="/none" provider="none"/>
-
-      </feed>
-      </verbatim>
-
-   * *Import policy*
-     The import policy uses the datasource entity specified in the "source" to 
connect to the database. The tableName
-     specified should exist in the source datasource.
-
-     Extraction type specifies whether to pull data from the external datasource "full" every time or "incrementally".
-     The mergepolicy specifies how to organize the data on Hadoop (snapshot or append, i.e. time-series partitions).
-     The valid combinations are:
-      * [full,snapshot] - data is extracted in full and dumped into the feed 
instance location.
-      * [incremental, append] - data is extracted incrementally using the key 
specified in the *deltacolumn*
-        and added as a partition to the feed instance location.
-      * [incremental, snapshot] - data is extracted incrementally and merged with already existing data on Hadoop to
-        produce one latest feed instance. *This feature is not supported currently*. The use case for this feature is
-        to efficiently import very large dimension tables that have updates and inserts onto Hadoop and make them available
-        as a snapshot with the latest updates to consumers.
-
-      The following example defines an incremental extraction with append 
organization:
-
-      <verbatim>
-           <import>
-                <source name="mysql-db" tableName="simple">
-                    <extract type="incremental">
-                        <deltacolumn>modified_time</deltacolumn>
-                        <mergepolicy>append</mergepolicy>
-                    </extract>
-                    <fields>
-                        <includes>
-                            <field>id</field>
-                            <field>name</field>
-                        </includes>
-                    </fields>
-                </source>
-                <arguments>
-                    <argument name="--split-by" value="id"/>
-                    <argument name="--num-mappers" value="2"/>
-                </arguments>
-            </import>
-        </verbatim>
-
-
-     The fields option enables control over which fields get imported. By default, all fields get imported. The "includes" option
-     brings in only the fields specified. The "excludes" option brings in all fields other than those specified
-     (a sketch of an "excludes" block follows the export policy description below).
-
-     The arguments section enables passing in any extra arguments needed for fine control over the underlying implementation --
-     in this case, Sqoop.
-
-   * *Export policy*
-     The export, like import, uses the datasource for connecting to the 
database. Load type specifies whether to insert
-     or only update data onto the external table. Fields option behaves the 
same way as in import policy.
-     The tableName specified should exist in the external datasource.
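-
-     As a sketch, an "excludes" block mirrors the "includes" structure shown in the examples above; the field name here is hypothetical:
-     <verbatim>
-                <fields>
-                    <excludes>
-                        <field>internal_notes</field>
-                    </excludes>
-                </fields>
-     </verbatim>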
-
----+++ Operation
-   Once the Datasource and Feed entities with import and export policies are defined, users can submit and schedule
-   the import and export operations via the CLI and REST API as below:
-
-   <verbatim>
-
-    ## submit the mysql-db datasource defined in the file mysql_datasource.xml
-    falcon entity -submit -type datasource -file mysql_datasource.xml
-
-    ## submit the customer_feed specified in the customer_email_feed.xml
-    falcon entity -submit -type feed -file customer_email_feed.xml
-
-    ## schedule the customer_feed
-    falcon entity -schedule -type feed -name customer_feed
-
-   </verbatim>
-
-   Falcon will create the corresponding Oozie bundles with a coordinator and workflow for the import and export operations.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/InstallationSteps.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/InstallationSteps.twiki 
b/trunk/releases/master/src/site/twiki/InstallationSteps.twiki
deleted file mode 100644
index a5ee2cc..0000000
--- a/trunk/releases/master/src/site/twiki/InstallationSteps.twiki
+++ /dev/null
@@ -1,87 +0,0 @@
----+Building & Installing Falcon
-
-
----++Building Falcon
-
----+++Prerequisites
-
-   * JDK 1.7/1.8
-   * Maven 3.2.x
-
-
-
----+++Step 1 - Clone the Falcon repository
-
-<verbatim>
-$git clone https://git-wip-us.apache.org/repos/asf/falcon.git falcon
-</verbatim>
-
-
----+++Step 2 - Build Falcon
-
-<verbatim>
-$cd falcon
-$export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify" && mvn clean 
install
-</verbatim>
-It builds and installs the package into the local repository, for use as a 
dependency in other projects locally.
-
-[optionally -Dhadoop.version=<<hadoop.version>> can be appended to build for a 
specific version of Hadoop]
-
-*NOTE:* Falcon drops support for Hadoop-1 and only supports Hadoop-2 from Falcon 0.6 onwards.
-[optionally -Doozie.version=<<oozie version>> can be appended to build with a specific version of Oozie. Oozie versions
->= 4 are supported]
-NOTE: Falcon builds with JDK 1.7/1.8 using the -noverify option.
-      To compile Falcon with Hive replication, "-P hadoop-2,hivedr" can optionally be appended. For this, Hive >= 1.2.0
-      and Oozie >= 4.2.0 should be available. A combined example is shown below.
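-
-Putting those options together, a build against a specific Hadoop and Oozie version with Hive replication enabled might look like the following; the version numbers are illustrative:
-<verbatim>
-$export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m -noverify" && mvn clean install -Dhadoop.version=2.6.0 -Doozie.version=4.2.0 -P hadoop-2,hivedr
-</verbatim>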
-
-
-
----+++Step 3 - Package and Deploy Falcon
-
-Once the build successfully completes, artifacts can be packaged for 
deployment using the assembly plugin. The Assembly
-Plugin for Maven is primarily intended to allow users to aggregate the project 
output along with its dependencies,
-modules, site documentation, and other files into a single distributable 
archive. There are two basic ways in which you
-can deploy Falcon - Embedded mode(also known as Stand Alone Mode) and 
Distributed mode. Your next steps will vary based
-on the mode in which you want to deploy Falcon.
-
-*NOTE* : Oozie is being extended by Falcon (particularly on el-extensions) and 
hence the need for Falcon to build &
-re-package Oozie, so that users of Falcon can work with the right Oozie setup. 
Though Oozie is packaged by Falcon, it
-needs to be deployed separately by the administrator and is not auto deployed 
along with Falcon.
-
-
----++++Embedded/Stand Alone Mode
-Embedded mode is useful when the Hadoop jobs and relevant data processing 
involve only one Hadoop cluster. In this mode
- there is a single Falcon server that contacts the scheduler to schedule jobs 
on Hadoop. All the process/feed requests
- like submit, schedule, suspend, kill etc. are sent to this server. For 
running Falcon in this mode one should use the
- Falcon package that has been built using the standalone option. You can find the instructions for Embedded mode setup
- [[Embedded-mode][here]].
-
-
----++++Distributed Mode
-Distributed mode is for multiple (colos) instances of Hadoop clusters, and 
multiple workflow schedulers to handle them.
-In this mode Falcon has 2 components: Prism and Server(s). Both Prism and Server(s) have their own config
-locations (startup and runtime properties). In this mode Prism acts as a contact point for Falcon servers. While
- all commands are available through Prism, only the read and instance APIs are available through the Server. You can find the
- instructions for Distributed Mode setup [[Distributed-mode][here]].
-
-
-
----+++Preparing Oozie and Falcon packages for deployment
-<verbatim>
-$cd <<project home>>
-$src/bin/package.sh <<hadoop-version>> <<oozie-version>>
-
->> ex. src/bin/package.sh 1.1.2 4.0.1 or src/bin/package.sh 0.20.2-cdh3u5 4.0.1
->> ex. src/bin/package.sh 2.5.0 4.0.0
->> Falcon package is available in <<falcon home>>/target/apache-falcon-<<version>>-bin.tar.gz
->> Oozie package is available in <<falcon home>>/target/oozie-4.0.1-distro.tar.gz
-</verbatim>
-
-*NOTE:* If you have a separate Apache Oozie installation, you will need to follow some additional steps (a consolidated shell sketch follows this list):
-   1. Once you have set up the Falcon Server, copy the libraries under {falcon-server-dir}/oozie/libext/ to {oozie-install-dir}/libext.
-   1. Modify Oozie's configuration file: copy all Falcon related properties from {falcon-server-dir}/oozie/conf/oozie-site.xml to {oozie-install-dir}/conf/oozie-site.xml.
-   1. Restart oozie:
-      1. cd {oozie-install-dir}
-      1. sudo -u oozie ./bin/oozie-stop.sh
-      1. sudo -u oozie ./bin/oozie-setup.sh prepare-war
-      1. sudo -u oozie ./bin/oozie-start.sh
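-The additional steps above, expressed as a shell sketch ({falcon-server-dir} and {oozie-install-dir} are placeholders for the actual install locations; merging oozie-site.xml remains a manual edit):
-<verbatim>
-# copy the Falcon-provided Oozie extension libraries
-cp {falcon-server-dir}/oozie/libext/* {oozie-install-dir}/libext/
-
-# merge the Falcon related properties from {falcon-server-dir}/oozie/conf/oozie-site.xml
-# into {oozie-install-dir}/conf/oozie-site.xml, then restart Oozie
-cd {oozie-install-dir}
-sudo -u oozie ./bin/oozie-stop.sh
-sudo -u oozie ./bin/oozie-setup.sh prepare-war
-sudo -u oozie ./bin/oozie-start.sh
-</verbatim>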

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/LICENSE.txt
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/LICENSE.txt 
b/trunk/releases/master/src/site/twiki/LICENSE.txt
deleted file mode 100644
index d3b580f..0000000
--- a/trunk/releases/master/src/site/twiki/LICENSE.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-All files in this directory and its subdirectories are under Apache License Version 2.0.
-The license is stated here because the Maven Doxia plugin that converts twiki to html does not have a
-commenting-out feature.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/MigrationInstructions.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/MigrationInstructions.twiki 
b/trunk/releases/master/src/site/twiki/MigrationInstructions.twiki
deleted file mode 100644
index 7c0e027..0000000
--- a/trunk/releases/master/src/site/twiki/MigrationInstructions.twiki
+++ /dev/null
@@ -1,15 +0,0 @@
----+ Migration Instructions
-
----++ Migrate from 0.5-incubating to 0.6-incubating
-
-This is a placeholder wiki for migration instructions from Falcon 0.5-incubating to 0.6-incubating.
-
----+++ Update Entities
-
----+++ Change cluster dir permissions
-
----+++ Enable/Disable TLS
-
----+++ Authorization
-
-

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/OnBoarding.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/OnBoarding.twiki 
b/trunk/releases/master/src/site/twiki/OnBoarding.twiki
deleted file mode 100644
index 8b02150..0000000
--- a/trunk/releases/master/src/site/twiki/OnBoarding.twiki
+++ /dev/null
@@ -1,269 +0,0 @@
----++ Contents
-   * <a href="#Onboarding Steps">Onboarding Steps</a>
-   * <a href="#Sample Pipeline">Sample Pipeline</a>
-   * [[HiveIntegration][Hive Examples]]
-
----+++ Onboarding Steps
-   * Create cluster definition for the cluster, specifying name node, job 
tracker, workflow engine endpoint, messaging endpoint. Refer to 
[[EntitySpecification][cluster definition]] for details.
-   * Create Feed definitions for each of the input and output specifying 
frequency, data path, ownership. Refer to [[EntitySpecification][feed 
definition]] for details.
-   * Create Process definition for your job. Process defines configuration for 
the workflow job. Important attributes are frequency, inputs/outputs and 
workflow path. Refer to [[EntitySpecification][process definition]] for process 
details.
-   * Define the workflow for your job using the workflow engine (only Oozie is supported as of now). Refer to the [[http://oozie.apache.org/docs/3.1.3-incubating/WorkflowFunctionalSpec.html][Oozie Workflow Specification]]. The libraries required for the workflow should be available in the lib folder under the workflow path.
-   * Set-up workflow definition, libraries and referenced scripts on hadoop. 
-   * Submit the cluster definition
-   * Submit and schedule the feed and process definitions (a CLI sketch follows this list)
-   
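-The submission steps above map to Falcon CLI calls along the following lines (the entity file names are illustrative; see [[falconcli/FalconCLI][Falcon CLI]] for full usage):
-<verbatim>
-## submit the cluster definition (its staging/working locations must already exist on HDFS)
-falcon entity -type cluster -submit -file corp-cluster.xml
-
-## submit and schedule the input/output feeds and the process
-falcon entity -type feed -submitAndSchedule -file SampleInput-feed.xml
-falcon entity -type feed -submitAndSchedule -file SampleOutput-feed.xml
-falcon entity -type process -submitAndSchedule -file SampleProcess.xml
-</verbatim>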
-
----+++ Sample Pipeline
----++++ Cluster   
-Cluster definition that contains end points for name node, job tracker, oozie 
and jms server:
-The cluster locations MUST be created prior to submitting a cluster entity to Falcon; a sketch of the commands to
-create them follows the cluster definition below.
-*staging* must have 777 permissions and the parent dirs must have execute permissions.
-*working* must have 755 permissions and the parent dirs must have execute permissions.
-
-<verbatim>
-<?xml version="1.0"?>
-<!--
-    Cluster configuration
-  -->
-<cluster colo="ua2" description="" name="corp" xmlns="uri:falcon:cluster:0.1"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
-    <interfaces>
-        <interface type="readonly" endpoint="hftp://name-node.com:50070" version="2.5.0" />
-
-        <interface type="write" endpoint="hdfs://name-node.com:54310" version="2.5.0" />
-
-        <interface type="execute" endpoint="job-tracker:54311" version="2.5.0" />
-
-        <interface type="workflow" endpoint="http://oozie.com:11000/oozie/" version="4.0.1" />
-
-        <interface type="messaging" endpoint="tcp://jms-server.com:61616?daemon=true" version="5.1.6" />
-    </interfaces>
-
-    <locations>
-        <location name="staging" path="/projects/falcon/staging" />
-        <location name="temp" path="/tmp" />
-        <location name="working" path="/projects/falcon/working" />
-    </locations>
-</cluster>
-</verbatim>
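-The *staging* and *working* locations referenced above must exist with the right permissions before the cluster entity is submitted; a minimal sketch using the paths from the sample definition:
-<verbatim>
-hadoop fs -mkdir -p /projects/falcon/staging /projects/falcon/working
-hadoop fs -chmod 777 /projects/falcon/staging
-hadoop fs -chmod 755 /projects/falcon/working
-</verbatim>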
-   
----++++ Input Feed
-Hourly feed that defines feed path, frequency, ownership and validity:
-<verbatim>
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Hourly sample input data
-  -->
-
-<feed description="sample input data" name="SampleInput" 
xmlns="uri:falcon:feed:0.1"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
-    <groups>group</groups>
-
-    <frequency>hours(1)</frequency>
-
-    <late-arrival cut-off="hours(6)" />
-
-    <clusters>
-        <cluster name="corp" type="source">
-            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" timezone="UTC" />
-            <retention limit="months(24)" action="delete" />
-        </cluster>
-    </clusters>
-
-    <locations>
-        <location type="data" path="/projects/bootcamp/data/${YEAR}-${MONTH}-${DAY}-${HOUR}/SampleInput" />
-        <location type="stats" path="/projects/bootcamp/stats/SampleInput" />
-        <location type="meta" path="/projects/bootcamp/meta/SampleInput" />
-    </locations>
-
-    <ACL owner="suser" group="users" permission="0755" />
-
-    <schema location="/none" provider="none" />
-</feed>
-</verbatim>
-
----++++ Output Feed
-Daily feed that defines feed path, frequency, ownership and validity:
-<verbatim>
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Daily sample output data
-  -->
-
-<feed description="sample output data" name="SampleOutput" 
xmlns="uri:falcon:feed:0.1"
-xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
-    <groups>group</groups>
-
-    <frequency>days(1)</frequency>
-
-    <late-arrival cut-off="hours(6)" />
-
-    <clusters>
-        <cluster name="corp" type="source">
-            <validity start="2009-01-01T00:00Z" end="2099-12-31T00:00Z" timezone="UTC" />
-            <retention limit="months(24)" action="delete" />
-        </cluster>
-    </clusters>
-
-    <locations>
-        <location type="data" path="/projects/bootcamp/output/${YEAR}-${MONTH}-${DAY}/SampleOutput" />
-        <location type="stats" path="/projects/bootcamp/stats/SampleOutput" />
-        <location type="meta" path="/projects/bootcamp/meta/SampleOutput" />
-    </locations>
-
-    <ACL owner="suser" group="users" permission="0755" />
-
-    <schema location="/none" provider="none" />
-</feed>
-</verbatim>
-
----++++ Process
-Sample process which runs daily at the 6th hour on the corp cluster. It takes one input - !SampleInput for the
-previous day (24 instances). It generates one output - !SampleOutput for the previous day. The workflow is defined at
-/projects/bootcamp/workflow/workflow.xml. Any libraries required by the workflow should be at
-/projects/bootcamp/workflow/lib. The process also defines the properties queueName, ssh.host and fileTimestamp which
-are passed to the workflow. In addition, Falcon exposes the following properties to the workflow: nameNode and
-jobTracker (hadoop properties), input and output (input/output properties).
-
-<verbatim>
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-    Daily sample process. Runs at 6th hour every day. Input - last day's 
hourly data. Generates output for yesterday
- -->
-<process name="SampleProcess">
-    <cluster name="corp" />
-
-    <frequency>days(1)</frequency>
-
-    <validity start="2012-04-03T06:00Z" end="2022-12-30T00:00Z" timezone="UTC" />
-
-    <inputs>
-        <input name="input" feed="SampleInput" start="yesterday(0,0)" end="today(-1,0)" />
-    </inputs>
-
-    <outputs>
-        <output name="output" feed="SampleOutput" instance="yesterday(0,0)" />
-    </outputs>
-
-    <properties>
-        <property name="queueName" value="reports" />
-        <property name="ssh.host" value="host.com" />
-        <property name="fileTimestamp" value="${coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd')}" />
-    </properties>
-
-    <workflow engine="oozie" path="/projects/bootcamp/workflow" />
-
-    <retry policy="periodic" delay="minutes(5)" attempts="3" />
-    
-    <late-process policy="exp-backoff" delay="hours(1)">
-        <late-input input="input" 
workflow-path="/projects/bootcamp/workflow/lateinput" />
-    </late-process>
-</process>
-</verbatim>
-
----++++ Oozie Workflow
-The sample user workflow contains 3 actions:
-   * Pig action - Executes pig script /projects/bootcamp/workflow/script.pig
-   * concatenator - Java action that concatenates part files and generates a 
single file
-   * file upload - ssh action that gets the concatenated file from hadoop and 
sends the file to a remote host
-   
-<verbatim>
-<workflow-app xmlns="uri:oozie:workflow:0.2" name="sample-wf">
-        <start to="pig" />
-
-        <action name="pig">
-                <pig>
-                        <job-tracker>${jobTracker}</job-tracker>
-                        <name-node>${nameNode}</name-node>
-                        <prepare>
-                                <delete path="${output}"/>
-                        </prepare>
-                        <configuration>
-                                <property>
-                                        <name>mapred.job.queue.name</name>
-                                        <value>${queueName}</value>
-                                </property>
-                                <property>
-                                        <name>mapreduce.fileoutputcommitter.marksuccessfuljobs</name>
-                                        <value>true</value>
-                                </property>
-                        </configuration>
-                        <script>${nameNode}/projects/bootcamp/workflow/script.pig</script>
-                        <param>input=${input}</param>
-                        <param>output=${output}</param>
-                        <file>lib/dependent.jar</file>
-                </pig>
-                <ok to="concatenator" />
-                <error to="fail" />
-        </action>
-
-        <action name="concatenator">
-                <java>
-                        <job-tracker>${jobTracker}</job-tracker>
-                        <name-node>${nameNode}</name-node>
-                        <prepare>
-                                <delete path="${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv"/>
-                        </prepare>
-                        <configuration>
-                                <property>
-                                        <name>mapred.job.queue.name</name>
-                                        <value>${queueName}</value>
-                                </property>
-                        </configuration>
-                        <main-class>com.wf.Concatenator</main-class>
-                        <arg>${output}</arg>
-                        <arg>${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv</arg>
-                </java>
-                <ok to="fileupload" />
-                <error to="fail"/>
-        </action>
-                        
-        <action name="fileupload">
-                <ssh>
-                        <host>localhost</host>
-                        <command>/tmp/fileupload.sh</command>
-                        <args>${nameNode}/projects/bootcamp/concat/data-${fileTimestamp}.csv</args>
-                        <args>${wf:conf("ssh.host")}</args>
-                        <capture-output/>
-                </ssh>
-                <ok to="fileUploadDecision" />
-                <error to="fail"/>
-        </action>
-
-        <decision name="fileUploadDecision">
-                <switch>
-                        <case to="end">
-                                ${wf:actionData('fileupload')['output'] == '0'}
-                        </case>
-                        <default to="fail"/>
-                </switch>
-        </decision>
-
-        <kill name="fail">
-                <message>Workflow failed, error 
message[${wf:errorMessage(wf:lastErrorNode())}]</message>
-        </kill>
-
-        <end name="end" />
-</workflow-app>
-</verbatim>
-
----++++ File Upload Script
-The script gets the file from hadoop, rsyncs it to /tmp on the remote host and deletes the file from hadoop:
-<verbatim>
-#!/bin/bash
-
-trap 'rc=$?; echo "output=$rc"; exit $rc' ERR INT TERM   # preserve the real exit code before echo resets $?
-
-echo "Arguments: $@"
-SRCFILE=$1
-DESTHOST=$2   # ssh.host is passed as the second argument by the workflow
-
-FILENAME=`basename $SRCFILE`
-rm -f /tmp/$FILENAME
-hadoop fs -copyToLocal $SRCFILE /tmp/
-echo "Copied $SRCFILE to /tmp"
-
-rsync -ztv --rsh=ssh --stats /tmp/$FILENAME $DESTHOST:/tmp
-echo "rsynced $FILENAME to $DESTUSER@$DESTHOST:$DESTFILE"
-
-hadoop fs -rmr $SRCFILE
-echo "Deleted $SRCFILE"
-
-rm -f /tmp/$FILENAME
-echo "output=0"
-</verbatim>

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Operability.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Operability.twiki 
b/trunk/releases/master/src/site/twiki/Operability.twiki
deleted file mode 100644
index 05850c1..0000000
--- a/trunk/releases/master/src/site/twiki/Operability.twiki
+++ /dev/null
@@ -1,110 +0,0 @@
----+ Operationalizing Falcon
-
----++ Overview
-
-Apache Falcon provides various tools to operationalize Falcon, consisting of Alerts for
-unrecoverable errors, Audits of user actions, Metrics, and Notifications. They are detailed below.
-
----++ Lineage
-
-Currently Lineage has no way to access or restore information about entity instances created while lineage
-was disabled. Information about entities, however, is preserved and bootstrapped when lineage is enabled. If you have
-to reset the graph db, you can delete the graph db files specified in startup.properties and restart Falcon.
-Please note: you will lose all the information about the instances if you delete the graph db.
-
----++ Monitoring
-
-Falcon provides monitoring of various events by capturing metrics of those 
events.
-The metric numbers can then be used to monitor performance and health of the 
Falcon system and
-the entire processing pipelines.
-
-Falcon also exposes 
[[https://github.com/thinkaurelius/titan/wiki/Titan-Performance-and-Monitoring][metrics
 for titandb]]
-
-Users can view the logs of these events in the metric.log file; by default this file is created
-under the ${user.dir}/logs/ directory. Users may also extend the Falcon monitoring framework to send
-events to systems like Mondemand/lwes by implementing the org.apache.falcon.plugin.MonitoringPlugin
-interface.
-
-The following events are captured by Falcon for logging the metrics:
-   1. New cluster definitions posted to Falcon (success & failures)
-   1. New feed definition posted to Falcon (success & failures)
-   1. New process definition posted to Falcon (success & failures)
-   1. Process update events (success & failures)
-   1. Feed update events (success & failures)
-   1. Cluster update events (success & failures)
-   1. Process suspend events (success & failures)
-   1. Feed suspend events (success & failures)
-   1. Process resume events (success & failures)
-   1. Feed resume events (success & failures)
-   1. Process remove events (success & failures)
-   1. Feed remove events (success & failures)
-   1. Cluster remove events (success & failures)
-   1. Process instance kill events (success & failures)
-   1. Process instance re-run events (success & failures)
-   1. Process instance generation events
-   1. Process instance failure events
-   1. Process instance auto-retry events
-   1. Process instance retry exhaust events
-   1. Feed instance deletion event
-   1. Feed instance deletion failure event (no retries)
-   1. Feed instance replication event
-   1. Feed instance replication failure event
-   1. Feed instance replication auto-retry event
-   1. Feed instance replication retry exhaust event
-   1. Feed instance late arrival event
-   1. Feed instance post cut-off arrival event
-   1. Process re-run due to late feed event
-   1. Transaction rollback failed event
-
-The metric logged for an event has the following properties:
-   1. Action - Name of the event.
-   2. Dimensions - A list of name/value pairs of various attributes for a 
given action.
-   3. Status - Status of the action: FAILED/SUCCEEDED.
-   4. Time-taken - Time taken in nanoseconds for a given action.
-
-An example for an event logged for a submit of a new process definition:
-
-   2012-05-04 12:23:34,026 {Action:submit, Dimensions:{entityType=process}, 
Status: SUCCEEDED, Time-taken:97087000 ns}
-
-Users may parse the metric.log or capture these events from custom monitoring 
frameworks and can plot various graphs
-or send alerts according to their requirements.
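-As an illustration, a simple shell one-liner can summarize failed actions from metric.log based on the format shown above (adjust the path to your Falcon log directory, ${user.dir}/logs/ by default):
-<verbatim>
-## count FAILED events per action name
-grep 'Status: FAILED' logs/metric.log | sed 's/.*Action:\([^,]*\),.*/\1/' | sort | uniq -c
-</verbatim>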
-
-
----++ Notifications
-
-Falcon creates a JMS topic for every process/feed that is scheduled in Falcon.
-The implementation class and the broker url of the JMS engine are read from 
the dependent cluster's definition.
-Users may register consumers on the required topic to check the availability 
or status of feed instances.
-
-For a given process that is scheduled, the name of the topic is the same as the process name.
-Falcon sends a Map message for every feed produced by the instance of a 
process to the JMS topic.
-The JMS !MapMessage sent to a topic has the following properties:
-entityName, feedNames, feedInstancePath, workflowId, runId, nominalTime, 
timeStamp, brokerUrl, brokerImplClass, entityType, operation, logFile, 
topicName, status, brokerTTL;
-
-For a given feed that is scheduled, the name of the topic is the same as the feed name.
-Falcon sends a map message for every feed instance that is 
deleted/archived/replicated depending upon the retention policy set in the feed 
definition.
-The JMS !MapMessage sent to a topic has the following properties:
-entityName, feedNames, feedInstancePath, workflowId, runId, nominalTime, 
timeStamp, brokerUrl, brokerImplClass, entityType, operation, logFile, 
topicName, status, brokerTTL;
-
-The JMS messages are automatically purged after a certain period (default 3 days) by the Falcon JMS house-keeping
-service. The TTL (time-to-live) for JMS messages can be configured in Falcon's startup.properties file.
-
-
----++ Alerts
-
-Falcon generates alerts for unrecoverable errors into a log file by default.
-Users can view these alerts in the alerts.log file; by default this file is created
-under the ${user.dir}/logs/ directory.
-
-Users may also extend the Falcon Alerting plugin to send events to systems 
like Nagios, etc. by
-extending org.apache.falcon.plugin.AlertingPlugin interface.
-
-
----++ Audits
-
-Falcon audits all user activity and captures them into a log file by default.
-Users can view these audits in the audit.log file; by default this file is created
-under the ${user.dir}/logs/ directory.
-
-Users may also extend the Falcon Audit plugin to send audits to systems like 
Apache Argus, etc. by
-extending org.apache.falcon.plugin.AuditingPlugin interface.

http://git-wip-us.apache.org/repos/asf/falcon/blob/4e4b8457/trunk/releases/master/src/site/twiki/Recipes.twiki
----------------------------------------------------------------------
diff --git a/trunk/releases/master/src/site/twiki/Recipes.twiki 
b/trunk/releases/master/src/site/twiki/Recipes.twiki
deleted file mode 100644
index b5faa1e..0000000
--- a/trunk/releases/master/src/site/twiki/Recipes.twiki
+++ /dev/null
@@ -1,85 +0,0 @@
----+ Falcon Recipes
-
----++ Overview
-
-A Falcon recipe is a static process template with a parameterized workflow to realize a specific use case. Recipes
-are defined in user space. Recipes do not have support for update or lifecycle management.
-
-For example:
-
-   * Replicating directories from one HDFS cluster to another (not timed 
partitions)
-   * Replicating hive metadata (database, table, views, etc.)
-   * Replicating between HDFS and Hive - either way
-   * Data masking etc.
-
----++ Proposal
-
-Falcon provides a Process abstraction that encapsulates the configuration for 
a user workflow with scheduling
-controls. All recipes can be modeled as a Process within Falcon which executes the user workflow periodically. The
-process and its associated workflow are parameterized. The user provides a properties file with name-value pairs
-that are substituted by Falcon before scheduling it. Falcon translates these recipes into a process entity by
-replacing the parameters in the workflow definition.
-
----++ Falcon CLI recipe support
-
-Falcon CLI functionality to support recipes has been added.
-Recipe command usage is defined in the [[falconcli/FalconCLI][Falcon CLI]] documentation.
-
-The CLI accepts a recipe option with a recipe name and an optional tool, and does the following (a sketch of an invocation follows this list):
-   * Validates the options; name option is mandatory and tool is optional and 
should be provided if user wants to override the base recipe tool
-   * Looks for <name>-workflow.xml, <name>-template.xml and <name>.properties 
file in the path specified by falcon.recipe.path in client.properties. If files 
cannot be found then Falcon CLI will fail
-   * Invokes a Tool to substitute the properties in the templated process for 
the recipe. By default invokes base tool if tool option is not passed. Tool is 
responsible for generating process entity at the path specified by FalconCLI
-   * Validates the generated entity
-   * Submits and schedules this entity
-   * Generated process entity files are stored in a tmp directory
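-A hypothetical invocation, assuming falcon.recipe.path in client.properties points at a directory containing the three files named after the recipe (the recipe name below is illustrative; see [[falconcli/FalconCLI][Falcon CLI]] for exact usage):
-<verbatim>
-## expected under the directory configured as falcon.recipe.path:
-##   hdfs-replication-workflow.xml
-##   hdfs-replication-template.xml
-##   hdfs-replication.properties
-falcon recipe -name hdfs-replication
-</verbatim>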
-
----++ Base Recipe tool
-
-Falcon provides a base tool that recipes can override. Base Recipe tool does 
the following:
-   * Expects recipe template file path, recipe properties file path and path 
where process entity to be submitted should be generated. Validates these 
arguments
-   * Validates that the artifacts, i.e. the workflow and/or lib files specified in the recipe template, exist on the local filesystem or HDFS at the specified path; otherwise returns an error
-   * Copies the artifacts to HDFS if they exist on the local filesystem
-      * If workflow is on local FS then falcon.recipe.workflow.path in recipe 
property file is mandatory for it to be copied to HDFS. If templated process 
requires custom libs falcon.recipe.workflow.lib.path property is mandatory for 
them to be copied from Local FS to HDFS. Recipe tool will copy the local 
artifacts only if these properties are set in properties file
-   * Looks for the pattern ##[A-Za-z0-9_.]*## in the templated process and substitutes it with the properties. The process entity generated after the substitution is written to the empty file passed by the Falcon CLI
-
----++ Recipe template file format
-
-   * Any templatized string should be in the format ##[A-Za-z0-9_.]*##.
-   * There should be a corresponding entry in the recipe properties file 
"falcon.recipe.<templatized-string> = <value to be substituted>"
-
-<verbatim>
-Example: If the entry in the recipe template is <workflow name="##workflow.name##">, there should be a corresponding
-entry in the recipe properties file: falcon.recipe.workflow.name=hdfs-dr-workflow
-</verbatim>
-
----++ Recipe properties file format
-
-   * Regular key value pair properties file
-   * Property key should be prefixed by "falcon.recipe."
-
-<verbatim>
-Example: falcon.recipe.workflow.name=hdfs-dr-workflow
-The recipe template will have <workflow name="##workflow.name##">. The recipe tool will look for the pattern
-##workflow.name## and replace it with the property value "hdfs-dr-workflow". The substituted template will have
-<workflow name="hdfs-dr-workflow">
-</verbatim>
-
----++ Metrics
-HDFS DR and Hive DR recipes capture replication metrics such as TIMETAKEN, BYTESCOPIED and COPY (number of files
-copied) for an instance and populate them into the GraphDB.
-
----++ Managing the scheduled recipe process
-   * Scheduled recipe process is similar to regular process
-      * List : falcon entity -type process -name <recipe-process-name> -list
-      * Status : falcon entity -type process -name <recipe-process-name> -status
-      * Delete : falcon entity -type process -name <recipe-process-name> -delete
-
----++ Sample recipes
-
-   * Sample recipes are published in addons/recipes
-
----++ Types of recipes
-   * [[HDFSDR][HDFS Recipe]]
-   * [[HiveDR][HiveDR Recipe]]
-
----++ Packaging
-
-   * There is no packaging for recipes at this time, but it will be added soon.
