Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ListHDFS</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">ListHDFS</h1><h2>Description: </h2><p>Retrieves a listing of files from 
HDFS. For each file that is listed in HDFS, creates a FlowFile that represents 
the HDFS file so that it can be fetched in conjunction with FetchHDFS. This 
Processor is designed to run on Primary Node only in a cluster. If the primary 
node changes, the new Primary Node will pick up where the previous node left 
off without duplicating all of the data. Unlike GetHDFS, this Processor does 
not delete any data from HDFS.</p><h3>Tags: </h3><p>hadoop, HDFS, get, list, 
ingest, source, filesystem</p><h3>Properties: </h3><p>In the list below,
  the names of required properties appear in <strong>bold</strong>. Any other 
properties (not in bold) are considered optional. The table also indicates any 
default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration.</td></tr><tr><td id="name">Kerberos 
Principal</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your 
 nifi.properties</td></tr><tr><td id="name">Kerberos Keytab</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos keytab associated with the principal. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties</td></tr><tr><td 
id="name">Kerberos Relogin Period</td><td id="default-value">4 hours</td><td 
id="allowable-values"></td><td id="description">Period of time which should 
pass before attempting a kerberos relogin</td></tr><tr><td id="name">Additional 
Classpath Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of paths 
to files and/or directories that will be added to the classpath. When 
specifying a directory, all files within the directory will be added to the 
classpath, but further sub-directories will not be included.</td></tr><tr><td 
id="name">Distributed Cache Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: </strong><br/>DistributedMapCacheClient<br/><strong>Implementation:</strong><br/><a
 
href="../../../nifi-distributed-cache-services-nar/1.2.0/org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService/index.html">DistributedMapCacheClientService</a></td><td
 id="description">Specifies the Controller Service that should be used to 
maintain state about what has been pulled from HDFS so that if a new node 
begins pulling data, it won't duplicate all of the work that has been 
done.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
HDFS directory from which files should be read<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>Recurse 
Subdirectories</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to list files from subdirectories of the 
HDFS directory</td></tr><tr><td id="name"><strong>File Filter</strong></td><td 
id="default-value">[^\.].*</td><td id="allowable-values"></td><td 
id="description">Only files whose names match the given regular expression will 
be picked up</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 FlowFiles are transferred to this relationship</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the absolute path of the file's directory on HDFS. For example, 
if the Directory property is set to /tmp, then files picked up from /tmp will 
have the path attribute set to "./". If the Recurse Subdirectories property is 
set to true and a file is picked up from /tmp/abc/1/2/3, then the path 
attribute will
  be set to "/tmp/abc/1/2/3".</td></tr><tr><td>hdfs.owner</td><td>The user that 
owns the file in HDFS</td></tr><tr><td>hdfs.group</td><td>The group that owns 
the file in HDFS</td></tr><tr><td>hdfs.lastModified</td><td>The timestamp of 
when the file in HDFS was last modified, as milliseconds since midnight Jan 1, 
1970 UTC</td></tr><tr><td>hdfs.length</td><td>The number of bytes in the file 
in HDFS</td></tr><tr><td>hdfs.replication</td><td>The number of HDFS replicas 
for the file</td></tr><tr><td>hdfs.permissions</td><td>The permissions for the 
file in HDFS. This is formatted as 3 characters for the owner, 3 for the group, 
and 3 for other users. For example rw-rw-r--</td></tr></table><h3>State 
management: </h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>After
 performing a listing of HDFS files, the timestamp of the newest file is 
stored, along with the filenames of all files that share that same timestamp. 
This allows the Processor to list only files that have been added or modified after this date the next time that 
the Processor is run. State is stored across the cluster so that this Processor 
can be run on Primary Node only and if a new Primary Node is selected, the new 
node can pick up where the previous node left off, without duplicating the 
data.</td></tr></table><h3>Restricted: </h3>This component is not 
restricted.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.FetchHDFS/index.html">FetchHDFS</a>, 
<a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file
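
Editor's note: the File Filter property above defaults to the regular expression [^\.].*, which selects any filename not beginning with a dot. A minimal sketch of how that default behaves (the filenames below are invented purely for illustration):

```python
import re

# ListHDFS's default "File Filter" regex: matches any name whose
# first character is not a dot, i.e. it skips hidden files.
FILE_FILTER = re.compile(r"[^\.].*")

names = ["data.csv", ".hidden", "_SUCCESS", "part-00000"]
picked = [n for n in names if FILE_FILTER.fullmatch(n)]
print(picked)  # ['data.csv', '_SUCCESS', 'part-00000']
```

Note that ListHDFS applies the pattern to full filenames, so anchoring behaviour matters: `fullmatch` here mirrors a whole-name match rather than a substring search.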

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHDFS</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHDFS</h1><h2>Description: </h2><p>Write FlowFile data to Hadoop 
Distributed File System (HDFS)</p><h3>Tags: </h3><p>hadoop, HDFS, put, copy, 
filesystem, restricted</p><h3>Properties: </h3><p>In the list below, the names 
of required properties appear in <strong>bold</strong>. Any other properties 
(not in bold) are considered optional. The table also indicates any default 
values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name">Hadoop Configuration Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
file or comma separated list of files which contains the Hadoop file system 
configuration. Without this, Hadoop will search the classpath for a 
'core-site.xml' and 'hdfs-site.xml' file or will revert to a default 
configuration.</td></tr><tr><td id="name">Kerberos Principal</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties</td></tr><tr><td 
id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos keytab associated with 
the principal. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties</td></tr><tr><td id="name">Kerberos Relogin Period</td><td 
id="default-value">4 hours</td><td id="allowable-values"></td><td 
id="description">Period of time which should pass before attempting a kerberos 
relogin</td></tr><tr><td id="name">Additional Classpath Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of paths to files and/or directories that will be added 
to the classpath. When specifying a directory, all files within the 
directory will be added to the classpath, but further sub-directories will 
not be included.</td></tr><tr><td 
id="name"><strong>Directory</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The parent HDFS directory to 
which files should be written. The directory will be created if it doesn't 
exist.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Conflict Resolution 
Strategy</strong></td><td id="default-value">fail</td><td 
id="allowable-values"><ul><li>replace <img 
src="../../../../../html/images/iconInfo.png" alt="Replaces the existing file 
if any." title="Replaces the existing file if any."></img></li><li>ignore <img 
src="../../../../../html/images/iconInfo.png" alt="Ignores the flow file and 
routes it to success." title="Ignores the flow file and routes it to 
success."></img></li><li>fail <img 
src="../../../../../html/images/iconInfo.png" alt="Penalizes the flow file and 
routes it to failure." title="Penalizes the flow file and routes it to 
failure."></img></li><li>append <img 
src="../../../../../html/images/iconInfo.png" alt="Appends to the existing file 
if any, creates a new file otherwise." title="Appends to the existing file if 
any, creates a new file otherwise."></img></li></ul></td><td 
id="description">Indicates what should happen when a file with the same name 
already exists in the output directory</td></tr><tr><td id="name">Block 
Size</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Size of each block as written to HDFS. This overrides the 
Hadoop Configuration</td></tr><tr><td id="name">IO Buffer Size</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Amount of memory to use to 
buffer file contents during IO. This overrides the Hadoop 
Configuration</td></tr><tr><td id="name">Replication</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Number of times that HDFS will replicate each file. This 
overrides the Hadoop Configuration</td></tr><tr><td id="name">Permissions 
umask</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A umask represented as an octal number which determines the 
permissions of files written to HDFS. This overrides the Hadoop Configuration 
dfs.umaskmode</td></tr><tr><td id="name">Remote Owner</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Changes the owner of the HDFS file to this value after it is 
written. This only works if NiFi is running as a user that has HDFS super user 
privilege to change owner</td></tr><tr><td id="name">Remote Group</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Changes the group of the HDFS 
file to this value after it is written. This only works if NiFi is running as a 
user that has HDFS super user privilege to change group</td></tr><tr><td 
id="name"><strong>Compression codec</strong></td><td 
id="default-value">NONE</td><td 
id="allowable-values"><ul><li>NONE</li><li>DEFAULT</li><li>BZIP</li><li>GZIP</li><li>LZ4</li><li>SNAPPY</li><li>AUTOMATIC</li></ul></td><td
 id="description">No Description Provided.</td></tr></table><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Files
 that have been successfully written to HDFS are transferred to this 
relationship</td></tr><tr><td>failure</td><td>Files that could not be written 
to HDFS for some reason are transferred to this 
relationship</td></tr></table><h3>Reads Attributes: </h3><table 
id="reads-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The name of the file 
written to HDFS comes from the value of this 
attribute.</td></tr></table><h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS is stored in this 
attribute.</td></tr><tr><td>absolute.hdfs.path</td><td>The absolute path to the 
file on HDFS is stored in this attribute.</td></tr></table><h3>State 
management: </h3>This component does not store state.<h3>Restricted: 
</h3>Provides operator the ability to write to any file that NiFi has access to 
in HDFS or the local filesystem.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a></p></body></html>
\ No newline at end of file
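
Editor's note: the Permissions umask property above is an octal mask applied to new files. A hedged sketch of the arithmetic, assuming the usual POSIX/HDFS convention of clearing umask bits from a base file mode of 0666 (the exact base depends on the client and configuration, so treat this as illustrative only):

```python
# Hypothetical helper, not part of NiFi or Hadoop: shows how a umask
# value such as 022 maps to the resulting file permissions.
def apply_umask(base_mode: int, umask: int) -> int:
    # Clear every bit of base_mode that is set in the umask.
    return base_mode & ~umask

mode = apply_umask(0o666, 0o022)
print(oct(mode))  # 0o644, i.e. rw-r--r--
```

This matches the `hdfs.permissions` formatting described in the ListHDFS doc above (3 characters each for owner, group, and other).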

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.2.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHDFSEvents</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">GetHDFSEvents</h1><h2>Description: </h2><p>This processor polls the 
notification events provided by the HdfsAdmin API. Since this uses the 
HdfsAdmin APIs it is required to run as an HDFS super user. Currently there are 
six types of events (append, close, create, metadata, rename, and unlink). 
Please see org.apache.hadoop.hdfs.inotify.Event documentation for full 
explanations of each event. This processor will poll for new events based on a 
defined duration. For each event received a new flow file will be created with 
the expected attributes and the event itself serialized to JSON and written to 
the flow file's content. For example, if event.type is APPEND then the content 
of the flow file will contain a JSON file containing the information about the 
append event. If successful the flow files are sent to the 'success' 
relationship. Be careful where the generated flow files are stored. If the 
flow files are stored in one of the processor's watch directories, there will be a 
never-ending flow of events. It is also important to be aware that this 
processor must consume all events. The filtering must happen within the 
processor. This is because the HDFS admin's event notifications API does not 
have filtering.</p><h3>Tags: </h3><p>hadoop, events, inotify, notifications, 
filesystem</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name">Hadoop Configuration 
Resources</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A file or comma separated list of files which contains the 
Hadoop file system configuration. Without this, Hadoop will search the 
classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a 
default configuration.</td></tr><tr><td id="name">Kerberos Principal</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties</td></tr><tr><td 
id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos keytab associated with 
the principal. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties</td></tr><tr><td id="name">Kerberos Relogin Period</td><td id="default-value">4 
hours</td><td id="allowable-values"></td><td id="description">Period of time 
which should pass before attempting a kerberos relogin</td></tr><tr><td 
id="name">Additional Classpath Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of paths 
to files and/or directories that will be added to the classpath. When 
specifying a directory, all files within the directory will be added to the 
classpath, but further sub-directories will not be included.</td></tr><tr><td 
id="name"><strong>Poll Duration</strong></td><td id="default-value">1 
second</td><td id="allowable-values"></td><td id="description">The time before 
the polling method returns with the next batch of events if they exist. It may 
exceed this amount of time by up to the time required for an RPC to the 
NameNode.</td></tr><tr><td id="name"><strong>HDFS Path to 
Watch</strong></td><td id="default-value"></td><td id="allowable-values"></td><td id="description">The HDFS 
path to get event notifications for. This property accepts both expression 
language and regular expressions. This will be evaluated during the OnScheduled 
phase.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name"><strong>Ignore Hidden Files</strong></td><td 
id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true and the final component of the path associated with a 
given event starts with a '.' then that event will not be 
processed.</td></tr><tr><td id="name"><strong>Event Types to Filter 
On</strong></td><td id="default-value">append, close, create, metadata, rename, 
unlink</td><td id="allowable-values"></td><td id="description">A 
comma-separated list of event types to process. Valid event types are: append, 
close, create, metadata, rename, and unlink. Case does not 
matter.</td></tr><tr><td id="name"><strong>IOException Retries During Event Polling</strong></td><td 
id="default-value">3</td><td id="allowable-values"></td><td 
id="description">According to the HDFS admin API for event polling it is good 
to retry at least a few times. This number defines how many times the poll will 
be retried if it throws an IOException.</td></tr></table><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 flow file with updated information about a specific event will be sent to this 
relationship.</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>mime.type</td><td>This
 is always 
application/json.</td></tr><tr><td>hdfs.inotify.event.type</td><td>This will 
specify the specific HDFS notification event type. Currently there are six 
types of events (append, close, create, metadata, rename, and 
unlink).</td></tr><tr><td>hdfs.inotify.event.path</td><td>The specific path that the event is tied 
to.</td></tr></table><h3>State management: </h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>The
 last used transaction id is stored. This is used 
</td></tr></table><h3>Restricted: </h3>This component is not restricted.<h3>See 
Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.FetchHDFS/index.html">FetchHDFS</a>, 
<a href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a>, 
<a 
href="../org.apache.nifi.processors.hadoop.ListHDFS/index.html">ListHDFS</a></p></body></html>
\ No newline at end of file
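
Editor's note: the IOException Retries During Event Polling property above (default 3) bounds how many times a failed poll is retried. A minimal sketch of that retry shape; `poll_fn` is a hypothetical stand-in for the HdfsAdmin event-stream call, not a real NiFi or Hadoop API:

```python
import time

# One initial attempt plus `retries` retries on IOError, then re-raise.
def poll_with_retries(poll_fn, retries=3, delay_s=0.0):
    last_err = None
    for _attempt in range(retries + 1):
        try:
            return poll_fn()
        except IOError as e:
            last_err = e
            time.sleep(delay_s)
    raise last_err

# Simulated flaky poll that fails twice, then returns a batch.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient")
    return ["event-batch"]

print(poll_with_retries(flaky))  # ['event-batch']
```

With the default of 3 retries, up to four attempts are made before the IOException propagates and the processor run fails.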

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.FetchHBaseRow/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>FetchHBaseRow</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">FetchHBaseRow</h1><h2>Description: </h2><p>Fetches a row from an HBase 
table. The Destination property controls whether the cells are added as flow 
file attributes, or the row is written to the flow file content as JSON. This 
processor may be used to fetch a fixed row on an interval by specifying the 
table and row id directly in the processor, or it may be used to dynamically 
fetch rows by referencing the table and row id from incoming flow 
files.</p><h3>Tags: </h3><p>hbase, scan, fetch, get, enrich</p><h3>Properties: 
</h3><p>In the list below, the names of required properties appear in 
<strong>bold</strong>. Any other properties (not in bold) are considered optional. The 
table also indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: 
</strong><br/>HBaseClientService<br/><strong>Implementation:</strong><br/><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.2.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table to fetch from.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name"><strong>Row Identifier</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The identifier of the row to 
fetch.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Columns</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">An optional comma-separated 
list of "&lt;colFamily&gt;:&lt;colQualifier&gt;" pairs to fetch. To return all 
columns for a given family, leave off the qualifier such as 
"&lt;colFamily1&gt;,&lt;colFamily2&gt;".<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td 
id="name"><strong>Destination</strong></td><td 
id="default-value">flowfile-attributes</td><td 
id="allowable-values"><ul><li>flowfile-attributes <img 
src="../../../../../html/images/iconInfo.png" alt="Adds the JSON document 
representing the row that was fetched as an attribute named hbase.row. The 
format of the JSON document is determined by the JSON Format property. NOTE: Fetching 
many large rows into attributes may have a negative impact on performance." 
title="Adds the JSON document representing the row that was fetched as an 
attribute named hbase.row. The format of the JSON document is determined by the 
JSON Format property. NOTE: Fetching many large rows into attributes may have a 
negative impact on performance."></img></li><li>flowfile-content <img 
src="../../../../../html/images/iconInfo.png" alt="Overwrites the FlowFile 
content with a JSON document representing the row that was fetched. The format 
of the JSON document is determined by the JSON Format property." 
title="Overwrites the FlowFile content with a JSON document representing the 
row that was fetched. The format of the JSON document is determined by the JSON 
Format property."></img></li></ul></td><td id="description">Indicates whether 
the row fetched from HBase is written to FlowFile content or FlowFile 
Attributes.</td></tr><tr><td id="name"><strong>JSON Format</strong></td><td 
id="default-value">full-row</td><td id="allowable-values"><ul><li>full-row 
<img src="../../../../../html/images/iconInfo.png" alt="Creates a JSON 
document with the format: {&quot;row&quot;:&lt;row-id&gt;, 
&quot;cells&quot;:[{&quot;fam&quot;:&lt;col-fam&gt;, 
&quot;qual&quot;:&lt;col-val&gt;, &quot;val&quot;:&lt;value&gt;, 
&quot;ts&quot;:&lt;timestamp&gt;}]}." title="Creates a JSON document with the 
format: {&quot;row&quot;:&lt;row-id&gt;, 
&quot;cells&quot;:[{&quot;fam&quot;:&lt;col-fam&gt;, 
&quot;qual&quot;:&lt;col-val&gt;, &quot;val&quot;:&lt;value&gt;, 
&quot;ts&quot;:&lt;timestamp&gt;}]}."></img></li><li>col-qual-and-val <img 
src="../../../../../html/images/iconInfo.png" alt="Creates a JSON document 
with the format: {&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;, 
&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;." title="Creates a 
JSON document with the format: {&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;, 
&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;."></img></li></ul></td><td
 id="description">Specifies how to represent the HBase row as a JSON 
document.</td></tr><tr><td id="name"><strong>JSON Value 
Encoding</strong></td><td id="default-value">none</td><td 
id="allowable-values"><ul><li>none <img 
src="../../../../../html/images/iconInfo.png" alt="Creates a String using the 
bytes of given data and the given Character Set." title="Creates a String using 
the bytes of given data and the given Character Set."></img></li><li>base64 
<img src="../../../../../html/images/iconInfo.png" alt="Creates a Base64 
encoded String of the given data." title="Creates a Base64 encoded String of 
the given data."></img></li></ul></td><td id="description">Specifies how to 
represent row ids, column families, column qualifiers, and values when stored 
in FlowFile attributes, or written to JSON.</td></tr><tr><td 
id="name"><strong>Encode Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td id="description">The character set used to 
encode the JSON representation of the row.</td></tr><tr><td 
id="name"><strong>Decode Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">The character set used to decode data from 
HBase.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All 
successful fetches are routed to this 
relationship.</td></tr><tr><td>failure</td><td>All failed fetches are routed 
to this relationship.</td></tr><tr><td>not found</td><td>All fetches where 
the row id is not found are routed to this 
relationship.</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>hbase.table</td><td>The 
name of the HBase table that the row was fetched 
from</td></tr><tr><td>hbase.row</td><td>A JSON 
document representing the row. This property is only written when a 
Destination of flowfile-attributes is 
selected.</td></tr><tr><td>mime.type</td><td>Set to application/json when using 
a Destination of flowfile-content, not set or modified 
otherwise</td></tr></table><h3>State management: </h3>This component does not 
store state.<h3>Restricted: </h3>This component is not restricted.</body></html>
\ No newline at end of file
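
Editor's note: the FetchHBaseRow doc above describes a "full-row" JSON layout of {"row":&lt;row-id&gt;, "cells":[{"fam":..., "qual":..., "val":..., "ts":...}]}. A hedged sketch of building that shape; the helper and the sample row data are invented for illustration and are not NiFi code:

```python
import json

# Build the documented "full-row" JSON layout from (family, qualifier,
# value, timestamp) tuples.
def to_full_row_json(row_id, cells):
    return json.dumps({
        "row": row_id,
        "cells": [
            {"fam": fam, "qual": qual, "val": val, "ts": ts}
            for (fam, qual, val, ts) in cells
        ],
    })

doc = to_full_row_json("row-1", [("cf", "greeting", "hello", 1494343659000)])
print(doc)
```

When Destination is flowfile-attributes, a document of this shape lands in the hbase.row attribute; with flowfile-content it replaces the FlowFile content and mime.type becomes application/json.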

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.GetHBase/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.GetHBase/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.GetHBase/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.GetHBase/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHBase</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">GetHBase</h1><h2>Description: </h2><p>This Processor polls HBase for any 
records in the specified table. The processor keeps track of the timestamp of 
the cells that it receives, so that as new records are pushed to HBase, they 
will automatically be pulled. Each record is output in JSON format, as {"row": 
"&lt;row key&gt;", "cells": { "&lt;column 1 family&gt;:&lt;column 1 
qualifier&gt;": "&lt;cell 1 value&gt;", "&lt;column 2 family&gt;:&lt;column 2 
qualifier&gt;": "&lt;cell 2 value&gt;", ... }}. For each record received, a 
Provenance RECEIVE event is emitted with the format hbase://&lt;table 
name&gt;/&lt;row key&gt;, where &lt;row key&gt; is the UTF-8 encoded value of the row's 
key.</p><h3>Tags: </h3><p>hbase, get, ingest</p><h3>Properties: </h3><p>In the 
list below, the names of required properties appear in <strong>bold</strong>. 
Any other properties (not in bold) are considered optional. The table also 
indicates any default values.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>HBase Client 
Service</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>HBaseClientService<br/><strong>Implementation:</strong><br/><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.2.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name">Distributed Cache Service</td><td 
id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>DistributedMapCacheClient<br/><strong>Implementation:</strong><br/><a
 
href="../../../nifi-distributed-cache-services-nar/1.2.0/org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService/index.html">DistributedMapCacheClientService</a></td><td
 id="description">Specifies the Controller Service that should be used to 
maintain state about what has been pulled from HBase so that if a new node 
begins pulling data, it won't duplicate all of the work that has been 
done.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table to pull data from</td></tr><tr><td 
id="name">Columns</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of 
"&lt;colFamily&gt;:&lt;colQualifier&gt;" pairs to return when scanning. To 
return all columns for a given family, leave off the qualifier such as 
"&lt;colFamily1&gt;,&lt;colFamily2&gt;".</td></tr><tr><td id="name">Filter 
Expression</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">An HBase filter expression that will be applied to the scan. 
This property cannot be used when also using the Columns 
property.</td></tr><tr><td id="name"><strong>Initial Time 
Range</strong></td><td id="default-value">None</td><td 
id="allowable-values"><ul><li>None</li><li>Current Time</li></ul></td><td 
id="description">The time range to use on the first scan of a table. None will 
pull the entire table on the first scan, Current Time will pull entries from 
that point forward.</td></tr><tr><td id="name"><strong>Character 
Set</strong></td><td id="default-value">UTF-8</td><td 
id="allowable-values"></td><td id="description">Specifies which character set 
is used to encode the data in HBase</td></tr></table><h3>Relationships: 
</h3><table id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All 
FlowFiles are routed to this 
relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>hbase.table</td><td>The
 name of the HBase table that the data was pulled 
from</td></tr><tr><td>mime.type</td><td>Set to application/json to indicate 
that output is JSON</td></tr></table><h3>State management: </h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>After
 performing a fetch from HBase, stores a timestamp of the last-modified cell 
that was found. In addition, it stores the ID of the row(s) and the value of 
each cell that has that timestamp as its modification date. This is stored 
across the cluster and allows the next fetch to avoid duplicating data, even if 
this Processor is run on Primary Node only and the Primary Node 
changes.</td></tr></table><h3>Restricted: </h3>This component is not 
restricted.</body></html>
\ No newline at end of file
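As a hedged illustration of the per-record JSON layout described in GetHBase's description above, a minimal sketch; the function name and sample row data are hypothetical, and cell values are assumed already decoded with the configured Character Set (UTF-8 by default):

```python
import json

def to_gethbase_json(row_key, cells):
    # Shape described above: {"row": "<row key>",
    #   "cells": {"<family>:<qualifier>": "<value>", ...}}
    return json.dumps({
        "row": row_key,
        "cells": {f"{family}:{qualifier}": value
                  for (family, qualifier), value in cells.items()},
    })

doc = to_gethbase_json("user-42", {("cf", "name"): "alice", ("cf", "age"): "30"})
```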

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.PutHBaseCell/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.PutHBaseCell/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.PutHBaseCell/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.PutHBaseCell/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHBaseCell</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHBaseCell</h1><h2>Description: </h2><p>Adds the Contents of a 
FlowFile to HBase as the value of a single cell</p><h3>Tags: </h3><p>hadoop, 
hbase</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: 
</strong><br/>HBaseClientService<br/><strong>Implementation:</strong><br/><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.2.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table to put data into<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Row Identifier</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the Row ID to use when inserting data into 
HBase<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Row Identifier Encoding 
Strategy</td><td id="default-value">String</td><td 
id="allowable-values"><ul><li>String <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of row id 
as a UTF-8 String." title="Stores the value of row id as a UTF-8 
String."></img></li><li>Binary <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of the rows 
id as a binary byte array. It expects that the row id is a binary formatted 
string." title="Stores the value of the rows id as a binary byte array. It 
expects that the row id is a binary formatted string."></img></li></ul></td><td 
id="description">Specifies the data type of Row ID used when inserting data 
into HBase. The default behavior is to convert the row id to a UTF-8 byte 
array. Choosing Binary will convert a binary formatted string to the correct 
byte[] representation. The Binary option should be used if you are using Binary 
row keys in HBase</td></tr><tr><td id="name"><strong>Column 
Family</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Column Family to use when 
inserting data into HBase<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Column 
Qualifier</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Column Qualifier to use 
when inserting data into HBase<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Batch Size</strong></td><td 
id="default-value">25</td><td id="allowable-values"></td><td 
id="description">The maximum number of FlowFiles to process in a single 
execution. The FlowFiles will be grouped by table, and a single Put per table 
will be performed.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship after it has been successfully stored 
in HBase</td></tr><tr><td>failure</td><td>A FlowFile is routed to this 
relationship if it cannot be sent to HBase</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3>None 
specified.<h3>State management: </h3>This component does not store 
state.<h3>Restricted: </h3>This component is not restricted.</body></html>
\ No newline at end of file
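The String vs. Binary options of the Row Identifier Encoding Strategy described above can be sketched as follows. This is an illustrative reading of the option descriptions, not NiFi's actual implementation, and treating a "binary formatted string" as a string of 0s and 1s is an assumption:

```python
def encode_row_id(row_id: str, strategy: str = "String") -> bytes:
    # Illustrative only; not NiFi's actual implementation.
    if strategy == "String":
        # Default behavior: the row id becomes its UTF-8 byte representation.
        return row_id.encode("utf-8")
    if strategy == "Binary":
        # Assumption: a "binary formatted string" means a string of 0s and 1s,
        # converted here to the raw byte[] it denotes.
        value = int(row_id, 2)
        return value.to_bytes((len(row_id) + 7) // 8, "big")
    raise ValueError(f"unknown strategy: {strategy}")
```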

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.PutHBaseJSON/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.PutHBaseJSON/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.PutHBaseJSON/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.2.0/org.apache.nifi.hbase.PutHBaseJSON/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHBaseJSON</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHBaseJSON</h1><h2>Description: </h2><p>Adds rows to HBase based on 
the contents of incoming JSON documents. Each FlowFile must contain a single 
UTF-8 encoded JSON document, and any FlowFiles where the root element is not a 
single document will be routed to failure. Each JSON field name and value will 
become a column qualifier and value of the HBase row. Any fields with a null 
value will be skipped, and fields with a complex value will be handled 
according to the Complex Field Strategy. The row id can be specified either 
directly on the processor through the Row Identifier property, or can be 
extracted from the JSON document by specifying the Row Identifier Field Name 
property. This processor will hold the contents of all FlowFiles for the given 
batch in memory at one time.</p><h3>Tags: </h3><p>hadoop, hbase, put, 
json</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: 
</strong><br/>HBaseClientService<br/><strong>Implementation:</strong><br/><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.2.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td 
id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table to put data into<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Row Identifier</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the Row ID to use when inserting data into 
HBase<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Row Identifier Field Name</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Specifies the name of a JSON 
element whose value should be used as the row id for the given JSON 
document.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Row Identifier Encoding 
Strategy</td><td 
id="default-value">String</td><td id="allowable-values"><ul><li>String <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of row id 
as a UTF-8 String." title="Stores the value of row id as a UTF-8 
String."></img></li><li>Binary <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of the rows 
id as a binary byte array. It expects that the row id is a binary formatted 
string." title="Stores the value of the rows id as a binary byte array. It 
expects that the row id is a binary formatted string."></img></li></ul></td><td 
id="description">Specifies the data type of Row ID used when inserting data 
into HBase. The default behavior is to convert the row id to a UTF-8 byte 
array. Choosing Binary will convert a binary formatted string to the correct 
byte[] representation. The Binary option should be used if you are using Binary 
row keys in HBase</td></tr><tr><td id="name"><strong>Column 
Family</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Column Family to use when inserting data 
into HBase<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Batch Size</strong></td><td 
id="default-value">25</td><td id="allowable-values"></td><td 
id="description">The maximum number of FlowFiles to process in a single 
execution. The FlowFiles will be grouped by table, and a single Put per table 
will be performed.</td></tr><tr><td id="name"><strong>Complex Field 
Strategy</strong></td><td id="default-value">Text</td><td 
id="allowable-values"><ul><li>Fail <img 
src="../../../../../html/images/iconInfo.png" alt="Route entire FlowFile to 
failure if any elements contain complex values." title="Route entire FlowFile 
to failure if any elements contain complex values."></img></li><li>Warn <img 
src="../../../../../html/images/iconInfo.png" alt="Provide a warning and do not 
include field in row sent to HBase." title="Provide a warning and do not 
include field in row sent 
to HBase."></img></li><li>Ignore <img 
src="../../../../../html/images/iconInfo.png" alt="Silently ignore and do not 
include in row sent to HBase." title="Silently ignore and do not include in row 
sent to HBase."></img></li><li>Text <img 
src="../../../../../html/images/iconInfo.png" alt="Use the string 
representation of the complex field as the value of the given column." 
title="Use the string representation of the complex field as the value of the 
given column."></img></li></ul></td><td id="description">Indicates how to 
handle complex fields, i.e. fields that do not have a single text 
value.</td></tr><tr><td id="name"><strong>Field Encoding 
Strategy</strong></td><td id="default-value">String</td><td 
id="allowable-values"><ul><li>String <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of each 
field as a UTF-8 String." title="Stores the value of each field as a UTF-8 
String."></img></li><li>Bytes <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the 
value of each field as the byte representation of the type derived from the 
JSON." title="Stores the value of each field as the byte representation of the 
type derived from the JSON."></img></li></ul></td><td 
id="description">Indicates how to store the value of each field in HBase. The 
default behavior is to convert each value from the JSON to a String, and store 
the UTF-8 bytes. Choosing Bytes will interpret the type of each field from the 
JSON, and convert the value to the byte representation of that type, meaning an 
integer will be stored as the byte representation of that 
integer.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship after it has been successfully stored 
in HBase</td></tr><tr><td>failure</td><td>A FlowFile is routed to this 
relationship if it cannot be sent to HBase</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes 
Attributes: </h3>None specified.<h3>State management: </h3>This component does not 
store state.<h3>Restricted: </h3>This component is not restricted.</body></html>
\ No newline at end of file
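A minimal sketch of the JSON-to-row mapping PutHBaseJSON describes above, assuming the default "Text" strategy for complex fields; the function and the column family value are hypothetical:

```python
import json

def json_to_columns(document: str, column_family: str = "cf"):
    # column_family stands in for the processor's Column Family property.
    fields = json.loads(document)
    columns = {}
    for name, value in fields.items():
        if value is None:
            continue  # fields with a null value are skipped
        if isinstance(value, (dict, list)):
            value = json.dumps(value)  # default "Text" strategy: string form
        columns[f"{column_family}:{name}"] = str(value)
    return columns

cols = json_to_columns('{"name": "alice", "age": 30, "nick": null}')
```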

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.2.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.2.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.2.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.2.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>HBase_1_1_2_ClientService</title><link 
rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">HBase_1_1_2_ClientService</h1><h2>Description: </h2><p>Implementation of 
HBaseClientService for HBase 1.1.2. This service can be configured by providing 
a comma-separated list of configuration files, or by specifying values for the 
other properties. If configuration files are provided, they will be loaded 
first, and the values of the additional properties will override the values 
from the configuration files. In addition, any user-defined properties on the 
processor will also be passed to the HBase configuration.</p><h3>Tags: 
</h3><p>hbase, client</p><h3>Properties: </h3><p>In the list 
below, the names of required properties appear in <strong>bold</strong>. Any 
other properties (not in bold) are considered optional. The table also 
indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Files</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Comma-separated list of Hadoop 
Configuration files, such as hbase-site.xml and core-site.xml for kerberos, 
including full paths to the files.</td></tr><tr><td id="name">Kerberos 
Principal</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties</td></tr><tr><td 
id="name">Kerberos Keytab</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">Kerberos keytab 
associated with the principal. Requires nifi.kerberos.krb5.file to be set in 
your nifi.properties</td></tr><tr><td id="name">ZooKeeper Quorum</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Comma-separated list of ZooKeeper hosts for HBase. Required 
if Hadoop Configuration Files are not provided.</td></tr><tr><td 
id="name">ZooKeeper Client Port</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The port on which ZooKeeper 
is accepting client connections. Required if Hadoop Configuration Files are 
not provided.</td></tr><tr><td id="name">ZooKeeper ZNode Parent</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">The ZooKeeper ZNode Parent value for HBase (example: 
/hbase). Required if Hadoop Configuration Files are not 
provided.</td></tr><tr><td id="name">HBase Client Retries</td><td 
id="default-value">1</td><td 
 id="allowable-values"></td><td id="description">The number of times the HBase 
client will retry connecting. Required if Hadoop Configuration Files are not 
provided.</td></tr><tr><td id="name">Phoenix Client JAR Location</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
full path to the Phoenix client JAR. Required if Phoenix is installed on top of 
HBase.<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Dynamic Properties: </h3><p>Dynamic 
Properties allow the user to specify both the name and value of a 
property.<table 
id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td
 id="name">The name of an HBase configuration property.</td><td id="value">The 
value of the given HBase configuration property.</td><td>These properties will 
be set on the HBase configuration after loading any provided configuration 
files.</td></tr></table></p><h3>State management: </h3>This component does not 
store state.<h3>Restricted: </h3>This component is not restricted.</body></html>
\ No newline at end of file
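The configuration precedence this service describes above (files load first, explicit properties override file values, and dynamic user-defined properties are set afterwards) can be sketched as follows; the function name and property keys are illustrative, and applying dynamic properties after explicit ones is an assumption:

```python
def effective_hbase_config(file_values, explicit_props, dynamic_props):
    # Files load first; explicit properties (when set) override file values;
    # dynamic user-defined properties are applied afterwards (their ordering
    # relative to explicit properties is an assumption here).
    config = dict(file_values)
    config.update({k: v for k, v in explicit_props.items() if v is not None})
    config.update(dynamic_props)
    return config

cfg = effective_hbase_config(
    {"hbase.zookeeper.quorum": "zk-from-file", "hbase.client.retries.number": "35"},
    {"hbase.zookeeper.quorum": "zk1,zk2", "hbase.zookeeper.property.clientPort": None},
    {"zookeeper.recovery.retry": "3"},
)
```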

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>HiveConnectionPool</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">HiveConnectionPool</h1><h2>Description: </h2><p>Provides Database 
Connection Pooling Service for Apache Hive. Connections can be requested from 
the pool and returned after use.</p><h3>Tags: </h3><p>hive, dbcp, jdbc, database, 
connection, pooling, store</p><h3>Properties: </h3><p>In the list below, the 
names of required properties appear in <strong>bold</strong>. Any other 
properties (not in bold) are considered optional. The table also indicates any 
default values, whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>, and whether a property is 
considered "sensitive", meaning that its value will be 
encrypted. Before entering a value in a sensitive property, ensure that the 
<strong>nifi.properties</strong> file has an entry for the property 
<strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>Database 
Connection URL</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A database connection URL used 
to connect to a database. May contain database system name, host, port, 
database name and some parameters. The exact syntax of a database connection 
URL is specified by the Hive documentation. For example, the server principal 
is often included as a connection parameter when connecting to a secure Hive 
server.</td></tr><tr><td id="name">Hive Configuration Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">
 A file, or a comma-separated list of files, containing the Hive configuration 
(e.g., hive-site.xml). Without this, Hadoop will search the classpath for a 
'hive-site.xml' file or will revert to a default configuration. Note that to 
enable Kerberos authentication, for example, the appropriate properties must be 
set in the configuration files. Please see the Hive documentation for more 
details.</td></tr><tr><td id="name">Database User</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Database user name</td></tr><tr><td id="name">Password</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
password for the database user<br/><strong>Sensitive Property: 
true</strong></td></tr><tr><td id="name"><strong>Max Wait Time</strong></td><td 
id="default-value">500 millis</td><td id="allowable-values"></td><td 
id="description">The maximum amount of time that the pool will wait (when there 
are no available connections) for a connection 
to be returned before failing, or -1 to wait indefinitely. 
</td></tr><tr><td id="name"><strong>Max Total Connections</strong></td><td 
id="default-value">8</td><td id="allowable-values"></td><td 
id="description">The maximum number of active connections that can be allocated 
from this pool at the same time, or negative for no limit.</td></tr><tr><td 
id="name">Validation query</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Validation query used to 
validate connections before returning them. When a borrowed connection is 
invalid, it gets dropped and a new valid connection will be returned. NOTE: 
Using validation may have a performance penalty.<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos 
Principal</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties</td></tr><tr><td 
 id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos keytab associated with 
the principal. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties</td></tr></table><h3>State management: </h3>This component does 
not store state.<h3>Restricted: </h3>This component is not 
restricted.</body></html>
\ No newline at end of file
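As an example of the kind of Database Connection URL the property above describes, the following sketch builds a HiveServer2 JDBC URL with an optional server principal parameter; the helper function and the host, port, database, and principal values are all placeholders:

```python
def hive_url(host, port, database, principal=None):
    # Hypothetical helper; host, port, database, and principal are placeholders.
    url = f"jdbc:hive2://{host}:{port}/{database}"
    if principal:
        # A secure HiveServer2 commonly takes the server principal as a
        # connection parameter, as the description above notes.
        url += f";principal={principal}"
    return url

url = hive_url("hive-host", 10000, "default", "hive/hive-host@EXAMPLE.COM")
```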

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.ConvertAvroToORC/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.ConvertAvroToORC/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.ConvertAvroToORC/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.ConvertAvroToORC/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ConvertAvroToORC</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">ConvertAvroToORC</h1><h2>Description: </h2><p>Converts an Avro record 
into ORC file format. This processor provides a direct mapping of an Avro 
record to an ORC record, such that the resulting ORC file will have the same 
hierarchical structure as the Avro document. If an incoming FlowFile contains a 
stream of multiple Avro records, the resultant FlowFile will contain an ORC file 
containing all of the Avro records.  If an incoming FlowFile does not contain 
any records, an empty ORC file is the output. NOTE: Many Avro datatypes 
(e.g., collections, primitives, and unions of primitives) can be 
converted to ORC, but unions of collections and other complex datatypes may not be 
able to be converted to ORC.</p><h3>Tags: </h3><p>avro, orc, hive, 
convert</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">ORC Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file, or a comma-separated list 
of files, containing the ORC configuration (e.g., hive-site.xml). Without 
this, Hadoop will search the classpath for a 'hive-site.xml' file or will 
revert to a default configuration. Please see the ORC documentation for 
more details.</td></tr><tr><td id="name"><strong>Stripe Size</strong></td><td 
id="default-value">64 MB</td><td id="allowable-values"></td><td 
id="description">The size of the memory buffer (in bytes) for writing stripes 
to an ORC file</td></tr><tr><td id="name"><strong>Buffer Size</strong></td><td 
id="default-value">10 KB</td><td id="allowable-values"></td><td 
id="description">The maximum size of the memory buffers (in bytes) used for 
compressing and storing a stripe in memory. This is a hint to the ORC writer, 
which may choose to use a smaller buffer size based on stripe size and number 
of columns for efficient stripe writing and memory 
utilization.</td></tr><tr><td id="name"><strong>Compression 
Type</strong></td><td id="default-value">NONE</td><td 
id="allowable-values"><ul><li>NONE</li><li>ZLIB</li><li>SNAPPY</li><li>LZO</li></ul></td><td
 id="description">No Description Provided.</td></tr><tr><td id="name">Hive 
Table Name</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">An optional table name to insert into the hive.ddl 
attribute. The generated DDL can be used by a PutHiveQL processor (presumably 
after a PutHDFS processor) to create a table backed by the converted ORC file. 
If this property is not provided, the full name (including namespace) of the 
incoming Avro record will be normalized and used as the table 
name.<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship after it has been converted to ORC 
format.</td></tr><tr><td>failure</td><td>A FlowFile is routed to this 
relationship if it cannot be parsed as Avro or cannot be converted to ORC for 
any reason</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>mime.type</td><td>Sets
the MIME type to 
application/octet-stream</td></tr><tr><td>filename</td><td>Sets the filename 
to the existing filename with the extension replaced by, or appended with, 
.orc</td></tr><tr><td>record.count</td><td>Sets the number of records in the 
ORC file.</td></tr><tr><td>hive.ddl</td><td>Creates a partial Hive DDL 
statement for creating a table in Hive from this ORC file. This can be used in 
ReplaceText for setting the content to the DDL. To make it valid DDL, add 
"LOCATION '&lt;path_to_orc_file_in_hdfs&gt;'", where the path is the directory 
that contains this ORC file on HDFS. For example, ConvertAvroToORC can send 
flow files to a PutHDFS processor to send the file to HDFS, then to a 
ReplaceText to set the content to this DDL (plus the LOCATION clause as 
described), then to PutHiveQL processor to create the table if it doesn't 
exist.</td></tr></table><h3>State management: </h3>This component does not 
store state.<h3>Restricted: </h3>This component is not restricted.</body></html>
\ No newline at end of file
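The hive.ddl workflow described above (route the ConvertAvroToORC output through ReplaceText to append a LOCATION clause, then to PutHiveQL) can be sketched as follows. This is an illustration only: the helper name and the sample DDL statement are hypothetical; only the hive.ddl attribute semantics and the LOCATION clause come from the documentation.

```python
# Sketch: completing the partial DDL that ConvertAvroToORC writes into the
# hive.ddl attribute. Per the docs, appending "LOCATION '<path>'" (the HDFS
# directory containing the ORC file) makes it a valid CREATE TABLE statement.
# complete_ddl and the sample statement are illustrative, not NiFi API.
def complete_ddl(partial_ddl: str, hdfs_dir: str) -> str:
    return "{} LOCATION '{}'".format(partial_ddl, hdfs_dir)

ddl = complete_ddl(
    "CREATE EXTERNAL TABLE IF NOT EXISTS users (id INT, name STRING) STORED AS ORC",
    "/data/orc/users",
)
print(ddl)
```

In a flow, the ReplaceText processor would perform this concatenation, setting the FlowFile content to the completed statement before PutHiveQL executes it.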

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.PutHiveQL/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.PutHiveQL/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.PutHiveQL/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.PutHiveQL/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHiveQL</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHiveQL</h1><h2>Description: </h2><p>Executes a HiveQL DDL/DML command 
(e.g., UPDATE, INSERT). The content of an incoming FlowFile is expected to be 
the HiveQL command to execute. The HiveQL command may use the ? character as a 
placeholder for parameters. In this case, the parameters to use must exist as 
FlowFile attributes with the naming convention hiveql.args.N.type and 
hiveql.args.N.value, where N is a positive integer. The hiveql.args.N.type is 
expected to be a number indicating the JDBC Type. The content of the FlowFile 
is expected to be in UTF-8 format.</p><h3>Tags: </h3><p>sql, hive, put, 
database, update, insert</p><h3>Properties: </h3><p>In the list below, the 
names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>Hive Database 
Connection Pooling Service</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>HiveDBCPService<br/><strong>Implementation:</strong><br/><a 
href="../org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html">HiveConnectionPool</a></td><td
 id="description">The Hive Controller Service that is used to obtain 
connection(s) to the Hive database</td></tr><tr><td id="name"><strong>Batch 
Size</strong></td><td id="default-value">100</td><td 
id="allowable-values"></td><td id="description">The preferred number of 
FlowFiles to put to the database in a single 
transaction</td></tr><tr><td id="name"><strong>Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">Specifies the character set of the record 
data.</td></tr><tr><td id="name"><strong>Statement Delimiter</strong></td><td 
id="default-value">;</td><td id="allowable-values"></td><td 
id="description">Statement Delimiter used to separate SQL statements in a 
multiple statement script</td></tr><tr><td id="name"><strong>Rollback On 
Failure</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specify how to handle error. By default (false), if an error 
occurs while processing a FlowFile, the FlowFile will be routed to 'failure' or 
'retry' relationship based on error type, and processor can continue with next 
FlowFile. Instead, you may want to rollback currently processed FlowFiles and 
stop further processing immediately. In that case, you can do so by enabling 
this '
 Rollback On Failure' property.  If enabled, failed FlowFiles will stay in the 
input relationship without penalizing it and being processed repeatedly until 
it gets processed successfully or removed by other means. It is important to 
set adequate 'Yield Duration' to avoid retrying too 
frequently.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>retry</td><td>A
 FlowFile is routed to this relationship if the database cannot be updated but 
attempting the operation again may succeed</td></tr><tr><td>success</td><td>A 
FlowFile is routed to this relationship after the database is successfully 
updated</td></tr><tr><td>failure</td><td>A FlowFile is routed to this 
relationship if the database cannot be updated and retrying the operation will 
also fail, such as an invalid query or an integrity constraint 
violation</td></tr></table><h3>Reads Attributes: </h3><table 
id="reads-attributes"><tr><th>Name</th><th>Description</th></tr
 ><tr><td>hiveql.args.N.type</td><td>Incoming FlowFiles are expected to be 
 >parametrized HiveQL statements. The type of each Parameter is specified as an 
 >integer that represents the JDBC Type of the 
 >parameter.</td></tr><tr><td>hiveql.args.N.value</td><td>Incoming FlowFiles 
 >are expected to be parametrized HiveQL statements. The value of the 
 >Parameters are specified as hiveql.args.1.value, hiveql.args.2.value, 
 >hiveql.args.3.value, and so on. The type of the hiveql.args.1.value Parameter 
 >is specified by the hiveql.args.1.type attribute.</td></tr></table><h3>Writes 
 >Attributes: </h3>None specified.<h3>State management: </h3>This component 
 >does not store state.<h3>Restricted: </h3>This component is not 
 >restricted.<h3>See Also:</h3><p><a 
 >href="../org.apache.nifi.processors.hive.SelectHiveQL/index.html">SelectHiveQL</a></p></body></html>
\ No newline at end of file
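The hiveql.args.N.type / hiveql.args.N.value convention that PutHiveQL reads can be sketched as below. The attribute names and 1-based indexing come from the documentation; the helper function is hypothetical, and the JDBC type codes are the standard java.sql.Types constant values.

```python
# Sketch of the hiveql.args.N.* attribute convention described above.
# Only the attribute naming comes from the docs; hiveql_args is an
# illustrative helper, not part of NiFi.
JDBC_INTEGER = 4    # java.sql.Types.INTEGER
JDBC_VARCHAR = 12   # java.sql.Types.VARCHAR

def hiveql_args(params):
    """params: list of (jdbc_type, value) pairs, in ? placeholder order."""
    attrs = {}
    for n, (jdbc_type, value) in enumerate(params, start=1):  # N is 1-based
        attrs["hiveql.args.%d.type" % n] = str(jdbc_type)
        attrs["hiveql.args.%d.value" % n] = str(value)
    return attrs

# Two parameters for a statement like: INSERT INTO users VALUES (?, ?)
attrs = hiveql_args([(JDBC_INTEGER, 42), (JDBC_VARCHAR, "alice")])
print(attrs["hiveql.args.1.type"], attrs["hiveql.args.2.value"])  # → 4 alice
```

In practice these attributes would be set upstream (for example by UpdateAttribute) on the FlowFile whose content holds the parameterized HiveQL statement.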

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.PutHiveStreaming/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.PutHiveStreaming/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.PutHiveStreaming/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.PutHiveStreaming/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHiveStreaming</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHiveStreaming</h1><h2>Description: </h2><p>This processor uses Hive 
Streaming to send flow file data to an Apache Hive table. The incoming flow 
file is expected to be in Avro format and the table must exist in Hive. Please 
see the Hive documentation for requirements on the Hive table (format, 
partitions, etc.). The partition values are extracted from the Avro record 
based on the names of the partition columns as specified in the 
processor.</p><h3>Tags: </h3><p>hive, streaming, put, database, 
store</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. 
The table also indicates any default values, and whether a property supports 
the <a href="../../../../../html/expression-language-guide.html">NiFi 
Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Hive Metastore URI</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
URI location for the Hive Metastore. Note that this is not the location of the 
Hive Server. The default port for the Hive Metastore is 9083.</td></tr><tr><td 
id="name">Hive Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file, or a comma-separated 
list of files, which contains the Hive configuration (e.g., hive-site.xml). 
Without this, Hadoop will search the classpath for a 'hive-site.xml' file or 
will revert to a default configuration. Note that to enable Kerberos 
authentication, for example, the appropriate 
properties must be set in the configuration files. Please see the Hive 
documentation for more details.</td></tr><tr><td id="name"><strong>Database 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the database in 
which to put the data.</td></tr><tr><td id="name"><strong>Table 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the database table 
in which to put the data.</td></tr><tr><td id="name">Partition Columns</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-delimited list of column names on which the table has been partitioned. 
The order of values in this list must correspond exactly to the order of 
partition columns specified during the table creation.</td></tr><tr><td 
id="name"><strong>Auto-Create Partitions</strong></td><td id="default-va
 lue">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Flag indicating whether partitions should be automatically 
created</td></tr><tr><td id="name"><strong>Max Open 
Connections</strong></td><td id="default-value">8</td><td 
id="allowable-values"></td><td id="description">The maximum number of open 
connections that can be allocated from this pool at the same time, or negative 
for no limit.</td></tr><tr><td id="name"><strong>Heartbeat 
Interval</strong></td><td id="default-value">60</td><td 
id="allowable-values"></td><td id="description">Indicates that a heartbeat 
should be sent when the specified number of seconds has elapsed. A value of 0 
indicates that no heartbeat should be sent.</td></tr><tr><td 
id="name"><strong>Transactions per Batch</strong></td><td 
id="default-value">100</td><td id="allowable-values"></td><td 
id="description">A hint to Hive Streaming indicating how many transactions the 
processor task will need. This value must be
  greater than 1.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Records per 
Transaction</strong></td><td id="default-value">10000</td><td 
id="allowable-values"></td><td id="description">Number of records to process 
before committing the transaction. This value must be greater than 
1.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name"><strong>Rollback On Failure</strong></td><td 
id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specify how to handle error. By default (false), if an error 
occurs while processing a FlowFile, the FlowFile will be routed to 'failure' or 
'retry' relationship based on error type, and processor can continue with next 
FlowFile. Instead, you may want to rollback currently processed FlowFiles and 
stop further processing immediately. In that case, you can do so by enabling 
this 'Rollback On Failure' property.  If enabled
 , failed FlowFiles will stay in the input relationship without penalizing it 
and being processed repeatedly until it gets processed successfully or removed 
by other means. It is important to set adequate 'Yield Duration' to avoid 
retrying too frequently.NOTE: When an error occurred after a Hive streaming 
transaction which is derived from the same input FlowFile is already committed, 
(i.e. a FlowFile contains more records than 'Records per Transaction' and a 
failure occurred at the 2nd transaction or later) then the succeeded records 
will be transferred to 'success' relationship while the original input FlowFile 
stays in incoming queue. Duplicated records can be created for the succeeded 
ones when the same FlowFile is processed again.</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos principal to 
authenticate as. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties</td></tr><tr><td 
 id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos keytab associated with 
the principal. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>retry</td><td>The
incoming FlowFile is routed to this relationship if its records cannot be 
transmitted to Hive. Note that some records may have been processed 
successfully; those will be routed (as Avro flow files) to the success 
relationship. The combination of the retry, success, and failure relationships 
indicates how many records succeeded and/or failed. This can be used to 
provide a retry capability, since full rollback is not 
possible.</td></tr><tr><td>success</td><td>A FlowFile containing Avro records 
is routed to this relationship after the records have been successfully 
transmitted to Hive.</td></tr><tr><td>failure</td><td>A FlowFile containing 
Avro records is routed to this relationship if the records could not be 
transmitted to Hive.</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes 
Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>hivestreaming.record.count</td><td>This
 attribute is written on the flow files routed to the 'success' and 'failure' 
relationships, and contains the number of records from the incoming flow file 
written successfully and unsuccessfully, 
respectively.</td></tr></table><h3>State management: </h3>This component does 
not store state.<h3>Restricted: </h3>This component is not 
restricted.</body></html>
\ No newline at end of file
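As a rough sketch of how the batching properties above interact (an illustration under stated assumptions, not the processor's actual implementation): records from a FlowFile are committed in chunks of 'Records per Transaction', so the number of Hive Streaming transactions one FlowFile consumes can be estimated as:

```python
import math

# Illustrative arithmetic only: estimates how many Hive Streaming transactions
# a FlowFile needs given the 'Records per Transaction' property (default 10000
# per the table above). The ceiling formula is an assumption for clarity.
def transactions_needed(record_count: int, records_per_transaction: int = 10000) -> int:
    return math.ceil(record_count / records_per_transaction)

# A 25,000-record FlowFile with the default setting spans 3 transactions; a
# failure in the 2nd or 3rd can leave the 1st committed, which is the
# partial-success case the 'Rollback On Failure' NOTE describes.
print(transactions_needed(25000))  # → 3
```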

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.SelectHiveQL/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.SelectHiveQL/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.SelectHiveQL/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.2.0/org.apache.nifi.processors.hive.SelectHiveQL/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>SelectHiveQL</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">SelectHiveQL</h1><h2>Description: </h2><p>Execute provided HiveQL SELECT 
query against a Hive database connection. Query result will be converted to 
Avro or CSV format. Streaming is used so arbitrarily large result sets are 
supported. This processor can be scheduled to run on a timer, or cron 
expression, using the standard scheduling methods, or it can be triggered by an 
incoming FlowFile. If it is triggered by an incoming FlowFile, then attributes 
of that FlowFile will be available when evaluating the select query. FlowFile 
attribute 'selecthiveql.row.count' indicates how many rows were selected.<
 /p><h3>Tags: </h3><p>hive, sql, select, jdbc, query, 
database</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Hive Database Connection Pooling Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>HiveDBCPService<br/><strong>Implementation:</strong><br/><a 
href="../org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html">HiveConnectionPool</a></td><td
 id="description">The Hive Controller Service that is used to obtain 
connection(s) to the Hive database</td></tr><tr><td id="name">HiveQL Select 
Query</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">HiveQL SELECT query to execute<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>Output 
Format</strong></td><td id="default-value">Avro</td><td 
id="allowable-values"><ul><li>Avro</li><li>CSV</li></ul></td><td 
id="description">How to represent the records coming from Hive (Avro, CSV, 
e.g.)</td></tr><tr><td id="name"><strong>CSV Header</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Include Header in Output</td></tr><tr><td id="name">Alternate 
CSV Header</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Comma separated list of header fields<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name"><strong>CSV 
Delimiter</strong></td><td id="default-value">,</td><td 
id="allowable-values"></td><td id="description
 ">CSV Delimiter used to separate fields<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>CSV 
Quote</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether to force quoting of CSV fields. Note that this might 
conflict with the setting for CSV Escape.</td></tr><tr><td 
id="name"><strong>CSV Escape</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether to escape CSV strings in output. Note that this might 
conflict with the setting for CSV Quote.</td></tr><tr><td 
id="name"><strong>Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">Specifies the character set of the record 
data.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Successful
 ly created FlowFile from HiveQL query result 
set.</td></tr><tr><td>failure</td><td>HiveQL query execution failed. Incoming 
FlowFile will be penalized and routed to this 
relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>mime.type</td><td>Sets
 the MIME type for the outgoing flowfile to application/avro-binary for Avro or 
text/csv for CSV.</td></tr><tr><td>filename</td><td>Adds .avro or .csv to the 
filename attribute depending on which output format is 
selected.</td></tr><tr><td>selecthiveql.row.count</td><td>Indicates how many 
rows were selected/returned by the query.</td></tr></table><h3>State 
management: </h3>This component does not store state.<h3>Restricted: </h3>This 
component is not restricted.</body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hl7-nar/1.2.0/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hl7-nar/1.2.0/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html?rev=1794596&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hl7-nar/1.2.0/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hl7-nar/1.2.0/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html
 Tue May  9 15:27:39 2017
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ExtractHL7Attributes</title><link 
rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">ExtractHL7Attributes</h1><h2>Description: </h2><p>Extracts information 
from an HL7 (Health Level 7) formatted FlowFile and adds the information as 
FlowFile Attributes. The attributes are named as &lt;Segment Name&gt; 
&lt;dot&gt; &lt;Field Index&gt;. If the segment is repeating, the naming will 
be &lt;Segment Name&gt; &lt;underscore&gt; &lt;Segment Index&gt; &lt;dot&gt; 
&lt;Field Index&gt;. For example, we may have an attribute named "MSH.12" with 
a value of "2.1" and an attribute named "OBX_11.3" with a value of 
"93000^CPT4".</p><h3>Tags: </h3><p>HL7, health level 7, healthcare, extract, 
attributes</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Character Encoding</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">The Character Encoding that is used to encode the HL7 
data<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name"><strong>Use Segment Names</strong></td><td 
id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether or not to use HL7 segment names in 
attributes</td></tr><tr><td id="name"><strong>Parse Segment 
Fields</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether or not to parse HL7 segment fields into 
attributes</td></tr><tr><td id="name"><strong>Skip Validation</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether or not to validate HL7 message values</td></tr><tr><td 
id="name"><strong>HL7 Input Version</strong></td><td 
id="default-value">autodetect</td><td 
id="allowable-values"><ul><li>autodetect</li><li>2.2</li><li>2.3</li><li>2.3.1</li><li>2.4</li><li>2.5</li><li>2.5.1</li><li>2.6</li></ul></td><td
 id="description">The HL7 version to use for parsing and 
validation</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship if it is properly parsed as HL7 and 
its attributes extracted</td></tr><tr><td>failure</td><td>A FlowFile is routed 
to this 
relationship if it cannot be mapped to FlowFile Attributes. This would happen 
if the FlowFile does not contain valid HL7 data</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3>None 
specified.<h3>State management: </h3>This component does not store 
state.<h3>Restricted: </h3>This component is not restricted.</body></html>
\ No newline at end of file
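The attribute-naming rule ExtractHL7Attributes describes above ("&lt;Segment&gt;.&lt;Field&gt;" for non-repeating segments, "&lt;Segment&gt;_&lt;SegmentIndex&gt;.&lt;Field&gt;" for repeating ones) can be sketched as below; the helper function is hypothetical, and only the naming pattern comes from the documentation.

```python
# Sketch of the ExtractHL7Attributes naming rule described above.
# hl7_attr_name is an illustrative helper, not NiFi code.
def hl7_attr_name(segment: str, field_index: int, segment_index=None) -> str:
    if segment_index is None:                       # non-repeating segment
        return "%s.%d" % (segment, field_index)
    return "%s_%d.%d" % (segment, segment_index, field_index)  # repeating

# Matches the repeating-segment example from the description:
print(hl7_attr_name("OBX", 3, segment_index=11))  # → OBX_11.3
```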

