Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHDFSSequenceFile</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">GetHDFSSequenceFile</h1><h2>Description: </h2><p>Fetch sequence files 
from Hadoop Distributed File System (HDFS) into FlowFiles</p><h3>Tags: 
</h3><p>hadoop, HDFS, get, fetch, ingest, source, sequence 
file</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>De
 fault Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration. To use swebhdfs, see 'Additional 
Details' section of PutHDFS's documentation.<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Kerberos Principal</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos 
Keytab</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos keytab
  associated with the principal. Requires nifi.kerberos.krb5.file to be set in 
your nifi.properties<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kerberos Relogin Period</td><td 
id="default-value">4 hours</td><td id="allowable-values"></td><td 
id="description">Period of time which should pass before attempting a Kerberos 
relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Additional Classpath Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of paths to files and/or directories that will be added to 
the classpath. When specifying a directory, all files within the directory 
will be added to the classpath, but further sub-directories will not be 
included.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
HDFS directory from which files should be read<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>Recurse 
Subdirectories</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to 
 pull files from subdirectories of the HDFS directory</td></tr><tr><td 
id="name"><strong>Keep Source File</strong></td><td 
id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Determines whether to delete the file from HDFS after it has 
been successfully transferred. If true, the file will be fetched repeatedly. 
This is intended for testing only.</td></tr><tr><td id="name">File Filter 
Regex</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A Java Regular Expression for filtering Filenames; if a filter 
is supplied then only files whose names match that Regular Expression will be 
fetched, otherwise all files will be fetched</td></tr><tr><td 
id="name"><strong>Filter Match Name Only</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true then File Filter Regex will match on just the 
filename, otherwise subdi
 rectory names will be included with filename in the regex 
comparison</td></tr><tr><td id="name"><strong>Ignore Dotted 
Files</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true, files whose names begin with a dot (".") will be 
ignored</td></tr><tr><td id="name"><strong>Minimum File Age</strong></td><td 
id="default-value">0 sec</td><td id="allowable-values"></td><td 
id="description">The minimum age that a file must be in order to be pulled; any 
file younger than this amount of time (based on last modification date) will be 
ignored</td></tr><tr><td id="name">Maximum File Age</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
maximum age that a file must be in order to be pulled; any file older than this 
amount of time (based on last modification date) will be 
ignored</td></tr><tr><td id="name"><strong>Polling Interval</strong></td><td 
id="default-value">0 sec</t
 d><td id="allowable-values"></td><td id="description">Indicates how long to 
wait between performing directory listings</td></tr><tr><td 
id="name"><strong>Batch Size</strong></td><td id="default-value">100</td><td 
id="allowable-values"></td><td id="description">The maximum number of files to 
pull in each iteration, based on run schedule.</td></tr><tr><td id="name">IO 
Buffer Size</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Amount of memory to use to buffer file contents during IO. 
This overrides the Hadoop Configuration</td></tr><tr><td 
id="name"><strong>Compression codec</strong></td><td 
id="default-value">NONE</td><td id="allowable-values"><ul><li>NONE <img 
src="../../../../../html/images/iconInfo.png" alt="No compression" title="No 
compression"></img></li><li>DEFAULT <img 
src="../../../../../html/images/iconInfo.png" alt="Default ZLIB compression" 
title="Default ZLIB compression"></img></li><li>BZIP <img 
src="../../../../../html/images/iconIn
 fo.png" alt="BZIP compression" title="BZIP compression"></img></li><li>GZIP 
<img src="../../../../../html/images/iconInfo.png" alt="GZIP compression" 
title="GZIP compression"></img></li><li>LZ4 <img 
src="../../../../../html/images/iconInfo.png" alt="LZ4 compression" title="LZ4 
compression"></img></li><li>LZO <img 
src="../../../../../html/images/iconInfo.png" alt="LZO compression - it assumes 
LD_LIBRARY_PATH has been set and jar is available" title="LZO compression - it 
assumes LD_LIBRARY_PATH has been set and jar is 
available"></img></li><li>SNAPPY <img 
src="../../../../../html/images/iconInfo.png" alt="Snappy compression" 
title="Snappy compression"></img></li><li>AUTOMATIC <img 
src="../../../../../html/images/iconInfo.png" alt="Will attempt to 
automatically detect the compression codec." title="Will attempt to 
automatically detect the compression codec."></img></li></ul></td><td 
id="description">No Description Provided.</td></tr><tr><td 
id="name"><strong>FlowFile Content</strong></
 td><td id="default-value">VALUE ONLY</td><td 
id="allowable-values"><ul><li>VALUE ONLY</li><li>KEY VALUE 
PAIR</li></ul></td><td id="description">Indicates whether the content is to be both 
the key and value of the Sequence File, or just the 
value.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 files retrieved from HDFS are transferred to this 
relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the relative path of the file's directory on HDFS. For example, 
if the Directory property is set to /tmp, then files picked up from /tmp will 
have the path attribute set to "./". If the Recurse Subdirectories property is 
set to true and a file is picked up from /tmp/abc/1/2/3,
  then the path attribute will be set to 
"abc/1/2/3".</td></tr></table><h3>State management: </h3>This component does 
not store state.<h3>Restricted: </h3>Provides an operator the ability to retrieve 
and delete any file that NiFi has access to in HDFS or the local 
filesystem.<h3>Input requirement: </h3>This component does not allow an 
incoming relationship.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file

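The GetHDFSSequenceFile properties above pair "File Filter Regex" with "Filter Match Name Only" to control whether the regex sees only the filename or the subdirectory path as well. A minimal sketch of that interplay (the helper name and exact matching semantics are ours, not NiFi's implementation):

```python
import re

def file_filter_matches(path, pattern, name_only=True):
    """Hypothetical sketch of the 'File Filter Regex' /
    'Filter Match Name Only' behaviour described above.
    When name_only is True, the regex is tested against the bare
    filename; otherwise the subdirectory path is included in the
    comparison."""
    target = path.rsplit("/", 1)[-1] if name_only else path
    return re.fullmatch(pattern, target) is not None
```

With name-only matching, a pattern that mentions a subdirectory never matches; including the path in the comparison makes it match.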
Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ListHDFS</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">ListHDFS</h1><h2>Description: </h2><p>Retrieves a listing of files from 
HDFS. Each time a listing is performed, the files with the latest timestamp 
will be excluded and picked up during the next execution of the processor. This 
is done to ensure that we do not miss any files, or produce duplicates, in the 
cases where files with the same timestamp are written immediately before and 
after a single execution of the processor. For each file that is listed in 
HDFS, this processor creates a FlowFile that represents the HDFS file to be 
fetched in conjunction with FetchHDFS. This Processor is designed to run o
 n Primary Node only in a cluster. If the primary node changes, the new Primary 
Node will pick up where the previous node left off without duplicating all of 
the data. Unlike GetHDFS, this Processor does not delete any data from 
HDFS.</p><h3>Tags: </h3><p>hadoop, HDFS, get, list, ingest, source, 
filesystem</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Witho
 ut this, Hadoop will search the classpath for a 'core-site.xml' and 
'hdfs-site.xml' file or will revert to a default configuration. To use 
swebhdfs, see 'Additional Details' section of PutHDFS's 
documentation.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kerberos Principal</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos 
Keytab</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos keytab associated with the principal. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos Relogin 
Period</td><td id="default-value">4 hours</td><td 
id="allowable-values"></td><td id="description">
 Period of time which should pass before attempting a Kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Additional Classpath Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of paths to files and/or directories that will be added to 
the classpath. When specifying a directory, all files within the directory 
will be added to the classpath, but further sub-directories will not be 
included.</td></tr><tr><td id="name">Distributed Cache Service</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>DistributedMapCacheClient<br/><strong>Implementations: 
</strong><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html">HBase_1_1_2_ClientMapCacheService</a><br/><a
 href="../../../nifi-distributed-cache-services-nar/1.5.0/org.ap
 
ache.nifi.distributed.cache.client.DistributedMapCacheClientService/index.html">DistributedMapCacheClientService</a><br/><a
 
href="../../../nifi-redis-nar/1.5.0/org.apache.nifi.redis.service.RedisDistributedMapCacheClientService/index.html">RedisDistributedMapCacheClientService</a></td><td
 id="description">Specifies the Controller Service that should be used to 
maintain state about what has been pulled from HDFS so that if a new node 
begins pulling data, it won't duplicate all of the work that has been 
done.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
HDFS directory from which files should be read<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>Recurse 
Subdirectories</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to list files from subdirector
 ies of the HDFS directory</td></tr><tr><td id="name"><strong>File 
Filter</strong></td><td id="default-value">[^\.].*</td><td 
id="allowable-values"></td><td id="description">Only files whose names match 
the given regular expression will be picked up</td></tr><tr><td 
id="name">Minimum File Age</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The minimum age that a file 
must be in order to be pulled; any file younger than this amount of time (based 
on last modification date) will be ignored</td></tr><tr><td id="name">Maximum 
File Age</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">The maximum age that a file must be in order to be pulled; any 
file older than this amount of time (based on last modification date) will be 
ignored. Minimum value is 100ms.</td></tr></table><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 FlowFiles are transferred to t
 his relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the absolute path of the file's directory on HDFS. For example, 
if the Directory property is set to /tmp, then files picked up from /tmp will 
have the path attribute set to "/tmp". If the Recurse Subdirectories property is 
set to true and a file is picked up from /tmp/abc/1/2/3, then the path 
attribute will be set to 
"/tmp/abc/1/2/3".</td></tr><tr><td>hdfs.owner</td><td>The user that owns the 
file in HDFS</td></tr><tr><td>hdfs.group</td><td>The group that owns the file 
in HDFS</td></tr><tr><td>hdfs.lastModified</td><td>The timestamp of when the 
file in HDFS was last modified, as milliseconds since midnight Jan 1, 1970 
UTC</td></tr><tr><td>hdfs.length</td><td>The number of bytes in the file in H
 DFS</td></tr><tr><td>hdfs.replication</td><td>The number of HDFS replicas for 
the file</td></tr><tr><td>hdfs.permissions</td><td>The permissions for the file 
in HDFS. This is formatted as 3 characters for the owner, 3 for the group, and 
3 for other users. For example rw-rw-r--</td></tr></table><h3>State management: 
</h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>After
 performing a listing of HDFS files, the latest timestamp of all the files 
listed and the latest timestamp of all the files transferred are both stored. 
This allows the Processor to list only files that have been added or modified 
after this date the next time that the Processor is run, without having to 
store all of the actual filenames/paths which could lead to performance 
problems. State is stored across the cluster so that this Processor can be run 
on Primary Node only and if a new Primary Node is selected, the new node can 
pick up where the previous node left off, withou
 t duplicating the data.</td></tr></table><h3>Restricted: </h3>This component 
is not restricted.<h3>Input requirement: </h3>This component does not allow an 
incoming relationship.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.FetchHDFS/index.html">FetchHDFS</a>, 
<a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file

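The ListHDFS description above explains that files carrying the newest timestamp in a listing are deliberately held back so that a file written with that same timestamp just after the listing cannot be missed. A simplified sketch of that strategy (names and single-timestamp state are ours; NiFi's actual implementation, per its State management section, tracks both the latest listed and latest emitted timestamps):

```python
def list_new_files(entries, last_seen_ts):
    """Sketch of the timestamp hold-back strategy described above.
    entries is a list of (path, modification_ts) pairs; files newer
    than the stored timestamp are emitted, except those sharing the
    newest timestamp seen, which wait for a later run."""
    candidates = [(p, t) for p, t in entries if t > last_seen_ts]
    if not candidates:
        return [], last_seen_ts
    newest = max(t for _, t in candidates)
    emitted = [(p, t) for p, t in candidates if t < newest]
    new_state = max((t for _, t in emitted), default=last_seen_ts)
    return emitted, new_state
```

Because the returned state only advances past timestamps that were actually emitted, a late writer with the held-back timestamp is still picked up later rather than silently skipped.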
Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.MoveHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.MoveHDFS/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.MoveHDFS/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.MoveHDFS/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>MoveHDFS</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">MoveHDFS</h1><h2>Description: </h2><p>Rename existing files or a 
directory of files (non-recursive) on Hadoop Distributed File System 
(HDFS).</p><h3>Tags: </h3><p>hadoop, HDFS, put, move, filesystem, restricted, 
moveHDFS</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><
 th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration. To use swebhdfs, see 'Additional 
Details' section of PutHDFS's documentation.<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Kerberos Principal</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos 
Keytab</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos k
 eytab associated with the principal. Requires nifi.kerberos.krb5.file to be 
set in your nifi.properties<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kerberos Relogin Period</td><td 
id="default-value">4 hours</td><td id="allowable-values"></td><td 
id="description">Period of time which should pass before attempting a Kerberos 
relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Additional Classpath Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of paths to files and/or directories that will be added to 
the classpath. When specifying a directory, all files within the directory 
will be added to the classpath, but further sub-directories will not be 
included.</td></tr><tr><td id="name"><strong>Conflict Resolution 
Strategy</strong></td><td id="default-value">fail</td><td 
id="allowable-values"><ul><li>replace <img 
src="../../../../../html/images/iconInfo.png" alt="Replaces the existing file 
if any." title="Replaces the existing file if any."></img></li><li>ignore <img 
src="../../../../../html/images/iconInfo.png" alt="Failed rename operation 
stops processing and routes to success." title="Failed rename operati
 on stops processing and routes to success."></img></li><li>fail <img 
src="../../../../../html/images/iconInfo.png" alt="Failing to rename a file 
routes to failure." title="Failing to rename a file routes to 
failure."></img></li></ul></td><td id="description">Indicates what should 
happen when a file with the same name already exists in the output 
directory</td></tr><tr><td id="name">Input Directory or File</td><td 
id="default-value">${path}</td><td id="allowable-values"></td><td 
id="description">The HDFS directory from which files should be read, or a 
single file to read.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Output 
Directory</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The HDFS directory where the 
files will be moved to<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>HDFS 
Operation</strong></td><td id="default-value">move</td><td id="allowab
 le-values"><ul><li>move</li><li>copy</li></ul></td><td id="description">The 
operation that will be performed on the source file</td></tr><tr><td 
id="name">File Filter Regex</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A Java Regular Expression for 
filtering Filenames; if a filter is supplied then only files whose names match 
that Regular Expression will be fetched, otherwise all files will be 
fetched</td></tr><tr><td id="name"><strong>Ignore Dotted Files</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true, files whose names begin with a dot (".") will be 
ignored</td></tr><tr><td id="name">Remote Owner</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Changes the owner of the HDFS file to this value after it is 
written. This only works if NiFi is running as a user that has HDFS super user 
privilege to change owner</td></tr><
 tr><td id="name">Remote Group</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Changes the group of the HDFS 
file to this value after it is written. This only works if NiFi is running as a 
user that has HDFS super user privilege to change 
group</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Files
 that have been successfully renamed on HDFS are transferred to this 
relationship</td></tr><tr><td>failure</td><td>Files that could not be renamed 
on HDFS are transferred to this relationship</td></tr></table><h3>Reads 
Attributes: </h3><table 
id="reads-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS comes from the value of this 
attribute.</td></tr></table><h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written
  to HDFS is stored in this 
attribute.</td></tr><tr><td>absolute.hdfs.path</td><td>The absolute path to the 
file on HDFS is stored in this attribute.</td></tr></table><h3>State 
management: </h3>This component does not store state.<h3>Restricted: </h3>This 
component is not restricted.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a></p></body></html>
\ No newline at end of file

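The MoveHDFS table above enumerates three "Conflict Resolution Strategy" values — replace, ignore, and fail — each with a distinct outcome when the output directory already holds a file of the same name. A small sketch of that decision (the return labels are ours, chosen for illustration):

```python
def resolve_conflict(strategy, destination_exists):
    """Sketch of the three Conflict Resolution Strategy values
    described above: what a MoveHDFS-style rename does when the
    target name may already be taken."""
    if not destination_exists:
        return "rename"
    return {
        "replace": "overwrite",        # replace the existing file
        "ignore": "route-to-success",  # skip the rename, still succeed
        "fail": "route-to-failure",    # default: route the FlowFile to failure
    }[strategy]
```

The default of "fail" is the conservative choice: a name collision surfaces as a failure rather than silently overwriting or skipping data.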
Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.PutHDFS/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.PutHDFS/additionalDetails.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.PutHDFS/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.PutHDFS/additionalDetails.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1,101 @@
+<!DOCTYPE html>
+<html lang="en">
+<!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+
+<head>
+  <meta charset="utf-8" />
+  <title>PutHDFS</title>
+  <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css" />
+</head>
+
+<body>
+  <!-- Processor Documentation 
================================================== -->
+  <h2>SSL Configuration:</h2>
+  <p>
+    Hadoop provides the ability to configure keystore and/or truststore 
properties. If you want to use an SSL-secured file system such as swebhdfs, you 
can use the Hadoop configurations instead of an SSL Context Service.
+    <ol>
+      <li>create 'ssl-client.xml' to configure the truststores.</li>
+      <p>ssl-client.xml Properties:</p>
+      <table>
+        <tr>
+          <th>Property</th>
+          <th>Default Value</th>
+          <th>Explanation</th>
+        </tr>
+        <tr>
+          <td>ssl.client.truststore.type</td>
+          <td>jks</td>
+          <td>Truststore file type</td>
+        </tr>
+        <tr>
+          <td>ssl.client.truststore.location</td>
+          <td>NONE</td>
+          <td>Truststore file location</td>
+        </tr>
+        <tr>
+          <td>ssl.client.truststore.password</td>
+          <td>NONE</td>
+          <td>Truststore file password</td>
+        </tr>
+        <tr>
+          <td>ssl.client.truststore.reload.interval</td>
+          <td>10000</td>
+          <td>Truststore reload interval, in milliseconds</td>
+        </tr>
+      </table>
+
+      <p>ssl-client.xml Example:</p>
+      <pre>
+&lt;configuration&gt;
+  &lt;property&gt;
+    &lt;name&gt;ssl.client.truststore.type&lt;/name&gt;
+    &lt;value&gt;jks&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;ssl.client.truststore.location&lt;/name&gt;
+    &lt;value&gt;/path/to/truststore.jks&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;ssl.client.truststore.password&lt;/name&gt;
+    &lt;value&gt;clientfoo&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;ssl.client.truststore.reload.interval&lt;/name&gt;
+    &lt;value&gt;10000&lt;/value&gt;
+  &lt;/property&gt;
+&lt;/configuration&gt;
+                    </pre>
+
+      <li>put 'ssl-client.xml' in a location on the classpath, such as the 
NiFi configuration directory.</li>
+
+      <li>set <i>hadoop.ssl.client.conf</i> to the name of 'ssl-client.xml' in 
the 'core-site.xml' used by the HDFS processors.</li>
+      <pre>
+&lt;configuration&gt;
+    &lt;property&gt;
+      &lt;name&gt;fs.defaultFS&lt;/name&gt;
+      &lt;value&gt;swebhdfs://{namenode.hostname:port}&lt;/value&gt;
+    &lt;/property&gt;
+    &lt;property&gt;
+      &lt;name&gt;hadoop.ssl.client.conf&lt;/name&gt;
+      &lt;value&gt;ssl-client.xml&lt;/value&gt;
+    &lt;/property&gt;
+&lt;/configuration&gt;
+                  </pre>
+    </ol>
+  </p>
+</body>
+
+</html>

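The additionalDetails page above shows a Hadoop-style configuration file: a `<configuration>` root with repeated `<property>` elements, each holding a `<name>` and a `<value>`. A small sketch that emits such a file programmatically (the helper name is ours, not part of NiFi or Hadoop):

```python
import xml.etree.ElementTree as ET

def write_hadoop_conf(path, props):
    """Write a Hadoop-style configuration file, such as the
    ssl-client.xml example above, from a mapping of property
    names to values."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = str(value)
    ET.ElementTree(root).write(path, encoding="utf-8")
```

Generating the file this way avoids hand-editing mistakes like an unclosed `</configuration>` tag, since the serializer always balances the elements.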
Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHDFS</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHDFS</h1><h2>Description: </h2><p>Write FlowFile data to Hadoop 
Distributed File System (HDFS)</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>hadoop, HDFS, put, copy, filesystem, restricted</p><h3>Properties: 
</h3><p>In the list below, the names of required properties appear in 
<strong>bold</strong>. Any other properties (not in bold) are considered 
optional. The table also indicates any default values, and whether a property 
supports the <a href="../../../../../html/expression-language-guide.html">NiFi 
Expression Language</a>.</p><table id="properties"><tr><th>
 Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name">Hadoop Configuration 
Resources</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A file or comma separated list of files which contains the 
Hadoop file system configuration. Without this, Hadoop will search the 
classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a 
default configuration. To use swebhdfs, see 'Additional Details' section of 
PutHDFS's documentation.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kerberos Principal</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos 
Keytab</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">
 Kerberos keytab associated with the principal. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos Relogin 
Period</td><td id="default-value">4 hours</td><td 
id="allowable-values"></td><td id="description">Period of time which should 
pass before attempting a Kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Additional Classpath Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of paths to files and/or directories that will be added to 
the classpath. When specifying a directory, all files within the directory 
will be added to the classpath, but further sub-directories will not be 
included.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
parent HDFS directory to which files should be written. The directory will be 
created if it doesn't exist.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Conflict Resolution 
Strategy</strong></td><td id="default-value">fail</td><td 
id="allowable-values"><ul><li>replace <
 img src="../../../../../html/images/iconInfo.png" alt="Replaces the existing 
file if any." title="Replaces the existing file if any."></img></li><li>ignore 
<img src="../../../../../html/images/iconInfo.png" alt="Ignores the flow file 
and routes it to success." title="Ignores the flow file and routes it to 
success."></img></li><li>fail <img 
src="../../../../../html/images/iconInfo.png" alt="Penalizes the flow file and 
routes it to failure." title="Penalizes the flow file and routes it to 
failure."></img></li><li>append <img 
src="../../../../../html/images/iconInfo.png" alt="Appends to the existing file 
if any, creates a new file otherwise." title="Appends to the existing file if 
any, creates a new file otherwise."></img></li></ul></td><td 
id="description">Indicates what should happen when a file with the same name 
already exists in the output directory</td></tr><tr><td id="name">Block 
Size</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Size of eac
 h block as written to HDFS. This overrides the Hadoop 
Configuration</td></tr><tr><td id="name">IO Buffer Size</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Amount of memory to use to buffer file contents during IO. 
This overrides the Hadoop Configuration</td></tr><tr><td 
id="name">Replication</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Number of times that HDFS will 
replicate each file. This overrides the Hadoop Configuration</td></tr><tr><td 
id="name">Permissions umask</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A umask represented as an octal 
number which determines the permissions of files written to HDFS. This 
overrides the Hadoop Configuration dfs.umaskmode</td></tr><tr><td 
id="name">Remote Owner</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Changes the owner of the HDFS 
file to this value after it is written. This only work
 s if NiFi is running as a user that has HDFS super user privilege to change 
owner<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Remote Group</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Changes the group of the HDFS 
file to this value after it is written. This only works if NiFi is running as a 
user that has HDFS super user privilege to change group<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td 
id="name"><strong>Compression codec</strong></td><td 
id="default-value">NONE</td><td id="allowable-values"><ul><li>NONE <img 
src="../../../../../html/images/iconInfo.png" alt="No compression" title="No 
compression"></img></li><li>DEFAULT <img 
src="../../../../../html/images/iconInfo.png" alt="Default ZLIB compression" 
title="Default ZLIB compression"></img></li><li>BZIP <img 
src="../../../../../html/images/iconInfo.png" alt="BZIP compression" 
title="BZIP compression"></img></li><li>GZIP <img
  src="../../../../../html/images/iconInfo.png" alt="GZIP compression" 
title="GZIP compression"></img></li><li>LZ4 <img 
src="../../../../../html/images/iconInfo.png" alt="LZ4 compression" title="LZ4 
compression"></img></li><li>LZO <img 
src="../../../../../html/images/iconInfo.png" alt="LZO compression - it assumes 
LD_LIBRARY_PATH has been set and jar is available" title="LZO compression - it 
assumes LD_LIBRARY_PATH has been set and jar is 
available"></img></li><li>SNAPPY <img 
src="../../../../../html/images/iconInfo.png" alt="Snappy compression" 
title="Snappy compression"></img></li><li>AUTOMATIC <img 
src="../../../../../html/images/iconInfo.png" alt="Will attempt to 
automatically detect the compression codec." title="Will attempt to 
automatically detect the compression codec."></img></li></ul></td><td 
id="description">No Description Provided.</td></tr></table><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Files
 tha
 t have been successfully written to HDFS are transferred to this 
relationship</td></tr><tr><td>failure</td><td>Files that could not be written 
to HDFS for some reason are transferred to this 
relationship</td></tr></table><h3>Reads Attributes: </h3><table 
id="reads-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS comes from the value of this 
attribute.</td></tr></table><h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS is stored in this 
attribute.</td></tr><tr><td>absolute.hdfs.path</td><td>The absolute path to the 
file on HDFS is stored in this attribute.</td></tr></table><h3>State 
management: </h3>This component does not store state.<h3>Restricted: 
</h3>Provides an operator the ability to write to any file that NiFi has access to 
in HDFS or the local filesystem.<h3>Input requirement: </h3>This component 
requir
 es an incoming relationship.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a></p></body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.5.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHDFSEvents</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">GetHDFSEvents</h1><h2>Description: </h2><p>This processor polls the 
notification events provided by the HdfsAdmin API. Since this uses the 
HdfsAdmin APIs it is required to run as an HDFS super user. Currently there are 
six types of events (append, close, create, metadata, rename, and unlink). 
Please see org.apache.hadoop.hdfs.inotify.Event documentation for full 
explanations of each event. This processor will poll for new events based on a 
defined duration. For each event received a new flow file will be created with 
the expected attributes and the event itself serialized to JSON and written to 
th
 e flow file's content. For example, if event.type is APPEND then the content 
of the flow file will contain a JSON file containing the information about the 
append event. If successful the flow files are sent to the 'success' 
relationship. Be careful of where the generated flow files are stored. If the 
flow files are stored in one of the processor's watch directories, there will be a 
never-ending flow of events. It is also important to be aware that this 
processor must consume all events. The filtering must happen within the 
processor. This is because the HDFS admin's event notifications API does not 
have filtering.</p><h3>Tags: </h3><p>hadoop, events, inotify, notifications, 
filesystem</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide
 .html">NiFi Expression Language</a>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name">Hadoop Configuration 
Resources</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A file or comma separated list of files which contains the 
Hadoop file system configuration. Without this, Hadoop will search the 
classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a 
default configuration. To use swebhdfs, see 'Additional Details' section of 
PutHDFS's documentation.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kerberos Principal</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos 
Keytab</td><td id="d
 efault-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos keytab associated with the principal. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos Relogin 
Period</td><td id="default-value">4 hours</td><td 
id="allowable-values"></td><td id="description">Period of time which should 
pass before attempting a Kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Additional Classpath Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of paths to files and/or directories that will be added to 
the classpath. When specifying a directory, all files within the directory 
will be added to the classpath, but further sub-directories will not be 
included.</td></tr><tr><td id="name"><strong>Poll Duration</strong></td><td 
id="default-value">1 second</td><td id="allowable-values"></td><td 
id="description">The time before the polling method returns with the next batch 
of events if they exist. It may exceed this amount of time by up to the time 
required for an RPC to the NameNode.</td></tr><tr><td id="name"><strong>HDFS 
Path to Watch</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td 
 id="description">The HDFS path to get event notifications for. This property 
accepts both expression language and regular expressions. This will be 
evaluated during the OnScheduled phase.<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>Ignore Hidden 
Files</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true and the final component of the path associated with a 
given event starts with a '.' then that event will not be 
processed.</td></tr><tr><td id="name"><strong>Event Types to Filter 
On</strong></td><td id="default-value">append, close, create, metadata, rename, 
unlink</td><td id="allowable-values"></td><td id="description">A 
comma-separated list of event types to process. Valid event types are: append, 
close, create, metadata, rename, and unlink. Case does not 
matter.</td></tr><tr><td id="name"><strong>IOException Retries During Event 
Polling</strong><
 /td><td id="default-value">3</td><td id="allowable-values"></td><td 
id="description">According to the HDFS admin API for event polling it is good 
to retry at least a few times. This number defines how many times the poll will 
be retried if it throws an IOException.</td></tr></table><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 flow file with updated information about a specific event will be sent to this 
relationship.</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>mime.type</td><td>This
 is always 
application/json.</td></tr><tr><td>hdfs.inotify.event.type</td><td>This will 
specify the specific HDFS notification event type. Currently there are six 
types of events (append, close, create, metadata, rename, and 
unlink).</td></tr><tr><td>hdfs.inotify.event.path</td><td>The specific path 
that the even
 t is tied to.</td></tr></table><h3>State management: </h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>The
 last used transaction id is stored. This is used to resume event polling after a restart without re-processing events. 
</td></tr></table><h3>Restricted: </h3>This component is not 
restricted.<h3>Input requirement: </h3>This component does not allow an 
incoming relationship.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.FetchHDFS/index.html">FetchHDFS</a>, 
<a href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a>, 
<a 
href="../org.apache.nifi.processors.hadoop.ListHDFS/index.html">ListHDFS</a></p></body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.FetchHBaseRow/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>FetchHBaseRow</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">FetchHBaseRow</h1><h2>Description: </h2><p>Fetches a row from an HBase 
table. The Destination property controls whether the cells are added as flow 
file attributes, or the row is written to the flow file content as JSON. This 
processor may be used to fetch a fixed row on an interval by specifying the 
table and row id directly in the processor, or it may be used to dynamically 
fetch rows by referencing the table and row id from incoming flow 
files.</p><h3>Tags: </h3><p>hbase, scan, fetch, get, enrich</p><h3>Properties: 
</h3><p>In the list below, the names of required properties appear in 
<strong>bol
 d</strong>. Any other properties (not in bold) are considered optional. The 
table also indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>HBaseClientService<br/><strong>Implementation: </strong><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table to fetch from.<br/>
 <strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name"><strong>Row Identifier</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The identifier of the row to 
fetch.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Columns</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">An optional comma-separated 
list of "&lt;colFamily&gt;:&lt;colQualifier&gt;" pairs to fetch. To return all 
columns for a given family, leave off the qualifier such as 
"&lt;colFamily1&gt;,&lt;colFamily2&gt;".<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td 
id="name"><strong>Destination</strong></td><td 
id="default-value">flowfile-attributes</td><td 
id="allowable-values"><ul><li>flowfile-attributes <img 
src="../../../../../html/images/iconInfo.png" alt="Adds the JSON document 
representing the row that was fetched as an attribute named hbase.row. The 
format of th
 e JSON document is determined by the JSON Format property. NOTE: Fetching many 
large rows into attributes may have a negative impact on performance." 
title="Adds the JSON document representing the row that was fetched as an 
attribute named hbase.row. The format of the JSON document is determined by the 
JSON Format property. NOTE: Fetching many large rows into attributes may have a 
negative impact on performance."></img></li><li>flowfile-content <img 
src="../../../../../html/images/iconInfo.png" alt="Overwrites the FlowFile 
content with a JSON document representing the row that was fetched. The format 
of the JSON document is determined by the JSON Format property." 
title="Overwrites the FlowFile content with a JSON document representing the 
row that was fetched. The format of the JSON document is determined by the JSON 
Format property."></img></li></ul></td><td id="description">Indicates whether 
the row fetched from HBase is written to FlowFile content or FlowFile 
Attributes.</td></t
 r><tr><td id="name"><strong>JSON Format</strong></td><td 
id="default-value">full-row</td><td id="allowable-values"><ul><li>full-row <img 
src="../../../../../html/images/iconInfo.png" alt="Creates a JSON document with 
the format: {&quot;row&quot;:&lt;row-id&gt;, 
&quot;cells&quot;:[{&quot;fam&quot;:&lt;col-fam&gt;, 
&quot;qual&quot;:&lt;col-val&gt;, &quot;val&quot;:&lt;value&gt;, 
&quot;ts&quot;:&lt;timestamp&gt;}]}." title="Creates a JSON document with the 
format: {&quot;row&quot;:&lt;row-id&gt;, 
&quot;cells&quot;:[{&quot;fam&quot;:&lt;col-fam&gt;, 
&quot;qual&quot;:&lt;col-val&gt;, &quot;val&quot;:&lt;value&gt;, 
&quot;ts&quot;:&lt;timestamp&gt;}]}."></img></li><li>col-qual-and-val <img 
src="../../../../../html/images/iconInfo.png" alt="Creates a JSON document with 
the format: {&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;, 
&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;}." title="Creates a JSON 
document with the format: {&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;, 
&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;}."></img></li></ul></td><td
 id="description">Specifies how to represent the HBase row as a JSON 
document.</td></tr><tr><td id="name"><strong>JSON Value 
Encoding</strong></td><td id="default-value">none</td><td 
id="allowable-values"><ul><li>none <img 
src="../../../../../html/images/iconInfo.png" alt="Creates a String using the 
bytes of given data and the given Character Set." title="Creates a String using 
the bytes of given data and the given Character Set."></img></li><li>base64 
<img src="../../../../../html/images/iconInfo.png" alt="Creates a Base64 
encoded String of the given data." title="Creates a Base64 encoded String of 
the given data."></img></li></ul></td><td id="description">Specifies how to 
represent row ids, column families, column qualifiers, and values when stored 
in FlowFile attributes, or written to JSON.</td></tr><tr><td 
id="name"><strong>Encode Character Set</strong></td><td 
id="default-value">UTF-8</td><td
  id="allowable-values"></td><td id="description">The character set used to 
encode the JSON representation of the row.</td></tr><tr><td 
id="name"><strong>Decode Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">The character set used to decode data from 
HBase.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 successful fetches are routed to this 
relationship.</td></tr><tr><td>failure</td><td>All failed fetches are routed to 
this relationship.</td></tr><tr><td>not found</td><td>All fetches where the row 
id is not found are routed to this relationship.</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>hbase.table</td><td>The
 name of the HBase table that the row was fetched 
from</td></tr><tr><td>hbase.row</td><td>A JSON docu
 ment representing the row. This property is only written when a Destination of 
flowfile-attributes is selected.</td></tr><tr><td>mime.type</td><td>Set to 
application/json when using a Destination of flowfile-content, not set or 
modified otherwise</td></tr></table><h3>State management: </h3>This component 
does not store state.<h3>Restricted: </h3>This component is not 
restricted.<h3>Input requirement: </h3>This component requires an incoming 
relationship.</body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.GetHBase/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.GetHBase/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.GetHBase/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.GetHBase/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHBase</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">GetHBase</h1><h2>Description: </h2><p>This Processor polls HBase for any 
records in the specified table. The processor keeps track of the timestamp of 
the cells that it receives, so that as new records are pushed to HBase, they 
will automatically be pulled. Each record is output in JSON format, as {"row": 
"&lt;row key&gt;", "cells": { "&lt;column 1 family&gt;:&lt;column 1 
qualifier&gt;": "&lt;cell 1 value&gt;", "&lt;column 2 family&gt;:&lt;column 2 
qualifier&gt;": "&lt;cell 2 value&gt;", ... }}. For each record received, a 
Provenance RECEIVE event is emitted with the format hbase://&lt;table name&gt;/&
 lt;row key&gt;, where &lt;row key&gt; is the UTF-8 encoded value of the row's 
key.</p><h3>Tags: </h3><p>hbase, get, ingest</p><h3>Properties: </h3><p>In the 
list below, the names of required properties appear in <strong>bold</strong>. 
Any other properties (not in bold) are considered optional. The table also 
indicates any default values.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>HBase Client 
Service</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>HBaseClientService<br/><strong>Implementation: </strong><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name">Distributed Cache Service</td><td 
id="default-value"></td><td i
 d="allowable-values"><strong>Controller Service API: 
</strong><br/>DistributedMapCacheClient<br/><strong>Implementations: 
</strong><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html">HBase_1_1_2_ClientMapCacheService</a><br/><a
 
href="../../../nifi-distributed-cache-services-nar/1.5.0/org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService/index.html">DistributedMapCacheClientService</a><br/><a
 
href="../../../nifi-redis-nar/1.5.0/org.apache.nifi.redis.service.RedisDistributedMapCacheClientService/index.html">RedisDistributedMapCacheClientService</a></td><td
 id="description">Specifies the Controller Service that should be used to 
maintain state about what has been pulled from HBase so that if a new node 
begins pulling data, it won't duplicate all of the work that has been 
done.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></t
 d><td id="description">The name of the HBase Table to pull data 
from</td></tr><tr><td id="name">Columns</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of 
"&lt;colFamily&gt;:&lt;colQualifier&gt;" pairs to return when scanning. To 
return all columns for a given family, leave off the qualifier such as 
"&lt;colFamily1&gt;,&lt;colFamily2&gt;".</td></tr><tr><td id="name">Filter 
Expression</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">An HBase filter expression that will be applied to the scan. 
This property cannot be used when also using the Columns 
property.</td></tr><tr><td id="name"><strong>Initial Time 
Range</strong></td><td id="default-value">None</td><td 
id="allowable-values"><ul><li>None</li><li>Current Time</li></ul></td><td 
id="description">The time range to use on the first scan of a table. None will 
pull the entire table on the first scan, Current Time will pull entries from 
that p
 oint forward.</td></tr><tr><td id="name"><strong>Character 
Set</strong></td><td id="default-value">UTF-8</td><td 
id="allowable-values"></td><td id="description">Specifies which character set 
is used to encode the data in HBase</td></tr></table><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 FlowFiles are routed to this relationship</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>hbase.table</td><td>The
 name of the HBase table that the data was pulled 
from</td></tr><tr><td>mime.type</td><td>Set to application/json to indicate 
that output is JSON</td></tr></table><h3>State management: </h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>After
 performing a fetch from HBase, stores a timestamp of the last-modified cell 
that was found. In addition, it stores
  the ID of the row(s) and the value of each cell that has that timestamp as 
its modification date. This is stored across the cluster and allows the next 
fetch to avoid duplicating data, even if this Processor is run on Primary Node 
only and the Primary Node changes.</td></tr></table><h3>Restricted: </h3>This 
component is not restricted.<h3>Input requirement: </h3>This component does not 
allow an incoming relationship.</body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseCell/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseCell/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseCell/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseCell/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHBaseCell</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHBaseCell</h1><h2>Description: </h2><p>Adds the Contents of a 
FlowFile to HBase as the value of a single cell</p><h3>Tags: </h3><p>hadoop, 
hbase</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>HBaseClientService<br/><strong>Implementation: </strong><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table to put data into<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Row Identifier</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the Row ID to use when inserting data into 
HBase<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Row Identifier Encoding Strategy</td><td 
id="default-value">String</td><td id="allowable-values"><ul><li>String 
<img src="../../../../../html/images/iconInfo.png" alt="Stores the value of row 
id as a UTF-8 String." title="Stores the value of row id as a UTF-8 
String."></img></li><li>Binary <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of the rows 
id as a binary byte array. It expects that the row id is a binary formatted 
string." title="Stores the value of the rows id as a binary byte array. It 
expects that the row id is a binary formatted string."></img></li></ul></td><td 
id="description">Specifies the data type of Row ID used when inserting data 
into HBase. The default behavior is to convert the row id to a UTF-8 byte 
array. Choosing Binary will convert a binary formatted string to the correct 
byte[] representation. The Binary option should be used if you are using Binary 
row keys in HBase</td></tr><tr><td id="name"><strong>Column 
Family</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Column Family to use when 
inserting data into HBase<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Column 
Qualifier</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Column Qualifier to use 
when inserting data into HBase<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Timestamp</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
timestamp for the cells being created in HBase. This field can be left blank 
and HBase will use the current time.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Batch Size</strong></td><td 
id="default-value">25</td><td id="allowable-values"></td><td 
id="description">The maximum number of FlowFiles to process in a single 
execution. The FlowFiles will be grouped by table, and a single Put per table 
will be performed.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship after it has been successfully stored 
in HBase</td></tr><tr><td>failure</td><td>A FlowFile is routed to this 
relationship if it cannot be sent to HBase</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3>None 
specified.<h3>State management: </h3>This component does not store 
state.<h3>Restricted: </h3>This component is not restricted.<h3>Input 
requirement: </h3>This component requires an incoming 
relationship.</body></html>
\ No newline at end of file
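As a reviewer's aside, the Batch Size semantics described in the hunk above (take up to 25 FlowFiles per execution, group them by table, and issue a single Put per table) can be sketched in plain Python. This is an illustrative model only, not the NiFi processor API; the `flowfiles` dictionaries and `group_puts` function are invented for the sketch.

```python
from collections import defaultdict

def group_puts(flowfiles, batch_size=25):
    """Illustrative sketch of PutHBaseCell batching: take at most
    batch_size FlowFiles per execution and group their cells by table,
    so one Put (a list of cells) is performed per table."""
    batch = flowfiles[:batch_size]  # at most Batch Size FlowFiles per run
    puts = defaultdict(list)
    for ff in batch:
        # each FlowFile contributes one cell: (row, family, qualifier, value)
        puts[ff["table"]].append(
            (ff["row"], ff["family"], ff["qualifier"], ff["content"])
        )
    return dict(puts)

flowfiles = [
    {"table": "t1", "row": "r1", "family": "f", "qualifier": "q", "content": b"a"},
    {"table": "t2", "row": "r2", "family": "f", "qualifier": "q", "content": b"b"},
    {"table": "t1", "row": "r3", "family": "f", "qualifier": "q", "content": b"c"},
]
grouped = group_puts(flowfiles)
```

Grouping by table is what lets a failure be routed per FlowFile while still amortizing the round-trip to HBase across the batch.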

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseJSON/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseJSON/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseJSON/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseJSON/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHBaseJSON</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHBaseJSON</h1><h2>Description: </h2><p>Adds rows to HBase based on 
the contents of incoming JSON documents. Each FlowFile must contain a single 
UTF-8 encoded JSON document, and any FlowFiles where the root element is not a 
single document will be routed to failure. Each JSON field name and value will 
become a column qualifier and value of the HBase row. Any fields with a null 
value will be skipped, and fields with a complex value will be handled 
according to the Complex Field Strategy. The row id can be specified either 
directly on the processor through the Row Identifier property, or can be 
extracted from the JSON document by specifying the Row Identifier Field Name 
property. This processor will hold the contents of all FlowFiles for the given 
batch in memory at one time.</p><h3>Tags: </h3><p>hadoop, hbase, put, 
json</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>HBaseClientService<br/><strong>Implementation: </strong><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td 
id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table to put data into<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Row Identifier</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the Row ID to use when inserting data into 
HBase<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Row Identifier Field Name</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Specifies the name of a JSON 
element whose value should be used as the row id for the given JSON 
document.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Row Identifier Encoding 
Strategy</td><td id="default-value">String</td><td 
id="allowable-values"><ul><li>String <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of row id 
as a UTF-8 String." title="Stores the value of row id as a UTF-8 
String."></img></li><li>Binary <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of the rows 
id as a binary byte array. It expects that the row id is a binary formatted 
string." title="Stores the value of the rows id as a binary byte array. It 
expects that the row id is a binary formatted string."></img></li></ul></td><td 
id="description">Specifies the data type of Row ID used when inserting data 
into HBase. The default behavior is to convert the row id to a UTF-8 byte 
array. Choosing Binary will convert a binary formatted string to the correct 
byte[] representation. The Binary option should be used if you are using Binary 
row keys in HBase</td></tr><tr><td id="name"><strong>Column 
Family</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Column Family to use when 
inserting data 
into HBase<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Timestamp</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
timestamp for the cells being created in HBase. This field can be left blank 
and HBase will use the current time.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Batch Size</strong></td><td 
id="default-value">25</td><td id="allowable-values"></td><td 
id="description">The maximum number of FlowFiles to process in a single 
execution. The FlowFiles will be grouped by table, and a single Put per table 
will be performed.</td></tr><tr><td id="name"><strong>Complex Field 
Strategy</strong></td><td id="default-value">Text</td><td 
id="allowable-values"><ul><li>Fail <img 
src="../../../../../html/images/iconInfo.png" alt="Route entire FlowFile to 
failure if any elements contain complex values." title="Route entire FlowFile 
to failure if any elements contain complex 
values."></img></li><li>Warn <img src="../../../../../html/images/iconInfo.png" 
alt="Provide a warning and do not include field in row sent to HBase." 
title="Provide a warning and do not include field in row sent to 
HBase."></img></li><li>Ignore <img 
src="../../../../../html/images/iconInfo.png" alt="Silently ignore and do not 
include in row sent to HBase." title="Silently ignore and do not include in row 
sent to HBase."></img></li><li>Text <img 
src="../../../../../html/images/iconInfo.png" alt="Use the string 
representation of the complex field as the value of the given column." 
title="Use the string representation of the complex field as the value of the 
given column."></img></li></ul></td><td id="description">Indicates how to 
handle complex fields, i.e. fields that do not have a single text 
value.</td></tr><tr><td id="name"><strong>Field Encoding 
Strategy</strong></td><td id="default-value">String</td><td 
id="allowable-values"><ul><li>String <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of each 
field as a UTF-8 String." title="Stores the value of each field as a UTF-8 
String."></img></li><li>Bytes <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of each 
field as the byte representation of the type derived from the JSON." 
title="Stores the value of each field as the byte representation of the type 
derived from the JSON."></img></li></ul></td><td id="description">Indicates how 
to store the value of each field in HBase. The default behavior is to convert 
each value from the JSON to a String, and store the UTF-8 bytes. Choosing Bytes 
will interpret the type of each field from the JSON, and convert the value to 
the byte representation of that type, meaning an integer will be stored as the 
byte representation of that integer.</td></tr></table><h3>Relationships: 
</h3><table id="relationships"><tr><th>Name</th><th>Description</th></tr><tr>
 <td>success</td><td>A FlowFile is routed to this relationship after it has 
been successfully stored in HBase</td></tr><tr><td>failure</td><td>A FlowFile 
is routed to this relationship if it cannot be sent to 
HBase</td></tr></table><h3>Reads Attributes: </h3>None specified.<h3>Writes 
Attributes: </h3>None specified.<h3>State management: </h3>This component does 
not store state.<h3>Restricted: </h3>This component is not restricted.<h3>Input 
requirement: </h3>This component requires an incoming 
relationship.</body></html>
\ No newline at end of file
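The field-to-cell mapping that the PutHBaseJSON description above spells out (each top-level JSON field becomes a column qualifier and value, null fields are skipped, complex fields follow the Complex Field Strategy, and the row id can come from a named field) can be modeled in a few lines of Python. This is a hedged sketch of the documented behavior, not the processor's actual implementation; `json_to_row` and its parameters are invented names.

```python
import json

def json_to_row(doc_text, row_field, complex_strategy="Text"):
    """Illustrative sketch of PutHBaseJSON's documented mapping: pull the
    row id out of the named field, skip null fields, and handle complex
    (non-scalar) values per the Complex Field Strategy."""
    doc = json.loads(doc_text)
    row_id = doc.pop(row_field)  # Row Identifier Field Name behavior
    cells = {}
    for name, value in doc.items():
        if value is None:
            continue  # null fields are skipped
        if isinstance(value, (dict, list)):
            if complex_strategy == "Ignore":
                continue  # silently drop complex fields
            value = json.dumps(value)  # "Text": store the string representation
        cells[name] = str(value)
    return row_id, cells

row, cells = json_to_row(
    '{"id": "r1", "name": "a", "count": 3, "tags": [1, 2], "gone": null}', "id"
)
```

Note that the real processor also supports Fail and Warn strategies for complex fields; only Text and Ignore are modeled here for brevity.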

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseRecord/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseRecord/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseRecord/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.5.0/org.apache.nifi.hbase.PutHBaseRecord/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHBaseRecord</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHBaseRecord</h1><h2>Description: </h2><p>Adds rows to HBase based on 
the contents of a FlowFile using a configured record reader.</p><h3>Tags: 
</h3><p>hadoop, hbase, put, record</p><h3>Properties: </h3><p>In the list 
below, the names of required properties appear in <strong>bold</strong>. Any 
other properties (not in bold) are considered optional. The table also 
indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable
  Values</th><th>Description</th></tr><tr><td id="name"><strong>Record 
Reader</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>RecordReaderFactory<br/><strong>Implementations: </strong><a 
href="../../../nifi-record-serialization-services-nar/1.5.0/org.apache.nifi.csv.CSVReader/index.html">CSVReader</a><br/><a
 
href="../../../nifi-scripting-nar/1.5.0/org.apache.nifi.record.script.ScriptedReader/index.html">ScriptedReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.5.0/org.apache.nifi.grok.GrokReader/index.html">GrokReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.5.0/org.apache.nifi.json.JsonTreeReader/index.html">JsonTreeReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.5.0/org.apache.nifi.avro.AvroReader/index.html">AvroReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.5.0/org.apache.nifi.json.JsonPathReader/index.html">JsonPathReader</a></td><td 
id="description">Specifies the Controller Service to use 
for parsing incoming data and determining the data's schema</td></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>HBaseClientService<br/><strong>Implementation: </strong><a 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table to put data into<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>Row Identifier Field 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Specifies the name of a 
record field whose value should be used as the row id for 
the given record.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Row Identifier Encoding 
Strategy</td><td id="default-value">String</td><td 
id="allowable-values"><ul><li>String <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of row id 
as a UTF-8 String." title="Stores the value of row id as a UTF-8 
String."></img></li><li>Binary <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of the rows 
id as a binary byte array. It expects that the row id is a binary formatted 
string." title="Stores the value of the rows id as a binary byte array. It 
expects that the row id is a binary formatted string."></img></li></ul></td><td 
id="description">Specifies the data type of Row ID used when inserting data 
into HBase. The default behavior is to convert the row id to a UTF-8 byte 
array. Choosing Binary will convert a binary formatted string
  to the correct byte[] representation. The Binary option should be used if you 
are using Binary row keys in HBase</td></tr><tr><td id="name"><strong>Null 
Field Strategy</strong></td><td id="default-value">skip-field</td><td 
id="allowable-values"><ul><li>Empty Bytes <img 
src="../../../../../html/images/iconInfo.png" alt="Use empty bytes. This can be 
used to overwrite existing fields or to put an empty placeholder value if you 
want every field to be present even if it has a null value." title="Use empty 
bytes. This can be used to overwrite existing fields or to put an empty 
placeholder value if you want every field to be present even if it has a null 
value."></img></li><li>Skip Field <img 
src="../../../../../html/images/iconInfo.png" alt="Skip the field (don't 
process it at all)." title="Skip the field (don't process it at 
all)."></img></li></ul></td><td id="description">Handle null field values by 
writing empty bytes or by skipping the field altogether.</td></tr><tr><td 
id="name"><strong>Column Family</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Column Family to use when 
inserting data into HBase<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Timestamp Field Name</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the name of a record field whose value should be 
used as the timestamp for the cells in HBase. The value of this field must be a 
number, string, or date that can be converted to a long. If this field is left 
blank, HBase will use the current time.<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>Batch 
Size</strong></td><td id="default-value">1000</td><td 
id="allowable-values"></td><td id="description">The maximum number of records 
to be sent to HBase at any one time from the record set.</td></tr><tr><td 
id="name"><strong>Complex Field Strategy</strong></td><td 
id="default-value">Text</td><td
  id="allowable-values"><ul><li>Fail <img 
src="../../../../../html/images/iconInfo.png" alt="Route entire FlowFile to 
failure if any elements contain complex values." title="Route entire FlowFile 
to failure if any elements contain complex values."></img></li><li>Warn <img 
src="../../../../../html/images/iconInfo.png" alt="Provide a warning and do not 
include field in row sent to HBase." title="Provide a warning and do not 
include field in row sent to HBase."></img></li><li>Ignore <img 
src="../../../../../html/images/iconInfo.png" alt="Silently ignore and do not 
include in row sent to HBase." title="Silently ignore and do not include in row 
sent to HBase."></img></li><li>Text <img 
src="../../../../../html/images/iconInfo.png" alt="Use the string 
representation of the complex field as the value of the given column." 
title="Use the string representation of the complex field as the value of the 
given column."></img></li></ul></td><td id="description">Indicates how to 
handle complex fields, i.e. fields that do not have a single text 
value.</td></tr><tr><td 
id="name"><strong>Field Encoding Strategy</strong></td><td 
id="default-value">String</td><td id="allowable-values"><ul><li>String <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of each 
field as a UTF-8 String." title="Stores the value of each field as a UTF-8 
String."></img></li><li>Bytes <img 
src="../../../../../html/images/iconInfo.png" alt="Stores the value of each 
field as the byte representation of the type derived from the record." 
title="Stores the value of each field as the byte representation of the type 
derived from the record."></img></li></ul></td><td id="description">Indicates 
how to store the value of each field in HBase. The default behavior is to 
convert each value from the record to a String, and store the UTF-8 bytes. 
Choosing Bytes will interpret the type of each field from the record, and 
convert the value to the byte representation of that type, meaning an integer 
will be stored as the byte representation of that 
integer.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship after it has been successfully stored 
in HBase</td></tr><tr><td>failure</td><td>A FlowFile is routed to this 
relationship if it cannot be sent to HBase</td></tr></table><h3>Reads 
Attributes: </h3><table 
id="reads-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>restart.index</td><td>Reads
 restart.index when it needs to replay part of a record set that did not get 
into HBase.</td></tr></table><h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>restart.index</td><td>Writes
restart.index when a batch fails to be inserted into 
HBase</td></tr></table><h3>State management: </h3>This component does not store 
state.<h3>Restricted: </h3>This component is not restricted.<h3>Input 
requirement: </h3>This 
component requires an incoming relationship.</body></html>
\ No newline at end of file
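The Field Encoding Strategy contrast in the hunk above (String stores each value's UTF-8 string bytes; Bytes stores the byte representation of the type derived from the record, e.g. an integer as the bytes of that integer) can be illustrated with the standard-library `struct` module. The sketch below is an assumption-laden analogy, not NiFi code; the big-endian 8-byte long mirrors the common HBase `Bytes.toBytes(long)` layout but is only an example.

```python
import struct

def encode_field(value, strategy="String"):
    """Illustrative contrast of PutHBaseRecord's Field Encoding Strategy.
    "String" stores the UTF-8 bytes of the value's string form; "Bytes"
    stores the byte representation of the derived type (here, an int is
    packed as a big-endian 8-byte long, analogous to Bytes.toBytes(long))."""
    if strategy == "String":
        return str(value).encode("utf-8")
    if isinstance(value, int):
        return struct.pack(">q", value)  # 8-byte big-endian long
    return str(value).encode("utf-8")  # fallback for non-numeric types

s = encode_field(42, "String")  # two bytes: the characters "4" and "2"
b = encode_field(42, "Bytes")   # eight bytes: the integer's binary form
```

The practical consequence is that readers of the table must agree on the strategy: a cell written with Bytes will look like garbage if decoded as a UTF-8 string, and vice versa.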

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html?rev=1821033&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.5.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html
 Fri Jan 12 21:00:14 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>HBase_1_1_2_ClientMapCacheService</title><link 
rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">HBase_1_1_2_ClientMapCacheService</h1><h2>Description: </h2><p>Provides 
the ability to use an HBase table as a cache, in place of a 
DistributedMapCache. Uses an HBase_1_1_2_ClientService controller to communicate 
with HBase.</p><h3>Tags: </h3><p>distributed, cache, state, map, cluster, 
hbase</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>HBase Cache Table 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Name of the table on HBase to 
use for the cache.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>HBase Client 
Service</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>HBaseClientService<br/><strong>Implementation: </strong><a 
href="../org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the HBase Client Controller Service to use for 
accessing HBase.</td></tr><tr><td id="name"><strong>HBase Column 
Family</strong></td><td id="default-value">f</td><td 
id="allowable-values"></td><td id="description">Name of 
 the column family on HBase to use for the cache.<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name"><strong>HBase 
Column Qualifier</strong></td><td id="default-value">q</td><td 
id="allowable-values"></td><td id="description">Name of the column qualifier on 
HBase to use for the cache<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>State management: </h3>This component does 
not store state.<h3>Restricted: </h3>This component is not restricted.<h3>See 
Also:</h3><p><a 
href="../org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></p></body></html>
\ No newline at end of file

