Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHDFSSequenceFile</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">GetHDFSSequenceFile</h1><h2>Description: </h2><p>Fetch sequence files 
from Hadoop Distributed File System (HDFS) into FlowFiles</p><h3>Tags: 
</h3><p>hadoop, HDFS, get, fetch, ingest, source, sequence 
file</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration. To use swebhdfs, see 'Additional 
Details' section of PutHDFS's documentation.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Kerberos Credentials Service</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>KerberosCredentialsService<br/><strong>Implementation: 
</strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.9.0/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td 
id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos principal to 
authenticate as. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos keytab associated with 
the principal. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Relogin Period</td><td id="default-value">4 hours</td><td 
id="allowable-values"></td><td id="description">Period of time which should 
pass before attempting a kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Additional Classpath Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of paths 
to files and/or directories that will be added to the classpath. When 
specifying a directory, all files within the directory will be added to the 
classpath, but further sub-directories will not be included.</td></tr><tr><td 
id="name"><strong>Directory</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The HDFS directory from which 
files should be read<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Recurse Subdirectories</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to pull files from subdirectories of the 
HDFS directory</td></tr><tr><td id="name"><strong>Keep Source 
File</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Determines whether to delete the file from HDFS after it has 
been successfully transferred. If true, the file will be fetched repeatedly. 
This is intended for testing only.</td></tr><tr><td id="name">File Filter 
Regex</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A Java Regular Expression for filtering Filenames; if a filter 
is supplied then only files whose names match that Regular Expression will be 
fetched, otherwise all files will be fetched</td></tr><tr><td 
id="name"><strong>Filter Match Name Only</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true then File Filter Regex will match on just the 
filename, otherwise subdirectory names will be included with filename in the 
regex comparison</td></tr><tr><td id="name"><strong>Ignore Dotted 
Files</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true, files whose names begin with a dot (".") will be 
ignored</td></tr><tr><td id="name"><strong>Minimum File Age</strong></td><td 
id="default-value">0 sec</td><td id="allowable-values"></td><td 
id="description">The minimum age that a file must be in order to be pulled; any 
file younger than this amount of time (based on last modification date) will be 
ignored</td></tr><tr><td id="name">Maximum File Age</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
maximum age that a file must be in order to be pulled; any file older than this 
amount of time (based on last modification date) will be 
ignored</td></tr><tr><td id="name"><strong>Polling Interval</strong></td><td 
id="default-value">0 sec</td><td id="allowable-values"></td><td 
id="description">Indicates how long to wait between performing directory 
listings</td></tr><tr><td id="name"><strong>Batch Size</strong></td><td 
id="default-value">100</td><td id="allowable-values"></td><td 
id="description">The maximum number of files to pull in each iteration, based 
on run schedule.</td></tr><tr><td id="name">IO Buffer Size</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Amount of memory to use to buffer file contents during IO. 
This overrides the Hadoop Configuration</td></tr><tr><td 
id="name"><strong>Compression codec</strong></td><td 
id="default-value">NONE</td><td id="allowable-values"><ul><li>NONE <img 
src="../../../../../html/images/iconInfo.png" alt="No compression" title="No 
compression"></img></li><li>DEFAULT <img 
src="../../../../../html/images/iconInfo.png" alt="Default ZLIB compression" 
title="Default ZLIB compression"></img></li><li>BZIP <img 
src="../../../../../html/images/iconInfo.png" alt="BZIP compression" 
title="BZIP compression"></img></li><li>GZIP <img 
src="../../../../../html/images/iconInfo.png" alt="GZIP compression" 
title="GZIP compression"></img></li><li>LZ4 <img 
src="../../../../../html/images/iconInfo.png" alt="LZ4 compression" title="LZ4 
compression"></img></li><li>LZO <img 
src="../../../../../html/images/iconInfo.png" alt="LZO compression - it assumes 
LD_LIBRARY_PATH has been set and jar is available" title="LZO compression - it 
assumes LD_LIBRARY_PATH has been set and jar is 
available"></img></li><li>SNAPPY <img 
src="../../../../../html/images/iconInfo.png" alt="Snappy compression" 
title="Snappy compression"></img></li><li>AUTOMATIC <img 
src="../../../../../html/images/iconInfo.png" alt="Will attempt to 
automatically detect the compression codec." title="Will attempt to 
automatically detect the compression codec."></img></li></ul></td><td 
id="description">No Description Provided.</td></tr><tr><td 
id="name"><strong>FlowFile Content</strong></td><td id="default-value">VALUE 
ONLY</td><td id="allowable-values"><ul><li>VALUE ONLY</li><li>KEY VALUE 
PAIR</li></ul></td><td id="description">Indicate if the content is to be both 
the key and value of the Sequence File, or just the 
value.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 files retrieved from HDFS are transferred to this 
relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the relative path of the file's directory on HDFS. For example, 
if the Directory property is set to /tmp, then files picked up from /tmp will 
have the path attribute set to "./". 
 If the Recurse Subdirectories property is set to true and a file is picked up 
from /tmp/abc/1/2/3, then the path attribute will be set to 
"abc/1/2/3".</td></tr></table><h3>State management: </h3>This component does 
not store state.<h3>Restricted: </h3><table id="restrictions"><tr><th>Required 
Permission</th><th>Explanation</th></tr><tr><td>read 
filesystem</td><td>Provides operator the ability to retrieve any file that NiFi 
has access to in HDFS or the local filesystem.</td></tr><tr><td>write 
filesystem</td><td>Provides operator the ability to delete any file that NiFi 
has access to in HDFS or the local filesystem.</td></tr></table><h3>Input 
requirement: </h3>This component does not allow an incoming 
relationship.<h3>System Resource Considerations:</h3>None specified.<h3>See 
Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.ListHDFS/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.ListHDFS/additionalDetails.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.ListHDFS/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.ListHDFS/additionalDetails.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,104 @@
+<!DOCTYPE html>
+<html lang="en" xmlns="http://www.w3.org/1999/html">
+<!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+
+<head>
+    <meta charset="utf-8"/>
+    <title>ListHDFS</title>
+    <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css"/>
+</head>
+
+<body>
+<!-- Processor Documentation 
================================================== -->
+<h1>ListHDFS Filter Modes</h1>
+<p>
+There are three filter modes available for ListHDFS that determine how the 
regular expression in the <b><code>File Filter</code></b> property will be 
applied to listings in HDFS.
+<ul>
+    <li><b><code>Directories and Files</code></b>: 
Filtering will be applied to the names of directories and files.  If 
<b><code>Recurse Subdirectories</code></b> is set to true, only subdirectories 
with a matching name will be searched for files that match the regular 
expression defined in <b><code>File Filter</code></b>.</li>
+    <li><b><code>Files Only</code></b>: 
Filtering will only be applied to the names of files.  If <b><code>Recurse 
Subdirectories</code></b> is set to true, the entire subdirectory tree will be 
searched for files that match the regular expression defined in <b><code>File 
Filter</code></b>.</li>
+    <li><b><code>Full Path</code></b>: 
Filtering will be applied to the full path of files.  If <b><code>Recurse 
Subdirectories</code></b> is set to true, the entire subdirectory tree will be 
searched for files in which the full path of the file matches the regular 
expression defined in <b><code>File Filter</code></b>.</li>
+</ul>
+<p>
+<h2>Examples:</h2>
+For the given examples, the following directory structure is used:
+<br>
+<br>
+    data<br>
+    ├── readme.txt<br>
+    ├── bin<br>
+    │   ├── readme.txt<br>
+    │   ├── 1.bin<br>
+    │   ├── 2.bin<br>
+    │   └── 3.bin<br>
+    ├── csv<br>
+    │   ├── readme.txt<br>
+    │   ├── 1.csv<br>
+    │   ├── 2.csv<br>
+    │   └── 3.csv<br>
+    └── txt<br>
+    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ├── readme.txt<br>
+    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ├── 1.txt<br>
+    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ├── 2.txt<br>
+    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; └── 3.txt<br>
+    <br><br>
+<h3><b>Directories and Files</b></h3>
+This mode is useful when the listing should match the names of directories and 
files with the regular expression defined in <b><code>File Filter</code></b>.  
When <b><code>Recurse Subdirectories</code></b> is true, this mode allows the 
user to filter for files in subdirectories with names that match the regular 
expression defined in <b><code>File Filter</code></b>.
+<br>
+<br>
+ListHDFS configuration:
+<table><tr><th><b><code>Property</code></b></th><th><b><code>Value</code></b></th></tr><tr><td><b><code>Directory</code></b></td><td><code>/data</code></td></tr><tr><td><b><code>Recurse 
Subdirectories</code></b></td><td>true</td></tr><tr><td><b><code>File 
Filter</code></b></td><td><code>.*txt.*</code></td></tr><tr><td><code><b>Filter 
Mode</b></code></td><td><code>Directories and Files</code></td></tr></table>
+<p>ListHDFS results:
+<ul>
+    <li>/data/readme.txt</li>
+    <li>/data/txt/readme.txt</li>
+    <li>/data/txt/1.txt</li>
+    <li>/data/txt/2.txt</li>
+    <li>/data/txt/3.txt</li>
+</ul>
+<h3><b>Files Only</b></h3>
+This mode is useful when the listing should match only the names of files with 
the regular expression defined in <b><code>File Filter</code></b>.  Directory 
names will not be matched against the regular expression defined in 
<b><code>File Filter</code></b>.  When <b><code>Recurse 
Subdirectories</code></b> is true, this mode allows the user to filter for 
files in the entire subdirectory tree of the directory specified in the 
<b><code>Directory</code></b> property.
+<br>
+<br>
+ListHDFS configuration:
+<table><tr><th><b><code>Property</code></b></th><th><b><code>Value</code></b></th></tr><tr><td><b><code>Directory</code></b></td><td><code>/data</code></td></tr><tr><td><b><code>Recurse 
Subdirectories</code></b></td><td>true</td></tr><tr><td><b><code>File 
Filter</code></b></td><td><code>[^\.].*\.txt</code></td></tr><tr><td><code><b>Filter 
Mode</b></code></td><td><code>Files Only</code></td></tr></table>
+<p>ListHDFS results:
+<ul>
+    <li>/data/readme.txt</li>
+    <li>/data/bin/readme.txt</li>
+    <li>/data/csv/readme.txt</li>
+    <li>/data/txt/readme.txt</li>
+    <li>/data/txt/1.txt</li>
+    <li>/data/txt/2.txt</li>
+    <li>/data/txt/3.txt</li>
+</ul>
+<h3><b>Full Path</b></h3>
+This mode is useful when the listing should match the entire path of a file 
with the regular expression defined in <b><code>File Filter</code></b>.  When 
<b><code>Recurse Subdirectories</code></b> is true, this mode allows the user 
to filter for files in the entire subdirectory tree of the directory specified 
in the <b><code>Directory</code></b> property while allowing filtering based on 
the full path of each file.
+<br>
+<br>
+ListHDFS configuration:
+<table><tr><th><b><code>Property</code></b></th><th><b><code>Value</code></b></th></tr><tr><td><b><code>Directory</code></b></td><td><code>/data</code></td></tr><tr><td><b><code>Recurse 
Subdirectories</code></b></td><td>true</td></tr><tr><td><b><code>File 
Filter</code></b></td><td><code>(/.*/)*csv/.*</code></td></tr><tr><td><code><b>Filter 
Mode</b></code></td><td><code>Full Path</code></td></tr></table>
+<p>ListHDFS results:
+<ul>
+    <li>/data/csv/readme.txt</li>
+    <li>/data/csv/1.csv</li>
+    <li>/data/csv/2.csv</li>
+    <li>/data/csv/3.csv</li>
+</ul>
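The three listings above can be sanity-checked with a short script. This is a minimal sketch, not NiFi's implementation: the `list_hdfs` helper and `FILES` table are illustrative names, and Python's `re.fullmatch` stands in for Java's `Matcher.matches()` (the two regex engines agree for the patterns used on this page).

```python
import re

# The example /data tree from this page, flattened to absolute paths.
FILES = [
    "/data/readme.txt",
    "/data/bin/readme.txt", "/data/bin/1.bin", "/data/bin/2.bin", "/data/bin/3.bin",
    "/data/csv/readme.txt", "/data/csv/1.csv", "/data/csv/2.csv", "/data/csv/3.csv",
    "/data/txt/readme.txt", "/data/txt/1.txt", "/data/txt/2.txt", "/data/txt/3.txt",
]

def list_hdfs(directory, file_filter, filter_mode, files=FILES):
    """Approximate ListHDFS filtering of a static listing (illustrative only)."""
    pattern = re.compile(file_filter)
    results = []
    for path in files:
        # Path components relative to the Directory property: [subdirs..., filename]
        parts = path[len(directory):].lstrip("/").split("/")
        if filter_mode == "Directories and Files":
            # Every subdirectory name on the way down AND the filename must match.
            matched = all(pattern.fullmatch(p) for p in parts)
        elif filter_mode == "Files Only":
            # Only the filename is tested against the regular expression.
            matched = pattern.fullmatch(parts[-1]) is not None
        else:  # "Full Path": the file's entire path is tested.
            matched = pattern.fullmatch(path) is not None
        if matched:
            results.append(path)
    return results

print(list_hdfs("/data", r".*txt.*", "Directories and Files"))
print(list_hdfs("/data", r"[^\.].*\.txt", "Files Only"))
print(list_hdfs("/data", r"(/.*/)*csv/.*", "Full Path"))
```

Each call reproduces the corresponding results list shown in the examples above.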
+</body>
+</html>

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.ListHDFS/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ListHDFS</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">ListHDFS</h1><h2>Description: </h2><p>Retrieves a listing of files from 
HDFS. Each time a listing is performed, the files with the latest timestamp 
will be excluded and picked up during the next execution of the processor. This 
is done to ensure that we do not miss any files, or produce duplicates, in the 
cases where files with the same timestamp are written immediately before and 
after a single execution of the processor. For each file that is listed in 
HDFS, this processor creates a FlowFile that represents the HDFS file to be 
fetched in conjunction with FetchHDFS. This Processor is designed to run on 
Primary Node only in a cluster. If the primary node changes, the new Primary 
Node will pick up where the previous node left off without duplicating all of 
the data. Unlike GetHDFS, this Processor does not delete any data from 
HDFS.</p><p><a href="additionalDetails.html">Additional 
Details...</a></p><h3>Tags: </h3><p>hadoop, HDFS, get, list, ingest, source, 
filesystem</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of
  files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration. To use swebhdfs, see 'Additional 
Details' section of PutHDFS's documentation.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Kerberos Credentials Service</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>KerberosCredentialsService<br/><strong>Implementation: 
</strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.9.0/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos keytab associated 
with the principal. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Relogin Period</td><td id="default-value">4 hours</td><td 
id="allowable-values"></td><td id="description">Period of time which should 
pass before attempting a kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Additional Classpath Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of paths 
to files and/or directories that will be added to the classpath. When 
specifying a directory, all files within the directory will be added to the 
classpath, but further sub-directories will not be included.</td></tr><tr><td 
id="name">Distributed Cache Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>DistributedMapCacheClient<br/><strong>Implementations: 
</strong><a 
href="../../../nifi-distributed-cache-services-nar/1.9.0/org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService/index.html">DistributedMapCacheClientService</a><br/>
 <a 
href="../../../nifi-redis-nar/1.9.0/org.apache.nifi.redis.service.RedisDistributedMapCacheClientService/index.html">RedisDistributedMapCacheClientService</a><br/><a
 
href="../../../nifi-couchbase-nar/1.9.0/org.apache.nifi.couchbase.CouchbaseMapCacheClient/index.html">CouchbaseMapCacheClient</a><br/><a
 
href="../../../nifi-hbase_2-client-service-nar/1.9.0/org.apache.nifi.hbase.HBase_2_ClientMapCacheService/index.html">HBase_2_ClientMapCacheService</a><br/><a
 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.9.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html">HBase_1_1_2_ClientMapCacheService</a></td><td
 id="description">This property is ignored.  State will be stored in the LOCAL 
or CLUSTER scope by the State Manager based on NiFi's 
configuration.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
HDFS directory from which files should be read<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name"><strong>Recurse 
Subdirectories</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to list files from subdirectories of the 
HDFS directory</td></tr><tr><td id="name"><strong>File Filter</strong></td><td 
id="default-value">[^\.].*</td><td id="allowable-values"></td><td 
id="description">Only files whose names match the given regular expression will 
be picked up</td></tr><tr><td id="name"><strong>File Filter 
Mode</strong></td><td 
id="default-value">filter-mode-directories-and-files</td><td 
id="allowable-values"><ul><li>Directories and Files <img 
src="../../../../../html/images/iconInfo.png" alt="Filtering will be applied to 
the names of directories and files.  If Recurse Subdirectories is set to true, 
only subdirectories with a matching name will be searched for files that match 
the regular expression defined in File Filter." title="Filtering will be applied to 
the names of directories and files.  If Recurse Subdirectories is set to true, 
only subdirectories with a matching name will be searched for files that match 
the regular expression defined in File Filter."></img></li><li>Files Only <img 
src="../../../../../html/images/iconInfo.png" alt="Filtering will only be 
applied to the names of files.  If Recurse Subdirectories is set to true, the 
entire subdirectory tree will be searched for files that match the regular 
expression defined in File Filter." title="Filtering will only be applied to 
the names of files.  If Recurse Subdirectories is set to true, the entire 
subdirectory tree will be searched for files that match the regular expression 
defined in File Filter."></img></li><li>Full Path <img 
src="../../../../../html/images/iconInfo.png" alt="Filtering will be applied to 
the full path of files.  If Recurse Subdirectories is set to true, the entire 
subdirectory
  tree will be searched for files in which the full path of the file matches 
the regular expression defined in File Filter." title="Filtering will be 
applied to the full path of files.  If Recurse Subdirectories is set to true, 
the entire subdirectory tree will be searched for files in which the full path 
of the file matches the regular expression defined in File 
Filter."></img></li></ul></td><td id="description">Determines how the regular 
expression in  File Filter will be used when retrieving 
listings.</td></tr><tr><td id="name">Minimum File Age</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
minimum age that a file must be in order to be pulled; any file younger than 
this amount of time (based on last modification date) will be 
ignored</td></tr><tr><td id="name">Maximum File Age</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
maximum age that a file must be in order to be pulled; any file older than 
this amount of time (based on last modification date) will be ignored. Minimum 
value is 100ms.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 FlowFiles are transferred to this relationship</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the absolute path of the file's directory on HDFS. For example, 
if the Directory property is set to /tmp, then files picked up from /tmp will 
have the path attribute set to "/tmp". If the Recurse Subdirectories property is 
set to true and a file is picked up from /tmp/abc/1/2/3, then the path 
attribute will be set to 
"/tmp/abc/1/2/3".</td></tr><tr><td>hdfs.owner</td><td>The user that owns the 
file in HDFS</td></tr><tr><td>hdfs.group</td><td>The 
group that owns the file in 
HDFS</td></tr><tr><td>hdfs.lastModified</td><td>The timestamp of when the file 
in HDFS was last modified, as milliseconds since midnight Jan 1, 1970 
UTC</td></tr><tr><td>hdfs.length</td><td>The number of bytes in the file in 
HDFS</td></tr><tr><td>hdfs.replication</td><td>The number of HDFS replicas for 
the file</td></tr><tr><td>hdfs.permissions</td><td>The permissions for the file 
in HDFS. This is formatted as 3 characters for the owner, 3 for the group, and 
3 for other users. For example rw-rw-r--</td></tr></table><h3>State management: 
</h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>After
 performing a listing of HDFS files, the latest timestamp of all the files 
listed and the latest timestamp of all the files transferred are both stored. 
This allows the Processor to list only files that have been added or modified 
after this date the next time that the Processor is run, without having to 
store all of the
  actual filenames/paths which could lead to performance problems. State is 
stored across the cluster so that this Processor can be run on Primary Node 
only and if a new Primary Node is selected, the new node can pick up where the 
previous node left off, without duplicating the 
data.</td></tr></table><h3>Restricted: </h3>This component is not 
restricted.<h3>Input requirement: </h3>This component does not allow an 
incoming relationship.<h3>System Resource Considerations:</h3>None 
specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.FetchHDFS/index.html">FetchHDFS</a>, 
<a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.MoveHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.MoveHDFS/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.MoveHDFS/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.MoveHDFS/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>MoveHDFS</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">MoveHDFS</h1><h2>Description: </h2><p>Rename existing files or a 
directory of files (non-recursive) on Hadoop Distributed File System 
(HDFS).</p><h3>Tags: </h3><p>hadoop, HDFS, put, move, filesystem, 
moveHDFS</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default V
 alue</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration. To use swebhdfs, see 'Additional 
Details' section of PutHDFS's documentation.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Kerberos Credentials Service</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>KerberosCredentialsService<br/><strong>Implementation: 
</strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.9.0/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="des
 cription">Specifies the Kerberos Credentials Controller Service that should be 
used for authenticating with Kerberos</td></tr><tr><td id="name">Kerberos 
Principal</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos principal to authenticate as. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Kerberos Keytab</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos keytab associated with the principal. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Kerberos Relogin Period</td><td 
id="default-value">4 hours</td><td id="allowable-values"></td><td 
id="description">Period of time which should pass before attempting 
 a kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Additional Classpath Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of paths 
to files and/or directories that will be added to the classpath. When 
specifying a directory, all files within the directory will be added to the 
classpath, but further sub-directories will not be included.</td></tr><tr><td 
id="name"><strong>Conflict Resolution Strategy</strong></td><td 
id="default-value">fail</td><td id="allowable-values"><ul><li>replace <img 
src="../../../../../html/images/iconInfo.png" alt="Replaces the existing file 
if any." title="Replaces the existing file if any."></img></li><li>ignore <img 
src="../../../../../html/images/iconInfo.png" alt="Failed rename operation 
stops processing and
  routes to success." title="Failed rename operation stops processing and 
routes to success."></img></li><li>fail <img 
src="../../../../../html/images/iconInfo.png" alt="Failing to rename a file 
routes to failure." title="Failing to rename a file routes to 
failure."></img></li></ul></td><td id="description">Indicates what should 
happen when a file with the same name already exists in the output 
directory</td></tr><tr><td id="name"><strong>Input Directory or 
File</strong></td><td id="default-value">${path}</td><td 
id="allowable-values"></td><td id="description">The HDFS directory from which 
files should be read, or a single file to read.<br/><strong>Supports Expression 
Language: true (will be evaluated using flow file attributes and variable 
registry)</strong></td></tr><tr><td id="name"><strong>Output 
Directory</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The HDFS directory where the 
files will be moved to<br/><strong>Supports Expression
  Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name"><strong>HDFS 
Operation</strong></td><td id="default-value">move</td><td 
id="allowable-values"><ul><li>move</li><li>copy</li></ul></td><td 
id="description">The operation that will be performed on the source 
file</td></tr><tr><td id="name">File Filter Regex</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
Java Regular Expression for filtering Filenames; if a filter is supplied then 
only files whose names match that Regular Expression will be fetched, otherwise 
all files will be fetched</td></tr><tr><td id="name"><strong>Ignore Dotted 
Files</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true, files whose names begin with a dot (".") will be 
ignored</td></tr><tr><td id="name">Remote Owner</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="descr
 iption">Changes the owner of the HDFS file to this value after it is written. 
This only works if NiFi is running as a user that has HDFS super user privilege 
to change owner</td></tr><tr><td id="name">Remote Group</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Changes the group of the HDFS file to this value after it is 
written. This only works if NiFi is running as a user that has HDFS super user 
privilege to change group</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Files
 that have been successfully renamed on HDFS are transferred to this 
relationship</td></tr><tr><td>failure</td><td>Files that could not be renamed 
on HDFS are transferred to this relationship</td></tr></table><h3>Reads 
Attributes: </h3><table 
id="reads-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS comes from the value of th
 is attribute.</td></tr></table><h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS is stored in this 
attribute.</td></tr><tr><td>absolute.hdfs.path</td><td>The absolute path to the 
file on HDFS is stored in this attribute.</td></tr></table><h3>State 
management: </h3>This component does not store state.<h3>Restricted: 
</h3><table id="restrictions"><tr><th>Required 
Permission</th><th>Explanation</th></tr><tr><td>read 
filesystem</td><td>Provides operator the ability to retrieve any file that NiFi 
has access to in HDFS or the local filesystem.</td></tr><tr><td>write 
filesystem</td><td>Provides operator the ability to delete any file that NiFi 
has access to in HDFS or the local filesystem.</td></tr></table><h3>Input 
requirement: </h3>This component allows an incoming relationship.<h3>System 
Resource Considerations:</h3>None specified.<h3>See Also:</h3><p><a 
href="../org.apache.ni
 fi.processors.hadoop.PutHDFS/index.html">PutHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a></p></body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.PutHDFS/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.PutHDFS/additionalDetails.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.PutHDFS/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.PutHDFS/additionalDetails.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,101 @@
+<!DOCTYPE html>
+<html lang="en">
+<!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+
+<head>
+  <meta charset="utf-8" />
+  <title>PutHDFS</title>
+  <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css" />
+</head>
+
+<body>
+  <!-- Processor Documentation 
================================================== -->
+  <h2>SSL Configuration:</h2>
+  <p>
+    Hadoop provides the ability to configure keystore and/or truststore properties. To use an SSL-secured file system such as swebhdfs, you can use these Hadoop configurations instead of an SSL Context Service.
+    <ol>
+      <li>Create 'ssl-client.xml' to configure the truststores.</li>
+      <p>ssl-client.xml Properties:</p>
+      <table>
+        <tr>
+          <th>Property</th>
+          <th>Default Value</th>
+          <th>Explanation</th>
+        </tr>
+        <tr>
+          <td>ssl.client.truststore.type</td>
+          <td>jks</td>
+          <td>Truststore file type</td>
+        </tr>
+        <tr>
+          <td>ssl.client.truststore.location</td>
+          <td>NONE</td>
+          <td>Truststore file location</td>
+        </tr>
+        <tr>
+          <td>ssl.client.truststore.password</td>
+          <td>NONE</td>
+          <td>Truststore file password</td>
+        </tr>
+        <tr>
+          <td>ssl.client.truststore.reload.interval</td>
+          <td>10000</td>
+          <td>Truststore reload interval, in milliseconds</td>
+        </tr>
+      </table>
+
+      <p>ssl-client.xml Example:</p>
+      <pre>
+&lt;configuration&gt;
+  &lt;property&gt;
+    &lt;name&gt;ssl.client.truststore.type&lt;/name&gt;
+    &lt;value&gt;jks&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;ssl.client.truststore.location&lt;/name&gt;
+    &lt;value&gt;/path/to/truststore.jks&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;ssl.client.truststore.password&lt;/name&gt;
+    &lt;value&gt;clientfoo&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;ssl.client.truststore.reload.interval&lt;/name&gt;
+    &lt;value&gt;10000&lt;/value&gt;
+  &lt;/property&gt;
+&lt;/configuration&gt;
+                    </pre>
+
+      <li>Place 'ssl-client.xml' in a location on the classpath, such as the NiFi configuration directory.</li>
+
+      <li>Set <i>hadoop.ssl.client.conf</i> in the 'core-site.xml' used by the HDFS processors to the name of 'ssl-client.xml':</li>
+      <pre>
+&lt;configuration&gt;
+    &lt;property&gt;
+      &lt;name&gt;fs.defaultFS&lt;/name&gt;
+      &lt;value&gt;swebhdfs://{namenode.hostname:port}&lt;/value&gt;
+    &lt;/property&gt;
+    &lt;property&gt;
+      &lt;name&gt;hadoop.ssl.client.conf&lt;/name&gt;
+      &lt;value&gt;ssl-client.xml&lt;/value&gt;
+    &lt;/property&gt;
+&lt;/configuration&gt;
+                  </pre>
+    </ol>
+  </p>
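As a quick sanity check before pointing 'core-site.xml' at it, the ssl-client.xml can be parsed to confirm the truststore properties from the table above are present. This is an illustrative sketch, not part of Hadoop or NiFi; the property names come from the table in this document.

```python
# Sketch: verify that an ssl-client.xml (as described above) defines the
# truststore properties Hadoop expects. Property names follow the table
# in this document; adjust the file contents for your environment.
import xml.etree.ElementTree as ET

REQUIRED = {
    "ssl.client.truststore.type",
    "ssl.client.truststore.location",
    "ssl.client.truststore.password",
}

def check_ssl_client_xml(xml_text: str) -> set:
    """Return the set of required truststore properties that are missing."""
    root = ET.fromstring(xml_text)
    defined = {prop.findtext("name") for prop in root.findall("property")}
    return REQUIRED - defined

example = """<configuration>
  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
  </property>
  <property>
    <name>ssl.client.truststore.location</name>
    <value>/path/to/truststore.jks</value>
  </property>
  <property>
    <name>ssl.client.truststore.password</name>
    <value>clientfoo</value>
  </property>
</configuration>"""

print(check_ssl_client_xml(example))  # set() -> nothing missing
```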
+</body>
+
+</html>

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.PutHDFS/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHDFS</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutHDFS</h1><h2>Description: </h2><p>Write FlowFile data to Hadoop 
Distributed File System (HDFS)</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>hadoop, HDFS, put, copy, filesystem</p><h3>Properties: </h3><p>In the 
list below, the names of required properties appear in <strong>bold</strong>. 
Any other properties (not in bold) are considered optional. The table also 
indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th
 >Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
 >id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
 >id="allowable-values"></td><td id="description">A file or comma separated 
 >list of files which contains the Hadoop file system configuration. Without 
 >this, Hadoop will search the classpath for a 'core-site.xml' and 
 >'hdfs-site.xml' file or will revert to a default configuration. To use 
 >swebhdfs, see 'Additional Details' section of PutHDFS's 
 >documentation.<br/><strong>Supports Expression Language: true (will be 
 >evaluated using variable registry only)</strong></td></tr><tr><td 
 >id="name">Kerberos Credentials Service</td><td id="default-value"></td><td 
 >id="allowable-values"><strong>Controller Service API: 
 ></strong><br/>KerberosCredentialsService<br/><strong>Implementation: 
 ></strong><a 
 >href="../../../nifi-kerberos-credentials-service-nar/1.9.0/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><
 td id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos principal to 
authenticate as. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos keytab associated with 
the principal. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Relogin Period</td><td id="default-value">4 hours</td><td 
id="allowable-values"></td><td id="description">Period of time which should 
pass before a
 ttempting a kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Additional Classpath Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of paths 
to files and/or directories that will be added to the classpath. When 
specifying a directory, all files within the directory will be added to the 
classpath, but further sub-directories will not be included.</td></tr><tr><td 
id="name"><strong>Directory</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The parent HDFS directory to 
which files should be written. The directory will be created if it doesn't 
exist.<br/><strong>Supports Expression Language: true (will be evaluated using 
flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong
 >Conflict Resolution Strategy</strong></td><td id="default-value">fail</td><td 
 >id="allowable-values"><ul><li>replace <img 
 >src="../../../../../html/images/iconInfo.png" alt="Replaces the existing file 
 >if any." title="Replaces the existing file if any."></img></li><li>ignore 
 ><img src="../../../../../html/images/iconInfo.png" alt="Ignores the flow file 
 >and routes it to success." title="Ignores the flow file and routes it to 
 >success."></img></li><li>fail <img 
 >src="../../../../../html/images/iconInfo.png" alt="Penalizes the flow file 
 >and routes it to failure." title="Penalizes the flow file and routes it to 
 >failure."></img></li><li>append <img 
 >src="../../../../../html/images/iconInfo.png" alt="Appends to the existing 
 >file if any, creates a new file otherwise." title="Appends to the existing 
 >file if any, creates a new file otherwise."></img></li></ul></td><td 
 >id="description">Indicates what should happen when a file with the same name 
 >already exists in the output directory</td></tr><tr><t
 d id="name">Block Size</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Size of each block as written 
to HDFS. This overrides the Hadoop Configuration</td></tr><tr><td id="name">IO 
Buffer Size</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Amount of memory to use to buffer file contents during IO. 
This overrides the Hadoop Configuration</td></tr><tr><td 
id="name">Replication</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Number of times that HDFS will 
replicate each file. This overrides the Hadoop Configuration</td></tr><tr><td 
id="name">Permissions umask</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A umask represented as an octal 
number which determines the permissions of files written to HDFS. This 
overrides the Hadoop property "fs.permission.umask-mode".  If this property and 
"fs.permission.umask-mode" are undefined, the Hadoop defaul
 t "022" will be used.</td></tr><tr><td id="name">Remote Owner</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Changes the owner of the HDFS file to this value after it is 
written. This only works if NiFi is running as a user that has HDFS super user 
privilege to change owner<br/><strong>Supports Expression Language: true (will 
be evaluated using flow file attributes and variable 
registry)</strong></td></tr><tr><td id="name">Remote Group</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Changes the group of the HDFS file to this value after it is 
written. This only works if NiFi is running as a user that has HDFS super user 
privilege to change group<br/><strong>Supports Expression Language: true (will 
be evaluated using flow file attributes and variable 
registry)</strong></td></tr><tr><td id="name"><strong>Compression 
codec</strong></td><td id="default-value">NONE</td><td 
id="allowable-values"><ul><li>NONE <img src
 ="../../../../../html/images/iconInfo.png" alt="No compression" title="No 
compression"></img></li><li>DEFAULT <img 
src="../../../../../html/images/iconInfo.png" alt="Default ZLIB compression" 
title="Default ZLIB compression"></img></li><li>BZIP <img 
src="../../../../../html/images/iconInfo.png" alt="BZIP compression" 
title="BZIP compression"></img></li><li>GZIP <img 
src="../../../../../html/images/iconInfo.png" alt="GZIP compression" 
title="GZIP compression"></img></li><li>LZ4 <img 
src="../../../../../html/images/iconInfo.png" alt="LZ4 compression" title="LZ4 
compression"></img></li><li>LZO <img 
src="../../../../../html/images/iconInfo.png" alt="LZO compression - it assumes 
LD_LIBRARY_PATH has been set and jar is available" title="LZO compression - it 
assumes LD_LIBRARY_PATH has been set and jar is 
available"></img></li><li>SNAPPY <img 
src="../../../../../html/images/iconInfo.png" alt="Snappy compression" 
title="Snappy compression"></img></li><li>AUTOMATIC <img src="../../../../../h
 tml/images/iconInfo.png" alt="Will attempt to automatically detect the 
compression codec." title="Will attempt to automatically detect the compression 
codec."></img></li></ul></td><td id="description">No Description 
Provided.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Files
 that have been successfully written to HDFS are transferred to this 
relationship</td></tr><tr><td>failure</td><td>Files that could not be written 
to HDFS for some reason are transferred to this 
relationship</td></tr></table><h3>Reads Attributes: </h3><table 
id="reads-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS comes from the value of this 
attribute.</td></tr></table><h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS is stored in this attribute.<
 /td></tr><tr><td>absolute.hdfs.path</td><td>The absolute path to the file on 
HDFS is stored in this attribute.</td></tr></table><h3>State management: 
</h3>This component does not store state.<h3>Restricted: </h3><table 
id="restrictions"><tr><th>Required 
Permission</th><th>Explanation</th></tr><tr><td>write 
filesystem</td><td>Provides operator the ability to delete any file that NiFi 
has access to in HDFS or the local filesystem.</td></tr></table><h3>Input 
requirement: </h3>This component requires an incoming relationship.<h3>System 
Resource Considerations:</h3>None specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a></p></body></html>
\ No newline at end of file
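The "Permissions umask" behavior described in the PutHDFS table above can be illustrated with a small sketch. The 666 base mode for files and 777 for directories mirror the usual HDFS/POSIX defaults; this is an illustration of the umask arithmetic, not NiFi or Hadoop code.

```python
# Hypothetical illustration of how an octal umask (the "Permissions umask"
# property of PutHDFS, or Hadoop's "fs.permission.umask-mode") determines
# the permissions of a newly created file or directory.
def apply_umask(base: int, umask: int) -> int:
    """Clear the umask bits from the base creation mode."""
    return base & ~umask

# Files typically start from a 666 base, directories from 777.
file_mode = apply_umask(0o666, 0o022)
dir_mode = apply_umask(0o777, 0o022)
print(oct(file_mode))  # 0o644
print(oct(dir_mode))   # 0o755
```

With the Hadoop default umask of "022", files therefore come out as 644 (rw-r--r--) and directories as 755 (rwxr-xr-x).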

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.9.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHDFSEvents</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">GetHDFSEvents</h1><h2>Description: </h2><p>This processor polls the 
notification events provided by the HdfsAdmin API. Since this uses the 
HdfsAdmin APIs it is required to run as an HDFS super user. Currently there are 
six types of events (append, close, create, metadata, rename, and unlink). 
Please see org.apache.hadoop.hdfs.inotify.Event documentation for full 
explanations of each event. This processor will poll for new events based on a 
defined duration. For each event received a new flow file will be created with 
the expected attributes and the event itself serialized to JSON and written to 
th
 e flow file's content. For example, if event.type is APPEND then the content 
of the flow file will contain a JSON file containing the information about the 
append event. If successful the flow files are sent to the 'success' 
relationship. Be careful of where the generated flow files are stored. If the 
flow files are stored in one of processor's watch directories there will be a 
never ending flow of events. It is also important to be aware that this 
processor must consume all events. The filtering must happen within the 
processor. This is because the HDFS admin's event notifications API does not 
have filtering.</p><h3>Tags: </h3><p>hadoop, events, inotify, notifications, 
filesystem</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide
 .html">NiFi Expression Language</a>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name">Hadoop Configuration 
Resources</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A file or comma separated list of files which contains the 
Hadoop file system configuration. Without this, Hadoop will search the 
classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a 
default configuration. To use swebhdfs, see 'Additional Details' section of 
PutHDFS's documentation.<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Credentials Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>KerberosCredentialsService<br/><strong>Implementation: 
</strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.9.0/org.apache.ni
 
fi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos principal to 
authenticate as. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos keytab associated with 
the principal. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Relogin Period</td><td id="default-value">4 hours</td><td 
id="al
 lowable-values"></td><td id="description">Period of time which should pass 
before attempting a kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Additional Classpath Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of paths 
to files and/or directories that will be added to the classpath. When 
specifying a directory, all files within the directory will be added to the 
classpath, but further sub-directories will not be included.</td></tr><tr><td 
id="name"><strong>Poll Duration</strong></td><td id="default-value">1 
second</td><td id="allowable-values"></td><td id="description">The time before 
the polling method returns with the next batch of events if they exist. It may 
exceed this amount of time by up to the time required for an RPC to the 
NameNode.</td></tr><tr><td id="name"><strong>HDFS Path to 
Watch</strong></td><td id="defaul
 t-value"></td><td id="allowable-values"></td><td id="description">The HDFS 
path to get event notifications for. This property accepts both expression 
language and regular expressions. This will be evaluated during the OnScheduled 
phase.<br/><strong>Supports Expression Language: true (will be evaluated using 
variable registry only)</strong></td></tr><tr><td id="name"><strong>Ignore 
Hidden Files</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true and the final component of the path associated with a 
given event starts with a '.' then that event will not be 
processed.</td></tr><tr><td id="name"><strong>Event Types to Filter 
On</strong></td><td id="default-value">append, close, create, metadata, rename, 
unlink</td><td id="allowable-values"></td><td id="description">A 
comma-separated list of event types to process. Valid event types are: append, 
close, create, metadata, rename, and unlink. Case does
  not matter.</td></tr><tr><td id="name"><strong>IOException Retries During 
Event Polling</strong></td><td id="default-value">3</td><td 
id="allowable-values"></td><td id="description">According to the HDFS admin API 
for event polling it is good to retry at least a few times. This number defines 
how many times the poll will be retried if it throws an 
IOException.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 flow file with updated information about a specific event will be sent to this 
relationship.</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>mime.type</td><td>This
 is always 
application/json.</td></tr><tr><td>hdfs.inotify.event.type</td><td>This will 
specify the specific HDFS notification event type. Currently there are six 
types of events (append, close, create, metadata, ren
 ame, and unlink).</td></tr><tr><td>hdfs.inotify.event.path</td><td>The 
specific path that the event is tied to.</td></tr></table><h3>State management: 
</h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>The
 last used transaction id is stored. This is used 
</td></tr></table><h3>Restricted: </h3>This component is not 
restricted.<h3>Input requirement: </h3>This component does not allow an 
incoming relationship.<h3>System Resource Considerations:</h3>None 
specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.FetchHDFS/index.html">FetchHDFS</a>, 
<a href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a>, 
<a 
href="../org.apache.nifi.processors.hadoop.ListHDFS/index.html">ListHDFS</a></p></body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseCells/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseCells/additionalDetails.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseCells/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseCells/additionalDetails.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,39 @@
+<!DOCTYPE html>
+<html lang="en">
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<head>
+    <meta charset="utf-8" />
+    <title>DeleteHBaseCells</title>
+    <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css" />
+</head>
+
+<body>
+<!-- Processor Documentation 
================================================== -->
+<h2>Overview</h2>
+<p>
+    This processor provides the ability to do deletes against one or more 
HBase cells, without having to delete the entire row. It should
+    be used as the primary delete method when visibility labels are in use and 
the cells have different visibility labels. Each line in
+    the flowfile body is a fully qualified cell (row id, column family, column 
qualifier and visibility labels if applicable). The separator
+    that separates each piece of the fully qualified cell is configurable, but 
<strong>::::</strong> is the default value.
+</p>
+<h2>Example FlowFile</h2>
+<pre>
+row1::::user::::name
+row1::::user::::address::::PII
+row1::::user::::billing_code_1::::PII&amp;BILLING
+</pre>
+</body>
+</html>
\ No newline at end of file
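The fully qualified cell lines in the example flowfile above can be built programmatically. The following Python sketch illustrates the documented line format; the helper function itself is a hypothetical illustration, not part of NiFi:

```python
# Hypothetical helper for building DeleteHBaseCells flowfile content.
# Each line is row id, column family, column qualifier, and an optional
# visibility expression, joined by the processor's separator (default "::::").
SEPARATOR = "::::"

def delete_cell_line(row_id, family, qualifier, visibility=None):
    parts = [row_id, family, qualifier]
    if visibility is not None:
        parts.append(visibility)
    return SEPARATOR.join(parts)

flowfile_content = "\n".join([
    delete_cell_line("row1", "user", "name"),
    delete_cell_line("row1", "user", "address", "PII"),
    delete_cell_line("row1", "user", "billing_code_1", "PII&BILLING"),
])
print(flowfile_content)
```

If the Separator property is changed, only `SEPARATOR` would differ; the per-line structure stays the same.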

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseCells/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseCells/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseCells/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseCells/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>DeleteHBaseCells</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">DeleteHBaseCells</h1><h2>Description: </h2><p>This processor allows the 
user to delete individual HBase cells by specifying one or more lines in the 
flowfile content that are a sequence composed of row ID, column family, column 
qualifier and associated visibility labels if visibility labels are enabled and 
in use. A user-defined separator is used to separate each of these pieces of 
data on each line, with :::: being the default separator.</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>hbase, delete, cell, cells, visibility</p><h3>Properties: </h3><p>In the
  list below, the names of required properties appear in <strong>bold</strong>. 
Any other properties (not in bold) are considered optional. The table also 
indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>HBaseClientService<br/><strong>Implementations: </strong><a 
href="../../../nifi-hbase_2-client-service-nar/1.9.0/org.apache.nifi.hbase.HBase_2_ClientService/index.html">HBase_2_ClientService</a><br/><a
 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.9.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing HBase.</td></tr><tr><td id="name"><strong>Table 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the HBase 
Table.<br/><strong>Supports Expression Language: true (will be evaluated using 
flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Separator</strong></td><td id="default-value">::::</td><td 
id="allowable-values"></td><td id="description">Each line of the flowfile 
content is separated into components for building a delete using this separator. 
It should be something other than a single colon or a comma because these are 
values that are associated with columns and visibility labels respectively. To 
delete a row with ID xyz, column family abc, column qualifier def and 
visibility label PII&amp;PHI, one would specify 
xyz::::abc::::def::::PII&amp;PHI given the default value<br/><strong>Supports 
Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship after it has been successfully stored 
in HBase</td></tr><tr><td>failure</td><td>A FlowFile is routed to this 
relationship if it cannot be sent to HBase</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>error.line</td><td>The
 line number of the error.</td></tr><tr><td>error.msg</td><td>The message 
explaining the error.</td></tr></table><h3>State management: </h3>This 
component does not store state.<h3>Restricted: </h3>This component is not 
restricted.<h3>Input requirement: </h3>This component requires an incoming 
relationship.<h3>System Resource Considerations:</h3>None 
specified.</body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseRow/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseRow/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseRow/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.DeleteHBaseRow/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>DeleteHBaseRow</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">DeleteHBaseRow</h1><h2>Description: </h2><p>Delete HBase records 
individually or in batches. The input can be a single row ID in the flowfile 
content, one row ID per line, or multiple row IDs separated by a configurable 
separator character (default is a comma).</p><h3>Tags: </h3><p>delete, 
hbase</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>HBaseClientService<br/><strong>Implementations: </strong><a 
href="../../../nifi-hbase_2-client-service-nar/1.9.0/org.apache.nifi.hbase.HBase_2_ClientService/index.html">HBase_2_ClientService</a><br/><a
 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.9.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the HBase Table.<br/><strong>Supports Expression Language: true (will 
be evaluated using flow file attributes and
  variable registry)</strong></td></tr><tr><td id="name">Row Identifier</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the Row ID to use when deleting data from 
HBase<br/><strong>Supports Expression Language: true (will be evaluated using 
flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Row ID Location</strong></td><td 
id="default-value">content</td><td id="allowable-values"><ul><li>FlowFile 
content <img src="../../../../../html/images/iconInfo.png" alt="Get the row 
key(s) from the flowfile content." title="Get the row key(s) from the flowfile 
content."></img></li><li>FlowFile attributes <img 
src="../../../../../html/images/iconInfo.png" alt="Get the row key from an 
expression language statement." title="Get the row key from an expression 
language statement."></img></li></ul></td><td id="description">The location of 
the row ID to use for building the delete. Can be from the content or an 
expression 
 language statement.</td></tr><tr><td id="name"><strong>Flowfile Fetch 
Count</strong></td><td id="default-value">5</td><td 
id="allowable-values"></td><td id="description">The number of flowfiles to 
fetch per run.</td></tr><tr><td id="name"><strong>Batch Size</strong></td><td 
id="default-value">50</td><td id="allowable-values"></td><td 
id="description">The number of deletes to send per batch.</td></tr><tr><td 
id="name"><strong>Delete Row Key Separator</strong></td><td 
id="default-value">,</td><td id="allowable-values"></td><td 
id="description">The separator character(s) that separate multiple row keys 
when multiple row keys are provided in the flowfile 
content<br/><strong>Supports Expression Language: true (will be evaluated using 
flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name">Visibility Label</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">If visibility labels are 
enabled, a row cannot be deleted without supplying its visibility label(s) in the delete request. Note: this visibility label will 
be applied to all cells within the row that is specified. If some cells have 
different visibility labels, they will not be deleted. When that happens, the 
failure to delete will be considered a success because HBase does not report it 
as a failure.<br/><strong>Supports Expression Language: true (will be evaluated 
using flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">The character set used to encode the row key for 
HBase.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship after it has been successfully stored 
in HBase</td></tr><tr><td>failure</td><td>A FlowFile is routed to this 
relationship if it cannot be sent to HBase</td></tr></table><h3>Reads Attributes: </h3>None specified.<h3>Writes Attributes: 
</h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>restart.index</td><td>If
 a delete batch fails, 'restart.index' attribute is added to the FlowFile and 
sent to 'failure' relationship, so that this processor can retry from there 
when the same FlowFile is routed 
again.</td></tr><tr><td>rowkey.start</td><td>The first rowkey in the flowfile. 
Only written when using the flowfile's content for the row 
IDs.</td></tr><tr><td>rowkey.end</td><td>The last rowkey in the flowfile. Only 
written when using the flowfile's content for the row 
IDs.</td></tr></table><h3>State management: </h3>This component does not store 
state.<h3>Restricted: </h3>This component is not restricted.<h3>Input 
requirement: </h3>This component requires an incoming relationship.<h3>System 
Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file
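The row-key handling documented above (splitting flowfile content on the Delete Row Key Separator and issuing deletes in batches of Batch Size) can be sketched as follows. This is an illustration of the described behavior, not the processor's actual Java implementation:

```python
# Sketch: split row keys out of flowfile content and group them into
# delete batches, mirroring the Delete Row Key Separator (default ",")
# and Batch Size (default 50) properties described above.
def batch_row_keys(content, separator=",", batch_size=50):
    keys = [k.strip() for k in content.split(separator) if k.strip()]
    return [keys[i:i + batch_size] for i in range(0, len(keys), batch_size)]

batches = batch_row_keys("row1,row2,row3,row4,row5", batch_size=2)
# The first and last keys correspond to the rowkey.start and
# rowkey.end attributes written when content-based row IDs are used.
rowkey_start = batches[0][0]
rowkey_end = batches[-1][-1]
```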

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.FetchHBaseRow/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.9.0/org.apache.nifi.hbase.FetchHBaseRow/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>FetchHBaseRow</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">FetchHBaseRow</h1><h2>Description: </h2><p>Fetches a row from an HBase 
table. The Destination property controls whether the cells are added as flow 
file attributes, or the row is written to the flow file content as JSON. This 
processor may be used to fetch a fixed row on an interval by specifying the 
table and row id directly in the processor, or it may be used to dynamically 
fetch rows by referencing the table and row id from incoming flow 
files.</p><h3>Tags: </h3><p>hbase, scan, fetch, get, enrich</p><h3>Properties: 
</h3><p>In the list below, the names of required properties appear in 
<strong>bold</strong>. Any other properties (not in bold) are considered optional. The 
table also indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>HBase Client Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>HBaseClientService<br/><strong>Implementations: </strong><a 
href="../../../nifi-hbase_2-client-service-nar/1.9.0/org.apache.nifi.hbase.HBase_2_ClientService/index.html">HBase_2_ClientService</a><br/><a
 
href="../../../nifi-hbase_1_1_2-client-service-nar/1.9.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html">HBase_1_1_2_ClientService</a></td><td
 id="description">Specifies the Controller Service to use for accessing 
HBase.</td></tr><tr><td id="name"><strong>Table Name</strong></td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">The name of the HBase Table to fetch 
from.<br/><strong>Supports Expression Language: true (will be evaluated using 
flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Row Identifier</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The identifier of the row to 
fetch.<br/><strong>Supports Expression Language: true (will be evaluated using 
flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name">Columns</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">An optional comma-separated 
list of "&lt;colFamily&gt;:&lt;colQualifier&gt;" pairs to fetch. To return all 
columns for a given family, leave off the qualifier such as 
"&lt;colFamily1&gt;,&lt;colFamily2&gt;".<br/><strong>Supports Expression 
Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Authorizations</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
list of authorizations to pass to the scanner. This will be ignored if cell 
visibility labels are not in use.<br/><strong>Supports Expression Language: 
true (will be evaluated using flow file attributes and variable 
registry)</strong></td></tr><tr><td 
id="name"><strong>Destination</strong></td><td 
id="default-value">flowfile-attributes</td><td 
id="allowable-values"><ul><li>flowfile-attributes <img 
src="../../../../../html/images/iconInfo.png" alt="Adds the JSON document 
representing the row that was fetched as an attribute named hbase.row. The 
format of the JSON document is determined by the JSON Format property. NOTE: 
Fetching many large rows into attributes may have a negative impact on 
performance." title="Adds the JSON document representing the row that was 
fetched as an attribute named hbase.row. The format of the JSON document
  is determined by the JSON Format property. NOTE: Fetching many large rows 
into attributes may have a negative impact on 
performance."></img></li><li>flowfile-content <img 
src="../../../../../html/images/iconInfo.png" alt="Overwrites the FlowFile 
content with a JSON document representing the row that was fetched. The format 
of the JSON document is determined by the JSON Format property." 
title="Overwrites the FlowFile content with a JSON document representing the 
row that was fetched. The format of the JSON document is determined by the JSON 
Format property."></img></li></ul></td><td id="description">Indicates whether 
the row fetched from HBase is written to FlowFile content or FlowFile 
Attributes.</td></tr><tr><td id="name"><strong>JSON Format</strong></td><td 
id="default-value">full-row</td><td id="allowable-values"><ul><li>full-row <img 
src="../../../../../html/images/iconInfo.png" alt="Creates a JSON document with 
the format: {&quot;row&quot;:&lt;row-id&gt;, &quot;cells&quot;:[{
 &quot;fam&quot;:&lt;col-fam&gt;, &quot;qual&quot;:&lt;col-val&gt;, 
&quot;val&quot;:&lt;value&gt;, &quot;ts&quot;:&lt;timestamp&gt;}]}." 
title="Creates a JSON document with the format: 
{&quot;row&quot;:&lt;row-id&gt;, 
&quot;cells&quot;:[{&quot;fam&quot;:&lt;col-fam&gt;, 
&quot;qual&quot;:&lt;col-val&gt;, &quot;val&quot;:&lt;value&gt;, 
&quot;ts&quot;:&lt;timestamp&gt;}]}."></img></li><li>col-qual-and-val <img 
src="../../../../../html/images/iconInfo.png" alt="Creates a JSON document with 
the format: {&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;, 
&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;." title="Creates a JSON 
document with the format: 
{&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;, 
&quot;&lt;col-qual&gt;&quot;:&quot;&lt;value&gt;&quot;."></img></li></ul></td><td
 id="description">Specifies how to represent the HBase row as a JSON 
document.</td></tr><tr><td id="name"><strong>JSON Value 
Encoding</strong></td><td id="default-value">none</td><td id="allowable-values"><ul><li>none <img src="../../../../../html/images/iconInfo.png" 
alt="Creates a String using the bytes of given data and the given Character 
Set." title="Creates a String using the bytes of given data and the given 
Character Set."></img></li><li>base64 <img 
src="../../../../../html/images/iconInfo.png" alt="Creates a Base64 encoded 
String of the given data." title="Creates a Base64 encoded String of the given 
data."></img></li></ul></td><td id="description">Specifies how to represent row 
ids, column families, column qualifiers, and values when stored in FlowFile 
attributes, or written to JSON.</td></tr><tr><td id="name"><strong>Encode 
Character Set</strong></td><td id="default-value">UTF-8</td><td 
id="allowable-values"></td><td id="description">The character set used to 
encode the JSON representation of the row.</td></tr><tr><td 
id="name"><strong>Decode Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">The character set used to decode data from HBase.</td></tr></table><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 successful fetches are routed to this 
relationship.</td></tr><tr><td>failure</td><td>All failed fetches are routed to 
this relationship.</td></tr><tr><td>not found</td><td>All fetches where the row 
id is not found are routed to this relationship.</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>hbase.table</td><td>The
 name of the HBase table that the row was fetched 
from</td></tr><tr><td>hbase.row</td><td>A JSON document representing the row. 
This property is only written when a Destination of flowfile-attributes is 
selected.</td></tr><tr><td>mime.type</td><td>Set to application/json when using 
a Destination of flowfile-content, not set or modified 
otherwise</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This 
component is not restricted.<h3>Input requirement: </h3>This component requires 
an incoming relationship.<h3>System Resource Considerations:</h3>None 
specified.</body></html>
\ No newline at end of file
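The full-row JSON Format described above can be illustrated with a short Python sketch. Only the field names (`row`, `cells`, `fam`, `qual`, `val`, `ts`) come from the property description; the builder function is a hypothetical illustration:

```python
import json

# Hypothetical builder for the "full-row" JSON document shape:
# {"row": <row-id>, "cells": [{"fam": ..., "qual": ..., "val": ..., "ts": ...}]}
def full_row_json(row_id, cells):
    return json.dumps({
        "row": row_id,
        "cells": [
            {"fam": fam, "qual": qual, "val": val, "ts": ts}
            for (fam, qual, val, ts) in cells
        ],
    })

doc = full_row_json("row1", [("user", "name", "alice", 1550000000000)])
```

With a Destination of flowfile-attributes, a document like `doc` would appear in the `hbase.row` attribute; with flowfile-content, it would replace the FlowFile content and `mime.type` would be set to application/json.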

