Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.GetHDFSSequenceFile/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHDFSSequenceFile</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Fetch sequence 
files from Hadoop Distributed File System (HDFS) into FlowFiles</p><h3>Tags: 
</h3><p>hadoop, HDFS, get, fetch, ingest, source, sequence 
file</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, whether a 
property supports the <a href="../../html/expression-language-guide.html">NiFi 
Expression Language</a>, and whether a property is considered "sensitive", 
meaning that its value will be encrypted. Before entering a value in a 
sensitive property, ensure that the <strong>nifi.properties</strong> file has 
an entry for the property <strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable
Values</th><th>Description</th></tr><tr><td id="name">Hadoop Configuration 
Resources</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A file or comma separated list of files which contains the 
Hadoop file system configuration. Without this, Hadoop will search the 
classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a 
default configuration.</td></tr><tr><td 
id="name"><strong>Directory</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The HDFS directory from which 
files should be read</td></tr><tr><td id="name"><strong>Recurse 
Subdirectories</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to pull files from subdirectories of the 
HDFS directory</td></tr><tr><td id="name"><strong>Keep Source 
File</strong></td><td id="default-value">false</td><td
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Determines whether to delete the file from HDFS after it has 
been successfully transferred. If true, the file will be fetched repeatedly. 
This is intended for testing only.</td></tr><tr><td id="name">File Filter 
Regex</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A Java Regular Expression for filtering Filenames; if a filter 
is supplied then only files whose names match that Regular Expression will be 
fetched, otherwise all files will be fetched</td></tr><tr><td 
id="name"><strong>Filter Match Name Only</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true then File Filter Regex will match on just the 
filename, otherwise subdirectory names will be included with filename in the 
regex comparison</td></tr><tr><td id="name"><strong>Ignore Dotted 
Files</strong></td><td id="default-value">true</td><td
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true, files whose names begin with a dot (".") will be 
ignored</td></tr><tr><td id="name"><strong>Minimum File Age</strong></td><td 
id="default-value">0 sec</td><td id="allowable-values"></td><td 
id="description">The minimum age that a file must be in order to be pulled; any 
file younger than this amount of time (based on last modification date) will be 
ignored</td></tr><tr><td id="name">Maximum File Age</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
maximum age that a file must be in order to be pulled; any file older than this 
amount of time (based on last modification date) will be 
ignored</td></tr><tr><td id="name"><strong>Polling Interval</strong></td><td 
id="default-value">0 sec</td><td id="allowable-values"></td><td 
id="description">Indicates how long to wait between performing directory 
listings</td></tr><tr><td id="name"><strong>Batch Size</strong></td><td id="default-value">100</td><td
id="allowable-values"></td><td id="description">The maximum number of files to 
pull in each iteration, based on run schedule.</td></tr><tr><td id="name">IO 
Buffer Size</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Amount of memory to use to buffer file contents during IO. 
This overrides the Hadoop Configuration</td></tr><tr><td 
id="name"><strong>FlowFile Content</strong></td><td id="default-value">VALUE 
ONLY</td><td id="allowable-values"><ul><li>VALUE ONLY</li><li>KEY VALUE 
PAIR</li></ul></td><td id="description">Indicate if the content is to be both 
the key and value of the Sequence File, or just the 
value.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 files retrieved from HDFS are transferred to this 
relationship</td></tr><tr><td>passthrough</td><td>If this processor has an 
input queue for some reason, then FlowFiles arriving on that input are transferred to
this relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the relative path of the file's directory on HDFS. For example, 
if the Directory property is set to /tmp, then files picked up from /tmp will 
have the path attribute set to "./". If the Recurse Subdirectories property is 
set to true and a file is picked up from /tmp/abc/1/2/3, then the path 
attribute will be set to "abc/1/2/3".</td></tr></table><h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetHDFSSequenceFile</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Fetch sequence 
files from Hadoop Distributed File System (HDFS) into FlowFiles</p><h3>Tags: 
</h3><p>hadoop, HDFS, get, fetch, ingest, source, sequence 
file</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name">Hadoop Configuration 
Resources</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A file or comma separated list of files which contains the 
Hadoop file system configuration. Without this, Hadoop will search the 
classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a default
configuration.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
HDFS directory from which files should be read</td></tr><tr><td 
id="name"><strong>Recurse Subdirectories</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to pull files from subdirectories of the 
HDFS directory</td></tr><tr><td id="name"><strong>Keep Source 
File</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Determines whether to delete the file from HDFS after it has 
been successfully transferred. If true, the file will be fetched repeatedly. 
This is intended for testing only.</td></tr><tr><td id="name">File Filter 
Regex</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">A Java Regular Expression for filtering Filenames; if a
filter is supplied then only files whose names match that Regular Expression 
will be fetched, otherwise all files will be fetched</td></tr><tr><td 
id="name"><strong>Filter Match Name Only</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true then File Filter Regex will match on just the 
filename, otherwise subdirectory names will be included with filename in the 
regex comparison</td></tr><tr><td id="name"><strong>Ignore Dotted 
Files</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true, files whose names begin with a dot (".") will be 
ignored</td></tr><tr><td id="name"><strong>Minimum File Age</strong></td><td 
id="default-value">0 sec</td><td id="allowable-values"></td><td 
id="description">The minimum age that a file must be in order to 
 be pulled; any file younger than this amount of time (based on last 
modification date) will be ignored</td></tr><tr><td id="name">Maximum File 
Age</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">The maximum age that a file must be in order to be pulled; any 
file older than this amount of time (based on last modification date) will be 
ignored</td></tr><tr><td id="name"><strong>Polling Interval</strong></td><td 
id="default-value">0 sec</td><td id="allowable-values"></td><td 
id="description">Indicates how long to wait between performing directory 
listings</td></tr><tr><td id="name"><strong>Batch Size</strong></td><td 
id="default-value">100</td><td id="allowable-values"></td><td 
id="description">The maximum number of files to pull in each iteration, based 
on run schedule.</td></tr><tr><td id="name">IO Buffer Size</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Amount of memory to use to buffer file contents during IO. This overrides the Hadoop Configuration</td></tr><tr><td
id="name">Compression codec</td><td id="default-value"></td><td 
id="allowable-values"><ul><li>org.apache.hadoop.io.compress.BZip2Codec</li><li>org.apache.hadoop.io.compress.DefaultCodec</li><li>org.apache.hadoop.io.compress.GzipCodec</li><li>org.apache.hadoop.io.compress.Lz4Codec</li><li>org.apache.hadoop.io.compress.SnappyCodec</li></ul></td><td
 id="description">No Description Provided.</td></tr><tr><td 
id="name"><strong>FlowFile Content</strong></td><td id="default-value">VALUE 
ONLY</td><td id="allowable-values"><ul><li>VALUE ONLY</li><li>KEY VALUE 
PAIR</li></ul></td><td id="description">Indicate if the content is to be both 
the key and value of the Sequence File, or just the 
value.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>passthrough</td><td>If
 this processor has an input queue for some reason, then FlowFiles arriving on 
that input are transferred to this relationship</td></tr><tr><td>success</td><td>All files retrieved
from HDFS are transferred to this relationship</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the relative path of the file's directory on HDFS. For example, 
if the Directory property is set to /tmp, then files picked up from /tmp will 
have the path attribute set to "./". If the Recurse Subdirectories property is 
set to true and a file is picked up from /tmp/abc/1/2/3, then the path 
attribute will be set to "abc/1/2/3".</td></tr></table><h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file
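A note on the GetHDFSSequenceFile filter properties above: File Filter Regex and Filter Match Name Only combine as sketched below. This is an illustrative paraphrase of the documented behavior, not NiFi's actual implementation; the helper name `should_fetch` is hypothetical, and whole-input matching (as with Java's `Pattern.matches`) is an assumption.

```python
import posixpath
import re

def should_fetch(relative_path, filter_regex=None, match_name_only=True):
    """Paraphrase of the documented filter semantics: with no regex every
    file is fetched; otherwise the regex must match either the filename
    alone (Filter Match Name Only = true) or the subdirectory-qualified
    name (Filter Match Name Only = false)."""
    if filter_regex is None:
        return True  # "otherwise all files will be fetched"
    target = posixpath.basename(relative_path) if match_name_only else relative_path
    return re.fullmatch(filter_regex, target) is not None

print(should_fetch("abc/1/data.seq", r".*\.seq"))                        # True
print(should_fetch("abc/1/data.seq", r"abc/.*", match_name_only=False))  # True
```

With Filter Match Name Only left at its default of true, only `data.seq` is tested against the regex; setting it to false brings `abc/1/` into the comparison.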

Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.ListHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.ListHDFS/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.ListHDFS/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.ListHDFS/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ListHDFS</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Retrieves a 
listing of files from HDFS. For each file that is listed in HDFS, creates a 
FlowFile that represents the HDFS file so that it can be fetched in conjunction 
with FetchHDFS. This Processor is designed to run on Primary Node only in a 
cluster. If the primary node changes, the new Primary Node will pick up where 
the previous node left off without duplicating all of the data. Unlike GetHDFS, 
this Processor does not delete any data from HDFS.</p><h3>Tags: </h3><p>hadoop, 
HDFS, get, list, ingest, source, filesystem</p><h3>Properties: </h3><p>In the 
list below, the names of required properties appear in <strong>bold</strong>. 
Any other properties (not in bold) are considered optional. The table also 
indicates any default values, whether a property supports the <a href="../../html/expression-language-guide.html">NiFi Expression Language</a>, and
whether a property is considered "sensitive", meaning that its value will be 
encrypted. Before entering a value in a sensitive property, ensure that the 
<strong>nifi.properties</strong> file has an entry for the property 
<strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name">Hadoop Configuration 
Resources</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A file or comma separated list of files which contains the 
Hadoop file system configuration. Without this, Hadoop will search the 
classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a 
default configuration.</td></tr><tr><td id="name"><strong>Distributed Cache 
Service</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: </strong><br/>DistributedMapCacheClient<br/><strong>Implementation:</strong><br/><a
href="../org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService/index.html">DistributedMapCacheClientService</a></td><td
 id="description">Specifies the Controller Service that should be used to 
maintain state about what has been pulled from HDFS so that if a new node 
begins pulling data, it won't duplicate all of the work that has been 
done.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
HDFS directory from which files should be read</td></tr><tr><td 
id="name"><strong>Recurse Subdirectories</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to list files from subdirectories of the 
HDFS directory</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All FlowFiles are transferred to this
relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the absolute path of the file's directory on HDFS. For example, 
if the Directory property is set to /tmp, then files picked up from /tmp will 
have the path attribute set to "./". If the Recurse Subdirectories property is 
set to true and a file is picked up from /tmp/abc/1/2/3, then the path 
attribute will be set to 
"/tmp/abc/1/2/3".</td></tr><tr><td>hdfs.owner</td><td>The user that owns the 
file in HDFS</td></tr><tr><td>hdfs.group</td><td>The group that owns the file 
in HDFS</td></tr><tr><td>hdfs.lastModified</td><td>The timestamp of when the 
file in HDFS was last modified, as milliseconds since midnight Jan 1, 1970 
UTC</td></tr><tr><td>hdfs.length</td><td>The number of bytes in the file in
HDFS</td></tr><tr><td>hdfs.replication</td><td>The number of HDFS replicas for 
the file</td></tr><tr><td>hdfs.permissions</td><td>The permissions for the file 
in HDFS. This is formatted as 3 characters for the owner, 3 for the group, and 
3 for other users. For example rw-rw-r--</td></tr></table><h3>See 
Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a>, <a 
href="../org.apache.nifi.processors.hadoop.FetchHDFS/index.html">FetchHDFS</a>, 
<a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ListHDFS</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Retrieves a 
listing of files from HDFS. For each file that is listed in HDFS, creates a 
FlowFile that represents the HDFS file so that it can be fetched in conjunction 
with FetchHDFS. This Processor is designed to run on Primary Node only in a 
cluster. If the primary node changes, the new Primary Node will pick up where 
the previous node left off without duplicating all of the data. Unlike GetHDFS, 
this Processor does not delete any data from HDFS.</p><h3>Tags: </h3><p>hadoop, 
HDFS, get, list, ingest, source, filesystem</p><h3>Properties: </h3><p>In the 
list below, the names of required properties appear in <strong>bold</strong>. 
Any other properties (not in bold) are considered optional. The table also 
indicates any default values.</p><table id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration.</td></tr><tr><td 
id="name"><strong>Distributed Cache Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: 
</strong><br/>DistributedMapCacheClient<br/><strong>Implementation:</strong><br/><a
 
href="../org.apache.nifi.distributed.cache.client.DistributedMapCacheClientService/index.html">DistributedMapCacheClientService</a></td><td
 id="description">Specifies the Controller Service that should be used to 
maintain state about what has been pulled from HDFS so that if a new node 
begins pulling data, it won't duplicate all of the work that has been done.</td></tr><tr><td
id="name"><strong>Directory</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The HDFS directory from which 
files should be read</td></tr><tr><td id="name"><strong>Recurse 
Subdirectories</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Indicates whether to list files from subdirectories of the 
HDFS directory</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 FlowFiles are transferred to this relationship</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file that was read from HDFS.</td></tr><tr><td>path</td><td>The 
path is set to the absolute path of the file's directory on HDFS. For example, if the Directory property is set to /tmp, then
files picked up from /tmp will have the path attribute set to "./". If the 
Recurse Subdirectories property is set to true and a file is picked up from 
/tmp/abc/1/2/3, then the path attribute will be set to 
"/tmp/abc/1/2/3".</td></tr><tr><td>hdfs.owner</td><td>The user that owns the 
file in HDFS</td></tr><tr><td>hdfs.group</td><td>The group that owns the file 
in HDFS</td></tr><tr><td>hdfs.lastModified</td><td>The timestamp of when the 
file in HDFS was last modified, as milliseconds since midnight Jan 1, 1970 
UTC</td></tr><tr><td>hdfs.length</td><td>The number of bytes in the file in 
HDFS</td></tr><tr><td>hdfs.replication</td><td>The number of HDFS replicas for 
the file</td></tr><tr><td>hdfs.permissions</td><td>The permissions for the file 
in HDFS. This is formatted as 3 characters for the owner, 3 for the group, and 
3 for other users. For example rw-rw-r--</td></tr></table><h3>See 
Also:</h3><p><a href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a>, <a
href="../org.apache.nifi.processors.hadoop.FetchHDFS/index.html">FetchHDFS</a>, 
<a 
href="../org.apache.nifi.processors.hadoop.PutHDFS/index.html">PutHDFS</a></p></body></html>
\ No newline at end of file
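The hdfs.permissions attribute written by ListHDFS (three characters each for owner, group, and other users, e.g. rw-rw-r--) can be reproduced from a numeric permission mode as below. This is a sketch for illustration only, not the processor's own code; `format_permissions` is a hypothetical helper.

```python
def format_permissions(mode: int) -> str:
    """Render an octal permission mode as the nine-character
    owner/group/other string used by the hdfs.permissions attribute."""
    parts = []
    for shift in (6, 3, 0):  # owner, group, other triplets
        bits = (mode >> shift) & 0o7
        parts.append("".join(flag if bits & mask else "-"
                             for flag, mask in zip("rwx", (4, 2, 1))))
    return "".join(parts)

print(format_permissions(0o664))  # rw-rw-r--
```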

Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHDFS</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Write FlowFile 
data to Hadoop Distributed File System (HDFS)</p><h3>Tags: </h3><p>hadoop, 
HDFS, put, copy, filesystem</p><h3>Properties: </h3><p>In the list below, the 
names of required properties appear in <strong>bold</strong>. Any other 
properties (not in bold) are considered optional. The table also indicates any 
default values, whether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression Language</a>, 
and whether a property is considered "sensitive", meaning that its value will 
be encrypted. Before entering a value in a sensitive property, ensure that the 
<strong>nifi.properties</strong> file has an entry for the property 
<strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name">Hadoop
Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration.</td></tr><tr><td 
id="name"><strong>Directory</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The parent HDFS directory to 
which files should be written<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Conflict Resolution 
Strategy</strong></td><td id="default-value">fail</td><td 
id="allowable-values"><ul><li>replace</li><li>ignore</li><li>fail</li></ul></td><td
 id="description">Indicates what should happen when a file with the same name 
already exists in the output directory</td></tr><tr><td id="name">Block Size</td><td id="default-value"></td><td
id="allowable-values"></td><td id="description">Size of each block as written 
to HDFS. This overrides the Hadoop Configuration</td></tr><tr><td id="name">IO 
Buffer Size</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Amount of memory to use to buffer file contents during IO. 
This overrides the Hadoop Configuration</td></tr><tr><td 
id="name">Replication</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Number of times that HDFS will 
replicate each file. This overrides the Hadoop Configuration</td></tr><tr><td 
id="name">Permissions umask</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A umask represented as an octal 
number which determines the permissions of files written to HDFS. This 
overrides the Hadoop Configuration dfs.umaskmode</td></tr><tr><td 
id="name">Remote Owner</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Changes the owner of the HDFS file to this value
after it is written. This only works if NiFi is running as a user that has HDFS 
super user privilege to change owner</td></tr><tr><td id="name">Remote 
Group</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Changes the group of the HDFS file to this value after it is 
written. This only works if NiFi is running as a user that has HDFS super user 
privilege to change group</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Files
 that have been successfully written to HDFS are transferred to this 
relationship</td></tr><tr><td>failure</td><td>Files that could not be written 
to HDFS for some reason are transferred to this 
relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The name of the file written to HDFS comes from the value of
this attribute.</td></tr></table><h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a></p></body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutHDFS</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Write FlowFile 
data to Hadoop Distributed File System (HDFS)</p><h3>Tags: </h3><p>hadoop, 
HDFS, put, copy, filesystem</p><h3>Properties: </h3><p>In the list below, the 
names of required properties appear in <strong>bold</strong>. Any other 
properties (not in bold) are considered optional. The table also indicates any 
default values, and whether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, Hadoop will search the classpath for a 'core-site.xml' and
'hdfs-site.xml' file or will revert to a default 
configuration.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
parent HDFS directory to which files should be written<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name"><strong>Conflict 
Resolution Strategy</strong></td><td id="default-value">fail</td><td 
id="allowable-values"><ul><li>replace</li><li>ignore</li><li>fail</li></ul></td><td
 id="description">Indicates what should happen when a file with the same name 
already exists in the output directory</td></tr><tr><td id="name">Block 
Size</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Size of each block as written to HDFS. This overrides the 
Hadoop Configuration</td></tr><tr><td id="name">IO Buffer Size</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">Amount of memory to use to buffer file
contents during IO. This overrides the Hadoop Configuration</td></tr><tr><td 
id="name">Replication</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Number of times that HDFS will 
replicate each file. This overrides the Hadoop Configuration</td></tr><tr><td 
id="name">Permissions umask</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A umask represented as an octal 
number which determines the permissions of files written to HDFS. This 
overrides the Hadoop Configuration dfs.umaskmode</td></tr><tr><td 
id="name">Remote Owner</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Changes the owner of the HDFS 
file to this value after it is written. This only works if NiFi is running as a 
user that has HDFS super user privilege to change owner</td></tr><tr><td 
id="name">Remote Group</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">Changes the group of the
HDFS file to this value after it is written. This only works if NiFi is running 
as a user that has HDFS super user privilege to change group</td></tr><tr><td 
id="name">Compression codec</td><td id="default-value"></td><td 
id="allowable-values"><ul><li>org.apache.hadoop.io.compress.BZip2Codec</li><li>org.apache.hadoop.io.compress.DefaultCodec</li><li>org.apache.hadoop.io.compress.GzipCodec</li><li>org.apache.hadoop.io.compress.Lz4Codec</li><li>org.apache.hadoop.io.compress.SnappyCodec</li></ul></td><td
 id="description">No Description Provided.</td></tr></table><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>failure</td><td>Files
 that could not be written to HDFS for some reason are transferred to this 
relationship</td></tr><tr><td>success</td><td>Files that have been successfully 
written to HDFS are transferred to this relationship</td></tr></table><h3>Reads Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file written to HDFS comes from the value of this 
attribute.</td></tr></table><h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.hadoop.GetHDFS/index.html">GetHDFS</a></p></body></html>
\ No newline at end of file
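PutHDFS's Permissions umask property behaves like a POSIX umask overriding Hadoop's dfs.umaskmode: the umask bits are cleared from the permission mode being applied. A quick arithmetic sketch (the base modes below are common POSIX defaults chosen for illustration, not values read from HDFS itself):

```python
def apply_umask(base_mode: int, umask: int) -> int:
    """Clear the umask bits from a base permission mode."""
    return base_mode & ~umask

# A umask of 022 removes group/other write permission:
print(oct(apply_umask(0o666, 0o022)))  # 0o644
print(oct(apply_umask(0o777, 0o022)))  # 0o755
```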

Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ExtractHL7Attributes</title><link 
rel="stylesheet" href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Extracts 
information from an HL7 (Health Level 7) formatted FlowFile and adds the 
information as FlowFile Attributes. The attributes are named as &lt;Segment 
Name&gt; &lt;dot&gt; &lt;Field Index&gt;. If the segment is repeating, the 
naming will be &lt;Segment Name&gt; &lt;underscore&gt; &lt;Segment Index&gt; 
&lt;dot&gt; &lt;Field Index&gt;. For example, we may have an attribute named 
"MHS.12" with a value of "2.1" and an attribute named "OBX_11.3" with a value 
of "93000^CPT4".</p><h3>Tags: </h3><p>HL7, health level 7, healthcare, extract, 
attributes</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, wh
 ether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression Language</a>, 
and whether a property is considered "sensitive", meaning that its value will 
be encrypted. Before entering a value in a sensitive property, ensure that the 
<strong>nifi.properties</strong> file has an entry for the property 
<strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>Character 
Encoding</strong></td><td id="default-value">UTF-8</td><td 
id="allowable-values"></td><td id="description">The Character Encoding that is 
used to encode the HL7 data<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>A
 FlowFile is routed to this relationship if it is properly parsed as HL7 and 
its attributes extracted</td></tr
 ><tr><td>failure</td><td>A FlowFile is routed to this relationship if it 
 cannot be mapped to FlowFile Attributes. This would happen if the FlowFile 
 does not contain valid HL7 data</td></tr></table><h3>Reads Attributes: 
 </h3>None specified.<h3>Writes Attributes: </h3>None specified.</body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ExtractHL7Attributes</title><link 
rel="stylesheet" href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Extracts 
information from an HL7 (Health Level 7) formatted FlowFile and adds the 
information as FlowFile Attributes. The attributes are named as &lt;Segment 
Name&gt; &lt;dot&gt; &lt;Field Index&gt;. If the segment is repeating, the 
naming will be &lt;Segment Name&gt; &lt;underscore&gt; &lt;Segment Index&gt; 
&lt;dot&gt; &lt;Field Index&gt;. For example, we may have an attribute named 
"MHS.12" with a value of "2.1" and an attribute named "OBX_11.3" with a value 
of "93000^CPT4".</p><h3>Tags: </h3><p>HL7, health level 7, healthcare, extract, 
attributes</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, an
 d whether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Character Encoding</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">The Character Encoding that is used to encode the HL7 
data<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>failure</td><td>A
 FlowFile is routed to this relationship if it cannot be mapped to FlowFile 
Attributes. This would happen if the FlowFile does not contain valid HL7 
data</td></tr><tr><td>success</td><td>A FlowFile is routed to this relationship 
if it is properly parsed as HL7 and its attributes 
extracted</td></tr></table><h3>Reads Attributes: </h3>None specified.<h3>Writes 
Attributes: </
 h3>None specified.</body></html>
\ No newline at end of file

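The attribute-naming convention in the ExtractHL7Attributes description above (&lt;Segment Name&gt; &lt;dot&gt; &lt;Field Index&gt;, with an underscore-joined segment index for repeating segments) can be sketched as follows. This is an illustrative model only, not the processor's actual (Java, HAPI-based) implementation, and the function name is hypothetical; note that the doc's "MHS.12" example appears to be a typo for the standard HL7 MSH (message header) segment, whose field 12 is the Version ID.

```python
def hl7_attribute_name(segment_name, field_index, segment_index=None):
    """Build an attribute name per the convention described above:
    <Segment Name>.<Field Index> for non-repeating segments, or
    <Segment Name>_<Segment Index>.<Field Index> for repeating ones.
    Hypothetical helper for illustration only."""
    if segment_index is not None:
        return f"{segment_name}_{segment_index}.{field_index}"
    return f"{segment_name}.{field_index}"

# MSH.12 (HL7 Version ID) might carry "2.1"; OBX_11.3 might carry "93000^CPT4"
print(hl7_attribute_name("MSH", 12))     # MSH.12
print(hl7_attribute_name("OBX", 3, 11))  # OBX_11.3
```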
Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hl7.RouteHL7/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hl7.RouteHL7/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hl7.RouteHL7/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.hl7.RouteHL7/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>RouteHL7</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Routes incoming 
HL7 data according to user-defined queries. To add a query, add a new property 
to the processor. The name of the property will become a new relationship for 
the processor, and the value is an HL7 Query Language query. If a FlowFile 
matches the query, a copy of the FlowFile will be routed to the associated 
relationship.</p><h3>Tags: </h3><p>HL7, healthcare, route, Health Level 
7</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, whether a 
property supports the <a href="../../html/expression-language-guide.html">NiFi 
Expression Language</a>, and whether a property is considered "sensitive", 
meaning t
 hat its value will be encrypted. Before entering a value in a sensitive 
property, ensure that the <strong>nifi.properties</strong> file has an entry 
for the property <strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>Character 
Encoding</strong></td><td id="default-value">UTF-8</td><td 
id="allowable-values"></td><td id="description">The Character Encoding that is 
used to encode the HL7 data<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Dynamic Properties: </h3><p>Dynamic 
Properties allow the user to specify both the name and value of a 
property.<table 
id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td
 id="name">Name of a Relationship</td><td id="value">An HL7 Query Language 
query</td><td>If a FlowFile matches the query, it will be routed to a 
relationship with the name of the property</
 td></tr></table></p><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>failure</td><td>Any
 FlowFile that cannot be parsed as HL7 will be routed to this 
relationship</td></tr><tr><td>original</td><td>The original FlowFile that comes 
into this processor will be routed to this relationship, unless it is routed to 
'failure'</td></tr></table><h3>Reads Attributes: </h3>None specified.<h3>Writes 
Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>RouteHL7.Route</td><td>The
 name of the relationship to which the FlowFile was 
routed</td></tr></table></body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>RouteHL7</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Routes incoming 
HL7 data according to user-defined queries. To add a query, add a new property 
to the processor. The name of the property will become a new relationship for 
the processor, and the value is an HL7 Query Language query. If a FlowFile 
matches the query, a copy of the FlowFile will be routed to the associated 
relationship.</p><h3>Tags: </h3><p>HL7, healthcare, route, Health Level 
7</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
 Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Character Encoding</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">The Character Encoding that is used to encode the HL7 
data<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Dynamic Properties: </h3><p>Dynamic 
Properties allow the user to specify both the name and value of a 
property.<table 
id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td
 id="name">Name of a Relationship</td><td id="value">An HL7 Query Language 
query</td><td>If a FlowFile matches the query, it will be routed to a 
relationship with the name of the 
property</td></tr></table></p><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>original</td><td>The
 original FlowFile that comes into this processor will be routed to this 
relationship, unless it is routed to 'failure'</td>
 </tr><tr><td>failure</td><td>Any FlowFile that cannot be parsed as HL7 will be 
routed to this relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>RouteHL7.Route</td><td>The
 name of the relationship to which the FlowFile was 
routed</td></tr></table></body></html>
\ No newline at end of file
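The routing model described in the RouteHL7 hunk above — each user-added property name becomes a relationship, its value is an HL7 Query Language query, and a copy of the FlowFile goes to every relationship whose query matches — can be sketched as follows. Queries are modeled as plain predicates and all names are hypothetical; this is not NiFi's actual API.

```python
def route_hl7(flowfile, queries):
    """queries maps relationship name -> predicate standing in for an
    HL7 Query Language query. Returns every relationship that receives a
    copy; the input also always goes to 'original', unless it fails to
    parse as HL7, in which case it goes only to 'failure'."""
    try:
        matched = [name for name, query in queries.items() if query(flowfile)]
    except ValueError:  # stand-in for an HL7 parse failure
        return ["failure"]
    return matched + ["original"]

queries = {"adt": lambda ff: ff.startswith("MSH"), "oru": lambda ff: "ORU" in ff}
print(route_hl7("MSH|^~\\&|...", queries))  # ['adt', 'original']
```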

Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kafka.GetKafka/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kafka.GetKafka/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kafka.GetKafka/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kafka.GetKafka/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetKafka</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Fetches messages 
from Apache Kafka</p><p><a href="additionalDetails.html">Additional 
Details...</a></p><h3>Tags: </h3><p>Kafka, Apache, Get, Ingest, Ingress, Topic, 
PubSub</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, whether a 
property supports the <a href="../../html/expression-language-guide.html">NiFi 
Expression Language</a>, and whether a property is considered "sensitive", 
meaning that its value will be encrypted. Before entering a value in a 
sensitive property, ensure that the <strong>nifi.properties</strong> file has 
an entry for the property <strong>nifi.sensitive.props.key</strong>.</p><table 
id="pr
 operties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>ZooKeeper 
Connection String</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Connection String to use in 
order to connect to ZooKeeper. This is often a comma-separated list of 
&lt;host&gt;:&lt;port&gt; combinations. For example, 
host1:2181,host2:2181,host3:2181</td></tr><tr><td id="name"><strong>Topic 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Kafka Topic to pull 
messages from</td></tr><tr><td id="name"><strong>Zookeeper Commit 
Frequency</strong></td><td id="default-value">60 secs</td><td 
id="allowable-values"></td><td id="description">Specifies how often to 
communicate with ZooKeeper to indicate which messages have been pulled. A 
longer time period will result in better overall performance but can result in 
more data duplication if a NiFi node is lost</t
 d></tr><tr><td id="name"><strong>Batch Size</strong></td><td 
id="default-value">1</td><td id="allowable-values"></td><td 
id="description">Specifies the maximum number of messages to combine into a 
single FlowFile. These messages will be concatenated together with the 
&lt;Message Demarcator&gt; string placed between the content of each message. 
If the messages from Kafka should not be concatenated together, leave this 
value at 1.</td></tr><tr><td id="name"><strong>Message 
Demarcator</strong></td><td id="default-value">\n</td><td 
id="allowable-values"></td><td id="description">Specifies the characters to use 
in order to demarcate multiple messages from Kafka. If the &lt;Batch Size&gt; 
property is set to 1, this value is ignored. Otherwise, for each two subsequent 
messages in the batch, this value will be placed in between 
them.</td></tr><tr><td id="name"><strong>Client Name</strong></td><td 
id="default-value">NiFi-</td><td id="allowable-values"></td><td 
id="description">Client Name to
  use when communicating with Kafka</td></tr><tr><td id="name"><strong>Kafka 
Communications Timeout</strong></td><td id="default-value">30 secs</td><td 
id="allowable-values"></td><td id="description">The amount of time to wait for 
a response from Kafka before determining that there is a communications 
error</td></tr><tr><td id="name"><strong>ZooKeeper Communications 
Timeout</strong></td><td id="default-value">30 secs</td><td 
id="allowable-values"></td><td id="description">The amount of time to wait for 
a response from ZooKeeper before determining that there is a communications 
error</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 FlowFiles that are created are routed to this 
relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>kafka.topic</td><td>The
 name of the Kafka Top
 ic from which the message was received</td></tr><tr><td>kafka.key</td><td>The 
key of the Kafka message, if it exists and batch size is 1. If the message does 
not have a key, or if the batch size is greater than 1, this attribute will not 
be added</td></tr><tr><td>kafka.partition</td><td>The partition of the Kafka 
Topic from which the message was received. This attribute is added only if the 
batch size is 1</td></tr><tr><td>kafka.offset</td><td>The offset of the message 
within the Kafka partition. This attribute is added only if the batch size is 
1</td></tr></table></body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetKafka</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Fetches messages 
from Apache Kafka</p><p><a href="additionalDetails.html">Additional 
Details...</a></p><h3>Tags: </h3><p>Kafka, Apache, Get, Ingest, Ingress, Topic, 
PubSub</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>ZooKeeper 
Connection String</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Connection String to use in 
order to connect to ZooKeeper. This is often a comma-separated list of 
&lt;host&gt;:&lt;port&gt; combina
 tions. For example, host1:2181,host2:2181,host3:2181</td></tr><tr><td 
id="name"><strong>Topic Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Kafka Topic to pull 
messages from</td></tr><tr><td id="name"><strong>Zookeeper Commit 
Frequency</strong></td><td id="default-value">60 secs</td><td 
id="allowable-values"></td><td id="description">Specifies how often to 
communicate with ZooKeeper to indicate which messages have been pulled. A 
longer time period will result in better overall performance but can result in 
more data duplication if a NiFi node is lost</td></tr><tr><td 
id="name"><strong>Batch Size</strong></td><td id="default-value">1</td><td 
id="allowable-values"></td><td id="description">Specifies the maximum number of 
messages to combine into a single FlowFile. These messages will be concatenated 
together with the &lt;Message Demarcator&gt; string placed between the content 
of each message. If the messages from Kafka should no
 t be concatenated together, leave this value at 1.</td></tr><tr><td 
id="name"><strong>Message Demarcator</strong></td><td 
id="default-value">\n</td><td id="allowable-values"></td><td 
id="description">Specifies the characters to use in order to demarcate multiple 
messages from Kafka. If the &lt;Batch Size&gt; property is set to 1, this value 
is ignored. Otherwise, for each two subsequent messages in the batch, this 
value will be placed in between them.</td></tr><tr><td id="name"><strong>Client 
Name</strong></td><td id="default-value">NiFi-mock-processor</td><td 
id="allowable-values"></td><td id="description">Client Name to use when 
communicating with Kafka</td></tr><tr><td id="name"><strong>Group 
ID</strong></td><td id="default-value">mock-processor</td><td 
id="allowable-values"></td><td id="description">A Group ID is used to identify 
consumers that are within the same consumer group</td></tr><tr><td 
id="name"><strong>Kafka Communications Timeout</strong></td><td 
id="default-value">3
 0 secs</td><td id="allowable-values"></td><td id="description">The amount of 
time to wait for a response from Kafka before determining that there is a 
communications error</td></tr><tr><td id="name"><strong>ZooKeeper 
Communications Timeout</strong></td><td id="default-value">30 secs</td><td 
id="allowable-values"></td><td id="description">The amount of time to wait for 
a response from ZooKeeper before determining that there is a communications 
error</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 FlowFiles that are created are routed to this 
relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>kafka.topic</td><td>The
 name of the Kafka Topic from which the message was 
received</td></tr><tr><td>kafka.key</td><td>The key of the Kafka message, if it 
exists and batch size is 1
 . If the message does not have a key, or if the batch size is greater than 1, 
this attribute will not be added</td></tr><tr><td>kafka.partition</td><td>The 
partition of the Kafka Topic from which the message was received. This 
attribute is added only if the batch size is 
1</td></tr><tr><td>kafka.offset</td><td>The offset of the message within the 
Kafka partition. This attribute is added only if the batch size is 
1</td></tr></table></body></html>
\ No newline at end of file
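The interplay of GetKafka's Batch Size and Message Demarcator properties described above can be sketched as follows: up to Batch Size messages are concatenated into one FlowFile payload, with the demarcator placed between each adjacent pair. An illustrative sketch only; the function name is hypothetical and this is not NiFi's implementation.

```python
def batch_messages(messages, batch_size=1, demarcator="\n"):
    """Group up to batch_size Kafka message payloads per FlowFile,
    joining adjacent payloads with the demarcator. With the default
    batch_size of 1, each message becomes its own FlowFile and the
    demarcator is effectively ignored, matching the description above."""
    return [demarcator.join(messages[i:i + batch_size])
            for i in range(0, len(messages), batch_size)]

print(batch_messages(["m1", "m2", "m3"], batch_size=2))  # ['m1\nm2', 'm3']
```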

Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kafka.PutKafka/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kafka.PutKafka/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kafka.PutKafka/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kafka.PutKafka/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutKafka</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Sends the contents 
of a FlowFile as a message to Apache Kafka</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>Apache, Kafka, Put, Send, Message, PubSub</p><h3>Properties: </h3><p>In 
the list below, the names of required properties appear in 
<strong>bold</strong>. Any other properties (not in bold) are considered 
optional. The table also indicates any default values, whether a property 
supports the <a href="../../html/expression-language-guide.html">NiFi 
Expression Language</a>, and whether a property is considered "sensitive", 
meaning that its value will be encrypted. Before entering a value in a 
sensitive property, ensure that the <strong>nifi.properties</strong> file has 
an entry for the property <strong>nifi.sensitive.props.key</strong>
 .</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Known Brokers</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of known 
Kafka Brokers in the format &lt;host&gt;:&lt;port&gt;</td></tr><tr><td 
id="name"><strong>Topic Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Kafka Topic of 
interest<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kafka Key</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
Key to use for the Message<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Delivery 
Guarantee</strong></td><td id="default-value">0</td><td 
id="allowable-values"><ul><li>Best Effort <img 
src="../../html/images/iconInfo.png" alt="FlowFile will be routed to success 
after 
 successfully writing the content to a Kafka node, without waiting for a 
response. This provides the best performance but may result in data loss." 
title="FlowFile will be routed to success after successfully writing the 
content to a Kafka node, without waiting for a response. This provides the best 
performance but may result in data loss."></img></li><li>Guarantee Single Node 
Delivery <img src="../../html/images/iconInfo.png" alt="FlowFile will be routed 
to success if the message is received by a single Kafka node, whether or not it 
is replicated. This is faster than &lt;Guarantee Replicated Delivery&gt; but 
can result in data loss if a Kafka node crashes" title="FlowFile will be routed 
to success if the message is received by a single Kafka node, whether or not it 
is replicated. This is faster than &lt;Guarantee Replicated Delivery&gt; but 
can result in data loss if a Kafka node crashes"></img></li><li>Guarantee 
Replicated Delivery <img src="../../html/images/iconInfo.png" alt="Flo
 wFile will be routed to failure unless the message is replicated to the 
appropriate number of Kafka Nodes according to the Topic configuration" 
title="FlowFile will be routed to failure unless the message is replicated to 
the appropriate number of Kafka Nodes according to the Topic 
configuration"></img></li></ul></td><td id="description">Specifies the 
requirement for guaranteeing that a message is sent to Kafka</td></tr><tr><td 
id="name">Message Delimiter</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Specifies the delimiter to use 
for splitting apart multiple messages within a single FlowFile. If not 
specified, the entire content of the FlowFile will be used as a single message. 
If specified, the contents of the FlowFile will be split on this delimiter and 
each section sent as a separate Kafka message.<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name"><strong>Max Buffer 
Size</strong></td><td id="default-value">1 
 MB</td><td id="allowable-values"></td><td id="description">The maximum amount 
of data to buffer in memory before sending to Kafka</td></tr><tr><td 
id="name"><strong>Communications Timeout</strong></td><td id="default-value">30 
secs</td><td id="allowable-values"></td><td id="description">The amount of time 
to wait for a response from Kafka before determining that there is a 
communications error</td></tr><tr><td id="name"><strong>Client 
Name</strong></td><td id="default-value">NiFi-</td><td 
id="allowable-values"></td><td id="description">Client Name to use when 
communicating with Kafka</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Any
 FlowFile that is successfully sent to Kafka will be routed to this 
Relationship</td></tr><tr><td>failure</td><td>Any FlowFile that cannot be sent 
to Kafka will be routed to this Relationship</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attribut
 es: </h3>None specified.</body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutKafka</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Sends the contents 
of a FlowFile as a message to Apache Kafka</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>Apache, Kafka, Put, Send, Message, PubSub</p><h3>Properties: </h3><p>In 
the list below, the names of required properties appear in 
<strong>bold</strong>. Any other properties (not in bold) are considered 
optional. The table also indicates any default values, and whether a property 
supports the <a href="../../html/expression-language-guide.html">NiFi 
Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Known Brokers</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separa
 ted list of known Kafka Brokers in the format 
&lt;host&gt;:&lt;port&gt;</td></tr><tr><td id="name"><strong>Topic 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Kafka Topic of 
interest<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kafka Key</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
Key to use for the Message<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Delivery 
Guarantee</strong></td><td id="default-value">0</td><td 
id="allowable-values"><ul><li>Best Effort <img 
src="../../html/images/iconInfo.png" alt="FlowFile will be routed to success 
after successfully writing the content to a Kafka node, without waiting for a 
response. This provides the best performance but may result in data loss." 
title="FlowFile will be routed to success after successfully writing the 
content to a Kafka node, without waiting for
  a response. This provides the best performance but may result in data 
loss."></img></li><li>Guarantee Single Node Delivery <img 
src="../../html/images/iconInfo.png" alt="FlowFile will be routed to success if 
the message is received by a single Kafka node, whether or not it is 
replicated. This is faster than &lt;Guarantee Replicated Delivery&gt; but can 
result in data loss if a Kafka node crashes" title="FlowFile will be routed to 
success if the message is received by a single Kafka node, whether or not it is 
replicated. This is faster than &lt;Guarantee Replicated Delivery&gt; but can 
result in data loss if a Kafka node crashes"></img></li><li>Guarantee 
Replicated Delivery <img src="../../html/images/iconInfo.png" alt="FlowFile 
will be routed to failure unless the message is replicated to the appropriate 
number of Kafka Nodes according to the Topic configuration" title="FlowFile 
will be routed to failure unless the message is replicated to the appropriate 
number of Kafka Nodes acco
 rding to the Topic configuration"></img></li></ul></td><td 
id="description">Specifies the requirement for guaranteeing that a message is 
sent to Kafka</td></tr><tr><td id="name">Message Delimiter</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the delimiter to use for splitting apart multiple 
messages within a single FlowFile. If not specified, the entire content of the 
FlowFile will be used as a single message. If specified, the contents of the 
FlowFile will be split on this delimiter and each section sent as a separate 
Kafka message.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Max Buffer 
Size</strong></td><td id="default-value">1 MB</td><td 
id="allowable-values"></td><td id="description">The maximum amount of data to 
buffer in memory before sending to Kafka</td></tr><tr><td 
id="name"><strong>Communications Timeout</strong></td><td id="default-value">30 
secs</td><td id="allowable-values"></
 td><td id="description">The amount of time to wait for a response from Kafka 
before determining that there is a communications error</td></tr><tr><td 
id="name"><strong>Producer Type</strong></td><td 
id="default-value">sync</td><td id="allowable-values"><ul><li>Synchronous <img 
src="../../html/images/iconInfo.png" alt="Send FlowFiles to Kafka immediately." 
title="Send FlowFiles to Kafka immediately."></img></li><li>Asynchronous <img 
src="../../html/images/iconInfo.png" alt="Batch messages before sending them to 
Kafka. While this will improve throughput, it opens the possibility that a 
failure on the client machine will drop unsent data." title="Batch messages 
before sending them to Kafka. While this will improve throughput, it opens the 
possibility that a failure on the client machine will drop unsent 
data."></img></li></ul></td><td id="description">This parameter specifies 
whether the messages are sent asynchronously in a background 
thread.</td></tr><tr><td id="name"><strong>Async B
 atch Size</strong></td><td id="default-value">200</td><td 
id="allowable-values"></td><td id="description">Used only if Producer Type is 
set to "Asynchronous". The number of messages to send in one batch when using 
Asynchronous mode. The producer will wait until either this number of messages 
are ready to send or "Queue Buffering Max Time" is reached.</td></tr><tr><td 
id="name"><strong>Queue Buffer Max Count</strong></td><td 
id="default-value">10000</td><td id="allowable-values"></td><td 
id="description">Used only if Producer Type is set to "Asynchronous". The 
maximum number of unsent messages that can be queued up in the producer when 
using Asynchronous mode before either the producer must be blocked or data must 
be dropped.</td></tr><tr><td id="name"><strong>Queue Buffering Max 
Time</strong></td><td id="default-value">5 secs</td><td 
id="allowable-values"></td><td id="description">Used only if Producer Type is 
set to "Asynchronous". Maximum time to buffer data when using Asynchronous 
mode. For example, a setting of 100 ms will try to batch together 100 ms of 
messages to send at once. This will improve throughput but adds message 
delivery latency due to the buffering.</td></tr><tr><td id="name">Queue Enqueue 
Timeout</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Used only if Producer Type is set to "Asynchronous". The 
amount of time to block before dropping messages when running in Asynchronous 
mode and the buffer has reached the "Queue Buffer Max Count". If set to 0, 
events will be enqueued immediately or dropped if the queue is full (the 
producer send call will never block). If not set, the producer will block 
indefinitely and never willingly drop a send.</td></tr><tr><td 
id="name"><strong>Compression Codec</strong></td><td 
id="default-value">none</td><td id="allowable-values"><ul><li>None <img 
src="../../html/images/iconInfo.png" alt="Compression will not be used for any 
topic." title="Compression will not be used for any 
topic."></img></li><li>GZIP <img src="../../html/images/iconInfo.png" 
alt="Compress messages using GZIP" title="Compress messages using 
GZIP"></img></li><li>Snappy <img src="../../html/images/iconInfo.png" 
alt="Compress messages using Snappy" title="Compress messages using 
Snappy"></img></li></ul></td><td id="description">This parameter allows you to 
specify the compression codec for all data generated by this 
producer.</td></tr><tr><td id="name">Compressed Topics</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">This parameter allows you to set whether compression should be 
turned on for particular topics. If the compression codec is anything other 
than "None", enable compression only for specified topics if any. If the list 
of compressed topics is empty, then enable the specified compression codec for 
all topics. If the compression codec is None, compression is disabled for all 
topics.</td></tr><tr><td id="name"><strong>Client Name</strong></td><td 
 id="default-value">NiFi-mock-processor</td><td id="allowable-values"></td><td 
id="description">Client Name to use when communicating with 
Kafka</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>failure</td><td>Any
 FlowFile that cannot be sent to Kafka will be routed to this 
Relationship</td></tr><tr><td>success</td><td>Any FlowFile that is successfully 
sent to Kafka will be routed to this Relationship</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3>None 
specified.</body></html>
\ No newline at end of file
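The asynchronous-mode properties documented above (Async Batch Size, Queue Buffer Max Count, Queue Enqueue Timeout) interact: a batch is flushed when it fills, and a full queue either blocks or drops depending on the timeout. A minimal pure-Python sketch of those semantics (a toy model, not the actual Kafka producer; all names here are illustrative):

```python
class AsyncBuffer:
    """Toy model of PutKafka's async settings (not the real producer).

    batch_size      ~ "Async Batch Size"
    max_count       ~ "Queue Buffer Max Count"
    enqueue_timeout ~ "Queue Enqueue Timeout" (0 => drop immediately when full)
    """
    def __init__(self, batch_size=200, max_count=10000, enqueue_timeout=0.0):
        self.batch_size = batch_size
        self.max_count = max_count
        self.enqueue_timeout = enqueue_timeout
        self.queue = []
        self.sent_batches = []
        self.dropped = 0

    def enqueue(self, msg):
        if len(self.queue) >= self.max_count:
            # With a 0-second timeout the send call never blocks: drop.
            if self.enqueue_timeout == 0.0:
                self.dropped += 1
                return False
        self.queue.append(msg)
        if len(self.queue) >= self.batch_size:
            self.flush()
        return True

    def flush(self):
        # Send whatever is buffered as one batch.
        if self.queue:
            self.sent_batches.append(list(self.queue))
            self.queue.clear()

buf = AsyncBuffer(batch_size=3, max_count=5)
for i in range(7):
    buf.enqueue(i)
buf.flush()
```

With a batch size of 3, seven messages go out as batches of 3, 3, and (after the final flush) 1; nothing is dropped because the queue never reaches its cap.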

Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertCSVToAvro/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertCSVToAvro/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertCSVToAvro/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertCSVToAvro/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ConvertCSVToAvro</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Converts CSV files 
to Avro according to an Avro Schema</p><h3>Tags: </h3><p>kite, csv, 
avro</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, whether a 
property supports the <a href="../../html/expression-language-guide.html">NiFi 
Expression Language</a>, and whether a property is considered "sensitive", 
meaning that its value will be encrypted. Before entering a value in a 
sensitive property, ensure that the <strong>nifi.properties</strong> file has 
an entry for the property <strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name">Hadoop configuration 
files</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A comma-separated list of Hadoop configuration 
files</td></tr><tr><td id="name"><strong>Record schema</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Outgoing Avro schema for each record created from a CSV 
row<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">CSV charset</td><td id="default-value">utf8</td><td 
id="allowable-values"></td><td id="description">Character set for CSV 
files</td></tr><tr><td id="name">CSV delimiter</td><td 
id="default-value">,</td><td id="allowable-values"></td><td 
id="description">Delimiter character for CSV records</td></tr><tr><td 
id="name">CSV quote character</td><td id="default-value">"</td><td 
id="allowable-values"></td><td id="description">Quote character for CSV 
values</td></tr><tr><td id="name">CSV escape 
character</td><td id="default-value">\</td><td id="allowable-values"></td><td 
id="description">Escape character for CSV values</td></tr><tr><td id="name">Use 
CSV header line</td><td id="default-value">false</td><td 
id="allowable-values"></td><td id="description">Whether to use the first line 
as a header</td></tr><tr><td id="name">Lines to skip</td><td 
id="default-value">0</td><td id="allowable-values"></td><td 
id="description">Number of lines to skip before reading header or 
data</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFile
 content has been successfully saved</td></tr><tr><td>failure</td><td>FlowFile 
content could not be processed</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3>None specified.</body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ConvertCSVToAvro</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Converts CSV files 
to Avro according to an Avro Schema</p><h3>Tags: </h3><p>kite, csv, 
avro</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop configuration files</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of 
Hadoop configuration files</td></tr><tr><td id="name"><strong>Record 
schema</strong></td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Outgoing Avro schema for each record created from a CSV 
row<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">CSV charset</td><td id="default-value">utf8</td><td 
id="allowable-values"></td><td id="description">Character set for CSV 
files</td></tr><tr><td id="name">CSV delimiter</td><td 
id="default-value">,</td><td id="allowable-values"></td><td 
id="description">Delimiter character for CSV records</td></tr><tr><td 
id="name">CSV quote character</td><td id="default-value">"</td><td 
id="allowable-values"></td><td id="description">Quote character for CSV 
values</td></tr><tr><td id="name">CSV escape character</td><td 
id="default-value">\</td><td id="allowable-values"></td><td 
id="description">Escape character for CSV values</td></tr><tr><td id="name">Use 
CSV header line</td><td id="default-value">false</td><td 
id="allowable-values"></td><td id="description">Whether to use 
the first line as a header</td></tr><tr><td id="name">Lines to skip</td><td 
id="default-value">0</td><td id="allowable-values"></td><td 
id="description">Number of lines to skip before reading header or 
data</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Avro
 content that was converted successfully from 
CSV</td></tr><tr><td>failure</td><td>CSV content that could not be 
processed</td></tr><tr><td>incompatible</td><td>CSV content that could not be 
converted</td></tr></table><h3>Reads Attributes: </h3>None specified.<h3>Writes 
Attributes: </h3>None specified.</body></html>
\ No newline at end of file
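ConvertCSVToAvro's properties above map onto familiar CSV-parsing knobs. The following pure-Python sketch (stdlib `csv` only, no Avro library; the schema and its field types are hypothetical stand-ins for an Avro record schema) shows how the delimiter, header-line, and lines-to-skip settings combine, and how rows that fail type coercion would land on the "incompatible" relationship:

```python
import csv

# Hypothetical schema: field name -> Python type standing in for an Avro type.
schema = {"id": int, "name": str, "price": float}

def csv_to_records(text, delimiter=",", quotechar='"', use_header=True, skip=0):
    """Mirror ConvertCSVToAvro's knobs: delimiter, quote char, header line,
    and lines to skip. Rows that fail type coercion would be routed to
    'incompatible' by the processor; here we collect them separately."""
    lines = text.splitlines()[skip:]          # "Lines to skip"
    reader = csv.reader(lines, delimiter=delimiter, quotechar=quotechar)
    rows = list(reader)
    names = rows.pop(0) if use_header else list(schema)
    good, bad = [], []
    for row in rows:
        try:
            good.append({n: schema[n](v) for n, v in zip(names, row)})
        except (ValueError, KeyError):
            bad.append(row)                   # would go to 'incompatible'
    return good, bad

good, bad = csv_to_records('id,name,price\n1,apple,0.5\nx,pear,oops\n')
```

Here the first data row coerces cleanly, while the second ("x" is not an int) is set aside instead of aborting the whole conversion.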

Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertJSONToAvro/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertJSONToAvro/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertJSONToAvro/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertJSONToAvro/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ConvertJSONToAvro</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Converts JSON 
files to Avro according to an Avro Schema</p><h3>Tags: </h3><p>kite, json, 
avro</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, whether a 
property supports the <a href="../../html/expression-language-guide.html">NiFi 
Expression Language</a>, and whether a property is considered "sensitive", 
meaning that its value will be encrypted. Before entering a value in a 
sensitive property, ensure that the <strong>nifi.properties</strong> file has 
an entry for the property <strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable Values
 </th><th>Description</th></tr><tr><td id="name">Hadoop configuration 
files</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A comma-separated list of Hadoop configuration 
files</td></tr><tr><td id="name"><strong>Record schema</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Outgoing Avro schema for each record created from a JSON 
object<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFile
 content has been successfully saved</td></tr><tr><td>failure</td><td>FlowFile 
content could not be processed</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3>None specified.</body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ConvertJSONToAvro</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Converts JSON 
files to Avro according to an Avro Schema</p><h3>Tags: </h3><p>kite, json, 
avro</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop configuration files</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of 
Hadoop configuration files</td></tr><tr><td id="name"><strong>Record 
schema</strong></td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Outgoing Avro schema for each record created from a JSON 
object<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Avro
 content that was converted successfully from 
JSON</td></tr><tr><td>failure</td><td>JSON content that could not be 
processed</td></tr><tr><td>incompatible</td><td>JSON content that could not be 
converted</td></tr></table><h3>Reads Attributes: </h3>None specified.<h3>Writes 
Attributes: </h3>None specified.</body></html>
\ No newline at end of file
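ConvertJSONToAvro's three relationships can be illustrated with a small routing function (pure Python, no Avro library; `schema_fields` is a hypothetical stand-in for the configured Avro record schema): unparseable input goes to "failure", records that do not match the schema go to "incompatible", and everything else goes to "success".

```python
import json

# Hypothetical stand-in for the Avro record schema: field name -> type.
schema_fields = {"id": int, "name": str}

def route(flowfile_text):
    """Sketch of the three relationships: failure for unparseable JSON,
    incompatible for records that don't match the schema, success otherwise."""
    try:
        obj = json.loads(flowfile_text)
    except json.JSONDecodeError:
        return "failure"
    if not all(isinstance(obj.get(k), t) for k, t in schema_fields.items()):
        return "incompatible"
    return "success"

results = [
    route('{"id": 1, "name": "a"}'),       # well-formed and schema-conformant
    route('not json'),                     # cannot be parsed at all
    route('{"id": "1", "name": "a"}'),     # parses, but "id" is not an int
]
```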

Modified: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.StoreInKiteDataset/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.StoreInKiteDataset/index.html?rev=1695640&r1=1695639&r2=1695640&view=diff
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.StoreInKiteDataset/index.html
 (original)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.kite.StoreInKiteDataset/index.html
 Thu Aug 13 01:19:25 2015
@@ -1 +1 @@
-<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>StoreInKiteDataset</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Stores Avro 
records in a Kite dataset</p><h3>Tags: </h3><p>kite, avro, parquet, hadoop, 
hive, hdfs, hbase</p><h3>Properties: </h3><p>In the list below, the names of 
required properties appear in <strong>bold</strong>. Any other properties (not 
in bold) are considered optional. The table also indicates any default values, 
whether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression Language</a>, 
and whether a property is considered "sensitive", meaning that its value will 
be encrypted. Before entering a value in a sensitive property, ensure that the 
<strong>nifi.properties</strong> file has an entry for the property 
<strong>nifi.sensitive.props.key</strong>.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name">Hadoop 
configuration files</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of 
Hadoop configuration files</td></tr><tr><td id="name"><strong>Target dataset 
URI</strong></td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">URI that identifies a Kite dataset where data will be 
stored<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFile
 content has been successfully 
saved</td></tr><tr><td>incompatible</td><td>FlowFile content is not compatible 
with the target dataset</td></tr><tr><td>failure</td><td>FlowFile content could 
not be processed</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3>None specified.</body></html>
\ No newline at end of file
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>StoreInKiteDataset</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Stores Avro 
records in a Kite dataset</p><h3>Tags: </h3><p>kite, avro, parquet, hadoop, 
hive, hdfs, hbase</p><h3>Properties: </h3><p>In the list below, the names of 
required properties appear in <strong>bold</strong>. Any other properties (not 
in bold) are considered optional. The table also indicates any default values, 
and whether a property supports the <a 
href="../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop configuration files</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A comma-separated list of 
Hadoop configuration files</td></tr><tr><td id="name"><strong>Target 
dataset URI</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">URI that identifies a Kite 
dataset where data will be stored<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFile
 content has been successfully 
saved</td></tr><tr><td>incompatible</td><td>FlowFile content is not compatible 
with the target dataset</td></tr><tr><td>failure</td><td>FlowFile content could 
not be processed</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3>None specified.</body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.mongodb.GetMongo/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.mongodb.GetMongo/index.html?rev=1695640&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.mongodb.GetMongo/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.mongodb.GetMongo/index.html
 Thu Aug 13 01:19:25 2015
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>GetMongo</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Creates FlowFiles 
from documents in MongoDB</p><h3>Tags: </h3><p>mongodb, read, 
get</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>Mongo 
URI</strong></td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">MongoURI, typically of the form: 
mongodb://host1[:port1][,host2[:port2],...]</td></tr><tr><td 
id="name"><strong>Mongo Database Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the database to use</td></tr><tr><td id="name"><strong>Mongo Collection 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the collection to 
use</td></tr><tr><td id="name">Query</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The selection criteria; must be 
a valid BSON document; if omitted the entire collection will be 
queried</td></tr><tr><td id="name">Projection</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
fields to be returned from the documents in the result set; must be a valid 
BSON document</td></tr><tr><td id="name">Sort</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
fields by which to sort; must be a valid BSON document</td></tr><tr><td 
id="name">Limit</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The maximum number of elements 
to return</td></tr><tr><td id="name">Batch 
Size</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">The number of elements returned from the server in one 
batch</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>All
 files are routed to success</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3>None specified.</body></html>
\ No newline at end of file
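How GetMongo's Query, Projection, Sort, and Limit properties narrow a result set can be sketched without a MongoDB server by filtering a plain list of dicts (illustrative only; the real processor hands these settings as BSON documents to the MongoDB driver, and the sample data below is made up):

```python
# In-memory stand-in for a MongoDB collection.
collection = [
    {"_id": 1, "city": "Oslo",   "pop": 700},
    {"_id": 2, "city": "Bergen", "pop": 280},
    {"_id": 3, "city": "Oslo",   "pop": 50},
]

def find(docs, query=None, projection=None, sort_key=None, limit=None):
    """Apply the four GetMongo narrowing steps in order:
    selection criteria, sort, limit, then field projection."""
    out = [d for d in docs
           if all(d.get(k) == v for k, v in (query or {}).items())]
    if sort_key:
        out.sort(key=lambda d: d[sort_key])
    if limit is not None:
        out = out[:limit]
    if projection:
        out = [{k: d[k] for k in projection if k in d} for d in out]
    return out

# "Smallest Oslo document, returning only its _id."
rows = find(collection, query={"city": "Oslo"},
            projection=["_id"], sort_key="pop", limit=1)
```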

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.mongodb.PutMongo/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.mongodb.PutMongo/index.html?rev=1695640&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.mongodb.PutMongo/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi.processors.mongodb.PutMongo/index.html
 Thu Aug 13 01:19:25 2015
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutMongo</title><link rel="stylesheet" 
href="../../css/component-usage.css" 
type="text/css"></link></head><body><h2>Description: </h2><p>Writes the 
contents of a FlowFile to MongoDB</p><h3>Tags: </h3><p>mongodb, insert, update, 
write, put</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values.</p><table 
id="properties"><tr><th>Name</th><th>Default Value</th><th>Allowable 
Values</th><th>Description</th></tr><tr><td id="name"><strong>Mongo 
URI</strong></td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">MongoURI, typically of the form: 
mongodb://host1[:port1][,host2[:port2],...]</td></tr><tr><td 
id="name"><strong>Mongo Database Name</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of the database to use</td></tr><tr><td id="name"><strong>Mongo 
Collection Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the collection to 
use</td></tr><tr><td id="name"><strong>Mode</strong></td><td 
id="default-value">insert</td><td 
id="allowable-values"><ul><li>insert</li><li>update</li></ul></td><td 
id="description">Indicates whether the processor should insert or update 
content</td></tr><tr><td id="name"><strong>Upsert</strong></td><td 
id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">When true, inserts a document if no document matches the 
update query criteria; this property is valid only when using update mode, 
otherwise it is ignored</td></tr><tr><td id="name"><strong>Update Query 
Key</strong></td><td id="default-value">_id</td><td 
id="allowable-values"></td><td id="description">Key name used to build the 
update query criteria; this property is valid only when using update mode, 
otherwise it is 
ignored</td></tr><tr><td id="name"><strong>Write Concern</strong></td><td 
id="default-value">ACKNOWLEDGED</td><td 
id="allowable-values"><ul><li>ACKNOWLEDGED</li><li>UNACKNOWLEDGED</li><li>FSYNCED</li><li>JOURNALED</li><li>REPLICA_ACKNOWLEDGED</li><li>MAJORITY</li></ul></td><td
 id="description">The write concern to use</td></tr><tr><td 
id="name"><strong>Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">The Character Set in which the data is 
encoded</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>failure</td><td>All
 FlowFiles that cannot be written to MongoDB are routed to this 
relationship</td></tr><tr><td>success</td><td>All FlowFiles that are written to 
MongoDB are routed to this relationship</td></tr></table><h3>Reads Attributes: 
</h3>None specified.<h3>Writes Attributes: </h3>None specified.</body></html>
\ No newline at end of file
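PutMongo's Mode, Upsert, and Update Query Key properties combine as follows; a toy in-memory model (not the MongoDB driver; the function name and data are illustrative):

```python
def put(collection, doc, mode="insert", upsert=False, query_key="_id"):
    """Toy model of PutMongo: insert appends unconditionally; update
    matches on the query key (default _id) and, with upsert=True,
    inserts when nothing matches."""
    if mode == "insert":
        collection.append(doc)
        return "inserted"
    # update mode: find a document sharing the query key's value
    for existing in collection:
        if existing.get(query_key) == doc.get(query_key):
            existing.update(doc)
            return "updated"
    if upsert:
        collection.append(doc)
        return "upserted"
    return "no match"

db = []
put(db, {"_id": 1, "v": "a"})                            # plain insert
put(db, {"_id": 1, "v": "b"}, mode="update")             # matches _id 1
status = put(db, {"_id": 2, "v": "c"}, mode="update", upsert=True)
```

The second call overwrites the first document in place, and the third falls through to an insert only because Upsert is enabled.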

