Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.6.0/org.apache.nifi.processors.parquet.FetchParquet/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.6.0/org.apache.nifi.processors.parquet.FetchParquet/index.html?rev=1828578&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.6.0/org.apache.nifi.processors.parquet.FetchParquet/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.6.0/org.apache.nifi.processors.parquet.FetchParquet/index.html
 Sat Apr  7 00:33:22 2018
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>FetchParquet</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">FetchParquet</h1><h2>Description: </h2><p>Reads from a given Parquet 
file and writes records to the content of the flow file using the selected 
record writer. The original Parquet file will remain unchanged, and the content 
of the flow file will be replaced with records of the selected type. This 
processor can be used with ListHDFS or ListFile to obtain a listing of files to 
fetch.</p><h3>Tags: </h3><p>parquet, hadoop, HDFS, get, ingest, fetch, source, 
record</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in 
 bold) are considered optional. The table also indicates any default values, 
and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file 
or will revert to a default configuration. To use swebhdfs, see 'Additional 
Details' section of PutHDFS's documentation.<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Kerberos Credentials 
Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>KerberosCredentialsSer
 vice<br/><strong>Implementation: </strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.6.0/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos principal to 
authenticate as. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kerberos Keytab</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos keytab associated with the principal. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos Relogin 
Period</td><td id="default-value">4 h
 ours</td><td id="allowable-values"></td><td id="description">Period of time 
which should pass before attempting a kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Additional Classpath Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of paths to files and/or directories that will be added to 
the classpath. When specifying a directory, all files within the directory 
will be added to the classpath, but further sub-directories will not be 
included.</td></tr><tr><td id="name"><strong>Filename</strong></td><td 
id="default-value">${path}/${filename}</td><td id="allowable-values"></td><td 
id="description">The name of the file to retrieve<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name"><strong>Record 
Writer</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>RecordSetWriterFactory<br/><strong>Implementations
 : </strong><a 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.json.JsonRecordSetWriter/index.html">JsonRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html">AvroRecordSetWriter</a><br/><a
 
href="../../../nifi-scripting-nar/1.6.0/org.apache.nifi.record.script.ScriptedRecordSetWriter/index.html">ScriptedRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html">FreeFormTextRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVRecordSetWriter/index.html">CSVRecordSetWriter</a></td><td
 id="description">The service for writing records to the FlowFile 
content</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>retry</td><td>FlowFiles
 will be routed to this relationship if
 the content of the file cannot be retrieved, but might be retrievable in the 
future if tried again. This generally indicates that the Fetch should be tried 
again.</td></tr><tr><td>success</td><td>FlowFiles will be routed to this 
relationship once they have been updated with the content of the 
file</td></tr><tr><td>failure</td><td>FlowFiles will be routed to this 
relationship if the content of the file cannot be retrieved and trying again 
will likely not be helpful. This would occur, for instance, if the file is not 
found or if there is a permissions issue</td></tr></table><h3>Reads Attributes: 
</h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>fetch.failure.reason</td><td>When
 a FlowFile is routed to 'failure', this attribute is added indicating why the 
file could not be fetched from the given 
filesystem.</td></tr><tr><td>record.count</td><td>The number of records in the 
resulting flow file</td></tr></table
 ><h3>State management: </h3>This component does not store 
 state.<h3>Restricted: </h3><table id="restrictions"><tr><th>Required 
 Permission</th><th>Explanation</th></tr><tr><td>read 
 filesystem</td><td>Provides operator the ability to retrieve any file that 
 NiFi has access to in HDFS or the local 
 filesystem.</td></tr></table><h3>Input requirement: </h3>This component 
 requires an incoming relationship.<h3>System Resource 
 Considerations:</h3>None specified.<h3>See Also:</h3><p><a 
 href="../org.apache.nifi.processors.parquet.PutParquet/index.html">PutParquet</a></p></body></html>
\ No newline at end of file
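
The FetchParquet behavior described above amounts to iterating the records of a
Parquet file and handing each one to the configured Record Writer. The sketch
below is a minimal illustration of that read loop using the parquet-avro
library outside of NiFi; the file path is hypothetical and the processor's
actual Record Writer plumbing is not reproduced here.

import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;

public class FetchParquetSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical location; in the processor this comes from the Filename
        // property, which defaults to ${path}/${filename}.
        Path file = new Path("/data/example.parquet");

        // Each call to read() returns one record as an Avro GenericRecord,
        // which is roughly what gets handed to the selected Record Writer.
        long count = 0;
        try (ParquetReader<GenericRecord> reader =
                     AvroParquetReader.<GenericRecord>builder(file).build()) {
            GenericRecord rec;
            while ((rec = reader.read()) != null) {
                count++;           // surfaced by the processor as record.count
            }
        }
        System.out.println("records read: " + count);
    }
}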

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.6.0/org.apache.nifi.processors.parquet.PutParquet/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.6.0/org.apache.nifi.processors.parquet.PutParquet/index.html?rev=1828578&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.6.0/org.apache.nifi.processors.parquet.PutParquet/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-parquet-nar/1.6.0/org.apache.nifi.processors.parquet.PutParquet/index.html
 Sat Apr  7 00:33:22 2018
@@ -0,0 +1,3 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutParquet</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutParquet</h1><h2>Description: </h2><p>Reads records from an incoming 
FlowFile using the provided Record Reader, and writes those records to a 
Parquet file. The schema for the Parquet file must be provided in the processor 
properties. This processor will first write a temporary dot file and upon 
successfully writing every record to the dot file, it will rename the dot file 
to its final name. If the dot file cannot be renamed, the rename operation 
will be attempted up to 10 times, and if still not successful, the dot file 
will be deleted and the flow file will be routed to failure.  If any error occ
 urs while reading records from the input, or writing records to the output, 
the entire dot file will be removed and the flow file will be routed to failure 
or retry, depending on the error.</p><h3>Tags: </h3><p>put, parquet, hadoop, 
HDFS, filesystem, record</p><h3>Properties: </h3><p>In the list below, the 
names of required properties appear in <strong>bold</strong>. Any other 
properties (not in bold) are considered optional. The table also indicates any 
default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Hadoop Configuration Resources</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A file or comma separated list 
of files which contains the Hadoop file system configuration. Without this, 
Hadoop will search the classpath for a '
 core-site.xml' and 'hdfs-site.xml' file or will revert to a default 
configuration. To use swebhdfs, see 'Additional Details' section of PutHDFS's 
documentation.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Kerberos Credentials Service</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>KerberosCredentialsService<br/><strong>Implementation: 
</strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.6.0/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Kerberos principal to 
authenticate as. Requires nifi.kerberos.krb5.file to be set in your 
nifi.properties<br/><strong>Supports Expression Lan
 guage: true</strong></td></tr><tr><td id="name">Kerberos Keytab</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Kerberos keytab associated with the principal. Requires 
nifi.kerberos.krb5.file to be set in your nifi.properties<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name">Kerberos Relogin 
Period</td><td id="default-value">4 hours</td><td 
id="allowable-values"></td><td id="description">Period of time which should 
pass before attempting a kerberos relogin.
+
+This property has been deprecated, and has no effect on processing. Relogins 
now occur automatically.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Additional Classpath Resources</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of paths to files and/or directories that will be added to 
the classpath. When specifying a directory, all files within the directory 
will be added to the classpath, but further sub-directories will not be 
included.</td></tr><tr><td id="name"><strong>Record Reader</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>RecordReaderFactory<br/><strong>Implementations: </strong><a 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroReader/index.html">AvroReader</a><br/><a
 
href="../../../nifi-scripting-nar/1.6.0/org.apache.nifi.record.script.ScriptedReader/index.html">ScriptedRea
 der</a><br/><a 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.json.JsonPathReader/index.html">JsonPathReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.grok.GrokReader/index.html">GrokReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/index.html">CSVReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.json.JsonTreeReader/index.html">JsonTreeReader</a></td><td
 id="description">The service for reading records from incoming flow 
files.</td></tr><tr><td id="name"><strong>Directory</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
parent directory to which files should be written. Will be created if it 
doesn't exist.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Compression 
Type</strong></td><td id="default-value">UNCOMPRESSED</t
 d><td 
id="allowable-values"><ul><li>UNCOMPRESSED</li><li>SNAPPY</li><li>GZIP</li><li>LZO</li></ul></td><td
 id="description">The type of compression for the file being 
written.</td></tr><tr><td id="name"><strong>Overwrite Files</strong></td><td 
id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether or not to overwrite existing files in the same 
directory with the same name. When set to false, flow files will be routed to 
failure when a file exists in the same directory with the same 
name.</td></tr><tr><td id="name">Permissions umask</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
umask represented as an octal number which determines the permissions of files 
written to HDFS. This overrides the Hadoop Configuration 
dfs.umaskmode</td></tr><tr><td id="name">Remote Group</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Changes the group of the HDFS
  file to this value after it is written. This only works if NiFi is running as 
a user that has HDFS super user privilege to change group</td></tr><tr><td 
id="name">Remote Owner</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Changes the owner of the HDFS 
file to this value after it is written. This only works if NiFi is running as a 
user that has HDFS super user privilege to change owner</td></tr><tr><td 
id="name">Row Group Size</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The row group size used by the 
Parquet writer. The value is specified in the format of &lt;Data Size&gt; 
&lt;Data Unit&gt; where Data Unit is one of B, KB, MB, GB, 
TB.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Page Size</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The page size used by the 
Parquet writer. The value is specified in the format of &lt;Data Size&gt;
  &lt;Data Unit&gt; where Data Unit is one of B, KB, MB, GB, 
TB.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Dictionary Page Size</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The dictionary page size used 
by the Parquet writer. The value is specified in the format of &lt;Data 
Size&gt; &lt;Data Unit&gt; where Data Unit is one of B, KB, MB, GB, 
TB.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Max Padding Size</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The maximum amount of padding 
that will be used to align row groups with blocks in the underlying filesystem. 
If the underlying filesystem is not a block filesystem like HDFS, this has no 
effect. The value is specified in the format of &lt;Data Size&gt; &lt;Data 
Unit&gt; where Data Unit is one of B, KB, MB, GB, TB.<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><
 td id="name">Enable Dictionary Encoding</td><td id="default-value"></td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether dictionary encoding should be enabled for 
the Parquet writer</td></tr><tr><td id="name">Enable Validation</td><td 
id="default-value"></td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether validation should be enabled for the Parquet 
writer</td></tr><tr><td id="name">Writer Version</td><td 
id="default-value"></td><td 
id="allowable-values"><ul><li>PARQUET_1_0</li><li>PARQUET_2_0</li></ul></td><td 
id="description">Specifies the version used by Parquet writer</td></tr><tr><td 
id="name">Remove CRC Files</td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether the corresponding CRC file should be deleted 
upon successfully writing a Parquet file</td></tr></table><h3>Relations
 hips: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>retry</td><td>Flow
 Files that could not be processed due to issues that can be retried are 
transferred to this relationship</td></tr><tr><td>success</td><td>Flow Files 
that have been successfully processed are transferred to this 
relationship</td></tr><tr><td>failure</td><td>Flow Files that could not be 
processed due to issues that cannot be retried are transferred to this 
relationship</td></tr></table><h3>Reads Attributes: </h3><table 
id="reads-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file to write comes from the value of this 
attribute.</td></tr></table><h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>filename</td><td>The
 name of the file is stored in this 
attribute.</td></tr><tr><td>absolute.hdfs.path</td><td>The absolute path to the 
file is stored in this attribute.</td></tr><tr><td>
 record.count</td><td>The number of records written to the Parquet 
file</td></tr></table><h3>State management: </h3>This component does not store 
state.<h3>Restricted: </h3><table id="restrictions"><tr><th>Required 
Permission</th><th>Explanation</th></tr><tr><td>write 
filesystem</td><td>Provides operator the ability to write any file that NiFi 
has access to in HDFS or the local filesystem.</td></tr></table><h3>Input 
requirement: </h3>This component requires an incoming relationship.<h3>System 
Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file
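
The dot-file commit protocol described above (write to a temporary dot file,
rename it to the final name, retry the rename up to 10 times, and on repeated
failure delete the dot file and route to failure) can be sketched with the
Hadoop FileSystem API as follows; the retry pause and the paths are assumptions
for illustration, not NiFi's actual implementation.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DotFileCommitSketch {

    // Rename the temporary dot file to its final name, retrying up to 10 times.
    // On repeated failure the dot file is deleted and the caller would route
    // the FlowFile to 'failure'. The 100 ms pause is an assumed detail.
    static boolean commit(FileSystem fs, Path dotFile, Path finalFile) throws Exception {
        for (int attempt = 1; attempt <= 10; attempt++) {
            if (fs.rename(dotFile, finalFile)) {
                return true;                  // FlowFile -> 'success'
            }
            Thread.sleep(100L);
        }
        fs.delete(dotFile, false);            // clean up the partial output
        return false;                         // FlowFile -> 'failure'
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        System.out.println("committed: "
                + commit(fs, new Path("/data/out/.users.parquet"),
                             new Path("/data/out/users.parquet")));
    }
}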

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-poi-nar/1.6.0/org.apache.nifi.processors.poi.ConvertExcelToCSVProcessor/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-poi-nar/1.6.0/org.apache.nifi.processors.poi.ConvertExcelToCSVProcessor/additionalDetails.html?rev=1828578&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-poi-nar/1.6.0/org.apache.nifi.processors.poi.ConvertExcelToCSVProcessor/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-poi-nar/1.6.0/org.apache.nifi.processors.poi.ConvertExcelToCSVProcessor/additionalDetails.html
 Sat Apr  7 00:33:22 2018
@@ -0,0 +1,97 @@
+<!DOCTYPE html>
+<html lang="en">
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<head>
+    <meta charset="utf-8" />
+    <title>ConvertExcelToCSVProcessor</title>
+    <style>
+table {
+    border-collapse: collapse;
+}
+
+table, th, td {
+    border: 1px solid #ccc;
+}
+
+td.r {
+    text-align: right;
+}
+
+td {
+    width: 50px;
+    padding: 5px;
+}
+    </style>
+    <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css" />
+</head>
+
+<body>
+<h2>How it extracts CSV data from a sheet</h2>
+<p>
+    ConvertExcelToCSVProcessor extracts CSV data with the following rules:
+</p>
+<ul>
+    <li>Find the first cell which has a value in it (the FirstCell).</li>
+    <li>Scan cells in the first row, starting from the FirstCell,
+        until it reaches a cell after which no cell with a value can be 
found in the row (the FirstRowLastCell).</li>
+    <li>Process the 2nd row and later, from the column of FirstCell to the 
column of FirstRowLastCell.</li>
+    <li>If a row does not have any cell that has a value, then the row is 
ignored.</li>
+</ul>
+
+<p>
+    As an example, the sheet shown below will be:
+</p>
+
+<table>
+    <tbody>
+    <tr><th>row          
</th><th>A</th><th>B</th><th>C</th><th>D</th><th>E</th><th>F</th><th>G</th></tr>
+    <tr><td class="r">  1</td><td> </td><td> </td><td> </td><td> </td><td> 
</td><td> </td><td> </td></tr>
+    <tr><td class="r">  2</td><td> </td><td> 
</td><td>x</td><td>y</td><td>z</td><td> </td><td> </td></tr>
+    <tr><td class="r">  3</td><td> </td><td> </td><td>1</td><td> </td><td> 
</td><td> </td><td> </td></tr>
+    <tr><td class="r">  4</td><td>2</td><td> </td><td> </td><td>3</td><td> 
</td><td> </td><td> </td></tr>
+    <tr><td class="r">  5</td><td> </td><td> </td><td> </td><td> 
</td><td>4</td><td> </td><td> </td></tr>
+    <tr><td class="r">  6</td><td> </td><td> 
</td><td>5</td><td>6</td><td>7</td><td> </td><td> </td></tr>
+    <tr><td class="r">  7</td><td> </td><td> </td><td> </td><td> </td><td> 
</td><td>8</td><td> </td></tr>
+    <tr><td class="r">  8</td><td> </td><td> </td><td> </td><td> </td><td> 
</td><td> </td><td> </td></tr>
+    <tr><td class="r">  9</td><td> </td><td> </td><td> </td><td> 
</td><td>9</td><td> </td><td> </td></tr>
+    <tr><td class="r"> 10</td><td> </td><td> </td><td> </td><td> </td><td> 
</td><td> </td><td> </td></tr>
+    <tr><td class="r"> 11</td><td> </td><td> </td><td> </td><td> </td><td> 
</td><td> </td><td> </td></tr>
+    </tbody>
+</table>
+
+<p>
+    converted to the following CSV:
+</p>
+
+<pre>
+x,y,z
+1,,
+,3,
+,,4
+5,6,7
+,,9
+</pre>
+
+<ul>
+    <li>C2(x) is the FirstCell, and E2(z) is the FirstRowLastCell.</li>
+    <li>A4(2) is ignored because it is out of range. So is F7(8).</li>
+    <li>Rows 7 and 8 are ignored because they do not have a valid cell.</li>
+    <li>It is important to have a header row as shown in the example to define 
the data area,
+        especially when a sheet includes empty cells.</li>
+</ul>
+
+</body>
+</html>
\ No newline at end of file
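
The FirstCell/FirstRowLastCell rules documented above are simple enough to
express directly. The sketch below applies them to an in-memory grid that
stands in for the example sheet (a plain array rather than a real POI
worksheet, so it illustrates the rules rather than the processor's code); it
reproduces the CSV output shown in the example.

import java.util.ArrayList;
import java.util.List;

public class SheetToCsvSketch {

    // Applies the extraction rules above to a grid where null means an empty cell.
    static List<String> toCsv(String[][] sheet) {
        int firstRow = -1, firstCol = -1;

        // 1) Find the FirstCell: the first row with any value, and its first non-empty column.
        outer:
        for (int r = 0; r < sheet.length; r++) {
            for (int c = 0; c < sheet[r].length; c++) {
                if (sheet[r][c] != null) { firstRow = r; firstCol = c; break outer; }
            }
        }
        if (firstRow < 0) return List.of();

        // 2) Scan the first row from the FirstCell until values stop: the FirstRowLastCell.
        int lastCol = firstCol;
        for (int c = firstCol; c < sheet[firstRow].length && sheet[firstRow][c] != null; c++) {
            lastCol = c;
        }

        // 3) Emit rows restricted to [firstCol, lastCol]; skip rows with no value in that range.
        List<String> csv = new ArrayList<>();
        for (int r = firstRow; r < sheet.length; r++) {
            StringBuilder line = new StringBuilder();
            boolean hasValue = false;
            for (int c = firstCol; c <= lastCol; c++) {
                String v = (c < sheet[r].length && sheet[r][c] != null) ? sheet[r][c] : "";
                hasValue |= !v.isEmpty();
                if (c > firstCol) line.append(',');
                line.append(v);
            }
            if (hasValue) csv.add(line.toString());
        }
        return csv;
    }

    public static void main(String[] args) {
        // The example sheet from the table above (rows 1-11, columns A-G).
        String[][] sheet = {
            {null, null, null, null, null, null, null},
            {null, null, "x", "y", "z", null, null},
            {null, null, "1", null, null, null, null},
            {"2", null, null, "3", null, null, null},
            {null, null, null, null, "4", null, null},
            {null, null, "5", "6", "7", null, null},
            {null, null, null, null, null, "8", null},
            {null, null, null, null, null, null, null},
            {null, null, null, null, "9", null, null},
            {null, null, null, null, null, null, null},
            {null, null, null, null, null, null, null},
        };
        toCsv(sheet).forEach(System.out::println);   // x,y,z / 1,, / ,3, / ,,4 / 5,6,7 / ,,9
    }
}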

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-poi-nar/1.6.0/org.apache.nifi.processors.poi.ConvertExcelToCSVProcessor/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-poi-nar/1.6.0/org.apache.nifi.processors.poi.ConvertExcelToCSVProcessor/index.html?rev=1828578&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-poi-nar/1.6.0/org.apache.nifi.processors.poi.ConvertExcelToCSVProcessor/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-poi-nar/1.6.0/org.apache.nifi.processors.poi.ConvertExcelToCSVProcessor/index.html
 Sat Apr  7 00:33:22 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ConvertExcelToCSVProcessor</title><link 
rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">ConvertExcelToCSVProcessor</h1><h2>Description: </h2><p>Consumes a 
Microsoft Excel document and converts each worksheet to csv. Each sheet from 
the incoming Excel document will generate a new Flowfile that will be output 
from this processor. Each output Flowfile's contents will be formatted as a csv 
file where each row from the excel sheet is output as a newline in the csv 
file. This processor is currently only capable of processing .xlsx (XSSF 2007 
OOXML file format) Excel documents and not older .xls (HSSF '97(-2007) file 
format) documents. This processor also expects well forma
 tted CSV content and will not escape cells containing invalid content such as 
newlines or additional commas.</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>excel, csv, poi</p><h3>Properties: </h3><p>In the list below, the names 
of required properties appear in <strong>bold</strong>. Any other properties 
(not in bold) are considered optional. The table also indicates any default 
values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name">Sheets to Extract</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Comma separated list of Excel 
document sheet names that should be extracted from the excel document. If this 
property is left blank then all of the sheets will be extracted from the Excel 
document. The list
 of names is case-insensitive. Any sheets not specified in this value will be 
ignored.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name"><strong>Number of Rows to 
Skip</strong></td><td id="default-value">0</td><td 
id="allowable-values"></td><td id="description">The row number of the first row 
to start processing. Use this to skip over rows of data at the top of your 
worksheet that are not part of the dataset. Empty rows of data anywhere in the 
spreadsheet will always be skipped, no matter what this value is set 
to.<br/><strong>Supports Expression Language: true</strong></td></tr><tr><td 
id="name">Columns To Skip</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">Comma delimited list of column 
numbers to skip. Use the column number and not the letter designation. Use 
this to skip over columns anywhere in your worksheet that you don't want 
extracted as part of the record.<br/><strong>Supports Expression Language: 
true</
 strong></td></tr><tr><td id="name"><strong>Format Cell Values</strong></td><td 
id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Should the cell values be written to CSV using the formatting 
applied in Excel, or should they be printed as raw values.</td></tr><tr><td 
id="name"><strong>CSV Format</strong></td><td id="default-value">custom</td><td 
id="allowable-values"><ul><li>Custom Format <img 
src="../../../../../html/images/iconInfo.png" alt="The format of the CSV is 
configured by using the properties of this Controller Service, such as Value 
Separator" title="The format of the CSV is configured by using the properties 
of this Controller Service, such as Value Separator"></img></li><li>RFC 4180 
<img src="../../../../../html/images/iconInfo.png" alt="CSV data follows the 
RFC 4180 Specification defined at https://tools.ietf.org/html/rfc4180"; 
title="CSV data follows the RFC 4180 Specification defined at https://tools.ie
 tf.org/html/rfc4180"></img></li><li>Microsoft Excel <img 
src="../../../../../html/images/iconInfo.png" alt="CSV data follows the format 
used by Microsoft Excel" title="CSV data follows the format used by Microsoft 
Excel"></img></li><li>Tab-Delimited <img 
src="../../../../../html/images/iconInfo.png" alt="CSV data is Tab-Delimited 
instead of Comma Delimited" title="CSV data is Tab-Delimited instead of Comma 
Delimited"></img></li><li>MySQL Format <img 
src="../../../../../html/images/iconInfo.png" alt="CSV data follows the format 
used by MySQL" title="CSV data follows the format used by 
MySQL"></img></li><li>Informix Unload <img 
src="../../../../../html/images/iconInfo.png" alt="The format used by Informix 
when issuing the UNLOAD TO file_name command" title="The format used by 
Informix when issuing the UNLOAD TO file_name command"></img></li><li>Informix 
Unload Escape Disabled <img src="../../../../../html/images/iconInfo.png" 
alt="The format used by Informix when issuing the UNLOAD TO
  file_name command with escaping disabled" title="The format used by Informix 
when issuing the UNLOAD TO file_name command with escaping 
disabled"></img></li></ul></td><td id="description">Specifies which "format" 
the CSV data is in, or specifies if custom formatting should be 
used.</td></tr><tr><td id="name"><strong>Value Separator</strong></td><td 
id="default-value">,</td><td id="allowable-values"></td><td 
id="description">The character that is used to separate values/fields in a CSV 
Record</td></tr><tr><td id="name"><strong>Include Header Line</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether or not the CSV column names should be 
written out as the first line.</td></tr><tr><td id="name"><strong>Quote 
Character</strong></td><td id="default-value">"</td><td 
id="allowable-values"></td><td id="description">The character that is used to 
quote values so that escape characters do not hav
 e to be used</td></tr><tr><td id="name"><strong>Escape 
Character</strong></td><td id="default-value">\</td><td 
id="allowable-values"></td><td id="description">The character that is used to 
escape characters that would otherwise have a specific meaning to the CSV 
Parser.</td></tr><tr><td id="name">Comment Marker</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
character that is used to denote the start of a comment. Any line that begins 
with this comment will be ignored.</td></tr><tr><td id="name">Null 
String</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies a String that, if present as a value in the CSV, 
should be considered a null field instead of using the literal 
value.</td></tr><tr><td id="name"><strong>Trim Fields</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether or not white space should be removed from t
 he beginning and end of fields</td></tr><tr><td id="name"><strong>Quote 
Mode</strong></td><td id="default-value">NONE</td><td 
id="allowable-values"><ul><li>Quote All Values <img 
src="../../../../../html/images/iconInfo.png" alt="All values will be quoted 
using the configured quote character." title="All values will be quoted using 
the configured quote character."></img></li><li>Quote Minimal <img 
src="../../../../../html/images/iconInfo.png" alt="Values will be quoted only 
if they contain special characters such as newline characters or field 
separators." title="Values will be quoted only if they contain special 
characters such as newline characters or field 
separators."></img></li><li>Quote Non-Numeric Values <img 
src="../../../../../html/images/iconInfo.png" alt="Values will be quoted unless 
the value is a number." title="Values will be quoted unless the value is a 
number."></img></li><li>Do Not Quote Values <img 
src="../../../../../html/images/iconInfo.png" alt="Values wi
 ll not be quoted. Instead, all special characters will be escaped using the 
configured escape character." title="Values will not be quoted. Instead, all 
special characters will be escaped using the configured escape 
character."></img></li></ul></td><td id="description">Specifies how fields 
should be quoted when they are written</td></tr><tr><td 
id="name"><strong>Record Separator</strong></td><td 
id="default-value">\n</td><td id="allowable-values"></td><td 
id="description">Specifies the characters to use in order to separate CSV 
Records</td></tr><tr><td id="name"><strong>Include Trailing 
Delimiter</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If true, a trailing delimiter will be added to each CSV Record 
that is written. If false, the trailing delimiter will be 
omitted.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td>
 <td>Excel data converted to csv</td></tr><tr><td>failure</td><td>Failed to 
parse the Excel document</td></tr><tr><td>original</td><td>Original Excel 
document received by this processor</td></tr></table><h3>Reads Attributes: 
</h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>sheetname</td><td>The
 name of the Excel sheet that this particular row of data came from in the 
Excel document</td></tr><tr><td>numrows</td><td>The number of rows in this 
Excel Sheet</td></tr><tr><td>sourcefilename</td><td>The name of the Excel 
document file that this data originated 
from</td></tr><tr><td>convertexceltocsvprocessor.error</td><td>Error message 
that was encountered on a per Excel sheet basis. This attribute is only 
populated if an error occurred while processing the particular sheet. Having 
the error present at the sheet level will allow the end user to better 
understand what syntax errors in their excel doc on a la
 rger scale caused the error.</td></tr></table><h3>State management: </h3>This 
component does not store state.<h3>Restricted: </h3>This component is not 
restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file
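
The CSV Format, Value Separator, Quote Character, Escape Character, Quote Mode,
Record Separator and Include Trailing Delimiter properties listed above mirror
the options of Apache Commons CSV. Assuming that mapping (the document does not
spell out the processor's internal wiring), a "Custom Format" equivalent to the
listed defaults could be assembled like this:

import java.io.StringWriter;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;
import org.apache.commons.csv.QuoteMode;

public class CustomCsvFormatSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative mapping of the table's defaults onto Commons CSV.
        CSVFormat format = CSVFormat.DEFAULT
                .withDelimiter(',')             // Value Separator
                .withQuote('"')                 // Quote Character
                .withEscape('\\')               // Escape Character
                .withQuoteMode(QuoteMode.NONE)  // Quote Mode
                .withRecordSeparator("\n")      // Record Separator
                .withTrailingDelimiter(false);  // Include Trailing Delimiter

        // Hypothetical header and record, just to show the format in use.
        StringWriter out = new StringWriter();
        try (CSVPrinter printer = new CSVPrinter(out, format.withHeader("sheet", "rows"))) {
            printer.printRecord("Sheet1", 42);
        }
        System.out.print(out);
    }
}

The named formats offered by the property (RFC 4180, Microsoft Excel,
Tab-Delimited, MySQL, Informix Unload) correspond to the library's predefined
CSVFormat constants, so selecting one of them skips the custom builder above.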

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroReader/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroReader/index.html?rev=1828578&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroReader/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroReader/index.html
 Sat Apr  7 00:33:22 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>AvroReader</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">AvroReader</h1><h2>Description: </h2><p>Parses Avro data and returns 
each Avro record as a separate Record object. The Avro data may contain the 
schema itself, or the schema can be externalized and accessed by one of the 
methods offered by the 'Schema Access Strategy' property.</p><h3>Tags: 
</h3><p>avro, parse, record, row, reader, delimited, comma, separated, 
values</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a prope
 rty supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Schema Access Strategy</strong></td><td 
id="default-value">embedded-avro-schema</td><td 
id="allowable-values"><ul><li>Use 'Schema Name' Property <img 
src="../../../../../html/images/iconInfo.png" alt="The name of the Schema to 
use is specified by the 'Schema Name' Property. The value of this property is 
used to lookup the Schema in the configured Schema Registry service." 
title="The name of the Schema to use is specified by the 'Schema Name' 
Property. The value of this property is used to lookup the Schema in the 
configured Schema Registry service."></img></li><li>Use 'Schema Text' Property 
<img src="../../../../../html/images/iconInfo.png" alt="The text of the Schema 
itself is specified by the 'Schema Text' Property. The value of this pr
 operty must be a valid Avro Schema. If Expression Language is used, the value 
of the 'Schema Text' property must be valid after substituting the 
expressions." title="The text of the Schema itself is specified by the 'Schema 
Text' Property. The value of this property must be a valid Avro Schema. If 
Expression Language is used, the value of the 'Schema Text' property must be 
valid after substituting the expressions."></img></li><li>HWX Schema Reference 
Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile 
contains 3 Attributes that will be used to lookup a Schema from the configured 
Schema Registry: 'schema.identifier', 'schema.version', and 
'schema.protocol.version'" title="The FlowFile contains 3 Attributes that will 
be used to lookup a Schema from the configured Schema Registry: 
'schema.identifier', 'schema.version', and 
'schema.protocol.version'"></img></li><li>HWX Content-Encoded Schema Reference 
<img src="../../../../../html/images/iconInfo.png" alt="Th
 e content of the FlowFile contains a reference to a schema in the Schema 
Registry service. The reference is encoded as a single byte indicating the 
'protocol version', followed by 8 bytes indicating the schema identifier, and 
finally 4 bytes indicating the schema version, as per the Hortonworks Schema 
Registry serializers and deserializers, found at 
https://github.com/hortonworks/registry"; title="The content of the FlowFile 
contains a reference to a schema in the Schema Registry service. The reference 
is encoded as a single byte indicating the 'protocol version', followed by 8 
bytes indicating the schema identifier, and finally 4 bytes indicating the 
schema version, as per the Hortonworks Schema Registry serializers and 
deserializers, found at 
https://github.com/hortonworks/registry";></img></li><li>Confluent 
Content-Encoded Schema Reference <img 
src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile 
contains a reference to a schema in the Schema Registry serv
 ice. The reference is encoded as a single 'Magic Byte' followed by 4 bytes 
representing the identifier of the schema, as outlined at 
http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html.
 This is based on version 3.2.x of the Confluent Schema Registry." title="The 
content of the FlowFile contains a reference to a schema in the Schema Registry 
service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes 
representing the identifier of the schema, as outlined at 
http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html.
 This is based on version 3.2.x of the Confluent Schema 
Registry."></img></li><li>Use Embedded Avro Schema <img 
src="../../../../../html/images/iconInfo.png" alt="The FlowFile has the Avro 
Schema embedded within the content, and this schema will be used." title="The 
FlowFile has the Avro Schema embedded within the content, and this schema will 
be used."></img></li></ul></td><td id="description">Specifies h
 ow to obtain the schema that is to be used for interpreting the 
data.</td></tr><tr><td id="name">Schema Registry</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a 
href="../../../nifi-registry-nar/1.6.0/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a><br/><a
 
href="../../../nifi-confluent-platform-nar/1.6.0/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a
 
href="../../../nifi-hwx-schema-registry-nar/1.6.0/org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry/index.html">HortonworksSchemaRegistry</a></td><td
 id="description">Specifies the Controller Service to use for the Schema 
Registry</td></tr><tr><td id="name">Schema Name</td><td 
id="default-value">${schema.name}</td><td id="allowable-values"></td><td 
id="description">Specifies the name of the schema to 
 lookup in the Schema Registry property<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Schema Version</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the version of the schema to lookup in the Schema 
Registry. If not specified then the latest version of the schema will be 
retrieved.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Schema Branch</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the name of the branch to use when looking up the 
schema in the Schema Registry property. If the chosen Schema Registry does not 
support branching, this value will be ignored.<br/><strong>Supports Expression 
Language: true</strong></td></tr><tr><td id="name">Schema Text</td><td 
id="default-value">${avro.schema}</td><td id="allowable-values"></td><td 
id="description">The text of an Avro-formatted Schema<br/><strong>Supports 
Expression Lan
 guage: true</strong></td></tr></table><h3>State management: </h3>This 
component does not store state.<h3>Restricted: </h3>This component is not 
restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file
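
The two content-encoded schema reference strategies above describe small
fixed-size headers: one byte of protocol version, eight bytes of schema
identifier and four bytes of schema version for the Hortonworks encoding, and a
single magic byte plus a four-byte schema identifier for the Confluent
encoding. A sketch of decoding those headers is shown below; the big-endian
byte order and the sample values are assumptions.

import java.nio.ByteBuffer;

public class SchemaRefHeaderSketch {

    // 13-byte Hortonworks-style header: 1 byte protocol version,
    // 8 bytes schema identifier, 4 bytes schema version.
    static void decodeHwxHeader(byte[] flowFileContent) {
        ByteBuffer buf = ByteBuffer.wrap(flowFileContent);
        byte protocolVersion = buf.get();
        long schemaId        = buf.getLong();
        int schemaVersion    = buf.getInt();
        System.out.printf("HWX ref: protocol=%d id=%d version=%d%n",
                protocolVersion, schemaId, schemaVersion);
        // buf now points at the serialized record payload.
    }

    // 5-byte Confluent-style header: magic byte + 4-byte schema identifier.
    static void decodeConfluentHeader(byte[] flowFileContent) {
        ByteBuffer buf = ByteBuffer.wrap(flowFileContent);
        byte magic   = buf.get();
        int schemaId = buf.getInt();
        System.out.printf("Confluent ref: magic=%d id=%d%n", magic, schemaId);
    }

    public static void main(String[] args) {
        // Assumed sample headers, for illustration only.
        byte[] hwx = ByteBuffer.allocate(13).put((byte) 1).putLong(42L).putInt(3).array();
        byte[] confluent = ByteBuffer.allocate(5).put((byte) 0).putInt(7).array();
        decodeHwxHeader(hwx);
        decodeConfluentHeader(confluent);
    }
}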

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html?rev=1828578&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html
 Sat Apr  7 00:33:22 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>AvroRecordSetWriter</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">AvroRecordSetWriter</h1><h2>Description: </h2><p>Writes the contents of 
a RecordSet in Binary Avro format.</p><h3>Tags: </h3><p>avro, result, set, 
writer, serializer, record, recordset, row</p><h3>Properties: </h3><p>In the 
list below, the names of required properties appear in <strong>bold</strong>. 
Any other properties (not in bold) are considered optional. The table also 
indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default Value</th
 ><th>Allowable Values</th><th>Description</th></tr><tr><td 
 >id="name"><strong>Schema Write Strategy</strong></td><td 
 >id="default-value">avro-embedded</td><td id="allowable-values"><ul><li>Embed 
 >Avro Schema <img src="../../../../../html/images/iconInfo.png" alt="The 
 >FlowFile will have the Avro schema embedded into the content, as is typical 
 >with Avro" title="The FlowFile will have the Avro schema embedded into the 
 >content, as is typical with Avro"></img></li><li>Set 'schema.name' Attribute 
 ><img src="../../../../../html/images/iconInfo.png" alt="The FlowFile will be 
 >given an attribute named 'schema.name' and this attribute will indicate the 
 >name of the schema in the Schema Registry. Note that ifthe schema for a 
 >record is not obtained from a Schema Registry, then no attribute will be 
 >added." title="The FlowFile will be given an attribute named 'schema.name' 
 >and this attribute will indicate the name of the schema in the Schema 
 >Registry. Note that ifthe schema for a record is not obtained
  from a Schema Registry, then no attribute will be added."></img></li><li>Set 
'avro.schema' Attribute <img src="../../../../../html/images/iconInfo.png" 
alt="The FlowFile will be given an attribute named 'avro.schema' and this 
attribute will contain the Avro Schema that describes the records in the 
FlowFile. The contents of the FlowFile need not be Avro, but the text of the 
schema will be used." title="The FlowFile will be given an attribute named 
'avro.schema' and this attribute will contain the Avro Schema that describes 
the records in the FlowFile. The contents of the FlowFile need not be Avro, but 
the text of the schema will be used."></img></li><li>HWX Schema Reference 
Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile 
will be given a set of 3 attributes to describe the schema: 
'schema.identifier', 'schema.version', and 'schema.protocol.version'. Note that 
if the schema for a record does not contain the necessary identifier and 
version, an Exception
  will be thrown when attempting to write the data." title="The FlowFile will 
be given a set of 3 attributes to describe the schema: 'schema.identifier', 
'schema.version', and 'schema.protocol.version'. Note that if the schema for a 
record does not contain the necessary identifier and version, an Exception will 
be thrown when attempting to write the data."></img></li><li>HWX 
Content-Encoded Schema Reference <img 
src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile 
will contain a reference to a schema in the Schema Registry service. The 
reference is encoded as a single byte indicating the 'protocol version', 
followed by 8 bytes indicating the schema identifier, and finally 4 bytes 
indicating the schema version, as per the Hortonworks Schema Registry 
serializers and deserializers, as found at 
https://github.com/hortonworks/registry. This will be prepended to each 
FlowFile. Note that if the schema for a record does not contain the necessary 
identifier and versi
 on, an Exception will be thrown when attempting to write the data." title="The 
content of the FlowFile will contain a reference to a schema in the Schema 
Registry service. The reference is encoded as a single byte indicating the 
'protocol version', followed by 8 bytes indicating the schema identifier, and 
finally 4 bytes indicating the schema version, as per the Hortonworks Schema 
Registry serializers and deserializers, as found at 
https://github.com/hortonworks/registry. This will be prepended to each 
FlowFile. Note that if the schema for a record does not contain the necessary 
identifier and version, an Exception will be thrown when attempting to write 
the data."></img></li><li>Confluent Schema Registry Reference <img 
src="../../../../../html/images/iconInfo.png" alt="The content of the FlowFile 
will contain a reference to a schema in the Schema Registry service. The 
reference is encoded as a single 'Magic Byte' followed by 4 bytes representing 
the identifier of the schema, as out
 lined at 
http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html.
 This will be prepended to each FlowFile. Note that if the schema for a record 
does not contain the necessary identifier and version, an Exception will be 
thrown when attempting to write the data. This is based on the encoding used by 
version 3.2.x of the Confluent Schema Registry." title="The content of the 
FlowFile will contain a reference to a schema in the Schema Registry service. 
The reference is encoded as a single 'Magic Byte' followed by 4 bytes 
representing the identifier of the schema, as outlined at 
http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html.
 This will be prepended to each FlowFile. Note that if the schema for a record 
does not contain the necessary identifier and version, an Exception will be 
thrown when attempting to write the data. This is based on the encoding used by 
version 3.2.x of the Confluent Schema Registry."></img></li><li>Do Not Write 
 Schema <img src="../../../../../html/images/iconInfo.png" alt="Do not add any 
schema-related information to the FlowFile." title="Do not add any 
schema-related information to the FlowFile."></img></li></ul></td><td 
id="description">Specifies how the schema for a Record should be added to the 
data.</td></tr><tr><td id="name"><strong>Schema Access 
Strategy</strong></td><td id="default-value">inherit-record-schema</td><td 
id="allowable-values"><ul><li>Use 'Schema Name' Property <img 
src="../../../../../html/images/iconInfo.png" alt="The name of the Schema to 
use is specified by the 'Schema Name' Property. The value of this property is 
used to lookup the Schema in the configured Schema Registry service." 
title="The name of the Schema to use is specified by the 'Schema Name' 
Property. The value of this property is used to lookup the Schema in the 
configured Schema Registry service."></img></li><li>Inherit Record Schema <img 
src="../../../../../html/images/iconInfo.png" alt="The schema us
 ed to write records will be the same schema that was given to the Record when 
the Record was created." title="The schema used to write records will be the 
same schema that was given to the Record when the Record was 
created."></img></li><li>Use 'Schema Text' Property <img 
src="../../../../../html/images/iconInfo.png" alt="The text of the Schema 
itself is specified by the 'Schema Text' Property. The value of this property 
must be a valid Avro Schema. If Expression Language is used, the value of the 
'Schema Text' property must be valid after substituting the expressions." 
title="The text of the Schema itself is specified by the 'Schema Text' 
Property. The value of this property must be a valid Avro Schema. If Expression 
Language is used, the value of the 'Schema Text' property must be valid after 
substituting the expressions."></img></li></ul></td><td 
id="description">Specifies how to obtain the schema that is to be used for 
interpreting the data.</td></tr><tr><td id="name">Schema Reg
 istry</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a 
href="../../../nifi-registry-nar/1.6.0/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a><br/><a
 
href="../../../nifi-confluent-platform-nar/1.6.0/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a
 
href="../../../nifi-hwx-schema-registry-nar/1.6.0/org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry/index.html">HortonworksSchemaRegistry</a></td><td
 id="description">Specifies the Controller Service to use for the Schema 
Registry</td></tr><tr><td id="name">Schema Name</td><td 
id="default-value">${schema.name}</td><td id="allowable-values"></td><td 
id="description">Specifies the name of the schema to lookup in the Schema 
Registry property<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr
 ><td id="name">Schema Version</td><td id="default-value"></td><td 
 >id="allowable-values"></td><td id="description">Specifies the version of the 
 >schema to lookup in the Schema Registry. If not specified then the latest 
 >version of the schema will be retrieved.<br/><strong>Supports Expression 
 >Language: true</strong></td></tr><tr><td id="name">Schema Branch</td><td 
 >id="default-value"></td><td id="allowable-values"></td><td 
 >id="description">Specifies the name of the branch to use when looking up the 
 >schema in the Schema Registry property. If the chosen Schema Registry does 
 >not support branching, this value will be ignored.<br/><strong>Supports 
 >Expression Language: true</strong></td></tr><tr><td id="name">Schema 
 >Text</td><td id="default-value">${avro.schema}</td><td 
 >id="allowable-values"></td><td id="description">The text of an Avro-formatted 
 >Schema<br/><strong>Supports Expression Language: 
 >true</strong></td></tr><tr><td id="name"><strong>Compression 
 >Format</strong></td><td id="default-val
 ue">NONE</td><td 
id="allowable-values"><ul><li>BZIP2</li><li>DEFLATE</li><li>NONE</li><li>SNAPPY</li><li>LZO</li></ul></td><td
 id="description">Compression type to use when writing Avro files. Default is 
None.</td></tr></table><h3>State management: </h3>This component does not store 
state.<h3>Restricted: </h3>This component is not restricted.<h3>System Resource 
Considerations:</h3>None specified.</body></html>
\ No newline at end of file
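
With the default "Embed Avro Schema" write strategy above, the schema travels
inside the FlowFile content as an ordinary Avro container file. The sketch
below shows that shape using the plain Avro library; the record schema and the
null codec (matching a Compression Format of NONE) are illustrative
assumptions rather than NiFi code.

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.file.CodecFactory;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class EmbeddedSchemaWriteSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical schema; in NiFi it comes from the configured
        // Schema Access Strategy (e.g. Inherit Record Schema).
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
              + "{\"name\":\"name\",\"type\":\"string\"},"
              + "{\"name\":\"age\",\"type\":\"int\"}]}");

        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "alice");
        user.put("age", 30);

        // DataFileWriter produces an Avro container file, i.e. the schema is
        // embedded with the data -- the 'Embed Avro Schema' write strategy.
        ByteArrayOutputStream flowFileContent = new ByteArrayOutputStream();
        try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.setCodec(CodecFactory.nullCodec());   // Compression Format: NONE
            writer.create(schema, flowFileContent);
            writer.append(user);
        }
        System.out.println("bytes written: " + flowFileContent.size());
    }
}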

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/additionalDetails.html?rev=1828578&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/additionalDetails.html
 Sat Apr  7 00:33:22 2018
@@ -0,0 +1,334 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8"/>
+        <title>CSVReader</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css"/>
+    </head>
+
+    <body>
+        <p>
+               The CSVReader Controller Service expects the first line of a FlowFile to specify the name of
+               each column in the data. The remainder of the FlowFile is expected to be valid CSV data from
+               which to form appropriate Records. The reader allows the CSV format to be customized, such as
+               which character should be used to separate CSV fields, which character should be used for quoting
+               and when to quote fields, which character should denote a comment, etc.
+        </p>
+
+
+               <h2>Schemas and Type Coercion</h2>
+               
+               <p>
+                       When a record is parsed from incoming data, it is 
separated into fields. Each of these fields is then looked up against the
+                       configured schema (by field name) in order to determine 
what the type of the data should be. If the field is not present in
+                       the schema, that field is omitted from the Record. If 
the field is found in the schema, the data type of the received data
+                       is compared against the data type specified in the 
schema. If the types match, the value of that field is used as-is. If the
+                       schema indicates that the field should be of a 
different type, then the Controller Service will attempt to coerce the data
+                       into the type specified by the schema. If the field 
cannot be coerced into the specified type, an Exception will be thrown.
+               </p>
+               
+               <p>
+                       The following rules apply when attempting to coerce a 
field value from one data type to another:
+               </p>
+                       
+               <ul>
+                       <li>Any data type can be coerced into a String 
type.</li>
+                       <li>Any numeric data type (Byte, Short, Int, Long, 
Float, Double) can be coerced into any other numeric data type.</li>
+                       <li>Any numeric value can be coerced into a Date, Time, 
or Timestamp type, by assuming that the Long value is the number of
+                       milliseconds since epoch (Midnight GMT, January 1, 
1970).</li>
+                       <li>A String value can be coerced into a Date, Time, or 
Timestamp type, if its format matches the configured "Date Format," "Time 
Format,"
+                               or "Timestamp Format."</li>
+                       <li>A String value can be coerced into a numeric value 
if the value is of the appropriate type. For example, the String value
+                               <code>8</code> can be coerced into any numeric 
type. However, the String value <code>8.2</code> can be coerced into a Double 
or Float
+                               type but not an Integer.</li>
+                       <li>A String value of "true" or "false" (regardless of 
case) can be coerced into a Boolean value.</li>
+                       <li>A String value that is not empty can be coerced 
into a Char type. If the String contains more than 1 character, the first 
character is used
+                               and the rest of the characters are ignored.</li>
+                       <li>Any "date/time" type (Date, Time, Timestamp) can be 
coerced into any other "date/time" type.</li>
+                       <li>Any "date/time" type can be coerced into a Long 
type, representing the number of milliseconds since epoch (Midnight GMT, 
January 1, 1970).</li>
+                       <li>Any "date/time" type can be coerced into a String. 
The format of the String is whatever DateFormat is configured for the 
corresponding
+                               property (Date Format, Time Format, Timestamp 
Format property).</li>
+               </ul>
+               
+               <p>
+                       If none of the above rules apply when attempting to 
coerce a value from one data type to another, the coercion will fail and an 
Exception
+                       will be thrown.
+               </p>
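+
+               <p>
+                       As an illustration only (this is not the Controller Service's actual implementation),
+                       the String-coercion rules above might be sketched in Java roughly as follows; the method
+                       name <code>coerceString</code>, its argument names, and the type-name strings in the
+                       switch are hypothetical:
+               </p>
+
+<code>
+<pre>
+// Hypothetical sketch of the String-coercion rules described above (not NiFi code).
+static Object coerceString(String value, String schemaType) {
+    switch (schemaType) {
+        case "string":
+            return value;                          // any value is a valid String
+        case "int":
+            return Integer.valueOf(value.trim());  // "8" succeeds; "8.2" throws NumberFormatException
+        case "double":
+            return Double.valueOf(value.trim());   // "8" and "8.2" both succeed
+        case "boolean":
+            if ("true".equalsIgnoreCase(value) || "false".equalsIgnoreCase(value)) {
+                return Boolean.valueOf(value);     // "true"/"false" in any case
+            }
+            throw new IllegalArgumentException("Cannot coerce '" + value + "' to boolean");
+        default:
+            throw new IllegalArgumentException("Cannot coerce '" + value + "' to " + schemaType);
+    }
+}
+</pre>
+</code>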
+                       
+                       
+
+               <h2>Examples</h2>
+               
+               <h3>Example 1</h3>
+               
+        <p>
+               As an example, consider a FlowFile whose content consists of the following:
+        </p>
+
+<code>
+id, name, balance, join_date, notes<br />
+1, John, 48.23, 04/03/2007, "Our very<br />
+first customer!"<br />
+2, Jane, 1245.89, 08/22/2009,<br />
+3, Frank Franklin, "48481.29", 04/04/2016,<br />
+</code>
+        
+        <p>
+               Additionally, let's consider that this Controller Service is 
configured with the Schema Registry pointing to an AvroSchemaRegistry and the 
schema is
+               configured as the following:
+        </p>
+        
+<code>
+<pre>
+{
+  "namespace": "nifi",
+  "name": "balances",
+  "type": "record",
+  "fields": [
+    { "name": "id", "type": "int" },
+    { "name": "name", "type": "string" },
+    { "name": "balance", "type": "double" },
+    { "name": "join_date", "type": {
+      "type": "int",
+      "logicalType": "date"
+    }},
+    { "name": "notes", "type": "string" }
+  ]
+}
+</pre>
+</code>
+
+       <p>
+               In the example above, we see that the 'join_date' column is a 
Date type. In order for the CSV Reader to be able to properly parse a value as 
a date,
+               we need to provide the reader with the date format to use. In 
this example, we would configure the Date Format property to be 
<code>MM/dd/yyyy</code>
+               to indicate that it is a two-digit month, followed by a 
two-digit day, followed by a four-digit year - each separated by a slash.
+               In this case, the result will be that this FlowFile consists of 
3 different records. The first record will contain the following values:
+       </p>
+
+               <table>
+               <thead>
+                       <tr>
+                               <th>Field Name</th>
+                               <th>Field Value</th>
+                       </tr>
+               </thead>
+               <tbody>
+                       <tr>
+                               <td>id</td>
+                               <td>1</td>
+                       </tr>
+                       <tr>
+                               <td>name</td>
+                               <td>John</td>
+                       </tr>
+                       <tr>
+                               <td>balance</td>
+                               <td>48.23</td>
+                       </tr>
+                       <tr>
+                               <td>join_date</td>
+                               <td>04/03/2007</td>
+                       </tr>
+                       <tr>
+                               <td>notes</td>
+                               <td>Our very<br />first customer!</td>
+                       </tr>
+               </tbody>
+       </table>
+       
+       <p>
+               The second record will contain the following values:
+       </p>
+       
+               <table>
+               <thead>
+                       <tr>
+                               <th>Field Name</th>
+                               <th>Field Value</th>
+                       </tr>
+               </thead>
+               <tbody>
+                       <tr>
+                               <td>id</td>
+                               <td>2</td>
+                       </tr>
+                       <tr>
+                               <td>name</td>
+                               <td>Jane</td>
+                       </tr>
+                       <tr>
+                               <td>balance</td>
+                               <td>1245.89</td>
+                       </tr>
+                       <tr>
+                               <td>join_date</td>
+                               <td>08/22/2009</td>
+                       </tr>
+                       <tr>
+                               <td>notes</td>
+                               <td></td>
+                       </tr>
+               </tbody>
+       </table>
+       
+               <p>
+                       The third record will contain the following values:
+               </p>            
+       
+               <table>
+               <thead>
+                       <tr>
+                               <th>Field Name</th>
+                               <th>Field Value</th>
+                       </tr>
+               </thead>
+               <tbody>
+                       <tr>
+                               <td>id</td>
+                               <td>3</td>
+                       </tr>
+                       <tr>
+                               <td>name</td>
+                               <td>Frank Franklin</td>
+                       </tr>
+                       <tr>
+                               <td>balance</td>
+                               <td>48481.29</td>
+                       </tr>
+                       <tr>
+                               <td>join_date</td>
+                               <td>04/04/2016</td>
+                       </tr>
+                       <tr>
+                               <td>notes</td>
+                               <td></td>
+                       </tr>
+               </tbody>
+       </table>
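+
+       <p>
+               For reference, the "Date Format" value <code>MM/dd/yyyy</code> used above follows Java's
+               <code>SimpleDateFormat</code> pattern conventions. A minimal, standalone check of the pattern
+               against the sample value (plain Java, not NiFi code; the class name <code>DateFormatCheck</code>
+               is made up for this example) might look like:
+       </p>
+
+<code>
+<pre>
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+public class DateFormatCheck {
+    public static void main(String[] args) throws Exception {
+        // Same pattern as the reader's "Date Format" property in the example above
+        SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy");
+
+        // join_date value from the first record of the sample CSV
+        Date joinDate = format.parse("04/03/2007");
+        System.out.println(joinDate);
+    }
+}
+</pre>
+</code>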
+
+
+
+       <h3>Example 2 - Schema with CSV Header Line</h3>
+
+       <p>
+               When CSV data includes a header line that provides the column names, the reader offers
+               several properties for configuring how those column names are handled. The
+               "Schema Access Strategy" property, together with its associated properties ("Schema Registry,"
+               "Schema Text," and "Schema Name"), specifies how the schema is obtained. If the "Schema Access Strategy"
+               is set to "Use String Fields From Header" then the header line of the CSV will be used to determine the schema.
+               Otherwise, the schema is obtained from another source. But what happens if, for instance, a schema is
+               obtained from a Schema Registry and the CSV Header indicates a different set of column names?
+       </p>
+       
+       <p>
+               For example, let's say that the following schema is obtained 
from the Schema Registry:
+       </p>
+
+<code>
+<pre>
+{
+  "namespace": "nifi",
+  "name": "balances",
+  "type": "record",
+  "fields": [
+    { "name": "id", "type": "int" },
+    { "name": "name", "type": "string" },
+    { "name": "balance", "type": "double" },
+    { "name": "memo", "type": "string" }
+  ]
+}
+</pre>
+</code>
+               
+               <p>
+                       And the CSV contains the following data:
+               </p>
+               
+<code>
+<pre>
+id, name, balance, notes
+1, John Doe, 123.45, First Customer
+</pre>
+</code>
+               
+               <p>
+               Note here that our schema indicates that the final column is 
named "memo" whereas the CSV Header indicates that it is named "notes."
+               </p>
+       
+       <p>
+       In this case, the reader will look at the "Ignore CSV Header Column 
Names" property. If this property is set to "true" then the column names
+       provided in the CSV will simply be ignored and the last column will be 
called "memo." However, if the "Ignore CSV Header Column Names" property
+       is set to "false" then the result will be that the last column will be 
named "notes" and each record will have a null value for the "memo" column.
+       </p>
+
+               <p>
+               With the "Ignore CSV Header Column Names" property set to "true":<br />
+               <table>
+               <thead>
+                       <tr>
+                               <th>Field Name</th>
+                               <th>Field Value</th>
+                       </tr>
+               </thead>
+               <tbody>
+                       <tr>
+                               <td>id</td>
+                               <td>1</td>
+                       </tr>
+                       <tr>
+                               <td>name</td>
+                               <td>John Doe</td>
+                       </tr>
+                       <tr>
+                               <td>balance</td>
+                               <td>123.45</td>
+                       </tr>
+                       <tr>
+                               <td>memo</td>
+                               <td>First Customer</td>
+                       </tr>
+               </tbody>
+       </table>
+               </p>
+               
+               
+               <p>
+               With the "Ignore CSV Header Column Names" property set to "false":<br />
+                               <table>
+               <thead>
+                       <tr>
+                               <th>Field Name</th>
+                               <th>Field Value</th>
+                       </tr>
+               </thead>
+               <tbody>
+                       <tr>
+                               <td>id</td>
+                               <td>1</td>
+                       </tr>
+                       <tr>
+                               <td>name</td>
+                               <td>John Doe</td>
+                       </tr>
+                       <tr>
+                               <td>balance</td>
+                               <td>123.45</td>
+                       </tr>
+                       <tr>
+                               <td>notes</td>
+                               <td>First Customer</td>
+                       </tr>
+                       <tr>
+                               <td>memo</td>
+                               <td><code>null</code></td>
+                       </tr>
+               </tbody>
+       </table>
+               </p>
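+
+               <p>
+                       A rough way to think about this setting (illustration only, not NiFi's implementation;
+                       the method name <code>resolveFieldName</code> is hypothetical) is that each column's
+                       field name is taken either from the schema or from the CSV header:
+               </p>
+
+<code>
+<pre>
+// Hypothetical sketch of the column-naming decision described above (not NiFi code).
+static String resolveFieldName(String headerName, String schemaName, boolean ignoreHeaderNames) {
+    // true:  the schema alone drives the field names ("memo" in the example above)
+    // false: the names from the CSV header are used ("notes"), and schema-only
+    //        fields such as "memo" end up with a null value
+    return ignoreHeaderNames ? schemaName : headerName;
+}
+</pre>
+</code>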
+               
+    </body>
+</html>

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/index.html?rev=1828578&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.6.0/org.apache.nifi.csv.CSVReader/index.html
 Sat Apr  7 00:33:22 2018
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>CSVReader</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">CSVReader</h1><h2>Description: </h2><p>Parses CSV-formatted data, 
returning each row in the CSV file as a separate record. This reader assumes 
that the first line in the content is the column names and all subsequent lines 
are the values. See Controller Service's Usage for further 
documentation.</p><p><a href="additionalDetails.html">Additional 
Details...</a></p><h3>Tags: </h3><p>csv, parse, record, row, reader, delimited, 
comma, separated, values</p><h3>Properties: </h3><p>In the list below, the 
names of required properties appear in <strong>bold</strong>. Any other 
properties (not in bold) are consi
 dered optional. The table also indicates any default values, and whether a 
property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Schema Access Strategy</strong></td><td 
id="default-value">csv-header-derived</td><td id="allowable-values"><ul><li>Use 
'Schema Name' Property <img src="../../../../../html/images/iconInfo.png" 
alt="The name of the Schema to use is specified by the 'Schema Name' Property. 
The value of this property is used to lookup the Schema in the configured 
Schema Registry service." title="The name of the Schema to use is specified by 
the 'Schema Name' Property. The value of this property is used to lookup the 
Schema in the configured Schema Registry service."></img></li><li>Use 'Schema 
Text' Property <img src="../../../../../html/images/iconInfo.png" alt="The text 
of the 
 Schema itself is specified by the 'Schema Text' Property. The value of this 
property must be a valid Avro Schema. If Expression Language is used, the value 
of the 'Schema Text' property must be valid after substituting the 
expressions." title="The text of the Schema itself is specified by the 'Schema 
Text' Property. The value of this property must be a valid Avro Schema. If 
Expression Language is used, the value of the 'Schema Text' property must be 
valid after substituting the expressions."></img></li><li>HWX Schema Reference 
Attributes <img src="../../../../../html/images/iconInfo.png" alt="The FlowFile 
contains 3 Attributes that will be used to lookup a Schema from the configured 
Schema Registry: 'schema.identifier', 'schema.version', and 
'schema.protocol.version'" title="The FlowFile contains 3 Attributes that will 
be used to lookup a Schema from the configured Schema Registry: 
'schema.identifier', 'schema.version', and 
'schema.protocol.version'"></img></li><li>HWX Content-Encod
 ed Schema Reference <img src="../../../../../html/images/iconInfo.png" 
alt="The content of the FlowFile contains a reference to a schema in the Schema 
Registry service. The reference is encoded as a single byte indicating the 
'protocol version', followed by 8 bytes indicating the schema identifier, and 
finally 4 bytes indicating the schema version, as per the Hortonworks Schema 
Registry serializers and deserializers, found at 
https://github.com/hortonworks/registry" title="The content of the FlowFile 
contains a reference to a schema in the Schema Registry service. The reference 
is encoded as a single byte indicating the 'protocol version', followed by 8 
bytes indicating the schema identifier, and finally 4 bytes indicating the 
schema version, as per the Hortonworks Schema Registry serializers and 
deserializers, found at 
https://github.com/hortonworks/registry"></img></li><li>Confluent 
Content-Encoded Schema Reference <img 
src="../../../../../html/images/iconInfo.png" alt="The conten
 t of the FlowFile contains a reference to a schema in the Schema Registry 
service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes 
representing the identifier of the schema, as outlined at 
http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html.
 This is based on version 3.2.x of the Confluent Schema Registry." title="The 
content of the FlowFile contains a reference to a schema in the Schema Registry 
service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes 
representing the identifier of the schema, as outlined at 
http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html.
 This is based on version 3.2.x of the Confluent Schema 
Registry."></img></li><li>Use String Fields From Header <img 
src="../../../../../html/images/iconInfo.png" alt="The first non-comment line 
of the CSV file is a header line that contains the names of the columns. The 
schema will be derived by using the column names in the hea
 der and assuming that all columns are of type String." title="The first 
non-comment line of the CSV file is a header line that contains the names of 
the columns. The schema will be derived by using the column names in the header 
and assuming that all columns are of type String."></img></li></ul></td><td 
id="description">Specifies how to obtain the schema that is to be used for 
interpreting the data.</td></tr><tr><td id="name">Schema Registry</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>SchemaRegistry<br/><strong>Implementations: </strong><a 
href="../../../nifi-registry-nar/1.6.0/org.apache.nifi.schemaregistry.services.AvroSchemaRegistry/index.html">AvroSchemaRegistry</a><br/><a
 
href="../../../nifi-confluent-platform-nar/1.6.0/org.apache.nifi.confluent.schemaregistry.ConfluentSchemaRegistry/index.html">ConfluentSchemaRegistry</a><br/><a
 
href="../../../nifi-hwx-schema-registry-nar/1.6.0/org.apache.nifi.schemaregistry.hortonwor
 ks.HortonworksSchemaRegistry/index.html">HortonworksSchemaRegistry</a></td><td 
id="description">Specifies the Controller Service to use for the Schema 
Registry</td></tr><tr><td id="name">Schema Name</td><td 
id="default-value">${schema.name}</td><td id="allowable-values"></td><td 
id="description">Specifies the name of the schema to lookup in the Schema 
Registry property<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Schema Version</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the version of the schema to lookup in the Schema 
Registry. If not specified then the latest version of the schema will be 
retrieved.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Schema Branch</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the name of the branch to use when looking up the 
schema in the Schema Registry property. If the chosen Sche
 ma Registry does not support branching, this value will be 
ignored.<br/><strong>Supports Expression Language: 
true</strong></td></tr><tr><td id="name">Schema Text</td><td 
id="default-value">${avro.schema}</td><td id="allowable-values"></td><td 
id="description">The text of an Avro-formatted Schema<br/><strong>Supports 
Expression Language: true</strong></td></tr><tr><td id="name"><strong>CSV 
Parser</strong></td><td id="default-value">commons-csv</td><td 
id="allowable-values"><ul><li>Apache Commons CSV <img 
src="../../../../../html/images/iconInfo.png" alt="The CSV parser 
implementation from the Apache Commons CSV library." title="The CSV parser 
implementation from the Apache Commons CSV library."></img></li><li>Jackson CSV 
<img src="../../../../../html/images/iconInfo.png" alt="The CSV parser 
implementation from the Jackson Dataformats library." title="The CSV parser 
implementation from the Jackson Dataformats library."></img></li></ul></td><td 
id="description">Specifies which parser 
 to use to read CSV records. NOTE: Different parsers may support different 
subsets of functionality and may also exhibit different levels of 
performance.</td></tr><tr><td id="name">Date Format</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the format to use when reading/writing Date fields. 
If not specified, Date fields will be assumed to be number of milliseconds 
since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the 
Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, 
followed by a two-digit day, followed by a four-digit year, all separated by 
'/' characters, as in 01/01/2017).</td></tr><tr><td id="name">Time 
Format</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the format to use when reading/writing Time fields. 
If not specified, Time fields will be assumed to be number of milliseconds 
since epoch (Midnight, Jan 1, 1970 GMT). If specified, the v
 alue must match the Java Simple Date Format (for example, HH:mm:ss for a 
two-digit hour in 24-hour format, followed by a two-digit minute, followed by a 
two-digit second, all separated by ':' characters, as in 
18:04:15).</td></tr><tr><td id="name">Timestamp Format</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the format to use when reading/writing Timestamp 
fields. If not specified, Timestamp fields will be assumed to be number of 
milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value 
must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a 
two-digit month, followed by a two-digit day, followed by a four-digit year, 
all separated by '/' characters; and then followed by a two-digit hour in 
24-hour format, followed by a two-digit minute, followed by a two-digit second, 
all separated by ':' characters, as in 01/01/2017 18:04:15).</td></tr><tr><td 
id="name"><strong>CSV Format</strong></td><td id
 ="default-value">custom</td><td id="allowable-values"><ul><li>Custom Format 
<img src="../../../../../html/images/iconInfo.png" alt="The format of the CSV 
is configured by using the properties of this Controller Service, such as Value 
Separator" title="The format of the CSV is configured by using the properties 
of this Controller Service, such as Value Separator"></img></li><li>RFC 4180 
<img src="../../../../../html/images/iconInfo.png" alt="CSV data follows the 
RFC 4180 Specification defined at https://tools.ietf.org/html/rfc4180" 
title="CSV data follows the RFC 4180 Specification defined at 
https://tools.ietf.org/html/rfc4180"></img></li><li>Microsoft Excel <img 
src="../../../../../html/images/iconInfo.png" alt="CSV data follows the format 
used by Microsoft Excel" title="CSV data follows the format used by Microsoft 
Excel"></img></li><li>Tab-Delimited <img 
src="../../../../../html/images/iconInfo.png" alt="CSV data is Tab-Delimited 
instead of Comma Delimited" title="CSV data is Tab
 -Delimited instead of Comma Delimited"></img></li><li>MySQL Format <img 
src="../../../../../html/images/iconInfo.png" alt="CSV data follows the format 
used by MySQL" title="CSV data follows the format used by 
MySQL"></img></li><li>Informix Unload <img 
src="../../../../../html/images/iconInfo.png" alt="The format used by Informix 
when issuing the UNLOAD TO file_name command" title="The format used by 
Informix when issuing the UNLOAD TO file_name command"></img></li><li>Informix 
Unload Escape Disabled <img src="../../../../../html/images/iconInfo.png" 
alt="The format used by Informix when issuing the UNLOAD TO file_name command 
with escaping disabled" title="The format used by Informix when issuing the 
UNLOAD TO file_name command with escaping disabled"></img></li></ul></td><td 
id="description">Specifies which "format" the CSV data is in, or specifies if 
custom formatting should be used.</td></tr><tr><td id="name"><strong>Value 
Separator</strong></td><td id="default-value">,</td><td i
 d="allowable-values"></td><td id="description">The character that is used to 
separate values/fields in a CSV Record</td></tr><tr><td id="name"><strong>Treat 
First Line as Header</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether or not the first line of CSV should be 
considered a Header or should be considered a record. If the Schema Access 
Strategy indicates that the columns must be defined in the header, then this 
property will be ignored, since the header must always be present and won't be 
processed as a Record. Otherwise, if 'true', then the first line of CSV data 
will not be processed as a record and if 'false', then the first line will be 
interpreted as a record.</td></tr><tr><td id="name">Ignore CSV Header Column 
Names</td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">If the first line of a CSV is a hea
 der, and the configured schema does not match the fields named in the header 
line, this controls how the Reader will interpret the fields. If this property 
is true, then the field names mapped to each column are driven only by the 
configured schema and any fields not in the schema will be ignored. If this 
property is false, then the field names found in the CSV Header will be used as 
the names of the fields.</td></tr><tr><td id="name"><strong>Quote 
Character</strong></td><td id="default-value">"</td><td 
id="allowable-values"></td><td id="description">The character that is used to 
quote values so that escape characters do not have to be used</td></tr><tr><td 
id="name"><strong>Escape Character</strong></td><td 
id="default-value">\</td><td id="allowable-values"></td><td 
id="description">The character that is used to escape characters that would 
otherwise have a specific meaning to the CSV Parser.</td></tr><tr><td 
id="name">Comment Marker</td><td id="default-value"></td><td id="allowabl
 e-values"></td><td id="description">The character that is used to denote the 
start of a comment. Any line that begins with this comment will be 
ignored.</td></tr><tr><td id="name">Null String</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies a String that, if present as a value in the CSV, 
should be considered a null field instead of using the literal 
value.</td></tr><tr><td id="name"><strong>Trim Fields</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether or not white space should be removed from the 
beginning and end of fields</td></tr><tr><td id="name"><strong>Character 
Set</strong></td><td id="default-value">UTF-8</td><td 
id="allowable-values"></td><td id="description">The Character Encoding that is 
used to encode/decode the CSV file<br/><strong>Supports Expression Language: 
true</strong></td></tr></table><h3>State management: </h3>This component do
 es not store state.<h3>Restricted: </h3>This component is not 
restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file

