Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutSyslog/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutSyslog/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutSyslog/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutSyslog/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutSyslog</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutSyslog</h1><h2>Description: </h2><p>Sends Syslog messages to a given 
host and port over TCP or UDP. Messages are constructed from the "Message ___" 
properties of the processor which can use expression language to generate 
messages from incoming FlowFiles. The properties are used to construct messages 
of the form: (&lt;PRIORITY&gt;)(VERSION )(TIMESTAMP) (HOSTNAME) (BODY) where 
version is optional.  The constructed messages are checked against regular 
expressions for RFC5424 and RFC3164 formatted messages. The timestamp can be an 
RFC5424 timestamp with a format of "yyyy-MM-dd'T'HH:mm:ss.SZ" or "yyyy-MM-dd'T'HH:mm:ss.S+hh:mm", or it can be an RFC3164 timestamp with a format of 
"MMM d HH:mm:ss". If a message is constructed that does not form a valid Syslog 
message according to the above description, then it is routed to the invalid 
relationship. Valid messages are sent to the Syslog server and successes are 
routed to the success relationship, failures routed to the failure 
relationship.</p><h3>Tags: </h3><p>syslog, put, udp, tcp, 
logs</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Hostname</strong></td><td 
id="default-value">localhost</td><td id="allowable-values"></td><td id="description">The IP address or hostname of the 
Syslog server. Note that Expression language is not evaluated per 
FlowFile.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Protocol</strong></td><td id="default-value">UDP</td><td 
id="allowable-values"><ul><li>TCP</li><li>UDP</li></ul></td><td 
id="description">The protocol for Syslog communication.</td></tr><tr><td 
id="name"><strong>Port</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The port for Syslog 
communication. Note that Expression language is not evaluated per 
FlowFile.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name"><strong>Max 
Size of Socket Send Buffer</strong></td><td id="default-value">1 MB</td><td 
id="allowable-values"></td><td id="description">The maximum size of the socket send buffer that should be used. This is a suggestion to the Operating System 
to indicate how big the socket buffer should be. If this value is set too low, 
the buffer may fill up before the data can be read, and incoming data will be 
dropped. Note that Expression language is not evaluated per 
FlowFile.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name">SSL Context 
Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>SSLContextService<br/><strong>Implementations: </strong><a 
href="../../../nifi-ssl-context-service-nar/1.9.0/org.apache.nifi.ssl.StandardRestrictedSSLContextService/index.html">StandardRestrictedSSLContextService</a><br/><a
 
href="../../../nifi-ssl-context-service-nar/1.9.0/org.apache.nifi.ssl.StandardSSLContextService/index.html">StandardSSLContextService</a></td><td
id="description">The Controller Service to use in order to obtain an SSL Context. If this property is set, syslog messages will be sent over a 
secure connection.</td></tr><tr><td id="name"><strong>Idle Connection 
Expiration</strong></td><td id="default-value">5 seconds</td><td 
id="allowable-values"></td><td id="description">The amount of time a connection 
should be held open without being used before closing the connection. Note that 
Expression language is not evaluated per FlowFile.<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Timeout</td><td id="default-value">10 
seconds</td><td id="allowable-values"></td><td id="description">The timeout for 
connecting to and communicating with the syslog server. Does not apply to UDP. 
Note that Expression language is not evaluated per 
FlowFile.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name"><strong>Batch 
Size</strong></td><td id="default-value">25</td><td id="allowable-values"></td><td id="description">The number of 
incoming FlowFiles to process in a single execution of this processor. Note 
that Expression language is not evaluated per FlowFile.<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name"><strong>Character 
Set</strong></td><td id="default-value">UTF-8</td><td 
id="allowable-values"></td><td id="description">Specifies the character set of 
the Syslog messages. Note that Expression language is not evaluated per 
FlowFile.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Message Priority</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The priority for the Syslog 
messages, excluding &lt; &gt;.<br/><strong>Supports Expression Language: true 
(will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name">Message Version</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
version for the Syslog messages.<br/><strong>Supports Expression Language: true 
(will be evaluated using flow file attributes and variable 
registry)</strong></td></tr><tr><td id="name"><strong>Message 
Timestamp</strong></td><td id="default-value">${now():format('MMM d 
HH:mm:ss')}</td><td id="allowable-values"></td><td id="description">The 
timestamp for the Syslog messages. The timestamp can be an RFC5424 timestamp 
with a format of "yyyy-MM-dd'T'HH:mm:ss.SZ" or "yyyy-MM-dd'T'HH:mm:ss.S+hh:mm", or it can be an RFC3164 timestamp with a format of "MMM d 
HH:mm:ss".<br/><strong>Supports Expression Language: true (will be evaluated 
using flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Message Hostname</strong></td><td 
id="default-value">${hostname(true)}</td><td id="allowable-values"></td><td 
id="description">The hostname for the Syslog messages.<br/><strong>Supports Expression Language: 
true (will be evaluated using flow file attributes and variable 
registry)</strong></td></tr><tr><td id="name"><strong>Message 
Body</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The body for the Syslog 
messages.<br/><strong>Supports Expression Language: true (will be evaluated 
using flow file attributes and variable 
registry)</strong></td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFiles
 that are sent successfully to Syslog are sent out this 
relationship.</td></tr><tr><td>failure</td><td>FlowFiles that failed to send to 
Syslog are sent out this 
relationship.</td></tr><tr><td>invalid</td><td>FlowFiles that do not form a 
valid Syslog message are sent out this relationship.</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3>None specified.<h3>State management: </h3>This component does not store state.<h3>Restricted: 
</h3>This component is not restricted.<h3>Input requirement: </h3>This 
component requires an incoming relationship.<h3>System Resource 
Considerations:</h3>None specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.standard.ListenSyslog/index.html">ListenSyslog</a>,
 <a 
href="../org.apache.nifi.processors.standard.ParseSyslog/index.html">ParseSyslog</a></p></body></html>
\ No newline at end of file
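The PutSyslog description above gives the expected message shape, `(<PRIORITY>)(VERSION )(TIMESTAMP) (HOSTNAME) (BODY)`, and routes non-conforming messages to the invalid relationship. A minimal sketch of that idea for the RFC3164 flavor, using an illustrative regex that is NOT the processor's actual validation pattern:

```python
import re

# Rough RFC3164-style shape check in the spirit of the PutSyslog description.
# This regex is an illustrative assumption, not NiFi's internal one.
RFC3164 = re.compile(
    r"^<\d{1,3}>"                                # <PRIORITY>
    r"[A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2} "  # TIMESTAMP: "MMM d HH:mm:ss"
    r"\S+ "                                      # HOSTNAME
    r".+$"                                       # BODY
)

def build_message(priority: int, timestamp: str, hostname: str, body: str) -> str:
    """Assemble a message of the form <PRIORITY>TIMESTAMP HOSTNAME BODY."""
    return f"<{priority}>{timestamp} {hostname} {body}"

msg = build_message(34, "Feb 22 01:03:44", "nifi-host", "hello from NiFi")
print(bool(RFC3164.match(msg)))                  # True: valid, would be sent
print(bool(RFC3164.match("not a syslog line")))  # False: would route to 'invalid'
```

A message failing this kind of shape check is what the processor routes to the invalid relationship rather than sending.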

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutTCP/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutTCP/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutTCP/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutTCP/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutTCP</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutTCP</h1><h2>Description: </h2><p>The PutTCP processor receives a 
FlowFile and transmits the FlowFile content over a TCP connection to the 
configured TCP server. By default, the FlowFiles are transmitted over the same 
TCP connection (or pool of TCP connections if multiple input threads are 
configured). To assist the TCP server with determining message boundaries, an 
optional "Outgoing Message Delimiter" string can be configured which is 
appended to the end of each FlowFiles content when it is transmitted over the 
TCP connection. An optional "Connection Per FlowFile" parameter can be 
specified to change
  the behaviour so that each FlowFiles content is transmitted over a single TCP 
connection which is opened when the FlowFile is received and closed after the 
FlowFile has been sent. This option should only be used for low message volume 
scenarios, otherwise the platform may run out of TCP sockets.</p><h3>Tags: 
</h3><p>remote, egress, put, tcp</p><h3>Properties: </h3><p>In the list below, 
the names of required properties appear in <strong>bold</strong>. Any other 
properties (not in bold) are considered optional. The table also indicates any 
default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Hostname</strong></td><td 
id="default-value">localhost</td><td id="allowable-values"></td><td 
id="description">The IP address or hostname of the destination.<br/><strong>Supports Expression Language: true (will be evaluated using variable 
registry only)</strong></td></tr><tr><td 
id="name"><strong>Port</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The port on the 
destination.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name"><strong>Max 
Size of Socket Send Buffer</strong></td><td id="default-value">1 MB</td><td 
id="allowable-values"></td><td id="description">The maximum size of the socket 
send buffer that should be used. This is a suggestion to the Operating System 
to indicate how big the socket buffer should be. If this value is set too low, 
the buffer may fill up before the data can be read, and incoming data will be 
dropped.</td></tr><tr><td id="name"><strong>Idle Connection 
Expiration</strong></td><td id="default-value">5 seconds</td><td 
id="allowable-values"></td><td id="description">The amount of time a connection 
should be held open without being used before closing the 
connection.</td></tr><tr><td id="name"><strong>Connection Per 
FlowFile</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether to send each FlowFile's content on an 
individual connection.</td></tr><tr><td id="name">Outgoing Message 
Delimiter</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the delimiter to use when sending messages out over 
the same TCP stream. The delimiter is appended to each FlowFile message that is 
transmitted over the stream so that the receiver can determine when one message 
ends and the next message begins. Users should ensure that the FlowFile content 
does not contain the delimiter character to avoid errors. In order to use a new 
line character you can enter '\n'. For a tab character use '\t'. Finally for a 
carriage return use '\r'.<br/><strong>Supports Expression
  Language: true (will be evaluated using flow file attributes and variable 
registry)</strong></td></tr><tr><td id="name">Timeout</td><td 
id="default-value">10 seconds</td><td id="allowable-values"></td><td 
id="description">The timeout for connecting to and communicating with the 
destination. Does not apply to UDP</td></tr><tr><td id="name">SSL Context 
Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>SSLContextService<br/><strong>Implementations: </strong><a 
href="../../../nifi-ssl-context-service-nar/1.9.0/org.apache.nifi.ssl.StandardRestrictedSSLContextService/index.html">StandardRestrictedSSLContextService</a><br/><a
 
href="../../../nifi-ssl-context-service-nar/1.9.0/org.apache.nifi.ssl.StandardSSLContextService/index.html">StandardSSLContextService</a></td><td
 id="description">The Controller Service to use in order to obtain an SSL 
Context. If this property is set, messages will be sent over a secure 
connection.</td></tr><tr><td id="name"><strong>Character Set</strong></td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">Specifies the character set of the data being 
sent.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFiles
 that are sent successfully to the destination are sent out this 
relationship.</td></tr><tr><td>failure</td><td>FlowFiles that failed to send to 
the destination are sent out this relationship.</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3>None 
specified.<h3>State management: </h3>This component does not store 
state.<h3>Restricted: </h3>This component is not restricted.<h3>Input 
requirement: </h3>This component requires an incoming relationship.<h3>System 
Resource Considerations:</h3>None specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.standard.ListenTCP/index.html">ListenTCP</a></p></body></html>
\ No newline at end of file
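The PutTCP description above explains why the "Outgoing Message Delimiter" exists: TCP is a byte stream with no message boundaries, so the receiver splits on the delimiter appended after each FlowFile's content. A minimal receiver-side sketch (the `'\n'` delimiter here is an assumption, not a PutTCP default):

```python
def frame_messages(stream: bytes, delimiter: bytes = b"\n") -> list:
    """Split a received TCP byte stream back into individual messages.

    Each sent FlowFile's content was followed by the delimiter, so splitting
    on it recovers the original message boundaries.
    """
    parts = stream.split(delimiter)
    # A trailing delimiter leaves one empty chunk at the end; drop empties.
    return [p for p in parts if p]

# Two FlowFile contents sent over one connection, each followed by the delimiter:
received = b"first flowfile\nsecond flowfile\n"
print(frame_messages(received))  # [b'first flowfile', b'second flowfile']
```

As the property description warns, this only works if the FlowFile content itself never contains the delimiter.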

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutUDP/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutUDP/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutUDP/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.PutUDP/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PutUDP</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PutUDP</h1><h2>Description: </h2><p>The PutUDP processor receives a 
FlowFile and packages the FlowFile content into a single UDP datagram packet 
which is then transmitted to the configured UDP server. The user must ensure 
that the FlowFile content being fed to this processor is not larger than the 
maximum size for the underlying UDP transport. The maximum transport size will 
vary based on the platform setup but is generally just under 64KB. FlowFiles 
will be marked as failed if their content is larger than the maximum transport 
size.</p><h3>Tags: </h3><p>remote, egress, put, udp</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in 
<strong>bold</strong>. Any other properties (not in bold) are considered 
optional. The table also indicates any default values, and whether a property 
supports the <a href="../../../../../html/expression-language-guide.html">NiFi 
Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Hostname</strong></td><td 
id="default-value">localhost</td><td id="allowable-values"></td><td 
id="description">The IP address or hostname of the 
destination.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Port</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The port on the 
destination.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name"><strong>Max Size of Socket Send Buffer</strong></td><td 
id="default-value">1 MB</td><td id="allowable-values"></td><td 
id="description">The maximum size of the socket send buffer that should be 
used. This is a suggestion to the Operating System to indicate how big the 
socket buffer should be. If this value is set too low, the buffer may fill up 
before the data can be read, and incoming data will be 
dropped.</td></tr><tr><td id="name"><strong>Idle Connection 
Expiration</strong></td><td id="default-value">5 seconds</td><td 
id="allowable-values"></td><td id="description">The amount of time a connection 
should be held open without being used before closing the 
connection.</td></tr></table><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFiles
 that are sent successfully to the destination are sent out this 
relationship.</td></tr><tr><td>failure</td><td>FlowFiles that failed to send to 
the destination are sent out this relationship.</td></tr></table><h3>Reads Attributes: 
</h3>None specified.<h3>Writes Attributes: </h3>None specified.<h3>State 
management: </h3>This component does not store state.<h3>Restricted: </h3>This 
component is not restricted.<h3>Input requirement: </h3>This component requires 
an incoming relationship.<h3>System Resource Considerations:</h3>None 
specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.standard.ListenUDP/index.html">ListenUDP</a></p></body></html>
\ No newline at end of file
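The PutUDP description notes that content larger than the transport maximum, generally just under 64KB, is marked as failed. That ceiling comes from UDP's 16-bit length field; a sketch of the corresponding size check (the IPv4 header arithmetic is a common assumption, and the real limit varies by platform):

```python
# UDP's length field is 16 bits, so the theoretical IPv4 payload cap is
# 65535 bytes minus the 8-byte UDP header and 20-byte IPv4 header.
MAX_UDP_PAYLOAD = 65535 - 8 - 20  # 65507 bytes

def can_send_as_datagram(content: bytes) -> bool:
    """Mirror PutUDP's failure condition: content must fit in one datagram."""
    return len(content) <= MAX_UDP_PAYLOAD

print(can_send_as_datagram(b"x" * 1024))   # True
print(can_send_as_datagram(b"x" * 70000))  # False: would be routed to failure
```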

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryDatabaseTable/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryDatabaseTable/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryDatabaseTable/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryDatabaseTable/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>QueryDatabaseTable</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">QueryDatabaseTable</h1><h2>Description: </h2><p>Generates a SQL select 
query, or uses a provided statement, and executes it to fetch all rows whose 
values in the specified Maximum Value column(s) are larger than the 
previously-seen maxima. Query result will be converted to Avro format. 
Expression Language is supported for several properties, but no incoming 
connections are permitted. The Variable Registry may be used to provide values 
for any property containing Expression Language. If it is desired to leverage 
flow file attributes to perform these queries, the GenerateTableFetch and/or 
ExecuteSQL processors can be used for this purpose. Streaming is used so 
arbitrarily large result sets are supported. This processor can be scheduled to 
run on a timer or cron expression, using the standard scheduling methods. This 
processor is intended to be run on the Primary Node only. FlowFile attribute 
'querydbtable.row.count' indicates how many rows were selected.</p><h3>Tags: 
</h3><p>sql, select, jdbc, query, database</p><h3>Properties: </h3><p>In the 
list below, the names of required properties appear in <strong>bold</strong>. 
Any other properties (not in bold) are considered optional. The table also 
indicates any default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Database Connection Pooling Service</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service API: 
</strong><br/>DBCPService<br/><strong>Implementations: </strong><a 
href="../../../nifi-dbcp-service-nar/1.9.0/org.apache.nifi.dbcp.DBCPConnectionPoolLookup/index.html">DBCPConnectionPoolLookup</a><br/><a
 
href="../../../nifi-hive-nar/1.9.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html">HiveConnectionPool</a><br/><a
 
href="../../../nifi-dbcp-service-nar/1.9.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html">DBCPConnectionPool</a></td><td
 id="description">The Controller Service that is used to obtain a connection to 
the database.</td></tr><tr><td id="name"><strong>Database Type</strong></td><td 
id="default-value">Generic</td><td id="allowable-values"><ul><li>Generic <img 
src="../../../../../html/images/iconInfo.png" alt="Generates ANSI SQL" 
title="Generates ANSI SQL"></img></li><li>Oracle <img 
src="../../../../../html/images/iconInfo.png" alt="Generates Oracle compliant 
SQL" title="Generates Oracle compliant SQL"></img></li><li>Oracle 12+ <img src="../../../../../html/images/iconInfo.png" alt="Generates Oracle compliant SQL for version 12 or greater" title="Generates Oracle compliant SQL for version 12 or greater"></img></li><li>MS SQL 2012+ <img src="../../../../../html/images/iconInfo.png" alt="Generates MS SQL Compatible SQL, for version 2012 or greater" title="Generates MS SQL Compatible SQL, for version 2012 or greater"></img></li><li>MS SQL 2008 <img src="../../../../../html/images/iconInfo.png" alt="Generates MS SQL Compatible SQL for version 2008" title="Generates MS SQL Compatible SQL for version 2008"></img></li><li>MySQL <img src="../../../../../html/images/iconInfo.png" alt="Generates MySQL compatible SQL" title="Generates MySQL compatible SQL"></img></li></ul></td><td id="description">The type/flavor of database, used for generating database-specific code. In many cases the Generic type should suffice, but some databases (such as Oracle) require custom SQL clauses. </td></tr>
<tr><td id="name"><strong>Table Name</strong></td><td id="default-value"></td><td id="allowable-values"></td><td id="description">The name of the database table to be queried. When a custom query is used, this property is used to alias the query and appears as an attribute on the FlowFile.<br/><strong>Supports Expression Language: true (will be evaluated using variable registry only)</strong></td></tr>
<tr><td id="name">Columns to Return</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">A comma-separated list of column names to be used in the query. If your database requires special treatment of the names (quoting, e.g.), each name should include such treatment. If no column names are supplied, all columns in the specified table will be returned. NOTE: It is important to use consistent column names for a given table for incremental fetch to work properly.<br/><strong>Supports Expression Language: true (will be evaluated using variable registry only)</strong></td></tr>
<tr><td id="name">Additional WHERE clause</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">A custom clause to be added in the WHERE condition when building SQL queries.<br/><strong>Supports Expression Language: true (will be evaluated using variable registry only)</strong></td></tr>
<tr><td id="name">Custom Query</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">A custom SQL query used to retrieve data. Instead of building a SQL query from other properties, this query will be wrapped as a sub-query. Query must have no ORDER BY statement.<br/><strong>Supports Expression Language: true (will be evaluated using variable registry only)</strong></td></tr>
<tr><td id="name">Maximum-value Columns</td><td id="default-value"></td><td id="allowable-values"></td><td id="description">A comma-separated list of column names. The processor will keep track of the maximum value for each column that has been returned since the processor started running. Using multiple columns implies an order to the column 
list, and each column's values are expected to increase more slowly than the 
previous columns' values. Thus, using multiple columns implies a hierarchical 
structure of columns, which is usually used for partitioning tables. This 
processor can be used to retrieve only those rows that have been added/updated 
since the last retrieval. Note that some JDBC types such as bit/boolean are not 
conducive to maintaining maximum value, so columns of these types should not be 
listed in this property, and will result in error(s) during processing. If no 
columns are provided, all rows from the table will be considered, which could 
have a performance impact. NOTE: It is important to use consistent max-value 
column names for a given table for incremental fetch to work 
properly.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name"><strong>Max 
Wait Time</strong></td><td id="default-value">0 seconds</td><td 
id="allowable-values"></td><td id="description">The maximum amount of time 
allowed for a running SQL select query; zero means there is no limit. A max time of less than 1 second is treated as zero.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name"><strong>Fetch Size</strong></td><td 
id="default-value">0</td><td id="allowable-values"></td><td 
id="description">The number of result rows to be fetched from the result set at 
a time. This is a hint to the database driver and may not be honored and/or 
exact. If the value specified is zero, then the hint is 
ignored.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name"><strong>Max 
Rows Per Flow File</strong></td><td id="default-value">0</td><td 
id="allowable-values"></td><td id="description">The maximum number of result 
rows that will be included in a single FlowFile. This will allow you to break up very 
large result sets into multiple FlowFiles. If the value specified is zero, then 
all rows are returned in a single FlowFile.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name"><strong>Output Batch 
Size</strong></td><td id="default-value">0</td><td 
id="allowable-values"></td><td id="description">The number of output FlowFiles 
to queue before committing the process session. When set to zero, the session 
will be committed when all result set rows have been processed and the output 
FlowFiles are ready for transfer to the downstream relationship. For large 
result sets, this can cause a large burst of FlowFiles to be transferred at the 
end of processor execution. If this property is set, then when the specified 
number of FlowFiles are ready for transfer, then the session will be committed, 
thus releasing the FlowFiles to the downstream relationship. NOTE: The maxvalue.* and fragment.count attributes will not 
be set on FlowFiles when this property is set.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name"><strong>Maximum Number of 
Fragments</strong></td><td id="default-value">0</td><td 
id="allowable-values"></td><td id="description">The maximum number of 
fragments. If the value specified is zero, then all fragments are returned. 
This prevents OutOfMemoryError when this processor ingests a huge table. NOTE: 
Setting this property can result in data loss, as the incoming results are not 
ordered, and fragments may end at arbitrary boundaries where rows are not 
included in the result set.<br/><strong>Supports Expression Language: true 
(will be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Normalize Table/Column Names</strong></td><td 
id="default-value">false</td><td id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td id="description">Whether to change 
non-Avro-compatible characters in column names to Avro-compatible characters. 
For example, colons and periods will be changed to underscores in order to 
build a valid Avro record.</td></tr><tr><td id="name">Transaction Isolation 
Level</td><td id="default-value"></td><td 
id="allowable-values"><ul><li>TRANSACTION_NONE</li><li>TRANSACTION_READ_COMMITTED</li><li>TRANSACTION_READ_UNCOMMITTED</li><li>TRANSACTION_REPEATABLE_READ</li><li>TRANSACTION_SERIALIZABLE</li></ul></td><td
 id="description">This setting will set the transaction isolation level for the 
database connection for drivers that support this setting</td></tr><tr><td 
id="name"><strong>Use Avro Logical Types</strong></td><td 
id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether to use Avro Logical Types for DECIMAL/NUMBER, DATE, 
TIME and TIMESTAMP columns. If disabled, written as string. If enabled, 
Logical types are used and written as its underlying type, 
specifically, DECIMAL/NUMBER as logical 'decimal': written as bytes with 
additional precision and scale meta data, DATE as logical 'date-millis': 
written as int denoting days since Unix epoch (1970-01-01), TIME as logical 
'time-millis': written as int denoting milliseconds since Unix epoch, and 
TIMESTAMP as logical 'timestamp-millis': written as long denoting milliseconds 
since Unix epoch. If a reader of written Avro records also knows these logical 
types, then these values can be deserialized with more context depending on 
reader implementation.</td></tr><tr><td id="name"><strong>Default Decimal 
Precision</strong></td><td id="default-value">10</td><td 
id="allowable-values"></td><td id="description">When a DECIMAL/NUMBER value is 
written as a 'decimal' Avro logical type, a specific 'precision' denoting 
number of available digits is required. Generally, precision is defined by 
column data type definition or the database engine's default. However, 
undefined precision (0) can be returned from some 
database engines. 'Default Decimal Precision' is used when writing those 
undefined precision numbers.<br/><strong>Supports Expression Language: true 
(will be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Default Decimal Scale</strong></td><td 
id="default-value">0</td><td id="allowable-values"></td><td 
id="description">When a DECIMAL/NUMBER value is written as a 'decimal' Avro 
logical type, a specific 'scale' denoting number of available decimal digits is 
required. Generally, scale is defined by column data type definition or 
the database engine's default. However, when undefined precision (0) is returned, 
scale can also be uncertain with some database engines. 'Default Decimal Scale' 
is used when writing those undefined numbers. If a value has more decimals than 
specified scale, then the value will be rounded up, e.g. 1.53 becomes 2 with 
scale 0, and 1.5 with scale 1.<br/><strong>Supports Expression Language: true 
(will be evaluated using variable 
registry only)</strong></td></tr></table><h3>Dynamic Properties: 
</h3><p>Dynamic Properties allow the user to specify both the name and value of 
a property.<table 
id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td
 id="name">initial.maxvalue.&lt;max_value_column&gt;</td><td id="value">Initial 
maximum value for the specified column</td><td>Specifies an initial max value 
for max value column(s). Properties should be added in the format 
`initial.maxvalue.&lt;max_value_column&gt;`. This value is only used the first 
time the table is accessed (when a Maximum Value Column is 
specified).<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr></table></p><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Successfully
 created FlowFile from SQL query result set.</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>tablename</td><td>Name 
of the table being queried</td></tr><tr><td>querydbtable.row.count</td><td>The 
number of rows selected by the 
query</td></tr><tr><td>fragment.identifier</td><td>If 'Max Rows Per Flow File' 
is set then all FlowFiles from the same query result set will have the same 
value for the fragment.identifier attribute. This can then be used to 
correlate the results.</td></tr><tr><td>fragment.count</td><td>If 'Max Rows 
Per Flow File' is set then this is the total number of FlowFiles produced by a 
single ResultSet. This can be used in conjunction with the 
fragment.identifier attribute in order to know how many FlowFiles belonged to 
the same incoming ResultSet. If Output Batch Size is set, then this attribute 
will not be populated.</td></tr><tr><td>fragment.index</td><td>If 'Max Rows 
Per Flow File' is set then the position of this FlowFile in the list of 
outgoing FlowFiles that were all 
derived from the same result set FlowFile. This can be used in conjunction with 
the fragment.identifier attribute to know which FlowFiles originated from the 
same query result set and in what order  FlowFiles were 
produced</td></tr><tr><td>maxvalue.*</td><td>Each attribute contains the 
observed maximum value of a specified 'Maximum-value Column'. The suffix of the 
attribute is the name of the column. If Output Batch Size is set, then this 
attribute will not be populated.</td></tr></table><h3>State management: 
</h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>After
 performing a query on the specified table, the maximum values for the 
specified column(s) will be retained for use in future executions of the query. 
This allows the Processor to fetch only those records that have max values 
greater than the retained values. This can be used for incremental fetching, 
fetching 
of newly added rows, etc. To clear the maximum values, clear the state of 
the processor per the State Management 
documentation</td></tr></table><h3>Restricted: </h3>This component is not 
restricted.<h3>Input requirement: </h3>This component does not allow an 
incoming relationship.<h3>System Resource Considerations:</h3>None 
specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.standard.GenerateTableFetch/index.html">GenerateTableFetch</a>,
 <a 
href="../org.apache.nifi.processors.standard.ExecuteSQL/index.html">ExecuteSQL</a></p></body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryDatabaseTableRecord/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryDatabaseTableRecord/index.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryDatabaseTableRecord/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryDatabaseTableRecord/index.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>QueryDatabaseTableRecord</title><link 
rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">QueryDatabaseTableRecord</h1><h2>Description: </h2><p>Generates a SQL 
select query, or uses a provided statement, and executes it to fetch all rows 
whose values in the specified Maximum Value column(s) are larger than the 
previously-seen maxima. Query result will be converted to the format specified 
by the record writer. Expression Language is supported for several properties, 
but no incoming connections are permitted. The Variable Registry may be used to 
provide values for any property containing Expression Language. If it is 
desired to leverage flow file attributes to perform these queries, the 
GenerateTableFetch and/or ExecuteSQL processors can be used for 
this purpose. Streaming is used so arbitrarily large result sets are supported. 
This processor can be scheduled to run on a timer or cron expression, using the 
standard scheduling methods. This processor is intended to be run on the 
Primary Node only. FlowFile attribute 'querydbtable.row.count' indicates how 
many rows were selected.</p><h3>Tags: </h3><p>sql, select, jdbc, query, 
database, record</p><h3>Properties: </h3><p>In the list below, the names of 
required properties appear in <strong>bold</strong>. Any other properties (not 
in bold) are considered optional. The table also indicates any default values, 
and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Database Connection Pooling Service</s
 trong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>DBCPService<br/><strong>Implementations: </strong><a 
href="../../../nifi-dbcp-service-nar/1.9.0/org.apache.nifi.dbcp.DBCPConnectionPoolLookup/index.html">DBCPConnectionPoolLookup</a><br/><a
 
href="../../../nifi-hive-nar/1.9.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html">HiveConnectionPool</a><br/><a
 
href="../../../nifi-dbcp-service-nar/1.9.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html">DBCPConnectionPool</a></td><td
 id="description">The Controller Service that is used to obtain a connection to 
the database.</td></tr><tr><td id="name"><strong>Database Type</strong></td><td 
id="default-value">Generic</td><td id="allowable-values"><ul><li>Generic <img 
src="../../../../../html/images/iconInfo.png" alt="Generates ANSI SQL" 
title="Generates ANSI SQL"></img></li><li>Oracle <img 
src="../../../../../html/images/iconInfo.png" alt="Generates Oracle compliant 
SQL" tit
 le="Generates Oracle compliant SQL"></img></li><li>Oracle 12+ <img 
src="../../../../../html/images/iconInfo.png" alt="Generates Oracle compliant 
SQL for version 12 or greater" title="Generates Oracle compliant SQL for 
version 12 or greater"></img></li><li>MS SQL 2012+ <img 
src="../../../../../html/images/iconInfo.png" alt="Generates MS SQL Compatible 
SQL, for version 2012 or greater" title="Generates MS SQL Compatible SQL, for 
version 2012 or greater"></img></li><li>MS SQL 2008 <img 
src="../../../../../html/images/iconInfo.png" alt="Generates MS SQL Compatible 
SQL for version 2008" title="Generates MS SQL Compatible SQL for version 
2008"></img></li><li>MySQL <img src="../../../../../html/images/iconInfo.png" 
alt="Generates MySQL compatible SQL" title="Generates MySQL compatible 
SQL"></img></li></ul></td><td id="description">The type/flavor of database, 
used for generating database-specific code. In many cases the Generic type 
should suffice, but some databases (such as Oracle) require custom SQL 
clauses.</td></tr><tr><td id="name"><strong>Table 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the database table 
to be queried. When a custom query is used, this property is used to alias the 
query and appears as an attribute on the FlowFile.<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Columns to Return</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
comma-separated list of column names to be used in the query. If your database 
requires special treatment of the names (quoting, e.g.), each name should 
include such treatment. If no column names are supplied, all columns in the 
specified table will be returned. NOTE: It is important to use consistent 
column names for a given table for incremental fetch to work 
properly.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td 
id="name">Additional WHERE clause</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A custom clause to be added in 
the WHERE condition when building SQL queries.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Custom Query</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
custom SQL query used to retrieve data. Instead of building a SQL query from 
other properties, this query will be wrapped as a sub-query. The query must 
not contain an ORDER BY statement.<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Record Writer</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>RecordSetWriterFactory<br/><strong>Implementations: </strong><a 
href="../../../nifi-record-serialization-services-nar/1.9.0/org.apache.nifi.xml.XMLRecordSetWriter/index.html">XMLRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.9.0/org.apache.nifi.json.JsonRecordSetWriter/index.html">JsonRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.9.0/org.apache.nifi.csv.CSVRecordSetWriter/index.html">CSVRecordSetWriter</a><br/><a
 
href="../../../nifi-scripting-nar/1.9.0/org.apache.nifi.record.script.ScriptedRecordSetWriter/index.html">ScriptedRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.9.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html">AvroRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.9.0/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html">FreeFormTextRecordSetWriter</a></td><td
 id="description">Specifies the Controller Service to use for writing results 
to a FlowFile. The Record Writer may use Inherit Schema to emulate the 
inferred schema behavior, i.e. an explicit schema need not 
be defined in the writer, and will be supplied by the same logic used to infer 
the schema from the column types.</td></tr><tr><td id="name">Maximum-value 
Columns</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A comma-separated list of column names. The processor will 
keep track of the maximum value for each column that has been returned since 
the processor started running. Using multiple columns implies an order to the 
column list, and each column's values are expected to increase more slowly than 
the previous columns' values. Thus, using multiple columns implies a 
hierarchical structure of columns, which is usually used for partitioning 
tables. This processor can be used to retrieve only those rows that have been 
added/updated since the last retrieval. Note that some JDBC types such as 
bit/boolean are not conducive to maintaining maximum value, so columns of these 
types 
should not be listed in this property, and will result in error(s) during 
processing. If no columns are provided, all rows from the table will be 
considered, which could have a performance impact. NOTE: It is important to use 
consistent max-value column names for a given table for incremental fetch to 
work properly.<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Max Wait Time</strong></td><td id="default-value">0 
seconds</td><td id="allowable-values"></td><td id="description">The maximum 
amount of time allowed for a running SQL select query; zero means there is no 
limit. A max time of less than 1 second is treated as zero.<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name"><strong>Fetch Size</strong></td><td 
id="default-value">0</td><td id="allowable-values"></td><td 
id="description">The number of result rows 
 to be fetched from the result set at a time. This is a hint to the database 
driver and may not be honored and/or exact. If the value specified is zero, 
then the hint is ignored.<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Max Rows Per Flow File</strong></td><td 
id="default-value">0</td><td id="allowable-values"></td><td 
id="description">The maximum number of result rows that will be included in a 
single FlowFile. This will allow you to break up very large result sets into 
multiple FlowFiles. If the value specified is zero, then all rows are returned 
in a single FlowFile.<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Output Batch Size</strong></td><td 
id="default-value">0</td><td id="allowable-values"></td><td 
id="description">The number of output FlowFiles to queue before committing the 
process session. When set to zero, the session will be committed when all 
result set 
rows have been processed and the output FlowFiles are ready for transfer to the 
downstream relationship. For large result sets, this can cause a large burst of 
FlowFiles to be transferred at the end of processor execution. If this property 
is set, then when the specified number of FlowFiles are ready for transfer, 
then the session will be committed, thus releasing the FlowFiles to the 
downstream relationship. NOTE: The maxvalue.* and fragment.count attributes 
will not be set on FlowFiles when this property is set.<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name"><strong>Maximum Number of 
Fragments</strong></td><td id="default-value">0</td><td 
id="allowable-values"></td><td id="description">The maximum number of 
fragments. If the value specified is zero, then all fragments are returned. 
This prevents an OutOfMemoryError when this processor ingests a huge table. 
NOTE: Setting this property can result in data 
loss, as the incoming results are not ordered, and fragments may end at 
arbitrary boundaries where rows are not included in the result 
set.<br/><strong>Supports Expression Language: true (will be evaluated using 
variable registry only)</strong></td></tr><tr><td id="name"><strong>Normalize 
Table/Column Names</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether to change characters in column names when creating the 
output schema. For example, colons and periods will be changed to 
underscores.</td></tr><tr><td id="name"><strong>Use Avro Logical 
Types</strong></td><td id="default-value">false</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Whether to use Avro Logical Types for DECIMAL/NUMBER, DATE, 
TIME and TIMESTAMP columns. If disabled, written as string. If enabled, Logical 
types 
are used and written as its underlying type, specifically, DECIMAL/NUMBER 
as logical 'decimal': written as bytes with additional precision and scale meta 
data, DATE as logical 'date-millis': written as int denoting days since Unix 
epoch (1970-01-01), TIME as logical 'time-millis': written as int denoting 
milliseconds since Unix epoch, and TIMESTAMP as logical 'timestamp-millis': 
written as long denoting milliseconds since Unix epoch. If a reader of written 
Avro records also knows these logical types, then these values can be 
deserialized with more context depending on reader 
implementation.</td></tr></table><h3>Dynamic Properties: </h3><p>Dynamic 
Properties allow the user to specify both the name and value of a 
property.<table 
id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td
 id="name">initial.maxvalue.&lt;max_value_column&gt;</td><td id="value">Initial 
maximum value for the specified column</td><td>Specifies an initial max value 
for max value column(s). Properties should be added in the format 
`initial.maxvalue.&lt;max_value_column&gt;`. This value is only used the first 
time the table is accessed (when a Maximum Value Column is 
specified).<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr></table></p><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>Successfully
 created FlowFile from SQL query result set.</td></tr></table><h3>Reads 
Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>tablename</td><td>Name
 of the table being queried</td></tr><tr><td>querydbtable.row.count</td><td>The 
number of rows selected by the 
query</td></tr><tr><td>fragment.identifier</td><td>If 'Max Rows Per Flow File' 
is set then all FlowFiles from the same query result set will have the same 
value for the fragment.identifier attribute. This can then be used to 
correlate the 
results.</td></tr><tr><td>fragment.count</td><td>If 'Max Rows Per Flow File' is 
set then this is the total number of  FlowFiles produced by a single ResultSet. 
This can be used in conjunction with the fragment.identifier attribute in order 
to know how many FlowFiles belonged to the same incoming ResultSet. If Output 
Batch Size is set, then this attribute will not be 
populated.</td></tr><tr><td>fragment.index</td><td>If 'Max Rows Per Flow File' 
is set then the position of this FlowFile in the list of outgoing FlowFiles 
that were all derived from the same result set FlowFile. This can be used in 
conjunction with the fragment.identifier attribute to know which FlowFiles 
originated from the same query result set and in what order  FlowFiles were 
produced</td></tr><tr><td>maxvalue.*</td><td>Each attribute contains the 
observed maximum value of a specified 'Maximum-value Column'. The suffix of the 
attribute is the name of the column. If Output 
 Batch Size is set, then this attribute will not be 
populated.</td></tr><tr><td>mime.type</td><td>Sets the mime.type attribute to 
the MIME Type specified by the Record 
Writer.</td></tr><tr><td>record.count</td><td>The number of records output by 
the Record Writer.</td></tr></table><h3>State management: </h3><table 
id="stateful"><tr><th>Scope</th><th>Description</th></tr><tr><td>CLUSTER</td><td>After
 performing a query on the specified table, the maximum values for the 
specified column(s) will be retained for use in future executions of the query. 
This allows the Processor to fetch only those records that have max values 
greater than the retained values. This can be used for incremental fetching, 
fetching of newly added rows, etc. To clear the maximum values, clear the state 
of the processor per the State Management 
documentation</td></tr></table><h3>Restricted: </h3>This component is not 
restricted.<h3>Input requirement: </h3>This component does not allow an 
incoming relationship.<h3>System Resource Considerations:</h3>None 
specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.standard.GenerateTableFetch/index.html">GenerateTableFetch</a>, 
<a 
href="../org.apache.nifi.processors.standard.ExecuteSQL/index.html">ExecuteSQL</a></p></body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryRecord/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryRecord/additionalDetails.html?rev=1854109&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryRecord/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.0/org.apache.nifi.processors.standard.QueryRecord/additionalDetails.html
 Fri Feb 22 01:03:44 2019
@@ -0,0 +1,555 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8" />
+        <title>QueryRecord</title>
+
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css" />
+    </head>
+
+    <body>
+        <h3>SQL Over Streams</h3>
+       <p>
+               QueryRecord provides users a tremendous amount of power by 
leveraging an extremely well-known
+               syntax (SQL) to route, filter, transform, and query data as it 
traverses the system. In order to
+               provide the Processor with the maximum amount of flexibility, 
it is configured with a Controller
+               Service that is responsible for reading and parsing the 
incoming FlowFiles and a Controller Service
+               that is responsible for writing the results out. By using this 
paradigm, users are not forced to
+               convert their data from one format to another just to query it, 
and then transform the data back
+               into the form that they want. Rather, the appropriate 
Controller Service can easily be configured
+               and put to use for the appropriate data format.
+       </p>
+
+       <p>
+               Rather than providing a single "SQL SELECT Statement" type of 
Property, this Processor makes use
+               of user-defined properties. Each user-defined property that is 
added to the Processor has a name
+               that becomes a new Relationship for the Processor and a 
corresponding SQL query that will be evaluated
+               against each FlowFile. This allows multiple SQL queries to be 
run against each FlowFile.
+       </p>
+
+       <p>
+                       The SQL syntax that is supported by this Processor is 
ANSI SQL and is powered by Apache Calcite. Please
+                       note that identifiers are quoted using double-quotes, 
and column names/labels are case-insensitive.
+       </p>
+
+        <p>
+            As an example, let's consider that we have a FlowFile with the 
following CSV data:
+        </p>
+        <pre><code>
+            name, age, title
+            John Doe, 34, Software Engineer
+            Jane Doe, 30, Program Manager
+            Jacob Doe, 45, Vice President
+            Janice Doe, 46, Vice President
+        </code></pre>
+
+        <p>
+            Now consider that we add the following properties to the Processor:
+        </p>
+        <table>
+            <tr>
+                <th>Property Name</th>
+                <th>Property Value</th>
+            </tr>
+            <tr>
+                <td>Engineers</td>
+                <td>SELECT * FROM FLOWFILE WHERE title LIKE '%Engineer%'</td>
+            </tr>
+            <tr>
+                <td>VP</td>
+                <td>SELECT name FROM FLOWFILE WHERE title = 'Vice 
President'</td>
+            </tr>
+            <tr>
+                <td>Younger Than Average</td>
+                <td>SELECT * FROM FLOWFILE WHERE age < (SELECT AVG(age) FROM 
FLOWFILE)</td>
+            </tr>
+        </table>
+
+        <p>
+            This Processor will now have five relationships: 
<code>original</code>, <code>failure</code>, <code>Engineers</code>, 
<code>VP</code>, and <code>Younger Than Average</code>.
+            If there is a failure processing the FlowFile, then the original 
FlowFile will be routed to <code>failure</code>. Otherwise, the original 
FlowFile will be routed to <code>original</code>
+            and one FlowFile will be routed to each of the other 
relationships, with the following values:
+        </p>
+
+        <table>
+            <tr>
+                <th>Relationship Name</th>
+                <th>FlowFile Value</th>
+            </tr>
+            <tr>
+                <td>Engineers</td>
+                <td>
+                    <pre><code>
+                        name, age, title
+                        John Doe, 34, Software Engineer
+                    </code></pre>
+                </td>
+            </tr>
+            <tr>
+                <td>VP</td>
+                <td>
+                    <pre><code>
+                        name
+                        Jacob Doe
+                        Janice Doe
+                    </code></pre>
+                </td>
+            </tr>
+            <tr>
+                <td>Younger Than Average</td>
+                <td>
+                    <pre><code>
+                        name, age, title
+                        John Doe, 34, Software Engineer
+                        Jane Doe, 30, Program Manager
+                    </code></pre>
+                </td>
+            </tr>
+        </table>
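+        <p>
+            The routing shown in the table above can be sketched outside NiFi. The following
+            illustrative Python sketch uses sqlite3 purely as a stand-in SQL engine (QueryRecord's
+            SQL is actually executed by Apache Calcite inside NiFi, not SQLite); the table name
+            <code>FLOWFILE</code> and the rows come from the sample data above.
+        </p>

```python
import sqlite3

# Illustrative only: sqlite3 stands in for Apache Calcite, which is what
# QueryRecord actually uses; this just demonstrates the query semantics.
rows = [
    ("John Doe", 34, "Software Engineer"),
    ("Jane Doe", 30, "Program Manager"),
    ("Jacob Doe", 45, "Vice President"),
    ("Janice Doe", 46, "Vice President"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FLOWFILE (name TEXT, age INTEGER, title TEXT)")
conn.executemany("INSERT INTO FLOWFILE VALUES (?, ?, ?)", rows)

# "Engineers" relationship
engineers = conn.execute(
    "SELECT * FROM FLOWFILE WHERE title LIKE '%Engineer%'").fetchall()

# "VP" relationship (only the name column is selected)
vp_names = [n for (n,) in conn.execute(
    "SELECT name FROM FLOWFILE WHERE title = 'Vice President'")]

# "Younger Than Average" relationship (AVG(age) is 38.75 for this data)
younger = conn.execute(
    "SELECT * FROM FLOWFILE WHERE age < (SELECT AVG(age) FROM FLOWFILE)").fetchall()

print(engineers)                 # [('John Doe', 34, 'Software Engineer')]
print(vp_names)                  # ['Jacob Doe', 'Janice Doe']
print([r[0] for r in younger])   # ['John Doe', 'Jane Doe']
```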
+
+        <p>
+            Note that this example is intended to illustrate the data that is 
input and output from the Processor. The actual format of the data may vary, 
depending on the configuration of the
+            Record Reader and Record Writer that is used. For example, here we 
assume that we are using a CSV Reader and a CSV Writer and that both are 
configured to have a header line. Should we have
+            used a JSON Writer instead, the output would have contained the 
same information but been presented as JSON output. The user is able to choose 
which input and output format makes the most
+            sense for their use case. The input and output formats need 
not be the same.
+        </p>
+
+        <p>
+            It is also worth noting that the outbound FlowFiles have two 
different schemas. The <code>Engineers</code> and <code>Younger Than 
Average</code> FlowFiles contain 3 fields:
+            <code>name</code>, <code>age</code>, and <code>title</code> while 
the <code>VP</code> FlowFile contains only the <code>name</code> field. In most 
cases, the Record Writer is configured to
+            use whatever Schema is provided to it by the Record (this 
generally means that it is configured with a <code>Schema Access 
Strategy</code> of <code>Inherit Record Schema</code>). In such
+            a case, this works well. However, if a Schema is supplied to the 
Record Writer explicitly, it is important to ensure that the Schema accounts 
+            for all fields. If not, then the
+            fields that are missing from the Record Writer's schema will 
simply not be present in the output.
+        </p>
+
+
+        <h3>SQL Over Hierarchical Data</h3>
+        <p>
+            One important detail that we must take into account when 
evaluating SQL over streams of arbitrary data is how
+            we can handle hierarchical data, such as JSON, XML, and Avro. 
Because SQL was developed originally for relational databases, which
+            represent "flat" data, it is easy to understand how this would map 
to other "flat" data like a CSV file, or even
+            a "flat" JSON representation where all fields are primitive types. 
However, in many cases, users encounter cases where they would like to evaluate 
SQL
+            over JSON or Avro data that is made up of many nested values. For 
example, consider the following JSON as input:
+        </p>
+
+        <pre><code>
+            {
+              "name": "John Doe",
+              "title": "Software Engineer",
+              "age": 40,
+              "addresses": [{
+                  "streetNumber": 4820,
+                  "street": "My Street",
+                  "apartment": null,
+                  "city": "New York",
+                  "state": "NY",
+                  "country": "USA",
+                  "label": "work"
+              }, {
+                  "streetNumber": 327,
+                  "street": "Small Street",
+                  "apartment": 309,
+                  "city": "Los Angeles",
+                  "state": "CA",
+                  "country": "USA",
+                  "label": "home"
+              }],
+              "project": {
+                  "name": "Apache NiFi",
+                  "maintainer": {
+                        "id": 28302873,
+                        "name": "Apache Software Foundation"
+                   },
+                  "debutYear": 2014
+              }
+            }
+        </code></pre>
+
+        <p>
+            Consider a query that will select the title and name of any person 
who has a home address in a different state
+            than their work address. Here, we can only select the fields 
<code>name</code>, <code>title</code>,
+            <code>age</code>, and <code>addresses</code>. In this scenario, 
<code>addresses</code> represents an Array of complex
+            objects, i.e., Records. To accommodate this, QueryRecord 
provides User-Defined Functions that enable
+            <a href="../../../../../html/record-path-guide.html">RecordPath</a> 
to be used. RecordPath is a simple NiFi Domain-Specific Language (DSL)
+            that allows users to reference a nested structure.
+        </p>
+
+        <p>
+            The primary User-Defined Function that will be used is named 
<code>RPATH</code> (short for Record Path). This function expects exactly two 
arguments:
+            the Record to evaluate the RecordPath against, and the RecordPath 
to evaluate (in that order).
+            So, to select the title and name of any person who has a home 
address in a different state than their work address, we can use
+            the following SQL statement:
+        </p>
+
+        <pre><code>
+            SELECT title, name
+            FROM FLOWFILE
+            WHERE RPATH(addresses, '/state[/label = ''home'']') &lt;&gt;
+                  RPATH(addresses, '/state[/label = ''work'']')
+        </code></pre>
+
+        <p>
+            To explain this query in English: it selects the "title" and 
"name" fields from any Record in the FlowFile that has an address whose 
"label" is "home" and
+            another address whose "label" is "work", where the two addresses 
have different states.
+        </p>
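The effect of those two RecordPath expressions can be sketched in a few lines of Python. The helper below is a stand-in for what <code>RPATH(addresses, '/state[/label = ''home'']')</code> resolves to; it is not the actual RecordPath engine, and the sample addresses are taken from the JSON input above:

```python
def state_for_label(addresses, label):
    # Return the "state" of the first address carrying the given label,
    # mimicking RPATH(addresses, "/state[/label = '<label>']").
    for addr in addresses:
        if addr.get("label") == label:
            return addr.get("state")
    return None

# The two addresses from the example record: work in NY, home in CA.
addresses = [
    {"state": "NY", "label": "work"},
    {"state": "CA", "label": "home"},
]

# WHERE RPATH(..., home state) <> RPATH(..., work state)
selected = state_for_label(addresses, "home") != state_for_label(addresses, "work")
```

Since the example record's home state (CA) differs from its work state (NY), the predicate holds and the record would be selected.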
+
+        <p>
+            Similarly, we could select the entire Record (all fields) of any 
person who has a "project" whose maintainer is the Apache Software Foundation 
using the query:
+        </p>
+
+        <pre><code>
+            SELECT *
+            FROM FLOWFILE
+            WHERE RPATH(project, '/maintainer/name') = 'Apache Software 
Foundation'
+        </code></pre>
+
+        <p>
+            There is one caveat, though, when using RecordPath: the 
<code>RPATH</code> function returns an <code>Object</code>, which in 
JDBC is represented as an <code>OTHER</code>
+            type. This is fine and does not affect anything when the function 
is used as above. However, what if we wanted to apply another SQL function to 
the result? For example, what if we wanted to use
+            the SQL query <code>SELECT * FROM FLOWFILE WHERE RPATH(project, 
'/maintainer/name') LIKE 'Apache%'</code>? This would fail with a very long 
error such as:
+        </p>
+
+        <code><pre>
+3860 [pool-2-thread-1] ERROR org.apache.nifi.processors.standard.QueryRecord - 
QueryRecord[id=135e9bc8-0372-4c1e-9c82-9d9a5bfe1261] Unable to query 
FlowFile[0,174730597574853.mockFlowFile,0B] due to java.lang.RuntimeException: 
Error while compiling generated Java code:
+org.apache.calcite.DataContext root;
+
+public org.apache.calcite.linq4j.Enumerable bind(final 
org.apache.calcite.DataContext root0) {
+  root = root0;
+  final org.apache.calcite.linq4j.Enumerable _inputEnumerable = 
((org.apache.nifi.queryrecord.FlowFileTable) 
root.getRootSchema().getTable("FLOWFILE")).project(new int[] {
+    0,
+    1,
+    2,
+    3});
+  return new org.apache.calcite.linq4j.AbstractEnumerable(){
+      public org.apache.calcite.linq4j.Enumerator enumerator() {
+        return new org.apache.calcite.linq4j.Enumerator(){
+            public final org.apache.calcite.linq4j.Enumerator inputEnumerator 
= _inputEnumerable.enumerator();
+            public void reset() {
+              inputEnumerator.reset();
+            }
+
+            public boolean moveNext() {
+              while (inputEnumerator.moveNext()) {
+                final Object[] inp3_ = (Object[]) ((Object[]) 
inputEnumerator.current())[3];
+                if (new 
org.apache.nifi.processors.standard.QueryRecord.ObjectRecordPath().eval(inp3_, 
"/state[. = 'NY']") != null && org.apache.calcite.runtime.SqlFunctions.like(new 
org.apache.nifi.processors.standard.QueryRecord.ObjectRecordPath().eval(inp3_, 
"/state[. = 'NY']"), "N%")) {
+                  return true;
+                }
+              }
+              return false;
+            }
+
+            public void close() {
+              inputEnumerator.close();
+            }
+
+            public Object current() {
+              final Object[] current = (Object[]) inputEnumerator.current();
+              return new Object[] {
+                  current[2],
+                  current[0]};
+            }
+
+          };
+      }
+
+    };
+}
+
+
+public Class getElementType() {
+  return java.lang.Object[].class;
+}
+
+
+: java.lang.RuntimeException: Error while compiling generated Java code:
+org.apache.calcite.DataContext root;
+
+public org.apache.calcite.linq4j.Enumerable bind(final 
org.apache.calcite.DataContext root0) {
+  root = root0;
+  final org.apache.calcite.linq4j.Enumerable _inputEnumerable = 
((org.apache.nifi.queryrecord.FlowFileTable) 
root.getRootSchema().getTable("FLOWFILE")).project(new int[] {
+    0,
+    1,
+    2,
+    3});
+  return new org.apache.calcite.linq4j.AbstractEnumerable(){
+      public org.apache.calcite.linq4j.Enumerator enumerator() {
+        return new org.apache.calcite.linq4j.Enumerator(){
+            public final org.apache.calcite.linq4j.Enumerator inputEnumerator 
= _inputEnumerable.enumerator();
+            public void reset() {
+              inputEnumerator.reset();
+            }
+
+            public boolean moveNext() {
+              while (inputEnumerator.moveNext()) {
+                final Object[] inp3_ = (Object[]) ((Object[]) 
inputEnumerator.current())[3];
+                if (new 
org.apache.nifi.processors.standard.QueryRecord.ObjectRecordPath().eval(inp3_, 
"/state[. = 'NY']") != null && org.apache.calcite.runtime.SqlFunctions.like(new 
org.apache.nifi.processors.standard.QueryRecord.ObjectRecordPath().eval(inp3_, 
"/state[. = 'NY']"), "N%")) {
+                  return true;
+                }
+              }
+              return false;
+            }
+
+            public void close() {
+              inputEnumerator.close();
+            }
+
+            public Object current() {
+              final Object[] current = (Object[]) inputEnumerator.current();
+              return new Object[] {
+                  current[2],
+                  current[0]};
+            }
+
+          };
+      }
+
+    };
+}
+
+
+public Class getElementType() {
+  return java.lang.Object[].class;
+}
+
+
+
+3864 [pool-2-thread-1] ERROR org.apache.nifi.processors.standard.QueryRecord -
+java.lang.RuntimeException: Error while compiling generated Java code:
+org.apache.calcite.DataContext root;
+
+public org.apache.calcite.linq4j.Enumerable bind(final 
org.apache.calcite.DataContext root0) {
+  root = root0;
+  final org.apache.calcite.linq4j.Enumerable _inputEnumerable = 
((org.apache.nifi.queryrecord.FlowFileTable) 
root.getRootSchema().getTable("FLOWFILE")).project(new int[] {
+    0,
+    1,
+    2,
+    3});
+  return new org.apache.calcite.linq4j.AbstractEnumerable(){
+      public org.apache.calcite.linq4j.Enumerator enumerator() {
+        return new org.apache.calcite.linq4j.Enumerator(){
+            public final org.apache.calcite.linq4j.Enumerator inputEnumerator 
= _inputEnumerable.enumerator();
+            public void reset() {
+              inputEnumerator.reset();
+            }
+
+            public boolean moveNext() {
+              while (inputEnumerator.moveNext()) {
+                final Object[] inp3_ = (Object[]) ((Object[]) 
inputEnumerator.current())[3];
+                if (new 
org.apache.nifi.processors.standard.QueryRecord.ObjectRecordPath().eval(inp3_, 
"/state[. = 'NY']") != null && org.apache.calcite.runtime.SqlFunctions.like(new 
org.apache.nifi.processors.standard.QueryRecord.ObjectRecordPath().eval(inp3_, 
"/state[. = 'NY']"), "N%")) {
+                  return true;
+                }
+              }
+              return false;
+            }
+
+            public void close() {
+              inputEnumerator.close();
+            }
+
+            public Object current() {
+              final Object[] current = (Object[]) inputEnumerator.current();
+              return new Object[] {
+                  current[2],
+                  current[0]};
+            }
+
+          };
+      }
+
+    };
+}
+
+
+public Class getElementType() {
+  return java.lang.Object[].class;
+}
+
+
+
+       at org.apache.calcite.avatica.Helper.wrap(Helper.java:37)
+       at 
org.apache.calcite.adapter.enumerable.EnumerableInterpretable.toBindable(EnumerableInterpretable.java:108)
+       at 
org.apache.calcite.prepare.CalcitePrepareImpl$CalcitePreparingStmt.implement(CalcitePrepareImpl.java:1237)
+       at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:331)
+       at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:230)
+       at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:772)
+       at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:636)
+       at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:606)
+       at 
org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:229)
+       at 
org.apache.calcite.jdbc.CalciteConnectionImpl.prepareStatement_(CalciteConnectionImpl.java:211)
+       at 
org.apache.calcite.jdbc.CalciteConnectionImpl.prepareStatement(CalciteConnectionImpl.java:200)
+       at 
org.apache.calcite.jdbc.CalciteConnectionImpl.prepareStatement(CalciteConnectionImpl.java:90)
+       at 
org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:175)
+       at 
org.apache.nifi.processors.standard.QueryRecord.buildCachedStatement(QueryRecord.java:428)
+       at 
org.apache.nifi.processors.standard.QueryRecord.getStatement(QueryRecord.java:415)
+       at 
org.apache.nifi.processors.standard.QueryRecord.queryWithCache(QueryRecord.java:475)
+       at 
org.apache.nifi.processors.standard.QueryRecord.onTrigger(QueryRecord.java:311)
+       at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
+       at 
org.apache.nifi.util.StandardProcessorTestRunner$RunProcessor.call(StandardProcessorTestRunner.java:255)
+       at 
org.apache.nifi.util.StandardProcessorTestRunner$RunProcessor.call(StandardProcessorTestRunner.java:249)
+       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
+       at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
+       at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
+       at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
+       at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
+       at java.lang.Thread.run(Thread.java:745)
+Caused by: org.codehaus.commons.compiler.CompileException: Line 21, Column 
180: No applicable constructor/method found for actual parameters 
"java.lang.Object, java.lang.String"; candidates are: "public static boolean 
org.apache.calcite.runtime.SqlFunctions.like(java.lang.String, 
java.lang.String)", "public static boolean 
org.apache.calcite.runtime.SqlFunctions.like(java.lang.String, 
java.lang.String, java.lang.String)"
+       at 
org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:10092)
+       at 
org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:7506)
+       at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:7376)
+       at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:7280)
+       at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:3850)
+       at org.codehaus.janino.UnitCompiler.access$6900(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$10.visitMethodInvocation(UnitCompiler.java:3251)
+       at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:3974)
+       at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:3278)
+       at 
org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:4345)
+       at 
org.codehaus.janino.UnitCompiler.compileBoolean2(UnitCompiler.java:2842)
+       at org.codehaus.janino.UnitCompiler.access$4800(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$8.visitMethodInvocation(UnitCompiler.java:2803)
+       at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:3974)
+       at 
org.codehaus.janino.UnitCompiler.compileBoolean(UnitCompiler.java:2830)
+       at 
org.codehaus.janino.UnitCompiler.compileBoolean2(UnitCompiler.java:2924)
+       at org.codehaus.janino.UnitCompiler.access$5000(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$8.visitBinaryOperation(UnitCompiler.java:2797)
+       at org.codehaus.janino.Java$BinaryOperation.accept(Java.java:3768)
+       at 
org.codehaus.janino.UnitCompiler.compileBoolean(UnitCompiler.java:2830)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:1742)
+       at org.codehaus.janino.UnitCompiler.access$1200(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$4.visitIfStatement(UnitCompiler.java:935)
+       at org.codehaus.janino.Java$IfStatement.accept(Java.java:2157)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:956)
+       at 
org.codehaus.janino.UnitCompiler.compileStatements(UnitCompiler.java:997)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:983)
+       at org.codehaus.janino.UnitCompiler.access$1000(UnitCompiler.java:183)
+       at org.codehaus.janino.UnitCompiler$4.visitBlock(UnitCompiler.java:933)
+       at org.codehaus.janino.Java$Block.accept(Java.java:2012)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:956)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:1263)
+       at org.codehaus.janino.UnitCompiler.access$1500(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$4.visitWhileStatement(UnitCompiler.java:938)
+       at org.codehaus.janino.Java$WhileStatement.accept(Java.java:2244)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:956)
+       at 
org.codehaus.janino.UnitCompiler.compileStatements(UnitCompiler.java:997)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:2283)
+       at 
org.codehaus.janino.UnitCompiler.compileDeclaredMethods(UnitCompiler.java:820)
+       at 
org.codehaus.janino.UnitCompiler.compileDeclaredMethods(UnitCompiler.java:792)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:505)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:656)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:620)
+       at org.codehaus.janino.UnitCompiler.access$200(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$2.visitAnonymousClassDeclaration(UnitCompiler.java:343)
+       at 
org.codehaus.janino.Java$AnonymousClassDeclaration.accept(Java.java:894)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:352)
+       at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:4194)
+       at org.codehaus.janino.UnitCompiler.access$7300(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$10.visitNewAnonymousClassInstance(UnitCompiler.java:3260)
+       at 
org.codehaus.janino.Java$NewAnonymousClassInstance.accept(Java.java:4131)
+       at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:3278)
+       at 
org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:4345)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:1901)
+       at org.codehaus.janino.UnitCompiler.access$2100(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$4.visitReturnStatement(UnitCompiler.java:944)
+       at org.codehaus.janino.Java$ReturnStatement.accept(Java.java:2544)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:956)
+       at 
org.codehaus.janino.UnitCompiler.compileStatements(UnitCompiler.java:997)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:2283)
+       at 
org.codehaus.janino.UnitCompiler.compileDeclaredMethods(UnitCompiler.java:820)
+       at 
org.codehaus.janino.UnitCompiler.compileDeclaredMethods(UnitCompiler.java:792)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:505)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:656)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:620)
+       at org.codehaus.janino.UnitCompiler.access$200(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$2.visitAnonymousClassDeclaration(UnitCompiler.java:343)
+       at 
org.codehaus.janino.Java$AnonymousClassDeclaration.accept(Java.java:894)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:352)
+       at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:4194)
+       at org.codehaus.janino.UnitCompiler.access$7300(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$10.visitNewAnonymousClassInstance(UnitCompiler.java:3260)
+       at 
org.codehaus.janino.Java$NewAnonymousClassInstance.accept(Java.java:4131)
+       at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:3278)
+       at 
org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:4345)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:1901)
+       at org.codehaus.janino.UnitCompiler.access$2100(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$4.visitReturnStatement(UnitCompiler.java:944)
+       at org.codehaus.janino.Java$ReturnStatement.accept(Java.java:2544)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:956)
+       at 
org.codehaus.janino.UnitCompiler.compileStatements(UnitCompiler.java:997)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:2283)
+       at 
org.codehaus.janino.UnitCompiler.compileDeclaredMethods(UnitCompiler.java:820)
+       at 
org.codehaus.janino.UnitCompiler.compileDeclaredMethods(UnitCompiler.java:792)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:505)
+       at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:391)
+       at org.codehaus.janino.UnitCompiler.access$400(UnitCompiler.java:183)
+       at 
org.codehaus.janino.UnitCompiler$2.visitPackageMemberClassDeclaration(UnitCompiler.java:345)
+       at 
org.codehaus.janino.Java$PackageMemberClassDeclaration.accept(Java.java:1139)
+       at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:352)
+       at org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:320)
+       at 
org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:383)
+       at 
org.codehaus.janino.ClassBodyEvaluator.compileToClass(ClassBodyEvaluator.java:315)
+       at 
org.codehaus.janino.ClassBodyEvaluator.cook(ClassBodyEvaluator.java:233)
+       at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:192)
+       at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:47)
+       at 
org.codehaus.janino.ClassBodyEvaluator.createInstance(ClassBodyEvaluator.java:340)
+       at 
org.apache.calcite.adapter.enumerable.EnumerableInterpretable.getBindable(EnumerableInterpretable.java:140)
+       at 
org.apache.calcite.adapter.enumerable.EnumerableInterpretable.toBindable(EnumerableInterpretable.java:105)
+       ... 24 common frames omitted
+        </pre></code>
+
+        <p>
+            This happens because the <code>LIKE</code> function expects 
<code>String</code> operands; i.e., it expects a format of 
<code>String LIKE String</code>,
+            and we have instead passed it <code>Other LIKE String</code>. 
To account for this, there exist a few other RecordPath functions, 
<code>RPATH_STRING</code>, <code>RPATH_INT</code>,
+            <code>RPATH_LONG</code>, <code>RPATH_FLOAT</code>, and 
<code>RPATH_DOUBLE</code>, that can be used when you want the return 
type to be <code>String</code>,
+            <code>Integer</code>, <code>Long</code> (64-bit Integer), 
<code>Float</code>, or <code>Double</code>, respectively. So the above query 
would instead need to be written as
+            <code>SELECT * FROM FLOWFILE WHERE RPATH_STRING(project, 
'/maintainer/name') LIKE 'Apache%'</code>, which produces the desired 
output.
+        </p>
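The type mismatch can be sketched in Python. The helper below roughly emulates a SQL <code>LIKE 'prefix%'</code> comparison that only accepts a String operand; it is an illustration, not the Calcite implementation. Where the real engine fails to compile the query when handed a non-String operand, this sketch simply evaluates to false:

```python
def sql_like_prefix(value, prefix):
    # Rough stand-in for SQL "value LIKE 'prefix%'": the operand must be
    # String-typed for the comparison to apply at all.
    return isinstance(value, str) and value.startswith(prefix)

# What RPATH_STRING(project, '/maintainer/name') would yield for the example.
maintainer_name = "Apache Software Foundation"

matches = sql_like_prefix(maintainer_name, "Apache")   # String LIKE String
rejected = sql_like_prefix(object(), "Apache")         # Other LIKE String
```

Casting with <code>RPATH_STRING</code> before the comparison is what makes the <code>LIKE</code> succeed; handing <code>LIKE</code> the untyped <code>Object</code> result of plain <code>RPATH</code> is what triggers the compile error shown above.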
+
+
+        <h3>Aggregate Functions</h3>
+        <p>
+            In order to evaluate SQL against a stream of data, the Processor 
treats each individual FlowFile as its own
+            Table. Therefore, aggregate functions such as SUM and AVG will be 
evaluated against all Records in each FlowFile
+            but will not span FlowFile boundaries. As an example, consider an 
input FlowFile in CSV format with the following
+            data:
+        </p>
+
+        <pre><code>
+name, age, gender
+John Doe, 40, Male
+Jane Doe, 39, Female
+Jimmy Doe, 4, Male
+June Doe, 1, Female
+        </code></pre>
+
+        <p>
+            Given this data, we may wish to run a query that uses an 
aggregate function, such as MAX:
+        </p>
+
+        <pre><code>
+            SELECT name
+            FROM FLOWFILE
+            WHERE age = (
+                SELECT MAX(age)
+            )
+        </code></pre>
+
+        <p>
+            The above query will select the name of the oldest person, namely 
John Doe. If a second FlowFile were to then arrive,
+            its contents would be evaluated as an entirely new Table.
+        </p>
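The per-FlowFile scoping can be sketched with an in-memory SQLite table standing in for a single FlowFile. This is an illustration, not QueryRecord's actual Calcite-based engine; note that SQLite requires an explicit <code>FROM</code> clause in the scalar subquery, whereas the query above omits it:

```python
import csv
import io
import sqlite3

# The CSV FlowFile content from the example above.
flowfile = """name,age,gender
John Doe,40,Male
Jane Doe,39,Female
Jimmy Doe,4,Male
June Doe,1,Female
"""

# Each FlowFile is loaded into its own table, so MAX(age) is evaluated
# against this FlowFile's Records only and never spans FlowFile boundaries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flowfile (name TEXT, age INTEGER, gender TEXT)")
rows = csv.DictReader(io.StringIO(flowfile))
conn.executemany(
    "INSERT INTO flowfile VALUES (?, ?, ?)",
    [(r["name"], int(r["age"]), r["gender"]) for r in rows],
)

oldest = conn.execute(
    "SELECT name FROM flowfile WHERE age = (SELECT MAX(age) FROM flowfile)"
).fetchall()
```

A second FlowFile would be loaded into a fresh table of its own, so its MAX(age) is computed independently.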
+
+       </body>
+</html>
\ No newline at end of file

