Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_1_0/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_1_0/index.html?rev=1874478&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_1_0/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_1_0/index.html
 Tue Feb 25 07:28:36 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>ConsumeKafka_1_0</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">ConsumeKafka_1_0</h1><h2>Description: </h2><p>Consumes messages from 
Apache Kafka specifically built against the Kafka 1.0 Consumer API. The 
complementary NiFi processor for sending messages is PublishKafka_1_0.</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>Kafka, Get, Ingest, Ingress, Topic, PubSub, Consume, 
1.0</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi 
Expression Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Kafka Brokers</strong></td><td 
id="default-value">localhost:9092</td><td id="allowable-values"></td><td 
id="description">A comma-separated list of known Kafka Brokers in the format 
&lt;host&gt;:&lt;port&gt;<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Security Protocol</strong></td><td 
id="default-value">PLAINTEXT</td><td id="allowable-values"><ul><li>PLAINTEXT 
<img src="../../../../../html/images/iconInfo.png" alt="PLAINTEXT" 
title="PLAINTEXT"></img></li><li>SSL <img 
src="../../../../../html/images/iconInfo.png" alt="SSL" 
title="SSL"></img></li><li>SASL_PLAINTEXT <img 
src="../../../../../html/images/iconInfo.png" alt="SASL_PLAINTEXT" 
title="SASL_PLAINTEX
 T"></img></li><li>SASL_SSL <img src="../../../../../html/images/iconInfo.png" 
alt="SASL_SSL" title="SASL_SSL"></img></li></ul></td><td 
id="description">Protocol used to communicate with brokers. Corresponds to 
Kafka's 'security.protocol' property.</td></tr><tr><td id="name">Kerberos 
Service Name</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">The service name that matches the primary name of the Kafka 
server configured in the broker JAAS file.This can be defined either in Kafka's 
JAAS config or in Kafka's config. Corresponds to Kafka's 'security.protocol' 
property.It is ignored unless one of the SASL options of the &lt;Security 
Protocol&gt; are selected.<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Credentials Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>KerberosCredentialsService<br/><strong>Implementation: </strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.11.3/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Kerberos principal that 
will be used to connect to brokers. If not set, a JAAS configuration file is 
expected to be set in the JVM properties defined in the bootstrap.conf file. 
This principal will be set into Kafka's 'sasl.jaas.config' 
property.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name">Kerberos 
Keytab</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">The Kerberos keytab that will be used to connect to brokers. 
If not
  set, it is expected to set a JAAS configuration file in the JVM properties 
defined in the bootstrap.conf file. This principal will be set into 
'sasl.jaas.config' Kafka's property.<br/><strong>Supports Expression Language: 
true (will be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">SSL Context Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>SSLContextService<br/><strong>Implementations: </strong><a 
href="../../../nifi-ssl-context-service-nar/1.11.3/org.apache.nifi.ssl.StandardSSLContextService/index.html">StandardSSLContextService</a><br/><a
 
href="../../../nifi-ssl-context-service-nar/1.11.3/org.apache.nifi.ssl.StandardRestrictedSSLContextService/index.html">StandardRestrictedSSLContextService</a></td><td
 id="description">Specifies the SSL Context Service to use for communicating 
with Kafka.</td></tr><tr><td id="name"><strong>Topic Name(s)</strong></td><td 
id="default-value"></td><td id="al
 lowable-values"></td><td id="description">The name of the Kafka Topic(s) to 
pull from. More than one can be supplied if comma 
separated.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name"><strong>Topic 
Name Format</strong></td><td id="default-value">names</td><td 
id="allowable-values"><ul><li>names <img 
src="../../../../../html/images/iconInfo.png" alt="Topic is a full topic name 
or comma separated list of names" title="Topic is a full topic name or comma 
separated list of names"></img></li><li>pattern <img 
src="../../../../../html/images/iconInfo.png" alt="Topic is a regex using the 
Java Pattern syntax" title="Topic is a regex using the Java Pattern 
syntax"></img></li></ul></td><td id="description">Specifies whether the 
Topic(s) provided are a comma separated list of names or a single regular 
expression</td></tr><tr><td id="name"><strong>Honor 
Transactions</strong></td><td id="default-value">true</td><td id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether or not NiFi should honor transactional 
guarantees when communicating with Kafka. If false, the Processor will use an 
"isolation level" of read_uncommitted. This means that messages will be received 
as soon as they are written to Kafka and will be pulled even if the producer 
later cancels the transaction. If this value is true, NiFi will not receive any 
messages for which the producer's transaction was canceled, but this can result 
in some latency since the consumer must wait for the producer to finish its 
entire transaction instead of pulling as the messages become 
available.</td></tr><tr><td id="name"><strong>Group ID</strong></td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
Group ID is used to identify consumers that are within the same consumer group. 
Corresponds to Kafka's 'group.id' property.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Offset Reset</strong></td><td 
id="default-value">latest</td><td id="allowable-values"><ul><li>earliest <img 
src="../../../../../html/images/iconInfo.png" alt="Automatically reset the 
offset to the earliest offset" title="Automatically reset the offset to the 
earliest offset"></img></li><li>latest <img 
src="../../../../../html/images/iconInfo.png" alt="Automatically reset the 
offset to the latest offset" title="Automatically reset the offset to the 
latest offset"></img></li><li>none <img 
src="../../../../../html/images/iconInfo.png" alt="Throw exception to the 
consumer if no previous offset is found for the consumer's group" title="Throw 
exception to the consumer if no previous offset is found for the consumer's 
group"></img></li></ul></td><td id="description">Allows you to manage the 
condition when there is no initial offset in Kafka or if the current offset 
does not exist any more on the server
  (e.g. because that data has been deleted). Corresponds to Kafka's 
'auto.offset.reset' property.</td></tr><tr><td id="name"><strong>Key Attribute 
Encoding</strong></td><td id="default-value">utf-8</td><td 
id="allowable-values"><ul><li>UTF-8 Encoded <img 
src="../../../../../html/images/iconInfo.png" alt="The key is interpreted as a 
UTF-8 Encoded string." title="The key is interpreted as a UTF-8 Encoded 
string."></img></li><li>Hex Encoded <img 
src="../../../../../html/images/iconInfo.png" alt="The key is interpreted as 
arbitrary binary data and is encoded using hexadecimal characters with 
uppercase letters" title="The key is interpreted as arbitrary binary data and 
is encoded using hexadecimal characters with uppercase 
letters"></img></li></ul></td><td id="description">FlowFiles that are emitted 
have an attribute named 'kafka.key'. This property dictates how the value of 
the attribute should be encoded.</td></tr><tr><td id="name">Message 
Demarcator</td><td id="default-value"></td><td 
 id="allowable-values"></td><td id="description">Since KafkaConsumer receives 
messages in batches, you have an option to output FlowFiles which contains all 
Kafka messages in a single batch for a given topic and partition and this 
property allows you to provide a string (interpreted as UTF-8) to use for 
demarcating apart multiple Kafka messages. This is an optional property and if 
not provided each Kafka message received will result in a single FlowFile which 
 time it is triggered. To enter special character such as 'new line' use 
CTRL+Enter or Shift+Enter depending on the OS<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Message Header Encoding</td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">Any message header that is found on a Kafka message will be 
added to the outbound FlowFile as an attribute. This property indicates the 
Character Encoding to use for deserializing the headers.</td></tr><tr><td id="name">Headers to Add as Attributes 
(Regex)</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">A Regular Expression that is matched against all message 
headers. Any message header whose name matches the regex will be added to the 
FlowFile as an Attribute. If not specified, no Header values will be added as 
FlowFile attributes. If two messages have a different value for the same header 
and that header is selected by the provided regex, then those two messages must 
be added to different FlowFiles. As a result, users should be cautious about 
using a regex like ".*" if messages are expected to have header values that are 
unique per message, such as an identifier or timestamp, because it will prevent 
NiFi from bundling the messages together efficiently.</td></tr><tr><td 
id="name">Max Poll Records</td><td id="default-value">10000</td><td 
id="allowable-values"></td><td id="description">Specifies the maximum number 
 of records Kafka should return in a single poll.</td></tr><tr><td 
id="name">Max Uncommitted Time</td><td id="default-value">1 secs</td><td 
id="allowable-values"></td><td id="description">Specifies the maximum amount of 
time allowed to pass before offsets must be committed. This value impacts how 
often offsets will be committed.  Committing offsets less often increases 
throughput but also increases the window of potential data duplication in the 
event of a rebalance or JVM restart between commits.  This value is also 
related to maximum poll records and the use of a message demarcator.  When 
using a message demarcator, we can have far more uncommitted messages than when 
we're not using one, as there is much less to keep track of in 
memory.</td></tr></table><h3>Dynamic Properties: </h3><p>Dynamic Properties 
allow the user to specify both the name and value of a property.<table 
id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td
 id="name">The name of a Kaf
 ka configuration property.</td><td id="value">The value of a given Kafka 
configuration property.</td><td>These properties will be added on the Kafka 
configuration after loading any provided configuration properties. In the event 
a dynamic property represents a property that was already set, its value will 
be ignored and a WARN message logged. For the list of available Kafka properties 
please refer to: http://kafka.apache.org/documentation.html#configuration. 
<br/><strong>Supports Expression Language: true (will be evaluated using 
variable registry only)</strong></td></tr></table></p><h3>Relationships: 
</h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFiles
 received from Kafka. Depending on demarcation strategy it is a flow file per 
message or a bundle of messages grouped by topic and 
partition.</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Desc
 ription</th></tr><tr><td>kafka.count</td><td>The number of messages written if 
more than one</td></tr><tr><td>kafka.key</td><td>The key of message if present 
and if single message. How the key is encoded depends on the value of the 'Key 
Attribute Encoding' property.</td></tr><tr><td>kafka.offset</td><td>The offset 
of the message in the partition of the 
topic.</td></tr><tr><td>kafka.timestamp</td><td>The timestamp of the message in 
the partition of the topic.</td></tr><tr><td>kafka.partition</td><td>The 
partition of the topic the message or message bundle is 
from</td></tr><tr><td>kafka.topic</td><td>The topic the message or message 
bundle is from</td></tr></table><h3>State management: </h3>This component does 
not store state.<h3>Restricted: </h3>This component is not restricted.<h3>Input 
requirement: </h3>This component does not allow an incoming 
relationship.<h3>System Resource Considerations:</h3>None 
specified.</body></html>
\ No newline at end of file

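The properties above map closely onto the raw Kafka 1.0 consumer configuration
that ConsumeKafka_1_0 wraps. The following is a minimal sketch of that mapping,
not NiFi's implementation; the broker address, group id, and topic name are
placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;

    public class ConsumeKafkaSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // <Kafka Brokers>
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // <Group ID> (placeholder)
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");         // <Offset Reset>
            props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");   // <Honor Transactions> = true
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "10000");           // <Max Poll Records>
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // offsets committed explicitly below
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic"));   // <Topic Name(s)> (placeholder)
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);    // Kafka 1.0 uses poll(long millis)
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // the processor exposes these as kafka.partition / kafka.offset attributes
                    System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
                }
                consumer.commitSync(); // roughly what the processor does within <Max Uncommitted Time>
            }
        }
    }

With <Honor Transactions> set to false, isolation.level would instead be
read_uncommitted.
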
Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0/additionalDetails.html?rev=1874478&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0/additionalDetails.html
 Tue Feb 25 07:28:36 2020
@@ -0,0 +1,144 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8" />
+        <title>PublishKafkaRecord</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css" />
+    </head>
+
+    <body>
+        <h2>Description</h2>
+        <p>
+            This Processor puts the contents of a FlowFile to a Topic in
+            <a href="http://kafka.apache.org/">Apache Kafka</a> using the 
KafkaProducer API available
+            with Kafka 1.0. The contents of the incoming FlowFile will be 
read using the
+            configured Record Reader. Each record will then be serialized 
using the configured
+            Record Writer, and this serialized form will be the content of a 
Kafka message.
+            This message is optionally assigned a key by using the &lt;Kafka 
Key&gt; Property.
+        </p>
+        
+
+        <h2>Security Configuration</h2>
+        <p>
+            The Security Protocol property allows the user to specify the 
protocol for communicating
+            with the Kafka broker. The following sections describe each of the 
protocols in further detail.
+        </p>
+        <h3>PLAINTEXT</h3>
+        <p>
+            This option provides an unsecured connection to the broker, with 
no client authentication and no encryption.
+            In order to use this option the broker must be configured with a 
listener of the form:
+            <pre>
+    PLAINTEXT://host.name:port
+            </pre>
+        </p>
+        <h3>SSL</h3>
+        <p>
+            This option provides an encrypted connection to the broker, with 
optional client authentication. In order
+            to use this option the broker must be configured with a listener 
of the form:
+            <pre>
+    SSL://host.name:port
+            </pre>
+            In addition, the processor must have an SSL Context Service 
selected.
+        </p>
+        <p>
+            If the broker specifies ssl.client.auth=none, or does not specify 
ssl.client.auth, then the client will
+            not be required to present a certificate. In this case, the SSL 
Context Service selected may specify only
+            a truststore containing the public key of the certificate 
authority used to sign the broker's key.
+        </p>
+        <p>
+            If the broker specifies ssl.client.auth=required then the client 
will be required to present a certificate.
+            In this case, the SSL Context Service must also specify a keystore 
containing a client key, in addition to
+            a truststore as described above.
+        </p>
+        <h3>SASL_PLAINTEXT</h3>
+        <p>
+            This option uses SASL with a PLAINTEXT transport layer to 
authenticate to the broker. In order to use this
+            option the broker must be configured with a listener of the form:
+            <pre>
+    SASL_PLAINTEXT://host.name:port
+            </pre>
+            In addition, the Kerberos Service Name must be specified in the 
processor.
+        </p>
+        <h4>SASL_PLAINTEXT - GSSAPI</h4>
+        <p>
+            If the SASL mechanism is GSSAPI, then the client must provide a 
JAAS configuration to authenticate. The
+            JAAS configuration can be provided by specifying the 
java.security.auth.login.config system property in
+            NiFi's bootstrap.conf, such as:
+            <pre>
+    java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
+            </pre>
+        </p>
+        <p>
+            An example of the JAAS config file would be the following:
+            <pre>
+    KafkaClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/path/to/nifi.keytab"
+        serviceName="kafka"
+        principal="[email protected]";
+    };
+            </pre>
+        <b>NOTE:</b> The serviceName in the JAAS file must match the Kerberos 
Service Name in the processor.
+        </p>
+        <p>
+            Alternatively, the JAAS
+            configuration when using GSSAPI can be provided by specifying the 
Kerberos Principal and Kerberos Keytab
+            directly in the processor properties. This will dynamically create 
a JAAS configuration like above, and
+            will take precedence over the java.security.auth.login.config 
system property.
+        </p>
+        <h4>SASL_PLAINTEXT - PLAIN</h4>
+        <p>
+            If the SASL mechanism is PLAIN, then the client must provide a JAAS 
configuration to authenticate, but
+            the JAAS configuration must use Kafka's PlainLoginModule. An 
example of the JAAS config file would
+            be the following:
+            <pre>
+    KafkaClient {
+      org.apache.kafka.common.security.plain.PlainLoginModule required
+      username="nifi"
+      password="nifi-password";
+    };
+            </pre>
+        </p>
+        <p>
+            <b>NOTE:</b> It is not recommended to use a SASL mechanism of 
PLAIN with SASL_PLAINTEXT, as it would transmit
+            the username and password unencrypted.
+        </p>
+        <p>
+            <b>NOTE:</b> Using the PlainLoginModule will cause it to be 
registered in the JVM's static list of Providers, making
+            it visible to components in other NARs that may access the 
providers. There is currently a known issue
+            where Kafka processors using the PlainLoginModule will cause HDFS 
processors with Kerberos to no longer work.
+        </p>
+        <h3>SASL_SSL</h3>
+        <p>
+            This option uses SASL with an SSL/TLS transport layer to 
authenticate to the broker. In order to use this
+            option the broker must be configured with a listener of the form:
+            <pre>
+    SASL_SSL://host.name:port
+            </pre>
+        </p>
+        <p>
+            See the SASL_PLAINTEXT section for a description of how to provide 
the proper JAAS configuration
+            depending on the SASL mechanism (GSSAPI or PLAIN).
+        </p>
+        <p>
+            See the SSL section for a description of how to configure the SSL 
Context Service based on the
+            ssl.client.auth property.
+        </p>
+    </body>
+</html>

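As described above, when the Kerberos Principal and Kerberos Keytab properties
are used, a JAAS configuration equivalent to the example file is generated and
set into Kafka's 'sasl.jaas.config' property. The following is a minimal sketch
of building such a value for a plain Kafka 1.0 client; the principal, keytab
path, and service name are placeholders, and this is not NiFi's actual code:

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;

    public class SaslJaasConfigSketch {
        // Mirrors the KafkaClient block from the example JAAS file above, expressed in
        // the single-line form accepted by the 'sasl.jaas.config' client property.
        static String gssapiJaasConfig(String principal, String keytab, String serviceName) {
            return "com.sun.security.auth.module.Krb5LoginModule required "
                    + "useKeyTab=true storeKey=true "
                    + "keyTab=\"" + keytab + "\" "
                    + "serviceName=\"" + serviceName + "\" "
                    + "principal=\"" + principal + "\";";
        }

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "GSSAPI");
            // All three values below are placeholders.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    gssapiJaasConfig("nifi@EXAMPLE.COM", "/path/to/nifi.keytab", "kafka"));
            System.out.println(props.getProperty(SaslConfigs.SASL_JAAS_CONFIG));
        }
    }

A value supplied this way takes precedence over a JAAS file referenced via the
java.security.auth.login.config system property, as noted above.
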
Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0/index.html?rev=1874478&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0/index.html
 Tue Feb 25 07:28:36 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PublishKafkaRecord_1_0</title><link 
rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PublishKafkaRecord_1_0</h1><h2>Description: </h2><p>Sends the contents 
of a FlowFile as individual records to Apache Kafka using the Kafka 1.0 
Producer API. The contents of the FlowFile are expected to be record-oriented 
data that can be read by the configured Record Reader. The complementary NiFi 
processor for fetching messages is ConsumeKafkaRecord_1_0.</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>Apache, Kafka, Record, csv, json, avro, logs, Put, Send, Message, 
PubSub, 1.0</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) are 
considered optional. The table also indicates any default values, and whether a 
property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Kafka Brokers</strong></td><td 
id="default-value">localhost:9092</td><td id="allowable-values"></td><td 
id="description">A comma-separated list of known Kafka Brokers in the format 
&lt;host&gt;:&lt;port&gt;<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Topic Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the Kafka Topic to 
publish to.<br/><strong>Supports Expression Language: true (will be evaluated 
using flow file attributes and variable registry)</strong></td></tr><tr><td id="name"><strong>Record 
Reader</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>RecordReaderFactory<br/><strong>Implementations: </strong><a 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.syslog.SyslogReader/index.html">SyslogReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.json.JsonPathReader/index.html">JsonPathReader</a><br/><a
 
href="../../../nifi-scripting-nar/1.11.3/org.apache.nifi.record.script.ScriptedReader/index.html">ScriptedReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.json.JsonTreeReader/index.html">JsonTreeReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.grok.GrokReader/index.html">GrokReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.avro.Avro
 Reader/index.html">AvroReader</a><br/><a 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.xml.XMLReader/index.html">XMLReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.csv.CSVReader/index.html">CSVReader</a><br/><a
 
href="../../../nifi-parquet-nar/1.11.3/org.apache.nifi.parquet.ParquetReader/index.html">ParquetReader</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.syslog.Syslog5424Reader/index.html">Syslog5424Reader</a></td><td
 id="description">The Record Reader to use for incoming 
FlowFiles</td></tr><tr><td id="name"><strong>Record Writer</strong></td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>RecordSetWriterFactory<br/><strong>Implementations: 
</strong><a 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html">FreeFormTextRecordSetWriter</a><br/><a
 
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.json.JsonRecordSetWriter/index.html">JsonRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.csv.CSVRecordSetWriter/index.html">CSVRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.avro.AvroRecordSetWriter/index.html">AvroRecordSetWriter</a><br/><a
 
href="../../../nifi-scripting-nar/1.11.3/org.apache.nifi.record.script.ScriptedRecordSetWriter/index.html">ScriptedRecordSetWriter</a><br/><a
 
href="../../../nifi-parquet-nar/1.11.3/org.apache.nifi.parquet.ParquetRecordSetWriter/index.html">ParquetRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.xml.XMLRecordSetWriter/index.html">XMLRecordSetWriter</a></td><td
 id="description">The Record Writer to use in order to serialize the data 
before sending to Kafka</td></tr><tr><td id="name"><strong>Use Transactions</strong></td><td id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether or not NiFi should provide Transactional 
guarantees when communicating with Kafka. If there is a problem sending data to 
Kafka, and this property is set to false, then the messages that have already 
been sent to Kafka will continue on and be delivered to consumers. If this is 
set to true, then the Kafka transaction will be rolled back so that those 
messages are not available to consumers. Setting this to true requires that the 
&lt;Delivery Guarantee&gt; property be set to "Guarantee Replicated 
Delivery."</td></tr><tr><td id="name">Transactional Id Prefix</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">When Use Transaction is set to true, KafkaProducer config 
'transactional.id' will be a generated UUID and will be prefixed with this 
string.<br/><strong>Supports Expression Language: true (will be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Delivery Guarantee</strong></td><td 
id="default-value">0</td><td id="allowable-values"><ul><li>Best Effort <img 
src="../../../../../html/images/iconInfo.png" alt="FlowFile will be routed to 
success after successfully writing the content to a Kafka node, without waiting 
for a response. This provides the best performance but may result in data 
loss." title="FlowFile will be routed to success after successfully writing the 
content to a Kafka node, without waiting for a response. This provides the best 
performance but may result in data loss."></img></li><li>Guarantee Single Node 
Delivery <img src="../../../../../html/images/iconInfo.png" alt="FlowFile will 
be routed to success if the message is received by a single Kafka node, whether 
or not it is replicated. This is faster than &lt;Guarantee Replicated 
Delivery&gt; but can result in data loss if a Kafka node crashes" 
title="FlowFile will be routed to succ
 ess if the message is received by a single Kafka node, whether or not it is 
replicated. This is faster than &lt;Guarantee Replicated Delivery&gt; but can 
result in data loss if a Kafka node crashes"></img></li><li>Guarantee 
Replicated Delivery <img src="../../../../../html/images/iconInfo.png" 
alt="FlowFile will be routed to failure unless the message is replicated to the 
appropriate number of Kafka Nodes according to the Topic configuration" 
title="FlowFile will be routed to failure unless the message is replicated to 
the appropriate number of Kafka Nodes according to the Topic 
configuration"></img></li></ul></td><td id="description">Specifies the 
requirement for guaranteeing that a message is sent to Kafka. Corresponds to 
Kafka's 'acks' property.</td></tr><tr><td id="name">Attributes to Send as 
Headers (Regex)</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">A Regular Expression that is 
matched against all FlowFile attribute names. Any attribute 
 whose name matches the regex will be added to the Kafka messages as a Header. 
If not specified, no FlowFile attributes will be added as 
headers.</td></tr><tr><td id="name">Message Header Encoding</td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">For any attribute that is added as a message header, as 
configured via the &lt;Attributes to Send as Headers&gt; property, this 
property indicates the Character Encoding to use for serializing the 
headers.</td></tr><tr><td id="name"><strong>Security Protocol</strong></td><td 
id="default-value">PLAINTEXT</td><td id="allowable-values"><ul><li>PLAINTEXT 
<img src="../../../../../html/images/iconInfo.png" alt="PLAINTEXT" 
title="PLAINTEXT"></img></li><li>SSL <img 
src="../../../../../html/images/iconInfo.png" alt="SSL" 
title="SSL"></img></li><li>SASL_PLAINTEXT <img 
src="../../../../../html/images/iconInfo.png" alt="SASL_PLAINTEXT" 
title="SASL_PLAINTEXT"></img></li><li>SASL_SSL <img src="../../../../../html/ima
 ges/iconInfo.png" alt="SASL_SSL" title="SASL_SSL"></img></li></ul></td><td 
id="description">Protocol used to communicate with brokers. Corresponds to 
Kafka's 'security.protocol' property.</td></tr><tr><td id="name">Kerberos 
Credentials Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>KerberosCredentialsService<br/><strong>Implementation: 
</strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.11.3/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Service Name</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The service name that matches 
the primary name of the Kafka server configured in the broker JAAS file.This 
can be defined either in Kafka's JAAS config or in Kafka's 
 config. Corresponds to Kafka's 'security.protocol' property.It is ignored 
unless one of the SASL options of the &lt;Security Protocol&gt; are 
selected.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name">Kerberos 
Principal</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">The Kerberos principal that will be used to connect to 
brokers. If not set, a JAAS configuration file is expected to be set in the JVM 
properties defined in the bootstrap.conf file. This principal will be set into 
Kafka's 'sasl.jaas.config' property.<br/><strong>Supports Expression Language: 
true (will be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Kerberos Keytab</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Kerberos keytab that will 
be used to connect to brokers. If not set, a JAAS configuration file is expected 
to be set in the JVM properties defined in the bootstrap.conf file. This keytab 
will be set into Kafka's 'sasl.jaas.config' property.<br/><strong>Supports Expression 
Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">SSL Context Service</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>SSLContextService<br/><strong>Implementations: </strong><a 
href="../../../nifi-ssl-context-service-nar/1.11.3/org.apache.nifi.ssl.StandardSSLContextService/index.html">StandardSSLContextService</a><br/><a
 
href="../../../nifi-ssl-context-service-nar/1.11.3/org.apache.nifi.ssl.StandardRestrictedSSLContextService/index.html">StandardRestrictedSSLContextService</a></td><td
 id="description">Specifies the SSL Context Service to use for communicating 
with Kafka.</td></tr><tr><td id="name">Message Key Field</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
name of a field in the Input Records that should be used as the Key for the Kafka 
message.<br/><strong>Supports Expression Language: true (will be evaluated 
using flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Max Request Size</strong></td><td id="default-value">1 
MB</td><td id="allowable-values"></td><td id="description">The maximum size of 
a request in bytes. Corresponds to Kafka's 'max.request.size' property and 
defaults to 1 MB (1048576).</td></tr><tr><td id="name"><strong>Acknowledgment 
Wait Time</strong></td><td id="default-value">5 secs</td><td 
id="allowable-values"></td><td id="description">After sending a message to 
Kafka, this indicates the amount of time that we are willing to wait for a 
response from Kafka. If Kafka does not acknowledge the message within this time 
period, the FlowFile will be routed to 'failure'.</td></tr><tr><td 
id="name"><strong>Max Metadata Wait Time</strong></td><td id="default-value">5 
sec</td><td id="allowable-values"></td><td id="description">The amount of time 
the publisher will wait to obtain metadata or wait 
for the buffer to flush during the 'send' call before failing the entire 'send' 
call. Corresponds to Kafka's 'max.block.ms' property.<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">Partitioner class</td><td 
id="default-value">org.apache.kafka.clients.producer.internals.DefaultPartitioner</td><td
 id="allowable-values"><ul><li>RoundRobinPartitioner <img 
src="../../../../../html/images/iconInfo.png" alt="Messages will be assigned 
partitions in a round-robin fashion, sending the first message to Partition 1, 
the next message to Partition 2, and so on, wrapping as necessary." 
title="Messages will be assigned partitions in a round-robin fashion, sending 
the first message to Partition 1, the next message to Partition 2, and so on, 
wrapping as necessary."></img></li><li>DefaultPartitioner <img 
src="../../../../../html/images/ico
 nInfo.png" alt="Messages will be assigned to random partitions." 
title="Messages will be assigned to random 
partitions."></img></li><li>RecordPath Partitioner <img 
src="../../../../../html/images/iconInfo.png" alt="Interprets the 
&lt;Partition&gt; property as a RecordPath that will be evaluated against each 
Record to determine which partition the Record will go to. All Records that 
have the same value for the given RecordPath will go to the same Partition." 
title="Interprets the &lt;Partition&gt; property as a RecordPath that will be 
evaluated against each Record to determine which partition the Record will go 
to. All Records that have the same value for the given RecordPath will go to 
the same Partition."></img></li><li>Expression Language Partitioner <img 
src="../../../../../html/images/iconInfo.png" alt="Interprets the 
&lt;Partition&gt; property as Expression Language that will be evaluated 
against each FlowFile. This Expression will be evaluated once against the 
FlowFile, so all
  Records in a given FlowFile will go to the same partition." title="Interprets 
the &lt;Partition&gt; property as Expression Language that will be evaluated 
against each FlowFile. This Expression will be evaluated once against the 
FlowFile, so all Records in a given FlowFile will go to the same 
partition."></img></li></ul></td><td id="description">Specifies which class to 
use to compute a partition id for a message. Corresponds to Kafka's 
'partitioner.class' property.</td></tr><tr><td id="name">Partition</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies which Partition Records will go to. How this value 
is interpreted is dictated by the &lt;Partitioner class&gt; 
property.<br/><strong>Supports Expression Language: true (will be evaluated 
using flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Compression Type</strong></td><td 
id="default-value">none</td><td id="allowable-values"><ul><li>none</li><li>gzip
 </li><li>snappy</li><li>lz4</li></ul></td><td id="description">This parameter 
allows you to specify the compression codec for all data generated by this 
producer.</td></tr></table><h3>Dynamic Properties: </h3><p>Dynamic Properties 
allow the user to specify both the name and value of a property.<table 
id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td
 id="name">The name of a Kafka configuration property.</td><td id="value">The 
value of a given Kafka configuration property.</td><td>These properties will be 
added on the Kafka configuration after loading any provided configuration 
properties. In the event a dynamic property represents a property that was 
already set, its value will be ignored and a WARN message logged. For the list of 
available Kafka properties please refer to: 
http://kafka.apache.org/documentation.html#configuration. <br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr></table></p><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFiles
 for which all content was sent to Kafka.</td></tr><tr><td>failure</td><td>Any 
FlowFile that cannot be sent to Kafka will be routed to this 
Relationship</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>msg.count</td><td>The
 number of messages that were sent to Kafka for this FlowFile. This attribute 
is added only to FlowFiles that are routed to 
success.</td></tr></table><h3>State management: </h3>This component does not 
store state.<h3>Restricted: </h3>This component is not restricted.<h3>Input 
requirement: </h3>This component requires an incoming relationship.<h3>System 
Resource Considerations:</h3>None specified.<h3>See Also:</h3><p><a 
href="../org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/index.html">PublishKafka_1_0</a>,
 
 <a 
href="../org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_1_0/index.html">ConsumeKafka_1_0</a>,
 <a 
href="../org.apache.nifi.processors.kafka.pubsub.ConsumeKafkaRecord_1_0/index.html">ConsumeKafkaRecord_1_0</a></p></body></html>
\ No newline at end of file

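The <Use Transactions>, <Transactional Id Prefix>, and <Delivery Guarantee>
properties described above correspond to the transactional pattern of the Kafka
1.0 producer API. The following is a minimal sketch of that pattern, not
PublishKafkaRecord_1_0's implementation; broker, prefix, and topic names are
placeholders:

    import java.util.Properties;
    import java.util.UUID;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TransactionalPublishSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // <Kafka Brokers>
            props.put(ProducerConfig.ACKS_CONFIG, "all");                          // Guarantee Replicated Delivery
            // <Transactional Id Prefix> plus a generated UUID, as described above
            props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "example-prefix-" + UUID.randomUUID());
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions();
                producer.beginTransaction();
                try {
                    producer.send(new ProducerRecord<>("example-topic", "record-1"));
                    producer.send(new ProducerRecord<>("example-topic", "record-2"));
                    producer.commitTransaction(); // becomes visible to read_committed consumers
                } catch (Exception e) {
                    producer.abortTransaction();  // rolled back; never delivered to such consumers
                }
            }
        }
    }

This is also why, as the property description notes, transactions require
acks=all: the transactional producer rejects weaker acknowledgment settings.
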
Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/additionalDetails.html?rev=1874478&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/additionalDetails.html
 Tue Feb 25 07:28:36 2020
@@ -0,0 +1,156 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8" />
+        <title>PublishKafka</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css" />
+    </head>
+
+    <body>
+        <h2>Description</h2>
+        <p>
+            This Processor puts the contents of a FlowFile to a Topic in
+            <a href="http://kafka.apache.org/">Apache Kafka</a> using the 
KafkaProducer API available
+            with Kafka 1.0. The content of a FlowFile becomes the contents 
of a Kafka message.
+            This message is optionally assigned a key by using the &lt;Kafka 
Key&gt; Property.
+        </p>
+
+        <p>
+            The Processor allows the user to configure an optional Message 
Demarcator that
+            can be used to send many messages per FlowFile. For example, a 
<i>\n</i> could be used
+            to indicate that the contents of the FlowFile should be used to 
send one message
+            per line of text. It also supports multi-char demarcators (e.g., 
'my custom demarcator').
+            If the property is not set, the entire contents of the FlowFile
+            will be sent as a single message. When using the demarcator, if 
some messages are
+            successfully sent but other messages fail to send, the resulting 
FlowFile will be
+            considered a failed FlowFile and will have additional attributes 
to that effect.
+            One such attribute is 'failed.last.idx', which indicates the 
index of the last message
+            that was successfully ACKed by Kafka. (If no demarcator is used, 
the value of this index will be -1.)
+            This allows PublishKafka to re-send only un-ACKed messages on 
the next retry.
+        </p>
+        
+        
+        <h2>Security Configuration</h2>
+        <p>
+            The Security Protocol property allows the user to specify the 
protocol for communicating
+            with the Kafka broker. The following sections describe each of the 
protocols in further detail.
+        </p>
+        <h3>PLAINTEXT</h3>
+        <p>
+            This option provides an unsecured connection to the broker, with 
no client authentication and no encryption.
+            In order to use this option the broker must be configured with a 
listener of the form:
+            <pre>
+    PLAINTEXT://host.name:port
+            </pre>
+        </p>
+        <h3>SSL</h3>
+        <p>
+            This option provides an encrypted connection to the broker, with 
optional client authentication. In order
+            to use this option the broker must be configured with a listener 
of the form:
+            <pre>
+    SSL://host.name:port
+            </pre>
+            In addition, the processor must have an SSL Context Service 
selected.
+        </p>
+        <p>
+            If the broker specifies ssl.client.auth=none, or does not specify 
ssl.client.auth, then the client will
+            not be required to present a certificate. In this case, the SSL 
Context Service selected may specify only
+            a truststore containing the public key of the certificate 
authority used to sign the broker's key.
+        </p>
+        <p>
+            If the broker specifies ssl.client.auth=required then the client 
will be required to present a certificate.
+            In this case, the SSL Context Service must also specify a keystore 
containing a client key, in addition to
+            a truststore as described above.
+        </p>
+        <h3>SASL_PLAINTEXT</h3>
+        <p>
+            This option uses SASL with a PLAINTEXT transport layer to 
authenticate to the broker. In order to use this
+            option the broker must be configured with a listener of the form:
+            <pre>
+    SASL_PLAINTEXT://host.name:port
+            </pre>
+            In addition, the Kerberos Service Name must be specified in the 
processor.
+        </p>
+        <h4>SASL_PLAINTEXT - GSSAPI</h4>
+        <p>
+            If the SASL mechanism is GSSAPI, then the client must provide a 
JAAS configuration to authenticate. The
+            JAAS configuration can be provided by specifying the 
java.security.auth.login.config system property in
+            NiFi's bootstrap.conf, such as:
+            <pre>
+    java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
+            </pre>
+        </p>
+        <p>
+            An example of the JAAS config file would be the following:
+            <pre>
+    KafkaClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/path/to/nifi.keytab"
+        serviceName="kafka"
+        principal="[email protected]";
+    };
+            </pre>
+        <b>NOTE:</b> The serviceName in the JAAS file must match the Kerberos 
Service Name in the processor.
+        </p>
+        <p>
+            Alternatively, the JAAS
+            configuration when using GSSAPI can be provided by specifying the 
Kerberos Principal and Kerberos Keytab
+            directly in the processor properties. This will dynamically create 
a JAAS configuration like above, and
+            will take precedence over the java.security.auth.login.config 
system property.
+        </p>
+        <h4>SASL_PLAINTEXT - PLAIN</h4>
+        <p>
+            If the SASL mechanism is PLAIN, then the client must provide a JAAS 
configuration to authenticate, but
+            the JAAS configuration must use Kafka's PlainLoginModule. An 
example of the JAAS config file would
+            be the following:
+            <pre>
+    KafkaClient {
+      org.apache.kafka.common.security.plain.PlainLoginModule required
+      username="nifi"
+      password="nifi-password";
+    };
+            </pre>
+        </p>
+        <p>
+            <b>NOTE:</b> It is not recommended to use a SASL mechanism of 
PLAIN with SASL_PLAINTEXT, as it would transmit
+            the username and password unencrypted.
+        </p>
+        <p>
+            <b>NOTE:</b> Using the PlainLoginModule will cause it to be 
registered in the JVM's static list of Providers, making
+            it visible to components in other NARs that may access the 
providers. There is currently a known issue
+            where Kafka processors using the PlainLoginModule will cause HDFS 
processors with Kerberos to no longer work.
+        </p>
+        <h3>SASL_SSL</h3>
+        <p>
+            This option uses SASL with an SSL/TLS transport layer to 
authenticate to the broker. In order to use this
+            option the broker must be configured with a listener of the form:
+            <pre>
+    SASL_SSL://host.name:port
+            </pre>
+        </p>
+        <p>
+            See the SASL_PLAINTEXT section for a description of how to provide 
the proper JAAS configuration
+            depending on the SASL mechanism (GSSAPI or PLAIN).
+        </p>
+        <p>
+            See the SSL section for a description of how to configure the SSL 
Context Service based on the
+            ssl.client.auth property.
+        </p>
+    </body>
+</html>

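The Message Demarcator behavior described above (one Kafka message per
demarcated piece of the FlowFile content, with 'failed.last.idx' recording the
index of the last message ACKed by Kafka) can be sketched against the Kafka 1.0
producer API as follows. This is illustrative only, not PublishKafka_1_0's code;
the broker, topic, and content are placeholders:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.Future;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class DemarcatedPublishSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            String flowFileContent = "line one\nline two\nline three"; // stand-in for FlowFile content
            String demarcator = "\n";                                  // <Message Demarcator>

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String[] messages = flowFileContent.split(Pattern.quote(demarcator));
                List<Future<RecordMetadata>> acks = new ArrayList<>();
                for (String message : messages) {
                    acks.add(producer.send(new ProducerRecord<>("example-topic", message)));
                }
                int lastAckedIdx = -1; // stays -1 if nothing is ACKed
                for (Future<RecordMetadata> ack : acks) {
                    try {
                        ack.get();     // blocks until Kafka acknowledges this message
                        lastAckedIdx++;
                    } catch (Exception e) {
                        break;         // messages after this index would be re-sent on retry
                    }
                }
                System.out.println("last ACKed index: " + lastAckedIdx);
            }
        }
    }
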
Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/index.html?rev=1874478&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0/index.html
 Tue Feb 25 07:28:36 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>PublishKafka_1_0</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">PublishKafka_1_0</h1><h2>Description: </h2><p>Sends the contents of a 
FlowFile as a message to Apache Kafka using the Kafka 1.0 Producer API. The 
messages to send may be individual FlowFiles or may be delimited, using a 
user-specified delimiter, such as a new-line. The complementary NiFi processor 
for fetching messages is ConsumeKafka_1_0.</p><p><a 
href="additionalDetails.html">Additional Details...</a></p><h3>Tags: 
</h3><p>Apache, Kafka, Put, Send, Message, PubSub, 1.0</p><h3>Properties: 
</h3><p>In the list below, the names of required properties appear in 
<strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any 
default values, and whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td 
id="name"><strong>Kafka Brokers</strong></td><td 
id="default-value">localhost:9092</td><td id="allowable-values"></td><td 
id="description">A comma-separated list of known Kafka Brokers in the format 
&lt;host&gt;:&lt;port&gt;<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Security Protocol</strong></td><td 
id="default-value">PLAINTEXT</td><td id="allowable-values"><ul><li>PLAINTEXT 
<img src="../../../../../html/images/iconInfo.png" alt="PLAINTEXT" 
title="PLAINTEXT"></img></li><li>SSL <img 
src="../../../../../html/images/iconInfo.png" alt="SSL" title="SSL"></i
 mg></li><li>SASL_PLAINTEXT <img src="../../../../../html/images/iconInfo.png" 
alt="SASL_PLAINTEXT" title="SASL_PLAINTEXT"></img></li><li>SASL_SSL <img 
src="../../../../../html/images/iconInfo.png" alt="SASL_SSL" 
title="SASL_SSL"></img></li></ul></td><td id="description">Protocol used to 
communicate with brokers. Corresponds to Kafka's 'security.protocol' 
property.</td></tr><tr><td id="name">Kerberos Service Name</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
service name that matches the primary name of the Kafka server configured in 
the broker JAAS file. This can be defined either in Kafka's JAAS config or in 
Kafka's config. Corresponds to Kafka's 'sasl.kerberos.service.name' property. It is 
ignored unless one of the SASL options of the &lt;Security Protocol&gt; is 
selected.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name">Kerberos 
Credentials Service</td><td id="default-value"></td><td id="allowable-values"><strong>Controller Service API: 
</strong><br/>KerberosCredentialsService<br/><strong>Implementation: 
</strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.11.3/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Principal</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The Kerberos principal that 
will be used to connect to brokers. If not set, a JAAS configuration file is 
expected to be set in the JVM properties defined in the bootstrap.conf file. 
This principal will be set into Kafka's 'sasl.jaas.config' 
property.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name">Kerberos 
Keytab</td><td id="default-value"></td>
 <td id="allowable-values"></td><td id="description">The Kerberos keytab that 
will be used to connect to brokers. If not set, a JAAS configuration file is 
expected to be set in the JVM properties defined in the bootstrap.conf file. 
This keytab will be set into Kafka's 'sasl.jaas.config' 
property.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td id="name">SSL Context 
Service</td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>SSLContextService<br/><strong>Implementations: </strong><a 
href="../../../nifi-ssl-context-service-nar/1.11.3/org.apache.nifi.ssl.StandardSSLContextService/index.html">StandardSSLContextService</a><br/><a
 
href="../../../nifi-ssl-context-service-nar/1.11.3/org.apache.nifi.ssl.StandardRestrictedSSLContextService/index.html">StandardRestrictedSSLContextService</a></td><td
 id="description">Specifies the SSL Context Service to use for communi
 cating with Kafka.</td></tr><tr><td id="name"><strong>Topic 
Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the Kafka Topic to 
publish to.<br/><strong>Supports Expression Language: true (will be evaluated 
using flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Delivery Guarantee</strong></td><td 
id="default-value">0</td><td id="allowable-values"><ul><li>Best Effort <img 
src="../../../../../html/images/iconInfo.png" alt="FlowFile will be routed to 
success after successfully writing the content to a Kafka node, without waiting 
for a response. This provides the best performance but may result in data 
loss." title="FlowFile will be routed to success after successfully writing the 
content to a Kafka node, without waiting for a response. This provides the best 
performance but may result in data loss."></img></li><li>Guarantee Single Node 
Delivery <img src="../../../../../html/images/iconInf
 o.png" alt="FlowFile will be routed to success if the message is received by a 
single Kafka node, whether or not it is replicated. This is faster than 
&lt;Guarantee Replicated Delivery&gt; but can result in data loss if a Kafka 
node crashes" title="FlowFile will be routed to success if the message is 
received by a single Kafka node, whether or not it is replicated. This is 
faster than &lt;Guarantee Replicated Delivery&gt; but can result in data loss 
if a Kafka node crashes"></img></li><li>Guarantee Replicated Delivery <img 
src="../../../../../html/images/iconInfo.png" alt="FlowFile will be routed to 
failure unless the message is replicated to the appropriate number of Kafka 
Nodes according to the Topic configuration" title="FlowFile will be routed to 
failure unless the message is replicated to the appropriate number of Kafka 
Nodes according to the Topic configuration"></img></li></ul></td><td 
id="description">Specifies the requirement for guaranteeing that a message is 
sent to Kafka. Corresponds to Kafka's 'acks' property.</td></tr><tr><td 
id="name"><strong>Use Transactions</strong></td><td 
id="default-value">true</td><td 
id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td 
id="description">Specifies whether or not NiFi should provide Transactional 
guarantees when communicating with Kafka. If there is a problem sending data to 
Kafka, and this property is set to false, then the messages that have already 
been sent to Kafka will continue on and be delivered to consumers. If this is 
set to true, then the Kafka transaction will be rolled back so that those 
messages are not available to consumers. Setting this to true requires that the 
&lt;Delivery Guarantee&gt; property be set to "Guarantee Replicated 
Delivery."</td></tr><tr><td id="name">Transactional Id Prefix</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">When Use Transaction is set to true, KafkaProducer config 
'transactional.id' will be a generated UUID and w
 ill be prefixed with this string.<br/><strong>Supports Expression Language: 
true (will be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Attributes to Send as Headers (Regex)</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">A 
Regular Expression that is matched against all FlowFile attribute names. Any 
attribute whose name matches the regex will be added to the Kafka messages as a 
Header. If not specified, no FlowFile attributes will be added as 
headers.</td></tr><tr><td id="name">Message Header Encoding</td><td 
id="default-value">UTF-8</td><td id="allowable-values"></td><td 
id="description">For any attribute that is added as a message header, as 
configured via the &lt;Attributes to Send as Headers&gt; property, this 
property indicates the Character Encoding to use for serializing the 
headers.</td></tr><tr><td id="name">Kafka Key</td><td 
id="default-value"></td><td id="allowable-values"></td><td id="description">The 
Key to use for the Message. If not specified, the flow file attribute 
'kafka.key' is used as the message key, if it is present. Beware that setting 
the Kafka key and demarcating at the same time may lead to many Kafka messages 
with the same key. Normally this is not a problem, as Kafka does not enforce or 
assume message and key uniqueness. Still, setting the demarcator and Kafka key 
at the same time poses a risk of data loss on Kafka: during a topic compaction 
on Kafka, messages will be deduplicated based on this key.<br/><strong>Supports 
Expression Language: true (will be evaluated using flow file attributes and 
variable registry)</strong></td></tr><tr><td id="name"><strong>Key Attribute 
Encoding</strong></td><td id="default-value">utf-8</td><td 
id="allowable-values"><ul><li>UTF-8 Encoded <img 
src="../../../../../html/images/iconInfo.png" alt="The key is interpreted as a 
UTF-8 Encoded string." title="The key is interpreted as a UTF-8 Encoded 
string."></img></li><li>Hex Encoded <im
 g src="../../../../../html/images/iconInfo.png" alt="The key is interpreted as 
arbitrary binary data that is encoded using hexadecimal characters with 
uppercase letters." title="The key is interpreted as arbitrary binary data that 
is encoded using hexadecimal characters with uppercase 
letters."></img></li></ul></td><td id="description">FlowFiles that are emitted 
have an attribute named 'kafka.key'. This property dictates how the value of 
the attribute should be encoded.</td></tr><tr><td id="name">Message 
Demarcator</td><td id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies the string (interpreted as UTF-8) to use for 
demarcating multiple messages within a single FlowFile. If not specified, the 
entire content of the FlowFile will be used as a single message. If specified, 
the contents of the FlowFile will be split on this delimiter and each section 
sent as a separate Kafka message. To enter a special character such as 'new 
line', use CTRL+Enter or Shift+Enter, depending on your 
OS.<br/><strong>Supports Expression Language: true 
(will be evaluated using flow file attributes and variable 
registry)</strong></td></tr><tr><td id="name"><strong>Max Request 
Size</strong></td><td id="default-value">1 MB</td><td 
id="allowable-values"></td><td id="description">The maximum size of a request 
in bytes. Corresponds to Kafka's 'max.request.size' property and defaults to 1 
MB (1048576).</td></tr><tr><td id="name"><strong>Acknowledgment Wait 
Time</strong></td><td id="default-value">5 secs</td><td 
id="allowable-values"></td><td id="description">After sending a message to 
Kafka, this indicates the amount of time that we are willing to wait for a 
response from Kafka. If Kafka does not acknowledge the message within this time 
period, the FlowFile will be routed to 'failure'.</td></tr><tr><td 
id="name"><strong>Max Metadata Wait Time</strong></td><td id="default-value">5 
sec</td><td id="allowable-values"></td><td id="description">The amount of time 
the publisher will wait to obtain metadata, or for the buffer to flush, during 
the 'send' call before failing the entire 'send' call. Corresponds to Kafka's 
'max.block.ms' property.<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name">Partitioner class</td><td 
id="default-value">org.apache.kafka.clients.producer.internals.DefaultPartitioner</td><td
 id="allowable-values"><ul><li>RoundRobinPartitioner <img 
src="../../../../../html/images/iconInfo.png" alt="Messages will be assigned 
partitions in a round-robin fashion, sending the first message to Partition 1, 
the next message to Partition 2, and so on, wrapping as necessary." 
title="Messages will be assigned partitions in a round-robin fashion, sending 
the first message to Partition 1, the next message to Partition 2, and so on, 
wrapping as necessary."></img></li><li>DefaultPartitioner <img 
src="../../../../../html/images/iconInfo.png" alt="Messages will be assigned 
to random partitions." title="Messages will be assigned to random 
partitions."></img></li><li>Expression Language Partitioner <img 
src="../../../../../html/images/iconInfo.png" alt="Interprets the 
&lt;Partition&gt; property as Expression Language that will be evaluated 
against each FlowFile. This Expression will be evaluated once against the 
FlowFile, so all Records in a given FlowFile will go to the same partition." 
title="Interprets the &lt;Partition&gt; property as Expression Language that 
will be evaluated against each FlowFile. This Expression will be evaluated once 
against the FlowFile, so all Records in a given FlowFile will go to the same 
partition."></img></li></ul></td><td id="description">Specifies which class to 
use to compute a partition id for a message. Corresponds to Kafka's 
'partitioner.class' property.</td></tr><tr><td id="name">Partition</td><td 
id="default-value"></td><td id="allowable-values"></td><td 
id="description">Specifies which Partition Records w
 ill go to.<br/><strong>Supports Expression Language: true (will be evaluated 
using flow file attributes and variable registry)</strong></td></tr><tr><td 
id="name"><strong>Compression Type</strong></td><td 
id="default-value">none</td><td 
id="allowable-values"><ul><li>none</li><li>gzip</li><li>snappy</li><li>lz4</li></ul></td><td
 id="description">This parameter allows you to specify the compression codec 
for all data generated by this producer.</td></tr></table><h3>Dynamic 
Properties: </h3><p>Dynamic Properties allow the user to specify both the name 
and value of a property.<table 
id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td
 id="name">The name of a Kafka configuration property.</td><td id="value">The 
value of a given Kafka configuration property.</td><td>These properties will be 
added on the Kafka configuration after loading any provided configuration 
properties. In the event a dynamic property represents a property that was 
already set, its value will be ignored and a WARN message will be logged. For 
the list of available Kafka 
properties please refer to: 
http://kafka.apache.org/documentation.html#configuration. <br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr></table></p><h3>Relationships: </h3><table 
id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>success</td><td>FlowFiles
for which all content was sent to Kafka.</td></tr><tr><td>failure</td><td>Any 
FlowFile that cannot be sent to Kafka will be routed to this 
Relationship.</td></tr></table><h3>Reads Attributes: </h3>None 
specified.<h3>Writes Attributes: </h3><table 
id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>msg.count</td><td>The
 number of messages that were sent to Kafka for this FlowFile. This attribute 
is added only to FlowFiles that are routed to success. If the &lt;Message 
Demarcator&gt; Property is not set, this will always be 1, but if the Property 
is set, 
 it may be greater than 1.</td></tr></table><h3>State management: </h3>This 
component does not store state.<h3>Restricted: </h3>This component is not 
restricted.<h3>Input requirement: </h3>This component requires an incoming 
relationship.<h3>System Resource Considerations:</h3>None 
specified.</body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.record.sink.kafka.KafkaRecordSink_1_0/index.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.record.sink.kafka.KafkaRecordSink_1_0/index.html?rev=1874478&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.record.sink.kafka.KafkaRecordSink_1_0/index.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-1-0-nar/1.11.3/org.apache.nifi.record.sink.kafka.KafkaRecordSink_1_0/index.html
 Tue Feb 25 07:28:36 2020
@@ -0,0 +1 @@
+<!DOCTYPE html><html lang="en"><head><meta 
charset="utf-8"></meta><title>KafkaRecordSink_1_0</title><link rel="stylesheet" 
href="../../../../../css/component-usage.css" 
type="text/css"></link></head><script type="text/javascript">window.onload = 
function(){if(self==top) { document.getElementById('nameHeader').style.display 
= "inherit"; } }</script><body><h1 id="nameHeader" style="display: 
none;">KafkaRecordSink_1_0</h1><h2>Description: </h2><p>Provides a service to 
write records to a Kafka 1.x topic.</p><h3>Tags: </h3><p>kafka, record, 
sink</p><h3>Properties: </h3><p>In the list below, the names of required 
properties appear in <strong>bold</strong>. Any other properties (not in bold) 
are considered optional. The table also indicates any default values, and 
whether a property supports the <a 
href="../../../../../html/expression-language-guide.html">NiFi Expression 
Language</a>.</p><table id="properties"><tr><th>Name</th><th>Default 
Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Kafka Brokers</strong></td><td 
id="default-value">localhost:9092</td><td id="allowable-values"></td><td 
id="description">A comma-separated list of known Kafka Brokers in the format 
&lt;host&gt;:&lt;port&gt;<br/><strong>Supports Expression Language: true (will 
be evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Topic Name</strong></td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The name of the Kafka Topic to 
publish to.<br/><strong>Supports Expression Language: true (will be evaluated 
using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Record Writer</strong></td><td id="default-value"></td><td 
id="allowable-values"><strong>Controller Service API: 
</strong><br/>RecordSetWriterFactory<br/><strong>Implementations: </strong><a 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html">FreeFormTextRec
 ordSetWriter</a><br/><a 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.json.JsonRecordSetWriter/index.html">JsonRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.csv.CSVRecordSetWriter/index.html">CSVRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.avro.AvroRecordSetWriter/index.html">AvroRecordSetWriter</a><br/><a
 
href="../../../nifi-scripting-nar/1.11.3/org.apache.nifi.record.script.ScriptedRecordSetWriter/index.html">ScriptedRecordSetWriter</a><br/><a
 
href="../../../nifi-parquet-nar/1.11.3/org.apache.nifi.parquet.ParquetRecordSetWriter/index.html">ParquetRecordSetWriter</a><br/><a
 
href="../../../nifi-record-serialization-services-nar/1.11.3/org.apache.nifi.xml.XMLRecordSetWriter/index.html">XMLRecordSetWriter</a></td><td
 id="description">Specifies the Controller Service to use for writing out the 
records.</td></tr><tr><td id="name"><strong>Del
 ivery Guarantee</strong></td><td id="default-value">0</td><td 
id="allowable-values"><ul><li>Best Effort <img 
src="../../../../../html/images/iconInfo.png" alt="Records are considered 
'transmitted successfully' after successfully writing the content to a Kafka 
node, without waiting for a response. This provides the best performance but 
may result in data loss." title="Records are considered 'transmitted 
successfully' after successfully writing the content to a Kafka node, without 
waiting for a response. This provides the best performance but may result in 
data loss."></img></li><li>Guarantee Single Node Delivery <img 
src="../../../../../html/images/iconInfo.png" alt="Records are considered 
'transmitted successfully' if the message is received by a single Kafka node, 
whether or not it is replicated. This is faster than &lt;Guarantee Replicated 
Delivery&gt; but can result in data loss if a Kafka node crashes." 
title="Records are considered 'transmitted successfully' if the message is r
 eceived by a single Kafka node, whether or not it is replicated. This is 
faster than &lt;Guarantee Replicated Delivery&gt; but can result in data loss 
if a Kafka node crashes."></img></li><li>Guarantee Replicated Delivery <img 
src="../../../../../html/images/iconInfo.png" alt="Records are considered 
'transmitted unsuccessfully' unless the message is replicated to the 
appropriate number of Kafka Nodes according to the Topic configuration." 
title="Records are considered 'transmitted unsuccessfully' unless the message 
is replicated to the appropriate number of Kafka Nodes according to the Topic 
configuration."></img></li></ul></td><td id="description">Specifies the 
requirement for guaranteeing that a message is sent to Kafka. Corresponds to 
Kafka's 'acks' property.</td></tr><tr><td id="name">Message Header 
Encoding</td><td id="default-value">UTF-8</td><td 
id="allowable-values"></td><td id="description">For any attribute that is added 
as a message header, as configured via the &lt;Attributes to Send as Headers&gt; property, this property indicates the Character 
Encoding to use for serializing the headers.</td></tr><tr><td 
id="name"><strong>Security Protocol</strong></td><td 
id="default-value">PLAINTEXT</td><td id="allowable-values"><ul><li>PLAINTEXT 
<img src="../../../../../html/images/iconInfo.png" alt="PLAINTEXT" 
title="PLAINTEXT"></img></li><li>SSL <img 
src="../../../../../html/images/iconInfo.png" alt="SSL" 
title="SSL"></img></li><li>SASL_PLAINTEXT <img 
src="../../../../../html/images/iconInfo.png" alt="SASL_PLAINTEXT" 
title="SASL_PLAINTEXT"></img></li><li>SASL_SSL <img 
src="../../../../../html/images/iconInfo.png" alt="SASL_SSL" 
title="SASL_SSL"></img></li></ul></td><td id="description">Protocol used to 
communicate with brokers. Corresponds to Kafka's 'security.protocol' 
property.</td></tr><tr><td id="name">Kerberos Credentials Service</td><td 
id="default-value"></td><td id="allowable-values"><strong>Controller Service 
API: </strong><br/>KerberosCredentialsService<br/><strong>Implementation: </strong><a 
href="../../../nifi-kerberos-credentials-service-nar/1.11.3/org.apache.nifi.kerberos.KeytabCredentialsService/index.html">KeytabCredentialsService</a></td><td
 id="description">Specifies the Kerberos Credentials Controller Service that 
should be used for authenticating with Kerberos</td></tr><tr><td 
id="name">Kerberos Service Name</td><td id="default-value"></td><td 
id="allowable-values"></td><td id="description">The service name that matches 
the primary name of the Kafka server configured in the broker JAAS file. This 
can be defined either in Kafka's JAAS config or in Kafka's config. Corresponds 
to Kafka's 'sasl.kerberos.service.name' property. It is ignored unless one of 
the SASL options of the &lt;Security Protocol&gt; is selected.<br/><strong>Supports 
Expression Language: true (will be evaluated using variable registry 
only)</strong></td></tr><tr><td id="name">SSL Context Service</td><td 
id="default-value"></td><td id="allowable-values"><strong
 >Controller Service API: 
 ></strong><br/>SSLContextService<br/><strong>Implementations: </strong><a 
 >href="../../../nifi-ssl-context-service-nar/1.11.3/org.apache.nifi.ssl.StandardSSLContextService/index.html">StandardSSLContextService</a><br/><a
 > 
 >href="../../../nifi-ssl-context-service-nar/1.11.3/org.apache.nifi.ssl.StandardRestrictedSSLContextService/index.html">StandardRestrictedSSLContextService</a></td><td
 > id="description">Specifies the SSL Context Service to use for communicating 
 >with Kafka.</td></tr><tr><td id="name"><strong>Max Request 
 >Size</strong></td><td id="default-value">1 MB</td><td 
 >id="allowable-values"></td><td id="description">The maximum size of a request 
 >in bytes. Corresponds to Kafka's 'max.request.size' property and defaults to 
 >1 MB (1048576).</td></tr><tr><td id="name"><strong>Acknowledgment Wait 
 >Time</strong></td><td id="default-value">5 secs</td><td 
 >id="allowable-values"></td><td id="description">After sending a message to 
 >Kafka, this indicates the amount of time
  that we are willing to wait for a response from Kafka. If Kafka does not 
acknowledge the message within this time period, the FlowFile will be routed to 
'failure'.</td></tr><tr><td id="name"><strong>Max Metadata Wait 
Time</strong></td><td id="default-value">5 sec</td><td 
id="allowable-values"></td><td id="description">The amount of time publisher 
will wait to obtain metadata or wait for the buffer to flush during the 'send' 
call before failing the entire 'send' call. Corresponds to Kafka's 
'max.block.ms' property<br/><strong>Supports Expression Language: true (will be 
evaluated using variable registry only)</strong></td></tr><tr><td 
id="name"><strong>Compression Type</strong></td><td 
id="default-value">none</td><td 
id="allowable-values"><ul><li>none</li><li>gzip</li><li>snappy</li><li>lz4</li></ul></td><td
 id="description">This parameter allows you to specify the compression codec 
for all data generated by this producer.</td></tr></table><h3>State management: 
</h3>This component does not store state.<h3>Restricted: </h3>This component is not 
restricted.<h3>System Resource Considerations:</h3>None specified.</body></html>
\ No newline at end of file

Added: 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-2-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.ConsumeKafkaRecord_2_0/additionalDetails.html
URL: 
http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-2-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.ConsumeKafkaRecord_2_0/additionalDetails.html?rev=1874478&view=auto
==============================================================================
--- 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-2-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.ConsumeKafkaRecord_2_0/additionalDetails.html
 (added)
+++ 
nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-2-0-nar/1.11.3/org.apache.nifi.processors.kafka.pubsub.ConsumeKafkaRecord_2_0/additionalDetails.html
 Tue Feb 25 07:28:36 2020
@@ -0,0 +1,205 @@
+<!DOCTYPE html>
+<html lang="en">
+    <!--
+      Licensed to the Apache Software Foundation (ASF) under one or more
+      contributor license agreements.  See the NOTICE file distributed with
+      this work for additional information regarding copyright ownership.
+      The ASF licenses this file to You under the Apache License, Version 2.0
+      (the "License"); you may not use this file except in compliance with
+      the License.  You may obtain a copy of the License at
+          http://www.apache.org/licenses/LICENSE-2.0
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
+    -->
+    <head>
+        <meta charset="utf-8" />
+        <title>ConsumeKafkaRecord</title>
+        <link rel="stylesheet" href="../../../../../css/component-usage.css" 
type="text/css" />
+    </head>
+
+    <body>
+        <h2>Description</h2>
+        <p>
+            This Processor polls <a href="http://kafka.apache.org/">Apache 
Kafka</a>
+            for data using the KafkaConsumer API available with Kafka 2.0. When a 
message is received
+            from Kafka, the message will be deserialized using the configured 
Record Reader, and then
+            written to a FlowFile by serializing the message with the 
configured Record Writer.
+        </p>
+
+
+        <h2>Security Configuration:</h2>
+        <p>
+            The Security Protocol property allows the user to specify the 
protocol for communicating
+            with the Kafka broker. The following sections describe each of the 
protocols in further detail.
+        </p>
+        <h3>PLAINTEXT</h3>
+        <p>
+            This option provides an unsecured connection to the broker, with 
no client authentication and no encryption.
+            In order to use this option the broker must be configured with a 
listener of the form:
+        <pre>
+    PLAINTEXT://host.name:port
+            </pre>
+        </p>
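+        <p>
+            For reference, a minimal sketch of the corresponding broker-side 
+            setting in the broker's server.properties (the host and port are 
+            placeholders, not defaults) would be:
+        <pre>
+    listeners=PLAINTEXT://host.name:9092
+            </pre>
+        </p>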
+        <h3>SSL</h3>
+        <p>
+            This option provides an encrypted connection to the broker, with 
optional client authentication. In order
+            to use this option the broker must be configured with a listener 
of the form:
+        <pre>
+    SSL://host.name:port
+            </pre>
+        In addition, the processor must have an SSL Context Service selected.
+        </p>
+        <p>
+            If the broker specifies ssl.client.auth=none, or does not specify 
ssl.client.auth, then the client will
+            not be required to present a certificate. In this case, the SSL 
Context Service selected may specify only
+            a truststore containing the public key of the certificate 
authority used to sign the broker's key.
+        </p>
+        <p>
+            If the broker specifies ssl.client.auth=required then the client 
will be required to present a certificate.
+            In this case, the SSL Context Service must also specify a keystore 
containing a client key, in addition to
+            a truststore as described above.
+        </p>
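+        <p>
+            As an illustration, a broker listener requiring client certificates 
+            might be configured in the broker's server.properties along these 
+            lines (the paths and password are placeholders, not defaults):
+        <pre>
+    listeners=SSL://host.name:9093
+    ssl.keystore.location=/path/to/broker.keystore.jks
+    ssl.keystore.password=broker-password
+    ssl.truststore.location=/path/to/broker.truststore.jks
+    ssl.truststore.password=broker-password
+    ssl.client.auth=required
+            </pre>
+        </p>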
+        <h3>SASL_PLAINTEXT</h3>
+        <p>
+            This option uses SASL with a PLAINTEXT transport layer to 
authenticate to the broker. In order to use this
+            option the broker must be configured with a listener of the form:
+        <pre>
+    SASL_PLAINTEXT://host.name:port
+            </pre>
+        In addition, the Kerberos Service Name must be specified in the 
processor.
+        </p>
+        <h4>SASL_PLAINTEXT - GSSAPI</h4>
+        <p>
+            If the SASL mechanism is GSSAPI, then the client must provide a 
JAAS configuration to authenticate.
+        </p>
+        <p>
+            An example of the JAAS config file would be the following:
+        <pre>
+    KafkaClient {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/path/to/nifi.keytab"
+        serviceName="kafka"
+        principal="[email protected]";
+    };
+            </pre>
+        <b>NOTE:</b> The serviceName in the JAAS file must match the Kerberos 
Service Name in the processor.
+        </p>
+        <p>
+        The JAAS configuration can be provided in either of the following ways:
+        <ol type="1">
+            <li>specify the java.security.auth.login.config system property in
+                NiFi's bootstrap.conf. This limits you to a single user 
credential across the cluster.</li>
+            <pre>
+                
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
+            </pre>
+            <li>add a user-defined property named 'sasl.jaas.config' in the 
processor configuration. This method allows multiple consumers with different 
user credentials, and gives the flexibility to consume from multiple Kafka 
clusters.</li>
+            <pre>
+                sasl.jaas.config : 
com.sun.security.auth.module.Krb5LoginModule required
+                                        useKeyTab=true
+                                        storeKey=true
+                                        keyTab="/path/to/nifi.keytab"
+                                        serviceName="kafka"
+                                        principal="[email protected]";
+            </pre>
+        </ol>
+        </p>
+        <p>
+            Alternatively, the JAAS
+            configuration when using GSSAPI can be provided by specifying the 
Kerberos Principal and Kerberos Keytab
+            directly in the processor properties. This will dynamically create 
a JAAS configuration like above, and
+            will take precedence over the java.security.auth.login.config 
system property.
+        </p>
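+        <p>
+            For illustration, with Kerberos Principal set to "[email protected]" 
+            and Kerberos Keytab set to "/path/to/nifi.keytab", the dynamically 
+            created configuration can be expected to resemble the example above:
+        <pre>
+    com.sun.security.auth.module.Krb5LoginModule required
+    useKeyTab=true
+    keyTab="/path/to/nifi.keytab"
+    serviceName="kafka"
+    principal="[email protected]";
+            </pre>
+        </p>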
+        <h4>SASL_PLAINTEXT - PLAIN</h4>
+        <p>
+            If the SASL mechanism is PLAIN, then the client must provide a JAAS 
configuration to authenticate, but
+            the JAAS configuration must use Kafka's PlainLoginModule. An 
example of the JAAS config file would
+            be the following:
+        <pre>
+    KafkaClient {
+      org.apache.kafka.common.security.plain.PlainLoginModule required
+      username="nifi"
+      password="nifi-password";
+    };
+            </pre>
+        The JAAS configuration can be provided in either of the following ways:
+        <ol type="1">
+            <li>specify the java.security.auth.login.config system property in
+                NiFi's bootstrap.conf. This limits you to a single user 
credential across the cluster.</li>
+            <pre>
+                
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
+            </pre>
+            <li>add a user-defined property named 'sasl.jaas.config' in the 
processor configuration. This method allows multiple consumers with different 
user credentials, and gives the flexibility to consume from multiple Kafka 
clusters.</li>
+            <pre>
+                sasl.jaas.config : 
org.apache.kafka.common.security.plain.PlainLoginModule required
+                                        username="nifi"
+                                        password="nifi-password";
+            </pre>
+            <b>NOTE:</b> The dynamic properties of this processor are not 
secured and as a result the password entered when utilizing sasl.jaas.config 
will be stored in the flow.xml.gz file in plain-text, and will be saved to NiFi 
Registry if using versioned flows.
+        </ol>
+        </p>
+        <p>
+            <b>NOTE:</b> It is not recommended to use a SASL mechanism of 
PLAIN with SASL_PLAINTEXT, as it would transmit
+            the username and password unencrypted.
+        </p>
+        <p>
+            <b>NOTE:</b> The Kerberos Service Name is not required for SASL 
mechanism of PLAIN. However, the processor will warn that this property must 
be set to a non-empty string. Any placeholder value, such as 
"null", may be used.
+        </p>
+        <p>
+            <b>NOTE:</b> Using the PlainLoginModule will cause it to be 
registered in the JVM's static list of Providers, making
+            it visible to components in other NARs that may access the 
providers. There is currently a known issue
+            where Kafka processors using the PlainLoginModule will cause HDFS 
processors with Kerberos to no longer work.
+        </p>
+        <h4>SASL_PLAINTEXT - SCRAM</h4>
+        <p>
+            If the SASL mechanism is SCRAM, then the client must provide a JAAS 
configuration to authenticate, but
+            the JAAS configuration must use Kafka's ScramLoginModule. Ensure 
that you add a user-defined property named 'sasl.mechanism', set to 'SCRAM-SHA-256' 
or 'SCRAM-SHA-512' to match the Kafka broker configuration; an example follows 
the JAAS config below. An example of the JAAS 
config file would
+            be the following:
+        <pre>
+    KafkaClient {
+      org.apache.kafka.common.security.scram.ScramLoginModule required
+      username="nifi"
+      password="nifi-password";
+    };
+        </pre>
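+        For example, with a broker configured for SCRAM-SHA-256, the mechanism 
+        can be supplied as a user-defined property alongside sasl.jaas.config 
+        (the value shown is illustrative and must match the broker):
+            <pre>
+                sasl.mechanism : SCRAM-SHA-256
+            </pre>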
+        The JAAS configuration can be provided in either of the following ways:
+        <ol type="1">
+        <li>specify the java.security.auth.login.config system property in
+            NiFi's bootstrap.conf. This limits you to a single user 
credential across the cluster.</li>
+        <pre>
+                
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
+            </pre>
+        <li>add a user-defined property named 'sasl.jaas.config' in the 
processor configuration. This method allows multiple consumers with different 
user credentials, and gives the flexibility to consume from multiple Kafka 
clusters.</li>
+            <pre>
+                sasl.jaas.config : 
org.apache.kafka.common.security.scram.ScramLoginModule required
+                                        username="nifi"
+                                        password="nifi-password";
+            </pre>
+            <b>NOTE:</b> The dynamic properties of this processor are not 
secured and as a result the password entered when utilizing sasl.jaas.config 
will be stored in the flow.xml.gz file in plain-text, and will be saved to NiFi 
Registry if using versioned flows.
+        </ol>
+        <p>
+        <b>NOTE:</b> The Kerberos Service Name is not required for SASL 
mechanism of SCRAM-SHA-256 or SCRAM-SHA-512. However, the processor will warn 
that this property must be set to a non-empty string. Any placeholder value, 
such as "null", may be used.
+        </p>
+        <h3>SASL_SSL</h3>
+        <p>
+            This option uses SASL with an SSL/TLS transport layer to 
authenticate to the broker. In order to use this
+            option the broker must be configured with a listener of the form:
+        <pre>
+    SASL_SSL://host.name:port
+            </pre>
+        </p>
+        <p>
+            See the SASL_PLAINTEXT section for a description of how to provide 
the proper JAAS configuration
+            depending on the SASL mechanism (GSSAPI or PLAIN).
+        </p>
+        <p>
+            See the SSL section for a description of how to configure the SSL 
Context Service based on the
+            ssl.client.auth property.
+        </p>
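+        <p>
+            Putting this together, a typical SASL_SSL setup using GSSAPI might 
+            configure the processor along these lines (a sketch; the values 
+            shown are placeholders, not defaults):
+        <pre>
+    Security Protocol     : SASL_SSL
+    Kerberos Service Name : kafka
+    Kerberos Principal    : [email protected]
+    Kerberos Keytab       : /path/to/nifi.keytab
+    SSL Context Service   : a configured StandardSSLContextService
+            </pre>
+        </p>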
+
+    </body>
+</html>

