Modified: websites/staging/flume/trunk/content/FlumeUserGuide.html
==============================================================================
--- websites/staging/flume/trunk/content/FlumeUserGuide.html (original)
+++ websites/staging/flume/trunk/content/FlumeUserGuide.html Tue Jan 8
13:13:55 2019
@@ -7,7 +7,7 @@
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
- <title>Flume 1.8.0 User Guide — Apache Flume</title>
+ <title>Flume 1.9.0 User Guide — Apache Flume</title>
<link rel="stylesheet" href="_static/flume.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
@@ -26,7 +26,7 @@
<script type="text/javascript" src="_static/doctools.js"></script>
<link rel="top" title="Apache Flume" href="index.html" />
<link rel="up" title="Documentation" href="documentation.html" />
- <link rel="next" title="Flume 1.8.0 Developer Guide"
href="FlumeDeveloperGuide.html" />
+ <link rel="next" title="Flume 1.9.0 Developer Guide"
href="FlumeDeveloperGuide.html" />
<link rel="prev" title="Documentation" href="documentation.html" />
</head>
<body>
@@ -37,6 +37,7 @@
<div class="logo">
<a href="index.html">
<img class="logo" src="_static/flume-logo.png" alt="Logo"/>
+ </a>
</div>
</td>
<td width="2%">
@@ -59,8 +60,8 @@
<div class="bodywrapper">
<div class="body">
- <div class="section" id="flume-1-8-0-user-guide">
-<h1>Flume 1.8.0 User Guide<a class="headerlink" href="#flume-1-8-0-user-guide"
title="Permalink to this headline">¶</a></h1>
+ <div class="section" id="flume-1-9-0-user-guide">
+<h1>Flume 1.9.0 User Guide<a class="headerlink" href="#flume-1-9-0-user-guide"
title="Permalink to this headline">¶</a></h1>
<div class="section" id="introduction">
<h2>Introduction<a class="headerlink" href="#introduction" title="Permalink to
this headline">¶</a></h2>
<div class="section" id="overview">
@@ -73,13 +74,6 @@ Since data sources are customizable, Flu
of event data including but not limited to network traffic data,
social-media-generated data,
email messages and pretty much any data source possible.</p>
<p>Apache Flume is a top level project at the Apache Software Foundation.</p>
-<p>There are currently two release code lines available, versions 0.9.x and
1.x.</p>
-<p>Documentation for the 0.9.x track is available at
-<a class="reference external"
href="http://archive.cloudera.com/cdh/3/flume/UserGuide/">the Flume 0.9.x User
Guide</a>.</p>
-<p>This documentation applies to the 1.4.x track.</p>
-<p>New and existing users are encouraged to use the 1.x releases so as to
-leverage the performance improvements and configuration flexibilities available
-in the latest architecture.</p>
</div>
<div class="section" id="system-requirements">
<h3>System Requirements<a class="headerlink" href="#system-requirements"
title="Permalink to this headline">¶</a></h3>
@@ -715,6 +709,176 @@ specified. If no channels are designated
the selector will attempt to write the events to the optional channels. Any
failures are simply ignored in that case.</p>
</div>
+<div class="section" id="ssl-tls-support">
+<h3>SSL/TLS support<a class="headerlink" href="#ssl-tls-support"
title="Permalink to this headline">¶</a></h3>
+<p>Several Flume components support the SSL/TLS protocols in order to
communicate with other systems
+securely.</p>
+<table border="1" class="docutils">
+<colgroup>
+<col width="55%" />
+<col width="45%" />
+</colgroup>
+<thead valign="bottom">
+<tr class="row-odd"><th class="head">Component</th>
+<th class="head">SSL server or client</th>
+</tr>
+</thead>
+<tbody valign="top">
+<tr class="row-even"><td>Avro Source</td>
+<td>server</td>
+</tr>
+<tr class="row-odd"><td>Avro Sink</td>
+<td>client</td>
+</tr>
+<tr class="row-even"><td>Thrift Source</td>
+<td>server</td>
+</tr>
+<tr class="row-odd"><td>Thrift Sink</td>
+<td>client</td>
+</tr>
+<tr class="row-even"><td>Kafka Source</td>
+<td>client</td>
+</tr>
+<tr class="row-odd"><td>Kafka Channel</td>
+<td>client</td>
+</tr>
+<tr class="row-even"><td>Kafka Sink</td>
+<td>client</td>
+</tr>
+<tr class="row-odd"><td>HTTP Source</td>
+<td>server</td>
+</tr>
+<tr class="row-even"><td>JMS Source</td>
+<td>client</td>
+</tr>
+<tr class="row-odd"><td>Syslog TCP Source</td>
+<td>server</td>
+</tr>
+<tr class="row-even"><td>Multiport Syslog TCP Source</td>
+<td>server</td>
+</tr>
+</tbody>
+</table>
+<p>The SSL-compatible components have several configuration parameters to set up SSL,
+such as an enable-SSL flag, keystore / truststore parameters (location, password, type)
+and additional SSL parameters (e.g. disabled protocols).</p>
+<p>Enabling SSL for a component is always specified at component level in the agent configuration file,
+so some components may be configured to use SSL while others are not (even components of the same type).</p>
+<p>The keystore / truststore setup can be specified at component level or
globally.</p>
+<p>With component-level setup, the keystore / truststore is configured in the agent
+configuration file through component-specific parameters. The advantage of this method is that
+components can use different keystores (if needed). The disadvantage is that the
+keystore parameters must be repeated for each component in the agent configuration file.
+The component-level setup is optional, but if defined, it takes precedence over
+the global parameters.</p>
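+<p>As a minimal sketch, component-level SSL for a hypothetical Avro source <tt class="docutils literal"><span class="pre">r1</span></tt>
+of agent <tt class="docutils literal"><span class="pre">a1</span></tt> could look like this (the path and password are
+placeholders; the parameter names are the Avro source&#8217;s own SSL parameters):</p>
+<div class="highlight-properties"><div class="highlight"><pre><span class="na">a1.sources.r1.ssl</span> <span class="o">=</span> <span class="s">true</span>
+<span class="na">a1.sources.r1.keystore</span> <span class="o">=</span> <span class="s">/path/to/keystore.jks</span>
+<span class="na">a1.sources.r1.keystore-password</span> <span class="o">=</span> <span class="s">password</span>
+</pre></div>
+</div>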
+<p>With the global setup, it is enough to define the keystore / truststore parameters once
+and use the same settings for all components, which means a smaller and more centralized configuration.</p>
+<p>The global setup can be configured either through system properties or
through environment variables.</p>
+<table border="1" class="docutils">
+<colgroup>
+<col width="22%" />
+<col width="20%" />
+<col width="59%" />
+</colgroup>
+<thead valign="bottom">
+<tr class="row-odd"><th class="head">System property</th>
+<th class="head">Environment variable</th>
+<th class="head">Description</th>
+</tr>
+</thead>
+<tbody valign="top">
+<tr class="row-even"><td>javax.net.ssl.keyStore</td>
+<td>FLUME_SSL_KEYSTORE_PATH</td>
+<td>Keystore location</td>
+</tr>
+<tr class="row-odd"><td>javax.net.ssl.keyStorePassword</td>
+<td>FLUME_SSL_KEYSTORE_PASSWORD</td>
+<td>Keystore password</td>
+</tr>
+<tr class="row-even"><td>javax.net.ssl.keyStoreType</td>
+<td>FLUME_SSL_KEYSTORE_TYPE</td>
+<td>Keystore type (by default JKS)</td>
+</tr>
+<tr class="row-odd"><td>javax.net.ssl.trustStore</td>
+<td>FLUME_SSL_TRUSTSTORE_PATH</td>
+<td>Truststore location</td>
+</tr>
+<tr class="row-even"><td>javax.net.ssl.trustStorePassword</td>
+<td>FLUME_SSL_TRUSTSTORE_PASSWORD</td>
+<td>Truststore password</td>
+</tr>
+<tr class="row-odd"><td>javax.net.ssl.trustStoreType</td>
+<td>FLUME_SSL_TRUSTSTORE_TYPE</td>
+<td>Truststore type (by default JKS)</td>
+</tr>
+<tr class="row-even"><td>flume.ssl.include.protocols</td>
+<td>FLUME_SSL_INCLUDE_PROTOCOLS</td>
+<td>Protocols to include when calculating enabled protocols. A comma (,)
separated list.
+Excluded protocols will be excluded from this list if provided.</td>
+</tr>
+<tr class="row-odd"><td>flume.ssl.exclude.protocols</td>
+<td>FLUME_SSL_EXCLUDE_PROTOCOLS</td>
+<td>Protocols to exclude when calculating enabled protocols. A comma (,)
separated list.</td>
+</tr>
+<tr class="row-even"><td>flume.ssl.include.cipherSuites</td>
+<td>FLUME_SSL_INCLUDE_CIPHERSUITES</td>
+<td>Cipher suites to include when calculating enabled cipher suites. A comma
(,) separated list.
+Excluded cipher suites will be excluded from this list if provided.</td>
+</tr>
+<tr class="row-odd"><td>flume.ssl.exclude.cipherSuites</td>
+<td>FLUME_SSL_EXCLUDE_CIPHERSUITES</td>
+<td>Cipher suites to exclude when calculating enabled cipher suites. A comma
(,) separated list.</td>
+</tr>
+</tbody>
+</table>
+<p>The SSL system properties can either be passed on the command line or set via the <tt class="docutils literal"><span class="pre">JAVA_OPTS</span></tt>
+environment variable in <em>conf/flume-env.sh</em>. (Using the command line is inadvisable, though, because
+commands including the passwords will be saved to the command history.)</p>
+<div class="highlight-properties"><div class="highlight"><pre><span
class="na">export JAVA_OPTS</span><span class="o">=</span><span
class="s">"$JAVA_OPTS
-Djavax.net.ssl.keyStore=/path/to/keystore.jks"</span>
+<span class="na">export JAVA_OPTS</span><span class="o">=</span><span
class="s">"$JAVA_OPTS
-Djavax.net.ssl.keyStorePassword=password"</span>
+</pre></div>
+</div>
+<p>Flume uses the system properties defined in JSSE (Java Secure Socket Extension), so this is
+a standard way of setting up SSL. On the other hand, specifying passwords in system properties
+means that the passwords can be seen in the process list. For cases where this is not acceptable,
+it is also possible to define the parameters in environment variables. In this case Flume initializes
+the JSSE system properties from the corresponding environment variables internally.</p>
+<p>The SSL environment variables can either be set in the shell environment before
+starting Flume or in <em>conf/flume-env.sh</em>. (Using the command line is inadvisable, though, because
+commands including the passwords will be saved to the command history.)</p>
+<div class="highlight-properties"><div class="highlight"><pre><span
class="na">export FLUME_SSL_KEYSTORE_PATH</span><span class="o">=</span><span
class="s">/path/to/keystore.jks</span>
+<span class="na">export FLUME_SSL_KEYSTORE_PASSWORD</span><span
class="o">=</span><span class="s">password</span>
+</pre></div>
+</div>
+<p><strong>Please note:</strong></p>
+<ul class="simple">
+<li>SSL must be enabled at component level. Specifying the global SSL
parameters alone will not
+have any effect.</li>
+<li>If the global SSL parameters are specified at multiple levels, the
priority is the
+following (from higher to lower):<ul>
+<li>component parameters in agent config</li>
+<li>system properties</li>
+<li>environment variables</li>
+</ul>
+</li>
+<li>If SSL is enabled for a component, but the SSL parameters are not
specified in any of the ways
+described above, then<ul>
+<li>in case of keystores: configuration error</li>
+<li>in case of truststores: the default truststore will be used (<tt
class="docutils literal"><span class="pre">jssecacerts</span></tt> / <tt
class="docutils literal"><span class="pre">cacerts</span></tt> in Oracle
JDK)</li>
+</ul>
+</li>
+<li>The truststore password is optional in all cases. If not specified, then no integrity check will be
+performed on the truststore when it is opened by the JDK.</li>
+</ul>
+</div>
+<div class="section"
id="source-and-sink-batch-sizes-and-channel-transaction-capacities">
+<h3>Source and sink batch sizes and channel transaction capacities<a
class="headerlink"
href="#source-and-sink-batch-sizes-and-channel-transaction-capacities"
title="Permalink to this headline">¶</a></h3>
+<p>Sources and sinks can have a batch size parameter that determines the
maximum number of events they
+process in one batch. This happens within a channel transaction that has an
upper limit called
+transaction capacity. Batch size must be smaller than the channel’s
transaction capacity.
+There is an explicit check to prevent incompatible settings. This check happens
+whenever the configuration is read.</p>
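+<p>For example, a compatible pairing for a hypothetical memory channel <tt class="docutils literal"><span class="pre">c1</span></tt>
+and sink <tt class="docutils literal"><span class="pre">k1</span></tt> (the values are placeholders; the batch size stays
+below the transaction capacity):</p>
+<div class="highlight-properties"><div class="highlight"><pre><span class="na">a1.channels.c1.transactionCapacity</span> <span class="o">=</span> <span class="s">1000</span>
+<span class="na">a1.sinks.k1.batchSize</span> <span class="o">=</span> <span class="s">100</span>
+</pre></div>
+</div>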
+</div>
<div class="section" id="flume-sources">
<h3>Flume Sources<a class="headerlink" href="#flume-sources" title="Permalink
to this headline">¶</a></h3>
<div class="section" id="avro-source">
@@ -725,9 +889,9 @@ it can create tiered collection topologi
Required properties are in <strong>bold</strong>.</p>
<table border="1" class="docutils">
<colgroup>
+<col width="14%" />
<col width="11%" />
-<col width="10%" />
-<col width="78%" />
+<col width="75%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -778,29 +942,55 @@ Required properties are in <strong>bold<
</tr>
<tr class="row-even"><td>ssl</td>
<td>false</td>
-<td>Set this to true to enable SSL encryption. You must also specify a
“keystore” and a “keystore-password”.</td>
+<td>Set this to true to enable SSL encryption. If SSL is enabled,
+you must also specify a “keystore” and a
“keystore-password”,
+either through component level parameters (see below)
+or as global SSL parameters (see <a class="reference internal"
href="#ssl-tls-support">SSL/TLS support</a> section).</td>
</tr>
<tr class="row-odd"><td>keystore</td>
<td>–</td>
-<td>This is the path to a Java keystore file. Required for SSL.</td>
+<td>This is the path to a Java keystore file.
+If not specified here, then the global keystore will be used
+(if defined, otherwise configuration error).</td>
</tr>
<tr class="row-even"><td>keystore-password</td>
<td>–</td>
-<td>The password for the Java keystore. Required for SSL.</td>
+<td>The password for the Java keystore.
+If not specified here, then the global keystore password will be used
+(if defined, otherwise configuration error).</td>
</tr>
<tr class="row-odd"><td>keystore-type</td>
<td>JKS</td>
-<td>The type of the Java keystore. This can be “JKS” or
“PKCS12”.</td>
+<td>The type of the Java keystore. This can be “JKS” or
“PKCS12”.
+If not specified here, then the global keystore type will be used
+(if defined, otherwise the default is JKS).</td>
</tr>
<tr class="row-even"><td>exclude-protocols</td>
<td>SSLv3</td>
-<td>Space-separated list of SSL/TLS protocols to exclude. SSLv3 will always be
excluded in addition to the protocols specified.</td>
+<td>Space-separated list of SSL/TLS protocols to exclude.
+SSLv3 will always be excluded in addition to the protocols specified.</td>
+</tr>
+<tr class="row-odd"><td>include-protocols</td>
+<td>–</td>
+<td>Space-separated list of SSL/TLS protocols to include.
+The enabled protocols will be the included protocols without the excluded
protocols.
+If included-protocols is empty, every supported protocol is included.</td>
</tr>
-<tr class="row-odd"><td>ipFilter</td>
+<tr class="row-even"><td>exclude-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to exclude.</td>
+</tr>
+<tr class="row-odd"><td>include-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to include.
+The enabled cipher suites will be the included cipher suites without the
excluded cipher suites.
+If included-cipher-suites is empty, every supported cipher suite is included.</td>
+</tr>
+<tr class="row-even"><td>ipFilter</td>
<td>false</td>
<td>Set this to true to enable ipFiltering for netty</td>
</tr>
-<tr class="row-even"><td>ipFilterRules</td>
+<tr class="row-odd"><td>ipFilterRules</td>
<td>–</td>
<td>Define N netty ipFilter pattern rules with this config.</td>
</tr>
@@ -836,7 +1026,7 @@ Thrift source to authenticate to the ker
Required properties are in <strong>bold</strong>.</p>
<table border="1" class="docutils">
<colgroup>
-<col width="5%" />
+<col width="6%" />
<col width="3%" />
<col width="91%" />
</colgroup>
@@ -885,33 +1075,58 @@ Required properties are in <strong>bold<
</tr>
<tr class="row-odd"><td>ssl</td>
<td>false</td>
-<td>Set this to true to enable SSL encryption. You must also specify a
“keystore” and a “keystore-password”.</td>
+<td>Set this to true to enable SSL encryption. If SSL is enabled,
+you must also specify a “keystore” and a
“keystore-password”,
+either through component level parameters (see below)
+or as global SSL parameters (see <a class="reference internal"
href="#ssl-tls-support">SSL/TLS support</a> section)</td>
</tr>
<tr class="row-even"><td>keystore</td>
<td>–</td>
-<td>This is the path to a Java keystore file. Required for SSL.</td>
+<td>This is the path to a Java keystore file.
+If not specified here, then the global keystore will be used
+(if defined, otherwise configuration error).</td>
</tr>
<tr class="row-odd"><td>keystore-password</td>
<td>–</td>
-<td>The password for the Java keystore. Required for SSL.</td>
+<td>The password for the Java keystore.
+If not specified here, then the global keystore password will be used
+(if defined, otherwise configuration error).</td>
</tr>
<tr class="row-even"><td>keystore-type</td>
<td>JKS</td>
-<td>The type of the Java keystore. This can be “JKS” or
“PKCS12”.</td>
+<td>The type of the Java keystore. This can be “JKS” or
“PKCS12”.
+If not specified here, then the global keystore type will be used
+(if defined, otherwise the default is JKS).</td>
</tr>
<tr class="row-odd"><td>exclude-protocols</td>
<td>SSLv3</td>
-<td>Space-separated list of SSL/TLS protocols to exclude. SSLv3 will always be
excluded in addition to the protocols specified.</td>
+<td>Space-separated list of SSL/TLS protocols to exclude.
+SSLv3 will always be excluded in addition to the protocols specified.</td>
+</tr>
+<tr class="row-even"><td>include-protocols</td>
+<td>–</td>
+<td>Space-separated list of SSL/TLS protocols to include.
+The enabled protocols will be the included protocols without the excluded
protocols.
+If included-protocols is empty, every supported protocol is included.</td>
+</tr>
+<tr class="row-odd"><td>exclude-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to exclude.</td>
+</tr>
+<tr class="row-even"><td>include-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to include.
+The enabled cipher suites will be the included cipher suites without the
excluded cipher suites.</td>
</tr>
-<tr class="row-even"><td>kerberos</td>
+<tr class="row-odd"><td>kerberos</td>
<td>false</td>
<td>Set to true to enable kerberos authentication. In kerberos mode,
agent-principal and agent-keytab are required for successful authentication.
The Thrift source in secure mode, will accept connections only from Thrift
clients that have kerberos enabled and are successfully authenticated to the
kerberos KDC.</td>
</tr>
-<tr class="row-odd"><td>agent-principal</td>
+<tr class="row-even"><td>agent-principal</td>
<td>–</td>
<td>The kerberos principal used by the Thrift Source to authenticate to the
kerberos KDC.</td>
</tr>
-<tr class="row-even"><td>agent-keytab</td>
+<tr class="row-odd"><td>agent-keytab</td>
<td>&#8211;</td>
<td>The keytab location used by the Thrift Source in combination with the
agent-principal to authenticate to the kerberos KDC.</td>
</tr>
@@ -1139,8 +1354,8 @@ Required for durable subscriptions.</td>
</tr>
</tbody>
</table>
-<div class="section" id="converter">
-<h5>Converter<a class="headerlink" href="#converter" title="Permalink to this
headline">¶</a></h5>
+<div class="section" id="jms-message-converter">
+<h5>JMS message converter<a class="headerlink" href="#jms-message-converter"
title="Permalink to this headline">¶</a></h5>
<p>The JMS source allows pluggable converters, though it’s likely the
default converter will work
for most purposes. The default converter is able to convert Bytes, Text, and
Object messages
to FlumeEvents. In all cases, the properties in the message are added as
headers to the
@@ -1169,6 +1384,39 @@ the resulting array is copied to the bod
</pre></div>
</div>
</div>
+<div class="section" id="ssl-and-jms-source">
+<h5>SSL and JMS Source<a class="headerlink" href="#ssl-and-jms-source"
title="Permalink to this headline">¶</a></h5>
+<p>JMS client implementations typically support configuring SSL/TLS via Java system properties defined by JSSE
+(Java Secure Socket Extension). By specifying these system properties for Flume&#8217;s JVM, the
+JMS Source (or, more precisely, the JMS client implementation used by the JMS Source) can connect
+to the JMS server through SSL (of course, only when the JMS server has also been set up to use SSL).
+It should work with any JMS provider and has been tested with ActiveMQ, IBM MQ and Oracle WebLogic.</p>
+<p>The following sections describe the SSL configuration steps needed on the Flume side only.
+More detailed descriptions of the server-side setup of the different JMS providers, as well as
+full working configuration examples, can be found on the Flume Wiki.</p>
+<p><strong>SSL transport / server authentication:</strong></p>
+<p>If the JMS server uses a self-signed certificate or its certificate is signed by a
+non-trusted CA (e.g. the company&#8217;s own CA), then a truststore (containing the right
+certificate) needs to be set up and passed to Flume. This can be done via
+the global SSL parameters. For more details about the global SSL setup, see
+the <a class="reference internal" href="#ssl-tls-support">SSL/TLS support</a> section.</p>
+<p>Some JMS providers require SSL-specific JNDI Initial Context Factory and/or Provider URL
+settings when using SSL (e.g. ActiveMQ uses the ssl:// URL prefix instead of tcp://).
+In this case the source properties (<tt class="docutils literal"><span class="pre">initialContextFactory</span></tt>
+and/or <tt class="docutils literal"><span class="pre">providerURL</span></tt>) have to be adjusted in the agent
+config file.</p>
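+<p>For example, for ActiveMQ the provider URL of a hypothetical JMS source <tt class="docutils literal"><span class="pre">r1</span></tt>
+might be adjusted as follows (host and port are placeholders):</p>
+<div class="highlight-properties"><div class="highlight"><pre><span class="na">a1.sources.r1.providerURL</span> <span class="o">=</span> <span class="s">ssl://activemq-host:61617</span>
+</pre></div>
+</div>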
+<p><strong>Client certificate authentication (two-way SSL):</strong></p>
+<p>JMS Source can authenticate to the JMS server through client certificate
authentication instead of the usual
+user/password login (when SSL is used and the JMS server is configured to
accept this kind of authentication).</p>
+<p>The keystore containing Flume’s key used for the authentication needs
to be configured via the global SSL parameters
+again. For more details about the global SSL setup, see the <a
class="reference internal" href="#ssl-tls-support">SSL/TLS support</a>
section.</p>
+<p>The keystore should contain only one key (if multiple keys are present,
then the first one will be used).
+The key password must be the same as the keystore password.</p>
+<p>With client certificate authentication, there is no need to specify the
+<tt class="docutils literal"><span class="pre">userName</span></tt> / <tt class="docutils literal"><span class="pre">passwordFile</span></tt> properties
+for the JMS Source in the Flume agent config file.</p>
+<p><strong>Please note:</strong></p>
+<p>Unlike for other components, there are no component-level SSL configuration parameters
+for the JMS Source, and no enable-SSL flag either.
+SSL setup is controlled by the JNDI / Provider URL settings (ultimately the JMS server settings)
+and by the presence / absence of the truststore / keystore.</p>
+</div>
</div>
<div class="section" id="spooling-directory-source">
<h4>Spooling Directory Source<a class="headerlink"
href="#spooling-directory-source" title="Permalink to this headline">¶</a></h4>
@@ -1178,7 +1426,8 @@ This source will watch the specified dir
events out of new files as they appear.
The event parsing logic is pluggable.
After a given file has been fully read
-into the channel, it is renamed to indicate completion (or optionally
deleted).</p>
+into the channel, completion is by default indicated by renaming the file; alternatively the file
+can be deleted, or the trackerDir can be used to keep track of processed files.</p>
<p>Unlike the Exec source, this source is reliable and will not miss data,
even if
Flume is restarted or killed. In exchange for this reliability, only immutable,
uniquely-named files must be dropped into the spooling directory. Flume tries
@@ -1263,7 +1512,15 @@ the file is ignored.</td>
<td>Directory to store metadata related to processing of files.
If this path is not an absolute path, then it is interpreted as relative to
the spoolDir.</td>
</tr>
-<tr class="row-even"><td>consumeOrder</td>
+<tr class="row-even"><td>trackingPolicy</td>
+<td>rename</td>
+<td>The tracking policy defines how file processing is tracked. It can be
“rename” or
+“tracker_dir”. This parameter is only effective if the
deletePolicy is “never”.
+&#8220;rename&#8221; - After a file has been processed it is renamed according to the fileSuffix parameter.
+&#8220;tracker_dir&#8221; - Files are not renamed; instead, a new empty file is created in the trackerDir.
+The tracker file name is derived from the ingested one plus the fileSuffix.</td>
+</tr>
+<tr class="row-odd"><td>consumeOrder</td>
<td>oldest</td>
<td>In which order files in the spooling directory will be consumed <tt
class="docutils literal"><span class="pre">oldest</span></tt>,
<tt class="docutils literal"><span class="pre">youngest</span></tt> and <tt
class="docutils literal"><span class="pre">random</span></tt>. In case of <tt
class="docutils literal"><span class="pre">oldest</span></tt> and <tt
class="docutils literal"><span class="pre">youngest</span></tt>, the last
modified
@@ -1274,30 +1531,30 @@ directory will be scanned to pick the ol
are a large number of files, while using <tt class="docutils literal"><span
class="pre">random</span></tt> may cause old files to be consumed
very late if new files keep coming in the spooling directory.</td>
</tr>
-<tr class="row-odd"><td>pollDelay</td>
+<tr class="row-even"><td>pollDelay</td>
<td>500</td>
<td>Delay (in milliseconds) used when polling for new files.</td>
</tr>
-<tr class="row-even"><td>recursiveDirectorySearch</td>
+<tr class="row-odd"><td>recursiveDirectorySearch</td>
<td>false</td>
<td>Whether to monitor sub directories for new files to read.</td>
</tr>
-<tr class="row-odd"><td>maxBackoff</td>
+<tr class="row-even"><td>maxBackoff</td>
<td>4000</td>
<td>The maximum time (in millis) to wait between consecutive attempts to
write to the channel(s) if the channel is full. The source will start at
a low backoff and increase it exponentially each time the channel throws a
ChannelException, up to the value specified by this parameter.</td>
</tr>
-<tr class="row-even"><td>batchSize</td>
+<tr class="row-odd"><td>batchSize</td>
<td>100</td>
<td>Granularity at which to batch transfer to the channel</td>
</tr>
-<tr class="row-odd"><td>inputCharset</td>
+<tr class="row-even"><td>inputCharset</td>
<td>UTF-8</td>
<td>Character set used by deserializers that treat the input file as text.</td>
</tr>
-<tr class="row-even"><td>decodeErrorPolicy</td>
+<tr class="row-odd"><td>decodeErrorPolicy</td>
<td><tt class="docutils literal"><span class="pre">FAIL</span></tt></td>
<td>What to do when we see a non-decodable character in the input file.
<tt class="docutils literal"><span class="pre">FAIL</span></tt>: Throw an
exception and fail to parse the file.
@@ -1305,37 +1562,37 @@ ChannelException, upto the value specifi
typically Unicode U+FFFD.
<tt class="docutils literal"><span class="pre">IGNORE</span></tt>: Drop the
unparseable character sequence.</td>
</tr>
-<tr class="row-odd"><td>deserializer</td>
+<tr class="row-even"><td>deserializer</td>
<td><tt class="docutils literal"><span class="pre">LINE</span></tt></td>
<td>Specify the deserializer used to parse the file into events.
Defaults to parsing each line as an event. The class specified must implement
<tt class="docutils literal"><span
class="pre">EventDeserializer.Builder</span></tt>.</td>
</tr>
-<tr class="row-even"><td>deserializer.*</td>
+<tr class="row-odd"><td>deserializer.*</td>
<td> </td>
<td>Varies per event deserializer.</td>
</tr>
-<tr class="row-odd"><td>bufferMaxLines</td>
+<tr class="row-even"><td>bufferMaxLines</td>
<td>–</td>
<td>(Obsolete) This option is now ignored.</td>
</tr>
-<tr class="row-even"><td>bufferMaxLineLength</td>
+<tr class="row-odd"><td>bufferMaxLineLength</td>
<td>5000</td>
<td>(Deprecated) Maximum length of a line in the commit buffer. Use
deserializer.maxLineLength instead.</td>
</tr>
-<tr class="row-odd"><td>selector.type</td>
+<tr class="row-even"><td>selector.type</td>
<td>replicating</td>
<td>replicating or multiplexing</td>
</tr>
-<tr class="row-even"><td>selector.*</td>
+<tr class="row-odd"><td>selector.*</td>
<td> </td>
<td>Depends on the selector.type value</td>
</tr>
-<tr class="row-odd"><td>interceptors</td>
+<tr class="row-even"><td>interceptors</td>
<td>–</td>
<td>Space-separated list of interceptors</td>
</tr>
-<tr class="row-even"><td>interceptors.*</td>
+<tr class="row-odd"><td>interceptors.*</td>
<td> </td>
<td> </td>
</tr>
@@ -1524,26 +1781,33 @@ Currently this source does not support t
<td>100</td>
<td>Max number of lines to read and send to the channel at a time. Using the
default is usually fine.</td>
</tr>
-<tr class="row-odd"><td>backoffSleepIncrement</td>
+<tr class="row-odd"><td>maxBatchCount</td>
+<td>Long.MAX_VALUE</td>
+<td>Controls the number of batches being read consecutively from the same file.
+If the source is tailing multiple files and one of them is written at a fast
rate,
+it can prevent other files from being processed, because the busy file would be read in an endless loop.
+In this case, lower this value.</td>
+</tr>
+<tr class="row-even"><td>backoffSleepIncrement</td>
<td>1000</td>
<td>The increment for time delay before reattempting to poll for new data,
when the last attempt did not find any new data.</td>
</tr>
-<tr class="row-even"><td>maxBackoffSleep</td>
+<tr class="row-odd"><td>maxBackoffSleep</td>
<td>5000</td>
<td>The max time delay between each reattempt to poll for new data, when the
last attempt did not find any new data.</td>
</tr>
-<tr class="row-odd"><td>cachePatternMatching</td>
+<tr class="row-even"><td>cachePatternMatching</td>
<td>true</td>
<td>Listing directories and applying the filename regex pattern may be time
consuming for directories
containing thousands of files. Caching the list of matching files can improve
performance.
The order in which files are consumed will also be cached.
Requires that the file system keeps track of modification times with at least
a 1-second granularity.</td>
</tr>
-<tr class="row-even"><td>fileHeader</td>
+<tr class="row-odd"><td>fileHeader</td>
<td>false</td>
<td>Whether to add a header storing the absolute path filename.</td>
</tr>
-<tr class="row-odd"><td>fileHeaderKey</td>
+<tr class="row-even"><td>fileHeaderKey</td>
<td>file</td>
<td>Header key to use when appending absolute path filename to event
header.</td>
</tr>
@@ -1562,6 +1826,7 @@ Requires that the file system keeps trac
<span class="na">a1.sources.r1.headers.f2.headerKey1</span> <span
class="o">=</span> <span class="s">value2</span>
<span class="na">a1.sources.r1.headers.f2.headerKey2</span> <span
class="o">=</span> <span class="s">value2-2</span>
<span class="na">a1.sources.r1.fileHeader</span> <span class="o">=</span>
<span class="s">true</span>
+<span class="na">a1.sources.ri.maxBatchCount</span> <span class="o">=</span>
<span class="s">1000</span>
</pre></div>
</div>
</div>
@@ -1642,7 +1907,7 @@ Required properties are in <strong>bold<
<h4>Kafka Source<a class="headerlink" href="#kafka-source" title="Permalink to
this headline">¶</a></h4>
<p>Kafka Source is an Apache Kafka consumer that reads messages from Kafka
topics.
If you have multiple Kafka sources running, you can configure them with the
same Consumer Group
-so each will read a unique set of partitions for the topics.</p>
+so each will read a unique set of partitions for the topics. Kafka server releases 0.10.1.0
+or higher are currently supported. Testing was done up to version 2.0.1, the highest
+available version at the time of the release.</p>
<table border="1" class="docutils">
<colgroup>
<col width="19%" />
@@ -1723,25 +1988,16 @@ from, if the <tt class="docutils literal
with the Kafka Sink <tt class="docutils literal"><span
class="pre">topicHeader</span></tt> property so as to avoid sending the message
back to the same
topic in a loop.</td>
</tr>
-<tr class="row-odd"><td>migrateZookeeperOffsets</td>
-<td>true</td>
-<td>When no Kafka stored offset is found, look up the offsets in Zookeeper and
commit them to Kafka.
-This should be true to support seamless Kafka client migration from older
versions of Flume.
-Once migrated this can be set to false, though that should generally not be
required.
-If no Zookeeper offset is found, the Kafka configuration
kafka.consumer.auto.offset.reset
-defines how offsets are handled.
-Check <a class="reference external"
href="http://kafka.apache.org/documentation.html#newconsumerconfigs">Kafka
documentation</a> for details</td>
-</tr>
-<tr class="row-even"><td>kafka.consumer.security.protocol</td>
+<tr class="row-odd"><td>kafka.consumer.security.protocol</td>
<td>PLAINTEXT</td>
<td>Set to SASL_PLAINTEXT, SASL_SSL or SSL if writing to Kafka using some
level of security. See below for additional info on secure setup.</td>
</tr>
-<tr class="row-odd"><td><em>more consumer security props</em></td>
+<tr class="row-even"><td><em>more consumer security props</em></td>
<td> </td>
<td>If using SASL_PLAINTEXT, SASL_SSL or SSL refer to <a class="reference
external" href="http://kafka.apache.org/documentation.html#security">Kafka
security</a> for additional
properties that need to be set on consumer.</td>
</tr>
-<tr class="row-even"><td>Other Kafka Consumer Properties</td>
+<tr class="row-odd"><td>Other Kafka Consumer Properties</td>
<td>–</td>
<td>These properties are used to configure the Kafka Consumer. Any consumer
property supported
by Kafka can be used. The only requirement is to prepend the property name
with the prefix
@@ -1761,9 +2017,9 @@ and value.deserializer(org.apache.kafka.
<p>Deprecated Properties</p>
<table border="1" class="docutils">
<colgroup>
-<col width="22%" />
+<col width="21%" />
<col width="13%" />
-<col width="65%" />
+<col width="66%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -1785,6 +2041,16 @@ and value.deserializer(org.apache.kafka.
<td>Is no longer supported by kafka consumer client since 0.9.x. Use
kafka.bootstrap.servers
to establish connection with kafka cluster</td>
</tr>
+<tr class="row-odd"><td>migrateZookeeperOffsets</td>
+<td>true</td>
+<td>When no Kafka stored offset is found, look up the offsets in Zookeeper and
commit them to Kafka.
+This should be true to support seamless Kafka client migration from older
versions of Flume.
+Once migrated this can be set to false, though that should generally not be
required.
+If no Zookeeper offset is found, the Kafka configuration
kafka.consumer.auto.offset.reset
+defines how offsets are handled.
+Check <a class="reference external"
href="http://kafka.apache.org/documentation.html#newconsumerconfigs">Kafka
documentation</a>
+for details</td>
+</tr>
</tbody>
</table>
<p>Example for topic subscription by comma-separated topic list.</p>
@@ -1833,10 +2099,13 @@ security provider, cipher suites, enable
<span class="na">a1.sources.source1.kafka.topics</span> <span
class="o">=</span> <span class="s">mytopic</span>
<span class="na">a1.sources.source1.kafka.consumer.group.id</span> <span
class="o">=</span> <span class="s">flume-consumer</span>
<span class="na">a1.sources.source1.kafka.consumer.security.protocol</span>
<span class="o">=</span> <span class="s">SSL</span>
+<span class="c"># optional, the global truststore can be used
alternatively</span>
<span
class="na">a1.sources.source1.kafka.consumer.ssl.truststore.location</span><span
class="o">=</span><span class="s">/path/to/truststore.jks</span>
<span
class="na">a1.sources.source1.kafka.consumer.ssl.truststore.password</span><span
class="o">=</span><span class="s"><password to access the
truststore></span>
</pre></div>
</div>
+<p>Specifying the truststore is optional here; the global truststore can be
used instead.
+For more details about the global SSL setup, see the <a class="reference
internal" href="#ssl-tls-support">SSL/TLS support</a> section.</p>
<p>Note: By default the property <tt class="docutils literal"><span
class="pre">ssl.endpoint.identification.algorithm</span></tt>
is not defined, so hostname verification is not performed.
In order to enable hostname verification, set the following properties</p>
@@ -1849,11 +2118,13 @@ against one of the following two fields:
<li>Common Name (CN) <a class="reference external"
href="https://tools.ietf.org/html/rfc6125#section-2.3">https://tools.ietf.org/html/rfc6125#section-2.3</a></li>
<li>Subject Alternative Name (SAN) <a class="reference external"
href="https://tools.ietf.org/html/rfc5280#section-4.2.1.6">https://tools.ietf.org/html/rfc5280#section-4.2.1.6</a></li>
</ol>
-<p>If client side authentication is also required then additionally the
following should be added to Flume agent configuration.
+<p>If client-side authentication is also required, then the following additionally
needs to be added to the Flume agent
+configuration, or the global SSL setup can be used (see <a class="reference
internal" href="#ssl-tls-support">SSL/TLS support</a> section).
Each Flume agent has to have its client certificate which has to be trusted by
Kafka brokers either
individually or by their signature chain. Common example is to sign each
client certificate by a single Root CA
which in turn is trusted by Kafka brokers.</p>
-<div class="highlight-properties"><div class="highlight"><pre><span
class="na">a1.sources.source1.kafka.consumer.ssl.keystore.location</span><span
class="o">=</span><span class="s">/path/to/client.keystore.jks</span>
+<div class="highlight-properties"><div class="highlight"><pre><span
class="c"># optional, the global keystore can be used alternatively</span>
+<span
class="na">a1.sources.source1.kafka.consumer.ssl.keystore.location</span><span
class="o">=</span><span class="s">/path/to/client.keystore.jks</span>
<span
class="na">a1.sources.source1.kafka.consumer.ssl.keystore.password</span><span
class="o">=</span><span class="s"><password to access the keystore></span>
</pre></div>
</div>
@@ -1889,6 +2160,7 @@ for information on the JAAS file content
<span class="na">a1.sources.source1.kafka.consumer.security.protocol</span>
<span class="o">=</span> <span class="s">SASL_SSL</span>
<span class="na">a1.sources.source1.kafka.consumer.sasl.mechanism</span> <span
class="o">=</span> <span class="s">GSSAPI</span>
<span
class="na">a1.sources.source1.kafka.consumer.sasl.kerberos.service.name</span>
<span class="o">=</span> <span class="s">kafka</span>
+<span class="c"># optional, the global truststore can be used
alternatively</span>
<span
class="na">a1.sources.source1.kafka.consumer.ssl.truststore.location</span><span
class="o">=</span><span class="s">/path/to/truststore.jks</span>
<span
class="na">a1.sources.source1.kafka.consumer.ssl.truststore.password</span><span
class="o">=</span><span class="s"><password to access the
truststore></span>
</pre></div>
@@ -2129,9 +2401,9 @@ of characters separated by a newline (&#
<p>The original, tried-and-true syslog TCP source.</p>
<table border="1" class="docutils">
<colgroup>
-<col width="19%" />
-<col width="15%" />
-<col width="67%" />
+<col width="16%" />
+<col width="9%" />
+<col width="75%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -2170,6 +2442,26 @@ fields can be included: priority, versio
timestamp, hostname. The values ‘true’ and ‘false’
have been deprecated in favor of ‘all’ and ‘none’.</td>
</tr>
+<tr class="row-even"><td>clientIPHeader</td>
+<td>–</td>
+<td>If specified, the IP address of the client will be stored in
+the header of each event using the header name specified here.
+This allows for interceptors and channel selectors to customize
+routing logic based on the IP address of the client.
+Do not use the standard Syslog header names here (like _host_)
+because the event header will be overridden in that case.</td>
+</tr>
+<tr class="row-odd"><td>clientHostnameHeader</td>
+<td>–</td>
+<td>If specified, the host name of the client will be stored in
+the header of each event using the header name specified here.
+This allows for interceptors and channel selectors to customize
+routing logic based on the host name of the client.
+Retrieving the host name may involve a name service reverse
+lookup which may affect the performance.
+Do not use the standard Syslog header names here (like _host_)
+because the event header will be overridden in that case.</td>
+</tr>
<tr class="row-even"><td>selector.type</td>
<td> </td>
<td>replicating or multiplexing</td>
@@ -2186,6 +2478,52 @@ have been deprecated in favor of ‘
<td> </td>
<td> </td>
</tr>
+<tr class="row-even"><td>ssl</td>
+<td>false</td>
+<td>Set this to true to enable SSL encryption. If SSL is enabled,
+you must also specify a “keystore” and a
“keystore-password”,
+either through component level parameters (see below)
+or as global SSL parameters (see <a class="reference internal"
href="#ssl-tls-support">SSL/TLS support</a> section).</td>
+</tr>
+<tr class="row-odd"><td>keystore</td>
+<td>–</td>
+<td>This is the path to a Java keystore file.
+If not specified here, then the global keystore will be used
+(if defined, otherwise configuration error).</td>
+</tr>
+<tr class="row-even"><td>keystore-password</td>
+<td>–</td>
+<td>The password for the Java keystore.
+If not specified here, then the global keystore password will be used
+(if defined, otherwise configuration error).</td>
+</tr>
+<tr class="row-odd"><td>keystore-type</td>
+<td>JKS</td>
+<td>The type of the Java keystore. This can be “JKS” or
“PKCS12”.
+If not specified here, then the global keystore type will be used
+(if defined, otherwise the default is JKS).</td>
+</tr>
+<tr class="row-even"><td>exclude-protocols</td>
+<td>SSLv3</td>
+<td>Space-separated list of SSL/TLS protocols to exclude.
+SSLv3 will always be excluded in addition to the protocols specified.</td>
+</tr>
+<tr class="row-odd"><td>include-protocols</td>
+<td>–</td>
+<td>Space-separated list of SSL/TLS protocols to include.
+The enabled protocols will be the included protocols without the excluded
protocols.
+If included-protocols is empty, every supported protocol is included.</td>
+</tr>
+<tr class="row-even"><td>exclude-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to exclude.</td>
+</tr>
+<tr class="row-odd"><td>include-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to include.
+The enabled cipher suites will be the included cipher suites without the
excluded cipher suites.
+If included-cipher-suites is empty, every supported cipher suite is
included.</td>
+</tr>
</tbody>
</table>
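+<p>As a sketch, the new SSL parameters above could be used as follows; the
+agent and source names are illustrative, and the keystore parameters can be
+omitted when the global keystore is configured instead:</p>
+<div class="highlight-properties"><div class="highlight"><pre><span class="na">a1.sources.r1.ssl</span> <span class="o">=</span> <span class="s">true</span>
+<span class="na">a1.sources.r1.keystore</span> <span class="o">=</span> <span class="s">/path/to/keystore.jks</span>
+<span class="na">a1.sources.r1.keystore-password</span> <span class="o">=</span> <span class="s">&lt;password to access the keystore&gt;</span>
+</pre></div>
+</div>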
<p>For example, a syslog TCP source for agent named a1:</p>
@@ -2209,9 +2547,9 @@ Also provides the capability to configur
basis.</p>
<table border="1" class="docutils">
<colgroup>
-<col width="7%" />
+<col width="8%" />
<col width="6%" />
-<col width="87%" />
+<col width="86%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -2254,6 +2592,26 @@ have been deprecated in favor of ‘
<td>–</td>
<td>If specified, the port number will be stored in the header of each event
using the header name specified here. This allows for interceptors and channel
selectors to customize routing logic based on the incoming port.</td>
</tr>
+<tr class="row-odd"><td>clientIPHeader</td>
+<td>–</td>
+<td>If specified, the IP address of the client will be stored in
+the header of each event using the header name specified here.
+This allows for interceptors and channel selectors to customize
+routing logic based on the IP address of the client.
+Do not use the standard Syslog header names here (like _host_)
+because the event header will be overridden in that case.</td>
+</tr>
+<tr class="row-even"><td>clientHostnameHeader</td>
+<td>–</td>
+<td>If specified, the host name of the client will be stored in
+the header of each event using the header name specified here.
+This allows for interceptors and channel selectors to customize
+routing logic based on the host name of the client.
+Retrieving the host name may involve a name service reverse
+lookup which may affect the performance.
+Do not use the standard Syslog header names here (like _host_)
+because the event header will be overridden in that case.</td>
+</tr>
<tr class="row-odd"><td>charset.default</td>
<td>UTF-8</td>
<td>Default character set used while parsing syslog events into strings.</td>
@@ -2290,6 +2648,52 @@ have been deprecated in favor of ‘
<td> </td>
<td> </td>
</tr>
+<tr class="row-even"><td>ssl</td>
+<td>false</td>
+<td>Set this to true to enable SSL encryption. If SSL is enabled,
+you must also specify a “keystore” and a
“keystore-password”,
+either through component level parameters (see below)
+or as global SSL parameters (see <a class="reference internal"
href="#ssl-tls-support">SSL/TLS support</a> section).</td>
+</tr>
+<tr class="row-odd"><td>keystore</td>
+<td>–</td>
+<td>This is the path to a Java keystore file.
+If not specified here, then the global keystore will be used
+(if defined, otherwise configuration error).</td>
+</tr>
+<tr class="row-even"><td>keystore-password</td>
+<td>–</td>
+<td>The password for the Java keystore.
+If not specified here, then the global keystore password will be used
+(if defined, otherwise configuration error).</td>
+</tr>
+<tr class="row-odd"><td>keystore-type</td>
+<td>JKS</td>
+<td>The type of the Java keystore. This can be “JKS” or
“PKCS12”.
+If not specified here, then the global keystore type will be used
+(if defined, otherwise the default is JKS).</td>
+</tr>
+<tr class="row-even"><td>exclude-protocols</td>
+<td>SSLv3</td>
+<td>Space-separated list of SSL/TLS protocols to exclude.
+SSLv3 will always be excluded in addition to the protocols specified.</td>
+</tr>
+<tr class="row-odd"><td>include-protocols</td>
+<td>–</td>
+<td>Space-separated list of SSL/TLS protocols to include.
+The enabled protocols will be the included protocols without the excluded
protocols.
+If included-protocols is empty, every supported protocol is included.</td>
+</tr>
+<tr class="row-even"><td>exclude-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to exclude.</td>
+</tr>
+<tr class="row-odd"><td>include-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to include.
+The enabled cipher suites will be the included cipher suites without the
excluded cipher suites.
+If included-cipher-suites is empty, every supported cipher suite is
included.</td>
+</tr>
</tbody>
</table>
<p>For example, a multiport syslog TCP source for agent named a1:</p>
@@ -2307,8 +2711,8 @@ have been deprecated in favor of ‘
<h5>Syslog UDP Source<a class="headerlink" href="#syslog-udp-source"
title="Permalink to this headline">¶</a></h5>
<table border="1" class="docutils">
<colgroup>
-<col width="19%" />
-<col width="15%" />
+<col width="21%" />
+<col width="12%" />
<col width="67%" />
</colgroup>
<thead valign="bottom">
@@ -2339,6 +2743,26 @@ have been deprecated in favor of ‘
<td>Setting this to true will preserve the Priority,
Timestamp and Hostname in the body of the event.</td>
</tr>
+<tr class="row-odd"><td>clientIPHeader</td>
+<td>–</td>
+<td>If specified, the IP address of the client will be stored in
+the header of each event using the header name specified here.
+This allows for interceptors and channel selectors to customize
+routing logic based on the IP address of the client.
+Do not use the standard Syslog header names here (like _host_)
+because the event header will be overridden in that case.</td>
+</tr>
+<tr class="row-even"><td>clientHostnameHeader</td>
+<td>–</td>
+<td>If specified, the host name of the client will be stored in
+the header of each event using the header name specified here.
+This allows for interceptors and channel selectors to customize
+routing logic based on the host name of the client.
+Retrieving the host name may involve a name service reverse
+lookup which may affect the performance.
+Do not use the standard Syslog header names here (like _host_)
+because the event header will be overridden in that case.</td>
+</tr>
<tr class="row-odd"><td>selector.type</td>
<td> </td>
<td>replicating or multiplexing</td>
@@ -2382,11 +2806,13 @@ append events to the channel, the source
unavailable status.</p>
<p>All events sent in one post request are considered to be one batch and
inserted into the channel in one transaction.</p>
+<p>This source is based on Jetty 9.4 and offers the ability to set additional
+Jetty-specific parameters which will be passed directly to the Jetty
components.</p>
<table border="1" class="docutils">
<colgroup>
-<col width="12%" />
-<col width="30%" />
-<col width="58%" />
+<col width="13%" />
+<col width="27%" />
+<col width="60%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -2431,23 +2857,104 @@ inserted into the channel in one transac
<td> </td>
<td> </td>
</tr>
-<tr class="row-odd"><td>enableSSL</td>
+<tr class="row-odd"><td>ssl</td>
<td>false</td>
<td>Set the property true, to enable SSL. <em>HTTP Source does not support
SSLv3.</em></td>
</tr>
-<tr class="row-even"><td>excludeProtocols</td>
+<tr class="row-even"><td>exclude-protocols</td>
<td>SSLv3</td>
-<td>Space-separated list of SSL/TLS protocols to exclude. SSLv3 is always
excluded.</td>
+<td>Space-separated list of SSL/TLS protocols to exclude.
+SSLv3 will always be excluded in addition to the protocols specified.</td>
</tr>
-<tr class="row-odd"><td>keystore</td>
+<tr class="row-odd"><td>include-protocols</td>
+<td>–</td>
+<td>Space-separated list of SSL/TLS protocols to include.
+The enabled protocols will be the included protocols without the excluded
protocols.
+If included-protocols is empty, every supported protocol is included.</td>
+</tr>
+<tr class="row-even"><td>exclude-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to exclude.</td>
+</tr>
+<tr class="row-odd"><td>include-cipher-suites</td>
+<td>–</td>
+<td>Space-separated list of cipher suites to include.
+The enabled cipher suites will be the included cipher suites without the
excluded cipher suites.</td>
+</tr>
+<tr class="row-even"><td>keystore</td>
+<td> </td>
+<td>Location of the keystore including keystore file name.
+If SSL is enabled but the keystore is not specified here,
+then the global keystore will be used
+(if defined, otherwise configuration error).</td>
+</tr>
+<tr class="row-odd"><td>keystore-password</td>
+<td> </td>
+<td>Keystore password.
+If SSL is enabled but the keystore password is not specified here,
+then the global keystore password will be used
+(if defined, otherwise configuration error).</td>
+</tr>
+<tr class="row-even"><td>keystore-type</td>
+<td>JKS</td>
+<td>Keystore type. This can be “JKS” or “PKCS12”.</td>
+</tr>
+<tr class="row-odd"><td>QueuedThreadPool.*</td>
+<td> </td>
+<td>Jetty specific settings to be set on
org.eclipse.jetty.util.thread.QueuedThreadPool.
+N.B. QueuedThreadPool will only be used if at least one property of this class
is set.</td>
+</tr>
+<tr class="row-even"><td>HttpConfiguration.*</td>
+<td> </td>
+<td>Jetty specific settings to be set on
org.eclipse.jetty.server.HttpConfiguration</td>
+</tr>
+<tr class="row-odd"><td>SslContextFactory.*</td>
+<td> </td>
+<td>Jetty specific settings to be set on
org.eclipse.jetty.util.ssl.SslContextFactory (only
+applicable when <em>ssl</em> is set to true).</td>
+</tr>
+<tr class="row-even"><td>ServerConnector.*</td>
<td> </td>
-<td>Location of the keystore includng keystore file name</td>
+<td>Jetty specific settings to be set on
org.eclipse.jetty.server.ServerConnector</td>
+</tr>
+</tbody>
+</table>
+<p>Deprecated Properties</p>
+<table border="1" class="docutils">
+<colgroup>
+<col width="22%" />
+<col width="13%" />
+<col width="65%" />
+</colgroup>
+<thead valign="bottom">
+<tr class="row-odd"><th class="head">Property Name</th>
+<th class="head">Default</th>
+<th class="head">Description</th>
+</tr>
+</thead>
+<tbody valign="top">
+<tr class="row-even"><td>keystorePassword</td>
+<td>–</td>
+<td>Use <em>keystore-password</em>. Deprecated value will be overwritten with
the new one.</td>
+</tr>
+<tr class="row-odd"><td>excludeProtocols</td>
+<td>SSLv3</td>
+<td>Use <em>exclude-protocols</em>. Deprecated value will be overwritten with
the new one.</td>
</tr>
-<tr class="row-even"><td colspan="3">keystorePassword
Keystore password</td>
+<tr class="row-even"><td>enableSSL</td>
+<td>false</td>
+<td>Use <em>ssl</em>. Deprecated value will be overwritten with the new
one.</td>
</tr>
</tbody>
</table>
-<p>For example, a http source for agent named a1:</p>
+<p>N.B. Jetty-specific settings are set using the setter-methods on the
objects listed above. For full details see the Javadoc for these classes
+(<a class="reference external"
href="http://www.eclipse.org/jetty/javadoc/9.4.6.v20170531/org/eclipse/jetty/util/thread/QueuedThreadPool.html">QueuedThreadPool</a>,
+<a class="reference external"
href="http://www.eclipse.org/jetty/javadoc/9.4.6.v20170531/org/eclipse/jetty/server/HttpConfiguration.html">HttpConfiguration</a>,
+<a class="reference external"
href="http://www.eclipse.org/jetty/javadoc/9.4.6.v20170531/org/eclipse/jetty/util/ssl/SslContextFactory.html">SslContextFactory</a>
and
+<a class="reference external"
href="http://www.eclipse.org/jetty/javadoc/9.4.6.v20170531/org/eclipse/jetty/server/ServerConnector.html">ServerConnector</a>).</p>
+<p>When using Jetty-specific settings, the named properties above will take
precedence (for example excludeProtocols will take
+precedence over SslContextFactory.ExcludeProtocols). All property names are
initial lower case.</p>
+<p>An example HTTP source for agent named a1:</p>
<div class="highlight-properties"><div class="highlight"><pre><span
class="na">a1.sources</span> <span class="o">=</span> <span class="s">r1</span>
<span class="na">a1.channels</span> <span class="o">=</span> <span
class="s">c1</span>
<span class="na">a1.sources.r1.type</span> <span class="o">=</span> <span
class="s">http</span>
@@ -2455,6 +2962,8 @@ inserted into the channel in one transac
<span class="na">a1.sources.r1.channels</span> <span class="o">=</span> <span
class="s">c1</span>
<span class="na">a1.sources.r1.handler</span> <span class="o">=</span> <span
class="s">org.example.rest.RestHandler</span>
<span class="na">a1.sources.r1.handler.nickname</span> <span
class="o">=</span> <span class="s">random props</span>
+<span class="na">a1.sources.r1.HttpConfiguration.sendServerVersion</span>
<span class="o">=</span> <span class="s">false</span>
+<span class="na">a1.sources.r1.ServerConnector.idleTimeout</span> <span
class="o">=</span> <span class="s">300</span>
</pre></div>
</div>
<div class="section" id="jsonhandler">
@@ -2531,9 +3040,9 @@ Event to be delivered.</p>
<p>Required properties are in <strong>bold</strong>.</p>
<table border="1" class="docutils">
<colgroup>
-<col width="18%" />
+<col width="17%" />
<col width="10%" />
-<col width="72%" />
+<col width="73%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -2562,6 +3071,10 @@ Event to be delivered.</p>
<td>1</td>
<td>Number of Events to be sent in one batch</td>
</tr>
+<tr class="row-odd"><td>maxEventsPerSecond</td>
+<td>0</td>
+<td>When set to an integer greater than zero, enforces a rate limit on the
source.</td>
+</tr>
</tbody>
</table>
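+<p>As a sketch, the rate limiter could be enabled like this (the agent and
+channel names are illustrative):</p>
+<div class="highlight-properties"><div class="highlight"><pre><span class="na">a1.sources.stresssource-1.type</span> <span class="o">=</span> <span class="s">org.apache.flume.source.StressSource</span>
+<span class="na">a1.sources.stresssource-1.maxEventsPerSecond</span> <span class="o">=</span> <span class="s">500</span>
+<span class="na">a1.sources.stresssource-1.channels</span> <span class="o">=</span> <span class="s">c1</span>
+</pre></div>
+</div>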
<p>Example for agent named <strong>a1</strong>:</p>
@@ -2942,9 +3455,9 @@ this automatically is to use the Timesta
</div>
<table border="1" class="docutils">
<colgroup>
-<col width="9%" />
-<col width="5%" />
-<col width="86%" />
+<col width="8%" />
+<col width="4%" />
+<col width="88%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Name</th>
@@ -2981,56 +3494,55 @@ this automatically is to use the Timesta
<td><tt class="docutils literal"><span class="pre">.tmp</span></tt></td>
<td>Suffix that is used for temporal files that flume actively writes into</td>
</tr>
-<tr class="row-odd"><td>hdfs.rollInterval</td>
+<tr class="row-odd"><td>hdfs.emptyInUseSuffix</td>
+<td>false</td>
+<td>If <tt class="docutils literal"><span class="pre">false</span></tt>, the <tt
class="docutils literal"><span class="pre">hdfs.inUseSuffix</span></tt> is used
while writing the output. After closing the output, the <tt class="docutils
literal"><span class="pre">hdfs.inUseSuffix</span></tt> is removed from the
output file name. If <tt class="docutils literal"><span
class="pre">true</span></tt>, the <tt class="docutils literal"><span
class="pre">hdfs.inUseSuffix</span></tt> parameter is ignored and an empty
string is used instead.</td>
+</tr>
+<tr class="row-even"><td>hdfs.rollInterval</td>
<td>30</td>
<td>Number of seconds to wait before rolling current file
(0 = never roll based on time interval)</td>
</tr>
-<tr class="row-even"><td>hdfs.rollSize</td>
+<tr class="row-odd"><td>hdfs.rollSize</td>
<td>1024</td>
<td>File size to trigger roll, in bytes (0: never roll based on file size)</td>
</tr>
-<tr class="row-odd"><td>hdfs.rollCount</td>
+<tr class="row-even"><td>hdfs.rollCount</td>
<td>10</td>
<td>Number of events written to file before it rolled
(0 = never roll based on number of events)</td>
</tr>
-<tr class="row-even"><td>hdfs.idleTimeout</td>
+<tr class="row-odd"><td>hdfs.idleTimeout</td>
<td>0</td>
<td>Timeout after which inactive files get closed
(0 = disable automatic closing of idle files)</td>
</tr>
-<tr class="row-odd"><td>hdfs.batchSize</td>
+<tr class="row-even"><td>hdfs.batchSize</td>
<td>100</td>
<td>number of events written to file before it is flushed to HDFS</td>
</tr>
-<tr class="row-even"><td>hdfs.codeC</td>
+<tr class="row-odd"><td>hdfs.codeC</td>
<td>–</td>
<td>Compression codec. one of following : gzip, bzip2, lzo, lzop, snappy</td>
</tr>
-<tr class="row-odd"><td>hdfs.fileType</td>
+<tr class="row-even"><td>hdfs.fileType</td>
<td>SequenceFile</td>
<td>File format: currently <tt class="docutils literal"><span
class="pre">SequenceFile</span></tt>, <tt class="docutils literal"><span
class="pre">DataStream</span></tt> or <tt class="docutils literal"><span
class="pre">CompressedStream</span></tt>
(1)DataStream will not compress output file and please don’t set codeC
(2)CompressedStream requires set hdfs.codeC with an available codeC</td>
</tr>
-<tr class="row-even"><td>hdfs.maxOpenFiles</td>
+<tr class="row-odd"><td>hdfs.maxOpenFiles</td>
<td>5000</td>
<td>Allow only this number of open files. If this number is exceeded, the
oldest file is closed.</td>
</tr>
-<tr class="row-odd"><td>hdfs.minBlockReplicas</td>
+<tr class="row-even"><td>hdfs.minBlockReplicas</td>
<td>–</td>
<td>Specify minimum number of replicas per HDFS block. If not specified, it
comes from the default Hadoop config in the classpath.</td>
</tr>
-<tr class="row-even"><td>hdfs.writeFormat</td>
+<tr class="row-odd"><td>hdfs.writeFormat</td>
<td>Writable</td>
<td>Format for sequence file records. One of <tt class="docutils
literal"><span class="pre">Text</span></tt> or <tt class="docutils
literal"><span class="pre">Writable</span></tt>. Set to <tt class="docutils
literal"><span class="pre">Text</span></tt> before creating data files with
Flume, otherwise those files cannot be read by either Apache Impala
(incubating) or Apache Hive.</td>
</tr>
-<tr class="row-odd"><td>hdfs.callTimeout</td>
-<td>10000</td>
-<td>Number of milliseconds allowed for HDFS operations, such as open, write,
flush, close.
-This number should be increased if many HDFS timeout operations are
occurring.</td>
-</tr>
<tr class="row-even"><td>hdfs.threadsPoolSize</td>
<td>10</td>
<td>Number of threads per HDFS sink for HDFS IO ops (open, write, etc.)</td>
@@ -3096,6 +3608,11 @@ fully-qualified class name of an impleme
</tr>
</tbody>
</table>
+<p>Deprecated Properties</p>
+<table border="1" class="docutils">
+<colgroup>
+<col width="22%" />
+<col width="13%" />
+<col width="65%" />
+</colgroup>
+<thead valign="bottom">
+<tr class="row-odd"><th class="head">Property Name</th>
+<th class="head">Default</th>
+<th class="head">Description</th>
+</tr>
+</thead>
+<tbody valign="top">
+<tr class="row-even"><td>hdfs.callTimeout</td>
+<td>30000</td>
+<td>Number of milliseconds allowed for HDFS operations, such as open, write,
flush, close. This number should be increased if many HDFS timeout operations
are occurring.</td>
+</tr>
+</tbody>
+</table>
<p>Example for agent named a1:</p>
<div class="highlight-properties"><div class="highlight"><pre><span
class="na">a1.channels</span> <span class="o">=</span> <span class="s">c1</span>
<span class="na">a1.sinks</span> <span class="o">=</span> <span
class="s">k1</span>
@@ -3419,9 +3936,9 @@ batches of the configured batch size.
Required properties are in <strong>bold</strong>.</p>
<table border="1" class="docutils">
<colgroup>
-<col width="6%" />
-<col width="13%" />
-<col width="81%" />
+<col width="5%" />
+<col width="10%" />
+<col width="84%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -3480,15 +3997,15 @@ Required properties are in <strong>bold<
</tr>
<tr class="row-even"><td>truststore</td>
<td>–</td>
-<td>The path to a custom Java truststore file. Flume uses the certificate
authority information in this file to determine whether the remote Avro
Source’s SSL authentication credentials should be trusted. If not
specified, the default Java JSSE certificate authority files (typically
“jssecacerts” or “cacerts” in the Oracle JRE) will be
used.</td>
+<td>The path to a custom Java truststore file. Flume uses the certificate
authority information in this file to determine whether the remote Avro
Source’s SSL authentication credentials should be trusted. If not
specified, then the global truststore will be used. If the global truststore is
not specified either, then the default Java JSSE certificate authority files
(typically “jssecacerts” or “cacerts” in the Oracle
JRE) will be used.</td>
</tr>
<tr class="row-odd"><td>truststore-password</td>
<td>–</td>
-<td>The password for the specified truststore.</td>
+<td>The password for the truststore. If not specified, then the global
truststore password will be used (if defined).</td>
</tr>
<tr class="row-even"><td>truststore-type</td>
<td>JKS</td>
-<td>The type of the Java truststore. This can be “JKS” or other
supported Java truststore type.</td>
+<td>The type of the Java truststore. This can be “JKS” or other
supported Java truststore type. If not specified, then the global truststore
type will be used (if defined, otherwise the default is JKS).</td>
</tr>
<tr class="row-odd"><td>exclude-protocols</td>
<td>SSLv3</td>
@@ -3524,9 +4041,9 @@ principal of the Thrift source this sink
Required properties are in <strong>bold</strong>.</p>
<table border="1" class="docutils">
<colgroup>
-<col width="7%" />
+<col width="6%" />
<col width="2%" />
-<col width="91%" />
+<col width="93%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -3573,15 +4090,15 @@ Required properties are in <strong>bold<
</tr>
<tr class="row-odd"><td>truststore</td>
<td>–</td>
-<td>The path to a custom Java truststore file. Flume uses the certificate
authority information in this file to determine whether the remote Thrift
Source’s SSL authentication credentials should be trusted. If not
specified, the default Java JSSE certificate authority files (typically
“jssecacerts” or “cacerts” in the Oracle JRE) will be
used.</td>
+<td>The path to a custom Java truststore file. Flume uses the certificate
authority information in this file to determine whether the remote Thrift
Source’s SSL authentication credentials should be trusted. If not
specified, then the global truststore will be used. If the global truststore is
not specified either, then the default Java JSSE certificate authority files
(typically “jssecacerts” or “cacerts” in the Oracle
JRE) will be used.</td>
</tr>
<tr class="row-even"><td>truststore-password</td>
<td>–</td>
-<td>The password for the specified truststore.</td>
+<td>The password for the truststore. If not specified, then the global
truststore password will be used (if defined).</td>
</tr>
<tr class="row-odd"><td>truststore-type</td>
<td>JKS</td>
-<td>The type of the Java truststore. This can be “JKS” or other
supported Java truststore type.</td>
+<td>The type of the Java truststore. This can be “JKS” or other
supported Java truststore type. If not specified, then the global truststore
type will be used (if defined, otherwise the default is JKS).</td>
</tr>
<tr class="row-even"><td>exclude-protocols</td>
<td>SSLv3</td>
@@ -3741,7 +4258,7 @@ Required properties are in <strong>bold<
<td>TEXT</td>
<td>Other possible options include <tt class="docutils literal"><span
class="pre">avro_event</span></tt> or the FQCN of an implementation of
EventSerializer.Builder interface.</td>
</tr>
-<tr class="row-even"><td>batchSize</td>
+<tr class="row-even"><td>sink.batchSize</td>
<td>100</td>
<td> </td>
</tr>
@@ -3896,8 +4413,89 @@ better performance if there are multiple
</pre></div>
</div>
</div>
-<div class="section" id="asynchbasesink">
-<h5>AsyncHBaseSink<a class="headerlink" href="#asynchbasesink"
title="Permalink to this headline">¶</a></h5>
+<div class="section" id="hbase2sink">
+<h5>HBase2Sink<a class="headerlink" href="#hbase2sink" title="Permalink to
this headline">¶</a></h5>
+<p>HBase2Sink is the equivalent of HBaseSink for HBase version 2.
+The provided functionality and the configuration parameters are the same as in
the case of HBaseSink (except the hbase2 tag in the sink type and the package/class
names).</p>
+<p>The type is the FQCN: org.apache.flume.sink.hbase2.HBase2Sink.</p>
+<p>Required properties are in <strong>bold</strong>.</p>
+<table border="1" class="docutils">
+<colgroup>
+<col width="10%" />
+<col width="31%" />
+<col width="58%" />
+</colgroup>
+<thead valign="bottom">
+<tr class="row-odd"><th class="head">Property Name</th>
+<th class="head">Default</th>
+<th class="head">Description</th>
+</tr>
+</thead>
+<tbody valign="top">
+<tr class="row-even"><td><strong>channel</strong></td>
+<td>–</td>
+<td> </td>
+</tr>
+<tr class="row-odd"><td><strong>type</strong></td>
+<td>–</td>
+<td>The component type name, needs to be <tt class="docutils literal"><span
class="pre">hbase2</span></tt></td>
+</tr>
+<tr class="row-even"><td><strong>table</strong></td>
+<td>–</td>
+<td>The name of the table in HBase to write to.</td>
+</tr>
+<tr class="row-odd"><td><strong>columnFamily</strong></td>
+<td>–</td>
+<td>The column family in HBase to write to.</td>
+</tr>
+<tr class="row-even"><td>zookeeperQuorum</td>
+<td>–</td>
+<td>The quorum spec. This is the value for the property <tt class="docutils
literal"><span class="pre">hbase.zookeeper.quorum</span></tt> in
hbase-site.xml</td>
+</tr>
+<tr class="row-odd"><td>znodeParent</td>
+<td>/hbase</td>
+<td>The base path for the znode for the -ROOT- region. Value of <tt
class="docutils literal"><span class="pre">zookeeper.znode.parent</span></tt>
in hbase-site.xml</td>
+</tr>
+<tr class="row-even"><td>batchSize</td>
+<td>100</td>
+<td>Number of events to be written per txn.</td>
+</tr>
+<tr class="row-odd"><td>coalesceIncrements</td>
+<td>false</td>
+<td>Should the sink coalesce multiple increments to a cell per batch. This
might give
+better performance if there are multiple increments to a limited number of
cells.</td>
+</tr>
+<tr class="row-even"><td>serializer</td>
+<td>org.apache.flume.sink.hbase2.SimpleHBase2EventSerializer</td>
+<td>Default increment column = “iCol”, payload column =
“pCol”.</td>
+</tr>
+<tr class="row-odd"><td>serializer.*</td>
+<td>–</td>
+<td>Properties to be passed to the serializer.</td>
+</tr>
+<tr class="row-even"><td>kerberosPrincipal</td>
+<td>–</td>
+<td>Kerberos user principal for accessing secure HBase</td>
+</tr>
+<tr class="row-odd"><td>kerberosKeytab</td>
+<td>–</td>
+<td>Kerberos keytab for accessing secure HBase</td>
+</tr>
+</tbody>
+</table>
+<p>Example for agent named a1:</p>
+<div class="highlight-properties"><div class="highlight"><pre><span
class="na">a1.channels</span> <span class="o">=</span> <span class="s">c1</span>
+<span class="na">a1.sinks</span> <span class="o">=</span> <span
class="s">k1</span>
+<span class="na">a1.sinks.k1.type</span> <span class="o">=</span> <span
class="s">hbase2</span>
+<span class="na">a1.sinks.k1.table</span> <span class="o">=</span> <span
class="s">foo_table</span>
+<span class="na">a1.sinks.k1.columnFamily</span> <span class="o">=</span>
<span class="s">bar_cf</span>
+<span class="na">a1.sinks.k1.serializer</span> <span class="o">=</span> <span
class="s">org.apache.flume.sink.hbase2.RegexHBase2EventSerializer</span>
+<span class="na">a1.sinks.k1.channel</span> <span class="o">=</span> <span
class="s">c1</span>
+</pre></div>
+</div>
+</div>
+<div class="section" id="asynchbasesink">
+<h5>AsyncHBaseSink<a class="headerlink" href="#asynchbasesink"
title="Permalink to this headline">¶</a></h5>
<p>This sink writes data to HBase using an asynchronous model. A class
implementing
AsyncHbaseEventSerializer which is specified by the configuration is used to
convert the events into
HBase puts and/or increments. These puts and increments are then written
@@ -3905,13 +4503,14 @@ to HBase. This sink uses the <a class="r
HBase. This sink provides the same consistency guarantees as HBase,
which is currently row-wise atomicity. In the event of HBase failing to
write certain events, the sink will replay all events in that transaction.
+AsyncHBaseSink can only be used with HBase 1.x. The async client library used
by AsyncHBaseSink is not available for HBase 2.
The type is the FQCN: org.apache.flume.sink.hbase.AsyncHBaseSink.
Required properties are in <strong>bold</strong>.</p>
<table border="1" class="docutils">
<colgroup>
-<col width="10%" />
-<col width="33%" />
-<col width="57%" />
+<col width="9%" />
+<col width="29%" />
+<col width="61%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -3966,6 +4565,13 @@ all events in a transaction.</td>
<td>–</td>
<td>Properties to be passed to the serializer.</td>
</tr>
+<tr class="row-odd"><td>async.*</td>
+<td>–</td>
+<td>Properties to be passed to the AsyncHBase library.
+These properties have precedence over the old <tt class="docutils
literal"><span class="pre">zookeeperQuorum</span></tt> and <tt class="docutils
literal"><span class="pre">znodeParent</span></tt> values.
+You can find the list of the available properties at
+<a class="reference external"
href="http://opentsdb.github.io/asynchbase/docs/build/html/configuration.html#properties">the
documentation page of AsyncHBase</a>.</td>
+</tr>
</tbody>
</table>
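As a hedged sketch of the async.* pass-through described in the table above: the forwarded property names (hbase.zookeeper.quorum, hbase.zookeeper.znode.parent) are assumptions taken from the AsyncHBase configuration documentation and should be verified there.

```properties
# Sketch of an AsyncHBaseSink agent fragment using the async.* prefix.
# The async.* property names are assumed from the AsyncHBase docs;
# verify them against the linked documentation page.
a1.sinks.k1.type = org.apache.flume.sink.hbase.AsyncHBaseSink
a1.sinks.k1.channel = c1
a1.sinks.k1.table = foo_table
a1.sinks.k1.columnFamily = bar_cf
# these take precedence over the older zookeeperQuorum / znodeParent settings
a1.sinks.k1.async.hbase.zookeeper.quorum = zk1:2181,zk2:2181,zk3:2181
a1.sinks.k1.async.hbase.zookeeper.znode.parent = /hbase
```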
<p>Note that this sink takes the Zookeeper Quorum and parent znode information
in
@@ -4294,8 +4900,8 @@ the kerberos principal</td>
<p>This is a Flume Sink implementation that can publish data to a
<a class="reference external" href="http://kafka.apache.org/">Kafka</a> topic.
One of the objectives is to integrate Flume
with Kafka so that pull-based processing systems can process the data coming
-through various Flume sources. This currently supports Kafka 0.9.x series of
releases.</p>
-<p>This version of Flume no longer supports Older Versions (0.8.x) of
Kafka.</p>
+through various Flume sources.</p>
+<p>This currently supports Kafka server releases 0.10.1.0 or higher. Testing
was done up to 2.0.1, which was the highest available version at the time of the
release.</p>
<p>Required properties are marked in bold font.</p>
<table border="1" class="docutils">
<colgroup>
@@ -4471,10 +5077,13 @@ security provider, cipher suites, enable
<span class="na">a1.sinks.sink1.kafka.bootstrap.servers</span> <span
class="o">=</span> <span class="s">kafka-1:9093,kafka-2:9093,kafka-3:9093</span>
<span class="na">a1.sinks.sink1.kafka.topic</span> <span class="o">=</span>
<span class="s">mytopic</span>
<span class="na">a1.sinks.sink1.kafka.producer.security.protocol</span> <span
class="o">=</span> <span class="s">SSL</span>
+<span class="c"># optional, the global truststore can be used
alternatively</span>
<span class="na">a1.sinks.sink1.kafka.producer.ssl.truststore.location</span>
<span class="o">=</span> <span class="s">/path/to/truststore.jks</span>
<span class="na">a1.sinks.sink1.kafka.producer.ssl.truststore.password</span>
<span class="o">=</span> <span class="s"><password to access the
truststore></span>
</pre></div>
</div>
+<p>Specifying the truststore is optional here; the global truststore can be
used instead.
+For more details about the global SSL setup, see the <a class="reference
internal" href="#ssl-tls-support">SSL/TLS support</a> section.</p>
<p>Note: By default the property <tt class="docutils literal"><span
class="pre">ssl.endpoint.identification.algorithm</span></tt>
is not defined, so hostname verification is not performed.
In order to enable hostname verification, set the following properties</p>
@@ -4487,11 +5096,13 @@ against one of the following two fields:
<li>Common Name (CN) <a class="reference external"
href="https://tools.ietf.org/html/rfc6125#section-2.3">https://tools.ietf.org/html/rfc6125#section-2.3</a></li>
<li>Subject Alternative Name (SAN) <a class="reference external"
href="https://tools.ietf.org/html/rfc5280#section-4.2.1.6">https://tools.ietf.org/html/rfc5280#section-4.2.1.6</a></li>
</ol>
-<p>If client side authentication is also required then additionally the
following should be added to Flume agent configuration.
+<p>If client-side authentication is also required, then the following
additionally needs to be added to the Flume agent
+configuration; alternatively, the global SSL setup can be used (see the <a
class="reference internal" href="#ssl-tls-support">SSL/TLS support</a> section).
Each Flume agent has to have its client certificate which has to be trusted by
Kafka brokers either
individually or by their signature chain. Common example is to sign each
client certificate by a single Root CA
which in turn is trusted by Kafka brokers.</p>
-<div class="highlight-properties"><div class="highlight"><pre><span
class="na">a1.sinks.sink1.kafka.producer.ssl.keystore.location</span> <span
class="o">=</span> <span class="s">/path/to/client.keystore.jks</span>
+<div class="highlight-properties"><div class="highlight"><pre><span
class="c"># optional, the global keystore can be used alternatively</span>
+<span class="na">a1.sinks.sink1.kafka.producer.ssl.keystore.location</span>
<span class="o">=</span> <span class="s">/path/to/client.keystore.jks</span>
<span class="na">a1.sinks.sink1.kafka.producer.ssl.keystore.password</span>
<span class="o">=</span> <span class="s"><password to access the
keystore></span>
</pre></div>
</div>
@@ -4525,6 +5136,7 @@ for information on the JAAS file content
<span class="na">a1.sinks.sink1.kafka.producer.security.protocol</span> <span
class="o">=</span> <span class="s">SASL_SSL</span>
<span class="na">a1.sinks.sink1.kafka.producer.sasl.mechanism</span> <span
class="o">=</span> <span class="s">GSSAPI</span>
<span
class="na">a1.sinks.sink1.kafka.producer.sasl.kerberos.service.name</span>
<span class="o">=</span> <span class="s">kafka</span>
+<span class="c"># optional, the global truststore can be used
alternatively</span>
<span class="na">a1.sinks.sink1.kafka.producer.ssl.truststore.location</span>
<span class="o">=</span> <span class="s">/path/to/truststore.jks</span>
<span class="na">a1.sinks.sink1.kafka.producer.ssl.truststore.password</span>
<span class="o">=</span> <span class="s"><password to access the
truststore></span>
</pre></div>
@@ -4858,8 +5470,7 @@ replication, so in case an agent or a ka
<li>With Flume source and interceptor but no sink - it allows writing Flume
events into a Kafka topic, for use by other apps</li>
<li>With Flume sink, but no source - it is a low-latency, fault-tolerant way
to send events from Kafka to Flume sinks such as HDFS, HBase or Solr</li>
</ol>
-<p>This version of Flume requires Kafka version 0.9 or greater due to the
reliance on the Kafka clients shipped with that version. The configuration of
-the channel has changed compared to previous flume versions.</p>
+<p>This currently supports Kafka server releases 0.10.1.0 or higher. Testing
was done up to 2.0.1, which was the highest available version at the time of the
release.</p>
<p>The configuration parameters are organized as such:</p>
<ol class="arabic simple">
<li>Configuration values related to the channel generically are applied at the
channel config level, eg: a1.channel.k1.type =</li>
@@ -4910,33 +5521,26 @@ This should be true if Flume source is w
writing into the topic that the channel is using. Flume source messages to
Kafka can be parsed outside of Flume by using
org.apache.flume.source.avro.AvroFlumeEvent provided by the flume-ng-sdk
artifact</td>
</tr>
-<tr class="row-odd"><td>migrateZookeeperOffsets</td>
-<td>true</td>
-<td>When no Kafka stored offset is found, look up the offsets in Zookeeper and
commit them to Kafka.
-This should be true to support seamless Kafka client migration from older
versions of Flume. Once migrated this can be set
-to false, though that should generally not be required. If no Zookeeper offset
is found the kafka.consumer.auto.offset.reset
-configuration defines how offsets are handled.</td>
-</tr>
-<tr class="row-even"><td>pollTimeout</td>
+<tr class="row-odd"><td>pollTimeout</td>
<td>500</td>
<td>The amount of time (in milliseconds) to wait in the “poll()”
call of the consumer.
<a class="reference external"
href="https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long">https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long</a>)</td>
</tr>
-<tr class="row-odd"><td>defaultPartitionId</td>
+<tr class="row-even"><td>defaultPartitionId</td>
<td>–</td>
<td>Specifies a Kafka partition ID (integer) for all events in this channel to
be sent to, unless
overridden by <tt class="docutils literal"><span
class="pre">partitionIdHeader</span></tt>. By default, if this property is not
set, events will be
distributed by the Kafka Producer’s partitioner - including by <tt
class="docutils literal"><span class="pre">key</span></tt> if specified (or by a
partitioner specified by <tt class="docutils literal"><span
class="pre">kafka.partitioner.class</span></tt>).</td>
</tr>
-<tr class="row-even"><td>partitionIdHeader</td>
+<tr class="row-odd"><td>partitionIdHeader</td>
<td>–</td>
<td>When set, the producer will take the value of the field named using the
value of this property
from the event header and send the message to the specified partition of the
topic. If the
value represents an invalid partition the event will not be accepted into the
channel. If the header value
is present then this setting overrides <tt class="docutils literal"><span
class="pre">defaultPartitionId</span></tt>.</td>
</tr>
-<tr class="row-odd"><td>kafka.consumer.auto.offset.reset</td>
+<tr class="row-even"><td>kafka.consumer.auto.offset.reset</td>
<td>latest</td>
<td>What to do when there is no initial offset in Kafka or if the current
offset does not exist any more on the server
(e.g. because that data has been deleted):
@@ -4945,15 +5549,15 @@ latest: automatically reset the offset t
none: throw exception to the consumer if no previous offset is found for the
consumer’s group
anything else: throw exception to the consumer.</td>
</tr>
-<tr class="row-even"><td>kafka.producer.security.protocol</td>
+<tr class="row-odd"><td>kafka.producer.security.protocol</td>
<td>PLAINTEXT</td>
<td>Set to SASL_PLAINTEXT, SASL_SSL or SSL if writing to Kafka using some
level of security. See below for additional info on secure setup.</td>
</tr>
-<tr class="row-odd"><td>kafka.consumer.security.protocol</td>
+<tr class="row-even"><td>kafka.consumer.security.protocol</td>
<td>PLAINTEXT</td>
<td>Same as kafka.producer.security.protocol but for reading/consuming from
Kafka.</td>
</tr>
-<tr class="row-even"><td><em>more producer/consumer security props</em></td>
+<tr class="row-odd"><td><em>more producer/consumer security props</em></td>
<td> </td>
<td>If using SASL_PLAINTEXT, SASL_SSL or SSL refer to <a class="reference
external" href="http://kafka.apache.org/documentation.html#security">Kafka
security</a> for additional
properties that need to be set on producer/consumer.</td>
@@ -4963,9 +5567,9 @@ properties that need to be set on produc
<p>Deprecated Properties</p>
<table border="1" class="docutils">
<colgroup>
-<col width="19%" />
-<col width="15%" />
-<col width="66%" />
+<col width="18%" />
+<col width="14%" />
+<col width="68%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Property Name</th>
@@ -4992,6 +5596,13 @@ The format is comma separated list of ho
<td>false</td>
<td>Use kafka.consumer.auto.offset.reset</td>
</tr>
+<tr class="row-even"><td>migrateZookeeperOffsets</td>
+<td>true</td>
+<td>When no Kafka stored offset is found, look up the offsets in Zookeeper and
commit them to Kafka.
+This should be true to support seamless Kafka client migration from older
versions of Flume. Once migrated this can be set
+to false, though that should generally not be required. If no Zookeeper offset
is found the kafka.consumer.auto.offset.reset
+configuration defines how offsets are handled.</td>
+</tr>
</tbody>
</table>
<div class="admonition note">
@@ -5033,13 +5644,17 @@ security provider, cipher suites, enable
<span class="na">a1.channels.channel1.kafka.topic</span> <span
class="o">=</span> <span class="s">channel1</span>
<span class="na">a1.channels.channel1.kafka.consumer.group.id</span> <span
class="o">=</span> <span class="s">flume-consumer</span>
<span class="na">a1.channels.channel1.kafka.producer.security.protocol</span>
<span class="o">=</span> <span class="s">SSL</span>
+<span class="c"># optional, the global truststore can be used
alternatively</span>
<span
class="na">a1.channels.channel1.kafka.producer.ssl.truststore.location</span>
<span class="o">=</span> <span class="s">/path/to/truststore.jks</span>
<span
class="na">a1.channels.channel1.kafka.producer.ssl.truststore.password</span>
<span class="o">=</span> <span class="s"><password to access the
truststore></span>
<span class="na">a1.channels.channel1.kafka.consumer.security.protocol</span>
<span class="o">=</span> <span class="s">SSL</span>
+<span class="c"># optional, the global truststore can be used
alternatively</span>
<span
class="na">a1.channels.channel1.kafka.consumer.ssl.truststore.location</span>
<span class="o">=</span> <span class="s">/path/to/truststore.jks</span>
<span
class="na">a1.channels.channel1.kafka.consumer.ssl.truststore.password</span>
<span class="o">=</span> <span class="s"><password to access the
truststore></span>
</pre></div>
</div>
+<p>Specifying the truststore is optional here; the global truststore can be
used instead.
+For more details about the global SSL setup, see the <a class="reference
internal" href="#ssl-tls-support">SSL/TLS support</a> section.</p>
<p>Note: By default the property <tt class="docutils literal"><span
class="pre">ssl.endpoint.identification.algorithm</span></tt>
is not defined, so hostname verification is not performed.
In order to enable hostname verification, set the following properties</p>
@@ -5053,12 +5668,15 @@ against one of the following two fields:
<li>Common Name (CN) <a class="reference external"
href="https://tools.ietf.org/html/rfc6125#section-2.3">https://tools.ietf.org/html/rfc6125#section-2.3</a></li>
<li>Subject Alternative Name (SAN) <a class="reference external"
href="https://tools.ietf.org/html/rfc5280#section-4.2.1.6">https://tools.ietf.org/html/rfc5280#section-4.2.1.6</a></li>
</ol>
-<p>If client side authentication is also required then additionally the
following should be added to Flume agent configuration.
+<p>If client-side authentication is also required, then the following
additionally needs to be added to the Flume agent
+configuration; alternatively, the global SSL setup can be used (see the <a
class="reference internal" href="#ssl-tls-support">SSL/TLS support</a> section).
Each Flume agent has to have its client certificate which has to be trusted by
Kafka brokers either
individually or by their signature chain. Common example is to sign each
client certificate by a single Root CA
which in turn is trusted by Kafka brokers.</p>
-<div class="highlight-properties"><div class="highlight"><pre><span
class="na">a1.channels.channel1.kafka.producer.ssl.keystore.location</span>
<span class="o">=</span> <span class="s">/path/to/client.keystore.jks</span>
+<div class="highlight-properties"><div class="highlight"><pre><span
class="c"># optional, the global keystore can be used alternatively</span>
+<span
class="na">a1.channels.channel1.kafka.producer.ssl.keystore.location</span>
<span class="o">=</span> <span class="s">/path/to/client.keystore.jks</span>
<span
class="na">a1.channels.channel1.kafka.producer.ssl.keystore.password</span>
<span class="o">=</span> <span class="s"><password to access the
keystore></span>
+<span class="c"># optional, the global keystore can be used
alternatively</span>
<span
class="na">a1.channels.channel1.kafka.consumer.ssl.keystore.location</span>
<span class="o">=</span> <span class="s">/path/to/client.keystore.jks</span>
<span
class="na">a1.channels.channel1.kafka.consumer.ssl.keystore.password</span>
<span class="o">=</span> <span class="s"><password to access the
keystore></span>
</pre></div>
@@ -5099,11 +5717,13 @@ for information on the JAAS file content
<span class="na">a1.channels.channel1.kafka.producer.security.protocol</span>
<span class="o">=</span> <span class="s">SASL_SSL</span>
<span class="na">a1.channels.channel1.kafka.producer.sasl.mechanism</span>
<span class="o">=</span> <span class="s">GSSAPI</span>
<span
class="na">a1.channels.channel1.kafka.producer.sasl.kerberos.service.name</span>
<span class="o">=</span> <span class="s">kafka</span>
+<span class="c"># optional, the global truststore can be used
alternatively</span>
<span
class="na">a1.channels.channel1.kafka.producer.ssl.truststore.location</span>
<span class="o">=</span> <span class="s">/path/to/truststore.jks</span>
<span
class="na">a1.channels.channel1.kafka.producer.ssl.truststore.password</span>
<span class="o">=</span> <span class="s"><password to access the
truststore></span>
<span class="na">a1.channels.channel1.kafka.consumer.security.protocol</span>
<span class="o">=</span> <span class="s">SASL_SSL</span>
<span class="na">a1.channels.channel1.kafka.consumer.sasl.mechanism</span>
<span class="o">=</span> <span class="s">GSSAPI</span>
<span
class="na">a1.channels.channel1.kafka.consumer.sasl.kerberos.service.name</span>
<span class="o">=</span> <span class="s">kafka</span>
+<span class="c"># optional, the global truststore can be used
alternatively</span>
<span
class="na">a1.channels.channel1.kafka.consumer.ssl.truststore.location</span>
<span class="o">=</span> <span class="s">/path/to/truststore.jks</span>
<span
class="na">a1.channels.channel1.kafka.consumer.ssl.truststore.password</span>
<span class="o">=</span> <span class="s"><password to access the
truststore></span>
</pre></div>
@@ -5939,7 +6559,7 @@ This interceptor can preserve an existin
<td>–</td>
<td>The component type name, has to be <tt class="docutils literal"><span
class="pre">timestamp</span></tt> or the FQCN</td>
</tr>
-<tr class="row-odd"><td>header</td>
+<tr class="row-odd"><td>headerName</td>
<td>timestamp</td>
<td>The name of the header in which to place the generated timestamp.</td>
</tr>
@@ -6381,11 +7001,196 @@ polling rather than terminating.</p>
</div>
</div>
</div>
+<div class="section" id="configuration-filters">
+<h2>Configuration Filters<a class="headerlink" href="#configuration-filters"
title="Permalink to this headline">¶</a></h2>
+<p>Flume provides a tool for injecting sensitive or generated data into the
configuration
+in the form of configuration filters. A configuration key can be set as the
value of a configuration property,
+and it will be replaced by the configuration filter with the value it
+represents.</p>
+<div class="section" id="common-usage-of-config-filters">
+<h3>Common usage of config filters<a class="headerlink"
href="#common-usage-of-config-filters" title="Permalink to this
headline">¶</a></h3>
+<p>The format is similar to the Java Expression Language; however,
+it is currently not a fully working EL expression parser, just a format that
looks like it.</p>
+<div class="highlight-properties"><div class="highlight"><pre><span
class="na"><agent_name>.configfilters</span> <span class="o">=</span>
<span class="s"><filter_name></span>
+<span
class="na"><agent_name>.configfilters.<filter_name>.type</span>
<span class="o">=</span> <span class="s"><filter_type></span>
+
[... 1164 lines stripped ...]