Author: denes
Date: Wed Oct  4 16:22:29 2017
New Revision: 1811107

URL: http://svn.apache.org/viewvc?rev=1811107&view=rev
Log:
Update the website for Apache Flume 1.8.0


Added:
    flume/site/trunk/content/sphinx/releases/1.8.0.rst
Modified:
    flume/site/trunk/content/sphinx/FlumeDeveloperGuide.rst
    flume/site/trunk/content/sphinx/FlumeUserGuide.rst
    flume/site/trunk/content/sphinx/conf.py
    flume/site/trunk/content/sphinx/download.rst
    flume/site/trunk/content/sphinx/index.rst
    flume/site/trunk/content/sphinx/releases/index.rst

Modified: flume/site/trunk/content/sphinx/FlumeDeveloperGuide.rst
URL: http://svn.apache.org/viewvc/flume/site/trunk/content/sphinx/FlumeDeveloperGuide.rst?rev=1811107&r1=1811106&r2=1811107&view=diff
==============================================================================
--- flume/site/trunk/content/sphinx/FlumeDeveloperGuide.rst (original)
+++ flume/site/trunk/content/sphinx/FlumeDeveloperGuide.rst Wed Oct  4 16:22:29 2017
@@ -15,7 +15,7 @@
 
 
 ======================================
-Flume 1.7.0 Developer Guide
+Flume 1.8.0 Developer Guide
 ======================================
 
 Introduction
@@ -114,6 +114,18 @@ Please note that Flume builds requires t
 be in the path. You can download and install it by following the instructions
 `here <https://developers.google.com/protocol-buffers/>`_.
 
+Updating Protocol Buffer Version
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+File channel has a dependency on Protocol Buffer. When updating the version of Protocol Buffer
+used by Flume, it is necessary to regenerate the data access classes using the protoc compiler
+that is part of Protocol Buffer as follows.
+
+#. Install the desired version of Protocol Buffer on your local machine
+#. Update version of Protocol Buffer in pom.xml
+#. Generate new Protocol Buffer data access classes in Flume: ``cd flume-ng-channels/flume-file-channel; mvn -P compile-proto clean package -DskipTests``
+#. Add Apache license header to any of the generated files that are missing it
+#. Rebuild and test Flume:  ``cd ../..; mvn clean install``
+
 Developing custom components
 ----------------------------
 
@@ -573,23 +585,23 @@ Required properties are in **bold**.
 Property Name          Default           Description
 =====================  ================  ======================================================================
 source.type            embedded          The only available source is the embedded source.
-**channel.type**       --                Either ``memory`` or ``file`` which correspond
+**channel.type**       --                Either ``memory`` or ``file`` which correspond
                                         to MemoryChannel and FileChannel respectively.
 channel.*              --                Configuration options for the channel type requested,
                                         see MemoryChannel or FileChannel user guide for an exhaustive list.
 **sinks**              --                List of sink names
-**sink.type**          --                Property name must match a name in the list of sinks.
+**sink.type**          --                Property name must match a name in the list of sinks.
                                         Value must be ``avro``
-sink.*                 --                Configuration options for the sink.
+sink.*                 --                Configuration options for the sink.
                                         See AvroSink user guide for an exhaustive list,
                                         however note AvroSink requires at least hostname and port.
 **processor.type**     --                Either ``failover`` or ``load_balance`` which correspond
                                         to FailoverSinksProcessor and LoadBalancingSinkProcessor respectively.
 processor.*            --                Configuration options for the sink processor selected.
-                                        See FailoverSinksProcessor and LoadBalancingSinkProcessor
+                                        See FailoverSinksProcessor and LoadBalancingSinkProcessor
                                         user guide for an exhaustive list.
 source.interceptors    --                Space-separated list of interceptors
-source.interceptors.*  --                Configuration options for individual interceptors
+source.interceptors.*  --                Configuration options for individual interceptors
                                         specified in the source.interceptors property
 =====================  ================  ======================================================================
 

Modified: flume/site/trunk/content/sphinx/FlumeUserGuide.rst
URL: http://svn.apache.org/viewvc/flume/site/trunk/content/sphinx/FlumeUserGuide.rst?rev=1811107&r1=1811106&r2=1811107&view=diff
==============================================================================
--- flume/site/trunk/content/sphinx/FlumeUserGuide.rst (original)
+++ flume/site/trunk/content/sphinx/FlumeUserGuide.rst Wed Oct  4 16:22:29 2017
@@ -15,7 +15,7 @@
 
 
 ======================================
-Flume 1.7.0 User Guide
+Flume 1.8.0 User Guide
 ======================================
 
 Introduction
@@ -50,7 +50,7 @@ in the latest architecture.
 System Requirements
 -------------------
 
-#. Java Runtime Environment - Java 1.7 or later
+#. Java Runtime Environment - Java 1.8 or later
 #. Memory - Sufficient memory for configurations used by sources, channels or sinks
 #. Disk Space - Sufficient disk space for configurations used by channels or sinks
 #. Directory Permissions - Read/Write permissions for directories used by agent
@@ -234,6 +234,25 @@ The original Flume terminal will output
 
 Congratulations - you've successfully configured and deployed a Flume agent! Subsequent sections cover agent configuration in much more detail.
 
+Using environment variables in configuration files
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Flume has the ability to substitute environment variables in the configuration. For example::
+
+  a1.sources = r1
+  a1.sources.r1.type = netcat
+  a1.sources.r1.bind = 0.0.0.0
+  a1.sources.r1.port = ${NC_PORT}
+  a1.sources.r1.channels = c1
+
+Note that this currently works for values only, not for keys (i.e. only on the right-hand side of the ``=`` in configuration lines).
+
+This can be enabled via Java system properties on agent invocation by setting ``propertiesImplementation = org.apache.flume.node.EnvVarResolverProperties``.
+
+For example::
+
+  $ NC_PORT=44444 bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console -DpropertiesImplementation=org.apache.flume.node.EnvVarResolverProperties
+
+Note that the above is just an example; environment variables can be configured in other ways, including being set in ``conf/flume-env.sh``.
+
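The value-only substitution described above can be sketched in a few lines of Python. This is an illustrative helper, not Flume's actual `EnvVarResolverProperties` implementation: keys are left untouched, and `${NAME}` patterns in values are replaced from the process environment.

```python
import os
import re

# Hypothetical sketch of value-only ${VAR} substitution: only the right-hand
# side of key = value lines is rewritten; unknown variables stay as-is.
_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve(line: str) -> str:
    key, sep, value = line.partition("=")
    if not sep:                      # not a key=value line, leave untouched
        return line
    value = _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)
    return key + sep + value

os.environ["NC_PORT"] = "44444"
print(resolve("a1.sources.r1.port = ${NC_PORT}"))  # → a1.sources.r1.port = 44444
```

Since the key side is never touched, a line such as `${NC_PORT}.foo = bar` would keep its literal key, matching the "values only" restriction noted above.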
 Logging raw data
 ~~~~~~~~~~~~~~~~
 
@@ -357,8 +376,6 @@ There's an exec source that executes a g
 single 'line' of output ie. text followed by carriage return ('\\r') or line
 feed ('\\n') or both together.
 
-.. note:: Flume does not support tail as a source. One can wrap the tail command in an exec source to stream the file.
-
 Network streams
 ~~~~~~~~~~~~~~~
 
@@ -578,9 +595,9 @@ Weblog agent config:
   agent_foo.sinks.avro-forward-sink.channel = file-channel
 
   # avro sink properties
-  agent_foo.sources.avro-forward-sink.type = avro
-  agent_foo.sources.avro-forward-sink.hostname = 10.1.1.100
-  agent_foo.sources.avro-forward-sink.port = 10000
+  agent_foo.sinks.avro-forward-sink.type = avro
+  agent_foo.sinks.avro-forward-sink.hostname = 10.1.1.100
+  agent_foo.sinks.avro-forward-sink.port = 10000
 
   # configure other pieces
   #...
@@ -599,7 +616,7 @@ HDFS agent config:
   agent_foo.sources.avro-collection-source.channels = mem-channel
   agent_foo.sinks.hdfs-sink.channel = mem-channel
 
-  # avro sink properties
+  # avro source properties
   agent_foo.sources.avro-collection-source.type = avro
   agent_foo.sources.avro-collection-source.bind = 10.1.1.100
   agent_foo.sources.avro-collection-source.port = 10000
@@ -882,13 +899,9 @@ interceptors.*
              asynchronous interface such as ExecSource! As an extension of this
              warning - and to be completely clear - there is absolutely zero guarantee
              of event delivery when using this source. For stronger reliability
-             guarantees, consider the Spooling Directory Source or direct integration
+             guarantees, consider the Spooling Directory Source, Taildir Source or direct integration
              with Flume via the SDK.
 
-.. note:: You can use ExecSource to emulate TailSource from Flume 0.9x (flume og).
-          Just use unix command ``tail -F /full/path/to/your/file``. Parameter
-          -F is better in this case than -f as it will also follow file rotation.
-
 Example for agent named a1:
 
 .. code-block:: properties
@@ -941,6 +954,12 @@ batchSize                   100
 converter.type              DEFAULT      Class to use to convert messages to flume events. See below.
 converter.*                 --           Converter properties.
 converter.charset           UTF-8        Default converter only. Charset to use when converting JMS TextMessages to byte arrays.
+createDurableSubscription   false        Whether to create a durable subscription. A durable subscription can only be used with
+                                         destinationType topic. If true, "clientId" and "durableSubscriptionName"
+                                         have to be specified.
+clientId                    --           JMS client identifier set on Connection right after it is created.
+                                         Required for durable subscriptions.
+durableSubscriptionName     --           Name used to identify the durable subscription. Required for durable subscriptions.
 =========================   ===========  ==============================================================
 
 
@@ -1265,6 +1284,12 @@ useFlumeEventFormat                 fals
                                                  true to read events as the Flume Avro binary format. Used in conjunction with the same property
                                                  on the KafkaSink or with the parseAsFlumeEvent property on the Kafka Channel this will preserve
                                                  any Flume headers sent on the producing side.
+setTopicHeader                      true         When set to true, stores the topic of the retrieved message into a header, defined by the
+                                                 ``topicHeader`` property.
+topicHeader                         topic        Defines the name of the header in which to store the name of the topic the message was received
+                                                 from, if the ``setTopicHeader`` property is set to ``true``. Care should be taken if combining
+                                                 with the Kafka Sink ``topicHeader`` property so as to avoid sending the message back to the same
+                                                 topic in a loop.
 migrateZookeeperOffsets             true         When no Kafka stored offset is found, look up the offsets in Zookeeper and commit them to Kafka.
                                                  This should be true to support seamless Kafka client migration from older versions of Flume.
                                                  Once migrated this can be set to false, though that should generally not be required.
@@ -1351,13 +1376,13 @@ Example configuration with server side a
 
 .. code-block:: properties
 
-    a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
-    a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
-    a1.channels.channel1.kafka.topic = channel1
-    a1.channels.channel1.kafka.consumer.group.id = flume-consumer
-    a1.channels.channel1.kafka.consumer.security.protocol = SSL
-    a1.channels.channel1.kafka.consumer.ssl.truststore.location=/path/to/truststore.jks
-    a1.channels.channel1.kafka.consumer.ssl.truststore.password=<password to access the truststore>
+    a1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
+    a1.sources.source1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
+    a1.sources.source1.kafka.topics = mytopic
+    a1.sources.source1.kafka.consumer.group.id = flume-consumer
+    a1.sources.source1.kafka.consumer.security.protocol = SSL
+    a1.sources.source1.kafka.consumer.ssl.truststore.location=/path/to/truststore.jks
+    a1.sources.source1.kafka.consumer.ssl.truststore.password=<password to access the truststore>
 
 
 Note: By default the property ``ssl.endpoint.identification.algorithm``
@@ -1366,7 +1391,7 @@ In order to enable hostname verification
 
 .. code-block:: properties
 
-    a1.channels.channel1.kafka.consumer.ssl.endpoint.identification.algorithm=HTTPS
+    a1.sources.source1.kafka.consumer.ssl.endpoint.identification.algorithm=HTTPS
 
 Once enabled, clients will verify the server's fully qualified domain name (FQDN)
 against one of the following two fields:
@@ -1381,15 +1406,15 @@ which in turn is trusted by Kafka broker
 
 .. code-block:: properties
 
-    a1.channels.channel1.kafka.consumer.ssl.keystore.location=/path/to/client.keystore.jks
-    a1.channels.channel1.kafka.consumer.ssl.keystore.password=<password to access the keystore>
+    a1.sources.source1.kafka.consumer.ssl.keystore.location=/path/to/client.keystore.jks
+    a1.sources.source1.kafka.consumer.ssl.keystore.password=<password to access the keystore>
 
 If keystore and key use different password protection then ``ssl.key.password`` property will
 provide the required additional secret for both consumer keystores:
 
 .. code-block:: properties
 
-    a1.channels.channel1.kafka.consumer.ssl.key.password=<password to access the key>
+    a1.sources.source1.kafka.consumer.ssl.key.password=<password to access the key>
 
 
 **Kerberos and Kafka Source:**
@@ -1408,27 +1433,27 @@ Example secure configuration using SASL_
 
 .. code-block:: properties
 
-    a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
-    a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
-    a1.channels.channel1.kafka.topic = channel1
-    a1.channels.channel1.kafka.consumer.group.id = flume-consumer
-    a1.channels.channel1.kafka.consumer.security.protocol = SASL_PLAINTEXT
-    a1.channels.channel1.kafka.consumer.sasl.mechanism = GSSAPI
-    a1.channels.channel1.kafka.consumer.sasl.kerberos.service.name = kafka
+    a1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
+    a1.sources.source1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
+    a1.sources.source1.kafka.topics = mytopic
+    a1.sources.source1.kafka.consumer.group.id = flume-consumer
+    a1.sources.source1.kafka.consumer.security.protocol = SASL_PLAINTEXT
+    a1.sources.source1.kafka.consumer.sasl.mechanism = GSSAPI
+    a1.sources.source1.kafka.consumer.sasl.kerberos.service.name = kafka
 
 Example secure configuration using SASL_SSL:
 
 .. code-block:: properties
 
-    a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
-    a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
-    a1.channels.channel1.kafka.topic = channel1
-    a1.channels.channel1.kafka.consumer.group.id = flume-consumer
-    a1.channels.channel1.kafka.consumer.security.protocol = SASL_SSL
-    a1.channels.channel1.kafka.consumer.sasl.mechanism = GSSAPI
-    a1.channels.channel1.kafka.consumer.sasl.kerberos.service.name = kafka
-    a1.channels.channel1.kafka.consumer.ssl.truststore.location=/path/to/truststore.jks
-    a1.channels.channel1.kafka.consumer.ssl.truststore.password=<password to access the truststore>
+    a1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
+    a1.sources.source1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
+    a1.sources.source1.kafka.topics = mytopic
+    a1.sources.source1.kafka.consumer.group.id = flume-consumer
+    a1.sources.source1.kafka.consumer.security.protocol = SASL_SSL
+    a1.sources.source1.kafka.consumer.sasl.mechanism = GSSAPI
+    a1.sources.source1.kafka.consumer.sasl.kerberos.service.name = kafka
+    a1.sources.source1.kafka.consumer.ssl.truststore.location=/path/to/truststore.jks
+    a1.sources.source1.kafka.consumer.ssl.truststore.password=<password to access the truststore>
 
 
 Sample JAAS file. For reference of its content please see client config sections of the desired authentication mechanism (GSSAPI/PLAIN)
@@ -1456,8 +1481,8 @@ Also please make sure that the operating
     };
 
 
-NetCat Source
-~~~~~~~~~~~~~
+NetCat TCP Source
+~~~~~~~~~~~~~~~~~
 
 A netcat-like source that listens on a given port and turns each line of text
 into an event. Acts like ``nc -k -l [host] [port]``. In other words,
@@ -1493,12 +1518,48 @@ Example for agent named a1:
   a1.sources.r1.port = 6666
   a1.sources.r1.channels = c1
 
+NetCat UDP Source
+~~~~~~~~~~~~~~~~~
+
+Like the original NetCat (TCP) source, this source listens on a given port and
+turns each line of text into an event, which is sent via the connected channel.
+Acts like ``nc -u -k -l [host] [port]``.
+
+Required properties are in **bold**.
+
+===================  ===========  ===========================================
+Property Name        Default      Description
+===================  ===========  ===========================================
+**channels**         --
+**type**             --           The component type name, needs to be ``netcatudp``
+**bind**             --           Host name or IP address to bind to
+**port**             --           Port # to bind to
+remoteAddressHeader  --
+selector.type        replicating  replicating or multiplexing
+selector.*                        Depends on the selector.type value
+interceptors         --           Space-separated list of interceptors
+interceptors.*
+===================  ===========  ===========================================
+
+Example for agent named a1:
+
+.. code-block:: properties
+
+  a1.sources = r1
+  a1.channels = c1
+  a1.sources.r1.type = netcatudp
+  a1.sources.r1.bind = 0.0.0.0
+  a1.sources.r1.port = 6666
+  a1.sources.r1.channels = c1
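Conceptually, the UDP variant above can be approximated in a few lines of Python. This is a rough sketch, not Flume's implementation: each received datagram becomes one event body, with trailing line endings stripped.

```python
import socket

def make_source(host: str = "0.0.0.0", port: int = 6666) -> socket.socket:
    """Bind a UDP socket, like a netcatudp source listening on host:port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    return sock

def next_event(sock: socket.socket) -> bytes:
    """Turn one received datagram into an event body.

    Flume would additionally record the sender address when the
    remoteAddressHeader property is set; this sketch only keeps the body.
    """
    data, _addr = sock.recvfrom(65535)
    return data.rstrip(b"\r\n")
```

A quick loopback exercise: bind with port 0 (ephemeral port), send `b"hello flume\n"` to it with a second UDP socket, and `next_event` returns `b"hello flume"`.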
+
 Sequence Generator Source
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
 A simple sequence generator that continuously generates events with a counter that starts from 0,
 increments by 1 and stops at totalEvents. Retries when it can't send events to the channel. Useful
-mainly for testing. Required properties are in **bold**.
+mainly for testing. During retries it keeps the body of the retried messages the same as before so
+that the number of unique events - after de-duplication at destination - is expected to be
+equal to the specified ``totalEvents``. Required properties are in **bold**.
 
 ==============  ===============  ========================================
 Property Name   Default          Description
@@ -1509,7 +1570,7 @@ selector.type                    replica
 selector.*      replicating      Depends on the selector.type value
 interceptors    --               Space-separated list of interceptors
 interceptors.*
-batchSize       1
+batchSize       1                Number of events to attempt to process per request loop.
 totalEvents     Long.MAX_VALUE   Number of unique events sent by the source.
 ==============  ===============  ========================================
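The retry semantics described above can be sketched as follows (illustrative Python, not the actual source code): the counter only advances once the channel accepts the event, so a retried event keeps the same body and the destination ends up with exactly `totalEvents` unique bodies after de-duplication.

```python
import itertools

def sequence_source(total_events: int, channel_put):
    """Emit bodies "0", "1", ..., retrying the same body until channel_put
    succeeds, so exactly total_events unique bodies are delivered."""
    counter = 0
    while counter < total_events:
        event = str(counter)
        if channel_put(event):       # a failed put is retried with the same body
            counter += 1

# A flaky channel that rejects every other put, to exercise the retry path:
flaky = itertools.cycle([False, True])
delivered = []
def put(event):
    ok = next(flaky)
    if ok:
        delivered.append(event)
    return ok

sequence_source(5, put)
print(delivered)  # → ['0', '1', '2', '3', '4']
```

Even though half the puts fail, no counter value is skipped or duplicated, which is the property the paragraph above describes.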
 
@@ -1997,7 +2058,7 @@ hdfs.fileType           SequenceFile  Fi
                                       (2)CompressedStream requires set hdfs.codeC with an available codeC
 hdfs.maxOpenFiles       5000          Allow only this number of open files. If this number is exceeded, the oldest file is closed.
 hdfs.minBlockReplicas   --            Specify minimum number of replicas per HDFS block. If not specified, it comes from the default Hadoop config in the classpath.
-hdfs.writeFormat        --            Format for sequence file records. One of "Text" or "Writable" (the default).
+hdfs.writeFormat        Writable      Format for sequence file records. One of ``Text`` or ``Writable``. Set to ``Text`` before creating data files with Flume, otherwise those files cannot be read by either Apache Impala (incubating) or Apache Hive.
 hdfs.callTimeout        10000         Number of milliseconds allowed for HDFS operations, such as open, write, flush, close.
                                       This number should be increased if many HDFS timeout operations are occurring.
 hdfs.threadsPoolSize    10            Number of threads per HDFS sink for HDFS IO ops (open, write, etc.)
@@ -2712,6 +2773,8 @@ kafka.topic                         defa
                                                          messages will be published to this topic.
                                                          If the event header contains a "topic" field, the event will be published to that topic
                                                          overriding the topic configured here.
+                                                         Arbitrary header substitution is supported, e.g. %{header} is replaced with the value of the event header named "header".
+                                                         (If using the substitution, it is recommended to set the "auto.create.topics.enable" property of the Kafka broker to true.)
 flumeBatchSize                      100                  How many messages to process in one batch. Larger batches improve throughput while adding latency.
 kafka.producer.acks                 1                    How many replicas must acknowledge a message before it is considered successfully written.
                                                          Accepted values are 0 (Never wait for acknowledgement), 1 (wait for leader only), -1 (wait for all replicas)
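The `%{header}` topic substitution described above can be sketched like this (a hypothetical helper for illustration, not Flume's actual code): placeholders naming event headers are replaced with the header values, and unknown headers are left untouched.

```python
import re

def resolve_topic(configured_topic: str, headers: dict) -> str:
    """Replace %{name} placeholders in a configured topic with the value of
    the event header of that name; placeholders for missing headers stay as-is."""
    return re.sub(r"%\{(\w+)\}",
                  lambda m: headers.get(m.group(1), m.group(0)),
                  configured_topic)

print(resolve_topic("logs-%{env}", {"env": "prod"}))  # → logs-prod
```

Since the resolved topic depends on per-event headers, topics may appear that were never created up front, which is why enabling automatic topic creation on the broker is suggested above.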
@@ -2728,6 +2791,9 @@ partitionIdHeader                   --
                                                          from the event header and send the message to the specified partition of the topic. If the
                                                          value represents an invalid partition, an EventDeliveryException will be thrown. If the header value
                                                          is present then this setting overrides ``defaultPartitionId``.
+allowTopicOverride                  true                 When set, the sink will allow a message to be produced into a topic specified by the ``topicHeader`` property (if provided).
+topicHeader                         topic                When set in conjunction with ``allowTopicOverride``, will produce a message into the value of the header named using the value of this property.
+                                                         Care should be taken when using in conjunction with the Kafka Source ``topicHeader`` property to avoid creating a loopback.
 kafka.producer.security.protocol    PLAINTEXT            Set to SASL_PLAINTEXT, SASL_SSL or SSL if writing to Kafka using some level of security. See below for additional info on secure setup.
 *more producer security props*                           If using SASL_PLAINTEXT, SASL_SSL or SSL refer to `Kafka security <http://kafka.apache.org/documentation.html#security>`_ for additional
                                                          properties that need to be set on producer.
@@ -2773,7 +2839,7 @@ argument.
     a1.sinks.k1.kafka.flumeBatchSize = 20
     a1.sinks.k1.kafka.producer.acks = 1
     a1.sinks.k1.kafka.producer.linger.ms = 1
-    a1.sinks.ki.kafka.producer.compression.type = snappy
+    a1.sinks.k1.kafka.producer.compression.type = snappy
 
 
 **Security and Kafka Sink:**
@@ -2807,12 +2873,12 @@ Example configuration with server side a
 
 .. code-block:: properties
 
-    a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
-    a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
-    a1.channels.channel1.kafka.topic = channel1
-    a1.channels.channel1.kafka.producer.security.protocol = SSL
-    a1.channels.channel1.kafka.producer.ssl.truststore.location = /path/to/truststore.jks
-    a1.channels.channel1.kafka.producer.ssl.truststore.password = <password to access the truststore>
+    a1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
+    a1.sinks.sink1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
+    a1.sinks.sink1.kafka.topic = mytopic
+    a1.sinks.sink1.kafka.producer.security.protocol = SSL
+    a1.sinks.sink1.kafka.producer.ssl.truststore.location = /path/to/truststore.jks
+    a1.sinks.sink1.kafka.producer.ssl.truststore.password = <password to access the truststore>
 
 
 Note: By default the property ``ssl.endpoint.identification.algorithm``
@@ -2821,7 +2887,7 @@ In order to enable hostname verification
 
 .. code-block:: properties
 
-    a1.channels.channel1.kafka.producer.ssl.endpoint.identification.algorithm = HTTPS
+    a1.sinks.sink1.kafka.producer.ssl.endpoint.identification.algorithm = HTTPS
 
 Once enabled, clients will verify the server's fully qualified domain name (FQDN)
 against one of the following two fields:
@@ -2836,15 +2902,15 @@ which in turn is trusted by Kafka broker
 
 .. code-block:: properties
 
-    a1.channels.channel1.kafka.producer.ssl.keystore.location = /path/to/client.keystore.jks
-    a1.channels.channel1.kafka.producer.ssl.keystore.password = <password to access the keystore>
+    a1.sinks.sink1.kafka.producer.ssl.keystore.location = /path/to/client.keystore.jks
+    a1.sinks.sink1.kafka.producer.ssl.keystore.password = <password to access the keystore>
 
 If keystore and key use different password protection then ``ssl.key.password`` property will
 provide the required additional secret for producer keystore:
 
 .. code-block:: properties
 
-    a1.channels.channel1.kafka.producer.ssl.key.password = <password to access the key>
+    a1.sinks.sink1.kafka.producer.ssl.key.password = <password to access the key>
 
 
 **Kerberos and Kafka Sink:**
@@ -2863,26 +2929,26 @@ Example secure configuration using SASL_
 
 .. code-block:: properties
 
-    a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
-    a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
-    a1.channels.channel1.kafka.topic = channel1
-    a1.channels.channel1.kafka.producer.security.protocol = SASL_PLAINTEXT
-    a1.channels.channel1.kafka.producer.sasl.mechanism = GSSAPI
-    a1.channels.channel1.kafka.producer.sasl.kerberos.service.name = kafka
+    a1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
+    a1.sinks.sink1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
+    a1.sinks.sink1.kafka.topic = mytopic
+    a1.sinks.sink1.kafka.producer.security.protocol = SASL_PLAINTEXT
+    a1.sinks.sink1.kafka.producer.sasl.mechanism = GSSAPI
+    a1.sinks.sink1.kafka.producer.sasl.kerberos.service.name = kafka
 
 
 Example secure configuration using SASL_SSL:
 
 .. code-block:: properties
 
-    a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
-    a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
-    a1.channels.channel1.kafka.topic = channel1
-    a1.channels.channel1.kafka.producer.security.protocol = SASL_SSL
-    a1.channels.channel1.kafka.producer.sasl.mechanism = GSSAPI
-    a1.channels.channel1.kafka.producer.sasl.kerberos.service.name = kafka
-    a1.channels.channel1.kafka.producer.ssl.truststore.location = /path/to/truststore.jks
-    a1.channels.channel1.kafka.producer.ssl.truststore.password = <password to access the truststore>
+    a1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
+    a1.sinks.sink1.kafka.bootstrap.servers = kafka-1:9093,kafka-2:9093,kafka-3:9093
+    a1.sinks.sink1.kafka.topic = mytopic
+    a1.sinks.sink1.kafka.producer.security.protocol = SASL_SSL
+    a1.sinks.sink1.kafka.producer.sasl.mechanism = GSSAPI
+    a1.sinks.sink1.kafka.producer.sasl.kerberos.service.name = kafka
+    a1.sinks.sink1.kafka.producer.ssl.truststore.location = /path/to/truststore.jks
+    a1.sinks.sink1.kafka.producer.ssl.truststore.password = <password to access the truststore>
 
 
 Sample JAAS file. For reference of its content please see client config sections of the desired authentication mechanism (GSSAPI/PLAIN)
@@ -2901,6 +2967,74 @@ that the operating system user of the Fl
     };
 
 
+HTTP Sink
+~~~~~~~~~
+
+This sink takes events from the channel and sends them to a remote service
+using an HTTP POST request. The event content is sent as the POST body.
+
+Error handling behaviour of this sink depends on the HTTP response returned
+by the target server. The sink backoff/ready status is configurable, as is the
+transaction commit/rollback result and whether the event contributes to the
+successful event drain count.
+
+Any malformed HTTP response returned by the server where the status code is
+not readable will result in a backoff signal and the event is not consumed
+from the channel.
+
+Required properties are in **bold**.
+
+========================== ================= ===========================================================================================
+Property Name              Default           Description
+========================== ================= ===========================================================================================
+**channel**                --
+**type**                   --                The component type name, needs to be ``http``.
+**endpoint**               --                The fully qualified URL endpoint to POST to
+connectTimeout             5000              The socket connection timeout in milliseconds
+requestTimeout             5000              The maximum request processing time in milliseconds
+contentTypeHeader          text/plain        The HTTP Content-Type header
+acceptHeader               text/plain        The HTTP Accept header value
+defaultBackoff             true              Whether to backoff by default on receiving all HTTP status codes
+defaultRollback            true              Whether to rollback by default on receiving all HTTP status codes
+defaultIncrementMetrics    false             Whether to increment metrics by default on receiving all HTTP status codes
+backoff.CODE               --                Configures a specific backoff for an individual (e.g. 200) code or a group (e.g. 2XX) code
+rollback.CODE              --                Configures a specific rollback for an individual (e.g. 200) code or a group (e.g. 2XX) code
+incrementMetrics.CODE      --                Configures a specific metrics increment for an individual (e.g. 200) code or a group (e.g. 2XX) code
+========================== ================= ===========================================================================================
+
+Note that the most specific HTTP status code match is used for the backoff,
+rollback and incrementMetrics configuration options. If there are configuration
+values for both 2XX and 200 status codes, then 200 HTTP codes will use the 200
+value, and all other HTTP codes in the 201-299 range will use the 2XX value.
+
+Any empty or null events are consumed without any request being made to the
+HTTP endpoint.
+
+Example for agent named a1:
+
+.. code-block:: properties
+
+  a1.channels = c1
+  a1.sinks = k1
+  a1.sinks.k1.type = http
+  a1.sinks.k1.channel = c1
+  a1.sinks.k1.endpoint = http://localhost:8080/someuri
+  a1.sinks.k1.connectTimeout = 2000
+  a1.sinks.k1.requestTimeout = 2000
+  a1.sinks.k1.acceptHeader = application/json
+  a1.sinks.k1.contentTypeHeader = application/json
+  a1.sinks.k1.defaultBackoff = true
+  a1.sinks.k1.defaultRollback = true
+  a1.sinks.k1.defaultIncrementMetrics = false
+  a1.sinks.k1.backoff.4XX = false
+  a1.sinks.k1.rollback.4XX = false
+  a1.sinks.k1.incrementMetrics.4XX = true
+  a1.sinks.k1.backoff.200 = false
+  a1.sinks.k1.rollback.200 = false
+  a1.sinks.k1.incrementMetrics.200 = true
+
+
 Custom Sink
 ~~~~~~~~~~~
 
@@ -3062,7 +3196,7 @@ pollTimeout
                                                                      https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long)
 defaultPartitionId                       --                          Specifies a Kafka partition ID (integer) for all events in this channel to be sent to, unless
                                                                      overriden by ``partitionIdHeader``. By default, if this property is not set, events will be
-                                                                     distributed by the Kafka Producer's partitioner - including by ``key`` if specified (or by a
+                                                                     distributed by the Kafka Producer's partitioner - including by ``key`` if specified (or by a 
                                                                      partitioner specified by ``kafka.partitioner.class``).
 partitionIdHeader                        --                          When set, the producer will take the value of the field named using the value of this property
                                                                      from the event header and send the message to the specified partition of the topic. If the
@@ -3831,15 +3965,16 @@ Timestamp Interceptor
 ~~~~~~~~~~~~~~~~~~~~~
 
 This interceptor inserts into the event headers, the time in millis at which it processes the event. This interceptor
-inserts a header with key ``timestamp`` whose value is the relevant timestamp. This interceptor
-can preserve an existing timestamp if it is already present in the configuration.
+inserts a header with key ``timestamp`` (or as specified by the ``header`` property) whose value is the relevant timestamp.
+This interceptor can preserve an existing timestamp if it is already present in the event headers.
 
-================  =======  ========================================================================
-Property Name     Default  Description
-================  =======  ========================================================================
-**type**          --       The component type name, has to be ``timestamp`` or the FQCN
-preserveExisting  false    If the timestamp already exists, should it be preserved - true or false
-================  =======  ========================================================================
+================  =========  ========================================================================
+Property Name     Default    Description
+================  =========  ========================================================================
+**type**          --         The component type name, has to be ``timestamp`` or the FQCN
+header            timestamp  The name of the header in which to place the generated timestamp.
+preserveExisting  false      If the timestamp already exists, should it be preserved - true or false
+================  =========  ========================================================================
 
 Example for agent named a1:
 
@@ -3875,7 +4010,6 @@ Example for agent named a1:
   a1.channels = c1
   a1.sources.r1.interceptors = i1
   a1.sources.r1.interceptors.i1.type = host
-  a1.sources.r1.interceptors.i1.hostHeader = hostname
 
 Static Interceptor
 ~~~~~~~~~~~~~~~~~~
@@ -3907,6 +4041,25 @@ Example for agent named a1:
   a1.sources.r1.interceptors.i1.key = datacenter
   a1.sources.r1.interceptors.i1.value = NEW_YORK
 
+
+Remove Header Interceptor
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This interceptor manipulates Flume event headers by removing one or more headers. It can remove a statically defined header, headers matching a regular expression, or headers in a list. If none of these is defined, or if no header matches the criteria, the Flume events are not modified.
+
+Note that if only one header needs to be removed, specifying it by name provides performance benefits over the other two methods.
+
+=====================  ===========  ===============================================================
+Property Name          Default      Description
+=====================  ===========  ===============================================================
+**type**               --           The component type name has to be ``remove_header``
+withName               --           Name of the header to remove
+fromList               --           List of headers to remove, separated with the separator specified by ``fromListSeparator``
+fromListSeparator      \\s*,\\s*    Regular expression used to separate multiple header names in the list specified by ``fromList``. Default is a comma surrounded by any number of whitespace characters
+matching               --           All headers whose names match this regular expression are removed
+=====================  ===========  ===============================================================
+
+
 UUID Interceptor
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4101,7 +4254,7 @@ Log4J Appender
 
 Appends Log4j events to a flume agent's avro source. A client using this
 appender must have the flume-ng-sdk in the classpath (eg,
-flume-ng-sdk-1.8.0-SNAPSHOT.jar).
+flume-ng-sdk-1.8.0.jar).
 Required properties are in **bold**.
 
 =====================  =======  ==================================================================================
@@ -4165,7 +4318,7 @@ Load Balancing Log4J Appender
 
 Appends Log4j events to a list of flume agent's avro source. A client using this
 appender must have the flume-ng-sdk in the classpath (eg,
-flume-ng-sdk-1.8.0-SNAPSHOT.jar). This appender supports a round-robin and random
+flume-ng-sdk-1.8.0.jar). This appender supports a round-robin and random
 scheme for performing the load balancing. It also supports a configurable backoff
 timeout so that down agents are removed temporarily from the set of hosts
 Required properties are in **bold**.

Modified: flume/site/trunk/content/sphinx/conf.py
URL: http://svn.apache.org/viewvc/flume/site/trunk/content/sphinx/conf.py?rev=1811107&r1=1811106&r2=1811107&view=diff
==============================================================================
--- flume/site/trunk/content/sphinx/conf.py (original)
+++ flume/site/trunk/content/sphinx/conf.py Wed Oct  4 16:22:29 2017
@@ -17,6 +17,7 @@
 
 import sys
 import os
+from datetime import date
 
 # eventlet/gevent should not monkey patch anything.
 os.environ["GEVENT_NOPATCH"] = "yes"
@@ -62,7 +63,7 @@ master_doc = 'index'
 
 # General information about the project.
 project = 'Apache Flume'
-copyright = '2009-2012 The Apache Software Foundation. Apache Flume, Flume, Apache, the Apache feather logo, and the Apache Flume project logo are trademarks of The Apache Software Foundation.'
+copyright = '2009-%s The Apache Software Foundation. Apache Flume, Flume, Apache, the Apache feather logo, and the Apache Flume project logo are trademarks of The Apache Software Foundation.' % date.today().year
 
 keep_warnings = True
 
@@ -138,4 +139,4 @@ html_sidebars = {
 pdf_documents = [
   ('FlumeUserGuide', 'FlumeUserGuide.tex', 'Flume User Guide', 'Apache Flume'),
   ('FlumeDeveloperGuide', 'FlumeDeveloperGuide.tex', 'Flume Developer Guide', 'Apache Flume'),
-]
\ No newline at end of file
+]

Modified: flume/site/trunk/content/sphinx/download.rst
URL: http://svn.apache.org/viewvc/flume/site/trunk/content/sphinx/download.rst?rev=1811107&r1=1811106&r2=1811107&view=diff
==============================================================================
--- flume/site/trunk/content/sphinx/download.rst (original)
+++ flume/site/trunk/content/sphinx/download.rst Wed Oct  4 16:22:29 2017
@@ -10,8 +10,8 @@ originals on the main distribution serve
 
 .. csv-table::
 
-   "Apache Flume binary (tar.gz)",  `apache-flume-1.7.0-bin.tar.gz <http://www.apache.org/dyn/closer.lua/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz>`_, `apache-flume-1.7.0-bin.tar.gz.md5 <http://www.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz.md5>`_, `apache-flume-1.7.0-bin.tar.gz.sha1 <http://www.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz.sha1>`_, `apache-flume-1.7.0-bin.tar.gz.asc <http://www.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz.asc>`_
-  "Apache Flume source (tar.gz)",  `apache-flume-1.7.0-src.tar.gz <http://www.apache.org/dyn/closer.lua/flume/1.7.0/apache-flume-1.7.0-src.tar.gz>`_, `apache-flume-1.7.0-src.tar.gz.md5 <http://www.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-src.tar.gz.md5>`_, `apache-flume-1.7.0-src.tar.gz.sha1 <http://www.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-src.tar.gz.sha1>`_, `apache-flume-1.7.0-src.tar.gz.asc <http://www.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-src.tar.gz.asc>`_
+   "Apache Flume binary (tar.gz)",  `apache-flume-1.8.0-bin.tar.gz <http://www.apache.org/dyn/closer.lua/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz>`_, `apache-flume-1.8.0-bin.tar.gz.md5 <http://www.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz.md5>`_, `apache-flume-1.8.0-bin.tar.gz.sha1 <http://www.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz.sha1>`_, `apache-flume-1.8.0-bin.tar.gz.asc <http://www.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz.asc>`_
+   "Apache Flume source (tar.gz)",  `apache-flume-1.8.0-src.tar.gz <http://www.apache.org/dyn/closer.lua/flume/1.8.0/apache-flume-1.8.0-src.tar.gz>`_, `apache-flume-1.8.0-src.tar.gz.md5 <http://www.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-src.tar.gz.md5>`_, `apache-flume-1.8.0-src.tar.gz.sha1 <http://www.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-src.tar.gz.sha1>`_, `apache-flume-1.8.0-src.tar.gz.asc <http://www.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-src.tar.gz.asc>`_
 
 It is essential that you verify the integrity of the downloaded files using the PGP or MD5 signatures. Please read
 `Verifying Apache HTTP Server Releases <http://httpd.apache.org/dev/verification.html>`_ for more information on
@@ -23,9 +23,9 @@ as well as the asc signature file for th
 Then verify the signatures using::
 
     % gpg --import KEYS
-    % gpg --verify apache-flume-1.7.0-src.tar.gz.asc
+    % gpg --verify apache-flume-1.8.0-src.tar.gz.asc
 
-Apache Flume 1.7.0 is signed by Bessenyei Balázs Donát 35EAD82D
+Apache Flume 1.8.0 is signed by Denes Arvay 4199ACFF
 
 Alternatively, you can verify the MD5 or SHA1 signatures of the files. A program called md5, md5sum, or shasum is included in many
 Unix distributions for this purpose.
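+
+For example, the SHA1 checksum of the source tarball can be compared against the published ``.sha1`` file (a sketch; the available tool name varies by platform)::
+
+    % shasum apache-flume-1.8.0-src.tar.gz
+    % cat apache-flume-1.8.0-src.tar.gz.sha1
+
+The two printed hash values must match.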

Modified: flume/site/trunk/content/sphinx/index.rst
URL: http://svn.apache.org/viewvc/flume/site/trunk/content/sphinx/index.rst?rev=1811107&r1=1811106&r2=1811107&view=diff
==============================================================================
--- flume/site/trunk/content/sphinx/index.rst (original)
+++ flume/site/trunk/content/sphinx/index.rst Wed Oct  4 16:22:29 2017
@@ -33,6 +33,72 @@ application.
 
 .. raw:: html
 
+   <h3>October 4, 2017 - Apache Flume 1.8.0 Released</h3>
+
+The Apache Flume team is pleased to announce the release of Flume 1.8.0.
+
+Flume is a distributed, reliable, and available service for efficiently
+collecting, aggregating, and moving large amounts of streaming event data.
+
+Version 1.8.0 is the eleventh Flume release as an Apache top-level project. Flume
+1.8.0 is stable, production-ready software, and is backwards-compatible with
+previous versions of the Flume 1.x codeline.
+
+Several months of active development went into this release: about 80 patches were committed since 1.7.0, representing many features, enhancements, and bug fixes. While the full change log can be found on the 1.8.0 release page (link below), here are a few new feature highlights:
+
+    * Add Interceptor to remove headers from event
+    * Provide netcat UDP source as alternative to TCP
+    * Support environment variables in configuration files
+
+Below is the list of people (from Git logs) who submitted and/or reviewed
+improvements to Flume during the 1.8.0 development cycle:
+
+* Andras Beni
+* Ashish Paliwal
+* Attila Simon
+* Ben Wheeler
+* Bessenyei Balázs Donát
+* Chris Horrocks
+* Denes Arvay
+* Ferenc Szabo
+* Gabriel Commeau
+* Hari Shreedharan
+* Jeff Holoman
+* Lior Zeno
+* Marcell Hegedus
+* Mike Percy
+* Miklos Csanady
+* Peter Ableda
+* Peter Chen
+* Ping Wang
+* Pravin D'silva
+* Robin Wang
+* Roshan Naik
+* Satoshi Iijima
+* Shang Wu
+* Siddharth Ahuja
+* Takafumi Saito
+* TeddyBear1314
+* Theodore michael Malaska
+* Tristan Stevens
+* dengkai02
+* eskrm
+* filippovmn
+* loleek
+
+The full change log and documentation are available on the
+`Flume 1.8.0 release page <releases/1.8.0.html>`__.
+
+This release can be downloaded from the Flume `Download <download.html>`__ page.
+
+Your contributions, feedback, help and support make Flume better!
+For more information on how to report problems or contribute,
+please visit our `Get Involved <getinvolved.html>`__ page.
+
+The Apache Flume Team
+
+.. raw:: html
+
    <h3>October 17, 2016 - Apache Flume 1.7.0 Released</h3>
 
 The Apache Flume team is pleased to announce the release of Flume 1.7.0.

Added: flume/site/trunk/content/sphinx/releases/1.8.0.rst
URL: http://svn.apache.org/viewvc/flume/site/trunk/content/sphinx/releases/1.8.0.rst?rev=1811107&view=auto
==============================================================================
--- flume/site/trunk/content/sphinx/releases/1.8.0.rst (added)
+++ flume/site/trunk/content/sphinx/releases/1.8.0.rst Wed Oct  4 16:22:29 2017
@@ -0,0 +1,80 @@
+===============
+Version 1.8.0
+===============
+
+.. rubric:: Status of this release
+
+Apache Flume 1.8.0 is the eleventh release of Flume as an Apache top-level project
+(TLP). Apache Flume 1.8.0 is production-ready software.
+
+.. rubric:: Release Documentation
+
+* `Flume 1.8.0 User Guide <../FlumeUserGuide.html>`__ (also in `pdf <content/1.8.0/FlumeUserGuide.pdf>`__)
+* `Flume 1.8.0 Developer Guide <../FlumeDeveloperGuide.html>`__ (also in `pdf <content/1.8.0/FlumeDeveloperGuide.pdf>`__)
+* `Flume 1.8.0 API Documentation <content/1.8.0/apidocs/index.html>`__
+
+.. rubric:: Changes
+
+Release Notes - Flume - Version v1.8.0
+
+** New Feature
+    * [`FLUME-2171 <https://issues.apache.org/jira/browse/FLUME-2171>`__] - Add Interceptor to remove headers from event
+    * [`FLUME-2917 <https://issues.apache.org/jira/browse/FLUME-2917>`__] - Provide netcat UDP source as alternative to TCP
+    * [`FLUME-2993 <https://issues.apache.org/jira/browse/FLUME-2993>`__] - Support environment variables in configuration files
+
+** Improvement
+    * [`FLUME-1520 <https://issues.apache.org/jira/browse/FLUME-1520>`__] - Timestamp interceptor should support custom headers
+    * [`FLUME-2945 <https://issues.apache.org/jira/browse/FLUME-2945>`__] - Bump java target version to 1.8
+    * [`FLUME-3020 <https://issues.apache.org/jira/browse/FLUME-3020>`__] - Improve HDFSEventSink Escape Ingestion by more then 10x by not getting InetAddress on every record
+    * [`FLUME-3025 <https://issues.apache.org/jira/browse/FLUME-3025>`__] - Expose FileChannel.open on JMX
+    * [`FLUME-3072 <https://issues.apache.org/jira/browse/FLUME-3072>`__] - Add IP address to headers in flume log4j appender
+    * [`FLUME-3092 <https://issues.apache.org/jira/browse/FLUME-3092>`__] - Extend the FileChannel's monitoring metrics
+    * [`FLUME-3100 <https://issues.apache.org/jira/browse/FLUME-3100>`__] - Support arbitrary header substitution for topic of Kafka
+    * [`FLUME-3144 <https://issues.apache.org/jira/browse/FLUME-3144>`__] - Improve Log4jAppender's performance by allowing logging collection of messages
+
+** Bug
+    * [`FLUME-2620 <https://issues.apache.org/jira/browse/FLUME-2620>`__] - File channel throws NullPointerException if a header value is null
+    * [`FLUME-2752 <https://issues.apache.org/jira/browse/FLUME-2752>`__] - Flume AvroSource will leak the memory and the OOM will be happened.
+    * [`FLUME-2812 <https://issues.apache.org/jira/browse/FLUME-2812>`__] - Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.Error: Maximum permit count exceeded
+    * [`FLUME-2857 <https://issues.apache.org/jira/browse/FLUME-2857>`__] - Kafka Source/Channel/Sink does not restore default values when live update config
+    * [`FLUME-2905 <https://issues.apache.org/jira/browse/FLUME-2905>`__] - NetcatSource - Socket not closed when an exception is encountered during start() leading to file descriptor leaks
+    * [`FLUME-2991 <https://issues.apache.org/jira/browse/FLUME-2991>`__] - ExecSource command execution starts before starting the sourceCounter
+    * [`FLUME-3027 <https://issues.apache.org/jira/browse/FLUME-3027>`__] - Kafka Channel should clear offsets map after commit
+    * [`FLUME-3031 <https://issues.apache.org/jira/browse/FLUME-3031>`__] - sequence source should reset its counter for event body on channel exception
+    * [`FLUME-3043 <https://issues.apache.org/jira/browse/FLUME-3043>`__] - KafkaSink SinkCallback throws NullPointerException when Log4J level is debug
+    * [`FLUME-3046 <https://issues.apache.org/jira/browse/FLUME-3046>`__] - Kafka Sink and Source Configuration Improvements
+    * [`FLUME-3049 <https://issues.apache.org/jira/browse/FLUME-3049>`__] - Wrapping the exception into SecurityException in UGIExecutor.execute hides the original one
+    * [`FLUME-3057 <https://issues.apache.org/jira/browse/FLUME-3057>`__] - Build fails due to unsupported snappy-java version on ppc64le
+    * [`FLUME-3080 <https://issues.apache.org/jira/browse/FLUME-3080>`__] - Close failure in HDFS Sink might cause data loss
+    * [`FLUME-3083 <https://issues.apache.org/jira/browse/FLUME-3083>`__] - Taildir source can miss events if file updated in same second as file close
+    * [`FLUME-3085 <https://issues.apache.org/jira/browse/FLUME-3085>`__] - HDFS Sink can skip flushing some BucketWriters, might lead to data loss
+    * [`FLUME-3112 <https://issues.apache.org/jira/browse/FLUME-3112>`__] - Upgrade jackson-core library dependency
+    * [`FLUME-3127 <https://issues.apache.org/jira/browse/FLUME-3127>`__] - Upgrade libfb303 library dependency
+    * [`FLUME-3131 <https://issues.apache.org/jira/browse/FLUME-3131>`__] - Upgrade spring framework library dependencies
+    * [`FLUME-3132 <https://issues.apache.org/jira/browse/FLUME-3132>`__] - Upgrade tomcat jasper library dependencies
+    * [`FLUME-3135 <https://issues.apache.org/jira/browse/FLUME-3135>`__] - property logger in org.apache.flume.interceptor.RegexFilteringInterceptor confused
+    * [`FLUME-3141 <https://issues.apache.org/jira/browse/FLUME-3141>`__] - Small typo found in RegexHbaseEventSerializer.java
+    * [`FLUME-3152 <https://issues.apache.org/jira/browse/FLUME-3152>`__] - Add Flume Metric for Backup Checkpoint Errors
+    * [`FLUME-3155 <https://issues.apache.org/jira/browse/FLUME-3155>`__] - Use batch mode in mvn to fix Travis CI error
+    * [`FLUME-3157 <https://issues.apache.org/jira/browse/FLUME-3157>`__] - Refactor TestHDFSEventSinkOnMiniCluster to not use LeaseManager private API
+    * [`FLUME-3173 <https://issues.apache.org/jira/browse/FLUME-3173>`__] - Upgrade joda-time
+    * [`FLUME-3174 <https://issues.apache.org/jira/browse/FLUME-3174>`__] - HdfsSink AWS S3A authentication does not work on JDK 8
+    * [`FLUME-3175 <https://issues.apache.org/jira/browse/FLUME-3175>`__] - Javadoc generation fails due to Java8's strict doclint
+** Documentation
+    * [`FLUME-2175 <https://issues.apache.org/jira/browse/FLUME-2175>`__] - Update Developer Guide with notes on how to upgrade Protocol Buffer version
+    * [`FLUME-2817 <https://issues.apache.org/jira/browse/FLUME-2817>`__] - Sink for multi-agent flow example in user guide is set up incorrectly
+
+** Wish
+    * [`FLUME-2579 <https://issues.apache.org/jira/browse/FLUME-2579>`__] - JMS source support durable subscriptions and message listening
+
+** Question
+    * [`FLUME-2427 <https://issues.apache.org/jira/browse/FLUME-2427>`__] - java.lang.NoSuchMethodException and warning on HDFS (S3) sink
+
+** Task
+    * [`FLUME-3093 <https://issues.apache.org/jira/browse/FLUME-3093>`__] - Groundwork for version changes in root pom
+    * [`FLUME-3154 <https://issues.apache.org/jira/browse/FLUME-3154>`__] - Add HBase client version check to AsyncHBaseSink and HBaseSink
+
+** Test
+    * [`FLUME-2997 <https://issues.apache.org/jira/browse/FLUME-2997>`__] - Fix flaky junit test in SpillableMemoryChannel
+    * [`FLUME-3002 <https://issues.apache.org/jira/browse/FLUME-3002>`__] - Some tests in TestBucketWriter are flaky

Modified: flume/site/trunk/content/sphinx/releases/index.rst
URL: http://svn.apache.org/viewvc/flume/site/trunk/content/sphinx/releases/index.rst?rev=1811107&r1=1811106&r2=1811107&view=diff
==============================================================================
--- flume/site/trunk/content/sphinx/releases/index.rst (original)
+++ flume/site/trunk/content/sphinx/releases/index.rst Wed Oct  4 16:22:29 2017
@@ -3,13 +3,13 @@ Releases
 
 .. rubric:: Current Release
 
-The current stable release is `Apache Flume Version 1.7.0 <1.7.0.html>`__.
+The current stable release is `Apache Flume Version 1.8.0 <1.8.0.html>`__.
 
 .. toctree::
    :maxdepth: 1
    :hidden:
 
-   1.7.0
+   1.8.0
 
 .. rubric:: Previous Releases
 
@@ -17,6 +17,7 @@ The current stable release is `Apache Fl
    :maxdepth: 1
    :glob:
 
+   1.7.0
    1.6.0
    1.5.2
    1.5.0.1

