Repository: flume
Updated Branches:
  refs/heads/trunk ee4999bc2 -> 9601f5bf0
User guide: fix mistake and formatting

change source to sink and fix formatting in Kafka Channel documentation (Dylan Jones via Mike Percy)

Project: http://git-wip-us.apache.org/repos/asf/flume/repo
Commit: http://git-wip-us.apache.org/repos/asf/flume/commit/9601f5bf
Tree: http://git-wip-us.apache.org/repos/asf/flume/tree/9601f5bf
Diff: http://git-wip-us.apache.org/repos/asf/flume/diff/9601f5bf

Branch: refs/heads/trunk
Commit: 9601f5bf0a5294d5ffd324010768fe0299044d6d
Parents: ee4999b
Author: Dylan Jones <[email protected]>
Authored: Tue Jun 14 21:35:47 2016 +0100
Committer: Mike Percy <[email protected]>
Committed: Thu Jun 16 10:34:15 2016 -0700

----------------------------------------------------------------------
 flume-ng-doc/sphinx/FlumeUserGuide.rst | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/flume/blob/9601f5bf/flume-ng-doc/sphinx/FlumeUserGuide.rst
----------------------------------------------------------------------
diff --git a/flume-ng-doc/sphinx/FlumeUserGuide.rst b/flume-ng-doc/sphinx/FlumeUserGuide.rst
index 9c11fe6..74d2887 100644
--- a/flume-ng-doc/sphinx/FlumeUserGuide.rst
+++ b/flume-ng-doc/sphinx/FlumeUserGuide.rst
@@ -2684,19 +2684,21 @@ The events are stored in a Kafka cluster (must be installed separately). Kafka p
 replication, so in case an agent or a kafka broker crashes, the events are immediately available to other sinks
 
 The Kafka channel can be used for multiple scenarios:
-* With Flume source and sink - it provides a reliable and highly available channel for events
-* With Flume source and interceptor but no sink - it allows writing Flume events into a Kafka topic, for use by other apps
-* With Flume sink, but no source - it is a low-latency, fault tolerant way to send events from Kafka to Flume sources such as HDFS, HBase or Solr
+
+#. With Flume source and sink - it provides a reliable and highly available channel for events
+#. With Flume source and interceptor but no sink - it allows writing Flume events into a Kafka topic, for use by other apps
+#. With Flume sink, but no source - it is a low-latency, fault tolerant way to send events from Kafka to Flume sinks such as HDFS, HBase or Solr
 
 This version of Flume requires Kafka version 0.9 or greater due to the reliance on the Kafka clients shipped with that version. The
 configuration of the channel has changed compared to previous flume versions.
 
 The configuration parameters are organized as such:
-1) Configuration values related to the channel generically are applied at the channel config level, eg: a1.channel.k1.type =
-2) Configuration values related to Kafka or how the Channel operates are prefixed with "kafka.", (this are analgous to CommonClient Configs)eg: a1.channels.k1.kafka.topica1.channels.k1.kafka.bootstrap.serversThis is not dissimilar to how the hdfs sink operates
-3) Properties specific to the producer/consumer are prefixed by kafka.producer or kafka.consumer
-4) Where possible, the Kafka paramter names are used, eg: bootstrap.servers and acks
+
+#. Configuration values related to the channel generically are applied at the channel config level, eg: a1.channel.k1.type =
+#. Configuration values related to Kafka or how the Channel operates are prefixed with "kafka.", (this are analgous to CommonClient Configs) eg: a1.channels.k1.kafka.topic and a1.channels.k1.kafka.bootstrap.servers. This is not dissimilar to how the hdfs sink operates
+#. Properties specific to the producer/consumer are prefixed by kafka.producer or kafka.consumer
+#. Where possible, the Kafka paramter names are used, eg: bootstrap.servers and acks
 
 This version of flume is backwards-compatible with previous versions, however deprecated properties are indicated in the
 table below and a warning message is logged on startup when they are present in the configuration file.
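For reference, the four naming conventions described in the changed hunk can be sketched as a minimal agent configuration. This is an illustrative fragment, not part of the commit; the agent and channel names (`a1`, `k1`), hostnames, and topic name are hypothetical placeholders:

```properties
# (1) Generic channel settings at the channel config level
a1.channels.k1.type = org.apache.flume.channel.kafka.KafkaChannel

# (2) Kafka/common-client settings, prefixed with "kafka."
a1.channels.k1.kafka.bootstrap.servers = kafka-1:9092,kafka-2:9092
a1.channels.k1.kafka.topic = flume-channel

# (3) Producer/consumer-specific settings under kafka.producer / kafka.consumer,
# (4) reusing Kafka's own parameter names such as "acks"
a1.channels.k1.kafka.producer.acks = all
a1.channels.k1.kafka.consumer.group.id = flume
```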
