Modified: karaf/site/production/manual/decanter/latest-2/html/index.html
URL: 
http://svn.apache.org/viewvc/karaf/site/production/manual/decanter/latest-2/html/index.html?rev=1877281&r1=1877280&r2=1877281&view=diff
==============================================================================
--- karaf/site/production/manual/decanter/latest-2/html/index.html (original)
+++ karaf/site/production/manual/decanter/latest-2/html/index.html Sat May  2 
05:18:46 2020
@@ -2584,28 +2584,33 @@ table.CodeRay td.code>pre{padding:0}
 <li><a href="#_activemq_jmx">1.2.7. ActiveMQ (JMX)</a></li>
 <li><a href="#_camel_jmx">1.2.8. Camel (JMX)</a></li>
 <li><a href="#_camel_tracer_notifier">1.2.9. Camel Tracer &amp; 
Notifier</a></li>
-<li><a href="#_system">1.2.10. System</a></li>
-<li><a href="#_network_socket">1.2.11. Network socket</a></li>
-<li><a href="#_jms">1.2.12. JMS</a></li>
-<li><a href="#_mqtt">1.2.13. MQTT</a></li>
-<li><a href="#_kafka">1.2.14. Kafka</a></li>
-<li><a href="#_rest_servlet">1.2.15. Rest Servlet</a></li>
-<li><a href="#_soap">1.2.16. SOAP</a></li>
-<li><a href="#_dropwizard_metrics">1.2.17. Dropwizard Metrics</a></li>
-<li><a href="#_jdbc">1.2.18. JDBC</a></li>
-<li><a href="#_customizing_properties_in_collectors">1.2.19. Customizing 
properties in collectors</a></li>
+<li><a href="#_system_oshi">1.2.10. System (oshi)</a></li>
+<li><a href="#_system_script">1.2.11. System (script)</a></li>
+<li><a href="#_network_socket">1.2.12. Network socket</a></li>
+<li><a href="#_jms">1.2.13. JMS</a></li>
+<li><a href="#_mqtt">1.2.14. MQTT</a></li>
+<li><a href="#_kafka">1.2.15. Kafka</a></li>
+<li><a href="#_rest_servlet">1.2.16. Rest Servlet</a></li>
+<li><a href="#_soap">1.2.17. SOAP</a></li>
+<li><a href="#_dropwizard_metrics">1.2.18. Dropwizard Metrics</a></li>
+<li><a href="#_jdbc">1.2.19. JDBC</a></li>
+<li><a href="#_configadmin">1.2.20. ConfigAdmin</a></li>
+<li><a href="#_prometheus">1.2.21. Prometheus</a></li>
+<li><a href="#_redis">1.2.22. Redis</a></li>
+<li><a href="#_elasticsearch">1.2.23. Elasticsearch</a></li>
+<li><a href="#_customizing_properties_in_collectors">1.2.24. Customizing 
properties in collectors</a></li>
 </ul>
 </li>
 <li><a href="#_appenders">1.3. Appenders</a>
 <ul class="sectlevel3">
 <li><a href="#_log_2">1.3.1. Log</a></li>
-<li><a href="#_elasticsearch_kibana">1.3.2. Elasticsearch &amp; Kibana</a></li>
+<li><a href="#_elasticsearch_appender">1.3.2. Elasticsearch Appender</a></li>
 <li><a href="#_file_2">1.3.3. File</a></li>
 <li><a href="#_jdbc_2">1.3.4. JDBC</a></li>
 <li><a href="#_jms_2">1.3.5. JMS</a></li>
 <li><a href="#_camel">1.3.6. Camel</a></li>
 <li><a href="#_kafka_2">1.3.7. Kafka</a></li>
-<li><a href="#_redis">1.3.8. Redis</a></li>
+<li><a href="#_redis_2">1.3.8. Redis</a></li>
 <li><a href="#_mqtt_2">1.3.9. MQTT</a></li>
 <li><a href="#_cassandra">1.3.10. Cassandra</a></li>
 <li><a href="#_influxdb">1.3.11. InfluxDB</a></li>
@@ -2618,10 +2623,16 @@ table.CodeRay td.code>pre{padding:0}
 </li>
 <li><a href="#_alerting">1.4. Alerting</a>
 <ul class="sectlevel3">
-<li><a href="#_checker">1.4.1. Checker</a></li>
+<li><a href="#_service">1.4.1. Service</a></li>
 <li><a href="#_alerters">1.4.2. Alerters</a></li>
 </ul>
 </li>
+<li><a href="#_processors">1.5. Processors</a>
+<ul class="sectlevel3">
+<li><a href="#_pass_through">1.5.1. Pass Through</a></li>
+<li><a href="#_aggregate">1.5.2. Aggregate</a></li>
+</ul>
+</li>
 </ul>
 </li>
 <li><a href="#_developer_guide">2. Developer Guide</a>
@@ -2635,6 +2646,7 @@ table.CodeRay td.code>pre{padding:0}
 </li>
 <li><a href="#_custom_appender">2.3. Custom Appender</a></li>
 <li><a href="#_custom_alerter">2.4. Custom Alerter</a></li>
+<li><a href="#_custom_processor">2.5. Custom Processor</a></li>
 </ul>
 </li>
 </ul>
@@ -2656,15 +2668,15 @@ table.CodeRay td.code>pre{padding:0}
 <div class="sect2">
 <h3 id="_introduction">1.1. Introduction</h3>
 <div class="paragraph">
-<p>Apache Karaf Decanter is monitoring solution running in Apache Karaf.</p>
+<p>Apache Karaf Decanter is a monitoring solution running in Apache Karaf.</p>
 </div>
 <div class="paragraph">
-<p>It&#8217;s composed in three parts:</p>
+<p>It&#8217;s composed of three parts:</p>
 </div>
 <div class="ulist">
 <ul>
 <li>
-<p>Collectors are responsible of harvesting monitoring data. Decanter provides 
collectors to harvest different kind
+<p>Collectors are responsible for harvesting monitoring data. Decanter 
provides collectors to harvest different kinds
 of data. We have two kinds of collectors:</p>
 <div class="ulist">
 <ul>
@@ -2680,11 +2692,10 @@ appenders</p>
 </li>
 <li>
 <p>Appenders receive the data from the collectors and are responsible for storing 
the data into a given backend. Decanter
-provides appenders depending of the backend storage that you want to use.</p>
+provides appenders depending on the backend storage that you want to use.</p>
 </li>
 <li>
-<p>Alerters is a special kind of appender. It receives all harvested data and 
checks on it. If a check fails, an alert event
-is created and sent to alerters. Decanter provides alerters depending of the 
kind of notification that you want.</p>
+<p>Alerters are a special kind of appender. A check is performed on all 
harvested data. If a check fails, an alert event is created and sent to the 
alerters. Decanter provides alerters depending on the kind of notification that 
you want.</p>
 </li>
 </ul>
 </div>
@@ -2734,7 +2745,7 @@ data and send to the appenders.</p>
 <h4 id="_log">1.2.1. Log</h4>
 <div class="paragraph">
 <p>The Decanter Log Collector is an event driven collector. It automatically 
reacts when a log occurs, and
-send the log details (level, logger name, message, etc) to the appenders.</p>
+sends the log details (level, logger name, message, etc) to the appenders.</p>
 </div>
 <div class="paragraph">
 <p>The <code>decanter-collector-log</code> feature installs the log 
collector:</p>
@@ -2841,9 +2852,9 @@ containing:</p>
 <div class="paragraph">
 <p>The Decanter File Collector is an event driven collector. It automatically 
reacts when new lines are appended into
 a file (especially a log file). It acts like the tail Unix command. Basically, 
it&#8217;s an alternative to the log collector.
-The log collector reacts for local Karaf log messages, whereas the file 
collector can react to any files, included log
-file from other system than Karaf. It means that you can monitor and send 
collected data for any system (even not Java
-base, or whatever).</p>
+The log collector reacts to local Karaf log messages, whereas the file 
collector can react to any file, including log
+files from systems other than Karaf. It means that you can monitor and send 
collected data for any system (even if it is not Java
+based).</p>
 </div>
 <div class="paragraph">
 <p>The file collector deals with file rotation and missing files.</p>
@@ -2877,7 +2888,7 @@ any=value</pre>
 <p><code>type</code> is an ID (mandatory) that allows you to easily identify 
the monitored file</p>
 </li>
 <li>
-<p><code>path</code> is the location of the file that you want to monitore</p>
+<p><code>path</code> is the location of the file that you want to monitor</p>
 </li>
 <li>
 <p>all other values (like <code>any</code>) will be part of the collected 
data. It means that you can add your own custom data, and
@@ -2923,13 +2934,13 @@ a typed Object (Long, Integer or String)
 <div class="sect5">
 <h6 id="_identity_parser">Identity parser</h6>
 <div class="paragraph">
-<p>The identity parser doesn&#8217;t actually parse the line, it just pass 
through. It&#8217;s the default parser used by the file collector.</p>
+<p>The identity parser doesn&#8217;t actually parse the line, it just passes 
through. It&#8217;s the default parser used by the file collector.</p>
 </div>
 </div>
 <div class="sect5">
 <h6 id="_split_parser">Split parser</h6>
 <div class="paragraph">
-<p>The split parser split the line using a separator (<code>,</code> by 
default). Optionally, it can take <code>keys</code> used a property name in the 
event.</p>
+<p>The split parser splits the line using a separator (<code>,</code> by 
default). Optionally, it can take <code>keys</code> used as property names in the 
event.</p>
 </div>
 <div class="paragraph">
 <p>For instance, you can have the following 
<code>etc/org.apache.karaf.decanter.parser.split.cfg</code> configuration 
file:</p>
@@ -2941,7 +2952,7 @@ keys=first,second,third,fourth</pre>
 </div>
 </div>
 <div class="paragraph">
-<p>If the parser gets a line (collected by the file collector) like 
<code>this,is,a,test</code>, the line will be parsed as follow (the file 
collector will send the following data to the dispatcher):</p>
+<p>If the parser gets a line (collected by the file collector) like 
<code>this,is,a,test</code>, the line will be parsed as follows (the file 
collector will send the following data to the dispatcher):</p>
 </div>
 <div class="listingblock">
 <div class="content">
@@ -2977,7 +2988,7 @@ fourth-&gt;test</pre>
 </div>
 </div>
 <div class="paragraph">
-<p>If the parser gets a line (collected by the file collector) like <code>a 
test here</code>, the linbe will be parsed as follow (the file collector will 
send the following data to the dispatcher):</p>
+<p>If the parser gets a line (collected by the file collector) like <code>a 
test here</code>, the line will be parsed as follows (the file collector will 
send the following data to the dispatcher):</p>
 </div>
 <div class="listingblock">
 <div class="content">
@@ -3143,7 +3154,7 @@ url=local
 <p>the <code>type</code> property is a name (of your choice) allowing you to 
easily identify the harvested data</p>
 </li>
 <li>
-<p>the <code>url</code> property is the MBeanServer to connect. "local" is 
reserved keyword to specify the local MBeanServer.
+<p>the <code>url</code> property is the MBeanServer to connect to. "local" is 
a reserved keyword to specify the local MBeanServer.
 Instead of "local", you can use the JMX service URL. For instance, for Karaf 
version 3.0.0, 3.0.1, 3.0.2, and 3.0.3,
 as the local MBeanServer is secured, you can specify 
<code>service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root</code>. You
can also poll any remote MBean server (Karaf based or not) by providing the 
service URL.</p>
@@ -3159,7 +3170,7 @@ is secured.</p>
 <li>
 <p>the <code>object.name</code> prefix is optional. If this property is not 
specified, the collector will retrieve the attributes
 of all MBeans. You can filter to consider only some MBeans. This property 
contains the ObjectName filter to retrieve
-the attributes only to some MBeans. Several object names can be listed, 
provided the property prefix is <code>object.name.</code>.</p>
+the attributes only of some MBeans. Several object names can be listed, 
provided the property prefix is <code>object.name.</code>.</p>
 </li>
 <li>
 <p>any other values will be part of the collected data. It means that you can 
add your own property if you want to add
@@ -3180,7 +3191,7 @@ additional data, and create queries base
 <p>The Karaf Decanter JMX collector by default uses RMI protocol for JMX. But 
it also supports JMXMP protocol.</p>
 </div>
 <div class="paragraph">
-<p>The features to install are the sames: 
<code>decanter-collector-jmx</code>.</p>
+<p>The features to install are the same: 
<code>decanter-collector-jmx</code>.</p>
 </div>
 <div class="paragraph">
 <p>However, you have to enable the <code>jmxmp</code> protocol support in the 
Apache Karaf instance hosting Karaf Decanter.</p>
@@ -3249,7 +3260,7 @@ jmx.remote.protocol.provider.pkgs=com.su
 </div>
 </div>
 <div class="paragraph">
-<p>This feature installs the same collector as the 
<code>decanter-collector-jmx</code>, but also add the
+<p>This feature installs the same collector as the 
<code>decanter-collector-jmx</code>, but also adds the
 <code>etc/org.apache.karaf.decanter.collector.jmx-activemq.cfg</code> 
configuration file.</p>
 </div>
 <div class="paragraph">
@@ -3298,7 +3309,7 @@ object.name=org.apache.activemq:*</pre>
 </div>
 </div>
 <div class="paragraph">
-<p>This feature installs the same collector as the 
<code>decanter-collector-jmx</code>, but also add the
+<p>This feature installs the same collector as the 
<code>decanter-collector-jmx</code>, but also adds the
 <code>etc/org.apache.karaf.decanter.collector.jmx-camel.cfg</code> 
configuration file.</p>
 </div>
 <div class="paragraph">
@@ -3341,7 +3352,7 @@ object.name=org.apache.camel:context=*,t
 <div class="sect4">
 <h5 id="_camel_tracer">Camel Tracer</h5>
 <div class="paragraph">
-<p>If you enable the tracer on a Camel route, all tracer events (exchanges on 
each step of the route) are send to the
+<p>If you enable the tracer on a Camel route, all tracer events (exchanges on 
each step of the route) are sent to the
 appenders.</p>
 </div>
 <div class="paragraph">
@@ -3411,12 +3422,82 @@ call your extender to populate extra pro
 <p>Decanter also provides <code>DecanterEventNotifier</code> implementing a 
Camel event notifier: <a 
href="http://camel.apache.org/eventnotifier-to-log-details-about-all-sent-exchanges.html";
 
class="bare">http://camel.apache.org/eventnotifier-to-log-details-about-all-sent-exchanges.html</a></p>
 </div>
 <div class="paragraph">
-<p>It&#8217;s very similar to the Decanter Camel Tracer. You can control the 
camel contexts and routes to which you want to trap event.</p>
+<p>It&#8217;s very similar to the Decanter Camel Tracer. You can control the 
camel contexts and routes to which you want to trap events.</p>
+</div>
+</div>
+</div>
+<div class="sect3">
+<h4 id="_system_oshi">1.2.10. System (oshi)</h4>
+<div class="paragraph">
+<p>The oshi collector is a system collector (polled) that periodically 
retrieves all details about the hardware and the operating system.</p>
+</div>
+<div class="paragraph">
+<p>This collector gets a lot of details about the machine.</p>
+</div>
+<div class="paragraph">
+<p>The <code>decanter-collector-oshi</code> feature installs the oshi system 
collector:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>karaf@root()&gt; feature:install decanter-collector-oshi</pre>
+</div>
+</div>
+<div class="paragraph">
+<p>This feature installs a default 
<code>etc/org.apache.karaf.decanter.collector.oshi.cfg</code> configuration 
file containing:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>################################################################################
+#
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+################################################################################
+
+#
+# Decanter oshi (system) collector
+#
+
+# computerSystem=true
+# computerSystem.baseboard=true
+# computerSystem.firmware=true
+# memory=true
+# processors=true
+# processors.logical=true
+# displays=true
+# disks=true
+# disks.partitions=true
+# graphicsCards=true
+# networkIFs=true
+# powerSources=true
+# soundCards=true
+# sensors=true
+# usbDevices=true
+# operatingSystem=true
+# operatingSystem.fileSystems=true
+# operatingSystem.networkParams=true
+# operatingSystem.processes=true
+# operatingSystem.services=true</pre>
+</div>
 </div>
+<div class="paragraph">
+<p>By default, the oshi collector gets all details about the machine. You can 
filter what you want to harvest in this configuration file.</p>
 </div>
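+<div class="paragraph">
+<p>For instance, a sketch of a filtered configuration (hypothetical values) that 
only harvests memory, processor, and file system details could be:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>computerSystem=false
+memory=true
+processors=true
+operatingSystem=true
+operatingSystem.fileSystems=true</pre>
+</div>
+</div>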
 </div>
 <div class="sect3">
-<h4 id="_system">1.2.10. System</h4>
+<h4 id="_system_script">1.2.11. System (script)</h4>
 <div class="paragraph">
 <p>The system collector is a polled collector (periodically executed by the 
Decanter Scheduler).</p>
 </div>
@@ -3443,6 +3524,9 @@ call your extender to populate extra pro
 # This collector executes system commands, retrieve the exec output/err
 # sent to the appenders
 #
+# You can define the number of threads to use to parallelize command calls:
+# thread.number=1
+#
 # The format is command.key=command_to_execute
 # where command is a reserved keyword used to identify a command property
 # for instance:
@@ -3486,7 +3570,7 @@ call your extender to populate extra pro
 </div>
 </div>
 <div class="sect3">
-<h4 id="_network_socket">1.2.11. Network socket</h4>
+<h4 id="_network_socket">1.2.12. Network socket</h4>
 <div class="paragraph">
 <p>The Decanter network socket collector listens for incoming messages coming 
from a remote network socket collector.</p>
 </div>
@@ -3525,23 +3609,23 @@ unmarshaller.target=(dataFormat=json)</p
 <p>the <code>port</code> property contains the port number where the network 
socket collector is listening</p>
 </li>
 <li>
-<p>the <code>workers</code> property contains the number of worker thread the 
socket collector is using for connection</p>
+<p>the <code>workers</code> property contains the number of worker threads the 
socket collector is using for the connection</p>
 </li>
 <li>
 <p>the <code>protocol</code> property contains the protocol used by the 
collector for transferring data with the client</p>
 </li>
 <li>
 <p>the <code>unmarshaller.target</code> property contains the unmarshaller 
used by the collector to transform the data
-sended by the client.</p>
+sent by the client.</p>
 </li>
 </ul>
 </div>
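As a minimal client sketch (assuming the default <code>tcp</code> protocol with the <code>json</code> unmarshaller, and a hypothetical port value matching the <code>port</code> property of the collector configuration), an event can be pushed with netcat:

```shell
# Hypothetical client example: push one JSON event to the Decanter
# network socket collector. The port used here (34343) is an assumption
# and must match the "port" property configured for the collector.
echo '{"type":"custom","message":"disk almost full"}' | nc localhost 34343
```

Any key/value pairs in the JSON document become properties of the dispatched event.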
 </div>
 <div class="sect3">
-<h4 id="_jms">1.2.12. JMS</h4>
+<h4 id="_jms">1.2.13. JMS</h4>
 <div class="paragraph">
 <p>The Decanter JMS collector consumes the data from a JMS queue or topic. 
It&#8217;s a way to aggregate collected data coming
-from remote and several machines.</p>
+from (several) remote machines.</p>
 </div>
 <div class="paragraph">
 <p>The <code>decanter-collector-jms</code> feature installs the JMS 
collector:</p>
@@ -3594,10 +3678,10 @@ destination.type=queue
 </div>
 </div>
 <div class="sect3">
-<h4 id="_mqtt">1.2.13. MQTT</h4>
+<h4 id="_mqtt">1.2.14. MQTT</h4>
 <div class="paragraph">
 <p>The Decanter MQTT collector receives collected messages from a MQTT broker. 
It&#8217;s a way to aggregate collected data coming
-from remote and several machines.</p>
+from (several) remote machines.</p>
 </div>
 <div class="paragraph">
 <p>The <code>decanter-collector-mqtt</code> feature installs the MQTT 
collector:</p>
@@ -3641,10 +3725,10 @@ topic=decanter</pre>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_kafka">1.2.14. Kafka</h4>
+<h4 id="_kafka">1.2.15. Kafka</h4>
 <div class="paragraph">
 <p>The Decanter Kafka collector receives collected messages from a Kafka 
broker. It&#8217;s a way to aggregate collected data coming
-from remote and several machines.</p>
+from (several) remote machines.</p>
 </div>
 <div class="paragraph">
 <p>The <code>decanter-collector-kafka</code> feature installs the Kafka 
collector:</p>
@@ -3719,11 +3803,11 @@ from remote and several machines.</p>
 </div>
 </div>
 <div class="paragraph">
-<p>The configuration is similar to the Decanter Kafka appender. Please, see 
Kafka collector for details.</p>
+<p>The configuration is similar to the Decanter Kafka appender. Please, see 
the Kafka collector for details.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_rest_servlet">1.2.15. Rest Servlet</h4>
+<h4 id="_rest_servlet">1.2.16. Rest Servlet</h4>
 <div class="paragraph">
 <p>The Decanter Rest Servlet collector registers a servlet on the OSGi HTTP 
service (by default on <code>/decanter/collect</code>).</p>
 </div>
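As a sketch (assuming the Karaf HTTP service listens on its default 8181 port), a collected event can be posted to the servlet with curl:

```shell
# Hypothetical example: POST a JSON event to the Rest Servlet collector.
# Assumes the Karaf HTTP service on the default 8181 port and the default
# /decanter/collect servlet alias.
curl -X POST -H "Content-Type: application/json" \
  -d '{"type":"custom","message":"hello from curl"}' \
  http://localhost:8181/decanter/collect
```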
@@ -3740,7 +3824,7 @@ from remote and several machines.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_soap">1.2.16. SOAP</h4>
+<h4 id="_soap">1.2.17. SOAP</h4>
 <div class="paragraph">
 <p>The Decanter SOAP collector periodically requests a SOAP service and 
returns the result (the SOAP Response, or error details if it failed).</p>
 </div>
@@ -3766,7 +3850,7 @@ soap.request=</pre>
 </div>
 </div>
 <div class="paragraph">
-<p>The collector send several collected properties to the dispatcher, 
especially:</p>
+<p>The collector sends several collected properties to the dispatcher, 
especially:</p>
 </div>
 <div class="ulist">
 <ul>
@@ -3783,7 +3867,7 @@ soap.request=</pre>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_dropwizard_metrics">1.2.17. Dropwizard Metrics</h4>
+<h4 id="_dropwizard_metrics">1.2.18. Dropwizard Metrics</h4>
 <div class="paragraph">
 <p>The Decanter Dropwizard Metrics collector gets a <code>MetricSet</code> OSGi 
service and periodically gets the metrics in the set.</p>
 </div>
@@ -3801,12 +3885,12 @@ send to the Decanter dispatcher.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_jdbc">1.2.18. JDBC</h4>
+<h4 id="_jdbc">1.2.19. JDBC</h4>
 <div class="paragraph">
-<p>The Decanter JDBC collector periodically executes a query on a database and 
send the query result to the dispatcher.</p>
+<p>The Decanter JDBC collector periodically executes a query on a database and 
sends the query result to the dispatcher.</p>
 </div>
 <div class="paragraph">
-<p>The <code>decanter-collector-jdbc</code> installs the JDBC collector:</p>
+<p>The <code>decanter-collector-jdbc</code> feature installs the JDBC 
collector:</p>
 </div>
 <div class="listingblock">
 <div class="content">
@@ -3848,1024 +3932,260 @@ create this datasource using the Karaf <
 </div>
 </div>
 <div class="sect3">
-<h4 id="_customizing_properties_in_collectors">1.2.19. Customizing properties 
in collectors</h4>
-<div class="paragraph">
-<p>You can add, rename or remove properties collected by the collectors before 
sending it to the dispatcher.</p>
-</div>
+<h4 id="_configadmin">1.2.20. ConfigAdmin</h4>
 <div class="paragraph">
-<p>In the collector configuration file (for instance 
<code>etc/org.apache.karaf.decanter.collector.jmx-local.cfg</code> for the 
local JMX collector), you
-can add any property. By default, the property is added to the data sent to 
the dispatcher.</p>
+<p>The Decanter ConfigAdmin collector listens for any configuration change and 
sends the updated configuration to the dispatcher.</p>
 </div>
 <div class="paragraph">
-<p>You can prefix the configuration property with the action you can perform 
before sending:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p><code>fields.add.</code> adds a property to the data sent. The following 
add property <code>hello</code> with value <code>world</code>:</p>
-<div class="literalblock">
-<div class="content">
-<pre>----
-fields.add.hello=world
-----</pre>
-</div>
-</div>
-</li>
-<li>
-<p><code>fields.remove.</code> removes a property to the data sent:</p>
-<div class="literalblock">
-<div class="content">
-<pre>----
-fields.remove.hello=
-----</pre>
-</div>
-</div>
-</li>
-<li>
-<p><code>fields.rename.</code> rename a property with another name:</p>
-<div class="literalblock">
-<div class="content">
-<pre>----
-fields.rename.helo=hello
-----</pre>
-</div>
-</div>
-</li>
-</ul>
-</div>
-</div>
-</div>
-<div class="sect2">
-<h3 id="_appenders">1.3. Appenders</h3>
-<div class="paragraph">
-<p>Decanter appenders receive the data from the collectors, and store the data 
into a storage backend.</p>
-</div>
-<div class="sect3">
-<h4 id="_log_2">1.3.1. Log</h4>
-<div class="paragraph">
-<p>The Decanter Log Appender creates a log message for each event received 
from the collectors.</p>
-</div>
-<div class="paragraph">
-<p>The <code>decanter-appender-log</code> feature installs the log 
appender:</p>
+<p>The <code>decanter-collector-configadmin</code> feature installs the 
ConfigAdmin collector:</p>
 </div>
 <div class="listingblock">
 <div class="content">
-<pre>karaf@root()&gt; feature:install decanter-appender-log</pre>
+<pre>karaf@root()&gt; feature:install decanter-collector-configadmin</pre>
 </div>
 </div>
-<div class="paragraph">
-<p>The log appender doesn&#8217;t require any configuration.</p>
-</div>
 </div>
 <div class="sect3">
-<h4 id="_elasticsearch_kibana">1.3.2. Elasticsearch &amp; Kibana</h4>
-<div class="admonitionblock warning">
-<table>
-<tr>
-<td class="icon">
-<div class="title">Warning</div>
-</td>
-<td class="content">
-<div class="paragraph">
-<p>For production, we recommend to use a dedicated instance of Elasticsearch 
and Kibana. The following features are not recommended
-for production.</p>
-</div>
-</td>
-</tr>
-</table>
-</div>
-<div class="paragraph">
-<p>Decanter provides three appenders for Elasticsearch:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p>decanter-appender-elasticsearch-rest (recommanded) is an appender which 
directly uses the Elasticsearch HTTP REST API. It&#8217;s compliant with any 
Elasticsearch version (1.x, 2.x, 5.x, 6.x).</p>
-</li>
-<li>
-<p>decanter-appender-elasticsearch-jest (deprecated) is an appender which 
directly uses the Elasticsearch HTTP REST API, working with any Elasticsearch 
version (1.x, 2.x, 5.x, 6.x).</p>
-</li>
-<li>
-<p>decanter-appender-elasticsearch-native-1.x is an appender which uses the 
Elasticsearch 1.x Java Client API. It&#8217;s compliant only with Elasticsearch 
1.x versions.</p>
-</li>
-<li>
-<p>decanter-appender-elasticsearch-native-2.x is an appender which uses the 
Elasticsearch 2.x Java Client API. It&#8217;s compliant only with Elasticsearch 
2.x versions.</p>
-</li>
-</ul>
-</div>
-<div class="paragraph">
-<p>These appenders store the data (coming from the collectors) into an 
Elasticsearch node.
-They transformm the data as a json document, stored into Elasticsearch.</p>
-</div>
-<div class="sect4">
-<h5 id="_elasticsearch_5_x_6_x_rest_appender">Elasticsearch 5.x/6.x Rest 
Appender</h5>
-<div class="paragraph">
-<p>The Decanter Elasticsearch Rest appender uses the Elasticsearch Rest client 
provided since Elasticsearch 5.x. It can be use with Elasticsearch 5.x or 6.x 
versions.</p>
-</div>
-<div class="paragraph">
-<p>The <code>decanter-appender-elasticsearch-rest</code> feature installs this 
appender:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>karaf@root()&gt; feature:install 
decanter-appender-elasticsearch-rest</pre>
-</div>
-</div>
-</div>
-<div class="sect4">
-<h5 id="_elasticsearch_http_rest_jest_appender">Elasticsearch HTTP REST Jest 
appender</h5>
-<div class="paragraph">
-<p>The Decanter Elasticsearch HTTP REST API appender uses the Elasticsearch 
REST API. It works with any Elasticsearch version (1.x and 2.x).</p>
-</div>
-<div class="paragraph">
-<p>The <code>decanter-appender-elasticsearch-jest</code> feature installs this 
appender:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>karaf@root()&gt; feature:install 
decanter-appender-elasticsearch-jest</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>This feature installs the appender and the 
<code>etc/org.apache.karaf.decanter.appender.elasticsearch.jest.cfg</code> 
configuration file
-containing:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>#########################################################
-# Decanter Elasticsearch HTTP Jest Appender Configuration
-#########################################################
-
-# HTTP address of the elasticsearch node
-# NB: the appender uses discovery via elasticsearch nodes API
-address=http://localhost:9200
-
-# Basic username and password authentication
-# username=user
-# password=password</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>The file contains the Elasticsearch node location:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p>the <code>address</code> is the HTTP URL of the Elasticsearch node. Default 
is <code>http://localhost:9200</code>.</p>
-</li>
-<li>
-<p>the <code>username</code> is the username used for authentication 
(optional)</p>
-</li>
-<li>
-<p>the <code>password</code> is the password used for authentication 
(optional)</p>
-</li>
-</ul>
-</div>
-</div>
-<div class="sect4">
-<h5 id="_elasticsearch_1_x_native_appender">Elasticsearch 1.x Native 
appender</h5>
+<h4 id="_prometheus">1.2.21. Prometheus</h4>
 <div class="paragraph">
-<p>The Elasticsearch 1.x Native appender uses the Elasticsearch 1.x Java 
Client API. It&#8217;s very specific to
-Elasticsearch 1.x versions, and can&#8217;t run with Elasticsearch 2.x.</p>
+<p>The Decanter Prometheus collector is able to periodically (scheduled 
collector) read the Prometheus servlet output and create events sent to Decanter.</p>
 </div>
 <div class="paragraph">
-<p>The <code>decanter-appender-elasticsearch-native-1.x</code> feature 
installs the elasticsearch appender:</p>
+<p>The <code>decanter-collector-prometheus</code> feature installs the 
Prometheus collector:</p>
 </div>
 <div class="listingblock">
 <div class="content">
-<pre>karaf@root()&gt; feature:install 
decanter-appender-elasticsearch-native-1.x</pre>
+<pre>karaf@root()&gt; feature:install decanter-collector-prometheus</pre>
 </div>
 </div>
 <div class="paragraph">
-<p>This feature installs the elasticsearch appender, especially the 
<code>etc/org.apache.karaf.decanter.appender.elasticsearch.cfg</code>
-configuration file containing:</p>
+<p>The feature also installs the 
<code>etc/org.apache.karaf.decanter.collector.prometheus.cfg</code> 
configuration file containing:</p>
 </div>
 <div class="listingblock">
 <div class="content">
-<pre>################################################
-# Decanter Elasticsearch Appender Configuration
-################################################
-
-# Hostname of the elasticsearch instance
-host=localhost
-# Port number of the elasticsearch instance
-port=9300
-# Name of the elasticsearch cluster
-clusterName=elasticsearch</pre>
+<pre>prometheus.url=http://host/prometheus</pre>
 </div>
 </div>
 <div class="paragraph">
-<p>This file contains the elasticsearch instance connection properties:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p>the <code>host</code> property contains the hostname (or IP address) of the 
Elasticsearch instance</p>
-</li>
-<li>
-<p>the <code>port</code> property contains the port number of the 
Elasticsearch instance</p>
-</li>
-<li>
-<p>the <code>clusterName</code> property contains the name of the 
Elasticsearch cluster where to send the data</p>
-</li>
-</ul>
-</div>
-</div>
-<div class="sect4">
-<h5 id="_elasticsearch_2_x_native_appender">Elasticsearch 2.x Native 
appender</h5>
-<div class="paragraph">
-<p>The Elasticsearch 2.x Native appender uses the Elasticsearch 2.x Java 
Client API. It&#8217;s very specific to
-Elasticsearch 2.x versions, and can&#8217;t run with Elasticsearch 1.x.</p>
-</div>
-<div class="paragraph">
-<p>The <code>decanter-appender-elasticsearch-native-2.x</code> feature 
installs the elasticsearch appender:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>karaf@root()&gt; feature:install 
decanter-appender-elasticsearch-native-2.x</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>This feature installs the elasticsearch appender, especially the 
<code>etc/org.apache.karaf.decanter.appender.elasticsearch.cfg</code>
-configuration file containing:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>################################################
-# Decanter Elasticsearch Appender Configuration
-################################################
-
-# Hostname of the elasticsearch instance
-host=localhost
-# Port number of the elasticsearch instance
-port=9300
-# Name of the elasticsearch cluster
-clusterName=elasticsearch</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>This file contains the elasticsearch instance connection properties:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p>the <code>host</code> property contains the hostname (or IP address) of the 
Elasticsearch instance</p>
-</li>
-<li>
-<p>the <code>port</code> property contains the port number of the 
Elasticsearch instance</p>
-</li>
-<li>
-<p>the <code>clusterName</code> property contains the name of the 
Elasticsearch cluster where to send the data</p>
-</li>
-</ul>
-</div>
-</div>
-<div class="sect4">
-<h5 id="_embedding_decanter_elasticsearch_1_x_and_2_x">Embedding Decanter 
Elasticsearch (1.x and 2.x)</h5>
-<div class="admonitionblock note">
-<table>
-<tr>
-<td class="icon">
-<div class="title">Note</div>
-</td>
-<td class="content">
-<div class="paragraph">
-<p>For a larger and shared production platform, we recommend to dedicate a 
Elasticsearch instance on its own JVM.
-It allows you some specific tuning for elasticsearch.
-Another acceptable configuration is to set up the Decanter embedded 
Elasticsearch instance as part (client) of a larger
-cluster.</p>
-</div>
-<div class="paragraph">
-<p>The following Decanter Elasticsearch embedded instance setup works 
perfectly fine for Karaf Decanter monitoring purpose,
-especially for the current Karaf instance.</p>
-</div>
-</td>
-</tr>
-</table>
-</div>
-<div class="paragraph">
-<p>For convenience, Decanter provides <code>elasticsearch</code> feature 
starting an embedded Elasticsearch instance:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>karaf@root()&gt; feature:install elasticsearch</pre>
+<p>The <code>prometheus.url</code> property is mandatory and defines the location of the Prometheus export servlet (which could be provided by the Decanter Prometheus appender, for instance).</p>
 </div>
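+<div class="paragraph">
+<p>For instance, a minimal configuration could look like the following. Note that the configuration file name and the servlet URL shown here are assumptions to adapt to your setup:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre># etc/org.apache.karaf.decanter.collector.prometheus.cfg (assumed file name)
+
+# Location of the Prometheus export servlet to scrape (illustrative URL)
+prometheus.url=http://localhost:8181/decanter/prometheus</pre>
+</div>
+</div>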
 </div>
+<div class="sect3">
+<h4 id="_redis">1.2.22. Redis</h4>
 <div class="paragraph">
-<p>Decanter provides versions of this feature, depending of the Elasticsearch 
version you want to use (1.x or 2.x).</p>
+<p>The Decanter Redis collector periodically (scheduled collector) reads a Redis map to get key/value pairs.
+You can filter the keys you want thanks to a key pattern.</p>
 </div>
 <div class="paragraph">
-<p>You can see the feature version available:</p>
+<p>The <code>decanter-collector-redis</code> feature installs the Redis 
collector:</p>
 </div>
 <div class="listingblock">
 <div class="content">
-<pre>karaf@root()&gt; feature:version-list elasticsearch</pre>
-</div>
+<pre>karaf@root()&gt; feature:install decanter-collector-redis</pre>
 </div>
-<div class="paragraph">
-<p>Thanks to this elasticsearch instance, by default, the 
decanter-appender-elasticsearch* appenders will send the data to this 
instance.</p>
-</div>
-<div class="paragraph">
-<p>The feature also installs the <code>etc/elasticsearch.yml</code> 
configuration file, different depending of the Elasticsearch version.</p>
 </div>
 <div class="paragraph">
-<p>For Elasticsearch 1.x:</p>
+<p>The feature also installs the 
<code>etc/org.apache.karaf.decanter.collector.redis.cfg</code> configuration 
file containing:</p>
 </div>
 <div class="listingblock">
 <div class="content">
-<pre>###############################################################################
-##################### Elasticsearch Decanter Configuration ####################
-###############################################################################
-
-# WARNING: change in this configuration file requires a refresh or restart of
-# the elasticsearch bundle
-
-################################### Cluster ###################################
-
-# Cluster name identifies your cluster for auto-discovery. If you're running
-# multiple clusters on the same network, make sure you're using unique names.
-#
-cluster.name: elasticsearch
-cluster.routing.schedule: 50ms
-
-
-#################################### Node #####################################
-
-# Node names are generated dynamically on startup, so you're relieved
-# from configuring them manually. You can tie this node to a specific name:
-#
-node.name: decanter
-
-# Every node can be configured to allow or deny being eligible as the master,
-# and to allow or deny to store the data.
-#
-# Allow this node to be eligible as a master node (enabled by default):
-#
-#node.master: true
-#
-# Allow this node to store data (enabled by default):
-#
-node.data: true
-
-# You can exploit these settings to design advanced cluster topologies.
-#
-# 1. You want this node to never become a master node, only to hold data.
-#    This will be the "workhorse" of your cluster.
-#
-#node.master: false
-#node.data: true
-#
-# 2. You want this node to only serve as a master: to not store any data and
-#    to have free resources. This will be the "coordinator" of your cluster.
-#
-#node.master: true
-#node.data: false
-#
-# 3. You want this node to be neither master nor data node, but
-#    to act as a "search load balancer" (fetching data from nodes,
-#    aggregating results, etc.)
-#
-#node.master: false
-#node.data: false
-
-# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
-# Node Info API [http://localhost:9200/_nodes] or GUI tools
-# such as &lt;http://www.elasticsearch.org/overview/marvel/&gt;,
-# &lt;http://github.com/karmi/elasticsearch-paramedic&gt;,
-# &lt;http://github.com/lukas-vlcek/bigdesk&gt; and
-# &lt;http://mobz.github.com/elasticsearch-head&gt; to inspect the cluster 
state.
-
-# A node can have generic attributes associated with it, which can later be 
used
-# for customized shard allocation filtering, or allocation awareness. An 
attribute
-# is a simple key value pair, similar to node.key: value, here is an example:
-#
-#node.rack: rack314
-
-# By default, multiple nodes are allowed to start from the same installation 
location
-# to disable it, set the following:
-#node.max_local_storage_nodes: 1
-
-
-#################################### Index ####################################
-
-# You can set a number of options (such as shard/replica options, mapping
-# or analyzer definitions, translog settings, ...) for indices globally,
-# in this file.
-#
-# Note, that it makes more sense to configure index settings specifically for
-# a certain index, either when creating it or by using the index templates API.
-#
-# See 
&lt;http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html&gt;
 and
-# 
&lt;http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html&gt;
-# for more information.
-
-# Set the number of shards (splits) of an index (5 by default):
-#
-#index.number_of_shards: 5
-
-# Set the number of replicas (additional copies) of an index (1 by default):
-#
-#index.number_of_replicas: 1
-
-# Note, that for development on a local machine, with small indices, it usually
-# makes sense to "disable" the distributed features:
-#
-#index.number_of_shards: 1
-#index.number_of_replicas: 0
-
-# These settings directly affect the performance of index and search operations
-# in your cluster. Assuming you have enough machines to hold shards and
-# replicas, the rule of thumb is:
-#
-# 1. Having more *shards* enhances the _indexing_ performance and allows to
-#    _distribute_ a big index across machines.
-# 2. Having more *replicas* enhances the _search_ performance and improves the
-#    cluster _availability_.
-#
-# The "number_of_shards" is a one-time setting for an index.
-#
-# The "number_of_replicas" can be increased or decreased anytime,
-# by using the Index Update Settings API.
-#
-# Elasticsearch takes care about load balancing, relocating, gathering the
-# results from nodes, etc. Experiment with different settings to fine-tune
-# your setup.
-
-# Use the Index Status API (&lt;http://localhost:9200/A/_status&gt;) to inspect
-# the index status.
-
-
-#################################### Paths ####################################
-
-# Path to directory containing configuration (this file and logging.yml):
-#
-#path.conf: /path/to/conf
-
-# Path to directory where to store index data allocated for this node.
-#
-#path.data: /path/to/data
-#
-# Can optionally include more than one location, causing data to be striped 
across
-# the locations (a la RAID 0) on a file level, favouring locations with most 
free
-# space on creation. For example:
-#
-#path.data: /path/to/data1,/path/to/data2
-path.data: data
-
-# Path to temporary files:
-#
-#path.work: /path/to/work
-
-# Path to log files:
-#
-#path.logs: /path/to/logs
-
-# Path to where plugins are installed:
-#
-#path.plugins: /path/to/plugins
-path.plugins: ${karaf.home}/elasticsearch/plugins
-
-#################################### Plugin ###################################
-
-# If a plugin listed here is not installed for current node, the node will not 
start.
-#
-#plugin.mandatory: mapper-attachments,lang-groovy
-
-
-################################### Memory ####################################
-
-# Elasticsearch performs poorly when JVM starts swapping: you should ensure 
that
-# it _never_ swaps.
-#
-# Set this property to true to lock the memory:
-#
-#bootstrap.mlockall: true
-
-# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
-# to the same value, and that the machine has enough memory to allocate
-# for Elasticsearch, leaving enough memory for the operating system itself.
-#
-# You should also make sure that the Elasticsearch process is allowed to lock
-# the memory, eg. by using `ulimit -l unlimited`.
-
-
-############################## Network And HTTP ###############################
-
-# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
-# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
-# communication. (the range means that if the port is busy, it will 
automatically
-# try the next port).
-
-# Set the bind address specifically (IPv4 or IPv6):
-#
-#network.bind_host: 192.168.0.1
-
-# Set the address other nodes will use to communicate with this node. If not
-# set, it is automatically derived. It must point to an actual IP address.
-#
-#network.publish_host: 192.168.0.1
-
-# Set both 'bind_host' and 'publish_host':
-#
-#network.host: 192.168.0.1
-network.host: 127.0.0.1
-
-# Set a custom port for the node to node communication (9300 by default):
-#
-#transport.tcp.port: 9300
-
-# Enable compression for all communication between nodes (disabled by default):
-#
-#transport.tcp.compress: true
-
-# Set a custom port to listen for HTTP traffic:
-#
-#http.port: 9200
-
-# Set a custom allowed content length:
-#
-#http.max_content_length: 100mb
-
-# Enable HTTP:
-#
-http.enabled: true
-http.cors.enabled: true
-http.cors.allow-origin: /.*/
-
-
-################################### Gateway ###################################
-
-# The gateway allows for persisting the cluster state between full cluster
-# restarts. Every change to the state (such as adding an index) will be stored
-# in the gateway, and when the cluster starts up for the first time,
-# it will read its state from the gateway.
-
-# There are several types of gateway implementations. For more information, see
-# 
&lt;http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html&gt;.
-
-# The default gateway type is the "local" gateway (recommended):
-#
-#gateway.type: local
-
-# Settings below control how and when to start the initial recovery process on
-# a full cluster restart (to reuse as much local data as possible when using 
shared
-# gateway).
-
-# Allow recovery process after N nodes in a cluster are up:
-#
-#gateway.recover_after_nodes: 1
-
-# Set the timeout to initiate the recovery process, once the N nodes
-# from previous setting are up (accepts time value):
-#
-#gateway.recover_after_time: 5m
-
-# Set how many nodes are expected in this cluster. Once these N nodes
-# are up (and recover_after_nodes is met), begin recovery process immediately
-# (without waiting for recover_after_time to expire):
-#
-#gateway.expected_nodes: 2
-
-
-############################# Recovery Throttling #############################
+<pre>address=localhost:6379
 
-# These settings allow to control the process of shards allocation between
-# nodes during initial recovery, replica allocation, rebalancing,
-# or when adding and removing nodes.
-
-# Set the number of concurrent recoveries happening on a node:
-#
-# 1. During the initial recovery
-#
-#cluster.routing.allocation.node_initial_primaries_recoveries: 4
-#
-# 2. During adding/removing nodes, rebalancing, etc
-#
-#cluster.routing.allocation.node_concurrent_recoveries: 2
-
-# Set to throttle throughput when recovering (eg. 100mb, by default 20mb):
 #
-#indices.recovery.max_bytes_per_sec: 20mb
-
-# Set to limit the number of open concurrent streams when
-# recovering a shard from a peer:
+# Define the connection mode.
+# Possible modes: Single (default), Master_Slave, Sentinel, Cluster
 #
-#indices.recovery.concurrent_streams: 5
-
-
-################################## Discovery ##################################
-
-# Discovery infrastructure ensures nodes can be found within a cluster
-# and master node is elected. Multicast discovery is the default.
+mode=Single
 
-# Set to ensure a node sees N other master eligible nodes to be considered
-# operational within the cluster. This should be set to a quorum/majority of
-# the master-eligible nodes in the cluster.
 #
-#discovery.zen.minimum_master_nodes: 1
-
-# Set the time to wait for ping responses from other nodes when discovering.
-# Set this option to a higher value on a slow or congested network
-# to minimize discovery failures:
+# Name of the Redis map
+# Default is Decanter
 #
-#discovery.zen.ping.timeout: 3s
-
-# For more information, see
-# 
&lt;http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html&gt;
+map=Decanter
 
-# Unicast discovery allows to explicitly control which nodes will be used
-# to discover the cluster. It can be used when multicast is not present,
-# or to restrict the cluster communication-wise.
 #
-# 1. Disable multicast discovery (enabled by default):
-#
-#discovery.zen.ping.multicast.enabled: false
-#
-# 2. Configure an initial list of master nodes in the cluster
-#    to perform discovery when new nodes (master or data) are started:
+# For Master_Slave mode, define the location of the master
+# Default is localhost:6379
 #
-#discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]
+#masterAddress=localhost:6379
 
-# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
-#
-# You have to install the cloud-aws plugin for enabling the EC2 discovery.
 #
-# For more information, see
-# 
&lt;http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html&gt;
+# For Sentinel mode, define the name of the master
+# Default is myMaster
 #
-# See &lt;http://elasticsearch.org/tutorials/elasticsearch-on-ec2/&gt;
-# for a step-by-step tutorial.
+#masterName=myMaster
 
-# GCE discovery allows to use Google Compute Engine API in order to perform 
discovery.
 #
-# You have to install the cloud-gce plugin for enabling the GCE discovery.
+# For Cluster mode, define the scan interval of the nodes in the cluster
+# Default value is 2000 (2 seconds).
 #
-# For more information, see 
&lt;https://github.com/elasticsearch/elasticsearch-cloud-gce&gt;.
+#scanInterval=2000
 
-# Azure discovery allows to use Azure API in order to perform discovery.
 #
-# You have to install the cloud-azure plugin for enabling the Azure discovery.
+# Key pattern to look for.
+# Default is *
 #
-# For more information, see 
&lt;https://github.com/elasticsearch/elasticsearch-cloud-azure&gt;.
-
-################################## Slow Log ##################################
-
-# Shard level query and fetch threshold logging.
-
-#index.search.slowlog.threshold.query.warn: 10s
-#index.search.slowlog.threshold.query.info: 5s
-#index.search.slowlog.threshold.query.debug: 2s
-#index.search.slowlog.threshold.query.trace: 500ms
-
-#index.search.slowlog.threshold.fetch.warn: 1s
-#index.search.slowlog.threshold.fetch.info: 800ms
-#index.search.slowlog.threshold.fetch.debug: 500ms
-#index.search.slowlog.threshold.fetch.trace: 200ms
-
-#index.indexing.slowlog.threshold.index.warn: 10s
-#index.indexing.slowlog.threshold.index.info: 5s
-#index.indexing.slowlog.threshold.index.debug: 2s
-#index.indexing.slowlog.threshold.index.trace: 500ms
-
-################################## GC Logging ################################
-
-#monitor.jvm.gc.young.warn: 1000ms
-#monitor.jvm.gc.young.info: 700ms
-#monitor.jvm.gc.young.debug: 400ms
-
-#monitor.jvm.gc.old.warn: 10s
-#monitor.jvm.gc.old.info: 5s
-#monitor.jvm.gc.old.debug: 2s
-
-################################## Security ################################
-
-# Uncomment if you want to enable JSONP as a valid return transport on the
-# http server. With this enabled, it may pose a security risk, so disabling
-# it unless you need it is recommended (it is disabled by default).
-#
-#http.jsonp.enable: true</pre>
+#keyPattern=*</pre>
 </div>
 </div>
 <div class="paragraph">
-<p>For Elasticsearch 2.x:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre># ======================== Elasticsearch Configuration 
=========================
-#
-# NOTE: Elasticsearch comes with reasonable defaults for most settings.
-#       Before you set out to tweak and tune the configuration, make sure you
-#       understand what are you trying to accomplish and the consequences.
-#
-# The primary way of configuring a node is via this file. This template lists
-# the most important settings you may want to configure for a production 
cluster.
-#
-# Please see the documentation for further information on configuration 
options:
-# 
&lt;http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html&gt;
-#
-# ---------------------------------- Cluster 
-----------------------------------
-#
-# Use a descriptive name for your cluster:
-#
-cluster.name: elasticsearch
-#
-# ------------------------------------ Node 
------------------------------------
-#
-# Use a descriptive name for the node:
-#
-node.name: decanter
-#
-# Add custom attributes to the node:
-#
-# node.rack: r1
-#
-# ----------------------------------- Paths 
------------------------------------
-#
-# Path to directory where to store the data (separate multiple locations by 
comma):
-#
-# path.data: /path/to/data
-path.data: data
-path.home: data
-#
-# Path to log files:
-#
-# path.logs: /path/to/logs
-#
-# ----------------------------------- Memory 
-----------------------------------
-#
-# Lock the memory on startup:
-#
-# bootstrap.mlockall: true
-#
-# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half 
the memory
-# available on the system and that the owner of the process is allowed to use 
this limit.
-#
-# Elasticsearch performs poorly when the system is swapping the memory.
-#
-# ---------------------------------- Network 
-----------------------------------
-#
-# Set the bind address to a specific IP (IPv4 or IPv6):
-#
-# network.host: 192.168.0.1
-#
-# Set a custom port for HTTP:
-#
-# http.port: 9200
-#
-# For more information, see the documentation at:
-# 
&lt;http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html&gt;
-#
-# --------------------------------- Discovery 
----------------------------------
-#
-# Pass an initial list of hosts to perform discovery when new node is started:
-# The default list of hosts is ["127.0.0.1", "[::1]"]
-#
-# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
-#
-# Prevent the "split brain" by configuring the majority of nodes (total number 
of nodes / 2 + 1):
-#
-# discovery.zen.minimum_master_nodes: 3
-#
-# For more information, see the documentation at:
-# 
&lt;http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html&gt;
-#
-# ---------------------------------- Gateway 
-----------------------------------
-#
-# Block initial recovery after a full cluster restart until N nodes are 
started:
-#
-# gateway.recover_after_nodes: 3
-#
-# For more information, see the documentation at:
-# 
&lt;http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html&gt;
-#
-# ---------------------------------- Various 
-----------------------------------
-#
-# Disable starting multiple nodes on a single system:
-#
-# node.max_local_storage_nodes: 1
-#
-# Require explicit names when deleting indices:
-#
-# action.destructive_requires_name: true</pre>
+<p>You can configure the Redis connection (depending on the topology) and the key pattern in this configuration file.</p>
 </div>
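+<div class="paragraph">
+<p>For instance, reading the <code>Decanter</code> map from a Redis Sentinel setup could look like the following sketch. The sentinel address, master name, and key pattern are illustrative values to adapt to your topology:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre># Illustrative Sentinel setup
+address=localhost:26379
+mode=Sentinel
+masterName=myMaster
+map=Decanter
+keyPattern=metric.*</pre>
+</div>
+</div>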
 </div>
+<div class="sect3">
+<h4 id="_elasticsearch">1.2.23. Elasticsearch</h4>
 <div class="paragraph">
-<p>It&#8217;s a "standard" elasticsearch configuration file, allowing you to 
configure the embedded elasticsearch instance.</p>
+<p>The Decanter Elasticsearch collector retrieves documents from Elasticsearch 
periodically (scheduled collector).
+By default, it harvests all documents in the given index, but you can also 
specify a query.</p>
 </div>
 <div class="paragraph">
-<p>Warning: if you change the <code>etc/elasticsearch.yml</code> file, you 
have to restart (with the <code>bundle:restart</code> command) the
-Decanter elasticsearch bundle in order to load the changes.</p>
+<p>The <code>decanter-collector-elasticsearch</code> feature installs the 
Elasticsearch collector:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>karaf@root()&gt; feature:install decanter-collector-elasticsearch</pre>
 </div>
-<div class="paragraph">
-<p>The Decanter elasticsearch node also supports loading and override of the 
settings using a
-<code>etc/org.apache.karaf.decanter.elasticsearch.cfg</code> configuration 
file.
-This file is not provided by default, as it&#8217;s used for override of the 
default settings.</p>
 </div>
 <div class="paragraph">
-<p>You can override the following elasticsearch properties in this 
configuration file:</p>
+<p>The feature also installs the <code>etc/org.apache.karaf.decanter.collector.elasticsearch.cfg</code> configuration file containing:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre># HTTP address of the elasticsearch nodes (separated with comma)
+addresses=http://localhost:9200
+
+# Basic username and password authentication (no authentication by default)
+#username=user
+#password=password
+
+# Name of the index to request (decanter by default)
+#index=decanter
+
+# Query to request document (match all by default)
+#query=
+
+# Starting point for the document query (no from by default)
+#from=
+
+# Max number of documents retrieved (no max by default)
+#max=
+
+# Search timeout, in seconds (no timeout by default)
+#timeout=</pre>
+</div>
 </div>
 <div class="ulist">
 <ul>
 <li>
-<p><code>cluster.name</code></p>
-</li>
-<li>
-<p><code>http.enabled</code></p>
-</li>
-<li>
-<p><code>node.data</code></p>
-</li>
-<li>
-<p><code>node.name</code></p>
-</li>
-<li>
-<p><code>node.master</code></p>
-</li>
-<li>
-<p><code>path.data</code></p>
+<p><code>addresses</code> property is the location of the Elasticsearch 
instances. Default is <code>http://localhost:9200</code>.</p>
 </li>
 <li>
-<p><code>network.host</code></p>
+<p><code>username</code> and <code>password</code> properties are used for 
authentication. They are <code>null</code> (no authentication) by default.</p>
 </li>
 <li>
-<p><code>cluster.routing.schedule</code></p>
+<p><code>index</code> property is the Elasticsearch index where to get 
documents. It&#8217;s <code>decanter</code> by default.</p>
 </li>
 <li>
-<p><code>path.plugins</code></p>
+<p><code>query</code> property is a search query to use. Default is 
<code>null</code> meaning all documents in the index are harvested.</p>
 </li>
 <li>
-<p><code>http.cors.enabled</code></p>
+<p>The <code>from</code> and <code>max</code> properties bound the query (starting offset and maximum number of documents retrieved). They are <code>null</code> by default.</p>
 </li>
 <li>
-<p><code>http.cors.allow-origin</code></p>
+<p><code>timeout</code> property limits the query execution. There&#8217;s no 
timeout by default.</p>
 </li>
 </ul>
 </div>
-<div class="paragraph">
-<p>The advantage of using this file is that the elasticsearch node is 
automatically restarted in order to reload the
-settings as soon as you change the cfg file.</p>
-</div>
-</div>
-<div class="sect4">
-<h5 
id="_embedding_decanter_kibana_3_x_only_working_with_elasticsearch_1_x">Embedding
 Decanter Kibana 3.x (only working with Elasticsearch 1.x)</h5>
-<div class="paragraph">
-<p>In addition of the embedded elasticsearch 1.x instance, Decanter also 
provides an embedded Kibana 3.x instance, containing
-ready to use Decanter dashboards.</p>
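+<div class="paragraph">
+<p>For instance, harvesting at most 100 documents from a remote, authenticated cluster could look like the following sketch. The node addresses and credentials are illustrative values, not defaults:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre># Illustrative remote cluster with basic authentication
+addresses=http://es1:9200,http://es2:9200
+username=decanter
+password=secret
+index=decanter
+max=100</pre>
+</div>
+</div>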
 </div>
+<div class="sect3">
+<h4 id="_customizing_properties_in_collectors">1.2.24. Customizing properties 
in collectors</h4>
 <div class="paragraph">
-<p>The <code>kibana</code> feature installs the embedded kibana instance:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>karaf@root()&gt; feature:install kibana/3.1.1</pre>
-</div>
+<p>You can add, rename, or remove properties collected by the collectors before sending them to the dispatcher.</p>
 </div>
 <div class="paragraph">
-<p>By default, the kibana instance is available on 
<code>http://host:8181/kibana</code>.</p>
+<p>In the collector configuration file (for instance 
<code>etc/org.apache.karaf.decanter.collector.jmx-local.cfg</code> for the 
local JMX collector), you
+can add any property. By default, the property is added to the data sent to 
the dispatcher.</p>
 </div>
 <div class="paragraph">
-<p>The Decanter Kibana instance provides ready to use dashboards:</p>
+<p>You can prefix a configuration property with the action to perform before sending:</p>
 </div>
 <div class="ulist">
 <ul>
 <li>
-<p>Karaf dashboard uses the data harvested by the default JMX collector, and 
the log collector. Especially, it provides
-details about the threads, memory, garbage collection, etc.</p>
-</li>
-<li>
-<p>Camel dashboard uses the data harvested by the default JMX collector, or 
the Camel (JMX) collector. It can also
-leverage the Camel Tracer collector. It provides details about routes 
processing time, the failed exchanges, etc. This
-dashboard requires some tuning (updating the queries to match the route 
IDs).</p>
-</li>
-<li>
-<p>ActiveMQ dashboard uses the data harvested by the default JMX collector, or 
the ActiveMQ (JMX) collector. It provides
-details about the pending queue, the system usage, etc.</p>
-</li>
-<li>
-<p>OperatingSystem dashboard uses the data harvested by the system collector. 
The default dashboard expects data containing
-the filesystem usage, and temperature data. It&#8217;s just a sample, you have 
to tune the system collector and adapt this
-dashboard accordingly.</p>
-</li>
-</ul>
-</div>
-<div class="paragraph">
-<p>You can change these dashboards to add new panels, change the existing 
panels, etc.</p>
-</div>
-<div class="paragraph">
-<p>Of course, you can create your own dashboards, starting from blank or 
simple dashboards.</p>
-</div>
-<div class="paragraph">
-<p>By default, Decanter Kibana uses embedded elasticsearch instance. However, 
it&#8217;s possible to use a remote elasticsearch
-instance by providing the elasticsearch parameter on the URL like this for 
instance:</p>
-</div>
-<div class="listingblock">
+<p><code>fields.add.</code> adds a property to the data sent. The following adds the property <code>hello</code> with the value <code>world</code>:</p>
+<div class="literalblock">
 <div class="content">
-<pre>http://localhost:8181/kibana?elasticsearch=http://localhost:9400</pre>
-</div>
+<pre>fields.add.hello=world</pre>
 </div>
 </div>
-<div class="sect4">
-<h5 
id="_embedding_decanter_kibana_4_x_only_working_with_elasticsearch_2_x">Embedding
 Decanter Kibana 4.x (only working with Elasticsearch 2.x)</h5>
-<div class="paragraph">
-<p>In addition of the embedded elasticsearch 2.x instance, Decanter also 
provides an embedded Kibana 4.x instance.</p>
+</li>
+<li>
+<p><code>fields.remove.</code> removes a property from the data sent:</p>
+<div class="literalblock">
+<div class="content">
+<pre>fields.remove.hello=</pre>
 </div>
-<div class="paragraph">
-<p>The <code>kibana</code> feature installs the embedded kibana instance:</p>
 </div>
-<div class="listingblock">
+</li>
+<li>
+<p><code>fields.rename.</code> renames a property to another name:</p>
+<div class="literalblock">
 <div class="content">
-<pre>karaf@root()&gt; feature:install kibana/4.1.2</pre>
+<pre>fields.rename.helo=hello</pre>
 </div>
 </div>
-<div class="paragraph">
-<p>By default, the kibana instance is available on 
<code>http://host:8181/kibana</code>.</p>
+</li>
+</ul>
 </div>
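+<div class="paragraph">
+<p>For instance, combining the three prefixes in a single collector configuration file could look like the following sketch. The property names (<code>environment</code>, <code>ObjectName</code>, <code>hostName</code>) are purely illustrative:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre># Tag every event with an environment name (illustrative)
+fields.add.environment=production
+# Drop a noisy property before dispatch (illustrative)
+fields.remove.ObjectName=
+# Normalize a property name (illustrative)
+fields.rename.hostName=host</pre>
+</div>
+</div>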
-<div class="admonitionblock note">
-<table>
-<tr>
-<td class="icon">
-<div class="title">Note</div>
-</td>
-<td class="content">
-<div class="paragraph">
-<p>Decanter Kibana 4 automatically detects collector features. Then, it 
automatically creates corresponding dashboards.</p>
 </div>
-<div class="paragraph">
-<p>However, you still have a complete control of the visualizations and 
dashboards. You can update the index to
-automatically include new fields and create your own visualizations and 
dashboards.</p>
 </div>
+<div class="sect2">
+<h3 id="_appenders">1.3. Appenders</h3>
 <div class="paragraph">
-<p>The default dashboard displayed is the "System" dashboard, requiring the 
jmx collector.</p>
-</div>
-</td>
-</tr>
-</table>
-</div>
+<p>Decanter appenders receive the data from the collectors and store it in a storage backend.</p>
 </div>
-<div class="sect4">
-<h5 id="_kibana_6_x">Kibana 6.x</h5>
+<div class="sect3">
+<h4 id="_log_2">1.3.1. Log</h4>
 <div class="paragraph">
-<p>The <code>kibana</code> 6.x feature doesn&#8217;t really embeds Kibana like 
Kibana 3 or 4 features.</p>
+<p>The Decanter Log Appender creates a log message for each event received 
from the collectors.</p>
 </div>
 <div class="paragraph">
-<p>However, it&#8217;s a convenient feature that download and starts a Kibana 
instance for you.</p>
+<p>The <code>decanter-appender-log</code> feature installs the log 
appender:</p>
 </div>
 <div class="listingblock">
 <div class="content">
-<pre>karaf@root()&gt; feature:install kibana/6.1.1</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>The Kibana instance is started in a dedicated JVM and bound to port 5601 by 
default. However, the Decanter Kibana feature creates a proxy servlet.</p>
-</div>
-<div class="paragraph">
-<p>So, as for other Kibana features, you can access Kibana using 
<code>http://host:8181/kibana</code> in your browser.</p>
+<pre>karaf@root()&gt; feature:install decanter-appender-log</pre>
 </div>
-<div class="admonitionblock note">
-<table>
-<tr>
-<td class="icon">
-<div class="title">Note</div>
-</td>
-<td class="content">
-<div class="paragraph">
-<p>Decanter Kibana 6.x automatically detects collector features and installs 
the corresponding dashboards.</p>
 </div>
 <div class="paragraph">
-<p>However, in order to work, the only setup you have to do is to create an 
index pattern <code>*</code> with <code>default</code> as name (in the advanced 
settings).</p>
-</div>
-</td>
-</tr>
-</table>
+<p>The log appender doesn&#8217;t require any configuration.</p>
 </div>
 </div>
-<div class="sect4">
-<h5 id="_elasticsearch_head_console">Elasticsearch Head console</h5>
+<div class="sect3">
+<h4 id="_elasticsearch_appender">1.3.2. Elasticsearch Appender</h4>
 <div class="paragraph">
-<p>In addition of the embedded elasticsearch instance, Decanter also provides 
a web console allowing you to monitor and
-manage your elasticsearch cluster. It&#8217;s a ready to use elastisearch-head 
console, directly embedded in Karaf.</p>
+<p>The Decanter Elasticsearch Appender stores the data (coming from the collectors) into an Elasticsearch instance.
+It transforms the data into a JSON document stored in Elasticsearch.</p>
 </div>
 <div class="paragraph">
-<p>The <code>elasticsearch-head</code> feature installs the embedded 
elasticsearch-head web console, corresponding to the
-elasticsearch version you are using.</p>
+<p>The Decanter Elasticsearch appender uses the Elasticsearch REST client 
provided since Elasticsearch 5.x. It can be used with any Elasticsearch 
version.</p>
 </div>
 <div class="paragraph">
-<p>We can install <code>elasticsearch-head</code> 1.x feature, working with 
elasticsearch 1.x:</p>
+<p>The <code>decanter-appender-elasticsearch</code> feature installs this 
appender:</p>
 </div>
 <div class="listingblock">
 <div class="content">
-<pre>karaf@root()&gt; feature:install elasticsearch-head/1.7.3</pre>
+<pre>karaf@root()&gt; feature:install decanter-appender-elasticsearch</pre>
 </div>
 </div>
 <div class="paragraph">
-<p>or 2.x feature, working with elasticsearch 2.x:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>karaf@root()&gt; feature:install elasticsearch-head/2.2.0</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>By default, the elasticsearch-head web console is available on 
<code>http://host:8181/elasticsearch-head</code>.</p>
-</div>
+<p>You can configure the appender (especially the Elasticsearch location) in 
the <code>etc/org.apache.karaf.decanter.appender.elasticsearch.cfg</code> 
configuration file.</p>
 </div>
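As an illustration, a minimal configuration could look like the following sketch. The property names (`address`, `username`, `password`) are assumptions based on typical Decanter Elasticsearch appender setups; check them against the `.cfg` file shipped with the feature:

```properties
# etc/org.apache.karaf.decanter.appender.elasticsearch.cfg
# NB: property names below are assumptions, verify against the shipped file

# Location of the Elasticsearch REST endpoint
address=http://localhost:9200

# Optional credentials
#username=user
#password=password
```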
 </div>
 <div class="sect3">
@@ -4893,7 +4213,7 @@ using the <code>marshaller.target</code>
 <div class="sect3">
 <h4 id="_jdbc_2">1.3.4. JDBC</h4>
 <div class="paragraph">
-<p>The Decanter JDBC appender allows your to store the data (coming from the 
collectors) into a database.</p>
+<p>The Decanter JDBC appender allows you to store the data (coming from the 
collectors) into a database.</p>
 </div>
 <div class="paragraph">
 <p>The Decanter JDBC appender transforms the data as a json string. The 
appender stores the json string and the timestamp
@@ -4940,7 +4260,7 @@ create this datasource using the Karaf <
 </li>
 <li>
 <p>the <code>table.name</code> property contains the table name in the 
database. The Decanter JDBC appender automatically creates
-the table for you, but you can create the table by yourself. The table is 
simple and contains just two column:</p>
+the table for you, but you can create the table by yourself. The table is 
simple and contains just two columns:</p>
 <div class="ulist">
 <ul>
 <li>
@@ -5069,7 +4389,7 @@ destination.uri=direct-vm:decanter</pre>
 </ul>
 </div>
 <div class="paragraph">
-<p>The Camel appender send an exchange. The "in" message body contains a Map 
of the harvested data.</p>
+<p>The Camel appender sends an exchange. The "in" message body contains a Map 
of the harvested data.</p>
 </div>
 <div class="paragraph">
 <p>For instance, in this configuration file, you can specify:</p>
@@ -5215,7 +4535,7 @@ destination.uri=direct-vm:decanter</pre>
 <div class="ulist">
 <ul>
 <li>
-<p>the <code>bootstrap.servers</code> contains a lit of host:port of the Kafka 
brokers. Default value is <code>localhost:9092</code>.</p>
+<p>the <code>bootstrap.servers</code> contains a list of host:port of the 
Kafka brokers. Default value is <code>localhost:9092</code>.</p>
 </li>
 <li>
 <p>the <code>client.id</code> is optional. It identifies the client on the 
Kafka broker.</p>
@@ -5228,13 +4548,13 @@ destination.uri=direct-vm:decanter</pre>
 <div class="ulist">
 <ul>
 <li>
-<p><code>0</code> means the appender doesn&#8217;t wait acknowledge from the 
Kafka broker. Basically, it means there&#8217;s no guarantee that messages have 
been received completely by the broker.</p>
+<p><code>0</code> means the appender doesn&#8217;t wait for an acknowledge 
from the Kafka broker. Basically, it means there&#8217;s no guarantee that 
messages have been received completely by the broker.</p>
 </li>
 <li>
-<p><code>1</code> means the appender waits the acknowledge only from the 
leader. If the leader falls down, it&#8217;s possible messages are lost if the 
replicas are not yet be created on the followers.</p>
+<p><code>1</code> means the appender waits for the acknowledge only from the 
leader. If the leader falls down, it&#8217;s possible messages are lost if the 
replicas have not yet been created on the followers.</p>
 </li>
 <li>
-<p><code>all</code> means the appender waits the acknowledge from the leader 
and all followers. This mode is the most reliable as the appender will receive 
the acknowledge only when all replicas have been created. NB: this mode 
doesn&#8217;t make sense if you have a single node Kafka broker or a 
replication factor set to 1.</p>
+<p><code>all</code> means the appender waits for the acknowledge from the 
leader and all followers. This mode is the most reliable as the appender will 
receive the acknowledge only when all replicas have been created. NB: this mode 
doesn&#8217;t make sense if you have a single node Kafka broker or a 
replication factor set to 1.</p>
 </li>
 </ul>
 </div>
@@ -5249,10 +4569,10 @@ destination.uri=direct-vm:decanter</pre>
 <p>the <code>buffer.memory</code> defines the size of the buffer the appender 
uses to send to the Kafka broker. The default value is 33554432.</p>
 </li>
 <li>
-<p>the <code>key.serializer</code> defines the full qualified class name of 
the Serializer used to serializer the keys. The default is a String serializer 
(<code>org.apache.kafka.common.serialization.StringSerializer</code>).</p>
+<p>the <code>key.serializer</code> defines the fully qualified class name of 
the Serializer used to serialize the keys. The default is a String serializer 
(<code>org.apache.kafka.common.serialization.StringSerializer</code>).</p>
 </li>
 <li>
-<p>the <code>value.serializer</code> defines the full qualified class name of 
the Serializer used to serializer the values. The default is a String 
serializer 
(<code>org.apache.kafka.common.serialization.StringSerializer</code>).</p>
+<p>the <code>value.serializer</code> defines the fully qualified class name of 
the Serializer used to serialize the values. The default is a String serializer 
(<code>org.apache.kafka.common.serialization.StringSerializer</code>).</p>
 </li>
 <li>
 <p>the <code>request.timeout.ms</code> is the time the producer wait before 
considering the message production on the broker fails (default is 5s).</p>
@@ -5270,7 +4590,7 @@ destination.uri=direct-vm:decanter</pre>
 </div>
 </div>
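Pulling the properties above together, a configuration sketch could look like this. The file name and the `topic` key are assumptions (check the file installed by the feature); the other keys are the ones documented above:

```properties
# etc/org.apache.karaf.decanter.appender.kafka.cfg (file name is an assumption)

# List of host:port of the Kafka brokers
bootstrap.servers=localhost:9092

# Optional client identifier on the Kafka broker
#client.id=decanter

# Topic where the collected data is sent (property name is an assumption)
#topic=decanter

# Wait for the acknowledge from the leader and all followers
acks=all

# Fully qualified class names of the key/value serializers
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
```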
 <div class="sect3">
-<h4 id="_redis">1.3.8. Redis</h4>
+<h4 id="_redis_2">1.3.8. Redis</h4>
 <div class="paragraph">
 <p>The Decanter Redis appender sends the data (collected by the collectors) to 
a Redis broker.</p>
 </div>
@@ -5549,7 +4869,7 @@ containing:</p>
 <div class="sect3">
 <h4 id="_network_socket_2">1.3.13. Network socket</h4>
 <div class="paragraph">
-<p>The Decanter network socket appender send the collected data to a remote 
Decanter network socket collector.</p>
+<p>The Decanter network socket appender sends the collected data to a remote 
Decanter network socket collector.</p>
 </div>
 <div class="paragraph">
 <p>The use case could be to dedicate a Karaf instance as a central monitoring 
platform, receiving collected data from
@@ -5736,7 +5056,7 @@ of the OrientDB database to use:</p>
 <div class="sect3">
 <h4 id="_dropwizard_metrics_2">1.3.15. Dropwizard Metrics</h4>
 <div class="paragraph">
-<p>The Dropwizard Metrics appender receives the harvested data from the 
dispatcher and push in a Dropwizard Metrics
+<p>The Dropwizard Metrics appender receives the harvested data from the 
dispatcher and pushes it to a Dropwizard Metrics
 <code>MetricRegistry</code>. You can register this <code>MetricRegistry</code> 
in your own application or use a Dropwizard Metrics Reporter
 to "push" these metrics to some backend.</p>
 </div>
@@ -5828,7 +5148,7 @@ The table is simple and contains just tw
 <div class="sect4">
 <h5 id="_websocket_servlet">WebSocket Servlet</h5>
 <div class="paragraph">
-<p>The <code>decanter-appender-websocket-servlet</code> feature expose a 
websocket on wich client can register. Then, Decanter will send the collected 
data to the connected clients.</p>
+<p>The <code>decanter-appender-websocket-servlet</code> feature exposes a 
websocket on which clients can register. Then, Decanter will send the collected 
data to the connected clients.</p>
 </div>
 <div class="paragraph">
 <p>It&#8217;s very easy to use. First install the feature:</p>
@@ -5839,7 +5159,7 @@ The table is simple and contains just tw
 </div>
 </div>
 <div class="paragraph">
-<p>The feature register the WebSocket endpoint on 
<code>http://localhost:8181/decanter-websocket</code> by default:</p>
+<p>The feature registers the WebSocket endpoint on 
<code>http://localhost:8181/decanter-websocket</code> by default:</p>
 </div>
 <div class="listingblock">
 <div class="content">
@@ -5869,31 +5189,125 @@ ID │ Servlet
 </div>
 </div>
 </div>
+<div class="sect4">
+<h5 id="_prometheus_2">Prometheus</h5>
+<div class="paragraph">
+<p>The <code>decanter-appender-prometheus</code> feature exposes the collected 
metrics for Prometheus scraping:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code>karaf@root()&gt; feature:install 
decanter-appender-prometheus</code></pre>
+</div>
+</div>
+<div class="paragraph">
+<p>The feature registers the Prometheus HTTP servlet on 
<code>http://localhost:8181/decanter/prometheus</code> by default:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code>karaf@root()&gt; http:list
+ID │ Servlet        │ Servlet-Name   │ State       │ Alias             
   │ Url
+───┼────────────────┼────────────────┼─────────────┼──────────────────────┼─────────────────────────
+51 │ MetricsServlet │ ServletModel-2 │ Deployed    │ 
/decanter/prometheus │ [/decanter/prometheus/*]</code></pre>
+</div>
+</div>
+<div class="paragraph">
+<p>You can change the servlet alias in the 
<code>etc/org.apache.karaf.decanter.appender.prometheus.cfg</code> 
configuration file:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay 
highlight"><code>################################################################################
+#
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+################################################################################
+
+################################################
+# Decanter Prometheus Appender Configuration
+################################################
+
+# Prometheus HTTP servlet alias
+#alias=/decanter/prometheus</code></pre>
+</div>
+</div>
+<div class="paragraph">
+<p>The Decanter Prometheus appender exports <code>io.prometheus*</code> 
packages, meaning that you can simply add your metrics to the Decanter 
+Prometheus servlet.
+You just have to import <code>io.prometheus*</code> packages and simply use 
the regular Prometheus code:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code>class YourClass {
+  static final Gauge inprogressRequests = Gauge.build()
+     .name("inprogress_requests").help("Inprogress requests.").register();
+
+  void processRequest() {
+    inprogressRequests.inc();
+    // Your code here.
+    inprogressRequests.dec();
+  }
+}</code></pre>
+</div>
+</div>
+<div class="paragraph">
+<p>Don&#8217;t forget to import <code>io.prometheus*</code> packages in your 
bundle <code>MANIFEST.MF</code>:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code>Import-Package: 
io.prometheus.client;version="[0.8,1)"</code></pre>
+</div>
+</div>
+<div class="paragraph">
+<p>That&#8217;s the only thing you need: your metrics will be available on the 
Decanter Prometheus servlet (again on 
<code>http://localhost:8181/decanter/prometheus</code> by default).</p>
+</div>
+</div>
 </div>
 </div>
 <div class="sect2">
 <h3 id="_alerting">1.4. Alerting</h3>
 <div class="paragraph">
-<p>Decanter provides an alerting feature. It allows you to check values of 
harvested data (coming from
+<p>Decanter provides an alerting feature. It allows you to check values in 
harvested data (coming from
 the collectors) and send alerts when the data is not in the expected state.</p>
 </div>
 <div class="sect3">
-<h4 id="_checker">1.4.1. Checker</h4>
+<h4 id="_service">1.4.1. Service</h4>
 <div class="paragraph">
-<p>The checker is automatically installed as soon as you install an alerter 
feature.</p>
+<p>The alerting service is the core of Decanter alerting.</p>
 </div>
 <div class="paragraph">
-<p>It uses the <code>etc/org.apache.karaf.decanter.alerting.checker.cfg</code> 
configuration file.</p>
+<p>It&#8217;s configured in the 
<code>etc/org.apache.karaf.decanter.alerting.service.cfg</code> configuration 
file where you define
+the alert rules:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>rule.my="{'condition':'message:*','level':'ERROR'}"</pre>
+</div>
 </div>
 <div class="paragraph">
-<p>This file contains the check to perform on the collected properties.</p>
+<p>The rule name has to start with the <code>rule.</code> prefix (see 
<code>rule.my</code> here).</p>
 </div>
 <div class="paragraph">
-<p>The format of this file is:</p>
+<p>The rule definition is in JSON with the following syntax:</p>
 </div>
 <div class="listingblock">
 <div class="content">
-<pre>type.propertyName.alertLevel=checkType:value</pre>
+<pre>{
+    'condition':'QUERY',
+    'level':'LEVEL',
+    'period':'PERIOD',
+    'recoverable':true|false
+}</pre>
 </div>
 </div>
 <div class="paragraph">
@@ -5902,108 +5316,73 @@ the collectors) and send alerts when the
 <div class="ulist">
 <ul>
 <li>
-<p><code>type</code> is optional. It allows you to filter the check for a 
given type of collected data. It&#8217;s particulary interesting
-when Decanter collects multiple JMX object names or servers. You may want to 
perform different checks depending of the type
-or source of the collected data.</p>
+<p><code>condition</code> is an Apache Lucene query (<a 
href="https://lucene.apache.org/core/8_5_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package.description";
 
class="bare">https://lucene.apache.org/core/8_5_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package.description</a>).
+For instance:</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>message:foo*</code> selects all events with <code>message</code> 
containing any string starting with <code>foo</code></p>
 </li>
 <li>
-<p><code>propertyName</code> is the data property key. For instance, 
<code>loggerName</code>, <code>message</code>, 
<code>HeapMemoryUsage.used</code>, etc.</p>
+<p><code>message:* AND other:*</code> selects all events with 
<code>message</code> and <code>other</code> containing anything</p>
 </li>
 <li>
-<p><code>alertLevel</code> is the alerting level for this check. The only two 
possible values are <code>error</code> (critical alert), or
-<code>warn</code> (severe alert).</p>
+<p><code>threadCount:[200 TO *]</code> selects all events with 
<code>threadCount</code> greater than 200</p>
 </li>
 <li>
-<p><code>checkType</code> is the check type. Possible values are 
<code>range</code>, <code>equal</code>, <code>notequal</code>, 
<code>match</code>, and <code>notmatch</code>.</p>
+<p><code>counter:[20 TO 100]</code> selects all events with 
<code>counter</code> between 20 and 100 (both included)</p>
 </li>
 <li>
-<p><code>value</code> is the check value, where the data property value has to 
verify.</p>
+<p><code>foo:bar OR foo:bla</code> selects all events with <code>foo</code> 
containing <code>bar</code> or <code>bla</code></p>
 </li>
 </ul>
 </div>
-<div class="paragraph">
-<p>The Decanter Checker supports numeric or string check.</p>
-</div>
-<div class="paragraph">
-<p>To verify a numeric value, you can use:</p>
-</div>
+</li>
+<li>
+<p><code>level</code> is a string where you can set whatever you want to 
define the alert level. By default, it&#8217;s <code>WARN</code>.</p>
+</li>
+<li>
+<p><code>period</code> is optional and allows you to define a validity period 
for a condition. It means that the condition should match for the period 
duration and, if so, the alert will be thrown after the period.
+The period is a string like this:</p>
 <div class="ulist">
 <ul>
 <li>
-<p><code>range</code> to check if the metric is between two values</p>
+<p><code>5SECONDS</code></p>
 </li>
 <li>
-<p><code>equal</code> to check if the metric is equal to a value</p>
+<p><code>10MINUTES</code></p>
 </li>
 <li>
-<p><code>notequal</code> to check if the metric is not equal to a value</p>
+<p><code>2HOURS</code></p>
 </li>
 </ul>
 </div>
-<div class="paragraph">
-<p>For instance, if you want to check that the number of threads is between 0 
and 70, you can use:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>ThreadCount.error=range:[0,70]</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>You can also filter and specify the type on which we check:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>jmx-local.ThreadCount.error=range:[0,70]</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>If the thread count is out of this range, Decanter will create an error 
alert sent to the alerters.</p>
-</div>
-<div class="paragraph">
-<p>Another example is if you want to check if the myValue is equal to 10:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>myValue.warn=equal:10</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>If myValue is not equal to 10, Decanter will create a warn alert send to 
the alerters.</p>
+</li>
+<li>
+<p><code>recoverable</code> is a flag that defines whether the alert can be 
recovered or not. By default it&#8217;s <code>false</code>. The main difference 
is the number of alert events you will have.
+If not recoverable, you will have an alert for each event matching the 
condition.
+If recoverable, you will have a single alert the first time an event matches 
the condition, and another alert (back to normal) when events no longer match 
the condition.</p>
+</li>
+</ul>
 </div>
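The behavior of the `recoverable` flag can be sketched in a few lines of Java. This is an illustrative model only, not Decanter's actual implementation: each boolean stands for whether an incoming event matches the rule condition.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative model (NOT Decanter's implementation) of the 'recoverable'
// flag: each boolean is "does this event match the rule condition?".
public class AlertModes {

    // recoverable=false: one alert per matching event
    static List<String> nonRecoverable(List<Boolean> matches) {
        List<String> alerts = new ArrayList<>();
        for (boolean m : matches) {
            if (m) {
                alerts.add("ALERT");
            }
        }
        return alerts;
    }

    // recoverable=true: a single alert when the condition starts matching,
    // and a back-to-normal notification when it stops matching
    static List<String> recoverable(List<Boolean> matches) {
        List<String> alerts = new ArrayList<>();
        boolean active = false;
        for (boolean m : matches) {
            if (m && !active) {
                alerts.add("ALERT");
                active = true;
            } else if (!m && active) {
                alerts.add("BACK_TO_NORMAL");
                active = false;
            }
        }
        return alerts;
    }

    public static void main(String[] args) {
        List<Boolean> events = Arrays.asList(true, true, true, false);
        System.out.println(nonRecoverable(events)); // [ALERT, ALERT, ALERT]
        System.out.println(recoverable(events));    // [ALERT, BACK_TO_NORMAL]
    }
}
```

With three matching events followed by a non-matching one, the non-recoverable mode produces three alerts while the recoverable mode produces one alert and one back-to-normal notification.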
 <div class="paragraph">
-<p>To verify a string value, you can use:</p>
+<p>You can use any event property in the rule condition. The alert service 
automatically adds:</p>
 </div>
 <div class="ulist">
 <ul>
 <li>
-<p><code>match</code> to check if the metric matches a regex</p>
+<p><code>alertUUID</code> is a unique string generated by the alert service</p>
 </li>
 <li>
-<p><code>notmatch</code> to check if the matric doesn&#8217;t match a regex</p>
+<p><code>alertTimestamp</code> is the alert timestamp added by the alert 
service</p>
 </li>
 </ul>
 </div>
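Putting the fields together, a complete rule in `etc/org.apache.karaf.decanter.alerting.service.cfg` could look like the following sketch (the rule name and threshold are hypothetical; the fields are the ones documented above):

```properties
# Raise a CRITICAL alert when threadCount stays above 200 for 5 minutes,
# with a back-to-normal notification once it drops again
rule.highThreadCount="{'condition':'threadCount:[200 TO *]','level':'CRITICAL','period':'5MINUTES','recoverable':true}"
```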
-<div class="paragraph">
-<p>For instance, if you want to create an alert when an ERROR log message 
happens, you can use:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>loggerLevel.error=match:ERROR</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>You can also use "complex" regex:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>loggerName.warn=match:(.*)my\.loggger\.name\.(.*)</pre>
-</div>
-</div>
 </div>
 <div class="sect3">
 <h4 id="_alerters">1.4.2. Alerters</h4>
 <div class="paragraph">
-<p>When the value doesn&#8217;t verify the check in the checker configuration, 
an alert is created an sent to the alerters.</p>
+<p>When an event matches an alert rule condition, an alert is created and 
sent to the alerters.</p>
 </div>
 <div class="paragraph">
 <p>Apache Karaf Decanter provides ready to use alerters.</p>
@@ -6011,7 +5390,7 @@ or source of the collected data.</p>
 <div class="sect4">
 <h5 id="_log_3">Log</h5>
 <div class="paragraph">
-<p>The Decanter Log alerter log a message for each alert.</p>
+<p>The Decanter Log alerter logs a message for each alert.</p>
 </div>
 <div class="paragraph">
 <p>The <code>decanter-alerting-log</code> feature installs the log alerter:</p>
@@ -6028,7 +5407,7 @@ or source of the collected data.</p>
 <div class="sect4">
 <h5 id="_e_mail">E-mail</h5>
 <div class="paragraph">
-<p>The Decanter e-mail alerter sends an e-mail for each alert.</p>
+<p>The Decanter e-mail alerter sends an e-mail for alerts.</p>
 </div>
 <div class="paragraph">
 <p>The <code>decanter-alerting-email</code> feature installs the e-mail 
alerter:</p>
@@ -6051,7 +5430,8 @@ the SMTP server and e-mail addresses to
 # From e-mail address
 from=
 
-# To e-mail address
+# To e-mail addresses, you can define here a list of recipients separated by 
commas
+# For example: [email protected],[email protected],[email protected]
 to=
 
 # Hostname of the SMTP server
@@ -6071,7 +5451,12 @@ ssl=false
 #username=
 
 # Optionally, password for the SMTP server
-#password=</pre>
+#password=
+
+# e-mail velocity templates
+#subject.template=/path/to/subjectTemplate.vm
+#body.template=/path/to/bodyTemplate.vm
+#body.type=text/plain</pre>
 </div>
 </div>
 <div class="ulist">
@@ -6103,8 +5488,60 @@ ssl=false
 <li>
 <p>the <code>password</code> property is optional and specifies the password 
to connect to the SMTP server</p>
 </li>
+<li>
+<p>the <code>subject.template</code> property allows you to provide your own 
Velocity (<a href="http://velocity.apache.org"; 
class="bare">http://velocity.apache.org</a>) template to create the subject of 
the message</p>
+</li>
+<li>
+<p>the <code>body.template</code> property allows you to provide your own 
Velocity (<a href="http://velocity.apache.org"; 
class="bare">http://velocity.apache.org</a>) template to create and format the 
body of the message</p>
+</li>
+<li>
+<p>the <code>body.type</code> property allows you to define the message 
content type, depending on whether you send an HTML or a plain text message.</p>
+</li>
+</ul>
+</div>
+<div class="paragraph">
+<p>Optionally, you can add:</p>
+</div>
+<div class="ulist">
+<ul>
+<li>
+<p><code>cc</code> to add email carbon copy</p>
+</li>
+<li>
+<p><code>bcc</code> to add email blind carbon copy</p>
+</li>
 </ul>
 </div>
+<div class="paragraph">
+<p>The email alerter is also able to use collected data properties.</p>
+</div>
+<div class="paragraph">
+<p>For instance, <code>subject</code> can look like <code>This is my 
${property}</code> where <code>${property}</code> is replaced by the 
<code>property</code> value.</p>
+</div>
+<div class="paragraph">
+<p>The email alerter is also able to use collected data properties for subject 
or body (including replacement).
+It looks for <code>body.template.location</code> and 
<code>subject.template.location</code> collected data properties.</p>
+</div>
+<div class="paragraph">
+<p>For instance, a body Velocity template looks like this:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code>#if ($event.get("alertBackToNormal") == 
true)
+$event.get("alertLevel") alert: $event.get("alertAttribute") was out of the 
pattern $event.get("alertPattern") but back to normal now
+#else
+$event.get("alertLevel") alert: $event.get("alertAttribute") is out of the 
pattern $event.get("alertPattern")
+#end
+
+Details:
+#foreach ($key in $event.keySet())
+ $key : $event.get($key)
+#end</code></pre>
+</div>
+</div>
+<div class="paragraph">
+<p>where <code>$event</code> is the map containing all event properties.</p>
+</div>
 </div>
 <div class="sect4">
 <h5 id="_camel_2">Camel</h5>
@@ -6174,6 +5611,114 @@ alert.destination.uri=direct-vm:decanter
 </div>
 </div>
 </div>
+<div class="sect4">
+<h5 id="_using_existing_appenders">Using existing appenders</h5>
+<div class="paragraph">
+<p>Actually, a Decanter alerter is a "regular" Decanter appender. The 
difference is the events topic the appender listens on.
+By default, the appenders listen on <code>decanter/collect/*</code>.
+To turn an appender into an alerter, it just has to listen on 
<code>decanter/alert/*</code>.</p>
+</div>
+<div class="paragraph">
+<p>For instance, you can create a new instance of elasticsearch appender by 
creating 
<code>etc/org.apache.karaf.decanter.appender.elasticsearch-myalert.cfg</code> 
containing:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>event.topics=decanter/alert/*
+...</pre>
+</div>
+</div>
+<div class="paragraph">
+<p>With this configuration, you have an Elasticsearch alerter that will store 
the alerts into an Elasticsearch instance.</p>
+</div>
+</div>
+</div>
+</div>
+<div class="sect2">
+<h3 id="_processors">1.5. Processors</h3>
+<div class="paragraph">
+<p>Decanter Processors are optional. They receive data from the collectors, 
apply processing logic to the received event, and send a new event to the 
appenders.</p>
+</div>
+<div class="paragraph">
+<p>The processors are listening for incoming events on 
<code>decanter/collect/*</code> dispatcher topics and send processed events to 
<code>decanter/process/*</code> dispatcher topics.
+By default, the appenders are listening on <code>decanter/collect/*</code> 
topics. If you want to append processed events, you have to configure the 
appenders

[... 132 lines stripped ...]
