fiery06 commented on a change in pull request #373:
URL: https://github.com/apache/james-project/pull/373#discussion_r611533343



##########
File path: docs/modules/servers/pages/distributed/operate/metrics.adoc
##########
@@ -7,16 +7,101 @@ for keeping track of some core metrics of James.
 Such metrics are made available via JMX. You can connect for instance using VisualVM and the associated
 mbean plugins.
 
-You can also export the metrics to ElasticSearch and visualize them with https://grafana.com/[Grafana].
-See xref:distributed/configure/elasticsearch.adoc#_exporting_metrics_directly_to_elasticsearch[elaticseach.properties]
-documentation for more details on how to set this up.
+We also support displaying them via https://grafana.com/[Grafana]. Two methods can be used to back the Grafana display:
 
-If some metrics seem abnormally slow despite in depth database
-performance tuning, feedback is appreciated as well on the bug tracker,
-the user mailing list or our Gitter channel (see our
-http://james.apache.org/#second[community page]) . Any additional
-details categorizing the slowness are appreciated as well (details of
-the slow requests for instance).
+ - Prometheus metric collection - Data is exposed on an HTTP endpoint for Prometheus to scrape.
+ - ElasticSearch metric collection - This method is deprecated and will be removed in the next version.
+ 
+== Expose metrics for Prometheus collection
+
+Metrics can be exposed over HTTP and made available by using ``extensions.routes`` in the James https://github.com/apache/james-project/blob/master/docs/modules/servers/pages/distributed/configure/webadmin.adoc[webadmin.properties] file:
+....
+extensions.routes=org.apache.james.webadmin.dropwizard.MetricsRoutes
+....
+You can test the result by accessing:
+....
+http://james_server:8000/metrics
+....
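+
+For instance, a quick check from the command line (an illustration only; it assumes the WebAdmin port is 8000 and that ``james_server`` resolves to your James host from where you run the command):
+....
+curl -s http://james_server:8000/metrics | head
+....
+The endpoint should answer with plain-text metrics suitable for Prometheus scraping, one metric per line.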
+
+== Running James with Prometheus
+
+Make the changes below to the scrape job configuration in ``prometheus.yml`` to collect the data for the Grafana dashboard:
+....
+global:
+  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
+  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
+  # scrape_timeout is set to the global default (10s).
+
+alerting:
+  alertmanagers:
+  - static_configs:
+    - targets:
+      # - alertmanager:9093
+
+rule_files:
+  # - "first_rules.yml"
+  # - "second_rules.yml"
+
+scrape_configs:
+  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
+  - job_name: 'prometheus'
+    scrape_interval: 5s
+    static_configs:
+      - targets: ['localhost:9090']
+  - job_name: 'Apache James'
+    scrape_interval: 5s
+    metrics_path: /metrics
+    static_configs:
+      - targets: ['james:8000']
+....
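+
+As an illustration, one way to start Prometheus with this configuration is the official ``prom/prometheus`` Docker image (a minimal sketch; it assumes Docker is available and that the ``james`` hostname used in the scrape config is resolvable from the Prometheus container, for instance because both containers share a Docker network):
+....
+docker run -d --name prometheus \
+  -p 9090:9090 \
+  -v $PWD/prometheus.yml:/etc/prometheus/prometheus.yml \
+  prom/prometheus
+....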
+
+You can download the dashboard JSON files and use https://grafana.com/tutorials/provision-dashboards-and-data-sources/[Grafana provisioning] to make the metrics datasource and dashboards available after container creation. [Insert link to Prometheus json files.]
+
+Update the https://github.com/grafana/grafana/blob/master/conf/sample.ini[grafana.ini] configuration file at ``/etc/grafana/grafana.ini`` to override the default configuration options. You only need to change the provisioning folder path:
+
+```
+;provisioning = /etc/grafana/provisioning
+```
+
+Create the provisioning folder tree and copy all the dashboard JSON files to ``/provisioning/dashboards/james/``:
+
+    |-- provisioning
+        |-- dashboards
+            |-- defaults.yml
+            |-- james
+        |-- datasources
+            |-- prometheus.yml
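+
+The two YAML files referenced above are standard Grafana provisioning descriptors. Their exact contents depend on your setup; a minimal sketch could look like this (the datasource URL assumes Prometheus is reachable from Grafana as ``prometheus:9090``, and the dashboards path assumes the provisioning folder is copied or mounted at ``/etc/grafana/provisioning``):
+....
+# provisioning/dashboards/defaults.yml
+apiVersion: 1
+providers:
+  - name: 'james'
+    folder: ''
+    type: file
+    options:
+      path: /etc/grafana/provisioning/dashboards/james
+
+# provisioning/datasources/prometheus.yml
+apiVersion: 1
+datasources:
+  - name: Prometheus
+    type: prometheus
+    access: proxy
+    url: http://prometheus:9090
+    isDefault: true
+....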

Review comment:
       Name changed to avoid confusion.



