I worked on those things a year ago, specifically for Fabric, so there's a lot of overlap between what you propose and the Insight stuff inside Fabric. I think we should be able to contribute what we have too, trying to abstract away the Fabric-specific parts. There's plenty of stuff we already have (JMX, Camel, Jetty, ActiveMQ and Pax interceptors, Elasticsearch + indices housekeeping, etc.), so there's no need to reinvent the wheel here. There's definitely a great need and potential here, and I'd love to collaborate in this area.
2014-10-14 17:12 GMT+02:00 Jean-Baptiste Onofré <[email protected]>:

> Hi all,
>
> First of all, sorry for this long e-mail ;)
>
> Some weeks ago, I blogged about the usage of ELK
> (Logstash/Elasticsearch/Kibana) with Karaf, Camel, ActiveMQ, etc. to
> provide a monitoring dashboard (knowing what happens in Karaf and being
> able to store it for a long period):
>
> http://blog.nanthrax.net/2014/03/apache-karaf-cellar-camel-activemq-monitoring-with-elk-elasticsearch-logstash-and-kibana/
>
> While this solution works fine, it has some drawbacks:
> - it requires additional middleware on the machines: in addition to
> Karaf itself, we have to install Logstash, Elasticsearch nodes, and the
> Kibana console
> - it's not usable "out of the box": you need at least to configure
> Logstash (with the different input/output plugins) and Kibana (to
> create the dashboards that you need)
> - it doesn't cover all the monitoring needs, especially in terms of
> SLA: we want to be able to raise alerts on certain events (for
> instance, when a regex matches a log message, when a feature is
> uninstalled, when a JMX metric is greater than a given value, etc.)
>
> Actually, Karaf (and the related projects) already provide most (if not
> all) of the data required for monitoring. However, it would be very
> helpful to have some "glue", ready to use and more user friendly,
> including storage of the metrics/monitoring data.
>
> With this in mind, I started a prototype of a monitoring solution for
> Karaf and the applications running in Karaf.
> The purpose is to be very extensible, flexible, and easy to install and
> use.
>
> In terms of architecture, it has the following components:
>
> 1/ Collectors & SLA Policies
> The collectors are services responsible for harvesting monitoring data.
> We have two kinds of collectors:
> - the polling collectors are invoked periodically by a scheduler
> - the event-driven collectors react to events
> Two collectors are already available:
> - the JMX collector is a polling collector which harvests all MBean
> attributes
> - the Log collector is an event-driven collector, implementing a
> PaxAppender, which reacts when a log message occurs
> We can plan the following collector:
> - a Camel Tracer collector would be an event-driven collector, acting
> as a Camel interceptor. It would allow tracing any Exchange in Camel.
>
> It's all very dynamic (thanks to OSGi services), so it's possible to
> add a new custom collector (a user/custom implementation).
>
> The collectors are also responsible for checking the SLA. As the SLA
> policies are tied to the collected data, it makes sense that the
> collector validates the SLA and calls/delegates the alert to the SLA
> services.
>
> 2/ Scheduler
> The scheduler service is responsible for calling the polling
> collectors, gathering the harvested data, and delegating to the
> dispatcher.
> We already have a simple scheduler (just a thread), but we can plan a
> Quartz scheduler (for advanced cron/trigger configuration), and another
> one leveraging the Karaf scheduler.
>
> 3/ Dispatcher
> The dispatcher is called by the scheduler or by the event-driven
> collectors to dispatch the collected data to the appenders.
>
> 4/ Appenders
> The appender services are responsible for sending/storing the collected
> data to target systems.
> For now, we have two appenders:
> - a log appender which just logs the collected data
> - an Elasticsearch appender which sends the collected data to an
> Elasticsearch instance.
> For now, it uses an "external" Elasticsearch, but I'm working on an
> elasticsearch feature allowing Elasticsearch to be embedded in Karaf
> (it's mostly done).
> We can plan the following other appenders:
> - redis, to send the collected data to the Redis messaging system
> - jdbc, to store the collected data in a database
> - jms, to send the collected data to a JMS broker (like ActiveMQ)
> - camel, to send the collected data to a Camel direct-vm/vm endpoint of
> a route (it would create an internal route)
>
> 5/ Console/Kibana
> The console is composed of two parts:
> - an AngularJS/Bootstrap layer for configuring the SLA and the global
> settings
> - an embedded Kibana instance with pre-configured dashboards (when the
> Elasticsearch appender is used). We will have a set of ready-made
> Lucene queries and a kind of "Karaf/Camel/ActiveMQ/CXF" dashboard
> template. The Kibana instance will be embedded in Karaf (not external).
>
> Of course, we have ready-to-use features, making it very easy to
> install just the modules that we want.
>
> I named the prototype Karaf Decanter. I don't have a preference about
> the name, or about the location of the code (it could be a Karaf
> subproject like Cellar or Cave, or live directly in the Karaf
> codebase).
>
> Thoughts?
>
> Regards
> JB
> --
> Jean-Baptiste Onofré
> [email protected]
> http://blog.nanthrax.net
> Talend - http://www.talend.com
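
To make this concrete, here is roughly how I picture the collector SPI from point 1. It's a minimal sketch with names I made up (PollingCollector, JmxCollector), not the actual prototype code; the JMX walk itself only needs the standard javax.management API:

import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical SPI: a polling collector returns one snapshot of data
// each time the scheduler invokes it.
interface PollingCollector {
    Map<String, Object> collect() throws Exception;
}

// Walks every registered MBean and reads all readable attributes,
// keyed by "<ObjectName>.<attribute>".
class JmxCollector implements PollingCollector {
    @Override
    public Map<String, Object> collect() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Map<String, Object> data = new HashMap<>();
        for (ObjectName name : server.queryNames(null, null)) {
            for (MBeanAttributeInfo attr : server.getMBeanInfo(name).getAttributes()) {
                if (!attr.isReadable()) {
                    continue;
                }
                try {
                    data.put(name + "." + attr.getName(),
                            server.getAttribute(name, attr.getName()));
                } catch (Exception e) {
                    // Some attributes throw on read; skip them so a single
                    // bad MBean does not fail the whole poll.
                }
            }
        }
        return data;
    }
}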
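
The SLA side of point 1 could then hang off the same snapshots: a policy inspects the collected data and raises an alert when it is violated. Again a sketch, with assumed names and an assumed "log.message" key, covering two of the cases from the mail (a JMX metric above a threshold, a regex hit on log messages):

import java.util.Map;
import java.util.regex.Pattern;

// Hypothetical SLA hook: a policy inspects each collected snapshot and
// raises an alert when it is violated.
interface SlaPolicy {
    void check(Map<String, Object> data);
}

// Alerts when a numeric metric (e.g. a JMX attribute) exceeds a maximum.
class ThresholdPolicy implements SlaPolicy {
    private final String metric;
    private final double max;

    ThresholdPolicy(String metric, double max) {
        this.metric = metric;
        this.max = max;
    }

    @Override
    public void check(Map<String, Object> data) {
        Object value = data.get(metric);
        if (value instanceof Number && ((Number) value).doubleValue() > max) {
            alert(metric + " = " + value + " exceeds " + max);
        }
    }

    private void alert(String message) {
        // Placeholder: a real implementation would delegate to the
        // registered alerting services.
        System.err.println("SLA ALERT: " + message);
    }
}

// Alerts when a log message matches a regex; "log.message" is an
// assumed key, not necessarily what the prototype uses.
class LogRegexPolicy implements SlaPolicy {
    private final Pattern pattern;

    LogRegexPolicy(String regex) {
        this.pattern = Pattern.compile(regex);
    }

    @Override
    public void check(Map<String, Object> data) {
        Object message = data.get("log.message");
        if (message != null && pattern.matcher(message.toString()).find()) {
            System.err.println("SLA ALERT: log matches " + pattern.pattern());
        }
    }
}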
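
For points 2 to 4, the "simple scheduler (just a thread)", the dispatcher and the appender SPI could be as small as this (same caveat: the names are mine, and a real implementation would discover collectors and appenders as OSGi services instead of taking lists):

import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical SPI: an appender sends/stores one collected snapshot.
interface Appender {
    void append(Map<String, Object> data);
}

// Fans each snapshot out to all registered appenders.
class Dispatcher {
    private final List<Appender> appenders;

    Dispatcher(List<Appender> appenders) {
        this.appenders = appenders;
    }

    void dispatch(Map<String, Object> data) {
        for (Appender appender : appenders) {
            appender.append(data);
        }
    }
}

// The "simple scheduler (just a thread)": polls every collector on a
// fixed period and hands the harvested data to the dispatcher.
class SimpleScheduler {
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    void start(List<PollingCollector> collectors, Dispatcher dispatcher,
               long periodSeconds) {
        executor.scheduleAtFixedRate(() -> {
            for (PollingCollector collector : collectors) {
                try {
                    dispatcher.dispatch(collector.collect());
                } catch (Exception e) {
                    // A broken collector must not kill the polling thread.
                    e.printStackTrace();
                }
            }
        }, 0, periodSeconds, TimeUnit.SECONDS);
    }

    void stop() {
        executor.shutdownNow();
    }
}

Wiring it up is then just new SimpleScheduler().start(collectors, new Dispatcher(appenders), 60) for a one-minute poll; the event-driven collectors would call dispatcher.dispatch(...) directly instead of going through the scheduler.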
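
And an Elasticsearch appender doesn't need an embedded client to start with; plain HTTP against the document index API is enough for a first cut. The URL and the naive JSON encoding below are placeholders (a real appender would use a proper JSON library with escaping), and it plugs into the Appender interface from the previous sketch:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Sketch of an Elasticsearch appender: POSTs each snapshot as one JSON
// document. The endpoint (index/type) is an assumption for illustration.
class ElasticsearchAppender implements Appender {
    private final String indexUrl; // e.g. "http://localhost:9200/decanter/collect"

    ElasticsearchAppender(String indexUrl) {
        this.indexUrl = indexUrl;
    }

    @Override
    public void append(Map<String, Object> data) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(indexUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(toJson(data).getBytes(StandardCharsets.UTF_8));
            }
            conn.getResponseCode(); // force the request; ignore the body here
        } catch (Exception e) {
            // Losing one snapshot is acceptable for a monitoring appender.
            e.printStackTrace();
        }
    }

    // Naive JSON rendering, for the sketch only: values are stringified
    // and not escaped.
    private String toJson(Map<String, Object> data) {
        StringBuilder json = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> entry : data.entrySet()) {
            if (!first) {
                json.append(',');
            }
            first = false;
            json.append('"').append(entry.getKey()).append("\":\"")
                .append(entry.getValue()).append('"');
        }
        return json.append('}').toString();
    }
}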
