Replies inline!

Thank you so much Jimmi.

Regards
JB

On 10/15/2014 10:47 AM, Jimmi Dyson wrote:
Hi JB,

Please don't think I'm advertising/trolling - I love Karaf & love the idea
of Decanter. There is definitely stuff we can contribute & when the
proposal goes through we can discuss what that is.

Thanks,
Jimmi

On 15 October 2014 09:27, Jean-Baptiste Onofré <[email protected]> wrote:

    Hi Jimmi,

    If you are ready to donate hawtio and the fabric metric stuff into
    Decanter, that would be great.
    However, I'm not sure that's what you proposed.

    But you got the purpose of the Decanter proposal: build a monitoring
    platform for Karaf, as part of the Karaf subproject ecosystem. So
    any contribution or idea is more than welcome!

    As I said in a previous e-mail, I don't want any trolling or any
    "side project promotion/advertising" in this thread. I started the
    Decanter prototype and proposal simply to address a need in the
    Apache Karaf ecosystem.

    Thanks a lot for your feedback, and I'm eager to see contributions
    for hawtio and more.

    By the way, I think I will start a formal vote for donation later today.

    Regards
    JB


    On 10/15/2014 10:12 AM, Jimmi Dyson wrote:

        Continuing on from what Guillaume said, in fabric we have some of the
        features that you're proposing for Decanter & should be able to
        contribute back, refactoring anything that is fabric specific (of
        which there isn't much). We have an Elasticsearch configuration
        factory that uses the SMX Elasticsearch bundle. This includes plugin
        support (albeit plugins need to be fragment bundles) that could be
        used with no refactoring at all (in fabric we have a custom discovery
        plugin that doesn't need to be installed) - we could probably get
        this into the SMX Elasticsearch bundle? This allows you to embed full
        clusters inside Karaf deployments & even have Elasticsearch
        client/transport/tribe nodes hooked up to externally configured
        clusters if you want.

        Fabric also has pluggable log/metric collection services & pluggable
        log/metric storage services & again this is something that should be
        usable with little/no refactoring in Decanter if you wanted.

        hawtio is a great front end (nothing fabric specific in it, although
        there are plugins for fabric) & would give you a single place for all
        your dashboards from different tools (Kibana, Grafana, etc).

        I'm really happy to see this proposal & hope we can contribute
        to it.

        On 15 October 2014 08:41, Charlie Mordant <[email protected]> wrote:

             Hi J.B.,

             I saw a HawtIO plugin for Kibana (but didn't test it), so it's
             possible that you'll have nothing to do for your HawtIO
             integration :p.

             In addition, a Decanter Karaf feature could be made to ease
             user interaction with Elasticsearch (installing either
             Jelastic or Spring-data-elasticsearch).

             Maybe a Docker container could be implemented to ease the
             Elasticsearch+Kibana+Logstash installation (and maybe a Karaf
             command for installing it all: the container in a Unix env,
             boot2docker + the container in a Mac OS X one, Vagrant +
             CoreOS + the container in a Windows one...).

             Nice name, by the way: Decanter goes well with a Karaf and an
             old Pomard :), and it also conveys the goal of the product
             well. It's also a nice idea; I had thought of doing the same
             for my distro, and it's far better as an Apache/Karaf product!

             Good luck with the job; I'll try an Elasticsearch Karaf
             feature on my side (then give it to you).

             Best regards,

             2014-10-15 5:17 GMT+02:00 Andreas Pieber <[email protected]>:

                 Hey,

                 The collection definitely sounds like a perfect idea for a
                 Karaf subproject to me. Besides the great potential of the
                 components, I especially like the fitting name 😊 +1

                 Kind regards,
                 Andreas

                 On Oct 14, 2014 5:13 PM, "Jean-Baptiste Onofré" <[email protected]> wrote:

                     Hi all,

                     First of all, sorry for this long e-mail ;)

                     Some weeks ago, I blogged about the usage of ELK
                     (Logstash/Elasticsearch/Kibana) with Karaf, Camel,
                     ActiveMQ, etc. to provide a monitoring dashboard (know
                     what happens in Karaf and be able to store it for a
                     long period):

                     http://blog.nanthrax.net/2014/03/apache-karaf-cellar-camel-activemq-monitoring-with-elk-elasticsearch-logstash-and-kibana/

                     While this solution works fine, there are some drawbacks:
                     - it requires additional middleware on the machines:
                     in addition to Karaf itself, we have to install
                     Logstash, Elasticsearch nodes, and the Kibana console
                     - it's not usable "out of the box": you need at least
                     to configure Logstash (with the different input/output
                     plugins) and Kibana (to create the dashboards that you
                     need)
                     - it doesn't cover all the monitoring needs, especially
                     in terms of SLA: we want to be able to raise alerts
                     depending on some events (for instance, when a regex is
                     matched in the log messages, when a feature is
                     uninstalled, when a JMX metric is greater than a given
                     value, etc.)

                     Actually, Karaf (and related projects) already provides
                     most (if not all) of the data required for monitoring.
                     However, it would be very helpful to have a "glue"
                     layer, ready to use and more user friendly, including
                     storage of the metrics/monitoring data.

                     With this in mind, I started a prototype of a
                     monitoring solution for Karaf and the applications
                     running in Karaf. The purpose is to be very extensible,
                     flexible, and easy to install and use.

                     In terms of architecture, we find the following
                     components:

                     1/ Collectors & SLA Policies
                     The collectors are services responsible for harvesting
                     monitoring data.
                     We have two kinds of collectors:
                     - the polling collectors are invoked periodically by a
                     scheduler
                     - the event-driven collectors react to events
                     Two collectors are already available:
                     - the JMX collector is a polling collector which
                     harvests all MBean attributes
                     - the Log collector is an event-driven collector,
                     implementing a PaxAppender, which reacts when a log
                     message occurs
                     We plan the following collectors:
                     - a Camel Tracer collector would be an event-driven
                     collector, acting as a Camel Interceptor. It would
                     allow tracing any Exchange in Camel.

                     It's very dynamic (thanks to OSGi services), so it's
                     possible to add a new custom collector (a user/custom
                     implementation).

                     The collectors are also responsible for checking the
                     SLA. As the SLA policies are tied to the collected
                     data, it makes sense for the collector to validate the
                     SLA and call/delegate the alert to the SLA services.
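To make the collector contract concrete, here is a minimal plain-Java sketch of the two collector kinds and a simple SLA check. The interface and class names are illustrative assumptions, not the actual prototype API; in Karaf these would be registered as OSGi services:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the two collector kinds; names are illustrative only.
public class CollectorSketch {

    /** A polling collector is invoked periodically by the scheduler. */
    public interface PollingCollector {
        Map<String, Object> collect();
    }

    /** An event-driven collector reacts to events (e.g. a log message). */
    public interface EventDrivenCollector {
        void onEvent(Map<String, Object> event);
    }

    /** Example polling collector harvesting a couple of JVM metrics. */
    public static class JvmCollector implements PollingCollector {
        @Override
        public Map<String, Object> collect() {
            Map<String, Object> data = new HashMap<>();
            Runtime rt = Runtime.getRuntime();
            data.put("heapUsed", rt.totalMemory() - rt.freeMemory());
            data.put("heapMax", rt.maxMemory());
            return data;
        }
    }

    /** SLA check done by the collector: alert when a numeric metric exceeds a threshold. */
    public static boolean violatesSla(Map<String, ?> data, String key, long threshold) {
        Object value = data.get(key);
        return value instanceof Number && ((Number) value).longValue() > threshold;
    }

    public static void main(String[] args) {
        Map<String, Object> data = new JvmCollector().collect();
        System.out.println("heapUsed=" + data.get("heapUsed")
            + " violatesSla=" + violatesSla(data, "heapUsed", 0L));
    }
}
```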

                     2/ Scheduler
                     The scheduler service is responsible for calling the
                     polling collectors, gathering the harvested data, and
                     delegating to the dispatcher.
                     We already have a simple scheduler (just a thread), but
                     we plan a Quartz scheduler (for advanced cron/trigger
                     configuration) and another one leveraging the Karaf
                     scheduler.
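As a rough illustration of the "simple scheduler" idea, the sketch below periodically invokes a polling collector and hands each harvest to a dispatcher callback. Names and signatures are assumptions for illustration, not the real code:

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch of a simple scheduler: call the polling collector at a fixed
// period and delegate each harvest to the dispatcher.
public class SimpleScheduler {
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();

    public void schedule(Supplier<Map<String, Object>> collector,
                         Consumer<Map<String, Object>> dispatcher,
                         long periodMillis) {
        executor.scheduleAtFixedRate(
            () -> dispatcher.accept(collector.get()),
            0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        executor.shutdownNow();
    }

    /** Demonstration: run a few collection cycles and count the dispatches. */
    public static int demoRun() {
        AtomicInteger dispatched = new AtomicInteger();
        SimpleScheduler scheduler = new SimpleScheduler();
        scheduler.schedule(() -> Map.of("tick", 1),
                           data -> dispatched.incrementAndGet(), 20);
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        scheduler.stop();
        return dispatched.get();
    }

    public static void main(String[] args) {
        System.out.println("dispatched " + demoRun() + " times");
    }
}
```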

                     3/ Dispatcher
                     The dispatcher is called by the scheduler or the
                     event-driven collectors to dispatch the collected data
                     to the appenders.
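In code, the dispatcher's job is little more than a fan-out to the registered appenders. This is a sketch with illustrative names; in Karaf the appenders would be discovered as OSGi services rather than held in a list:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the dispatcher: forward collected data to every appender.
public class DispatcherSketch {

    public interface Appender {
        void append(Map<String, Object> data);
    }

    private final List<Appender> appenders = new CopyOnWriteArrayList<>();

    public void addAppender(Appender appender) {
        appenders.add(appender);
    }

    /** Called by the scheduler or by an event-driven collector. */
    public void dispatch(Map<String, Object> data) {
        for (Appender appender : appenders) {
            appender.append(data);
        }
    }

    /** Demonstration: two appenders each receive the dispatched data once. */
    public static int demoDispatchCount() {
        DispatcherSketch dispatcher = new DispatcherSketch();
        AtomicInteger hits = new AtomicInteger();
        dispatcher.addAppender(data -> hits.incrementAndGet());
        dispatcher.addAppender(data -> hits.incrementAndGet());
        dispatcher.dispatch(Map.of("key", "value"));
        return hits.get();
    }

    public static void main(String[] args) {
        System.out.println("appender invocations: " + demoDispatchCount());
    }
}
```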

                     4/ Appenders
                     The appender services are responsible for sending/storing
                     the collected data to target systems.
                     For now, we have two appenders:
                     - a log appender which simply logs the collected data
                     - an Elasticsearch appender which sends the collected
                     data to an Elasticsearch instance. For now, it uses an
                     "external" Elasticsearch, but I'm working on an
                     Elasticsearch feature allowing Elasticsearch to be
                     embedded in Karaf (it's mostly done).
                     We plan the following other appenders:
                     - redis to send the collected data to the Redis
                     messaging system
                     - jdbc to store the collected data in a database
                     - jms to send the collected data to a JMS broker (like
                     ActiveMQ)
                     - camel to send the collected data to a Camel
                     direct-vm/vm endpoint of a route (it would create an
                     internal route)
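The simplest appender, the log appender, could look roughly like this. This is an illustrative sketch using java.util.logging; the real prototype may differ:

```java
import java.util.Map;
import java.util.logging.Logger;

// Sketch of the log appender: render collected data as one log line.
public class LogAppenderSketch {

    public interface Appender {
        void append(Map<String, Object> data);
    }

    public static class LogAppender implements Appender {
        private static final Logger LOG = Logger.getLogger("decanter.collect");

        @Override
        public void append(Map<String, Object> data) {
            LOG.info(format(data));
        }
    }

    /** Render collected key/value pairs as "key=value" separated by spaces. */
    public static String format(Map<String, ?> data) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, ?> entry : data.entrySet()) {
            if (sb.length() > 0) {
                sb.append(' ');
            }
            sb.append(entry.getKey()).append('=').append(entry.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        new LogAppender().append(Map.of("feature", "decanter"));
    }
}
```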

                     5/ Console/Kibana
                     The console is composed of two parts:
                     - an AngularJS or Bootstrap layer allowing
                     configuration of the SLA and global settings
                     - an embedded Kibana instance with a pre-configured
                     dashboard (when the Elasticsearch appender is used). We
                     will have a set of ready-made Lucene queries and a kind
                     of "Karaf/Camel/ActiveMQ/CXF" dashboard template. The
                     Kibana instance will be embedded in Karaf (not external).

                     Of course, we have ready-to-use features, allowing very
                     easy installation of the modules we want.

                     I named the prototype Karaf Decanter. I don't have a
                     strong preference about the name or the location of the
                     code (it could be a Karaf subproject like Cellar or
                     Cave, or live directly in the Karaf codebase).

                     Thoughts?

                     Regards
                     JB
                     --
                     Jean-Baptiste Onofré
                     [email protected]
                     http://blog.nanthrax.net
                     Talend - http://www.talend.com




             --
             Charlie Mordant

             Full OSGI/EE stack made with Karaf:
        https://github.com/OsgiliathEnterprise/net.osgiliath.parent



    --
    Jean-Baptiste Onofré
    [email protected]
    http://blog.nanthrax.net
    Talend - http://www.talend.com



--
Jean-Baptiste Onofré
[email protected]
http://blog.nanthrax.net
Talend - http://www.talend.com
