Re: Control Karaf boot-up process

2019-01-13 Thread Jean-Baptiste Onofré
Hi Matteo,

you can define a control bundle in etc/startup.properties. The bundles
defined there are installed and started before all other steps.
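
As an illustration, an entry in etc/startup.properties looks roughly like the
sketch below in Karaf 4.x ("mvn URL = start level" lines); the bundle
coordinates and start level are hypothetical:

    # etc/startup.properties (excerpt)
    # The colon is escaped because the file is read as Java properties.
    # A low start level keeps this bundle ahead of everything the features
    # service installs later.
    mvn\:org.example/environment-check/1.0.0 = 5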

Regards
JB

On 13/01/2019 22:07, Matteo Rulli wrote:
> Hello,
> 
> I would like to know if there is a way in Karaf to delay the boot process
> until a specific bundle activator has been executed. In particular, I would
> like to perform some "environment validations" and checks, and proceed with
> the Karaf boot-up only if they pass.
> 
> Thank you for your help,
> Matteo
> 
> 

-- 
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Control Karaf boot-up process

2019-01-13 Thread Matteo Rulli
Hello,

I would like to know if there is a way in Karaf to delay the boot process until 
a specific bundle activator has been executed. In particular, I would like to 
perform some "environment validations" and checks, and proceed with the Karaf 
boot-up only if they pass.

Thank you for your help,
Matteo
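
As an aside, a minimal sketch of the kind of environment-check activator
described above, intended for a bundle listed early in etc/startup.properties;
the class name and the checked property are hypothetical:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.BundleException;

    // Hypothetical "environment validation" activator: it runs when the bundle
    // is started and fails the bundle start if a required property is missing.
    public class EnvironmentCheckActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) throws Exception {
            if (System.getProperty("app.data.dir") == null) {
                // Throwing here prevents this bundle from starting; tying that
                // into aborting the whole boot is up to the deployment.
                throw new BundleException("Missing required property app.data.dir");
            }
        }

        @Override
        public void stop(BundleContext context) {
            // nothing to clean up
        }
    }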




Re: karaf boot

2018-07-25 Thread Jean-Baptiste Onofré
Hi Scott,

yes, my Karaf Boot PoC is still on my GitHub. I will move forward on
this one after the 4.2.1 release and the Vineyard donation.

You can also find the Karaf Boot presentation I did during the last
ApacheCon NA.

Regards
JB

On 25/07/2018 19:39, Scott Lewis wrote:
> Some time ago there was discussion on this list about a smaller
> (smaller/fewer bundles) starting point for karaf called 'karaf boot'.  
> I don't see anything about this on karaf.apache.org... is there still
> work/planning, etc. going on?
> 
> Thanks,
> 
> Scott
> 
> 
> 

-- 
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


karaf boot

2018-07-25 Thread Scott Lewis
Some time ago there was discussion on this list about a smaller 
(smaller/fewer bundles) starting point for karaf called 'karaf boot'.   
I don't see anything about this on karaf.apache.org... is there still 
work/planning, etc. going on?


Thanks,

Scott





Re: How much of the system repository is needed for a karaf boot?

2018-01-16 Thread Jean-Baptiste Onofré

Hi Steinar,

All core jars are taken from the system repository; even the bundles listed in 
startup.properties are resolved from there.


Basically, the minimum you should have is the artifacts referenced in 
startup.properties and, if you don't want remote resolution, the ones required 
by the boot features.
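
For context, the system directory is wired in through the Pax URL Maven
configuration; a sketch of the relevant setting in etc/org.ops4j.pax.url.mvn.cfg
(the exact default value varies by Karaf version, so treat this as illustrative):

    # etc/org.ops4j.pax.url.mvn.cfg (excerpt)
    # Local repositories consulted before any remote repository; the system
    # directory is listed first (further local repositories usually follow).
    org.ops4j.pax.url.mvn.defaultRepositories = \
        file:${karaf.home}/${karaf.default.repository}@id=system.repository@snapshots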


Regards
JB

On 01/16/2018 07:22 PM, Steinar Bang wrote:

The $KARAF_HOME/system directory is the top of an area laid out like a
maven repository, containing (in karaf 4.1.4) 64 jar files.

The repository is the first repository of the
org.ops4j.pax.url.mvn.defaultRepositories configuration setting.

How many of these jar files are needed for Karaf to boot and start
pulling dependencies down from an external Maven repository?  None?  All?
Some? (If "some", is there a way to find out which ones are essential?)

Thanks!


  - Steinar (trying to figure out the proper way to do a debian package
of karaf https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=881297 )



--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


How much of the system repository is needed for a karaf boot?

2018-01-16 Thread Steinar Bang
The $KARAF_HOME/system directory is the top of an area laid out like a
maven repository, containing (in karaf 4.1.4) 64 jar files.

The repository is the first repository of the
org.ops4j.pax.url.mvn.defaultRepositories configuration setting.

How many of these jar files are needed for Karaf to boot and start
pulling dependencies down from an external Maven repository?  None?  All?
Some? (If "some", is there a way to find out which ones are essential?)

Thanks!


 - Steinar (trying to figure out the proper way to do a debian package
   of karaf https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=881297 )



RE: karaf boot

2017-01-16 Thread Brad Johnson
Your questions about the other pieces of the stack and how they play in sort
of echo what this discussion brought up for me.  If there is a collection
of libraries and standard idiomatic elements like CDI to be recommended,
what would that basic stack look like?

 

I'd sort of thought of this as a Service Core as opposed to a ServiceMix.
It might include libraries that are very common dependencies, as well as those
that aren't necessarily used by a lot of other libraries but are used so often
that they'd belong.  The first category would include the obvious ones like
camel-core and slf4j.  The second would be things like JAXB and/or Jackson.

 

Obviously, if CDI is going to be the idiomatic usage, with the examples written
for it, it would make sense to include it.

 

Keeping that stack as slim as possible, but no slimmer, would make it very easy
to then use features to add other things like ActiveMQ, CXF, or rules engines.

 

Keeping it slim would also make it easy to test and create updated revisions
while keeping the number of configuration files down and the number of
dependencies down.

 

I'm not sure what that Service Core would look like, but I think it would be
a useful springboard to build on.  It's easier to add to a stack than
it is to take away.
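
As a rough illustration only, a Karaf feature descriptor for such a "Service
Core" could look something like the sketch below; the feature name, versions
and bundle list are hypothetical, not a recommendation:

    <!-- service-core-features.xml (hypothetical) -->
    <features xmlns="http://karaf.apache.org/xmlns/features/v1.4.0"
              name="service-core-repo">
        <!-- pull in the Camel feature descriptor so camel-core can be a dependency -->
        <repository>mvn:org.apache.camel.karaf/apache-camel/2.20.1/xml/features</repository>

        <feature name="service-core" version="1.0.0">
            <feature>scr</feature>          <!-- Declarative Services runtime -->
            <feature>camel-core</feature>
            <!-- slf4j is already provided by Karaf's pax-logging -->
            <bundle>mvn:com.fasterxml.jackson.core/jackson-core/2.9.4</bundle>
            <bundle>mvn:com.fasterxml.jackson.core/jackson-databind/2.9.4</bundle>
            <bundle>mvn:com.fasterxml.jackson.core/jackson-annotations/2.9.4</bundle>
        </feature>
    </features>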

 

 

From: Nick Baker [mailto:nba...@pentaho.com] 
Sent: Monday, January 16, 2017 11:13 AM
To: user <user@karaf.apache.org>
Subject: Re: karaf boot

 


I believe one of the goals of the "Boot" project should be to provide an
easier introduction to developing with OSGI, Karaf and the broader community
projects (CXF, Camel, etc.). One aspect of this is introducing developers to
the dynamism, breadth and yes complexity of OSGI. 

 

We try to ease people into it here, but there's always another "gotcha"
waiting for them around the corner. We have people here who can educate and
unblock those newbies. There will need to be a good deal of "art" in
progressively introducing these concepts in the documentation. 

 

There's also a lot of synergy between a simple/introduction Karaf instance
and one which is tuned for microservices (startupFeatures only, CDI or SCR,
read-only ConfigAdmin, simplified Aether resolution). This won't always be
the case. Production deployments will want a fully populated /system
directory and cache.clean turned off for instance.

 

As for the Microservice aspect, I do agree that it should have some Remote
Services implementation enabled out of the box, and a Message Bus to service
internal communication as well as remote. Where is Event Admin? We're using
Guava + Camel + JMS to handle inter/intraProcess communication. Anyway, the
combination of the two is a solid foundation to build upon.

 

-Nick Baker


From: Guillaume Nodet <gno...@apache.org>
Sent: Monday, January 16, 2017 10:06:53 AM
To: user
Subject: Re: karaf boot 

 

I have investigated reworking the blueprint core extender on top of DS
months ago, but I did not pursue.  The Felix SCR core is now more reusable
(I made it that way in order to reuse it in pax-cdi), so maybe I could have
another quick look about the feasibility. But I am pessimistic as IIRC, the
problems were more about some requirements in the blueprint spec which could
not be mapped correctly to DS. 

 

2017-01-16 15:44 GMT+01:00 Brad Johnson <bradj...@redhat.com>:

I wonder if there's a way to start the implementation of a CDI common
practice with DS where possible but blueprint where not and then migrate
toward DS. 

 

From my point of view when mentoring new developers there are going to be
two general use cases for CDI, one is just for internal wiring inside a
bundle and dependency injection with internals.  Among other things it makes
testing a heck of a lot easier. 

 

The other use case is an easy way to export services and get references to
them. I'm not sure how well DS and blueprint play together.

 

It is also one of the reasons I've migrated away from using blueprint XML
for routes to the Java DSL.  Consistent and easy for Java developers to
understand.  Because I've used blueprint so much I have a limited
understanding of CDI but from what I've seen of it, it is a very sane way of
handling wire up.

 

So the question I guess is how hard is it to create a migration plan to move
pieces from blueprint to DS under the covers? Does using the Camel Java DSL
make that easier?

 

I don't see a problem for a "next generation" of the stack to say that the
XML variant is no longer being supported and recommend migration to the Java
DSL with CDI.  That isn't difficult in any case. But from the framework
perspective it may eliminate one level of indirection that requires XML
parsing, schema namespaces and the mapping of those through a plugin into
the constituent parts.

 

Would adopting such an approach make conversion away from blueprint easier?
Would it make a migration path easier?

Re: karaf boot

2017-01-16 Thread Nick Baker

I believe one of the goals of the "Boot" project should be to provide an easier 
introduction to developing with OSGi, Karaf and the broader community projects 
(CXF, Camel, etc.). One aspect of this is introducing developers to the 
dynamism, breadth and, yes, complexity of OSGi.


We try to ease people into it here, but there's always another "gotcha" waiting 
for them around the corner. We have people here who can educate and unblock 
those newbies. There will need to be a good deal of "art" in progressively 
introducing these concepts in the documentation.


There's also a lot of synergy between a simple/introduction Karaf instance and 
one which is tuned for microservices (startupFeatures only, CDI or SCR, 
read-only ConfigAdmin, simplified Aether resolution). This won't always be the 
case. Production deployments will want a fully populated /system directory and 
cache.clean turned off for instance.


As for the microservice aspect, I do agree that it should have some Remote 
Services implementation enabled out of the box, and a message bus to serve 
internal as well as remote communication. Where is Event Admin? We're using 
Guava + Camel + JMS to handle inter-/intra-process communication. Anyway, the 
combination of the two is a solid foundation to build upon.
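
As an aside, a minimal sketch of the Guava EventBus part of that combination
(the event and subscriber types are hypothetical; the Camel/JMS bridging for
remote communication is omitted):

    import com.google.common.eventbus.EventBus;
    import com.google.common.eventbus.Subscribe;

    public class IntraProcessBusExample {

        // Hypothetical event type carried on the in-process bus.
        static class JobCompleted {
            final String jobId;
            JobCompleted(String jobId) { this.jobId = jobId; }
        }

        static class JobListener {
            @Subscribe
            public void onJobCompleted(JobCompleted event) {
                System.out.println("Job finished: " + event.jobId);
            }
        }

        public static void main(String[] args) {
            EventBus bus = new EventBus("intra-process");
            bus.register(new JobListener());   // subscribers are plain objects
            bus.post(new JobCompleted("42"));  // delivered synchronously in-process
        }
    }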


-Nick Baker


From: Guillaume Nodet <gno...@apache.org>
Sent: Monday, January 16, 2017 10:06:53 AM
To: user
Subject: Re: karaf boot

I have investigated reworking the blueprint core extender on top of DS months 
ago, but I did not pursue.  The Felix SCR core is now more reusable (I made it 
that way in order to reuse it in pax-cdi), so maybe I could have another quick 
look about the feasibility. But I am pessimistic as IIRC, the problems were 
more about some requirements in the blueprint spec which could not be mapped 
correctly to DS.

2017-01-16 15:44 GMT+01:00 Brad Johnson <bradj...@redhat.com>:
I wonder if there’s a way to start the implementation of a CDI common practice 
with DS where possible but blueprint where not and then migrate toward DS.

From my point of view when mentoring new developers there are going to be two 
general use cases for CDI, one is just for internal wiring inside a bundle and 
dependency injection with internals.  Among other things it makes testing a 
heck of a lot easier.

The other use case is an easy way to export services and get references to 
them. I’m not sure how well DS and blueprint play together.

It is also one of the reasons I’ve migrated away from using blueprint XML for 
routes to the Java DSL.  Consistent and easy for Java developers to understand. 
 Because I’ve used blueprint so much I have a limited understanding of CDI but 
from what I’ve seen of it, it is a very sane way of handling wire up.

So the question I guess is how hard is it to create a migration plan to move 
pieces from blueprint to DS under the covers? Does using the Camel Java DSL 
make that easier?

I don’t see a problem for a “next generation” of the stack to say that the XML 
variant is no longer being supported and recommend migration to the Java DSL 
with CDI.  That isn’t difficult in any case. But from the framework perspective 
it may eliminate one level of indirection that requires XML parsing, schema 
namespaces and the mapping of those through a plugin into the constituent parts.

Would adopting such an approach make conversion away from blueprint easier? 
Would it make a migration path easier?

Brad

From: Christian Schneider [mailto:cschneider...@gmail.com] On Behalf Of 
Christian Schneider
Sent: Monday, January 16, 2017 4:37 AM

To: user@karaf.apache.org
Subject: Re: karaf boot

I generally like the idea of having one standard way to do dependency injection 
in OSGi. Unfortunately until now we do not have a single framework that most 
people are happy with.

I pushed a lot to make blueprint easier by using the CDI and JEE annotations 
and create blueprint from it using the aries blueprint maven plugin. This 
allows a CDI style development and works very well already. Recently Dominik 
extended my approach a lot and covered much of the CDI functionality. Currently 
this might be the best approach when your developers are experienced in JEE. 
Unfortunately blueprint has some bad behaviours like the blocking proxies when 
a mandatory service goes away. Blueprint is also quite complex internally and 
there is no standardized API for extension namespaces.

CDI would be great but it is less well supported on OSGi than blueprint and 
the current implementations also have the same bad proxy behaviour. So while I 
would like to see a really good CDI implementation on OSGi with dynamic 
behaviour like DS we are not there yet.

DS is a little limited with its lack of extensibility but it works by far the 
best of all frameworks in OSGi.

Re: karaf boot

2017-01-16 Thread Nick Baker
We're based on blueprint here, but that has more to do with our legacy as a 
Spring shop. Blueprint was familiar, standardized in the OSGi spec, and the 
hope in the past was to use the Gemini implementation so we could leverage 
Spring features and projects alongside OSGi (Transactions, AOP, Spring 
Security, method-level security, etc.).


(Guillaume's recent work to support the Spring namespaces in Aries is something 
we're interested in)


For service registration/discovery, blueprint works okay. What works less well 
is using the blueprintContainer as an object factory. We end up passing around 
factory objects which delegate to the blueprintContainer, where the built-in and 
custom scopes (session, request) manage what's returned for a particular call. 
These are true container-managed instances, but this is something we had to 
build on top of the container.


I'm also increasingly less of a fan of proxying, damping and availability 
grace periods. This may "help" a system, particularly one with poorly 
constructed feature files, but it results in a lot of confusion and the 
perception that our OSGi container is non-deterministic. I wish the spec had 
kept Spring DM's ability to turn off proxying.


So yes, picking something like PAX-CDI or SCR sounds good. I'm not sure what 
level of maturity these have reached. Last I checked, CDI wasn't quite ready 
and SCR was in transition following the standardization.


-Nick Baker



From: Guillaume Nodet <gno...@apache.org>
Sent: Monday, January 16, 2017 10:06 AM
To: user
Subject: Re: karaf boot

I have investigated reworking the blueprint core extender on top of DS months 
ago, but I did not pursue.  The Felix SCR core is now more reusable (I made it 
that way in order to reuse it in pax-cdi), so maybe I could have another quick 
look about the feasibility. But I am pessimistic as IIRC, the problems were 
more about some requirements in the blueprint spec which could not be mapped 
correctly to DS.

2017-01-16 15:44 GMT+01:00 Brad Johnson <bradj...@redhat.com>:
I wonder if there’s a way to start the implementation of a CDI common practice 
with DS where possible but blueprint where not and then migrate toward DS.

From my point of view when mentoring new developers there are going to be two 
general use cases for CDI, one is just for internal wiring inside a bundle and 
dependency injection with internals.  Among other things it makes testing a 
heck of a lot easier.

The other use case is an easy way to export services and get references to 
them. I’m not sure how well DS and blueprint play together.

It is also one of the reasons I’ve migrated away from using blueprint XML for 
routes to the Java DSL.  Consistent and easy for Java developers to understand. 
 Because I’ve used blueprint so much I have a limited understanding of CDI but 
from what I’ve seen of it, it is a very sane way of handling wire up.

So the question I guess is how hard is it to create a migration plan to move 
pieces from blueprint to DS under the covers? Does using the Camel Java DSL 
make that easier?

I don’t see a problem for a “next generation” of the stack to say that the XML 
variant is no longer being supported and recommend migration to the Java DSL 
with CDI.  That isn’t difficult in any case. But from the framework perspective 
it may eliminate one level of indirection that requires XML parsing, schema 
namespaces and the mapping of those through a plugin into the constituent parts.

Would adopting such an approach make conversion away from blueprint easier? 
Would it make a migration path easier?

Brad

From: Christian Schneider [mailto:cschneider...@gmail.com] On Behalf Of 
Christian Schneider
Sent: Monday, January 16, 2017 4:37 AM

To: user@karaf.apache.org
Subject: Re: karaf boot

I generally like the idea of having one standard way to do dependency injection 
in OSGi. Unfortunately until now we do not have a single framework that most 
people are happy with.

I pushed a lot to make blueprint easier by using the CDI and JEE annotations 
and create blueprint from it using the aries blueprint maven plugin. This 
allows a CDI style development and works very well already. Recently Dominik 
extended my approach a lot and covered much of the CDI functionality. Currently 
this might be the best approach when your developers are experienced in JEE. 
Unfortunately blueprint has some bad behaviours like the blocking proxies when 
a mandatory service goes away. Blueprint is also quite complex internally and 
there is no standardized API for extension namespaces.

CDI would be great but it is less well supported on OSGi than blueprint and 
the current implementations also have the same bad proxy behaviour. So while I 
would like to see a really good CDI implementation on OSGi with dynamic 
behaviour like DS we are not there yet.

Re: karaf boot

2017-01-16 Thread Guillaume Nodet
I investigated reworking the blueprint core extender on top of DS
months ago, but I did not pursue it.  The Felix SCR core is now more reusable
(I made it that way in order to reuse it in pax-cdi), so maybe I could have
another quick look at the feasibility. But I am pessimistic: IIRC, the
problems were more about some requirements in the blueprint spec which
could not be mapped correctly to DS.

2017-01-16 15:44 GMT+01:00 Brad Johnson <bradj...@redhat.com>:

> I wonder if there’s a way to start the implementation of a CDI common
> practice with DS where possible but blueprint where not and then migrate
> toward DS.
>
>
>
> From my point of view when mentoring new developer’s there are going to be
> two general use cases for CDI, one is just for internal wiring inside a
> bundle and dependency injection with internals.  Among other things it
> makes testing a heck of a lot easier.
>
>
>
> The other use case is an easy way to export services and get references to
> them. I’m not sure how well DS and blueprint play together.
>
>
>
> It is also one of the reasons I’ve migrated away from using blueprint XML
> for routes to the Java DSL.  Consistent and easy for Java developers to
> understand.  Because I’ve used blueprint so much I have a limited
> understanding of CDI but from what I’ve seen of it, it is a very sane way
> of handling wire up.
>
>
>
> So the question I guess is how hard is it to create a migration plan to
> move pieces from blueprint to DS under the covers? Does using the Camel
> Java DSL make that easier?
>
>
>
> I don’t see a problem for a “next generation” of the stack to say that the
> XML variant is no longer being supported and recommend migration to the
> Java DSL with CDI.  That isn’t difficult in any case. But from the
> framework perspective it may eliminate one level of indirection that
> requires XML parsing, schema namespaces and the mapping of those through a
> plugin into the constituent parts.
>
>
>
> Would adopting such an approach make conversion away from blueprint
> easier? Would it make a migration path easier?
>
>
>
> Brad
>
>
>
> *From:* Christian Schneider [mailto:cschneider...@gmail.com] *On Behalf
> Of *Christian Schneider
> *Sent:* Monday, January 16, 2017 4:37 AM
>
> *To:* user@karaf.apache.org
> *Subject:* Re: karaf boot
>
>
>
> I generally like the idea of having one standard way to do dependency
> injection in OSGi. Unfortunately until now we do not have a single
> framework that most people are happy with.
>
> I pushed a lot to make blueprint easier by using the CDI and JEE
> annotations and create blueprint from it using the aries blueprint maven
> plugin. This allows a CDI style development and works very well already.
> Recently Dominik extended my approach a lot and covered much of the CDI
> functionality. Currently this might be the best approach when your
> developers are experienced in JEE. Unfortunately blueprint has some bad
> behaviours like the blocking proxies when a mandatory service goes away.
> Blueprint is also quite complex internally and there is not standardized
> API for extension namespaces.
>
> CDI would be great but it is is less well supported on OSGi than blueprint
> and the current implementations also have the same bad proxy behaviour. So
> while I would like to see a really good CDI implementation on OSGi with
> dynamic behaviour like DS we are not there yet.
>
> DS is a little limited with its lack of extensibility but it works by far
> best of all frameworks in OSGi. The way it creates and destroy components
> when mandatory references come and go makes it so easy to implement code
> that works well in the dynamic OSGi environment. It also nicely supports
> configs even when using the config factories where you can have one
> instance of your component per config instance.
>
> So for the moment I would rather use DS as a default dependency injection
> for karaf boot. It is also the smallest footprint. When CDI is ready we
> could switch to CDI.
>
> Christian
>
>
> On 11.01.2017 22:03, Brad Johnson wrote:
>
> I definitely like the direction of the Karaf Boot with the CDI, blueprint,
> DS, etc. starters.  Now if we could integrate that with the Karaf profiles
> and have standardized Karaf Boot containers to configure like tinkertoys
> we’d be there.  I may work on some of that. I believe the synergy between
> Karaf Boot and the profiles could be outstanding. It would make any
> development easier by using all the standard OSGi libraries and mak
> microservices a snap.
>
>
>
> If we have a workable CDI version of service/reference annotation then I’m
> not sure why I’d use DS. It may be that the external configuration of DS is
> more fleshed out.

RE: karaf boot

2017-01-16 Thread Brad Johnson
This sounds like a fantastic approach and one that can really standardize the 
uses, libraries, documentation and examples.

 

Because the libraries used in the examples are yet to be chosen, we can decide 
to give examples using those that fit best.

 

Brad

 

From: Guillaume Nodet [mailto:gno...@apache.org] 
Sent: Monday, January 16, 2017 6:26 AM
To: user <user@karaf.apache.org>
Subject: Re: karaf boot

 

 

 

2017-01-16 11:36 GMT+01:00 Christian Schneider <ch...@die-schneider.net>:

I generally like the idea of having one standard way to do dependency injection 
in OSGi. Unfortunately until now we do not have a single framework that most 
people are happy with.

I pushed a lot to make blueprint easier by using the CDI and JEE annotations 
and create blueprint from it using the aries blueprint maven plugin. This 
allows a CDI style development and works very well already. Recently Dominik 
extended my approach a lot and covered much of the CDI functionality. Currently 
this might be the best approach when your developers are experienced in JEE. 
Unfortunately blueprint has some bad behaviours like the blocking proxies when 
a mandatory service goes away. Blueprint is also quite complex internally and 
there is no standardized API for extension namespaces.

CDI would be great but it is less well supported on OSGi than blueprint and 
the current implementations also have the same bad proxy behaviour. So while I 
would like to see a really good CDI implementation on OSGi with dynamic 
behaviour like DS we are not there yet.

 

No, the work I've done on CDI is free from those drawbacks.  It has the same 
semantics as DS, so anything you can do in DS, you can do in CDI.  You can even 
do more than DS because you can wire services "internally", i.e. you don't have 
to expose your services to the OSGi registry to wire them together.

 


DS is a little limited with its lack of extensibility but it works by far the 
best of all frameworks in OSGi. The way it creates and destroys components when 
mandatory references come and go makes it so easy to implement code that works 
well in the dynamic OSGi environment. It also nicely supports configs even when 
using the config factories where you can have one instance of your component 
per config instance. 

So for the moment I would rather use DS as a default dependency injection for 
karaf boot. It is also the smallest footprint. When CDI is ready we could 
switch to CDI.

 

I think it's ready.  The spec that the OSGi Alliance is working on is crap 
imho, but I've raised my hand several times already, so I won't try to bargain 
about all the limitations and creepy proxy things they want to do.  That said, 
the spec has 2 parts, the first one is about CDI applications in OSGi, and that 
one is good.  The second one is a CDI extension for OSGi service registry 
interaction, and that's the one that is bad, bad it's pluggable, so we can 
easily use the Pax-CDI one and that will cause no problems.

 

I think this CDI stuff has all the benefits of CDI + DS without the drawbacks 
of blueprint, so I'd rather have us focusing on it.

 



Christian




On 11.01.2017 22:03, Brad Johnson wrote:

I definitely like the direction of the Karaf Boot with the CDI, blueprint, DS, 
etc. starters.  Now if we could integrate that with the Karaf profiles and have 
standardized Karaf Boot containers to configure like tinkertoys we’d be there.  
I may work on some of that. I believe the synergy between Karaf Boot and the 
profiles could be outstanding. It would make any development easier by using 
all the standard OSGi libraries and make microservices a snap.

 

If we have a workable CDI version of service/reference annotation then I’m not 
sure why I’d use DS. It may be that the external configuration of DS is more 
fleshed out but CDI has so much by way of easy injection that it makes coding 
and especially testing a lot easier.  I guess the CDI OSGi services could 
leverage much of DS.  Dunno.

 

In any case, I think that’s on the right track. 

 

From: Christian Schneider [mailto:cschneider...@gmail.com] On Behalf Of 
Christian Schneider
Sent: Wednesday, January 11, 2017 8:52 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

 

Sounds like you have a good case to validate karaf boot on.

Can you explain how you create your deployments now and what you are missing in 
current karaf? Until now we only discussed internally about the scope and 
requirements of karaf boot. It would be very valuable to get some input from a 
real world case.

Christian

On 11.01.2017 13:41, Nick Baker wrote:

We'd be interested in this as well. Beginning to move toward Microservices 
deployments + Remote Services for interop. I'll have a look at your branch JB!

 

We've added support in our Karaf main for multiple instances from the same 
install on disk. Cache directories segmented, port 

RE: karaf boot

2017-01-16 Thread Brad Johnson
I suspect just providing examples and documentation showing the "idiomatic"
and preferred way of coding would take care of a lot of those issues. It
isn't that one is saying use Jersey or use CXF but that the examples
themselves use one and not the other.  Blog posts and secondary
documentation might show the other libraries and mechanisms but for
main-line documentation and examples we could just be mum about the issue.
One wouldn't even need to say, explicitly, that this or that is the
preferred approach.  Just that here is the sample code we provide.  An end
user can choose to follow it or ignore it and go their own way.

For new and intermediate developers they are simply going to follow the
examples they find.  For experienced developers they will move to a
different idiom if they find it easier to use.

As one example, the CDI implementation makes testing a lot easier than
using Camel Blueprint Test Support.  Too often I see groups just throw up
their hands and give up on unit or integration testing altogether.  Part of
that is due to the difficulty of working with CBTS and part of that is due
to the fact that the Camel documentation is rather poor at pushing for the
use of POJOs over Processors/Exchanges. Something like Processors/Exchanges
should be looked at when someone wants to know what's happening "under the
hood". 

While that isn't strictly a Karaf issue, I think it is an important
overall consideration: push that single, idiomatic way of working with the
stack without saying anything about other mechanisms, either pro or con.



-Original Message-
From: Jean-Baptiste Onofré [mailto:j...@nanthrax.net] 
Sent: Monday, January 16, 2017 4:43 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

That's why karaf-boot can provide starters and clearly document pros/cons. 
End-users will choose the best match for their needs.

The same could happen for frameworks: I would like to create a REST service,
should I start with Jersey, with CXF-RS, ...

So, while in karaf-boot it makes sense to support different starters for the
programming model (because one service exposed with blueprint can be used
from DS, for instance), maybe we will have to make some choices as the
"preferred" solution (for the REST service, for the JPA engine, etc.).

Regards
JB

On 01/16/2017 11:36 AM, Christian Schneider wrote:
> I generally like the idea of having one standard way to do dependency 
> injection in OSGi. Unfortunately until now we do not have a single 
> framework that most people are happy with.
>
> I pushed a lot to make blueprint easier by using the CDI and JEE 
> annotations and create blueprint from it using the aries blueprint 
> maven plugin. This allows a CDI style development and works very well
already.
> Recently Dominik extended my approach a lot and covered much of the 
> CDI functionality. Currently this might be the best approach when your 
> developers are experienced in JEE. Unfortunately blueprint has some 
> bad behaviours like the blocking proxies when a mandatory service goes
away.
> Blueprint is also quite complex internally and there is not 
> standardized API for extension namespaces.
>
> CDI would be great but it is is less well supported on OSGi than 
> blueprint and the current implementations also have the same bad proxy 
> behaviour. So while I would like to see a really good CDI 
> implementation on OSGi with dynamic behaviour like DS we are not there
yet.
>
> DS is a little limited with its lack of extensibility but it works by 
> far best of all frameworks in OSGi. The way it creates and destroy 
> components when mandatory references come and go makes it so easy to 
> implement code that works well in the dynamic OSGi environment. It 
> also nicely supports configs even when using the config factories 
> where you can have one instance of your component per config instance.
>
> So for the moment I would rather use DS as a default dependency 
> injection for karaf boot. It is also the smallest footprint. When CDI 
> is ready we could switch to CDI.
>
> Christian
>
>
> On 11.01.2017 22:03, Brad Johnson wrote:
>>
>> I definitely like the direction of the Karaf Boot with the CDI, 
>> blueprint, DS, etc. starters.  Now if we could integrate that with 
>> the Karaf profiles and have standardized Karaf Boot containers to 
>> configure like tinkertoys we’d be there.  I may work on some of that.
>> I believe the synergy between Karaf Boot and the profiles could be 
>> outstanding. It would make any development easier by using all the 
>> standard OSGi libraries and mak microservices a snap.
>>
>>
>>
>> If we have a workable CDI version of service/reference annotation 
>> then I’m not sure why I’d use DS. It may be that the external 
>> configuration of DS is more fleshed out 

RE: karaf boot

2017-01-16 Thread Brad Johnson
I wonder if there's a way to start the implementation of a CDI common
practice with DS where possible but blueprint where not and then migrate
toward DS. 

 

From my point of view when mentoring new developers there are going to be
two general use cases for CDI, one is just for internal wiring inside a
bundle and dependency injection with internals.  Among other things it makes
testing a heck of a lot easier. 

 

The other use case is an easy way to export services and get references to
them. I'm not sure how well DS and blueprint play together.

 

It is also one of the reasons I've migrated away from using blueprint XML
for routes to the Java DSL.  It's consistent and easy for Java developers to
understand.  Because I've used blueprint so much I have a limited
understanding of CDI, but from what I've seen of it, it is a very sane way of
handling wire-up.
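
For illustration, a minimal Camel Java DSL route of the kind mentioned above,
registered as a plain RouteBuilder class (the endpoint URIs and route name are
hypothetical):

    import org.apache.camel.builder.RouteBuilder;

    // A route defined in Java instead of blueprint XML; the same class is
    // straightforward to unit test with CamelTestSupport.
    public class OrderRoute extends RouteBuilder {

        @Override
        public void configure() {
            from("file:orders/in")              // hypothetical input endpoint
                .routeId("order-intake")
                .to("bean:orderValidator")      // plain POJO doing the work
                .to("jms:queue:orders");        // hand off to the message bus
        }
    }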

 

So the question I guess is how hard is it to create a migration plan to move
pieces from blueprint to DS under the covers? Does using the Camel Java DSL
make that easier?

 

I don't see a problem for a "next generation" of the stack to say that the
XML variant is no longer being supported and recommend migration to the Java
DSL with CDI.  That isn't difficult in any case. But from the framework
perspective it may eliminate one level of indirection that requires XML
parsing, schema namespaces and the mapping of those through a plugin into
the constituent parts.

 

Would adopting such an approach make conversion away from blueprint easier?
Would it make a migration path easier?

 

Brad

 

From: Christian Schneider [mailto:cschneider...@gmail.com] On Behalf Of
Christian Schneider
Sent: Monday, January 16, 2017 4:37 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

 

I generally like the idea of having one standard way to do dependency
injection in OSGi. Unfortunately until now we do not have a single framework
that most people are happy with.

I pushed a lot to make blueprint easier by using the CDI and JEE annotations
and create blueprint from it using the aries blueprint maven plugin. This
allows a CDI style development and works very well already. Recently Dominik
extended my approach a lot and covered much of the CDI functionality.
Currently this might be the best approach when your developers are
experienced in JEE. Unfortunately blueprint has some bad behaviours like the
blocking proxies when a mandatory service goes away. Blueprint is also quite
complex internally and there is no standardized API for extension
namespaces.

CDI would be great but it is less well supported on OSGi than blueprint
and the current implementations also have the same bad proxy behaviour. So
while I would like to see a really good CDI implementation on OSGi with
dynamic behaviour like DS we are not there yet.

DS is a little limited with its lack of extensibility but it works by far the
best of all frameworks in OSGi. The way it creates and destroys components
when mandatory references come and go makes it so easy to implement code
that works well in the dynamic OSGi environment. It also nicely supports
configs even when using the config factories where you can have one instance
of your component per config instance. 

So for the moment I would rather use DS as a default dependency injection
for karaf boot. It is also the smallest footprint. When CDI is ready we
could switch to CDI.

Christian
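
To make the DS behaviour described above concrete, here is a minimal
Declarative Services sketch using the standard OSGi DS annotations (the Greeter
interface, Worker class and PID are hypothetical): the component exists only
while its mandatory reference and its factory configuration are present, with
one instance per configuration.

    import java.util.Map;

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.ConfigurationPolicy;
    import org.osgi.service.component.annotations.Deactivate;
    import org.osgi.service.component.annotations.Reference;

    // Hypothetical service used as a mandatory dependency.
    interface Greeter {
        String greet(String name);
    }

    // One Worker instance per factory configuration with PID "org.example.worker";
    // the component is activated only while a Greeter service is available and
    // is deactivated as soon as it goes away.
    @Component(configurationPid = "org.example.worker",
               configurationPolicy = ConfigurationPolicy.REQUIRE)
    public class Worker {

        @Reference   // mandatory and static by default: no Greeter, no Worker
        Greeter greeter;

        @Activate
        void activate(Map<String, Object> config) {
            System.out.println(greeter.greet((String) config.get("who")));
        }

        @Deactivate
        void deactivate() {
            System.out.println("Worker going away");
        }
    }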


On 11.01.2017 22:03, Brad Johnson wrote:

I definitely like the direction of the Karaf Boot with the CDI, blueprint,
DS, etc. starters.  Now if we could integrate that with the Karaf profiles
and have standardized Karaf Boot containers to configure like tinkertoys
we'd be there.  I may work on some of that. I believe the synergy between
Karaf Boot and the profiles could be outstanding. It would make any
development easier by using all the standard OSGi libraries and make
microservices a snap.

 

If we have a workable CDI version of service/reference annotation then I'm
not sure why I'd use DS. It may be that the external configuration of DS is
more fleshed out but CDI has so much by way of easy injection that it makes
coding and especially testing a lot easier.  I guess the CDI OSGi services
could leverage much of DS.  Dunno.

 

In any case, I think that's on the right track. 

 

From: Christian Schneider [mailto:cschneider...@gmail.com] On Behalf Of
Christian Schneider
Sent: Wednesday, January 11, 2017 8:52 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

 

Sounds like you have a good case to validate karaf boot on.

Can you explain how you create your deployments now and what you are missing
in current karaf? Until now we only discussed internally about the scope and
requirements of karaf boot. It would be very valuable to get some input from
a real world case.

Christian

On 11.01.2017 13:41, Nick Baker wrote:

We'd be interested in this as well. Beginning to move toward Microservices
deployments + Remote Services for interop.

Re: karaf boot

2017-01-16 Thread Guillaume Nodet
2017-01-16 13:25 GMT+01:00 Guillaume Nodet <gno...@apache.org>:

>
>
> 2017-01-16 11:36 GMT+01:00 Christian Schneider <ch...@die-schneider.net>:
>
>> I generally like the idea of having one standard way to do dependency
>> injection in OSGi. Unfortunately until now we do not have a single
>> framework that most people are happy with.
>>
>> I pushed a lot to make blueprint easier by using the CDI and JEE
>> annotations and create blueprint from it using the aries blueprint maven
>> plugin. This allows a CDI style development and works very well already.
>> Recently Dominik extended my approach a lot and covered much of the CDI
>> functionality. Currently this might be the best approach when your
>> developers are experienced in JEE. Unfortunately blueprint has some bad
>> behaviours like the blocking proxies when a mandatory service goes away.
>> Blueprint is also quite complex internally and there is not standardized
>> API for extension namespaces.
>>
>> CDI would be great but it is is less well supported on OSGi than
>> blueprint and the current implementations also have the same bad proxy
>> behaviour. So while I would like to see a really good CDI implementation on
>> OSGi with dynamic behaviour like DS we are not there yet.
>>
>
> No, the work I've done on CDI is free from those drawbacks.  It has the
> same semantics as DS, so anything you can do in DS, you can do in CDI.  You
> can even do more than DS because you can wire services "internally", i.e.
> you don't have to expose your services to the OSGi registry to wire them
> together.
>
>
>>
>> DS is a little limited with its lack of extensibility but it works by far
>> best of all frameworks in OSGi. The way it creates and destroy components
>> when mandatory references come and go makes it so easy to implement code
>> that works well in the dynamic OSGi environment. It also nicely supports
>> configs even when using the config factories where you can have one
>> instance of your component per config instance.
>>
>> So for the moment I would rather use DS as a default dependency injection
>> for karaf boot. It is also the smallest footprint. When CDI is ready we
>> could switch to CDI.
>>
>
> I think it's ready.  The spec that the OSGi alliance is working on, is
> crap imho, but I've raised my hand several times already, so I won't try to
> bargain about all the limitations and creepy proxy things they want to do.
> That said, the spec has 2 parts, the first one is about CDI applications in
> OSGi, and that one is good.  The second one is a CDI extension for OSGi
> service registry interaction, and that's the one that is bad, bad it's
> pluggable, so we can easily use the Pax-CDI one and that will cause no
> problems.
>

Read "but it's pluggable".


>
> I think this CDI stuff has all the benefits of CDI + DS without the
> drawbacks of blueprint, so I'd rather have us focusing on it.
>
>
>>
>>
>> Christian
>>
>>
>>
>> On 11.01.2017 22:03, Brad Johnson wrote:
>>
>> I definitely like the direction of the Karaf Boot with the CDI,
>> blueprint, DS, etc. starters.  Now if we could integrate that with the
>> Karaf profiles and have standardized Karaf Boot containers to configure
>> like tinkertoys we’d be there.  I may work on some of that. I believe the
>> synergy between Karaf Boot and the profiles could be outstanding. It would
>> make any development easier by using all the standard OSGi libraries and
>> mak microservices a snap.
>>
>>
>>
>> If we have a workable CDI version of service/reference annotation then
>> I’m not sure why I’d use DS. It may be that the external configuration of
>> DS is more fleshed out but CDI has so much by way of easy injection that it
>> makes coding and especially testing a lot easier.  I guess the CDI OSGi
>> services could leverage much of DS.  Dunno.
>>
>>
>>
>> In any case, I think that’s on the right track.
>>
>>
>>
>> *From:* Christian Schneider [mailto:cschneider...@gmail.com] *On Behalf
>> Of *Christian Schneider
>> *Sent:* Wednesday, January 11, 2017 8:52 AM
>> *To:* user@karaf.apache.org
>> *Subject:* Re: karaf boot
>>
>>
>>
>> Sounds like you have a good case to validate karaf boot on.
>>
>> Can you explain how you create your deployments now and what you are
>> missing in current karaf? Until now we only discussed internally about the
>> scope and requirements of karaf boot. It would be very valuable to get some
>> input from a real world case.

Re: karaf boot

2017-01-16 Thread Guillaume Nodet
2017-01-16 11:36 GMT+01:00 Christian Schneider <ch...@die-schneider.net>:

> I generally like the idea of having one standard way to do dependency
> injection in OSGi. Unfortunately until now we do not have a single
> framework that most people are happy with.
>
> I pushed a lot to make blueprint easier by using the CDI and JEE
> annotations and create blueprint from it using the aries blueprint maven
> plugin. This allows a CDI style development and works very well already.
> Recently Dominik extended my approach a lot and covered much of the CDI
> functionality. Currently this might be the best approach when your
> developers are experienced in JEE. Unfortunately blueprint has some bad
> behaviours like the blocking proxies when a mandatory service goes away.
> Blueprint is also quite complex internally and there is not standardized
> API for extension namespaces.
>
> CDI would be great but it is is less well supported on OSGi than blueprint
> and the current implementations also have the same bad proxy behaviour. So
> while I would like to see a really good CDI implementation on OSGi with
> dynamic behaviour like DS we are not there yet.
>

No, the work I've done on CDI is free from those drawbacks.  It has the
same semantics as DS, so anything you can do in DS, you can do in CDI.  You
can even do more than DS because you can wire services "internally", i.e.
you don't have to expose your services to the OSGi registry to wire them
together.
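
As a rough sketch of the pattern described here, assuming the Pax CDI
annotations (org.ops4j.pax.cdi.api as of Pax CDI 1.x; all class names are
hypothetical): one bean published to the OSGi registry, one wired purely
internally with plain CDI injection, and one consuming a registry service.

    import javax.inject.Inject;
    import org.ops4j.pax.cdi.api.OsgiService;
    import org.ops4j.pax.cdi.api.OsgiServiceProvider;

    // (Several classes shown together for brevity; each would live in its own file.)
    public interface GreetingService {
        String greet(String name);
    }

    // Published to the OSGi service registry for other bundles to consume.
    @OsgiServiceProvider(classes = GreetingService.class)
    public class GreetingServiceImpl implements GreetingService {
        @Override
        public String greet(String name) {
            return "Hello " + name;
        }
    }

    // Wired "internally": plain CDI injection of a bean from the same bundle,
    // without going through the OSGi registry at all.
    public class GreetingClient {
        @Inject
        GreetingService greeting;
    }

    // Consuming a service that another bundle published to the registry.
    public class RemoteGreetingClient {
        @Inject
        @OsgiService
        GreetingService greeting;
    }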


>
> DS is a little limited with its lack of extensibility but it works by far
> best of all frameworks in OSGi. The way it creates and destroy components
> when mandatory references come and go makes it so easy to implement code
> that works well in the dynamic OSGi environment. It also nicely supports
> configs even when using the config factories where you can have one
> instance of your component per config instance.
>
> So for the moment I would rather use DS as a default dependency injection
> for karaf boot. It is also the smallest footprint. When CDI is ready we
> could switch to CDI.
>

I think it's ready.  The spec that the OSGi Alliance is working on is crap
imho, but I've raised my hand several times already, so I won't try to
bargain about all the limitations and creepy proxy things they want to do.
That said, the spec has 2 parts, the first one is about CDI applications in
OSGi, and that one is good.  The second one is a CDI extension for OSGi
service registry interaction, and that's the one that is bad, bad it's
pluggable, so we can easily use the Pax-CDI one and that will cause no
problems.

I think this CDI stuff has all the benefits of CDI + DS without the
drawbacks of blueprint, so I'd rather have us focusing on it.


>
>
> Christian
>
>
>
> On 11.01.2017 22:03, Brad Johnson wrote:
>
> I definitely like the direction of the Karaf Boot with the CDI, blueprint,
> DS, etc. starters.  Now if we could integrate that with the Karaf profiles
> and have standardized Karaf Boot containers to configure like tinkertoys
> we’d be there.  I may work on some of that. I believe the synergy between
> Karaf Boot and the profiles could be outstanding. It would make any
> development easier by using all the standard OSGi libraries and mak
> microservices a snap.
>
>
>
> If we have a workable CDI version of service/reference annotation then I’m
> not sure why I’d use DS. It may be that the external configuration of DS is
> more fleshed out but CDI has so much by way of easy injection that it makes
> coding and especially testing a lot easier.  I guess the CDI OSGi services
> could leverage much of DS.  Dunno.
>
>
>
> In any case, I think that’s on the right track.
>
>
>
> *From:* Christian Schneider [mailto:cschneider...@gmail.com] *On Behalf
> Of *Christian Schneider
> *Sent:* Wednesday, January 11, 2017 8:52 AM
> *To:* user@karaf.apache.org
> *Subject:* Re: karaf boot
>
>
>
> Sounds like you have a good case to validate karaf boot on.
>
> Can you explain how you create your deployments now and what you are
> missing in current karaf? Until now we only discussed internally about the
> scope and requirements of karaf boot. It would be very valuable to get some
> input from a real world case.
>
> Christian
>
> On 11.01.2017 13:41, Nick Baker wrote:
>
> We'd be interested in this as well. Beginning to move toward Microservices
> deployments + Remote Services for interop. I'll have a look at your branch
> JB!
>
>
>
> We've added support in our Karaf main for multiple instances from the same
> install on disk. Cache directories segmented, port conflicts handled. This
> of course isn't an issue in container-based cloud deployments (Docker).

Re: karaf boot

2017-01-16 Thread Jean-Baptiste Onofré
That's why karaf-boot can provide starters and clearly document pros/cons. 
End-users will choose the best match for their needs.


The same could happen for frameworks: I would like to create a REST 
service, should I start with Jersey, with CXF-RS, ...


So, while in karaf-boot it makes sense to support different starters for 
the programming model (because one service exposed with blueprint can be 
used from DS, for instance), maybe we will have to make some choices as the 
"preferred" solution (for the REST service, for the JPA engine, etc.).


Regards
JB

On 01/16/2017 11:36 AM, Christian Schneider wrote:

I generally like the idea of having one standard way to do dependency
injection in OSGi. Unfortunately until now we do not have a single
framework that most people are happy with.

I pushed a lot to make blueprint easier by using the CDI and JEE
annotations and create blueprint from it using the aries blueprint maven
plugin. This allows a CDI style development and works very well already.
Recently Dominik extended my approach a lot and covered much of the CDI
functionality. Currently this might be the best approach when your
developers are experienced in JEE. Unfortunately blueprint has some bad
behaviours like the blocking proxies when a mandatory service goes away.
Blueprint is also quite complex internally and there is no standardized
API for extension namespaces.

CDI would be great but it is less well supported on OSGi than
blueprint and the current implementations also have the same bad proxy
behaviour. So while I would like to see a really good CDI implementation
on OSGi with dynamic behaviour like DS we are not there yet.

DS is a little limited with its lack of extensibility but it works by
far the best of all frameworks in OSGi. The way it creates and destroys
components when mandatory references come and go makes it so easy to
implement code that works well in the dynamic OSGi environment. It also
nicely supports configs even when using the config factories where you
can have one instance of your component per config instance.

So for the moment I would rather use DS as a default dependency
injection for karaf boot. It is also the smallest footprint. When CDI is
ready we could switch to CDI.

Christian


On 11.01.2017 22:03, Brad Johnson wrote:


I definitely like the direction of the Karaf Boot with the CDI,
blueprint, DS, etc. starters.  Now if we could integrate that with the
Karaf profiles and have standardized Karaf Boot containers to
configure like tinkertoys we’d be there.  I may work on some of that.
I believe the synergy between Karaf Boot and the profiles could be
outstanding. It would make any development easier by using all the
standard OSGi libraries and make microservices a snap.



If we have a workable CDI version of service/reference annotation then
I’m not sure why I’d use DS. It may be that the external configuration
of DS is more fleshed out but CDI has so much by way of easy injection
that it makes coding and especially testing a lot easier.  I guess the
CDI OSGi services could leverage much of DS.  Dunno.



In any case, I think that’s on the right track.



*From:*Christian Schneider [mailto:cschneider...@gmail.com] *On Behalf
Of *Christian Schneider
*Sent:* Wednesday, January 11, 2017 8:52 AM
*To:* user@karaf.apache.org
*Subject:* Re: karaf boot



Sounds like you have a good case to validate karaf boot on.

Can you explain how you create your deployments now and what you are
missing in current karaf? Until now we only discussed internally about
the scope and requirements of karaf boot. It would be very valuable to
get some input from a real world case.

Christian

On 11.01.2017 13:41, Nick Baker wrote:

We'd be interested in this as well. Beginning to move toward
Microservices deployments + Remote Services for interop. I'll have
a look at your branch JB!



We've added support in our Karaf main for multiple instances from
the same install on disk. Cache directories segmented, port
conflicts handled. This of course isn't an issue in
container-based cloud deployments (Docker). Still, may be of use.



-Nick Baker



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: karaf boot

2017-01-12 Thread Nick Baker
As of right now, no. We're developing this as an enterprise (paid) feature. 
After it's released later this year we'll likely open some of the core stuff 
like the Source-to-Image build.

There's not a lot of code here. Basically, we take some files and config checked 
into a Git repo, construct a traditional Karaf Maven assembly, build it, and 
turn the result into a Docker image. So it's a two-step process.

The following links may help:
https://hub.docker.com/r/fabric8/s2i-karaf/
http://fabric8.io/guide/karaf.html
https://dzone.com/articles/how-to-containerize-your-camel-route-on-karaf-with


-Nick Baker

From: Jason Pratt <jpratt3...@gmail.com>
Sent: Wednesday, January 11, 2017 7:52:19 PM
To: user@karaf.apache.org
Subject: Re: karaf boot

Do you have any examples on github for this?

Sent from my iPad

On Jan 11, 2017, at 4:38 PM, Nick Baker <nba...@pentaho.com> wrote:

We're deploying into Kubernetes (OpenShift), but it could be Mesos/Marathon, 
Docker Swarm, etc. the only important thing is for each pod to know where to 
find zookeeper.

-Nick
From: jason.pr...@windriver.com
Sent: January 11, 2017 7:19 PM
To: user@karaf.apache.org
Reply-to: user@karaf.apache.org
Subject: RE: karaf boot


This sounds very interesting. Would the Dockers then be deployed similar to 
VertX?

From: Nick Baker [mailto:nba...@pentaho.com]
Sent: Wednesday, January 11, 2017 11:31 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

Some background on what we've been playing with may be of use.

We've worked on a Kubernetes/OpenShift deployment of micro-service Karaf 
instances (pods). Each pod simply runs a plain Karaf preconfigured with Remote 
Service support (ECF) and select features of our own design.

This implementation leverages the OpenShift Source-to-image feature which 
transforms a simple Karaf assembly template checked into a Git Repository into 
a Maven Karaf assembly, which is then run to produce a Docker Image containing 
the Karaf assembly. The Fabric8 team has done great work here and we used their 
S2I image as inspiration for our own.


Templated Assembly
I really like this templated assembly approach. We have a single configuration 
file specifying which features are to be installed, optionally supplying new 
Feature repository URLs, and environment variables. You can also supply extra 
CFG files and even artifacts to be placed in the /deploy directory.

One aspect about containerized deployments and microservice practices to 
consider is how they treat applications as static immutable images. You don't 
modify the capabilities or even configuration of running instances. Indeed 
instances themselves are not to be manipulated directly as the container 
environment will start/stop and scale out the base image as needed. Rather if 
you want to extend the capabilities or change configuration, you would create a 
new image or new version of an existing one and propagate that out to the 
cluster.

That said one of the goals for our application is the ability to deploy a small 
footprint instance and have it dynamically provision capabilities (features) as 
needed by the incoming workload. These would seem to run counter to the trend 
of static instances, but I disagree as the scope of what can be dynamically 
provisioned is controlled. Each of these runtime features contributes to an 
existing one -plugins to an existing capability.

TLDR: Support easy assemblies from a very simplified configuration. I'd 
probably introduce a command-line program to invoke the build and a maven 
plugin.


Run my template
Building off templated assemblies would be simple "run" support from the same 
configuration. Another command for the command-line program, maven plugin. Put 
everything in java.io.tmpdir, who cares.


Run Programmatically
Another item I've wanted is a better Karaf Main class. Really, I would just 
like to use PAX-Exam as a Runner. I know... it originated from pax-runner. 
Something simple. Specify Karaf version, features, config, setup System Bundle 
packages, run. I guess if this was done it could be used in concert with the 
build template to support the run-from-template above.
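
For reference, the Pax Exam Karaf options alluded to here look roughly like the
sketch below when driven from a JUnit test (pax-exam-container-karaf); the Karaf
version, unpack directory and class name are placeholders:

    import java.io.File;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.ops4j.pax.exam.Configuration;
    import org.ops4j.pax.exam.Option;
    import org.ops4j.pax.exam.junit.PaxExam;
    import org.ops4j.pax.exam.karaf.options.KarafDistributionOption;

    import static org.ops4j.pax.exam.CoreOptions.maven;

    @RunWith(PaxExam.class)
    public class KarafRunnerSketch {

        @Configuration
        public Option[] config() {
            return new Option[] {
                // Download, unpack and boot a Karaf distribution for the test.
                KarafDistributionOption.karafDistributionConfiguration()
                    .frameworkUrl(maven("org.apache.karaf", "apache-karaf")
                        .version("4.1.4").type("zip"))
                    .unpackDirectory(new File("target/exam"))
                    .useDeployFolder(false),
                // Keep the unpacked container around for inspection.
                KarafDistributionOption.keepRuntimeFolder()
            };
        }

        @Test
        public void containerBoots() {
            // Reaching this point means Pax Exam started the container.
        }
    }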


Health Checks
We had to develop some custom health check code to ensure that all features and 
blueprint containers successfully start. Legacy portions of our application 
need to wait for Karaf to be fully realized before continuing execution. This 
was pretty important to our embedded Karaf usage, but that's certainly rare. 
Regardless, Health Checks are vital to microservice / cloud deployments. I 
recently found that the Fabric8 team pretty much already has this, and it's 
just about exactly what we developed [:(]  This needs to be documented for 
others to find.


Boot Features
Boot Feat

Re: karaf boot

2017-01-12 Thread Jean-Baptiste Onofré

Hi,

as mentioned some weeks ago on the mailing list, I have a test case 
using Mesos and Marathon (with Karaf & Cellar).

I haven't had time to complete the blog post yet, but I will.

Regards
JB

On 01/12/2017 01:38 AM, Nick Baker wrote:

We're deploying into Kubernetes (OpenShift), but it could be
Mesos/Marathon, Docker Swarm, etc. the only important thing is for each
pod to know where to find zookeeper.

-Nick

Re: karaf boot

2017-01-11 Thread Nick Baker
We're deploying into Kubernetes (OpenShift), but it could be Mesos/Marathon, 
Docker Swarm, etc. The only important thing is for each pod to know where to 
find ZooKeeper.
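
As an illustration only (the actual wiring isn't shown in this thread), each pod could read the ZooKeeper connect string from an environment variable injected by Kubernetes and hand it to the discovery layer through Config Admin. The PID, property key, env var name, and default host below are invented for the sketch and are not ECF's real configuration names:

    import java.io.IOException;
    import java.util.Hashtable;

    import org.osgi.service.cm.Configuration;
    import org.osgi.service.cm.ConfigurationAdmin;

    // Illustrative only: pushes the ZooKeeper address from the pod's
    // environment into a (made-up) discovery configuration PID.
    public class ZookeeperLocator {

        private static final String DISCOVERY_PID = "com.example.discovery.zookeeper";

        public static void configure(ConfigurationAdmin configAdmin) throws IOException {
            String connectString = System.getenv().getOrDefault(
                    "ZOOKEEPER_CONNECT", "zookeeper.default.svc.cluster.local:2181");

            Configuration config = configAdmin.getConfiguration(DISCOVERY_PID, null);
            Hashtable<String, Object> props = new Hashtable<>();
            props.put("zookeeper.connectString", connectString);
            config.update(props);
        }
    }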

-Nick
From: jason.pr...@windriver.com
Sent: January 11, 2017 7:19 PM
To: user@karaf.apache.org
Reply-to: user@karaf.apache.org
Subject: RE: karaf boot


This sounds very interesting. Would the Dockers then be deployed similar to 
VertX?


RE: karaf boot

2017-01-11 Thread Pratt, Jason
This sounds very interesting. Would the Docker containers then be deployed similarly 
to Vert.x?


RE: karaf boot

2017-01-11 Thread Brad Johnson
I definitely like the direction of Karaf Boot with the CDI, blueprint,
DS, etc. starters.  Now if we could integrate that with the Karaf profiles
and have standardized Karaf Boot containers to configure like tinkertoys,
we'd be there.  I may work on some of that. I believe the synergy between
Karaf Boot and the profiles could be outstanding. It would make any
development easier by using all the standard OSGi libraries and make
microservices a snap.

 

If we have a workable CDI version of service/reference annotations, then I'm
not sure why I'd use DS. It may be that the external configuration of DS is
more fleshed out, but CDI has so much by way of easy injection that it makes
coding and especially testing a lot easier.  I guess the CDI OSGi services
could leverage much of DS.  Dunno.

 

In any case, I think that's on the right track. 
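
For reference, this is roughly what the DS side of that service/reference annotation comparison looks like today; the CDI flavour would come from pax-cdi or the OSGi CDI work instead. GreetingCommand and GreeterService are made-up names for the sketch:

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // One annotation publishes the component, one injects its dependency.
    @Component(service = GreetingCommand.class)
    public class GreetingCommand {

        @Reference
        private GreeterService greeter; // hypothetical service interface

        @Activate
        void activate() {
            System.out.println(greeter.greet("karaf"));
        }
    }

    // In its own file in a real bundle; shown here for completeness.
    interface GreeterService {
        String greet(String name);
    }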

 

From: Christian Schneider [mailto:cschneider...@gmail.com] On Behalf Of
Christian Schneider
Sent: Wednesday, January 11, 2017 8:52 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

 

Sounds like you have a good case to validate karaf boot on.

Can you explain how you create your deployments now and what you are missing
in current karaf? Until now we only discussed internally about the scope and
requirements of karaf boot. It would be very valuable to get some input from
a real world case.

Christian


Re: karaf boot

2017-01-11 Thread Nick Baker
Some background on what we've been playing with may be of use.

We've worked on a Kubernetes/OpenShift deployment of micro-service Karaf 
instances (pods). Each pod simply runs a plain Karaf preconfigured with Remote 
Service support (ECF) and select features of our own design.

This implementation leverages the OpenShift Source-to-Image feature, which 
transforms a simple Karaf assembly template checked into a Git repository into 
a Maven Karaf assembly, which is then run to produce a Docker image containing 
the Karaf assembly. The Fabric8 team has done great work here and we used their 
S2I image as inspiration for our own.


Templated Assembly
I really like this templated assembly approach. We have a single configuration 
file specifying which features are to be installed, optionally supplying additional 
feature repository URLs, and setting environment variables. You can also supply extra 
CFG files and even artifacts to be placed in the /deploy directory.

One aspect about containerized deployments and microservice practices to 
consider is how they treat applications as static immutable images. You don't 
modify the capabilities or even configuration of running instances. Indeed 
instances themselves are not to be manipulated directly as the container 
environment will start/stop and scale out the base image as needed. Rather if 
you want to extend the capabilities or change configuration, you would create a 
new image or new version of an existing one and propagate that out to the 
cluster.

That said, one of the goals for our application is the ability to deploy a 
small-footprint instance and have it dynamically provision capabilities (features) 
as needed by the incoming workload. This would seem to run counter to the trend 
of static instances, but I disagree, as the scope of what can be dynamically 
provisioned is controlled. Each of these runtime features contributes to an 
existing one: plugins to an existing capability.
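
Karaf's FeaturesService already provides the mechanics for that kind of controlled, on-demand provisioning. A rough sketch, assuming the service is injected via DS or blueprint, and using placeholder repository and feature names:

    import java.net.URI;

    import org.apache.karaf.features.FeaturesService;

    // Make sure the plugin feature repository is known, then install
    // features as the incoming workload asks for them.
    public class OnDemandProvisioner {

        private final FeaturesService featuresService;

        public OnDemandProvisioner(FeaturesService featuresService) {
            this.featuresService = featuresService;
        }

        public void provision(String featureName) throws Exception {
            featuresService.addRepository(
                    URI.create("mvn:com.example/example-features/1.0.0/xml/features"));
            featuresService.installFeature(featureName);
        }
    }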

TL;DR: support easy assemblies from a very simplified configuration. I'd 
probably introduce a command-line program to invoke the build, plus a Maven 
plugin.


Run my template
Building on templated assemblies would be simple "run" support from the same 
configuration: another command for the command-line program and the Maven plugin. 
Put everything in java.io.tmpdir; who cares.


Run Programmatically
Another item I've wanted is a better Karaf Main class. Really, I would just 
like to use PAX-Exam as a runner. I know... it originated from pax-runner. 
Something simple: specify the Karaf version, features, and config, set up system 
bundle packages, and run. I guess if this were done it could be used in concert 
with the build template to support the run-from-template above.
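
Nothing like that exists yet, so purely as a sketch of the shape being asked for, here is a bare-bones runner built on the standard OSGi launcher API. It is neither Karaf's Main nor PAX-Exam; the class name and framework properties are chosen only for illustration:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.ServiceLoader;

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.launch.Framework;
    import org.osgi.framework.launch.FrameworkFactory;

    // Boots whichever OSGi framework is on the classpath, installs the
    // bundle locations given on the command line, and blocks until stop.
    public class TinyRunner {

        public static void main(String[] args) throws Exception {
            Map<String, String> config = new HashMap<>();
            config.put("org.osgi.framework.storage", "target/osgi-cache");
            config.put("org.osgi.framework.storage.clean", "onFirstInit");

            FrameworkFactory factory =
                    ServiceLoader.load(FrameworkFactory.class).iterator().next();
            Framework framework = factory.newFramework(config);
            framework.start();

            BundleContext context = framework.getBundleContext();
            for (String location : args) {
                context.installBundle(location).start();
            }

            framework.waitForStop(0);
        }
    }

A Karaf-flavoured version of this would layer features, etc/ configuration, and boot-feature handling on top of the same shape.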


Health Checks
We had to develop some custom health check code to ensure that all features and 
blueprint containers successfully start. Legacy portions of our application 
need to wait for Karaf to be fully realized before continuing execution. This 
was pretty important to our embedded Karaf usage, but that's certainly rare. 
Regardless, health checks are vital to microservice / cloud deployments. I 
recently found that the Fabric8 team pretty much already has this, and it's 
just about exactly what we developed. This needs to be documented for 
others to find.
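
The actual checks (either the ones described here or Fabric8's) aren't included in the thread. As a minimal sketch of the idea using only core OSGi APIs, a readiness check can at least require every non-fragment bundle to be ACTIVE; BundleHealthCheck is a made-up name, and feature/blueprint-level checks would sit on top of it:

    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.Constants;

    // Readiness = every bundle is ACTIVE; fragments only ever reach RESOLVED,
    // so they are accepted in that state.
    public final class BundleHealthCheck {

        private BundleHealthCheck() {
        }

        public static boolean allBundlesReady(BundleContext context) {
            for (Bundle bundle : context.getBundles()) {
                boolean fragment = bundle.getHeaders().get(Constants.FRAGMENT_HOST) != null;
                int state = bundle.getState();
                if (fragment) {
                    if (state != Bundle.RESOLVED) {
                        return false;
                    }
                } else if (state != Bundle.ACTIVE) {
                    return false;
                }
            }
            return true;
        }
    }

In a Kubernetes/OpenShift deployment, this is the kind of method a readiness-probe endpoint would delegate to.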


Boot Features
Boot feature support in the assembly plugin is a huge benefit for fast, 
lightweight Karaf instances. This would clearly be the preferred configuration 
for a Nano-like distribution (shout-out to our Virgo brothers). Unfortunately, 
I've had varying success moving our assemblies from startupFeatures to 
bootFeatures. It may have to do with our custom deployers. Honestly, I haven't 
looked into it too deeply.


Easy Web Interface
Hawtio is nice, but it can be a bit overwhelming. An easy interface, especially 
for those new to OSGi/Karaf, would go a long way.


I've reached out to our OSGi guys here for their thoughts and will post them 
here as they come in.

-Nick Baker

From: Christian Schneider <cschneider...@gmail.com> on behalf of Christian 
Schneider <ch...@die-schneider.net>
Sent: Wednesday, January 11, 2017 9:51:56 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

Sounds like you have a good case to validate karaf boot on.

Can you explain how you create your deployments now and what you are missing in 
current karaf? Until now we only discussed internally about the scope and 
requirements of karaf boot. It would be very valuable to get some input from a 
real world case.

Christian


Re: karaf boot

2017-01-11 Thread Christian Schneider

Sounds like you have a good case to validate karaf boot on.

Can you explain how you create your deployments now and what you are 
missing in current Karaf? Until now we have only discussed the scope and 
requirements of Karaf Boot internally. It would be very valuable to 
get some input from a real-world case.


Christian

On 11.01.2017 13:41, Nick Baker wrote:
We'd be interested in this as well. Beginning to move toward 
Microservices deployments + Remote Services for interop. I'll have a 
look at your branch JB!


We've added support in our Karaf main for multiple instances from the 
same install on disk. Cache directories segmented, port conflicts 
handled. This of course isn't an issue in container-based cloud 
deployments (Docker). Still, may be of use.


-Nick Baker


--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: karaf boot

2017-01-11 Thread Jean-Baptiste Onofré

Hi guys,

that's really great, and we are looking for help and ideas there!

The early-stage branch is here:

https://github.com/jbonofre/karaf-boot

And this one also contains some PoC work:

https://github.com/jbonofre/karaf-boot/tree/jpa

Regards
JB

On 01/11/2017 01:41 PM, Nick Baker wrote:

We'd be interested in this as well. Beginning to move toward
Microservices deployments + Remote Services for interop. I'll have a
look at your branch JB!

We've added support in our Karaf main for multiple instances from the
same install on disk. Cache directories segmented, port conflicts
handled. This of course isn't an issue in container-based cloud
deployments (Docker). Still, may be of use.

-Nick Baker




--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: karaf boot

2017-01-11 Thread Nick Baker
We'd be interested in this as well. We're beginning to move toward microservices 
deployments + Remote Services for interop. I'll have a look at your branch, JB!

We've added support in our Karaf main for multiple instances from the same 
install on disk. Cache directories are segmented and port conflicts are handled. 
This of course isn't an issue in container-based cloud deployments (Docker), but 
it may still be of use.
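
That multi-instance support isn't part of stock Karaf and its code isn't shown here. As an illustration of the idea only, a thin wrapper can segment the writable state per instance before delegating to the standard launcher; PerInstanceLauncher and the instance.id property are invented for the sketch, and port handling is deliberately glossed over:

    import org.apache.karaf.main.Main;

    // Shares karaf.home across instances but segments the writable state
    // (bundle cache, logs) per instance so they don't trample each other.
    public class PerInstanceLauncher {

        public static void main(String[] args) throws Exception {
            String instanceId = System.getProperty("instance.id", "instance-1");
            String home = System.getProperty("karaf.home", ".");

            System.setProperty("karaf.data", home + "/data-" + instanceId);
            System.setProperty("karaf.name", instanceId);

            // Port conflicts (HTTP, SSH, RMI) still need per-instance values
            // in the etc/*.cfg files or equivalent properties; omitted here.

            Main.main(args);
        }
    }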

-Nick Baker

Sent via the BlackBerry Hub for 
Android<http://play.google.com/store/apps/details?id=com.blackberry.hub>
From: bradj...@redhat.com
Sent: January 11, 2017 12:54 AM
To: user@karaf.apache.org
Reply-to: user@karaf.apache.org
Subject: RE: karaf boot


I'd be very interested in this project and will definitely give it a look.  
I've been using the Karaf 4 static profiles to create compact microservices 
containers and it works well.  I'm not sure if that's what the Karaf Boot 
project is aiming at since I haven't had a chance to look at it yet.  But I'll 
definitely give it a look tomorrow.

Brad




RE: karaf boot

2017-01-10 Thread Brad Johnson
I’d be very interested in this project and will definitely give it a look.  
I’ve been using the Karaf 4 static profiles to create compact microservices 
containers and it works well.  I’m not sure if that’s what the Karaf Boot 
project is aiming at since I haven’t had a chance to look at it yet.  But I’ll 
definitely give it a look tomorrow.

 

Brad

 

From: Jean-Baptiste Onofré [mailto:j...@nanthrax.net] 
Sent: Tuesday, January 10, 2017 11:30 PM
To: user@karaf.apache.org
Subject: Re: karaf boot

 

Hi Scott

There were a discussion in progress on the mailing list about Karaf boot.
A PoC branch is available on my GitHub in early stage.

I would like to restart the discussion based on this branch.

Regards 
JB






Re: karaf boot

2017-01-10 Thread Jean-Baptiste Onofré
Hi Scott,

There was a discussion in progress on the mailing list about Karaf Boot.
An early-stage PoC branch is available on my GitHub.

I would like to restart the discussion based on that branch.

Regards 
JB

On Jan 11, 2017, at 02:25, Scott Lewis <sle...@composent.com> wrote:
>The page about Karaf boot that I've found:
>http://karaf.apache.org/projects.html#boot says 'not yet available'.  Is
>there an expected timeline for Karaf Boot?  Also, is there a branch upon
>which the Karaf boot work is being done?
>
>Thanks in advance,
>
>Scott


karaf boot

2017-01-10 Thread Scott Lewis
The page about Karaf boot that I've found: 
http://karaf.apache.org/projects.html#boot says 'not yet available'.  Is 
there an expected timeline for Karaf Boot?  Also, is there a branch upon 
which the Karaf boot work is being done?


Thanks in advance,

Scott