Re: Remote Services and multiple consumers of services

2017-02-16 Thread Nick Baker
It was indeed port conflicts with Fastbin! Thanks.


How far away are you guys from releasing 1.11? We're very interested in 
CompletableFuture and InputStream.
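For context, the appeal of CompletableFuture return types on remote service interfaces is that callers can compose on the result instead of blocking a thread while the call is on the wire. A minimal local sketch of that calling style (the service interface and names here are hypothetical, not Aries RSA API):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncServiceDemo {
    // Hypothetical remote-service-style interface with an async return type.
    interface ReportService {
        CompletableFuture<String> generate(String name);
    }

    public static void main(String[] args) {
        // Local stand-in; a remote services implementation would complete
        // the future when the response arrives over the wire.
        ReportService service = name ->
                CompletableFuture.supplyAsync(() -> "report:" + name);

        // The caller composes on the future instead of blocking the
        // invoking thread until the result is needed.
        String result = service.generate("q1")
                .thenApply(String::toUpperCase)
                .join();
        System.out.println(result); // REPORT:Q1
    }
}
```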


-Nick Baker


From: Nick Baker
Sent: Thursday, February 16, 2017 11:24:57 AM
To: user@karaf.apache.org
Subject: Re: Remote Services and multiple consumers of services


Very possible! This is all on single development machines right now. Thanks, 
I'll look into it and report back my findings.


-Nick Baker


From: Christian Schneider <cschneider...@gmail.com> on behalf of Christian 
Schneider <ch...@die-schneider.net>
Sent: Thursday, February 16, 2017 10:42:37 AM
To: user@karaf.apache.org
Subject: Re: Remote Services and multiple consumers of services

Hmm ... interesting.

Aries RSA posts the available endpoints to zookeeper. There is no mechanism to 
tell zookeeper that one of the client containers uses the service. So the 
expected behaviour is that all clients get all the services from zookeeper.

Can you try to use the tcp transport to make sure this is not related to 
fastbin? Maybe the fastbin clients use the same port on their side (if it is on 
the same machine) and so block each other.
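For reference, switching an export from fastbin to the TCP provider is typically done through the standard Remote Services service properties. A hedged sketch of the properties set on the exported service (the config name "aries.tcp" is an assumption based on the Aries RSA providers; check the docs for your version):

```properties
# Registration properties on the exported service (sketch).
# "aries.fastbin" selects the fastbin provider; "aries.tcp" the TCP one.
service.exported.interfaces = *
service.exported.configs = aries.tcp
```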

Christian

On 16.02.2017 15:58, Nick Baker wrote:
Hey All,

We're seeing something strange and I'm hoping someone here can provide some 
insight (*cough* Christian).

We have 3 Karaf containers all joined via Aries RSA (Zookeeper/Fastbin). A 
Service published by one container (C) is designed to be used by the other two 
(A / B) at the same time. What we're seeing is only one of those (A) is getting 
the service. B's tracker receives nothing. I can bounce "B" over and over and 
still nothing. If I shut down "A", then the next bounce of "B" shows the 
service in the tracker.

Any insight appreciated!
-Nick Baker



This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed to. 
If you are not the named addressee you should not disseminate, distribute, copy 
or alter this email. Please notify the sender immediately by e-mail if you have 
received this email by mistake and delete this email from your systems. If you 
are not the intended addressee, please note that disclosing, copying, 
distributing or taking any action in reliance on the contents of such an email 
is strictly prohibited.

Any views or opinions presented in this email are solely those of the author 
and might not represent those of Pentaho Corporation and its affiliates. Please 
be aware that emails are not a secure mode of communication and may be 
intercepted by third parties.

WARNING: Computer viruses and other malicious codes can be transmitted by email 
and its attachments. You or your organization are advised to scan this email 
and its attachments for the presence of any computer viruses and malicious 
codes. Although Pentaho Corporation has taken reasonable precautions to ensure 
no viruses and other malicious codes are present in this email, Pentaho 
Corporation cannot accept responsibility for any loss or damage arising from 
the use of this email or its attachments.

Pentaho Corporation is a company incorporated in the State of Delaware in the 
USA and its principal place of business is at Suite 460, 5950 Hazeltine 
National Drive, Orlando, FL32822, USA.



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com








Re: Bundle restart are hard to tackle

2017-01-31 Thread Nick Baker
I do agree that Felix can be very opaque when restarts occur. We usually end up 
sprinkling breakpoints around or attaching BTrace to log framework internals. 
refreshOptionalPackages() is often the cause. Some better logging about the 
trigger and cause of the refresh would be great. I know that we lose visibility 
to the root trigger as the events propagate across different threads, but 
someone ambitious/crafty should be able to handle it.


-Nick Baker


From: Charlie Mordant <cmorda...@gmail.com>
Sent: Tuesday, January 31, 2017 4:39:32 AM
To: user@karaf.apache.org
Subject: Bundle restart are hard to tackle

Hi users of the world's best application server ever ;-),

I'm struggling with feature resolution, and would like to have your POV.

This issue concerns the JDBC, the Blueprint, and the transactional features.

Here's the issue with feature resolution in my pax-exam test:


* The jdbc-pool-aries feature starts first (+ the config one and the cfg \o/).
* Then the transactional feature
* The third feature that starts is the aries jndi one, so the two datasources 
(the XA and non-XA ones) are exposed (\o/\o/).
* The next to start is the blueprint one, which starts the 
xbean-blueprint bundle.
That blueprint bundle restarts the 'optional' resolutions of the aries transaction 
manager, which then restarts the aries-jdbc modules (optional resolution again) & config, 
which stops the datasource and starts another one.
Unfortunately, at this point, my 'daos' bundle is already linked with a proxy 
to the 'old' datasource.

A way to fix it is to reference the 'transaction' feature in the 'pool-aries' 
one, and the 'blueprint' one in the 'transaction' one, but it would then break 
some modularity.
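The workaround described above would look roughly like this in a Karaf features file. A hedged sketch using the feature names from this thread; the version number and bundle coordinates are placeholders, not taken from the actual project:

```xml
<!-- Sketch: make the pool-aries feature pull in the transaction feature
     so the start order is fixed at the feature level. -->
<feature name="osgiliath-pax-jdbc-pool-aries" version="1.0.0">
  <feature>osgiliath-transaction</feature>
  <bundle>mvn:org.ops4j.pax.jdbc/pax-jdbc-pool-aries/1.0.0</bundle>
</feature>
```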

Have you got an idea on how to properly handle it?

Here is the actual XML of my features; they're similar to the Karaf ones, with 
bundle versions aligned:
[code]

 
osgiliath-pax-jdbc-spec

mvn:org.ops4j.pax.jdbc/pax-jdbc-pool-common/${org.ops4j.pax.jdbc_pax-jdbc-pool-common.version}

mvn:org.ops4j.pax.jdbc/pax-jdbc-pool-aries/${org.ops4j.pax.jdbc_pax-jdbc-pool-aries.version}

osgiliath-aries-blueprint
mvn:org.apache.xbean/xbean-blueprint/${org.apache.xbean_xbean-blueprint.version}


osgiliath-transaction
mvn:org.apache.aries.transaction/org.apache.aries.transaction.jdbc/${org.apache.aries.transaction_org.apache.aries.transaction.jdbc.version}






aries.transaction.recoverable = true
aries.transaction.timeout = 600
aries.transaction.howl.logFileDir = ${karaf.data}/txlog
aries.transaction.howl.maxLogFiles = 2
aries.transaction.howl.maxBlocksPerFile = 512
aries.transaction.howl.bufferSize = 4

osgiliath-transaction-api
mvn:org.apache.aries/org.apache.aries.util/${org.apache.aries_org.apache.aries.util.version}

mvn:org.apache.aries.transaction/org.apache.aries.transaction.manager/${org.apache.aries.transaction_org.apache.aries.transaction.manager.version}

osgiliath-aries-blueprint
mvn:org.apache.felix/org.apache.felix.coordinator/${org.apache.felix_org.apache.felix.coordinator.version}

mvn:org.apache.aries.transaction/org.apache.aries.transaction.blueprint/${org.apache.aries.transaction_org.apache.aries.transaction.blueprint.version1}

mvn:org.apache.aries.transaction/org.apache.aries.transaction.blueprint/${org.apache.aries.transaction_org.apache.aries.transaction.blueprint.version}
mvn:org.apache.xbean/xbean-blueprint/${org.apache.xbean_xbean-blueprint.version}


osgiliath-spring
osgiliath-spring-tx




** Additional information **


mvn:javax.interceptor/javax.interceptor-api/${javax.interceptor_javax.interceptor-api.version}
mvn:org.apache.geronimo.specs/geronimo-atinject_1.0_spec/${org.apache.geronimo.specs_geronimo-atinject_1.0_spec.version}
mvn:javax.el/javax.el-api/${javax.el_javax.el-api.version}
mvn:javax.enterprise/cdi-api/${javax.enterprise_cdi-api.version}

mvn:javax.transaction/javax.transaction-api/${javax.transaction_javax.transaction-api.version}

 

mvn:org.apache.geronimo.specs/geronimo-jpa_2.0_spec/${org.apache.geronimo.specs_geronimo-jpa_2.0_spec.version}

mvn:org.apache.geronimo.specs/geronimo-osgi-registry/${org.apache.geronimo.specs_geronimo-osgi-registry.version}

 
osgiliath-transaction-api
osgiliath-persistence-api
mvn:org.apache.felix/org.apache.felix.coordinator/${org.apache.felix_org.apache.felix.coordinator.version}
mvn:org.osgi/org.osgi.service.jdbc/${org.osgi_org.osgi.service.jd

Re: Levels of Containerization - focus on Docker and Karaf

2017-01-17 Thread Nick Baker
That would be good, JB. People will increasingly look to deploy Karaf in 
container solutions.


-Nick Baker


From: Jean-Baptiste Onofré <j...@nanthrax.net>
Sent: Tuesday, January 17, 2017 8:26:16 AM
To: user@karaf.apache.org
Subject: Re: Levels of Containerization - focus on Docker and Karaf

It makes sense.

The only "lacking" part is about global service registry, API gateway
and governance.

I talked a bit with Guillaume some months ago and with Krystof during
last ApacheCon in Sevilla: I think it would be great to have something
around that as part of ServiceMix.

I prepared a document describing this that I can share with people
interested.

Regards
JB

On 01/17/2017 12:19 AM, souciance wrote:
> Someone may have mentioned this before but the way we build our services
> is by building Karaf inside a Docker container.
>
> We also deploy each Karaf instance which contains multiple Camel projects
> inside separate containers. So we have a one to one mapping between karaf
> instance and container. This is mainly because we don't want a container
> going down affecting more than one instance.
>
> Of course if our instances increase we may later have to consolidate but so
> far it has worked. We also create our Karaf instances using custom
> distributions and then add them using dockerfiles. The main problem has been
> to get the distribution working correctly.
>
>
>
> --
> View this message in context: 
> http://karaf.922171.n3.nabble.com/Levels-of-Containerization-focus-on-Docker-and-Karaf-tp4049162p4049260.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>
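The one-Karaf-per-container approach described above can be sketched as a Dockerfile. Base image, archive name, and paths here are assumptions for illustration, not details from the thread:

```dockerfile
# Sketch: one custom Karaf distribution per container.
FROM openjdk:8-jre
# Custom distribution built by the karaf-maven-plugin (name assumed).
COPY target/my-karaf-dist.tar.gz /opt/
RUN tar -xzf /opt/my-karaf-dist.tar.gz -C /opt \
 && mv /opt/my-karaf-dist /opt/karaf \
 && rm /opt/my-karaf-dist.tar.gz
# SSH console, RMI registry, RMI server (default Karaf ports).
EXPOSE 8101 1099 44444
# Run in the foreground so the container stays up.
CMD ["/opt/karaf/bin/karaf", "server"]
```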

--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: Opinionated...

2017-01-16 Thread Nick Baker
Nothing much worth seeing from a Karaf OSGI perspective yet. We're working on 
some stuff for the next release that's not going to be open until then, but you 
can see the API now. We have one main Producer interface:


https://github.com/pentaho/pentaho-kettle/blob/ael/pdi-execution-engine/api/src/main/java/org/pentaho/di/engine/api/reporting/IProgressReporting.java



From: Nick Baker
Sent: Monday, January 16, 2017 3:58:04 PM
To: user@karaf.apache.org
Subject: Re: Opinionated...

Nothing much worth seeing from a Karaf OSGI perspective yet. We're working on 
some stuff for the next release that's not going to be open until then, but you 
can see the API now:


From: David Daniel <david.daniel.1...@gmail.com>
Sent: Monday, January 16, 2017 1:56:10 PM
To: user@karaf.apache.org
Subject: Re: Opinionated...

Nick do you have a link to pentaho where you are doing some of this.  I am 
guessing you are using flow instead of the OSGI pushstreams api when you say 
that streaming was considered for the OSGI standards.

David Daniel

On Mon, Jan 16, 2017 at 1:36 PM, Nick Baker 
<nba...@pentaho.com<mailto:nba...@pentaho.com>> wrote:
The event bus model has served us well for certain things, broadcasting 
application events consumed by unknown plugins for instance. It's certainly 
extensible and easy from the consumer and producer standpoint.

That said, we are basing much of our new work on Reactive Streams APIs. This 
provides backpressure for composed streams inside the application and ensures 
that no one is wasting time putting things on the bus which aren't actually 
listened to.

We're using RxJava internally at the moment. Remote subscriptions are something 
we're just now dealing with, which is prompting a look at Akka Streams.

I know Streaming is something that was considered for the OSGi standards, but 
with Java 9 looking to adopt Reactive Streams as the new "Flow" API, I would 
encourage us to look forward to that. In combination with remote services and a 
simple event bus like Guava which can be remoted, I think you have a pretty 
competent set of utilities to work with.
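Since Java 9's adoption of Reactive Streams as the "Flow" API is mentioned above, here is a minimal, self-contained sketch of the backpressure model using only the JDK (java.util.concurrent.Flow); the names and the one-item-at-a-time request policy are illustrative, not from Pentaho's code:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class FlowDemo {
    // Collects items from a publisher, requesting one item at a time
    // (explicit backpressure), and returns what was received.
    static List<String> run() throws InterruptedException {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        publisher.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // ask for items one at a time
            }
            public void onNext(String item) {
                received.add(item);
                subscription.request(1); // pull the next item
            }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete() { done.countDown(); }
        });
        publisher.submit("event-1");
        publisher.submit("event-2");
        publisher.close(); // triggers onComplete after delivery
        done.await(5, TimeUnit.SECONDS);
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [event-1, event-2]
    }
}
```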

-Nick Baker

From: Scott Lewis <sle...@composent.com<mailto:sle...@composent.com>>
Sent: Monday, January 16, 2017 12:49:31 PM
To: user@karaf.apache.org<mailto:user@karaf.apache.org>
Subject: Re: Opinionated...

On 1/16/2017 2:20 AM, Christian Schneider wrote:
> 

> - One way messaging. I think the purest form of remote communication
> are one way messages backed by JMS or Kafka or other messaging
> brokers. Unfortunately I think this is only partially supported in
> Remote Services.

and Nick Baker just wrote:

 >Where is Event Admin?

In terms of standardization, there was a DistributedEventing rfp 158
[1], but I don't know what's planned for that now by the EEG.  There was
also some work on push streams and perhaps that's somehow absorbed the
distributed eventing.   Someone on this list currently on the EEG can
probably speak to the state of standardization.

In terms of implementation, ECF has had a DistributedEventAdmin
implementation for a very long time [2].   The description at [2] is
based upon ActiveMQ, but like ECF's remote services implementation, a
provider approach allows the substitution of other pub/sub
providers...for example mqtt [3], and others (e.g. Camel...and plenty of
others).

Scott

[1] https://github.com/osgi/design

[2] https://wiki.eclipse.org/EIG:Distributed_EventAdmin_Service
[3] https://github.com/ECF/Mqtt-Provider





Re: karaf boot

2017-01-16 Thread Nick Baker

I believe one of the goals of the "Boot" project should be to provide an easier 
introduction to developing with OSGi, Karaf, and the broader community projects 
(CXF, Camel, etc.). One aspect of this is introducing developers to the 
dynamism, breadth, and, yes, complexity of OSGi.


We try to ease people into it here, but there's always another "gotcha" waiting 
for them around the corner. We have people here who can educate and unblock 
those newbies. There will need to be a good deal of "art" in progressively 
introducing these concepts in the documentation.


There's also a lot of synergy between a simple/introduction Karaf instance and 
one which is tuned for microservices (startupFeatures only, CDI or SCR, 
read-only ConfigAdmin, simplified Aether resolution). This won't always be the 
case. Production deployments will want a fully populated /system directory and 
cache.clean turned off for instance.


As for the Microservice aspect, I do agree that it should have some Remote 
Services implementation enabled out of the box, and a Message Bus to service 
internal communication as well as remote. Where is Event Admin? We're using 
Guava + Camel + JMS to handle inter/intraProcess communication. Anyway, the 
combination of the two is a solid foundation to build upon.


-Nick Baker


From: Guillaume Nodet <gno...@apache.org>
Sent: Monday, January 16, 2017 10:06:53 AM
To: user
Subject: Re: karaf boot

I have investigated reworking the blueprint core extender on top of DS months 
ago, but I did not pursue.  The Felix SCR core is now more reusable (I made it 
that way in order to reuse it in pax-cdi), so maybe I could have another quick 
look about the feasibility. But I am pessimistic as IIRC, the problems were 
more about some requirements in the blueprint spec which could not be mapped 
correctly to DS.

2017-01-16 15:44 GMT+01:00 Brad Johnson 
<bradj...@redhat.com<mailto:bradj...@redhat.com>>:
I wonder if there’s a way to start the implementation of a CDI common practice 
with DS where possible but blueprint where not and then migrate toward DS.

From my point of view when mentoring new developers there are going to be two 
general use cases for CDI: one is just for internal wiring inside a bundle and 
dependency injection with internals. Among other things it makes testing a 
heck of a lot easier.

The other use case is an easy way to export services and get references to 
them. I’m not sure how well DS and blueprint play together.

It is also one of the reasons I’ve migrated away from using blueprint XML for 
routes to the Java DSL.  Consistent and easy for Java developers to understand. 
 Because I’ve used blueprint so much I have a limited understanding of CDI but 
from what I’ve seen of it, it is a very sane way of handling wire up.

So the question I guess is how hard is it to create a migration plan to move 
pieces from blueprint to DS under the covers? Does using the Camel Java DSL 
make that easier?

I don’t see a problem for a “next generation” of the stack to say that the XML 
variant is no longer being supported and recommend migration to the Java DSL 
with CDI.  That isn’t difficult in any case. But from the framework perspective 
it may eliminate one level of indirection that requires XML parsing, schema 
namespaces and the mapping of those through a plugin into the constituent parts.

Would adopting such an approach make conversion away from blueprint easier? 
Would it make a migration path easier?

Brad

From: Christian Schneider 
[mailto:cschneider...@gmail.com<mailto:cschneider...@gmail.com>] On Behalf Of 
Christian Schneider
Sent: Monday, January 16, 2017 4:37 AM

To: user@karaf.apache.org<mailto:user@karaf.apache.org>
Subject: Re: karaf boot

I generally like the idea of having one standard way to do dependency injection 
in OSGi. Unfortunately until now we do not have a single framework that most 
people are happy with.

I pushed a lot to make blueprint easier by using the CDI and JEE annotations 
and create blueprint from it using the aries blueprint maven plugin. This 
allows a CDI style development and works very well already. Recently Dominik 
extended my approach a lot and covered much of the CDI functionality. Currently 
this might be the best approach when your developers are experienced in JEE. 
Unfortunately blueprint has some bad behaviours like the blocking proxies when 
a mandatory service goes away. Blueprint is also quite complex internally and 
there is no standardized API for extension namespaces.

CDI would be great but it is less well supported on OSGi than blueprint and 
the current implementations also have the same bad proxy behaviour. So while I 
would like to see a really good CDI implementation on OSGi with dynamic 
behaviour like DS we are not there yet.

DS is a little limited with its lack of extensibility but it works by far best 
of all frame

Re: karaf boot

2017-01-16 Thread Nick Baker
We're based on blueprint here, but that has more to do with our legacy as a 
Spring shop. Blueprint was familiar, standardized in the OSGI spec, and the 
hope in the past was to use the Gemini implementation so we could leverage 
Spring features and projects alongside OSGI (Transactions, AOP, Spring 
Security, Method-level security, etc.).


(Guillaume's recent work to support the Spring namespaces in Aries is something 
we're interested in)


For Service registration/discovery, blueprint works okay. Less well is the use 
of the blueprintContainer as an object factory. We end up passing around 
factory objects which delegate to the blueprintContainer where the built-in and 
custom scopes (session, request) manage what's returned for a particular call. 
These are true container-managed instances, but this is something we had to 
build on top of the container.
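The scope-managed factory built on top of the container could be approximated like this; a minimal stdlib sketch (all names hypothetical, not the actual implementation described):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: a factory that hands out one instance per scope id
// (e.g. a session or request id), in the spirit of the custom scopes
// layered on top of the BlueprintContainer described above.
class ScopedFactory<T> {
    private final Supplier<T> creator; // how to build a new managed instance
    private final Map<String, T> instancesByScope = new ConcurrentHashMap<>();

    ScopedFactory(Supplier<T> creator) {
        this.creator = creator;
    }

    // Returns the instance bound to the given scope, creating it on first use.
    T get(String scopeId) {
        return instancesByScope.computeIfAbsent(scopeId, id -> creator.get());
    }

    // Called when the scope ends (e.g. session invalidated).
    void release(String scopeId) {
        instancesByScope.remove(scopeId);
    }
}
```

Repeated lookups with the same scope id return the same instance; different ids get independent instances.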


I'm also increasingly less a fan of proxying, damping and availability 
grace-periods. This may "help" a system, particularly one with poorly 
constructed feature files, but results in a lot of confusion and the perception 
that our OSGI container is non-deterministic. I wish the spec had kept Spring 
DM's ability to turn off proxying.


So yes, picking something like PAX-CDI or SCR sounds good. I'm not sure what 
level of maturity these have reached. Last I checked CDI wasn't quite ready and 
SCR was in transition following the standardization.


-Nick Baker



From: Guillaume Nodet <gno...@apache.org>
Sent: Monday, January 16, 2017 10:06 AM
To: user
Subject: Re: karaf boot

I have investigated reworking the blueprint core extender on top of DS months 
ago, but I did not pursue.  The Felix SCR core is now more reusable (I made it 
that way in order to reuse it in pax-cdi), so maybe I could have another quick 
look about the feasibility. But I am pessimistic as IIRC, the problems were 
more about some requirements in the blueprint spec which could not be mapped 
correctly to DS.

2017-01-16 15:44 GMT+01:00 Brad Johnson 
<bradj...@redhat.com<mailto:bradj...@redhat.com>>:
I wonder if there’s a way to start the implementation of a CDI common practice 
with DS where possible but blueprint where not and then migrate toward DS.

From my point of view when mentoring new developers there are going to be two 
general use cases for CDI, one is just for internal wiring inside a bundle and 
dependency injection with internals.  Among other things it makes testing a 
heck of a lot easier.

The other use case is an easy way to export services and get references to 
them. I’m not sure how well DS and blueprint play together.

It is also one of the reasons I’ve migrated away from using blueprint XML for 
routes to the Java DSL.  Consistent and easy for Java developers to understand. 
 Because I’ve used blueprint so much I have a limited understanding of CDI but 
from what I’ve seen of it, it is a very sane way of handling wire up.

So the question I guess is how hard is it to create a migration plan to move 
pieces from blueprint to DS under the covers? Does using the Camel Java DSL 
make that easier?

I don’t see a problem for a “next generation” of the stack to say that the XML 
variant is no longer being supported and recommend migration to the Java DSL 
with CDI.  That isn’t difficult in any case. But from the framework perspective 
it may eliminate one level of indirection that requires XML parsing, schema 
namespaces and the mapping of those through a plugin into the constituent parts.

Would adopting such an approach make conversion away from blueprint easier? 
Would it make a migration path easier?

Brad

From: Christian Schneider 
[mailto:cschneider...@gmail.com<mailto:cschneider...@gmail.com>] On Behalf Of 
Christian Schneider
Sent: Monday, January 16, 2017 4:37 AM

To: user@karaf.apache.org<mailto:user@karaf.apache.org>
Subject: Re: karaf boot

I generally like the idea of having one standard way to do dependency injection 
in OSGi. Unfortunately until now we do not have a single framework that most 
people are happy with.

I pushed a lot to make blueprint easier by using the CDI and JEE annotations 
and creating blueprint from them using the aries blueprint maven plugin. This 
allows a CDI style development and works very well already. Recently Dominik 
extended my approach a lot and covered much of the CDI functionality. Currently 
this might be the best approach when your developers are experienced in JEE. 
Unfortunately blueprint has some bad behaviours like the blocking proxies when 
a mandatory service goes away. Blueprint is also quite complex internally and 
there is no standardized API for extension namespaces.

CDI would be great but it is less well supported on OSGi than blueprint and 
the current implementations also have the same bad proxy behaviour. So while I 
would like to see a really good CDI implementation on OSGi with dynamic 
behaviour like DS we are not there yet.

Re: Levels of Containerization - focus on Docker and Karaf

2017-01-13 Thread Nick Baker
Injecting configuration into a containerized app (docker) is considered 
standard practice. The friction here is the level of sophistication in OSGI 
Configuration.

It seems to me what you need isn't some hack to push configurations through 
environment variables, but a new implementation of ConfigurationAdmin, or an 
agent which interacts with CM mirroring configurations in from an external 
system.

In our usage it's common to have a "tenant", think Walmart vs Target. Setting 
the tenant ID as an environment variable, then having the configurations loaded 
from Zookeeper (or whatever) and injected into CM, seems right.

-Nick

From: Dario Amiri <dariusham...@hotmail.com>
Sent: Friday, January 13, 2017 3:21:17 PM
To: user@karaf.apache.org
Subject: Re: Levels of Containerization - focus on Docker and Karaf


Let me expand on why this is desirable. Without the ability to set 
configuration through environment variables, I essentially have to create a 
docker image for each deployment. I have a root Dockerfile which assembles the 
main Karaf container image and brings in dependencies such as the JRE, then I 
have a Dockerfile for each deployment environment which builds on top of the 
root image by overriding deployment specific configuration. Automation reduces 
this burden but it is not ideal.

If I could set the contents of a config file in an environment variable, I 
could just pass the configuration directly to my root karaf docker image 
without having to build on top of it.
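One possible convention for this (hypothetical, not an existing Karaf feature): treat each environment variable named `KARAF_CONFIG_<PID>` as the full contents of `etc/<pid>.cfg` and materialize the files before Karaf starts:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;
import java.util.Map;

// Hypothetical convention: KARAF_CONFIG_ORG_EXAMPLE_FOO holds the entire
// contents of etc/org.example.foo.cfg. A sketch of the kind of startup
// hook the thread is asking for, not an existing Karaf feature.
class EnvConfigMaterializer {
    static final String PREFIX = "KARAF_CONFIG_";

    // Writes one .cfg file per matching variable; returns how many were written,
    // so a launcher could run this just before invoking Karaf's main.
    static int materialize(Map<String, String> env, Path etcDir) throws IOException {
        Files.createDirectories(etcDir);
        int written = 0;
        for (Map.Entry<String, String> e : env.entrySet()) {
            if (!e.getKey().startsWith(PREFIX)) continue;
            // ORG_EXAMPLE_FOO -> org.example.foo
            String pid = e.getKey().substring(PREFIX.length())
                    .toLowerCase(Locale.ROOT).replace('_', '.');
            Files.write(etcDir.resolve(pid + ".cfg"),
                    e.getValue().getBytes(StandardCharsets.UTF_8));
            written++;
        }
        return written;
    }
}
```

With that in place the same root image works in every deployment environment; only the environment variables change.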


Being able to start Karaf as "java -jar karaf.jar" is desirable because it 
makes it easier to use a Karaf based application with PaaS such as Heroku and 
Cloud Foundry.

D

On 01/13/2017 12:10 PM, Dario Amiri wrote:

Ideally, I want to be able to do:

java -jar my-karaf.jar

And I can override individual configuration files using some environment 
variable convention.

D

On 01/13/2017 11:56 AM, Brad Johnson wrote:
Does it have to be an executable jar file or just a standalone executable? The 
static profiles actually create and zip up a full 
Karaf/felix/dependency/application implementation that when unzipped has all 
the standard bin directory items.

Brad

From: Dario Amiri [mailto:dariusham...@hotmail.com]
Sent: Friday, January 13, 2017 1:28 PM
To: user@karaf.apache.org<mailto:user@karaf.apache.org>
Subject: Re: Levels of Containerization - focus on Docker and Karaf


I use Docker and Karaf. I've never had a problem creating a Docker image of my 
Karaf container. What I gain is freedom from having to worry about dependency 
related issues such as whether the right JRE is available.



That being said there are some challenges when using Karaf to build 12-factor 
apps. FWIW here's my two item list of what would make Karaf a more attractive 
platform from a 12-factor app perspective.



1. The ability to inject Karaf configuration through the environment (e.g. 
environment variables). Not just a single property, but an entire config admin 
managed configuration file if necessary. Even the existing support for reading 
property values from the environment is cumbersome because it requires setting 
up that relationship as a Java system property as well.

2. The ability to package Karaf as a standalone runnable jar. Looks like Karaf 
boot is addressing this. I hope it comes with tooling that makes it easy to 
transition to this kind of model.



D

On 01/12/2017 04:44 AM, Nick Baker wrote:

Thanks Guillaume!



This is perfect for our microservice/containerized Karaf. I'll give this a try 
and see if we can get our features in startup. We've had issues in the past 
here.



-Nick Baker


From: Guillaume Nodet <gno...@apache.org><mailto:gno...@apache.org>
Sent: Thursday, January 12, 2017 5:55:24 AM
To: user
Subject: Re: Levels of Containerization - focus on Docker and Karaf

Fwiw, starting with Karaf 4.x, you can build custom distributions which are 
mostly static, and that more closely map to micro-services / docker images.  
The "static" images are called this way because you they kinda remove all the 
OSGi dynamism, i.e. no feature service, no deploy folder, read-only config 
admin, all bundles being installed at startup time from etc/startup.properties.
This can be easily done by using the karaf maven plugin and configuring 
startupFeatures and referencing the static kar, as shown in:
  https://github.com/apache/karaf/blob/master/demos/profiles/static/pom.xml
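For orientation, the plugin configuration looks roughly like this (a sketch; the linked demo pom is the authoritative version):

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <configuration>
    <!-- install everything at startup time via etc/startup.properties -->
    <startupFeatures>
      <feature>static</feature>
      <feature>my-app</feature> <!-- hypothetical application feature -->
    </startupFeatures>
  </configuration>
</plugin>
```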


2017-01-11 21:07 GMT+01:00 CodeCola 
<prasen...@rogers.com<mailto:prasen...@rogers.com>>:
Not a question but a request for comments. With a focus on Java.

Container technology has traditionally been messy with dependencies and no
easy failsafe way until Docker came along to really pack ALL dependencies
(including the JVM) together in one ready-to-ship image that was faster,
more comfortable, and easier to understand than other container

Re: Levels of Containerization - focus on Docker and Karaf

2017-01-12 Thread Nick Baker
I do agree that an "opinionated" or "prescriptive" stack would help. It 
shouldn't prohibit the usage of any Karaf feature of course.


New users gravitate to full-stack solutions. Agnostic platforms with lots of 
options and no predefined stack, while obviously having many merits and longer 
legs (don't be Wicket), just haven't been winning out in adoption. This applies 
across the spectrum of computing.


From: Brad Johnson 
Sent: Thursday, January 12, 2017 10:46:55 AM
To: user@karaf.apache.org
Subject: RE: Levels of Containerization - focus on Docker and Karaf

Guillaume,

I’d mentioned that in an early post as my preferred way to do microservices and 
perhaps a good way of doing a Karaf Boot. I’ve worked with the Karaf 4 profiles 
and they are great.  Also used your CDI OSGi service.  If we could use the 
Karaf 4 profiles with the CDI implementation with OSGi services and the Camel 
Java DSL as a standard stack, it would permit focused development and 
standardized bundle configurations.

When I created a zip with Karaf, CXF and Camel the footprint was 30MB.

While having Karaf Boot support DS, Blueprint, CDI, etc. I’m not sure that’s 
the healthiest move to encourage adoption.  We need less fragmentation in the 
OSGi world and not more.  Obviously even if Karaf Boot adopts one as the 
recommended standard it doesn’t mean that the others can’t be used.  When 
reading through Camel documentation on-line, for example, the confusion such 
fragmentation brings is obvious.  One becomes adept at converting Java DSL to 
blueprint or from blueprint to the Java DSL in one’s mind.

The static profiles work great and will let us create a number of standardized 
appliances for a wide variety of topology concerns and not just for 
microservices.  A “switchboard” appliance, for example, might be used for 
orchestrating microservices and managing the APIs.  A “gateway” appliance might 
have standard JAAS, web service configuration and a routing mechanism for 
calling microservices.  An “AB” appliance could be used for 80/20 testing.  And 
so on.  Take the idea of enterprise integration patterns and bring it up to 
enterprise integration patterns using Karaf appliances.

Many appliances might be “sealed”.  An appliance for AB testing, for example, 
would have configuration for two addresses in the configuration file and a 
percentage of traffic going to each.  No need to actually program or 
re-program the internals any more than we’d usually re-program a Camel 
component.  But the source would be there if one wanted to create a new 
component or modify how an existing one functioned.
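The "AB" appliance boils down to a weighted choice between the two configured addresses; a minimal sketch (names hypothetical) of that routing decision:

```java
// Hypothetical sketch of the routing decision inside an "AB" appliance:
// two configured endpoint addresses and the fraction of traffic sent to
// the first, as read from the appliance's configuration file.
class AbRouter {
    private final String endpointA;
    private final String endpointB;
    private final double fractionToA; // e.g. 0.8 for an 80/20 split

    AbRouter(String endpointA, String endpointB, double fractionToA) {
        this.endpointA = endpointA;
        this.endpointB = endpointB;
        this.fractionToA = fractionToA;
    }

    // roll is a uniform random number in [0,1), passed in so the decision
    // is deterministic in tests and the randomness source stays pluggable.
    String choose(double roll) {
        return roll < fractionToA ? endpointA : endpointB;
    }
}
```

Reconfiguring the split is then purely a configuration change, in line with the "sealed appliance" idea above.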

I’d vote for using CDI along with the Camel Java DSL for a couple of reasons.  
The first would be the standardization and portability of both code and skills. 
 Using CDI would mean that any Glassfish, JBoss, etc. developer would feel 
comfortable using the code.  Using Java Camel DSL would be for the same reason. 
 It would also give a programmer a sense that if they give Karaf Boot with the 
static profiles a shot, they aren’t locked in but can easily move to a 
different stack if necessary.  In a sense this is the same reason that Google 
chose Java language to run on the DVM. It tapped into a large existing 
skillbase instead of trying to get the world to adopt and learn a new language. 
 CDI with OSGi extensions also allows developers to use one paradigm for 
everything from lashing up internal dependency injections to working with OSGi 
services. I believe when you put that CDI extension out there you used 
blueprint style proxies under the cover.  As a developer using the CDI OSGi 
extension it was transparent to me.  If you later decided to rework that as a 
DS service, it would remain transparent and very much in the whole spirit of 
OSGi and its mechanisms for allowing refactoring and even rewriting without 
breaking the world.  It also makes unit testing a snap.  Any of us who have 
wrestled with Camel Blueprint Test Support can appreciate that.

This would also permit for standardization of documentation and of Karaf Boot 
appliance project structures and Maven plug-in use.  A bit of convention over 
configuration.  Projects would have a standard configuration.cfg file that gets 
deployed via features to the pid.cfg. A standard features  file in the filtered 
folder. Those already exist, of course, but it isn’t as standardized as it 
could be.

Personally I think this sort of goal with CDI, Karaf 4 and its profiles, and 
Camel Java DSL should be accelerated since Spring Boot is already out there.  
Waiting for another couple of years to release this as a standard might be too 
late.

The pieces are already there so it isn’t like we’d have to start from scratch. 
This would also play well with larger container concerns like Docker and 
Kubernetes.

Brad

From: Guillaume Nodet [mailto:gno...@apache.org]
Sent: Thursday, January 12, 2017 4:55 AM
To: user 

Re: karaf boot

2017-01-12 Thread Nick Baker
As of right now, No. We're developing this as an enterprise feature (pay). 
After it's released later this year we'll likely open some of the core stuff 
like the Source-to-Image build.

There's not a lot of code here. Basically we take some files and config checked 
into a Git repo and construct a traditional Karaf Maven assembly which is built 
and the result turned into a Docker image. So it's a two-step process.

The following links may help:
https://hub.docker.com/r/fabric8/s2i-karaf/
http://fabric8.io/guide/karaf.html
https://dzone.com/articles/how-to-containerize-your-camel-route-on-karaf-with


-Nick Baker

From: Jason Pratt <jpratt3...@gmail.com>
Sent: Wednesday, January 11, 2017 7:52:19 PM
To: user@karaf.apache.org
Subject: Re: karaf boot

Do you have any examples on github for this?

Sent from my iPad

On Jan 11, 2017, at 4:38 PM, Nick Baker 
<nba...@pentaho.com<mailto:nba...@pentaho.com>> wrote:

We're deploying into Kubernetes (OpenShift), but it could be Mesos/Marathon, 
Docker Swarm, etc. the only important thing is for each pod to know where to 
find zookeeper.

-Nick
From: jason.pr...@windriver.com<mailto:jason.pr...@windriver.com>
Sent: January 11, 2017 7:19 PM
To: user@karaf.apache.org<mailto:user@karaf.apache.org>
Reply-to: user@karaf.apache.org<mailto:user@karaf.apache.org>
Subject: RE: karaf boot


This sounds very interesting. Would the Dockers then be deployed similar to 
VertX?

From: Nick Baker [mailto:nba...@pentaho.com]
Sent: Wednesday, January 11, 2017 11:31 AM
To: user@karaf.apache.org<mailto:user@karaf.apache.org>
Subject: Re: karaf boot

Some background on what we've been playing with may be of use.

We've worked on a Kubernetes/OpenShift deployment of micro-service Karaf 
instances (pods). Each pod simply runs a plain Karaf preconfigured with Remote 
Service support (ECF) and select features of our own design.

This implementation leverages the OpenShift Source-to-image feature which 
transforms a simple Karaf assembly template checked into a Git Repository into 
a Maven Karaf assembly, which is then run to produce a Docker Image containing 
the Karaf assembly. The Fabric8 team has done great work here and we used their 
S2I image as inspiration for our own.


Templated Assembly
I really like this templated assembly approach. We have a single configuration 
file specifying which features are to be installed, optionally supplying new 
Feature repository URLs, and environment variables. You can also supply extra 
CFG files and even artifacts to be placed in the /deploy directory.
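Purely as an illustration (the actual template format is internal to the project described, and every key below is hypothetical), such a template might look like:

```properties
# Hypothetical assembly template -- illustrative only
featureRepositories = mvn:org.example/my-features/1.0/xml/features
features = scr, http, my-service
env.MY_SERVICE_TENANT = walmart
```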

One aspect about containerized deployments and microservice practices to 
consider is how they treat applications as static immutable images. You don't 
modify the capabilities or even configuration of running instances. Indeed 
instances themselves are not to be manipulated directly as the container 
environment will start/stop and scale out the base image as needed. Rather if 
you want to extend the capabilities or change configuration, you would create a 
new image or new version of an existing one and propagate that out to the 
cluster.

That said one of the goals for our application is the ability to deploy a small 
footprint instance and have it dynamically provision capabilities (features) as 
needed by the incoming workload. These would seem to run counter to the trend 
of static instances, but I disagree as the scope of what can be dynamically 
provisioned is controlled. Each of these runtime features contributes to an 
existing one - plugins to an existing capability.

TLDR: Support easy assemblies from a very simplified configuration. I'd 
probably introduce a command-line program to invoke the build and a maven 
plugin.


Run my template
Building off templated assemblies would be simple "run" support from the same 
configuration. Another command for the command-line program, maven plugin. Put 
everything in java.io.tmpdir, who cares.


Run Programmatically
Another item I've wanted is a better Karaf Main class. Really, I would just 
like to use PAX-Exam as a Runner. I know... it originated from pax-runner. 
Something simple. Specify Karaf version, features, config, setup System Bundle 
packages, run. I guess if this was done it could be used in concert with the 
build template to support the run-from-template above.
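The "specify version, features, config, run" wish could surface as a small builder; a purely hypothetical API sketch, not an existing Karaf or PAX-Exam class:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the runner API wished for above. It only
// assembles the launch command; a real runner would also provision the
// Karaf distribution of the requested version first.
class KarafRunner {
    private String version = "4.0.8";
    private final List<String> features = new ArrayList<>();
    private final Map<String, String> systemProps = new LinkedHashMap<>();

    KarafRunner version(String v) { this.version = v; return this; }
    KarafRunner feature(String f) { features.add(f); return this; }
    KarafRunner systemProperty(String k, String v) { systemProps.put(k, v); return this; }

    // Builds the java invocation; passing boot features via a system
    // property is purely illustrative.
    List<String> buildCommand() {
        List<String> cmd = new ArrayList<>();
        cmd.add("java");
        systemProps.forEach((k, v) -> cmd.add("-D" + k + "=" + v));
        cmd.add("-Dkaraf.boot.features=" + String.join(",", features));
        cmd.add("-jar");
        cmd.add("karaf-" + version + ".jar");
        return cmd;
    }
}
```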


Health Checks
We had to develop some custom health check code to ensure that all features and 
blueprint containers successfully start. Legacy portions of our application 
need to wait for Karaf to be fully realized before continuing execution. This 
was pretty important to our embedded Karaf usage, but that's certainly rare. 
Regardless, Health Checks are vital to microservice / cloud deployments. I 
recently found that the Fabric8 team pretty much already has this, and it's 
just about exactly what we developed [:(]  This needs to be documented for 
others to find.
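A minimal sketch of the kind of readiness endpoint described (JDK-only; this is not the Fabric8 implementation, and the "everything started" check is left pluggable):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.function.BooleanSupplier;

// Minimal readiness endpoint: reports 200 only once the supplied check
// (e.g. "all features and blueprint containers started") returns true.
class HealthCheckServer {
    static HttpServer start(int port, BooleanSupplier ready) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            boolean ok = ready.getAsBoolean();
            byte[] body = (ok ? "UP" : "STARTING").getBytes();
            exchange.sendResponseHeaders(ok ? 200 : 503, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

A Kubernetes readiness probe pointed at /health would then keep traffic away until the container is fully realized.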


Boot Features
Boot Feature support in the assembly plugin is a Huge benefit for fast 
lightweight Karaf instances.

Re: Levels of Containerization - focus on Docker and Karaf

2017-01-12 Thread Nick Baker
Thanks Guillaume!


This is perfect for our microservice/containerized Karaf. I'll give this a try 
and see if we can get our features in startup. We've had issues in the past 
here.


-Nick Baker


From: Guillaume Nodet <gno...@apache.org>
Sent: Thursday, January 12, 2017 5:55:24 AM
To: user
Subject: Re: Levels of Containerization - focus on Docker and Karaf

Fwiw, starting with Karaf 4.x, you can build custom distributions which are 
mostly static, and that more closely map to micro-services / docker images.  
The "static" images are called this way because they kinda remove all the 
OSGi dynamism, i.e. no feature service, no deploy folder, read-only config 
admin, all bundles being installed at startup time from etc/startup.properties.
This can be easily done by using the karaf maven plugin and configuring 
startupFeatures and referencing the static kar, as shown in:
  https://github.com/apache/karaf/blob/master/demos/profiles/static/pom.xml


2017-01-11 21:07 GMT+01:00 CodeCola 
<prasen...@rogers.com<mailto:prasen...@rogers.com>>:
Not a question but a request for comments. With a focus on Java.

Container technology has traditionally been messy with dependencies and no
easy failsafe way until Docker came along to really pack ALL dependencies
(including the JVM) together in one ready-to-ship image that was faster,
more comfortable, and easier to understand than other container and code
shipping methods out there. The spectrum from (Classical) Java EE Containers
(e.g. Tomcat, Jetty) --> Java Application Servers that are containerized
(Karaf, Wildfly, etc), Application Delivery Containers (Docker) and
Virtualization (VMWare, Hyper-V) etc. offers a different level of isolation
with different goals (abstraction, isolation and delivery).

What are the choices, how should they play together, should they be used in
conjunction with each other as they offer different kinds of
Containerization?

<http://karaf.922171.n3.nabble.com/file/n4049162/Levels_of_Containerization.png>



--
View this message in context: 
http://karaf.922171.n3.nabble.com/Levels-of-Containerization-focus-on-Docker-and-Karaf-tp4049162.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com<mailto:gno...@redhat.com>
Web: http://fusesource.com<http://fusesource.com/>
Blog: http://gnodet.blogspot.com/



Re: karaf boot

2017-01-11 Thread Nick Baker
We're deploying into Kubernetes (OpenShift), but it could be Mesos/Marathon, 
Docker Swarm, etc. the only important thing is for each pod to know where to 
find zookeeper.

-Nick
From: jason.pr...@windriver.com
Sent: January 11, 2017 7:19 PM
To: user@karaf.apache.org
Reply-to: user@karaf.apache.org
Subject: RE: karaf boot


This sounds very interesting. Would the Dockers then be deployed similar to 
VertX?

From: Nick Baker [mailto:nba...@pentaho.com]
Sent: Wednesday, January 11, 2017 11:31 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

Some background on what we've been playing with may be of use.

We've worked on a Kubernetes/OpenShift deployment of micro-service Karaf 
instances (pods). Each pod simply runs a plain Karaf preconfigured with Remote 
Service support (ECF) and select features of our own design.

This implementation leverages the OpenShift Source-to-image feature which 
transforms a simple Karaf assembly template checked into a Git Repository into 
a Maven Karaf assembly, which is then run to produce a Docker Image containing 
the Karaf assembly. The Fabric8 team has done great work here and we used their 
S2I image as inspiration for our own.


Templated Assembly
I really like this templated assembly approach. We have a single configuration 
file specifying which features are to be installed, optionally supplying new 
Feature repository URLs, and environment variables. You can also supply extra 
CFG files and even artifacts to be placed in the /deploy directory.

One aspect about containerized deployments and microservice practices to 
consider is how they treat applications as static immutable images. You don't 
modify the capabilities or even configuration of running instances. Indeed 
instances themselves are not to be manipulated directly as the container 
environment will start/stop and scale out the base image as needed. Rather if 
you want to extend the capabilities or change configuration, you would create a 
new image or new version of an existing one and propagate that out to the 
cluster.

That said one of the goals for our application is the ability to deploy a small 
footprint instance and have it dynamically provision capabilities (features) as 
needed by the incoming workload. These would seem to run counter to the trend 
of static instances, but I disagree as the scope of what can be dynamically 
provisioned is controlled. Each of these runtime features contributes to an 
existing one - plugins to an existing capability.

TLDR: Support easy assemblies from a very simplified configuration. I'd 
probably introduce a command-line program to invoke the build and a maven 
plugin.


Run my template
Building off templated assemblies would be simple "run" support from the same 
configuration. Another command for the command-line program, maven plugin. Put 
everything in java.io.tmpdir, who cares.


Run Programmatically
Another item I've wanted is a better Karaf Main class. Really, I would just 
like to use PAX-Exam as a Runner. I know... it originated from pax-runner. 
Something simple. Specify Karaf version, features, config, setup System Bundle 
packages, run. I guess if this was done it could be used in concert with the 
build template to support the run-from-template above.


Health Checks
We had to develop some custom health check code to ensure that all features and 
blueprint containers successfully start. Legacy portions of our application 
need to wait for Karaf to be fully realized before continuing execution. This 
was pretty important to our embedded Karaf usage, but that's certainly rare. 
Regardless, Health Checks are vital to microservice / cloud deployments. I 
recently found that the Fabric8 team pretty much already has this, and it's 
just about exactly what we developed [☹]  This needs to be documented for 
others to find.


Boot Features
Boot Feature support in the assembly plugin is a Huge benefit for fast 
lightweight Karaf instances. This would clearly be the preferred configuration 
for a Nano-like distribution (shout-out to our Virgo brothers). Unfortunately, 
I've had varying success moving our assemblies from startupFeatures to 
bootFeatures. It may have to do with our custom deployers. Honestly I haven't 
looked into it too deeply.


Easy Web Interface
Hawtio is nice, but can be a bit overwhelming. An easy interface, especially 
for those new to OSGI/Karaf would go a long way.


I've reached-out to our OSGI guys here for their thoughts and will post them 
here as they come in.

-Nick Baker

From: Christian Schneider 
<cschneider...@gmail.com<mailto:cschneider...@gmail.com>> on behalf of 
Christian Schneider <ch...@die-schneider.net<mailto:ch...@die-schneider.net>>
Sent: Wednesday, January 11, 2017 9:51:56 AM
To: user@karaf.apache.org<mailto:user@karaf.apache.org>
Subject: Re: karaf boot

Sounds like you have a good case to validate karaf boot on.

Can you explain how you create your deployments now and what you are missing 
in current karaf?

Re: karaf boot

2017-01-11 Thread Nick Baker
Some background on what we've been playing with may be of use.

We've worked on a Kubernetes/OpenShift deployment of micro-service Karaf 
instances (pods). Each pod simply runs a plain Karaf preconfigured with Remote 
Service support (ECF) and select features of our own design.

This implementation leverages the OpenShift Source-to-image feature which 
transforms a simple Karaf assembly template checked into a Git Repository into 
a Maven Karaf assembly, which is then run to produce a Docker Image containing 
the Karaf assembly. The Fabric8 team has done great work here and we used their 
S2I image as inspiration for our own.


Templated Assembly
I really like this templated assembly approach. We have a single configuration 
file specifying which features are to be installed, optionally supplying new 
Feature repository URLs, and environment variables. You can also supply extra 
CFG files and even artifacts to be placed in the /deploy directory.

One aspect about containerized deployments and microservice practices to 
consider is how they treat applications as static immutable images. You don't 
modify the capabilities or even configuration of running instances. Indeed 
instances themselves are not to be manipulated directly as the container 
environment will start/stop and scale out the base image as needed. Rather if 
you want to extend the capabilities or change configuration, you would create a 
new image or new version of an existing one and propagate that out to the 
cluster.

That said one of the goals for our application is the ability to deploy a small 
footprint instance and have it dynamically provision capabilities (features) as 
needed by the incoming workload. These would seem to run counter to the trend 
of static instances, but I disagree as the scope of what can be dynamically 
provisioned is controlled. Each of these runtime features contributes to an 
existing one - plugins to an existing capability.

TLDR: Support easy assemblies from a very simplified configuration. I'd 
probably introduce a command-line program to invoke the build and a maven 
plugin.


Run my template
Building off templated assemblies would be simple "run" support from the same 
configuration. Another command for the command-line program, maven plugin. Put 
everything in java.io.tmpdir, who cares.


Run Programmatically
Another item I've wanted is a better Karaf Main class. Really, I would just 
like to use PAX-Exam as a Runner. I know... it originated from pax-runner. 
Something simple. Specify Karaf version, features, config, setup System Bundle 
packages, run. I guess if this was done it could be used in concert with the 
build template to support the run-from-template above.


Health Checks
We had to develop some custom health check code to ensure that all features and 
blueprint containers successfully start. Legacy portions of our application 
need to wait for Karaf to be fully realized before continuing execution. This 
was pretty important to our embedded Karaf usage, but that's certainly rare. 
Regardless, Health Checks are vital to microservice / cloud deployments. I 
recently found that the Fabric8 team pretty much already has this, and it's 
just about exactly what we developed [☹]  This needs to be documented for 
others to find.


Boot Features
Boot Feature support in the assembly plugin is a Huge benefit for fast 
lightweight Karaf instances. This would clearly be the preferred configuration 
for a Nano-like distribution (shout-out to our Virgo brothers). Unfortunately, 
I've had varying success moving our assemblies from startupFeatures to 
bootFeatures. It may have to do with our custom deployers. Honestly I haven't 
looked into it too deeply.


Easy Web Interface
Hawtio is nice, but can be a bit overwhelming. An easy interface, especially 
for those new to OSGI/Karaf would go a long way.


I've reached-out to our OSGI guys here for their thoughts and will post them 
here as they come in.

-Nick Baker

From: Christian Schneider <cschneider...@gmail.com> on behalf of Christian 
Schneider <ch...@die-schneider.net>
Sent: Wednesday, January 11, 2017 9:51:56 AM
To: user@karaf.apache.org
Subject: Re: karaf boot

Sounds like you have a good case to validate karaf boot on.

Can you explain how you create your deployments now and what you are missing in 
current karaf? Until now we only discussed internally about the scope and 
requirements of karaf boot. It would be very valuable to get some input from a 
real world case.

Christian

On 11.01.2017 13:41, Nick Baker wrote:
We'd be interested in this as well. Beginning to move toward Microservices 
deployments + Remote Services for interop. I'll have a look at your branch JB!

We've added support in our Karaf main for multiple instances from the same 
install on disk. Cache directories segmented, port conflicts handled. This of 
course isn't an issue in container-based cloud deployments (Docker). Still, may 
be of use.

Re: karaf boot

2017-01-11 Thread Nick Baker
We'd be interested in this as well. Beginning to move toward Microservices 
deployments + Remote Services for interop. I'll have a look at your branch JB!

We've added support in our Karaf main for multiple instances from the same 
install on disk. Cache directories segmented, port conflicts handled. This of 
course isn't an issue in container-based cloud deployments (Docker). Still, may 
be of use.
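For anyone wanting to do something similar by hand, the segmentation amounts to 
overriding a few well-known Karaf properties per instance. A sketch with 
hypothetical values; each property lives in the etc file named in the comment:

```properties
# etc/system.properties (instance B) - separate data / bundle-cache directory
karaf.data = ${karaf.home}/data-b

# etc/org.ops4j.pax.web.cfg - separate HTTP port
org.osgi.service.http.port = 8182

# etc/org.apache.karaf.shell.cfg - separate SSH port
sshPort = 8102

# etc/org.apache.karaf.management.cfg - separate JMX/RMI ports
rmiRegistryPort = 1100
rmiServerPort = 44445
```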

-Nick Baker

Sent via the BlackBerry Hub for 
Android<http://play.google.com/store/apps/details?id=com.blackberry.hub>
From: bradj...@redhat.com
Sent: January 11, 2017 12:54 AM
To: user@karaf.apache.org
Reply-to: user@karaf.apache.org
Subject: RE: karaf boot


I'd be very interested in this project and will definitely give it a look.  
I've been using the Karaf 4 static profiles to create compact microservices 
containers and it works well.  I'm not sure if that's what the Karaf Boot 
project is aiming at since I haven't had a chance to look at it yet.  But I'll 
definitely give it a look tomorrow.

Brad

From: Jean-Baptiste Onofré [mailto:j...@nanthrax.net]
Sent: Tuesday, January 10, 2017 11:30 PM
To: user@karaf.apache.org
Subject: Re: karaf boot

Hi Scott
There was a discussion in progress on the mailing list about Karaf Boot.
A PoC branch is available on my GitHub at an early stage.
I would like to restart the discussion based on this branch.
Regards
JB
On Jan 11, 2017, at 02:25, Scott Lewis 
<sle...@composent.com<mailto:sle...@composent.com>> wrote:

The page about Karaf boot that I've found:
http://karaf.apache.org/projects.html#boot says 'not yet available'.  Is
there an expected timeline for Karaf Boot?  Also, is there a branch upon
which the Karaf boot work is being done?

Thanks in advance,

Scott



Re: Behavior of feature dependencies in 3.0.x

2016-10-03 Thread Nick Baker
Thanks JB,


Unfortunately, I cannot share the XML. The names definitely match. I'll try to 
trawl through the debug output. This is a large feature with 8 dependent 
features and 20 bundles. No other features should be declaring this bundle, but 
I'll make sure that's the case. That's about the only scenario I can think of.
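For later readers: the 4.x prerequisite flag raised at the end of the quoted 
message below is an attribute on the dependency feature element, which forces 
that feature to be fully installed before the depending feature's own bundles 
are processed. A sketch using the names from the quoted example (the bundle 
coordinates are hypothetical):

```xml
<feature name="b" version="1.0.0">
  <!-- "a" is installed and started before b's own bundles resolve -->
  <feature prerequisite="true">a</feature>
  <bundle>mvn:example/y/1.0.0</bundle> <!-- hypothetical coordinates -->
</feature>
```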


-Nick


From: Jean-Baptiste Onofré <j...@nanthrax.net>
Sent: Monday, October 3, 2016 1:35:53 PM
To: user@karaf.apache.org
Subject: Re: Behavior of feature dependencies in 3.0.x

Hi Nick,

I confirm that feature:install b should install a (by transitivity).

Can you share your actual features XML? Are you sure the feature
names match?

Regards
JB

On 10/03/2016 06:49 PM, Nick Baker wrote:
> Hey All,
>
>
> We're seeing something strange where a feature has a dependency on
> another and has a bundle which counts on that other feature being
> available. However, we're seeing that bundle fail on package imports
> from this other feature's bundles. It almost seems like this dependent
> feature isn't being installed fully before this bundle starts.
>
>
> <feature name="a">
>   <bundle>x</bundle>
> </feature>
>
> <feature name="b">
>   <feature>a</feature>
>   <bundle>y</bundle>  <- depends-on 'x'
> </feature>
>
>
> If I manually feature:install "a" before "b" the "y" bundle is fine.
> However, if feature "a" isn't installed before-hand it fails.
>
>
> I know that the 4.x codeline has a prerequisite=true flag. What's the
> significance of this? Would it help in this scenario?
>
>
> Thanks,
>
> Nick
>

--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: Regarding etc directory

2016-08-29 Thread Nick Baker
Nice!
It would be good to update the felix.apache.org site with this.

-Nick

From: Guillaume Nodet <gno...@apache.org>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Monday, August 29, 2016 at 2:35 PM
To: user <user@karaf.apache.org>
Subject: Re: Regarding etc directory

Yes, that's doable with file install 3.5.2.
You can use the flag
  felix.fileinstall.subdir.mode = jar | skip | recurse
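Concretely, that flag goes into the FileInstall configuration for the watched 
directory. A sketch with a hypothetical PID suffix and path:

```properties
# etc/org.apache.felix.fileinstall-apps.cfg (hypothetical PID suffix)
felix.fileinstall.dir         = ${karaf.base}/etc/apps
felix.fileinstall.filter      = .*\\.cfg
# jar     = treat subdirectories as exploded bundles
# skip    = ignore subdirectories
# recurse = descend into subdirectories looking for artifacts
felix.fileinstall.subdir.mode = recurse
```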


2016-08-29 19:34 GMT+02:00 Achim Nierbeck <bcanh...@googlemail.com>:
Afaik the version used by 4.0.x is already capable of handling subdirectories.
I think there was a bug about that and it is fixed in that version.
Just give it a try. Maybe you still need to add them to the cfg, but I know 
it's possible.

Regards, Achim

2016-08-29 18:03 GMT+02:00 Leschke, Scott <slesc...@medline.com>:
Thank you.  I was aware of this but I guess I really didn’t think it all the 
way through. Simply creating a FileInstall config should meet my needs just 
fine, although I do agree that a ‘recursive’ option might be a nice addition.

From: Nick Baker [mailto:nba...@pentaho.com]
Sent: Monday, August 29, 2016 10:01 AM
To: user@karaf.apache.org
Subject: Re: Regarding etc directory


/etc is just an Apache Felix FileInstall configuration [1]. You can create as 
many as you'd like or even do so programmatically. We've gone the programmatic 
route in the past, before Karaf, to group bundles together in a hierarchical 
structure. The same can be done for configurations.



Unfortunately there's no recursive capability to FileInstall. That would make a 
neat enhancement.



[1] 
http://felix.apache.org/documentation/subprojects/apache-felix-file-install.html



From: Leschke, Scott <slesc...@medline.com>
Sent: Monday, August 29, 2016 10:47:52 AM
To: user@karaf.apache.org
Subject: Regarding etc directory

I’m curious if it’s possible to configure Karaf/fileinstall to look in 
subdirectories within the KARAF_BASE/etc directory so that all .cfg files for 
an app can be grouped?

Scott



--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & 
Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master




--

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/



Re: Regarding etc directory

2016-08-29 Thread Nick Baker
/etc is just an Apache Felix FileInstall configuration [1]. You can create as 
many as you'd like or even do so programmatically. We've gone the programmatic 
route in the past, before Karaf, to group bundles together in a hierarchical 
structure. The same can be done for configurations.


Unfortunately there's no recursive capability to FileInstall. That would make a 
neat enhancement.


[1] 
http://felix.apache.org/documentation/subprojects/apache-felix-file-install.html





From: Leschke, Scott 
Sent: Monday, August 29, 2016 10:47:52 AM
To: user@karaf.apache.org
Subject: Regarding etc directory

I'm curious if it's possible to configure Karaf/fileinstall to look in 
subdirectories within the KARAF_BASE/etc directory so that all .cfg files for 
an app can be grouped?

Scott


Re: Detecting persistent-id sharing...

2016-08-23 Thread Nick Baker
We actually just had an occurrence of the “spinning” blueprint due to a shared 
<cm:property-placeholder> persistent-id, but it wasn’t two bundles. It was 
one bundle referenced in two features! Removing the reference from one and 
adding a feature dependency instead “fixed” it for us.
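For context, the element in question as it appears in an Aries Blueprint 
descriptor. A sketch: the persistent-id below is hypothetical, and only one 
blueprint container should own a given persistent-id:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">

  <!-- Binds this container to the etc/com.example.app.cfg configuration;
       two containers sharing this persistent-id can trigger the "spinning" -->
  <cm:property-placeholder persistent-id="com.example.app"
                           update-strategy="reload"/>

</blueprint>
```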

-Nick

From: James Carman 
Reply-To: "user@karaf.apache.org" 
Date: Tuesday, August 23, 2016 at 9:14 PM
To: "user@karaf.apache.org" 
Subject: Re: Detecting persistent-id sharing...

No ideas?
On Sun, Aug 21, 2016 at 9:44 AM James Carman wrote:
And, yes, that is exactly what I am talking about. A property placeholder is 
defined in two different blueprint containers in two different bundles. They 
both try to use the same configuration and it starts this "spinning" process

On Sun, Aug 21, 2016 at 1:55 AM Jean-Baptiste Onofré wrote:
Hi James,

you mean that you use <cm:property-placeholder> in two blueprint
descriptors using the same persistent-id?

Are the blueprint descriptors in two containers (two bundles) or the
same one ?

Regards
JB

On 08/20/2016 09:43 PM, James Carman wrote:
> We have a situation where we don't really know where we have the issue
> of two Blueprint bundles attempting to share the same persistent-id.  We
> just see the "spinning" symptom, so we assume that it's there.  Is there
> any command that can help me narrow it down?
>

--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: Using Aries Blueprint Spring Extender instead of SpringDM

2016-08-23 Thread Nick Baker
I found the “aries-blueprint-spring” feature in Karaf’s Spring feature file:
https://github.com/apache/karaf/blob/master/assemblies/features/spring/src/main/feature/feature.xml#L452

Seems like I’ve got the same setup.

From: Nicholas Baker 
Reply-To: "user@karaf.apache.org" 
Date: Tuesday, August 23, 2016 at 4:26 PM
To: "user@karaf.apache.org" 
Subject: Using Aries Blueprint Spring Extender instead of SpringDM

Hey All,

I’m back to migrating another one of our legacy plugins over to OSGI. It 
creates a customized Spring ApplicationContext as part of its lifecycle and 
uses it directly in code to factory objects. I cannot change this last detail 
this release.

I had immediate success using SpringDM and a fragment bundle to tell DM which 
ApplicationContext class to instantiate. I was then able to reference the 
ApplicationContext in a blueprint running within the same bundle. SpringDM is 
nice enough to publish it to the registry. All well and good. Except now I’m 
having issues with Java 8 bytecode coming into the project. SpringDM is using 
the old Spring 3.x codeline and apparently has a copy of some old ASM classes 
not compatible with Java 8.

I’m trying to update to Spring 4 and use the Aries Spring Extender instead of 
SpringDM since I’ve heard such good things. I’ve deployed the following bundles 
as well as Snapshots of the latest of all Aries blueprint bundles:

org.apache.aries.blueprint.spring
org.apache.aries.blueprint.spring.extender

Nothing’s happening though. The bundle times out trying to find the <reference> 
to the ApplicationContext. Has anyone had success replacing SpringDM? If so, can 
you detail what you had to do?

Thanks,
Nick




Using Aries Blueprint Spring Extender instead of SpringDM

2016-08-23 Thread Nick Baker
Hey All,

I’m back to migrating another one of our legacy plugins over to OSGI. It 
creates a customized Spring ApplicationContext as part of its lifecycle and 
uses it directly in code to factory objects. I cannot change this last detail 
this release.

I had immediate success using SpringDM and a fragment bundle to tell DM which 
ApplicationContext class to instantiate. I was then able to reference the 
ApplicationContext in a blueprint running within the same bundle. SpringDM is 
nice enough to publish it to the registry. All well and good. Except now I’m 
having issues with Java 8 bytecode coming into the project. SpringDM is using 
the old Spring 3.x codeline and apparently has a copy of some old ASM classes 
not compatible with Java 8.

I’m trying to update to Spring 4 and use the Aries Spring Extender instead of 
SpringDM since I’ve heard such good things. I’ve deployed the following bundles 
as well as Snapshots of the latest of all Aries blueprint bundles:

org.apache.aries.blueprint.spring
org.apache.aries.blueprint.spring.extender

Nothing’s happening though. The bundle times out trying to find the <reference> 
to the ApplicationContext. Has anyone had success replacing SpringDM? If so, can 
you detail what you had to do?

Thanks,
Nick




Re: Access control of OSGi Web app?

2016-08-01 Thread Nick Baker
Is Shiro even active at this point?

We do some of what you’re looking for, but it’s all custom code. We have the 
concept of logical permissions which can be bound to Users and/or Groups. Our 
UI queries for these and uses the information to remove/disable UI elements. As 
was mentioned though, you need to be doing the same checks on the server-side 
or you’re going to get hacked.

-Nick

From: Jason Pratt <jpratt3...@gmail.com>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Monday, August 1, 2016 at 11:05 AM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: Access control of OSGi Web app?

Take a look at Shiro and JWT. You should be able to string something together 
from that.

On Sun, Jul 31, 2016 at 11:08 PM, Sigmund Lee <wua...@gmail.com> wrote:
Hi all,

Thanks for advice and solutions you guys provided.

Seems like they are all proper ways to protect server-side services. But as I 
said, we are a website; what I need is a solution that can integrate frontend & 
backend together and provide page-level access control. Basically two steps 
are involved:

1. An externalized access control system to protect access to exposed 
services (for example, RESTful services, web URLs, etc.).
2. After access is permitted, return the corresponding response page to the 
client (aka browser), where every button or link can be displayed or hidden 
based on the permissions of the current user.

Basically, what I need is a solution that not only frees backend engineers from 
hard-coded authz code, but also frees frontend engineers from hard-coding.

Thanks again!

Bests.
--
Sig



On Fri, Jul 29, 2016 at 10:02 PM, Achim Nierbeck <bcanh...@googlemail.com> wrote:
yes, as filters without servlets can't be served. They don't have a URI binding.

regards, Achim

2016-07-29 15:33 GMT+02:00 Nick Baker <nba...@pentaho.com>:
Hey Achim,

Thanks for this example. We’re looking at URL security as part of our ongoing 
OSGi migration. We’re using Spring Security in the legacy non-OSGi space, so 
this is a timely conversation for us ☺

Quick question: are we still working with the limitation that Filters are only 
invoked if a Servlet or Resource would already serve the URL?

-Nick

From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Friday, July 29, 2016 at 8:54 AM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: Access control of OSGi Web app?

Hi Sigmund,

sorry for being late to the party ... if those solutions above don't work for 
you, you still have the possibility to create a customized filter which you can 
re-use with your own applications.
For this you can either go the "classical" way of using web-fragments, or you 
can share the httpContext between your osgi bundles. For this you need to 
declare your httpContext to be sharable and after that you just need to attach 
your filter bundle to that sharable httpContext.

Take a look at the following Sample, or better integration test of Pax Web [1].

regards, Achim

[1] - 
https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-itest/pax-web-itest-container/pax-web-itest-container-jetty/src/test/java/org/ops4j/pax/web/itest/jetty/CrossServiceIntegrationTest.java#L59-L95

2016-07-26 16:05 GMT+02:00 Christian Schneider <ch...@die-schneider.net>:
In Karaf, authentication is based on JAAS. Using login modules you can define 
what source to authenticate against.
The Karaf web console is protected by this by default. It is also possible to 
enable JAAS-based authentication for CXF, e.g. for your REST services.
There is also role-based and group-based authentication out of the box.

There is no attribute based access control available but you can create this 
based on the JAAS authentication.

This code can give you an idea of how to get the subject and the principals 
from JAAS in karaf: 
https://github.com/apache/aries/blob/trunk/blueprint/blueprint-authz/src/main/java/org/apache/aries/blueprint/authorization/impl/AuthorizationInterceptor.java#L69-L81

You could create your own annotations or OSGi service to handle the attribute 
based authorization based on the authentication information.
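As a rough sketch of that idea (names are hypothetical, not Karaf API): once 
JAAS has populated a Subject, authorization reduces to collecting its principal 
names and gating on them, which is essentially what the linked 
AuthorizationInterceptor does.

```java
import java.security.Principal;
import javax.security.auth.Subject;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical helper: derive role names from a JAAS Subject and
// check membership before allowing an operation.
class Roles {
    static Set<String> names(Subject subject) {
        return subject.getPrincipals().stream()
                .map(Principal::getName)
                .collect(Collectors.toSet());
    }

    static boolean hasRole(Subject subject, String role) {
        return names(subject).contains(role);
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        // A Principal is a functional interface, so a lambda works for a demo
        subject.getPrincipals().add((Principal) () -> "admin");
        System.out.println(Roles.hasRole(subject, "admin")); // prints "true"
    }
}
```

In Karaf the Subject typically carries RolePrincipal instances 
(org.apache.karaf.jaas.boot.principal.RolePrincipal) after login, so a real 
check would usually filter on the principal's class as well as its name.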

Christian


On 26.07.2016 08:29, Sigmund Lee wrote:
We are a website, using OSGi as our microservices implementation. Every feature 
of our site is a standalone OSGi-based webapp, split into several OSGi 
bundles (api, impl, webapp, rest, etc).

But there are functions that are coupled with more than one bundle, for example 
Access Control & Authorization. C

Re: Access control of OSGi Web app?

2016-07-29 Thread Nick Baker
Hey Achim,

Thanks for this example. We’re looking at URL security as part of our ongoing 
OSGi migration. We’re using Spring Security in the legacy non-OSGi space, so 
this is a timely conversation for us ☺

Quick question: are we still working with the limitation that Filters are only 
invoked if a Servlet or Resource would already serve the URL?

-Nick

From: Achim Nierbeck 
Reply-To: "user@karaf.apache.org" 
Date: Friday, July 29, 2016 at 8:54 AM
To: "user@karaf.apache.org" 
Subject: Re: Access control of OSGi Web app?

Hi Sigmund,

sorry for being late to the party ... if those solutions above don't work for 
you, you still have the possibility to create a customized filter which you can 
re-use with your own applications.
For this you can either go the "classical" way of using web-fragments, or you 
can share the httpContext between your osgi bundles. For this you need to 
declare your httpContext to be sharable and after that you just need to attach 
your filter bundle to that sharable httpContext.

Take a look at the following Sample, or better integration test of Pax Web [1].

regards, Achim

[1] - 
https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-itest/pax-web-itest-container/pax-web-itest-container-jetty/src/test/java/org/ops4j/pax/web/itest/jetty/CrossServiceIntegrationTest.java#L59-L95

2016-07-26 16:05 GMT+02:00 Christian Schneider <ch...@die-schneider.net>:
In Karaf, authentication is based on JAAS. Using login modules you can define 
what source to authenticate against.
The Karaf web console is protected by this by default. It is also possible to 
enable JAAS-based authentication for CXF, e.g. for your REST services.
There is also role-based and group-based authentication out of the box.

There is no attribute based access control available but you can create this 
based on the JAAS authentication.

This code can give you an idea of how to get the subject and the principals 
from JAAS in karaf: 
https://github.com/apache/aries/blob/trunk/blueprint/blueprint-authz/src/main/java/org/apache/aries/blueprint/authorization/impl/AuthorizationInterceptor.java#L69-L81

You could create your own annotations or OSGi service to handle the attribute 
based authorization based on the authentication information.

Christian


On 26.07.2016 08:29, Sigmund Lee wrote:
We are a website, using OSGi as our microservices implementation. Every feature 
of our site is a standalone OSGi-based webapp, split into several OSGi 
bundles (api, impl, webapp, rest, etc).

But there are functions that are coupled with more than one bundle, for example 
Access Control & Authorization. Currently our authorization code is hard-coded 
everywhere and is very hard to maintain.

My question is, what's the proper way to handle access control when using 
OSGi? Is there any OSGi-compatible ABAC (attribute-based access control — our 
authorization decisions need to be calculated from attributes of the resource 
and context/environment) framework?


Thanks.

--
Sig




--

Christian Schneider

http://www.liquid-reality.de



Open Source Architect

http://www.talend.com



--

Apache Member
Apache Karaf  Committer & PMC
OPS4J Pax Web  Committer & 
Project Lead
blog 
Co-Author of Apache Karaf Cookbook 

Software Architect / Project Manager / Scrum Master



Re: More entropy on bundle startup.

2016-07-14 Thread Nick Baker
Thanks Guillaume! This will definitely help us as we find those holding onto 
stale service references.

-Nick

From: Guillaume Nodet 
Reply-To: "user@karaf.apache.org" 
Date: Thursday, July 14, 2016 at 3:11 PM
To: user 
Subject: Re: More entropy on bundle startup.

Try the load-test command inside the Karaf console.  It randomly starts / stops 
/ refreshes bundles with multiple threads in a loop, so such things usually 
show up quickly.

2016-07-13 22:56 GMT+02:00 Benson Margulies:
Folks,

We've had a couple of incidents of latent problems stemming from
invalid assumptions on bundle start order. Everything seems to be
fine, then some trivial change reveals that we've failed to ensure
that service 'a' is available before component 'b' needs it. by and
large, we use DS to get this right, but there are a few cases where it
does not serve.

I am wondering: is there some way to get _more_ randomness out of the
startup process, to shake out mistakes like this?

thanks,
benson



--

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/



Re: More entropy on bundle startup.

2016-07-13 Thread Nick Baker
I love this idea! We’ve been fighting the same issues as well.

-Nick

On 7/13/16, 4:56 PM, "Benson Margulies"  wrote:

>Folks,
>
>We've had a couple of incidents of latent problems stemming from
>invalid assumptions on bundle start order. Everything seems to be
>fine, then some trivial change reveals that we've failed to ensure
>that service 'a' is available before component 'b' needs it. by and
>large, we use DS to get this right, but there are a few cases where it
>does not serve.
>
>I am wondering: is there some way to get _more_ randomness out of the
>startup process, to shake out mistakes like this?
>
>thanks,
>benson



Re: Reasons that triggers IllegalStateException: Invalid BundleContext

2016-06-30 Thread Nick Baker
A more complete stack trace would help us point you to the offending code. I 
can say that I usually see this when a Service is still held by someone when it 
should have been removed from "play" by a ServiceTracker or other similar 
mechanism.

-Nick

From: Cristiano Costantini
Date: Thu Jun 30 2016 14:57:18 GMT-0400 (EDT)
To: user@karaf.apache.org
Subject: Reasons that triggers IllegalStateException: Invalid BundleContext

Hello All,

In our application we sometimes find ourselves in situations where we get the 
"Invalid BundleContext" exception:

java.lang.IllegalStateException: Invalid BundleContext.
at 
org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:453)

What are the potential reasons such an exception may be thrown?
I'm trying to understand so I can hunt for a potential design issue in some 
of our bundles... I've searched the web but found no hints.

Thank you!
Cristiano



Re: Conditional Feature Implemented?

2016-06-24 Thread Nick Baker
Aren’t those examples simply installing bundles conditionally? He was trying to 
conditionally add a dependent feature.

Thanks,
Nick

On 6/24/16, 12:03 PM, "Jean-Baptiste Onofré" <j...@nanthrax.net> wrote:

>By the way, to use conditional, you have to use at least Karaf 3.x.
>
>Regards
>JB
>
>On 06/24/2016 06:02 PM, Jean-Baptiste Onofré wrote:
>> Hi Nick,
>>
>> it's implemented and used Karaf internally.
>>
>> For instance:
>>
>>  <feature name="webconsole" version="${project.version}">
>>    <config name="org.apache.karaf.webconsole">
>>      realm=karaf
>>    </config>
>>    <feature>http</feature>
>>    <bundle start-level="30">mvn:org.apache.felix/org.apache.felix.metatype/${felix.metatype.version}</bundle>
>>    <bundle start-level="30">mvn:org.apache.karaf.webconsole/org.apache.karaf.webconsole.branding/${project.version}</bundle>
>>    <bundle start-level="30">mvn:org.apache.karaf.webconsole/org.apache.karaf.webconsole.console/${project.version}</bundle>
>>    <conditional>
>>      <condition>eventadmin</condition>
>>      <bundle start-level="30">mvn:org.apache.felix/org.apache.felix.webconsole.plugins.event/${felix.eventadmin.webconsole.plugin.version}</bundle>
>>    </conditional>
>>    <conditional>
>>      <condition>scr</condition>
>>      <bundle start-level="30">mvn:org.apache.felix/org.apache.felix.webconsole.plugins.ds/${felix.scr.webconsole.plugin.version}</bundle>
>>    </conditional>
>>  </feature>
>>
>> you can see that the webconsole ds and eventadmin plugins will be
>> installed only if/when the eventadmin or scr features are installed.
>>
>> Regards
>> JB
>>
>> On 06/24/2016 05:57 PM, Nick Baker wrote:
>>> Hey All, quick question. One of my developers is trying to use a Feature
>>> conditional to optionally depend on another feature.
>>>
>>> <feature name="a">
>>>   <conditional>
>>>     <condition>foo</condition>
>>>     <feature>b</feature>
>>>   </conditional>
>>> </feature>
>>>
>>> I’ve told him I don’t think it was actually implemented even though it
>>> was mentioned in the original case:
>>> https://issues.apache.org/jira/browse/KARAF-1718 Instead, I’ve
>>> instructed him to move the conditional down a level into “b”.
>>>
>>> Anyone know off the top of their heads if I’m right?
>>>
>>> Thanks,
>>>
>>> Nick
>>>
>>
>
>-- 
>Jean-Baptiste Onofré
>jbono...@apache.org
>http://blog.nanthrax.net
>Talend - http://www.talend.com



Conditional Feature Implemented?

2016-06-24 Thread Nick Baker
Hey All, quick question. One of my developers is trying to use a Feature 
conditional to optionally depend on another feature.

<feature name="a">
  <conditional>
    <condition>foo</condition>
    <feature>b</feature>
  </conditional>
</feature>

I’ve told him I don’t think it was actually implemented even though it was 
mentioned in the original case: 
https://issues.apache.org/jira/browse/KARAF-1718 Instead, I’ve instructed him 
to move the conditional down a level into “b”.
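A sketch of that suggestion: feature "b" carries the conditional itself, 
guarding only the parts that need "foo" (bundle coordinates hypothetical):

```xml
<feature name="b" version="1.0.0">
  <bundle>mvn:example/b-core/1.0.0</bundle>
  <conditional>
    <condition>foo</condition>
    <!-- only installed when feature "foo" is (or becomes) installed -->
    <bundle>mvn:example/b-foo-support/1.0.0</bundle>
  </conditional>
</feature>
```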

Anyone know off the top of their heads if I’m right?

Thanks,
Nick


Re: Working WebSocket example?

2016-04-04 Thread Nick Baker
Thanks Achim,

This is interesting. I should have mentioned that we're using the CXF features 
from Karaf 4.0.3 and Blueprint XML. We're unfortunately not able to leverage 
WAB capabilities.

-Nick

From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Monday, April 4, 2016 at 3:47 PM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: Working WebSocket example?

Hi,

might want to take a look at this one [1].
It should work right away with K4 and Pax-Web 4.2.x

regards, Achim

[1] - 
https://github.com/ops4j/org.ops4j.pax.web/tree/pax-web-4.2.x/samples/websocket-jsr356


2016-04-04 21:40 GMT+02:00 Nick Baker <nba...@pentaho.com>:
Hey All,

Does anyone have a working project using WebSockets? I was trying a quick 
prototype repurposing the CXF example but haven't had any luck:

https://github.com/apache/cxf/blob/master/distribution/src/main/release/samples/jax_rs/websocket/src/main/java/demo/jaxrs/server/CustomerService.java


I keep receiving a 200 from the server, whereas 101 should be returned:

[Error] WebSocket connection to 
'ws://localhost:8181/cxf/sockets/infinitymachine/clock' failed: Unexpected 
response code: 200

Thanks,
Nick



--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & 
Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master



Working WebSocket example?

2016-04-04 Thread Nick Baker
Hey All,

Does anyone have a working project using WebSockets? I was trying a quick 
prototype repurposing the CXF example but haven't had any luck:

https://github.com/apache/cxf/blob/master/distribution/src/main/release/samples/jax_rs/websocket/src/main/java/demo/jaxrs/server/CustomerService.java


I keep receiving a 200 from the server, whereas 101 should be returned:

[Error] WebSocket connection to 
'ws://localhost:8181/cxf/sockets/infinitymachine/clock' failed: Unexpected 
response code: 200

Thanks,
Nick


Re: Embedding Karaf in a WAR (Tomcat)

2016-03-30 Thread Nick Baker
Our current approach is also a "migration" strategy. This is an unfortunate 
reality for those with large existing code-bases.

We anticipate "flipping the architecture" soon which will put Everything in 
Karaf. The non-OSGI libraries will be collected together as a "legacy" bundle. 
At that point we'll be leveraging PAX-Web instead of deploying inside an 
existing servlet container.

-Nick

From: Serge Huber
Reply-To: "user@karaf.apache.org"
Date: Wednesday, March 30, 2016 at 3:21 AM
To: "user@karaf.apache.org"
Subject: Re: Embedding Karaf in a WAR (Tomcat)

Hi Achim and Martin,

Actually my own company is going through this transition, but for the moment 
our main application (CMS) still has to be deployed within web containers such 
as Tomcat or WebSphere. So using an Http Bridge is a hard requirement for the 
moment.

As soon as we can drop this requirement we will but in the meantime we want a 
bridge that can be as feature-ful as possible. Currently we use the Felix Http 
Bridge in production but I’m really hoping we can switch to the new Pax Web 
Bridge soon, as it is a *lot* better.

Yes it is indeed strange to have this bridge because you can potentially have a 
web application within a web application (but at the same time this is kinda 
cool, ok I’m a nerd :)). Anyway, I believe it can be an interesting migration 
path for a lot of existing web applications out there.

Let’s get everyone on Karaf & Pax Web :) Who needs Node.js ? :)

cheers,
  Serge…


On 29 March 2016, at 17:47, Achim Nierbeck wrote:

Hi Martin,

I'm still in favor of using plain Karaf with Web-Container instead of the 
opposite, but I can see the benefits for easier transition of doing the bridge 
thing. Thanks to the Help of Serge, we now have a special branch [1], cause the 
bridge is still work-in-progress [2].

regards, Achim

[1] - https://github.com/ops4j/org.ops4j.pax.web/tree/PAXWEB-606-Servlet-Bridge
[2] - https://ops4j1.jira.com/browse/PAXWEB-606

2016-03-29 16:45 GMT+02:00 mjelen:
Hi Serge,

thank you for your reply! I'll look into the PAX Web Bridge, I guess the
right place to talk about it is the OPS4J Google Group.

I admit I haven't thought of looking at PAX Web for my purpose because I'd
seen it as having the opposite purpose of what I'm looking for - that is,
starting an embedded Tomcat (or Jetty or Undertow) rather than running
inside a Tomcat instance. Especially since I've read several comments from
Achim dissuading people from embedding Karaf in Tomcat (which I can
understand, I'm not happy about it myself and I'll keep trying to get rid of
this requirement).

Regards
Martin



--
View this message in context: 
http://karaf.922171.n3.nabble.com/Embedding-Karaf-in-a-WAR-Tomcat-tp4045931p4046031.html
Sent from the Karaf - User mailing list archive at 
Nabble.com.



--

Apache Member
Apache Karaf  Committer & PMC
OPS4J Pax Web  Committer & 
Project Lead
blog 
Co-Author of Apache Karaf Cookbook 

Software Architect / Project Manager / Scrum Master




Re: ECF 3.13 released

2016-03-23 Thread Nick Baker
Thanks a lot Scott,

We'll definitely keep you guys in mind as we get to our distributed feature 
later this year.

I've been particularly curious as to whether anyone has tried integrating Java 
Chronicle (https://github.com/OpenHFT/Chronicle-Queue) as the transport layer. 
Obviously not a solution for machine clusters, but it could make a 
microservices architecture more appealing in HPC applications.

Sent from my BlackBerry 10 smartphone.
From: Scott Lewis
Sent: Wednesday, March 23, 2016 3:55 PM
To: user@karaf.apache.org
Reply To: user@karaf.apache.org
Subject: Re: ECF 3.13 released


On 3/23/2016 12:13 PM, Nick Baker wrote:
Hey Scott,

Thanks for the update. We're actually looking to deploy Remote Services in our 
next release. Can you speak to the relative merits of ECF vs the Apache CXF 
Distributed OSGi subproject?

I don't want to explicitly or implicitly criticize CXF, so I'll just list some 
of the things that I think are advantages of ECF. Some of these may be shared 
with CXF; I don't know enough about CXF to say which. I should also say that 
I've been working with Christian and others to create a general distribution 
provider API for Aries that will work with OSGi RSA.

1) We implement the latest OSGi R6 specs...both Remote Service (chap 100 in 
enterprise spec) and Remote Service Admin (chap 122 in enterprise spec)
2) We test our implementation against the OSGi compatibility test suite as part 
of our continuous integration
3) ECF has an open modular architecture, which allows new distribution (and 
discovery) providers to be easily used (see [2] and [3] below)
4) ECF's RS/RSA implementation is small/lightweight
5) We've recently created and documented a number of new distribution 
providers...see [3]
6) One of these providers uses CXF (Jax-RS), allowing backward compatibility 
with CXF-based remote services
7) We are also gradually introducing tooling for developing, debugging, 
deploying RS/RSA apps [2].  This will continue.
8) We have few dependencies (4), and so we run on any OSGi R5+ compatible 
framework
9) We have started building remote services for monitoring/management of remote 
(and/or local) frameworks [5]

Scott

[5] https://github.com/ECF/OSGIRemoteManagement


Thanks,
Nick Baker

From: Scott Lewis <sle...@composent.com>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Wednesday, March 23, 2016 at 12:39 PM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: ECF 3.13 released

iv)  ECF 3.13 also supports using maven to install Karaf features [4].

On 3/17/2016 9:37 AM, Scott Lewis wrote:
ECF 3.13 has just been released [1].

ECF provides a modular and CT-tested implementation of OSGi R6 Remote Services 
and Remote Service Admin (1.1) specifications.

The important additions in 3.13 [2]

i) New API (and tutorial) to simplify the creation of custom remote services 
distribution providers.   The distribution provider API makes it easy to 
introduce alternative/new/private protocols, serialization formats, or 
communication patterns (e.g. client/server or pub-sub groups) *without* 
modifying the service API or implementation
ii) Distribution provider implementations based upon MQTT, CXF, Jersey, 
Hazelcast and associated technical documentation [3]
iii) Eclipse tooling to aid in the development, debugging, testing, and 
deployment of remote services [2]

Scott

[1] https://www.eclipse.org/ecf/downloads.php
[2] https://www.eclipse.org/ecf/NewAndNoteworthy.html
[3] https://wiki.eclipse.org/Distribution_Providers
[4] https://wiki.eclipse.org/EIG:Install_into_Apache_Karaf




Re: ECF 3.13 released

2016-03-23 Thread Nick Baker
Hey Scott,

Thanks for the update. We're actually looking to deploy Remote Services in our 
next release. Can you speak to the relative merits of ECF vs the Apache CXF 
Distributed OSGi subproject?

Thanks,
Nick Baker

From: Scott Lewis <sle...@composent.com>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Wednesday, March 23, 2016 at 12:39 PM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: ECF 3.13 released

iv)  ECF 3.13 also supports using maven to install Karaf features [4].

On 3/17/2016 9:37 AM, Scott Lewis wrote:
ECF 3.13 has just been released [1].

ECF provides a modular and CT-tested implementation of OSGi R6 Remote Services 
and Remote Service Admin (1.1) specifications.

The important additions in 3.13 [2]

i) New API (and tutorial) to simplify the creation of custom remote services 
distribution providers.   The distribution provider API makes it easy to 
introduce alternative/new/private protocols, serialization formats, or 
communication patterns (e.g. client/server or pub-sub groups) *without* 
modifying the service API or implementation
ii) Distribution provider implementations based upon MQTT, CXF, Jersey, 
Hazelcast and associated technical documentation [3]
iii) Eclipse tooling to aid in the development, debugging, testing, and 
deployment of remote services [2]

Scott

[1] https://www.eclipse.org/ecf/downloads.php
[2] https://www.eclipse.org/ecf/NewAndNoteworthy.html
[3] https://wiki.eclipse.org/Distribution_Providers
[4] https://wiki.eclipse.org/EIG:Install_into_Apache_Karaf



Re: Embedding Karaf in a WAR (Tomcat)

2016-03-23 Thread Nick Baker
We boot Karaf on the Servlet Context Listener chain: the regular Karaf Main class 
with a lot of special code to handle write-access issues to the installation 
directory. We also support multiple instances from the same install, so the 
caches are segmented and port conflicts are resolved prior to startup (this is 
being refactored now):
https://github.com/pentaho/pentaho-platform/blob/master/extensions/src/org/pentaho/platform/osgi/KarafBoot.java

Many of the classes are still in the WAR, so we have added support for package 
wildcards in system.packages.extra just to make it maintainable. Those 
wildcards are expanded at boot, resolving all packages available in the 
parent classloader (war/WEB-INF/lib; JBoss module support as well):
https://github.com/pentaho/pentaho-platform/blob/master/extensions/src/org/pentaho/platform/osgi/SystemPackageExtrapolator.java
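The wildcard expansion described above can be sketched roughly like this. This is a simplified illustration of the idea, not Pentaho's actual SystemPackageExtrapolator: the scanning of WEB-INF/lib jars for package directories is elided (the available packages are passed in directly), and all names are illustrative.

```java
import java.util.Set;
import java.util.TreeSet;

/**
 * Sketch of expanding package wildcards (e.g. "org.foo.*") in
 * system.packages.extra against the set of packages actually present
 * on the parent classloader. Jar scanning is elided; the available
 * packages are supplied by the caller.
 */
public class PackageWildcardExpander {

    /** Expand each pattern; "prefix.*" matches any available sub-package. */
    public static Set<String> expand(Set<String> available, Set<String> patterns) {
        Set<String> result = new TreeSet<>();
        for (String pattern : patterns) {
            if (pattern.endsWith(".*")) {
                // keep the trailing dot so "a.b.*" does not match "a.bc"
                String prefix = pattern.substring(0, pattern.length() - 1);
                for (String pkg : available) {
                    if (pkg.startsWith(prefix)) {
                        result.add(pkg);
                    }
                }
            } else if (available.contains(pattern)) {
                result.add(pattern);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Set<String> available = Set.of("org.foo.util", "org.foo.io", "org.bar.core");
        Set<String> patterns = Set.of("org.foo.*", "org.bar.core");
        System.out.println(expand(available, patterns));
    }
}
```

In the real implementation the available set would be built before framework start by scanning the parent classloader's jars, and the expanded list substituted into the framework property.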

We bridge the HTTP Service in OSGi out to the servlet container using the Felix 
HTTP Bridge Proxy Servlet:
http://felix.apache.org/documentation/subprojects/apache-felix-http-service.html
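In web.xml terms, the two pieces described above (a boot listener plus the Felix bridge's proxy servlet) look roughly like the fragment below. The listener class name is purely illustrative; the ProxyServlet class name is taken from the Felix HTTP bridge documentation linked above.

```xml
<!-- Hedged sketch: the listener class is hypothetical; the proxy servlet
     class is from the Felix HTTP bridge linked above. -->
<listener>
  <listener-class>com.example.KarafBootListener</listener-class>
</listener>

<servlet>
  <servlet-name>osgi-proxy</servlet-name>
  <servlet-class>org.apache.felix.http.proxy.ProxyServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>osgi-proxy</servlet-name>
  <url-pattern>/osgi/*</url-pattern>
</servlet-mapping>
```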

I know JBoss and Apache Sling both do something very similar if you're looking 
for an out-of-the-box solution.

-Nick

From: Tom Barber
Reply-To: "user@karaf.apache.org"
Date: Wednesday, March 23, 2016 at 7:58 AM
To: "user@karaf.apache.org"
Subject: Re: Embedding Karaf in a WAR (Tomcat)

Hi Martin

I'm not sure quite how they bootstrap it, but Pentaho BI Server 6.x runs Tomcat 
& Karaf as a single service.

Tom

On Wed, Mar 23, 2016 at 11:53 AM, mjelen wrote:
Dear Karaf developers and fellow users,

due to customer requirements, we have to deliver all our software as web
archives that are deployable on Tomcat. I'm hoping for this requirement to
change in the future and I've been reasoning with our customer for over a
year now about it, but at the moment they won't budge.

We're currently developing a couple of web applications using Karaf and for
production deployment, we have built a reasonably generic Felix WAR that
starts the OSGi Container and includes the Felix File Install to read and
start our application bundles from a custom directory. This approach has
been working within Tomcat for over a year, but it has several drawbacks.
Those that I'm aware of are:
  - The development and production environments are different from each
other, introducing a new source of bugs (I can live with the necessary
difference between embedded Jetty and bridged Tomcat, but I don't want more
than that).
  - In production, we lose a lot of Karaf's functionality (such as the console,
features, wrappers, KARs), have to fall back on the basic File Install
(no start levels etc.), and have to package the applications differently for
development/production.
  - We have to maintain our custom Felix WAR distribution.

To solve these problems, I would ideally have a generic Karaf launcher
packaged as a WAR, with the path to a Karaf home directory as a parameter. That
way I could simply decide whether Karaf gets started by the shell script or
from my web application WAR. However, I can see several hurdles on the way
and would be interested to hear if anyone has successfully done this before.

Things I'm unsure of right now:
  - The default Karaf launcher (.bat/.sh scripts) uses the "endorsed
libraries" mechanism of the JRE to override even classes like
java.lang.Exception. Even if that works with current Tomcat versions (I
haven't tried that yet), it seems fragile for the future in the embedded
scenario and I'm not happy about changing Tomcat's libraries to that extent.
  - Will I have to include any/many libraries in Tomcat's classloader (e.g.
in my WAR's WEB-INF/lib and modified framework.properties)? I already had to
do that for my Felix WAR with Geronimo JTA-spec and it works fine at the
moment, but again it makes me nervous regarding future enhancements.

Any help will be appreciated, whether you have a few pointers for a
solution, some code to share or even a horror story about how it can't be
done :-).

Kind regards

--
Martin Jelen
ISB AG



--
View this message in context: 
http://karaf.922171.n3.nabble.com/Embedding-Karaf-in-a-WAR-Tomcat-tp4045931.html
Sent from the Karaf - User mailing list archive at Nabble.com.



Re: Blueprint or DS or what to use?

2016-03-19 Thread Nick Baker
Can you point me to any documentation on this? I found a thread on the 
aries-dev but that's about it. When we abandoned Gemini we lost some of the 
functionality and would be interested in trying this out.

http://mail-archives.apache.org/mod_mbox/aries-dev/201511.mbox/%3ccaa66tppfyb33d7m234ddeugxa6dj5jv24m09vv7u77dbjlo...@mail.gmail.com%3E

Thanks!

From: Guillaume Nodet
Reply-To: "user@karaf.apache.org"
Date: Friday, March 18, 2016 at 8:40 AM
To: user
Subject: Re: Blueprint or DS or what to use?

Fwiw, Aries Blueprint now has almost full support for Spring namespaces and 
Spring-DM bundles, you simply need to deploy blueprint, spring and the 
blueprint-spring + blueprint-spring-extender bundles.

2016-03-18 11:24 GMT+01:00 akuhtz:
Hello,

I have an application running with Spring DM, and because Spring DM is no
longer developed I'm looking for a replacement.

Now I'm confused because there was a shift in Karaf (core) from Blueprint to
DS, and today I saw a post on the dev list saying that DS does not support
as much as Blueprint does.
As an end user, I would like to know what the proposed way to go is: DS,
Blueprint, both, or something else?



--
View this message in context: 
http://karaf.922171.n3.nabble.com/Blueprint-or-DS-or-what-to-use-tp4045845.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/



Re: Strategies for wrapping dependencies

2016-02-11 Thread Nick Baker
Thanks Achim,

Ideally we would like to continue using the standard Maven GAVs until 
feature.xml creation time. This way we benefit from the dependency management 
of Maven: transitive dependencies, conflict resolution, etc.

The scenario which really troubles me is pointing our projects to a wrapped 
version of a dependency then having the standard non-bundle version come in 
transitively from somewhere else.

I guess I’ll think about it more, but it seems like we might have to build some 
special tooling to help with this. I’ve got the following in mind:

  1.  Bind a custom plugin to the “verify” phase
  2.  Build the feature.xml based on plain Maven dependencies (some bundles, some 
plain jars)
  3.  Analyze the resulting feature to identify “wrap:” bundles
  4.  Run BND on the plain jars and deploy them under a new GAV (org.pentaho.[GROUP])
  5.  Re-write the feature.xml to point to the new wrapped versions
  6.  Deploy the feature.xml

If we do go this route I’ll be sure to let everyone know in case the plugin is 
useful to others.
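Steps 3 and 5 of the plan above could be sketched as a plain string rewrite over the generated feature file. This is only an illustration of the idea, not the actual plugin: the BND step (4) is elided, and the org.pentaho prefix and sample coordinates are assumptions following the naming scheme suggested above.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Sketch of rewriting "wrap:mvn:" bundle URLs in a feature.xml to point
 * at pre-wrapped artifacts re-deployed under a new groupId. The BND
 * wrapping and redeployment themselves are elided.
 */
public class WrapRewriter {

    // wrap:mvn:group/artifact/version (stops at '<', '/' or whitespace)
    private static final Pattern WRAP_URL =
            Pattern.compile("wrap:mvn:([^/<\\s]+)/([^/<\\s]+)/([^/<\\s]+)");

    /** Rewrite wrap:mvn:g/a/v to mvn:org.pentaho.g/a/v. */
    public static String rewrite(String featureXml) {
        Matcher m = WRAP_URL.matcher(featureXml);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String replacement =
                    "mvn:org.pentaho." + m.group(1) + "/" + m.group(2) + "/" + m.group(3);
            m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String bundle = "<bundle>wrap:mvn:com.acme/acme-lib/1.2.3</bundle>";
        System.out.println(rewrite(bundle));
    }
}
```

A real implementation would also have to carry wrap's inline BND instructions (everything after a `$` in the URL) over into the wrapping step rather than just dropping them.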

-Nick


From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Thursday, February 11, 2016 at 3:56 PM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: Strategies for wrapping dependencies

Hi Nick,

afaik there is no "easy" path for handling this.
Strategies are most likely the ones used by either ServiceMix or the Pax Tipi 
project [1], or, if you have "good" connections to the originators of such 
libraries, maybe convince them to provide OSGi-ready bundles.


regards, Achim

[1] - https://github.com/ops4j/org.ops4j.pax.tipi



2016-02-11 18:55 GMT+01:00 Nick Baker <nba...@pentaho.com>:


Hey Everyone,

We’re presently embedding Karaf in other applications which contain a large 
amount of existing functionality comprising some 350 jars. Some of those are 
imported into the OSGi system today with system.packages.extra, but most 
remain unavailable to OSGi.

I’m in the process of investigating an architectural “flip” in which what is 
today outside of OSGi will be moved inside as a series of features. Karaf would 
then be the main application.

So far I’m making good progress. However, the resulting feature definitions 
contain hundreds of wrapped bundle entries, as the libraries aren’t OSGi 
bundles. The overhead of wrapping these on startup of a new instance is 
enormous!

My options as I see them are:

  1.  Create wrapper bundles replacing some of the features, using 
Bundle-ClassPath to embed dependent libraries and atomize functionality. 
The downside is potential duplication of libraries and ClassCastExceptions 
if those leak out of the bundles.
  2.  Run BND on these libraries and check them into our Maven repository under 
a different GAV. This is the route the SpringSource and ServiceMix teams went.

My question to you all is have any of you come up with an automated way of 
handling this? What strategies and advice can you give?

Thanks,
Nick



--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & 
Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master



Strategies for wrapping dependencies

2016-02-11 Thread Nick Baker


Hey Everyone,

We’re presently embedding Karaf in other applications which contain a large 
amount of existing functionality comprising some 350 jars. Some of those are 
imported into the OSGi system today with system.packages.extra, but most 
remain unavailable to OSGi.

I’m in the process of investigating an architectural “flip” in which what is 
today outside of OSGi will be moved inside as a series of features. Karaf would 
then be the main application.

So far I’m making good progress. However, the resulting feature definitions 
contain hundreds of wrapped bundle entries, as the libraries aren’t OSGi 
bundles. The overhead of wrapping these on startup of a new instance is 
enormous!

My options as I see them are:

  1.  Create wrapper bundles replacing some of the features, using 
Bundle-ClassPath to embed dependent libraries and atomize functionality. 
The downside is potential duplication of libraries and ClassCastExceptions 
if those leak out of the bundles.
  2.  Run BND on these libraries and check them into our Maven repository under 
a different GAV. This is the route the SpringSource and ServiceMix teams went.
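For option 2, a minimal bnd instruction file for re-wrapping one plain jar before redeploying it under a new GAV might look like the fragment below. All coordinates and header values are illustrative, not from the original mail.

```properties
# acme-lib.bnd -- illustrative bnd instructions for wrapping a plain jar
# before checking it into the repository under a new GAV
Bundle-SymbolicName: org.pentaho.com.acme.acme-lib
Bundle-Version: 1.2.3
Export-Package: *;version=1.2.3
Import-Package: *;resolution:=optional
```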

My question to you all is have any of you come up with an automated way of 
handling this? What strategies and advice can you give?

Thanks,
Nick


Re: [ANN] New Karaf website online

2016-02-04 Thread Nick Baker
Nice! I’ve got conflicting feelings though. Sorta like seeing an old friend 
retire.

-Nick




On 2/4/16, 10:31 AM, "Jean-Baptiste Onofré"  wrote:

>Hi all,
>
>as you may have seen that the new Karaf website is now online.
>
>Don't hesitate to create Jira (with website component) if you see some 
>broken links and rendering issue.
>
>Thanks !
>Regards
>JB
>-- 
>Jean-Baptiste Onofré
>jbono...@apache.org
>http://blog.nanthrax.net
>Talend - http://www.talend.com


karaf-maven-plugin adding blueprint xml as "wrap:_" not "blueprint:_"

2016-01-19 Thread Nick Baker
Hey Everyone,

Hope you all had a good New Year. I’m finally back in the swing of things. 
Trying to convert some of our manually crafted feature.xml files to the new 
karaf-maven-plugin style with a POM for each feature file. Our build-team is 
really excited about getting some visibility into the karaf feature 
dependencies by using this plugin.

The problem I’m having right now is with bare blueprint XML files. They’re 
being added with “wrap:” instead of “blueprint:”.

Example:

<dependency>
  <groupId>pentaho</groupId>
  <artifactId>pentaho-blueprint-activators</artifactId>
  <version>6.1-SNAPSHOT</version>
  <classifier>kettle-jms</classifier>
  <type>xml</type>
</dependency>


Result:
wrap:mvn:pentaho/pentaho-blueprint-activators/6.1-SNAPSHOT/xml/kettle-jms
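For contrast, the entry the poster was evidently expecting would reference the same mvn: URL with the blueprint: prefix instead:

```xml
<bundle>blueprint:mvn:pentaho/pentaho-blueprint-activators/6.1-SNAPSHOT/xml/kettle-jms</bundle>
```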

I suspect I need to change our artifacts to be classified as “feature”. I can dive 
into the code, but thought one of you might know off the top of your head.

Appreciate the help,
Nick


Re: karaf-maven-plugin adding blueprint xml as "wrap:_" not "blueprint:_"

2016-01-19 Thread Nick Baker
Doesn’t look encouraging guys:

https://github.com/apache/karaf/blob/master/tooling/karaf-maven-plugin/src/main/java/org/apache/karaf/tooling/features/GenerateDescriptorMojo.java#L355

-Nick

From: Nicholas Baker
Reply-To: "user@karaf.apache.org"
Date: Tuesday, January 19, 2016 at 3:26 PM
To: "user@karaf.apache.org"
Subject: karaf-maven-plugin adding blueprint xml as "wrap:_" not "blueprint:_"

Hey Everyone,

Hope you all had a good New Year. I’m finally back in the swing of things. 
Trying to convert some of our manually crafted feature.xml files to the new 
karaf-maven-plugin style with a POM for each feature file. Our build-team is 
really excited about getting some visibility into the karaf feature 
dependencies by using this plugin.

The problem I’m having right now is with bare blueprint XML files. They’re 
being added with “wrap:” instead of “blueprint:”.

Example:

<dependency>
  <groupId>pentaho</groupId>
  <artifactId>pentaho-blueprint-activators</artifactId>
  <version>6.1-SNAPSHOT</version>
  <classifier>kettle-jms</classifier>
  <type>xml</type>
</dependency>


Result:
wrap:mvn:pentaho/pentaho-blueprint-activators/6.1-SNAPSHOT/xml/kettle-jms

I suspect I need to change our artifacts to be classified as “feature”. I can dive 
into the code, but thought one of you might know off the top of your head.

Appreciate the help,
Nick


Re: Merry Christmas

2015-12-25 Thread Nick Baker
Karaf has been a gift already. Enjoy some time with family and friends everyone!

-Nick
  Original Message
From: j...@nanthrax.net
Sent: Friday, December 25, 2015 5:00 AM
To: d...@karaf.apache.org; user@karaf.apache.org
Reply To: user@karaf.apache.org
Subject: Merry Christmas


On behalf of the Karaf team, we wish a happy christmas to all Karaf
users !

We are preparing a couple of gifts for you, especially the new website.
I worked on it yesterday and I will work again on it today. I hope to
send a vote e-mail soon.

Again Merry Christmas
JB


Re: Updating Snapshot bundles from installedFeatures

2015-12-17 Thread Nick Baker
Tom, we instruct our developers to delete the system/ directory when in dev. 
Can you send a link to the case you found?

Thanks,
Nick Baker

From: Tom Barber <tom.bar...@meteorite.bi>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Thursday, December 17, 2015 at 11:50 AM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: Updating Snapshot bundles from installedFeatures

Scratch that, found the global update policy thing on jira.

Tom
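For the archives: the setting Tom refers to lives in the pax-url-aether configuration. A hedged sketch (the property name is from the pax-url documentation; the file path assumes a stock Karaf layout):

```properties
# etc/org.ops4j.pax.url.mvn.cfg
# Re-check remote repositories for updated SNAPSHOT artifacts on each
# resolution instead of trusting the copy already present in system/
org.ops4j.pax.url.mvn.globalUpdatePolicy = always
```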

On Thu, Dec 17, 2015 at 4:39 PM, Tom Barber <tom.bar...@meteorite.bi> wrote:
Hello folks,

I have a custom distro with an installedFeature (will later be a bootfeature), 
so it ships with the custom distro.

Currently it's 1.0-SNAPSHOT, and it exists in distro/system.

After we push a build up to our build server it's deployed to our Maven repo. 
How do I get Karaf to fetch the updated snapshot? Because it already exists in 
distro/system, it just uses the local one unless I manually delete it. Is there 
an alternative?

Thanks

Tom



features-add-to-repository and multiple versions of a feature

2015-09-17 Thread Nick Baker
Hey All,

We're supplying customized versions for some of the core features: http, kar in 
order to provide different bundles. These features are all versioned higher 
than the stock ones and are indeed being used instead of the stock ones at 
runtime.

The issue is that in assembly these override features' bundles are not being 
added to the system/ repository. Production environments are downloading them 
from maven which is unacceptable for our customers. It seems that 
features-add-to-repository isn't honoring the highest-version feature the same 
way the FeaturesService is.

Anyone run into this one before? I'm about to create a bogus feature with these 
bundles, referenced in the assembly simply to get around the problem but it's 
not a long-term solution.

Thanks,
Nick


Re: features-add-to-repository and multiple versions of a feature

2015-09-17 Thread Nick Baker
Sure, kar was an easy one. We're waiting on the 3.0.5 release, so we're 
overriding it to load our patched core bundle instead. Yes, this is something 
we have to constantly maintain as releases increase :(

  
<bundle>mvn:org.apache.karaf.kar/org.apache.karaf.kar.core/3.0.5-p</bundle>
<bundle>mvn:org.apache.karaf.kar/org.apache.karaf.kar.command/3.0.3</bundle>
<bundle>mvn:org.apache.karaf.deployer/org.apache.karaf.deployer.kar/3.0.3</bundle>
  

Http is another one we override, as Karaf in some of our products runs 
within an existing application server and we bridge the HttpService out to the 
servlet environment supplied by the server (felix-http-bridge). That one is 
more involved, as some of the features declare a specific version of http, so 
we've overridden them as well to direct them to our customized version.

-Nick

From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Thursday, September 17, 2015 at 2:50 PM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: features-add-to-repository and multiple versions of a feature

Hi Nick,

I'm not quite sure what you're doing, so let's try to clear some question marks 
from my view.
Core features in a higher version: are you merely updating to newer available 
versions, or are you creating "new" versions which will collide in the future if 
those bundles are actually bumped to that version?

Is it possible that some other bundles/features have requirements on certain 
features at exactly the version from the standard repo?
As an example, take the http feature and the http commands: those commands 
require certain pax-web bundles in certain versions. So might it be possible 
that you have some "transitive" dependencies in there somewhere?

Could you give us a view of the customized features you got?

regards, Achim


2015-09-17 16:50 GMT+02:00 Nick Baker <nba...@pentaho.com>:
Hey All,

We're supplying customized versions for some of the core features: http, kar in 
order to provide different bundles. These features are all versioned higher 
than the stock ones and are indeed being used instead of the stock ones at 
runtime.

The issue is that in assembly these override features' bundles are not being 
added to the system/ repository. Production environments are downloading them 
from maven which is unacceptable for our customers. It seems that 
features-add-to-repository isn't honoring the highest-version feature the same 
way the FeaturesService is.

Anyone run into this one before? I'm about to create a bogus feature with these 
bundles, referenced in the assembly simply to get around the problem but it's 
not a long-term solution.

Thanks,
Nick



--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & 
Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master



Feature Uninstall and Dependent Features

2015-09-17 Thread Nick Baker
Hey All,

We're seeing that dependent features aren't being uninstalled when 
feature:uninstall is called (actually calling the FeaturesService in code), 
even though no other features depend on them. What's the expected behavior 
here? We're on 3.0.3.

Thanks,
Nick



Re: Feature Uninstall and Dependent Features

2015-09-17 Thread Nick Baker
Thanks, that's what the code was telling me. Damn.

Thanks Achim!

From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: "user@karaf.apache.org" <user@karaf.apache.org>
Date: Thursday, September 17, 2015 at 4:57 PM
To: "user@karaf.apache.org" <user@karaf.apache.org>
Subject: Re: Feature Uninstall and Dependent Features

Hi Nick,

works as expected for the Karaf 3 line.
With Karaf 4 the feature installer has been improved in that area.

regards, Achim


2015-09-17 22:50 GMT+02:00 Nick Baker <nba...@pentaho.com>:
Hey All,

We're seeing that dependent features aren't being uninstalled when 
feature:uninstall is called (actually calling the FeaturesService in code), 
even though no other features depend on them. What's the expected behavior 
here? We're on 3.0.3.

Thanks,
Nick




--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & 
Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master



Features Core is hardcoded for ${karaf.etc}/org.apache.karaf.features.cfg

2015-09-15 Thread Nick Baker
We've run into an issue with Karaf insisting on the 
org.apache.karaf.features.cfg file being in etc/.

One of our products can be launched in several different configurations from 
the same base installation, all concurrently.  These different configurations 
necessitate different feature profiles.

We'll eventually be moving away from featuresBoot and installing these 
application features ourselves directly with the FeaturesService, as part of 
our generic CapabilityManager, but we've run out of time for that.

So I tried to have the startup of Karaf add an extra configuration directory 
appropriate for the launch profile. These configuration directories contain 
only the org.apache.karaf.features.cfg file; the plain etc/ does not contain 
one. I thought for sure Karaf would be loading from ConfigurationAdmin, but 
it's actually loading straight from the properties file on disk by way of 
ext:property-placeholder:

https://github.com/apache/karaf/blob/karaf-3.0.3/features/core/src/main/resources/OSGI-INF/blueprint/blueprint.xml#L30

I've modified features-core to use the ConfigAdmin, and now our application 
works as expected:
https://github.com/pentaho-nbaker/karaf-1/commit/6c88b575d40f2519012e35bb56b9f4effd2b5b60
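For illustration, the featuresBoot lookup itself is simple enough to sketch. This is a simplified stand-in for what the features service reads, not the actual Karaf code: selecting the cfg file per launch profile and the subsequent FeaturesService call are elided, and only the property name matches the stock config.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

/**
 * Sketch of parsing featuresBoot from an org.apache.karaf.features.cfg
 * chosen per launch profile. File selection and the actual feature
 * installation are elided.
 */
public class FeaturesBootConfig {

    /** Parse the comma-separated featuresBoot property from cfg contents. */
    public static List<String> parseFeaturesBoot(String cfgContents) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(cfgContents));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        List<String> features = new ArrayList<>();
        for (String name : props.getProperty("featuresBoot", "").split(",")) {
            if (!name.trim().isEmpty()) {
                features.add(name.trim());
            }
        }
        return features;
    }

    public static void main(String[] args) {
        String cfg = "featuresBoot = config,standard,region,package,kar,ssh,management\n";
        System.out.println(parseFeaturesBoot(cfg));
    }
}
```

The point of the patch above is precisely that this data should come through ConfigurationAdmin (which merges any extra configuration directories) rather than from a parse of the hardcoded ${karaf.etc} file.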

-Nick


Karaf 3.0.5 release date?

2015-09-09 Thread Nick Baker
Hey All,

I see that the current plan is to release 3.0.5 on 9/15. Is this still the 
case? There's a bug fix we need in there, and our release is planned for the end 
of the month. We can ship a patched version if needed, but we would definitely 
like to ship the official build.

-Nick


Re: Construct a Web Application From Multiple Bundles

2015-07-15 Thread Nick Baker
Can someone direct me to the changes to the HTTP Service which make this 
possible? How did they handle the getResource method with a shared HttpContext? 
We're investigating exactly this scenario now.

Thanks,
Nick

From: Achim Nierbeck
Sent: Wednesday, July 15, 2015 4:59 PM
To: user@karaf.apache.org
Reply To: user@karaf.apache.org
Subject: Re: Construct a Web Application From Multiple Bundles


Hi,

never tried to run Pax Web 4 with Karaf 3.0.2, but if you do, you won't have 
the web and http commands available.
AFAIK Karaf 4 and JPA shouldn't be an issue; you might need to make sure you 
have the right version of it installed, though.

You should give it a try :-)

regards, Achim


2015-07-15 22:20 GMT+02:00 jtkb <ka...@avionicengineers.com>:
Just to clarify, I meant Pax-Web 4 in Karaf 3.0.2.



--
View this message in context: 
http://karaf.922171.n3.nabble.com/Construct-a-Web-Application-From-Multiple-Bundles-tp4041252p4041440.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master



Re: Integration Tests hang for approx. 30 seconds after Karaf 2.3.3 upgrade to 3.0.3

2015-02-13 Thread Nick Baker
We are experiencing the same slowdown in Pax Exam with 3.0.0, also using
TinyBundles. Pretty painful.

MavenArtifactUrlReference karafUrl = maven()
    .groupId("org.apache.karaf")
    .artifactId("apache-karaf")
    .version("3.0.0")
    .type("tar.gz");


...

karafDistributionConfiguration()
    .frameworkUrl(karafUrl)
    .unpackDirectory(new File("target/exam"))
    .useDeployFolder(false),




On 2/13/15, 6:10 AM, koslowskyj <johannes.koslow...@younicos.com> wrote:

Hi,

I am not sure whether this belongs on the Karaf or rather on the Pax
mailing list.
After I upgraded from Karaf 2.3.3 to 3.0.3 I noticed that our integration
tests slowed down dramatically (30 seconds to 1 minute per test).

So I tried a few things: I set up a minimal test with Pax Exam for the
Karaf container and also added a TinyBundle to the test (because most of
our tests use TinyBundles). I noticed that when I used the same test and
the same Maven settings for Pax Exam with different Karaf versions, they
were a lot slower on Karaf 3.0.3 (a ~30 second timeout during the test
with 3.0.3). With Karaf 2.3.3 they behaved as expected. This only seems
to happen when using TinyBundles and Karaf 3.0.3.
Is this a bug? Does anybody know how to fix this?

Best regards


2.3.3 result:

[INFO] Total time: 24.384 s

3.0.3 result:

[INFO] Total time: 54.138 s


Test:


import static aQute.bnd.osgi.Constants.BUNDLE_SYMBOLICNAME;
import static aQute.bnd.osgi.Constants.EXPORT_PACKAGE;
import static aQute.bnd.osgi.Constants.IMPORT_PACKAGE;
import static junit.framework.Assert.assertTrue;
import static org.ops4j.pax.exam.CoreOptions.maven;
import static org.ops4j.pax.exam.CoreOptions.streamBundle;
import static org.ops4j.pax.tinybundles.core.TinyBundles.bundle;
import static org.osgi.framework.Constants.DYNAMICIMPORT_PACKAGE;

import java.io.File;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.CoreOptions;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.junit.PaxExam;
import org.ops4j.pax.exam.karaf.options.KarafDistributionOption;
import org.ops4j.pax.exam.karaf.options.LogLevelOption.LogLevel;
import org.ops4j.pax.exam.spi.reactors.ExamReactorStrategy;
import org.ops4j.pax.exam.spi.reactors.PerClass;
import org.ops4j.pax.tinybundles.core.TinyBundle;

import sample.minimal.karaf.test.second.MyClass;

@RunWith(PaxExam.class)
@ExamReactorStrategy(PerClass.class)
public class VersionAsInProjectKarafTest
{

@Configuration
public Option[] config()
{
return new Option[]{
        KarafDistributionOption.logLevel(LogLevel.TRACE),
        KarafDistributionOption.karafDistributionConfiguration()
                .frameworkUrl(maven().groupId("org.apache.karaf").artifactId("apache-karaf")
                        .type("tar.gz").versionAsInProject())
                .karafVersion("2.3.3")
                //.karafVersion("3.0.3")
                .name("Apache Karaf").useDeployFolder(false)
                .unpackDirectory(new File("target/paxexam")),
        KarafDistributionOption.editConfigurationFileExtend("etc/config.properties",
                "org.apache.aries.blueprint.synchronous", "false"),
        runtimeUtils()
};
}


/**
 * Sets up and provisions a bundle packaged with the runtime utility classes
 * and corresponding headers. This method should not be called directly as it
 * is called from {@link #provisionRuntimeUtilities()}.
 */
private static Option runtimeUtils()
{

final TinyBundle tinyBundle = bundle()
        .add(MyClass.class)
        .set(BUNDLE_SYMBOLICNAME, "sample.minimal.karaf.test.second")
        .set(EXPORT_PACKAGE, "sample.minimal.karaf.test.second")
        .set(IMPORT_PACKAGE, "*")
        .set(DYNAMICIMPORT_PACKAGE, "*");

return CoreOptions.provision(streamBundle(tinyBundle.build()));
}


@Test
public void test() throws Exception
{
assertTrue(true);
}

}

pom.xml


<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>some.test</groupId>
    <artifactId>sample.minimal.karaf.test.second</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>bundle</packaging>

    <dependencies>

        <dependency>
            <!-- setting for karaf 3 -->
            <groupId>org.apache.karaf.features</groupId>

Re: Integration Tests hang for approx. 30 seconds after Karaf 2.3.3 upgrade to 3.0.3

2015-02-13 Thread Nick Baker
We’re using 4.1.0. It hasn’t been a problem for us until this week when we 
started a new round of integration test authoring.

Most of the time spent waiting seems to be after JVM start and before Karaf 
boots. Our plain Felix Pax Exam tests are still very fast. I’ll be back in 
there later today and will attach YourKit to it and hopefully get you something 
more useful than “it’s slow”.

-Nick

From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: user@karaf.apache.org
Date: Friday, February 13, 2015 at 10:19 AM
To: user@karaf.apache.org
Subject: Re: Integration Tests hang for approx. 30 seconds after Karaf 2.3.3 
upgrade to 3.0.3

Hi,

which version of Pax Exam are you using?
And did you try to upgrade to the latest version of it?

regards, Achim


2015-02-13 15:40 GMT+01:00 Nick Baker <nba...@pentaho.com>:
We are experiencing the same slowdown in PAX-Exam with 3.0.0, also using
TinyBundles. Pretty painful.

MavenArtifactUrlReference karafUrl = maven()
        .groupId( "org.apache.karaf" )
        .artifactId( "apache-karaf" )
        .version( "3.0.0" )
        .type( "tar.gz" );


...

karafDistributionConfiguration()
        .frameworkUrl( karafUrl )
        .unpackDirectory( new File( "target/exam" ) )
        .useDeployFolder( false ),




On 2/13/15, 6:10 AM, koslowskyj <johannes.koslow...@younicos.com> wrote:

Re: How to execute code after bundles started by Felix FileInstall?

2014-12-11 Thread Nick Baker
I was thinking about our use-case today. We know the full class name of the 
components supplied by the bundles. We could perhaps leverage an OBR to find 
the bundle symbolic name for the package and use that in a bundle listener 
to execute as soon as it's available. This would be a good adjunct to the 
current loop/sleep/timeout, not a replacement. Of course there's no guarantee 
that the actual bundle will be correctly reported by the OBR, but it would be 
an improvement.

-Nick

Sent from my BlackBerry. I am AFK at the moment
From: Achim Nierbeck
Sent: Thursday, December 11, 2014 7:06 PM
To: user@karaf.apache.org
Reply To: user@karaf.apache.org
Subject: Re: How to execute code after bundles started by Felix FileInstall?


Well, FileInstall sets the right start level, but this will only be used in 
case you restart the server, where those bundles are already installed and 
inside the cache.

regards, Achim

2014-12-12 1:02 GMT+01:00 thully <tmh...@eng.ucsd.edu>:
The issue is that this is not a server - this is a client application. Our
command scripts work essentially like any other script, but can include
commands that reference either core bundles or non-core bundles (i.e. those
loaded by FileInstall).  We do not control the non-core bundles - these are
created by third-party developers.

If the user specifies a command script to run (i.e. using a command line
argument specifying a file), we want to run that script, but we want to wait
until all core and non-core bundles have started up and run all commands
sequentially.  When a specified command is not registered with our handler,
we throw up an error and stop execution.

Anyway, it does seem like setting FileInstall's
felix.fileinstall.noInitialDelay to true makes the bundles load (but not
start) before anything in the features.xml loads or starts. However, for
some reason, it doesn't seem that felix.fileinstall.start.level is respected
- it will set the start level for the bundles to that value, but they are
still getting started at the very end. It seems FileInstall doesn't respect
start-level order in the same way that Karaf (or at least 3.0+) does with
the features.xml.
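
For reference, the FileInstall properties discussed above are normally set in a cfg file under Karaf's etc/ directory. A minimal sketch — the file name suffix, directory path, and start-level value here are illustrative assumptions, not the stock Karaf defaults:

```properties
# etc/org.apache.felix.fileinstall-apps.cfg  (the "-apps" suffix is an assumption)
felix.fileinstall.dir            = ${karaf.base}/apps
felix.fileinstall.noInitialDelay = true
felix.fileinstall.start.level    = 60
```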

Does anyone know how exactly this is supposed to work in the context of
Karaf 3.x? It seems the FileInstall documentation is fairly sparse...



--
View this message in context: 
http://karaf.922171.n3.nabble.com/How-to-execute-code-after-bundles-started-by-Felix-FileInstall-tp4037120p4037173.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--

Apache Member
Apache Karaf http://karaf.apache.org/ Committer  PMC
OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/ Committer  
Project Lead
blog http://notizblog.nierbeck.de/
Co-Author of Apache Karaf Cookbook http://bit.ly/1ps9rkS

Software Architect / Project Manager / Scrum Master



Re: How to execute code after bundles started by Felix FileInstall?

2014-12-11 Thread Nick Baker
I suppose we could just inspect each bundle as it becomes active for the 
packages as well.
-Nick


Re: How to execute code after bundles started by Felix FileInstall?

2014-12-10 Thread Nick Baker
We have a similar requirement which I'm sad to say is being worked around
with wait code. We can't know the list of bundles/features which will be
deployed and required ahead of time. Our ETL application runs
transformations after startup which may contain steps provided by plugins
(deployed as bundles). When can the application be certain that the Karaf
environment is fully initialized, all features have been installed and
FileInstall has done a pass over the configured directories? Today, if the
step component isn't available we loop/wait for a configurable timeout,
then fail if it hasn't become available.

-Nick
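
The loop/wait-with-timeout workaround described above can be sketched, stripped of any OSGi specifics, roughly as follows. The class and method names are made up for this illustration; they are not from the actual codebase:

```java
import java.util.function.Supplier;

// Illustrative poll/sleep/timeout wait: polls a condition until it becomes
// true or the timeout elapses. Names are hypothetical, for this sketch only.
public class WaitUtil {

    // Returns true if the condition became true within timeoutMillis, else false.
    public static boolean waitFor(Supplier<Boolean> condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return condition.get(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // A condition that becomes true after ~100 ms, standing in for
        // "the plugin-provided step component is available".
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 100, 2000, 20);
        System.out.println(ok ? "component available" : "timed out");
    }
}
```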


On 12/10/14, 2:50 PM, Jean-Baptiste Onofré j...@nanthrax.net wrote:

You can implement a BundleListener to catch when the bundle is
installed/started and do some code.

Regards
JB
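
JB's BundleListener suggestion can be sketched as below. Note this is a self-contained illustration: the Event class and onStarted helper are stand-ins for org.osgi.framework.BundleEvent and for a real BundleListener registered via bundleContext.addBundleListener(...), which this sketch does not include.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Self-contained stand-in for the BundleListener pattern: run a callback once
// a specific bundle reports the STARTED event.
public class StartupListenerSketch {
    public static final int STARTED = 0x00000002; // same value as BundleEvent.STARTED

    // Minimal stand-in for org.osgi.framework.BundleEvent.
    public static final class Event {
        final int type;
        final String symbolicName;
        public Event(int type, String symbolicName) {
            this.type = type;
            this.symbolicName = symbolicName;
        }
    }

    // Builds a listener that runs the callback once the wanted bundle starts.
    public static Consumer<Event> onStarted(String wantedSymbolicName, Runnable callback) {
        return event -> {
            if (event.type == STARTED && wantedSymbolicName.equals(event.symbolicName)) {
                callback.run();
            }
        };
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Consumer<Event> listener = onStarted("com.example.plugin", () -> log.add("plugin started"));
        listener.accept(new Event(STARTED, "some.other.bundle"));   // ignored
        listener.accept(new Event(STARTED, "com.example.plugin"));  // fires the callback
        System.out.println(log);
    }
}
```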

On 12/10/2014 08:20 PM, thully wrote:
 Our project (Cytoscape) utilizes Karaf for OSGi bundle loading and
 management. We are using Felix FileInstall to load additional bundles
 (apps) installed by users. As it stands, the bundles loaded by Felix
 FileInstall load after all other bundles. While this is generally what
we
 want, we do have some code that we'd like to trigger after the bundles
 loaded by FileInstall have started - or at least after an attempt has
been
 made to start them.

 Is there a way to do this? It doesn't seem like Felix FileInstall has
much
 of an API to speak of, though it seems this may be possible by using
Karaf
 runlevels - particularly with the ability to respect load order on
3.0.2.
 If not, it seems like we may need to move away from Felix FileInstall
and
 write our own bundle installation mechanism...



 --
 View this message in context:
http://karaf.922171.n3.nabble.com/How-to-execute-code-after-bundles-started-by-Felix-FileInstall-tp4037120.html
 Sent from the Karaf - User mailing list archive at Nabble.com.


-- 
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com



Re: How to execute code after bundles started by Felix FileInstall?

2014-12-10 Thread Nick Baker
Yea we’re in a transitional phase where Karaf is embedded within a non-OSGI 
environment. It would be very useful for this outside application to know when 
Karaf is fully started as I described. I guess we’ll continue employing the 
loop/sleep code for now.

We may need to revisit this soon though. I can customize Felix FileInstall to 
send an event when it has completed one directory scan. Is there anything 
available on the Karaf side to know when the features list has been 
processed?

Thanks,
-Nick

From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: user@karaf.apache.org
Date: Wednesday, December 10, 2014 at 4:27 PM
To: user@karaf.apache.org
Subject: Re: How to execute code after bundles started by Felix FileInstall?

Hmm, sorry to say this, but this seems like a design flaw here.
As everything in Karaf/OSGi is a service which might or might not be there, you
never know if everything is loaded. If your application needs certain
dependencies to be available, make sure your application depends on those other
services. That's the modularity level you want to reach.
Start levels are only there to help a bit with this scenario during the startup
phase of the container, as some services should/could be available a bit
earlier, like a logging service, so all other services can profit from it.

Another way of watching or waiting for other Bundles is to use BundleTrackers 
or Extenders.

Regards, Achim







--

Apache Member
Apache Karaf http://karaf.apache.org/ Committer  PMC
OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/ Committer  
Project Lead
blog http://notizblog.nierbeck.de/
Co-Author of Apache Karaf Cookbook http://bit.ly/1ps9rkS

Software Architect / Project Manager / Scrum Master



Re: Dependency on Camel brings in a jaxb impl that causes a use constraint violation

2014-12-07 Thread Nick Baker
We've been overriding various features to tailor the dependencies in our setup. 
Not ideal nor easy to maintain. An exclude would be great for us as well.

-Nick

Sent from my BlackBerry. I am AFK at the moment
  Original Message
From: Jean-Baptiste Onofré
Sent: Sunday, December 7, 2014 12:28 PM
To: user@karaf.apache.org
Reply To: user@karaf.apache.org
Subject: Re: Dependency on Camel brings in a jaxb impl that causes a use 
constraint violation


Hi,

The karaf-maven-plugin resolves recursively all features/bundles.

However, you raise a valid point: I will create a Jira to add
<excludes/> for features and bundles, to give more control.

Thanks,
Regards
JB

On 12/07/2014 12:08 PM, MarkD wrote:
 Hi all,

 I have a dependency on Camel which is added to the feature list in my kar by
 the karaf maven plugin.

 It has a transitive dependency on jaxb-impl which I think is exposed by the
 system bundle. Is there any way of telling the karaf maven plugin to ignore
 this so I only get the provided bundle?

 Thanks in advance!



 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Dependency-on-Camel-brings-in-a-jaxb-impl-that-causes-a-use-constraint-violation-tp4037029.html
 Sent from the Karaf - User mailing list archive at Nabble.com.


--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: Free-form Version Coercion to OSGi Compatible Version

2014-11-26 Thread Nick Baker
Haha, yep. That makes more sense. People laugh at my BlackBerry, but the 
autocorrect is awesome.

Thanks!
Nick

Sent from my BlackBerry. I am AFK at the moment
  Original Message
From: chris.g...@kiffer.ltd.uk
Sent: Wednesday, November 26, 2014 3:04 PM
To: user@karaf.apache.org
Reply To: user@karaf.apache.org
Subject: Re: Free-form Version Coercion to OSGi Compatible Version


Hi Achim,

I guess you mean:

Karaf is using *bnd* under the hood.
So you best take a look at the *bnd* tools.

Gotta love those whacky spellcheckers ;-)

Regards, Chris

 Hi,

 Karaf is using and under the hood.
 So you best take a look at the end tools.

 Regards, Achim

 sent from mobile device
On 26.11.2014 at 16:54, Nick Baker <nba...@pentaho.com> wrote:

  Hey All,

  We’ve written a deployer to transform WebJars
 (http://www.webjars.org)
 into bundles compatible with our RequireJS setup. Part of this process
 involves transforming the maven version into an OSGI Version. Most
 artifacts adhere to maven version standards, though some are just
 strings
 (SHAs, TRUNK-SNAPSHOT, etc.). I’ve written a simple parser to handle
 this.
 However, I’m noticing that Karaf or perhaps BND is doing the same
 version
 coercion. Can anyone point me to this class. I’m concerned I’m
 re-inventing
 the wheel and stumbling over the same edge cases.

  Here’s our VersionParser:

 https://github.com/pentaho/pentaho-osgi-bundles/blob/master/pentaho-webjars-deployer/src/main/java/org/pentaho/osgi/platform/webjars/VersionParser.java

  Thanks,
 -Nick Baker
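
The kind of coercion described above can be sketched like this. The rules below are illustrative only; they are not the exact behavior of the linked VersionParser or of bnd's internal version cleanup:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative coercion of a free-form (Maven-style) version into a valid
// OSGi version of the form major.minor.micro.qualifier.
public class OsgiVersionCoercer {
    private static final Pattern NUMERIC_PREFIX =
            Pattern.compile("^(\\d+)(?:\\.(\\d+))?(?:\\.(\\d+))?(?:[.-](.+))?$");

    public static String coerce(String version) {
        Matcher m = NUMERIC_PREFIX.matcher(version);
        if (!m.matches()) {
            // No numeric prefix at all (e.g. a SHA or "TRUNK-SNAPSHOT"):
            // fall back to 0.0.0 with the whole string as a cleaned qualifier.
            return "0.0.0." + cleanQualifier(version);
        }
        String major = m.group(1);
        String minor = m.group(2) != null ? m.group(2) : "0";
        String micro = m.group(3) != null ? m.group(3) : "0";
        String qualifier = m.group(4);
        String base = major + "." + minor + "." + micro;
        return qualifier == null ? base : base + "." + cleanQualifier(qualifier);
    }

    // OSGi qualifiers may only contain [A-Za-z0-9_-]; replace anything else.
    private static String cleanQualifier(String q) {
        return q.replaceAll("[^A-Za-z0-9_-]", "_");
    }

    public static void main(String[] args) {
        System.out.println(coerce("1.2.3"));          // 1.2.3
        System.out.println(coerce("2.5-SNAPSHOT"));   // 2.5.0.SNAPSHOT
        System.out.println(coerce("TRUNK-SNAPSHOT")); // 0.0.0.TRUNK-SNAPSHOT
    }
}
```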






Re: Minimal/core Karaf 4

2014-11-24 Thread Nick Baker
I can still see value without the shell or JMX. It would be akin to Virgo Nano. 
Features, KARs and deployers alone make Karaf valuable.

-Nick Baker

From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: user@karaf.apache.org
Date: Monday, November 24, 2014 at 8:53 AM
To: user@karaf.apache.org
Subject: Re: Minimal/core Karaf 4

Hi,

maybe you should give us a hint about your requirements, because right now the 
*.jmx.* bundles are needed to configure Karaf via JMX, which itself is a valid 
requirement. Again, org.apache.karaf.shell.* is needed to run the Karaf shell, 
so why would you want to run Karaf without the shell and without JMX? In that 
scenario you can't use it at all for anything.

At best, give us an idea of what kind of requirements you have for a core 
distribution. And please also give us an idea where using Karaf-Core is more 
helpful than using a plain framework like Equinox or Felix. At the moment I 
wouldn't know where to strip off more bundles.

regards, Achim


2014-11-24 14:43 GMT+01:00 Kim Hansen <kimh...@gmail.com>:
I have tried installing Karaf and starting it, and can see that it takes 7 
seconds (on my laptop) to load e.g.:

  *   org.osgi.jmx.*
  *   org.apache.aries.jmx.*
  *   org.apache.karaf.shell.*

But I can't understand why these are being loaded at all for a minimal 
distribution?

I would really like to get a Karaf core version that loads nothing and starts 
up in less than a second, and then a manual/guide for how to easily create a 
core distribution with none/one/multiple of these installed.

While looking through the archive I found these related issues:

  *   KARAF-2651 (https://issues.apache.org/jira/browse/KARAF-2651) -- Minimal 
distribution should really be minimal
  *   KARAF-2652 (https://issues.apache.org/jira/browse/KARAF-2652) -- Create 
net distribution

Why does Karaf core load these?



--

Apache Member
Apache Karaf http://karaf.apache.org/ Committer  PMC
OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/ Committer  
Project Lead
blog http://notizblog.nierbeck.de/
Co-Author of Apache Karaf Cookbook http://bit.ly/1ps9rkS

Software Architect / Project Manager / Scrum Master



Re: Best way to determine HttpService URL

2014-11-18 Thread Nick Baker
Thanks Achim. This was pretty much my conclusion as well.

-Nick

From: Achim Nierbeck <bcanh...@googlemail.com>
Reply-To: user@karaf.apache.org
Date: Tuesday, November 18, 2014 at 9:01 AM
To: user@karaf.apache.org
Subject: Re: Best way to determine HttpService URL

Hi Nick,

I feared an answer like that was about to come ...
... still, there is no auto-magic way of knowing ports from the inside of the 
container. Most likely you will have to do something yourself.
Maybe you could have a REST servlet running inside your Pax Web container 
that picks up messages for configuring ports, as the default will be the one 
you initially set, or 8181.
From there on, the servlet will need to take the new parameters and set them 
through the Configuration Admin service, which again will fire up the Pax Web 
Jetty instance with the new parameters configured.
That's most likely the only way to control this from the outside.

regards, Achim





--

Apache Member
Apache Karaf http://karaf.apache.org/ Committer  PMC
OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/ Committer  
Project Lead
blog http://notizblog.nierbeck.de/
Co-Author of Apache Karaf Cookbook http://bit.ly/1ps9rkS

Software Architect / Project Manager / Scrum Master



Best way to determine HttpService URL

2014-11-17 Thread Nick Baker
Hey All,

I've got a mixed environment where a Bundle can be deployed in an environment 
using the standard PAX-Web HttpService as well as one bridging out to Tomcat by 
way of Felix-HTTP Bridge. I need to be able to programmatically determine the 
URL for the HttpService regardless of where a bundle is deployed. There doesn't 
seem to be any way of determining this using the standard OSGI APIs.

Karaf is embedded within both environments. The PAX-Web setup will dynamically 
find an open port on startup and set the appropriate Configuration Admin 
entries before starting Karaf. The Tomcat environment does not need to do this 
work.
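
The "dynamically find an open port on startup" step mentioned above can be sketched like this; the class name is illustrative, and the Config Admin write that would follow (e.g. setting the HTTP port property on the Pax Web PID) is only described in the comment, not shown:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch: ask the OS for a free TCP port by binding to port 0, then release it.
// In the real setup the discovered port would then be written to Configuration
// Admin before Karaf starts, so Pax Web picks it up.
public class FreePortFinder {
    public static int findFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            socket.setReuseAddress(true);
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("free port: " + findFreePort());
    }
}
```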

Any ideas appreciated,
-Nick


Re: Best way to determine HttpService URL

2014-11-17 Thread Nick Baker
Achim:
We're in a transitional phase right now. OSGI is replacing 4 different
home-grown plugin systems to become the modular framework for future
development. In time we will have everything within the OSGI container,
but that time is not now.

JB:
We do use Pax-Web in our Thick Client applications (Swing, SWT). The
embedded browser components within those applications display web content
served from the OSGI container. This is the primary reason why I need to
know the URL of the Http Service.

Our Server offering runs in any J2EE application server. We maintain an
embedded Karaf instance there with the HttpService bridged out to the
outside Servlet Container (primarily Tomcat). I know some people proxy out
to PAX-Web, but this won't work for us.

-Nick


On 11/17/14, 1:35 PM, Jean-Baptiste Onofré j...@nanthrax.net wrote:

Hi Nick,

why not just using Pax Web ?

Pax Web register the servlet as service, so you can do a simple lookup.

Regards
JB

On 11/17/2014 04:34 PM, Nick Baker wrote:
 Hey All,

 I've got a mixed environment where a Bundle can be deployed in an
 environment using the standard PAX-Web HttpService as well as one
 bridging out to Tomcat by way of Felix-HTTP Bridge. I need to be able to
 programmatically determine the URL for the HttpService regardless of
 where a bundle is deployed. There doesn't seem to be any way of
 determining this using the standard OSGI APIs.

 Karaf is embedded within both environments. The PAX-Web setup will
 dynamically find an open port on startup and set the appropriate
 Configuration Admin entries before starting Karaf. The Tomcat
 environment does not need to do this work.

 Any ideas appreciated,
 -Nick

-- 
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com



Load features xml file from the local /system repo

2014-08-09 Thread Nick Baker
Hi All,

I’m trying to load a features repository file from within the embedded /system 
repository in 2.3.5. Here’s my setup:

etc/org.apache.karaf.features.cfg:
featuresRepositories=mvn:org.apache.karaf.assemblies.features/standard/2.3.5/xml/features,mvn:org.apache.karaf.assemblies.features/enterprise/2.3.5/xml/features,mvn:io.hawt/hawtio-karaf/1.4.11/xml/features,mvn:org.apache.camel.karaf/apache-camel/2.13.2/xml/features,mvn:pentaho/pentaho-server-core/1.0-SNAPSHOT/xml/features

The file in question is 
mvn:pentaho/pentaho-server-core/1.0-SNAPSHOT/xml/features.

Now I’ve placed this in the “karaf.default.repository” (system) as
karaf/system/pentaho/pentaho-server-core/1.0-SNAPSHOT/pentaho-server-core-features.xml

However, it’s not finding it. I tried from the console with features:addurl as 
well, with no luck. I can use 
features:addurl file://${karaf.base}/system/pentaho/pentaho-server-core/1.0-SNAPSHOT/pentaho-server-core-features.xml.
Unfortunately this URL doesn’t work in the featuresRepositories entry, as it 
doesn’t seem to support property replacement.
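
For what it's worth, assuming the system repository follows the standard Maven 2 layout, a URL of the form mvn:groupId/artifactId/version/xml/features would normally resolve to a file name that includes both the version and the "features" classifier, e.g.:

```
system/pentaho/pentaho-server-core/1.0-SNAPSHOT/pentaho-server-core-1.0-SNAPSHOT-features.xml
```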

Any help is appreciated. I may have to go the KAR route if I can’t get this 
working.

Thanks,
Nick