Re: missing requirement [org.apache.karaf.features.core/4.0.5] osgi.wiring.package; filter:="(osgi.wiring.package=org.eclipse.equinox.region.management)

2016-06-30 Thread Allan C.
Please ignore this.

It's working after I updated the XML namespace from 1.0.0 to 1.3.0.
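In other words, on Karaf 4.x the root element of the features file should point at the newer schema, roughly like this (the name attribute is just a placeholder):

<features name="my-features" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
    ...
</features>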

Regards,
Allan C.


On Fri, Jul 1, 2016 at 8:15 AM, Allan C.  wrote:
> Hi,
>
> I am trying to get my feature installed but encountered this exception:
>
> 08:10:49,424 | ERROR | ShellUtil:149 | 44 | Exception caught while executing
> command
> org.osgi.service.resolver.ResolutionException: Unable to resolve
> org.apache.karaf.features.core/4.0.5: missing requirement
> [org.apache.karaf.features.core/4.0.5] osgi.wiring.package;
> filter:="(osgi.wiring.package=org.eclipse.equinox.region.management)"
> at
> org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)[org.apache.felix.framework-5.4.0.jar:]
> at
> org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:404)[org.apache.felix.framework-5.4.0.jar:]
> at
> org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:179)[org.apache.felix.framework-5.4.0.jar:]
> at
> org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:216)[9:org.apache.karaf.features.core:4.0.5]
> at
> org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:263)[9:org.apache.karaf.features.core:4.0.5]
> at
> org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1152)[9:org.apache.karaf.features.core:4.0.5]
> at
> org.apache.karaf.features.internal.service.FeaturesServiceImpl$1.call(FeaturesServiceImpl.java:1048)[9:org.apache.karaf.features.core:4.0.5]
> at
> java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_101]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_101]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_101]
> at java.lang.Thread.run(Thread.java:745)[:1.7.0_101]
>
> This is my features.xml.
>
> <features xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
>     <repository>mvn:org.apache.karaf.features/framework/4.0.5/xml/features</repository>
>     <repository>mvn:org.apache.karaf.features/standard/4.0.5/xml/features</repository>
>     <repository>mvn:org.apache.karaf.features/enterprise/4.0.5/xml/features</repository>
>     <repository>mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.8.0/xml/features</repository>
>     <repository>mvn:org.apache.camel.karaf/apache-camel/2.15.1/xml/features</repository>
>     <repository>mvn:org.apache.cxf.karaf/apache-cxf/3.0.4/xml/features</repository>
>
>     <feature name="...">
>         <feature>transaction</feature>
>         <feature>jpa</feature>
>         <feature>jndi</feature>
>         <feature>hibernate</feature>
>
>         <feature>camel-core</feature>
>         <feature>camel-blueprint</feature>
>         <feature>camel-bindy</feature>
>         <feature>camel-jpa</feature>
>         <feature>camel-cxf</feature>
>         <feature>camel-spring</feature>
>         <feature>camel-jetty</feature>
>
>         <feature>pax-jdbc-mysql</feature>
>         <feature>pax-jdbc-pool-dbcp2</feature>
>         <feature>pax-jdbc-config</feature>
>
>         <bundle>mvn:net.sf.ehcache/ehcache/2.9.1</bundle>
>         <bundle>mvn:org.apache.shiro/shiro-core/1.2.3</bundle>
>         <bundle>mvn:org.apache.shiro/shiro-ehcache/1.2.3</bundle>
>         <bundle>mvn:com.fasterxml.jackson.core/jackson-core/2.4.3</bundle>
>         <bundle>mvn:com.fasterxml.jackson.core/jackson-annotations/2.4.3</bundle>
>         <bundle>mvn:com.fasterxml.jackson.core/jackson-databind/2.4.3</bundle>
>         <bundle>mvn:com.fasterxml.jackson.jaxrs/jackson-jaxrs-json-provider/2.4.3</bundle>
>         <bundle>mvn:com.fasterxml.jackson.jaxrs/jackson-jaxrs-base/2.4.3</bundle>
>         <bundle>mvn:javax.ws.rs/javax.ws.rs-api/2.0.1</bundle>
>         <bundle>mvn:org.apache.geronimo.specs/geronimo-jpa_2.0_spec/1.1</bundle>
>         <bundle>wrap:mvn:org.hibernate.javax.persistence/hibernate-jpa-2.0-api/1.0.1.Final</bundle>
>     </feature>
> </features>
>
> Does anyone know which bundle or feature I am missing?
>
> Regards,
> Allan C.


missing requirement [org.apache.karaf.features.core/4.0.5] osgi.wiring.package; filter:="(osgi.wiring.package=org.eclipse.equinox.region.management)

2016-06-30 Thread Allan C.
Hi,

I am trying to get my feature installed but encountered this exception:

08:10:49,424 | ERROR | ShellUtil:149 | 44 | Exception caught while
executing command
org.osgi.service.resolver.ResolutionException: Unable to resolve
org.apache.karaf.features.core/4.0.5: missing requirement
[org.apache.karaf.features.core/4.0.5] osgi.wiring.package;
filter:="(osgi.wiring.package=org.eclipse.equinox.region.management)"
at
org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)[org.apache.felix.framework-5.4.0.jar:]
at
org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:404)[org.apache.felix.framework-5.4.0.jar:]
at
org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:179)[org.apache.felix.framework-5.4.0.jar:]
at
org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:216)[9:org.apache.karaf.features.core:4.0.5]
at
org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:263)[9:org.apache.karaf.features.core:4.0.5]
at
org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1152)[9:org.apache.karaf.features.core:4.0.5]
at
org.apache.karaf.features.internal.service.FeaturesServiceImpl$1.call(FeaturesServiceImpl.java:1048)[9:org.apache.karaf.features.core:4.0.5]
at
java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_101]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_101]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_101]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_101]

This is my features.xml.

<features xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
    <repository>mvn:org.apache.karaf.features/framework/4.0.5/xml/features</repository>
    <repository>mvn:org.apache.karaf.features/standard/4.0.5/xml/features</repository>
    <repository>mvn:org.apache.karaf.features/enterprise/4.0.5/xml/features</repository>
    <repository>mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.8.0/xml/features</repository>
    <repository>mvn:org.apache.camel.karaf/apache-camel/2.15.1/xml/features</repository>
    <repository>mvn:org.apache.cxf.karaf/apache-cxf/3.0.4/xml/features</repository>

    <feature name="...">
        <feature>transaction</feature>
        <feature>jpa</feature>
        <feature>jndi</feature>
        <feature>hibernate</feature>

        <feature>camel-core</feature>
        <feature>camel-blueprint</feature>
        <feature>camel-bindy</feature>
        <feature>camel-jpa</feature>
        <feature>camel-cxf</feature>
        <feature>camel-spring</feature>
        <feature>camel-jetty</feature>

        <feature>pax-jdbc-mysql</feature>
        <feature>pax-jdbc-pool-dbcp2</feature>
        <feature>pax-jdbc-config</feature>

        <bundle>mvn:net.sf.ehcache/ehcache/2.9.1</bundle>
        <bundle>mvn:org.apache.shiro/shiro-core/1.2.3</bundle>
        <bundle>mvn:org.apache.shiro/shiro-ehcache/1.2.3</bundle>
        <bundle>mvn:com.fasterxml.jackson.core/jackson-core/2.4.3</bundle>
        <bundle>mvn:com.fasterxml.jackson.core/jackson-annotations/2.4.3</bundle>
        <bundle>mvn:com.fasterxml.jackson.core/jackson-databind/2.4.3</bundle>
        <bundle>mvn:com.fasterxml.jackson.jaxrs/jackson-jaxrs-json-provider/2.4.3</bundle>
        <bundle>mvn:com.fasterxml.jackson.jaxrs/jackson-jaxrs-base/2.4.3</bundle>
        <bundle>mvn:javax.ws.rs/javax.ws.rs-api/2.0.1</bundle>
        <bundle>mvn:org.apache.geronimo.specs/geronimo-jpa_2.0_spec/1.1</bundle>
        <bundle>wrap:mvn:org.hibernate.javax.persistence/hibernate-jpa-2.0-api/1.0.1.Final</bundle>
    </feature>
</features>

Does anyone know which bundle or feature I am missing?

Regards,
Allan C.


Re: Different log level for different Karaf Bundles

2016-06-30 Thread Debraj Manna
My bundles were using different loggers, so I created a logger category in
the pax-logging config, as mentioned by JB.
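For example, per-logger categories in etc/org.ops4j.pax.logging.cfg can pin a
different level per bundle's logger (the logger names below are illustrative):

log4j.logger.com.example.bundle1=DEBUG
log4j.logger.com.example.bundle2=ERROR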

Bengt's solution also looks interesting; I will give it a try.

On Thu, Jun 30, 2016 at 6:18 PM, Jean-Baptiste Onofré 
wrote:

> Yes, it's what I said: create new sift appender.
>
> Regards
> JB
>
> On 06/30/2016 02:33 PM, Bengt Rodehav wrote:
>
>> You can do this by using MDC combined with filters (I implemented that
>> in Pax logging a few years back).
>>
>> E.g. if you use this root logger:
>>
>> log4j.rootLogger=INFO, stdout, info, error, bundle, context, osgi:*
>>
>> And you define the "bundle" log as follows:
>>
>> log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
>> log4j.appender.bundle.key=bundle.name 
>> log4j.appender.bundle.default=karaf
>> log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
>> log4j.appender.bundle.appender.MaxFileSize=1MB
>> log4j.appender.bundle.appender.MaxBackupIndex=2
>> log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
>> log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601} |
>> %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
>> log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name
>> \\}.log
>> log4j.appender.bundle.appender.append=true
>> log4j.appender.bundle.threshold=INFO
>>
>> You will end up with a separate log file per bundle (named with the
>> bundle's name). I use a custom variable (${logdir}) to specify where to
>> create the log file but you can do as you wish. In this case these log
>> files will be at INFO level.
>>
>> Sometimes I want TRACE logging on a specific bundle. I can then do as
>> follows:
>>
>> log4j.rootLogger=TRACE, stdout, info, error, bundle, context, osgi:*,
>> bundle_trace
>>
>> log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
>> log4j.appender.bundle_trace.key=bundle.name 
>> log4j.appender.bundle_trace.default=karaf
>> log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
>> log4j.appender.bundle_trace.appender.MaxFileSize=10MB
>> log4j.appender.bundle_trace.appender.MaxBackupIndex=2
>> log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
>> log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601}
>> |
>> %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
>> log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\\{
>> bundle.name
>> \\}.log
>> log4j.appender.bundle_trace.appender.append=true
>> log4j.appender.bundle_trace.threshold=TRACE
>>
>> log4j.appender.bundle_trace.filter.a=org.apache.log4j.filter.MDCMatchFilter
>> log4j.appender.bundle_trace.filter.a.exactMatch=false
>> log4j.appender.bundle_trace.filter.a.keyToMatch=bundle.name
>> 
>>
>> log4j.appender.bundle_trace.filter.a.valueToMatch=org.apache.aries.blueprint.core
>> # DenyAllFilter should always be the last filter
>> log4j.appender.bundle_trace.filter.z=org.apache.log4j.varia.DenyAllFilter
>>
>> In the above example I create a separate TRACE log for the bundle with
>> the name "org.apache.aries.blueprint.core".
>>
>> It is also possible to configure custom logging for a particular camel
>> context which we do in our integration platform based on Karaf and Camel.
>>
>> /Bengt
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 2016-06-30 13:59 GMT+02:00 Jean-Baptiste Onofré > >:
>>
>> Then it's different sift appenders that you have to define.
>>
>> Generally speaking, you don't need sift for what you want: if your
>> bundles use different loggers, then, just create the logger category
>> in the pax-logging config.
>>
>> Regards
>> JB
>>
>> On 06/30/2016 01:56 PM, Debraj Manna wrote:
>>
>>
>> Yeah if I enable sifting appender let's say with a config  and
>> add it to
>> rootLogger
>>
>> log4j.appender.sift.threshold=DEBUG
>>
>>
>> Then this will make the log level DEBUG for all bundles. What I am trying
>> to ask is: let's say I have two bundles, bundle1 and bundle2, and I want
>> bundle1's log level to be DEBUG and bundle2's log level to be ERROR.
>>
>>
>> On Thu, Jun 30, 2016 at 2:12 PM, Jean-Baptiste Onofré
>> 
>> >> wrote:
>>
>>  Hi,
>>
>>  I don't see the sift appender enable for the root logger.
>>
>>  You should have:
>>
>>  log4j.rootLogger=DEBUG, async, sift, osgi:*
>>
>>  Regards
>>  JB
>>
>>  On 06/30/2016 08:23 AM, Debraj Manna wrote:
>>
>>  In |Karaf 3.0.5| running under |Servicemix 6.1.0| my
>>  |org.ops4j.pax.logging.cfg| looks like below:-
>>
>>  |# Root logger log4j.rootLogger=DEBUG, async, osgi:*
>>
>> 

Re: Reasons that triggers IllegalStateException: Invalid BundleContext

2016-06-30 Thread Cristiano Costantini
Hello all and thank you!

Well, I cannot easily copy the stack because it comes from a server I'm
connected to over VPN and remote desktop, and I cannot copy and paste from it,
so I've copied the first line of the stack from one of my searches for info
on the web.

Interestingly, we use Spring-DM, so we don't use the OSGi API directly, and the
exception was thrown from a Camel route (the full stack trace has
org.apache.camel.core entries just below the top entries from
org.apache.felix.framework.BundleContextImpl).

More interestingly, the Camel error shows that the Invalid BundleContext
exception is thrown at the end of a route, with id route33, which is no longer
listed as soon as I run the camel:route-list command!


I was involved in the issue late, but the developer told me that he
restarted a bundle, and after the restart Camel started logging many
exceptions on the route: Camel exceptions with a cause of type
IllegalStateException originating from
org.apache.felix.framework.BundleContextImpl.

The bundle that has been restarted is connected to another bundle via a
Camel vm: endpoint; the restarted bundle is the consumer (it has the vm:
consumer endpoint), and after the restart it seems like the exchanges keep
being sent to a reference of the previous Camel context created before. This
is a reasonable cause of the invalid bundle context, but I have no idea how
we got into this situation.

It is important to point out that we are using ServiceMix 5.3.0 on that
server, which has Camel 2.13.2: it could be that this version has a buggy
vm: component and maybe this problem is fixed by now.

I ask myself if there is something I can do to prevent this from happening
again, and how I can safely manage the life cycle of these bundles so I can
restart them individually. I also wonder whether the bundle with the "bad"
design is the one that produces on the vm endpoint or the one that consumes
on it, or whether it is a Camel bug :-)

If you have any other ideas, please give me some more hints! I would
appreciate it.

Thank you all,
Cristiano


Il giorno gio 30 giu 2016 alle ore 21:01 Nick Baker  ha
scritto:

> A more complete stack trace would help us point you to the offending code.
> I can say that I usually see this when a Service is still held by someone
> when it should have been removed from "play" by a ServiceTracker or other
> similar mechanism.
>
> -Nick
>
> From: Cristiano Costantini 
> Date: Thu Jun 30 2016 14:57:18 GMT-0400 (EDT)
> To: user@karaf.apache.org 
> Subject: Reasons that triggers IllegalStateException: Invalid
> BundleContext
>
> Hello All,
>
> In our application it sometimes happens that we find ourselves in situations
> where we get the "Invalid BundleContext" exception:
>
> java.lang.IllegalStateException: Invalid BundleContext.
> at
> org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:453)
>
>
> What are the potential reasons such exception may be thrown?
> I'm searching to understand so I can hunt for a potential design issue in
> some of our bundles... I've searched the web but I've found no hint.
>
> Thank you!
> Cristiano
>
>


Re: Reasons that triggers IllegalStateException: Invalid BundleContext

2016-06-30 Thread Nick Baker
A more complete stack trace would help us point you to the offending code. I 
can say that I usually see this when a Service is still held by someone when it 
should have been removed from "play" by a ServiceTracker or other similar 
mechanism.
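As a rough illustration of that pattern (the tracked service type is arbitrary
here), a ServiceTracker opened in start() and closed in stop() makes sure the
reference is released when the bundle goes away, rather than being cached
forever:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

import javax.sql.DataSource;

public class Activator implements BundleActivator {

    private ServiceTracker<DataSource, DataSource> tracker;

    public void start(BundleContext context) {
        // Track the service instead of holding a reference obtained once.
        tracker = new ServiceTracker<>(context, DataSource.class, null);
        tracker.open();
    }

    public void stop(BundleContext context) {
        // Closing the tracker ungets the service, so nothing keeps using
        // the bundle context after the bundle has been stopped.
        tracker.close();
    }
}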

-Nick

From: Cristiano Costantini 
>
Date: Thu Jun 30 2016 14:57:18 GMT-0400 (EDT)
To: user@karaf.apache.org >
Subject: Reasons that triggers IllegalStateException: Invalid BundleContext

Hello All,

In our application it sometimes happens that we find ourselves in situations
where we get the "Invalid BundleContext" exception:

java.lang.IllegalStateException: Invalid BundleContext.
at 
org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:453)

What are the potential reasons such exception may be thrown?
I'm searching to understand so I can hunt for a potential design issue in some 
of our bundles... I've searched the web but I've found no hint.

Thank you!
Cristiano



pax-jdbc-dbcp2 and oracle connection pooling

2016-06-30 Thread dpravin
Hello,

We are exploring the use of the pax-jdbc component/utility for managing
database connections.

My environment is,

Jboss Fuse 6.2.1
pax-jdbc 0.9.0
Oracle - 11.2.0.4.0

I installed the following features from pax-jdbc:

>features:install transaction jndi pax-jdbc-oracle pax-jdbc-pool-dbcp2
pax-jdbc-config


I am trying to create an XA connection pool for an Oracle database with the
following properties:

dataSourceName = abc
datasource.name=abc
osgi.jndi.service.name = abc
osgi.jdbc.driver.name = oracle-pool-xa
url = jdbc:oracle:thin:@//<>:1521/<>
user = user
password = pwd
pool.maxTotal = 3

I use the following to get a reference to the data source in a Camel blueprint:
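(Roughly something like the Blueprint reference below, looking the pooled data
source up by its osgi.jndi.service.name; the id is illustrative.)

<reference id="dataSource" interface="javax.sql.DataSource"
           filter="(osgi.jndi.service.name=abc)" />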



I am getting the following errors when I invoke getConnection() on the
datasource:

**

Caused by: java.lang.UnsupportedOperationException
at
org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:156)[307:org.apache.commons.dbcp2:2.1.0]
at
org.tranql.connector.jdbc.AbstractLocalDataSourceMCF.getPhysicalConnection(AbstractLocalDataSourceMCF.java:72)[281:org.apache.aries.transaction.jdbc:2.1.1]
at
org.tranql.connector.jdbc.AbstractLocalDataSourceMCF.createManagedConnection(AbstractLocalDataSourceMCF.java:66)[281:org.apache.aries.transaction.jdbc:2.1.1]
at
org.apache.geronimo.connector.outbound.MCFConnectionInterceptor.getConnection(MCFConnectionInterceptor.java:48)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.LocalXAResourceInsertionInterceptor.getConnection(LocalXAResourceInsertionInterceptor.java:41)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.SinglePoolConnectionInterceptor.internalGetConnection(SinglePoolConnectionInterceptor.java:70)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.AbstractSinglePoolConnectionInterceptor.getConnection(AbstractSinglePoolConnectionInterceptor.java:80)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.TransactionEnlistingInterceptor.getConnection(TransactionEnlistingInterceptor.java:49)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.TransactionCachingInterceptor.getConnection(TransactionCachingInterceptor.java:109)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.ConnectionHandleInterceptor.getConnection(ConnectionHandleInterceptor.java:43)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.TCCLInterceptor.getConnection(TCCLInterceptor.java:39)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.ConnectionTrackingInterceptor.getConnection(ConnectionTrackingInterceptor.java:66)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.apache.geronimo.connector.outbound.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:81)[282:org.apache.geronimo.components.geronimo-connector:3.1.1]
at
org.tranql.connector.jdbc.TranqlDataSource.getConnection(TranqlDataSource.java:62)[281:org.apache.aries.transaction.jdbc:2.1.1]
at Proxybaa265ff_652c_44b3_9c20_b47e2464e11a.getConnection(Unknown
Source)[:]

***

I would appreciate it if someone could help me resolve this issue.

Thanks,
Pravin







version 4.0.6

2016-06-30 Thread Leschke, Scott
I was wondering if there is an ETA on the next version of Karaf. Might we
still see it today, or will it be delayed a bit? Not trying to be pushy, just
doing some planning.

Scott


Re: Reasons that triggers IllegalStateException: Invalid BundleContext

2016-06-30 Thread Jean-Baptiste Onofré

Hi Cristiano,

I bet you have a refresh that causes the bundle context you are using to no
longer exist (it has been recreated).


Regards
JB

On 06/30/2016 04:57 PM, Cristiano Costantini wrote:

Hello All,

In our application it sometimes happens that we find ourselves in situations
where we get the "Invalid BundleContext" exception:

java.lang.IllegalStateException: Invalid BundleContext.
at
org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:453)


What are the potential reasons such exception may be thrown?
I'm searching to understand so I can hunt for a potential design issue
in some of our bundles... I've searched the web but I've found no hint.

Thank you!
Cristiano



--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: bundles starting automatically

2016-06-30 Thread Jean-Baptiste Onofré

Hi Laci,

yes, the feature resolver can deal with the bundle startup now.

Take a look at etc/org.apache.karaf.features.cfg; there is a property to
control the resolver behavior.


Generally speaking, the resolver does it for a reason (maybe a refresh),
so I would check what it does using the -v and -t options on feature:install.
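For example (the feature name is a placeholder):

karaf@root()> feature:install -v -t my-feature

-t runs a simulation only and -v prints what the resolver would do, without
actually changing the running system.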


Regards
JB

On 06/30/2016 03:42 PM, Laci Gaspar wrote:

Hi

we have several bundles (features) installed in karaf 4.0.5.
I noticed that if I stop some of them and after that install a new
feature, the stopped bundles
start automatically.
If I remember correctly this didn't happen in karaf 2.x.

Is it possible to configure the features so that they won't start up
when stopped
with the client?

Thanks.
Regards
Laci


--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: JAX-RS Annotations and Apache Karaf 4.0.5

2016-06-30 Thread Scott Lewis
Also to consider:  ECF provides an impl of the OSGi R6 Remote 
Service/Remote Service Admin specifications [1] with pluggable 
distribution providers [2].


Our Jax-RS provider [3] uses/is based upon CXF (or Jersey).

Scott

[1] https://wiki.eclipse.org/Eclipse_Communication_Framework_Project
[2] https://wiki.eclipse.org/Distribution_Providers
[3] https://github.com/ECF/JaxRSProviders

On 6/30/2016 6:12 AM, James Carman wrote:

Consider using CXF. Very well tested.
On Thu, Jun 30, 2016 at 8:46 AM Artur Lojewski > wrote:


OK,

thanks both of you for your response! I forgot to mention that I
am using
the 'OSGi JAX-RS Connector' v5.3.1 from EclipseSource.

So one can use the config admin to set the 'root' path as follows:

/config:property-set -p com.eclipsesource.jaxrs.connector root /foo/

This works for root paths like '/abc', '/ab' and '/a' - but not
'/' alone!
When I use '/' as root path value I cannot call
http://localhost/abd/def.
This seems to be a bug in the implementation.

Moreover, configuring the 2nd config admin parameter
'publishDelay' results
in a ClassCastException...

So I guess I have to contact Holger Staudacher (OSGi JAX-RS
Connector /
GitHub).

Thanks for your help!








Re: Reasons that triggers IllegalStateException: Invalid BundleContext

2016-06-30 Thread Tim Ward
Hi Cristiano,

That exception means that you are trying to use a bundle context which is no 
longer valid because the bundle has been stopped.

There are all sorts of ways that code can end up hanging on to a Bundle Context 
when it shouldn't, and it may be caused by something as simple as a race 
condition on shutdown, all the way through to a completely invalid design.

My advice would be not to use the BundleContext or a BundleActivator in your
code at all, and to use a framework like DS instead. DS will manage the
lifecycle of your components so that you don't need to use a BundleContext at
all.
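A rough sketch of what that looks like with the Declarative Services
annotations from org.osgi.service.component.annotations (the component and the
referenced LogService are only illustrative):

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

// DS creates this component when its reference is satisfied and tears it
// down when the providing bundle goes away; no BundleContext is needed.
@Component
public class StartupLogger {

    @Reference
    private LogService log;

    @Activate
    void activate() {
        log.log(LogService.LOG_INFO, "component activated");
    }

    @Deactivate
    void deactivate() {
        log.log(LogService.LOG_INFO, "component deactivated");
    }
}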

Best Regards,

Tim Ward

OSGi Alliance IoT EG Chair

> On 30 Jun 2016, at 15:57, Cristiano Costantini 
>  wrote:
> 
> Hello All,
> 
> In our application it sometimes happens that we find ourselves in situations
> where we get the "Invalid BundleContext" exception:
> 
> java.lang.IllegalStateException: Invalid BundleContext. 
> at 
> org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:453)
> 
> What are the potential reasons such exception may be thrown?
> I'm searching to understand so I can hunt for a potential design issue in 
> some of our bundles... I've searched the web but I've found no hint.
> 
> Thank you!
> Cristiano
> 


Reasons that triggers IllegalStateException: Invalid BundleContext

2016-06-30 Thread Cristiano Costantini
Hello All,

In our application it sometimes happens that we find ourselves in situations
where we get the "Invalid BundleContext" exception:

java.lang.IllegalStateException: Invalid BundleContext.
at
org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:453)


What are the potential reasons such exception may be thrown?
I'm searching to understand so I can hunt for a potential design issue in
some of our bundles... I've searched the web but I've found no hint.

Thank you!
Cristiano


Re: Log4j NTEventLogAppender in Karaf 4.0.5

2016-06-30 Thread Achim Nierbeck
Hi Bengt,

newer versions of Pax-Logging don't use log4j2 per default so this should
still work ...
the underlying impl is still log4j 1 unless someone changed it on a minor
version update ...

regards, Achim


2016-06-30 16:23 GMT+02:00 Bengt Rodehav :

> Thanks JB,
>
> Tried it, though, and no difference.
>
> When investigating this, it seems like newer versions of pax-logging use
> log4j2. Unfortunately the NTEventLogAppender is incompatible with log4j2.
>
> I've found the project log4jna that seems to target this. Unfortunately I
> cannot find a released version that supports log4j2.
>
> Anyone else encountered this?
>
> /Bengt
>
> 2016-06-30 14:48 GMT+02:00 Jean-Baptiste Onofré :
>
>> In Karaf 4, the dll should go in lib/ext.
>>
>> Regards
>> JB
>>
>> On 06/30/2016 02:16 PM, Bengt Rodehav wrote:
>>
>>> I have a feeling that I need to put the NTEventLogAppender.amd64.dll in
>>> another directory in Karaf 4.0.5 than in Karaf 2.4.1.
>>>
>>> I have always put it in the directory %KARAF_HOME%/lib which works for
>>> Karaf 2.4.1. Where should DLL's be put in Karaf 4.0.5?
>>>
>>> /Bengt
>>>
>>> 2016-06-29 17:37 GMT+02:00 Bengt Rodehav >> >:
>>>
>>>
>>> I'm trying to upgrade from Karaf 2.4.1 to 4.0.5 and I run into
>>> problems regarding NTEventLogAppender. I get the following on
>>> startup:
>>>
>>> 2016-06-29 17:16:05,354 | ERROR | 4j.pax.logging]) | configadmin
>>>   | ?
>>>  ? | [org.osgi.service.log.LogService,
>>> org.knopflerfish.service.log.LogService,
>>> org.ops4j.pax.logging.PaxLoggingService,
>>> org.osgi.service.cm.ManagedService, id=34,
>>> bundle=6/mvn:org.ops4j.pax.logging/pax-logging-service/1.8.5]:
>>> Unexpected problem updating configuration org.ops4j.pax.logging
>>> java.lang.UnsatisfiedLinkError: no NTEventLogAppender in
>>> java.library.path
>>>  at
>>> java.lang.ClassLoader.loadLibrary(ClassLoader.java:1864)[:1.8.0_74]
>>>  at
>>> java.lang.Runtime.loadLibrary0(Runtime.java:870)[:1.8.0_74]
>>>  at java.lang.System.loadLibrary(System.java:1122)[:1.8.0_74]
>>>  at
>>>
>>> org.apache.log4j.nt.NTEventLogAppender.(NTEventLogAppender.java:179)
>>>  at
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)[:1.8.0_74]
>>>  at
>>>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[:1.8.0_74]
>>>  at
>>>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[:1.8.0_74]
>>>  at
>>>
>>> java.lang.reflect.Constructor.newInstance(Constructor.java:423)[:1.8.0_74]
>>>  at java.lang.Class.newInstance(Class.java:442)[:1.8.0_74]
>>>  at
>>>
>>> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:336)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:123)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.apache.log4j.PaxLoggingConfigurator.parseAppender(PaxLoggingConfigurator.java:97)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:639)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:504)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.apache.log4j.PaxLoggingConfigurator.doConfigure(PaxLoggingConfigurator.java:72)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.updated(PaxLoggingServiceImpl.java:214)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl$1ManagedPaxLoggingService.updated(PaxLoggingServiceImpl.java:362)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>>  at
>>>
>>> org.apache.felix.cm.impl.helper.ManagedServiceTracker.updated(ManagedServiceTracker.java:189)[7:org.apache.felix.configadmin:1.8.8]
>>>  at
>>>
>>> org.apache.felix.cm.impl.helper.ManagedServiceTracker.updateService(ManagedServiceTracker.java:152)[7:org.apache.felix.configadmin:1.8.8]
>>>  at
>>>
>>> org.apache.felix.cm.impl.helper.ManagedServiceTracker.provideConfiguration(ManagedServiceTracker.java:85)[7:org.apache.felix.configadmin:1.8.8]
>>>  at
>>>
>>> 

Re: Log4j NTEventLogAppender in Karaf 4.0.5

2016-06-30 Thread Bengt Rodehav
Thanks JB,

Tried it, though, and no difference.

When investigating this, it seems like newer versions of pax-logging use
log4j2. Unfortunately the NTEventLogAppender is incompatible with log4j2.

I've found the project log4jna that seems to target this. Unfortunately I
cannot find a released version that supports log4j2.

Anyone else encountered this?

/Bengt

2016-06-30 14:48 GMT+02:00 Jean-Baptiste Onofré :

> In Karaf 4, the dll should go in lib/ext.
>
> Regards
> JB
>
> On 06/30/2016 02:16 PM, Bengt Rodehav wrote:
>
>> I have a feeling that I need to put the NTEventLogAppender.amd64.dll in
>> another directory in Karaf 4.0.5 than in Karaf 2.4.1.
>>
>> I have always put it in the directory %KARAF_HOME%/lib which works for
>> Karaf 2.4.1. Where should DLL's be put in Karaf 4.0.5?
>>
>> /Bengt
>>
>> 2016-06-29 17:37 GMT+02:00 Bengt Rodehav > >:
>>
>>
>> I'm trying to upgrade from Karaf 2.4.1 to 4.0.5 and I run into
>> problems regarding NTEventLogAppender. I get the following on startup:
>>
>> 2016-06-29 17:16:05,354 | ERROR | 4j.pax.logging]) | configadmin
>>   | ?
>>  ? | [org.osgi.service.log.LogService,
>> org.knopflerfish.service.log.LogService,
>> org.ops4j.pax.logging.PaxLoggingService,
>> org.osgi.service.cm.ManagedService, id=34,
>> bundle=6/mvn:org.ops4j.pax.logging/pax-logging-service/1.8.5]:
>> Unexpected problem updating configuration org.ops4j.pax.logging
>> java.lang.UnsatisfiedLinkError: no NTEventLogAppender in
>> java.library.path
>>  at
>> java.lang.ClassLoader.loadLibrary(ClassLoader.java:1864)[:1.8.0_74]
>>  at
>> java.lang.Runtime.loadLibrary0(Runtime.java:870)[:1.8.0_74]
>>  at java.lang.System.loadLibrary(System.java:1122)[:1.8.0_74]
>>  at
>>
>> org.apache.log4j.nt.NTEventLogAppender.(NTEventLogAppender.java:179)
>>  at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)[:1.8.0_74]
>>  at
>>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[:1.8.0_74]
>>  at
>>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[:1.8.0_74]
>>  at
>>
>> java.lang.reflect.Constructor.newInstance(Constructor.java:423)[:1.8.0_74]
>>  at java.lang.Class.newInstance(Class.java:442)[:1.8.0_74]
>>  at
>>
>> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:336)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:123)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.apache.log4j.PaxLoggingConfigurator.parseAppender(PaxLoggingConfigurator.java:97)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:639)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:504)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.apache.log4j.PaxLoggingConfigurator.doConfigure(PaxLoggingConfigurator.java:72)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.updated(PaxLoggingServiceImpl.java:214)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl$1ManagedPaxLoggingService.updated(PaxLoggingServiceImpl.java:362)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
>>  at
>>
>> org.apache.felix.cm.impl.helper.ManagedServiceTracker.updated(ManagedServiceTracker.java:189)[7:org.apache.felix.configadmin:1.8.8]
>>  at
>>
>> org.apache.felix.cm.impl.helper.ManagedServiceTracker.updateService(ManagedServiceTracker.java:152)[7:org.apache.felix.configadmin:1.8.8]
>>  at
>>
>> org.apache.felix.cm.impl.helper.ManagedServiceTracker.provideConfiguration(ManagedServiceTracker.java:85)[7:org.apache.felix.configadmin:1.8.8]
>>  at
>>
>> org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.provide(ConfigurationManager.java:1444)[7:org.apache.felix.configadmin:1.8.8]
>>  at
>>
>> org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.run(ConfigurationManager.java:1400)[7:org.apache.felix.configadmin:1.8.8]
>>  at
>>
>> org.apache.felix.cm.impl.UpdateThread.run0(UpdateThread.java:143)[7:org.apache.felix.configadmin:1.8.8]
>>  at
>>
>> 

bundles starting automatically

2016-06-30 Thread Laci Gaspar

Hi

we have several bundles (features) installed in karaf 4.0.5.
I noticed that if I stop some of them and after that install a new 
feature, the stopped bundles

start automatically.
If I remember correctly this didn't happen in karaf 2.x.

Is it possible to configure the features so that they won't start up 
when stopped

with the client?

Thanks.
Regards
Laci


Re: JAX-RS Annotations and Apache Karaf 4.0.5

2016-06-30 Thread James Carman
Consider using CXF. Very well tested.
On Thu, Jun 30, 2016 at 8:46 AM Artur Lojewski  wrote:

> OK,
>
> thanks both of you for your response! I forgot to mention that I am using
> the 'OSGi JAX-RS Connector' v5.3.1 from EclipseSource.
>
> So one can use the config admin to set the 'root' path as follows:
>
> /config:property-set -p com.eclipsesource.jaxrs.connector root /foo/
>
> This works for root paths like '/abc', '/ab' and '/a' - but not '/' alone!
> When I use '/' as root path value I cannot call http://localhost/abd/def.
> This seems to be a bug in the implementation.
>
> Moreover, configuring the 2nd config admin parameter 'publishDelay' results
> in a ClassCastException...
>
> So I guess I have to contact Holger Staudacher (OSGi JAX-RS Connector /
> GitHub).
>
> Thanks for your help!
>
>
>
>


Re: Log4j NTEventLogAppender in Karaf 4.0.5

2016-06-30 Thread Jean-Baptiste Onofré

In Karaf 4, the dll should go in lib/ext.
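On Windows that means, for example:

copy NTEventLogAppender.amd64.dll %KARAF_HOME%\lib\ext\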

Regards
JB

On 06/30/2016 02:16 PM, Bengt Rodehav wrote:

I have a feeling that I need to put the NTEventLogAppender.amd64.dll in
another directory in Karaf 4.0.5 than in Karaf 2.4.1.

I have always put it in the directory %KARAF_HOME%/lib which works for
Karaf 2.4.1. Where should DLL's be put in Karaf 4.0.5?

/Bengt

2016-06-29 17:37 GMT+02:00 Bengt Rodehav >:

I'm trying to upgrade from Karaf 2.4.1 to 4.0.5 and I run into
problems regarding NTEventLogAppender. I get the following on startup:

2016-06-29 17:16:05,354 | ERROR | 4j.pax.logging]) | configadmin
  | ?
 ? | [org.osgi.service.log.LogService,
org.knopflerfish.service.log.LogService,
org.ops4j.pax.logging.PaxLoggingService,
org.osgi.service.cm.ManagedService, id=34,
bundle=6/mvn:org.ops4j.pax.logging/pax-logging-service/1.8.5]:
Unexpected problem updating configuration org.ops4j.pax.logging
java.lang.UnsatisfiedLinkError: no NTEventLogAppender in
java.library.path
 at
java.lang.ClassLoader.loadLibrary(ClassLoader.java:1864)[:1.8.0_74]
 at java.lang.Runtime.loadLibrary0(Runtime.java:870)[:1.8.0_74]
 at java.lang.System.loadLibrary(System.java:1122)[:1.8.0_74]
 at
org.apache.log4j.nt.NTEventLogAppender.(NTEventLogAppender.java:179)
 at
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)[:1.8.0_74]
 at

sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[:1.8.0_74]
 at

sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[:1.8.0_74]
 at
java.lang.reflect.Constructor.newInstance(Constructor.java:423)[:1.8.0_74]
 at java.lang.Class.newInstance(Class.java:442)[:1.8.0_74]
 at

org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:336)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:123)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.apache.log4j.PaxLoggingConfigurator.parseAppender(PaxLoggingConfigurator.java:97)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:639)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:504)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.apache.log4j.PaxLoggingConfigurator.doConfigure(PaxLoggingConfigurator.java:72)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.updated(PaxLoggingServiceImpl.java:214)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl$1ManagedPaxLoggingService.updated(PaxLoggingServiceImpl.java:362)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
 at

org.apache.felix.cm.impl.helper.ManagedServiceTracker.updated(ManagedServiceTracker.java:189)[7:org.apache.felix.configadmin:1.8.8]
 at

org.apache.felix.cm.impl.helper.ManagedServiceTracker.updateService(ManagedServiceTracker.java:152)[7:org.apache.felix.configadmin:1.8.8]
 at

org.apache.felix.cm.impl.helper.ManagedServiceTracker.provideConfiguration(ManagedServiceTracker.java:85)[7:org.apache.felix.configadmin:1.8.8]
 at

org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.provide(ConfigurationManager.java:1444)[7:org.apache.felix.configadmin:1.8.8]
 at

org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.run(ConfigurationManager.java:1400)[7:org.apache.felix.configadmin:1.8.8]
 at

org.apache.felix.cm.impl.UpdateThread.run0(UpdateThread.java:143)[7:org.apache.felix.configadmin:1.8.8]
 at

org.apache.felix.cm.impl.UpdateThread.run(UpdateThread.java:110)[7:org.apache.felix.configadmin:1.8.8]
 at java.lang.Thread.run(Thread.java:745)[:1.8.0_74]

Like I did on Karaf 2.4.1, I have put the
file NTEventLogAppender.amd64.dll in the "lib" directory under
Karaf. It has the version 1.2.16.1.

Does anyone know how to get the NTEventLogAppender to work with
Karaf 4.0.5?

/Bengt







--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: Different log level for different Karaf Bundles

2016-06-30 Thread Jean-Baptiste Onofré

Yes, it's what I said: create new sift appender.

Regards
JB

On 06/30/2016 02:33 PM, Bengt Rodehav wrote:

You can do this by using MDC combined with filters (I implemented that
in Pax logging a few years back).

E.g. if you use this root logger:

log4j.rootLogger=INFO, stdout, info, error, bundle, context, osgi:*

And you define the "bundle" log as follows:

log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.bundle.key=bundle.name 
log4j.appender.bundle.default=karaf
log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
log4j.appender.bundle.appender.MaxFileSize=1MB
log4j.appender.bundle.appender.MaxBackupIndex=2
log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601} |
%-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name
\\}.log
log4j.appender.bundle.appender.append=true
log4j.appender.bundle.threshold=INFO

You will end up with a separate log file per bundle (named with the
bundle's name). I use a custom variable (${logdir}) to specify where to
create the log file but you can do as you wish. In this case these log
files will be at INFO level.

Sometimes I want TRACE logging on a specific bundle. I can then do as
follows:

log4j.rootLogger=TRACE, stdout, info, error, bundle, context, osgi:*,
bundle_trace

log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.bundle_trace.key=bundle.name 
log4j.appender.bundle_trace.default=karaf
log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
log4j.appender.bundle_trace.appender.MaxFileSize=10MB
log4j.appender.bundle_trace.appender.MaxBackupIndex=2
log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601} |
%-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\\{bundle.name
\\}.log
log4j.appender.bundle_trace.appender.append=true
log4j.appender.bundle_trace.threshold=TRACE
log4j.appender.bundle_trace.filter.a=org.apache.log4j.filter.MDCMatchFilter
log4j.appender.bundle_trace.filter.a.exactMatch=false
log4j.appender.bundle_trace.filter.a.keyToMatch=bundle.name

log4j.appender.bundle_trace.filter.a.valueToMatch=org.apache.aries.blueprint.core
# DenyAllFilter should always be the last filter
log4j.appender.bundle_trace.filter.z=org.apache.log4j.varia.DenyAllFilter

In the above example I create a separate TRACE log for the bundle with
the name "org.apache.aries.blueprint.core".

It is also possible to configure custom logging for a particular camel
context which we do in our integration platform based on Karaf and Camel.

/Bengt










2016-06-30 13:59 GMT+02:00 Jean-Baptiste Onofré >:

Then it's different sift appenders that you have to define.

Generally speaking, you don't need sift for what you want: if your
bundles use different loggers, then, just create the logger category
in the pax-logging config.

Regards
JB

On 06/30/2016 01:56 PM, Debraj Manna wrote:


Yeah if I enable sifting appender let's say with a config  and
add it to
rootLogger

log4j.appender.sift.threshold=DEBUG


Then this will make the log level DEBUG for all bundles. What I am trying
to ask is: let's say I have two bundles, bundle1 and bundle2, and I want
bundle1's log level to be DEBUG and bundle2's log level to be ERROR.


On Thu, Jun 30, 2016 at 2:12 PM, Jean-Baptiste Onofré

>> wrote:

 Hi,

 I don't see the sift appender enable for the root logger.

 You should have:

 log4j.rootLogger=DEBUG, async, sift, osgi:*

 Regards
 JB

 On 06/30/2016 08:23 AM, Debraj Manna wrote:

 In |Karaf 3.0.5| running under |Servicemix 6.1.0| my
 |org.ops4j.pax.logging.cfg| looks like below:-

 |# Root logger log4j.rootLogger=DEBUG, async, osgi:*

log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer # To
 avoid flooding the log when using DEBUG level on an ssh
 connection and
 doing log:tail

log4j.logger.org.apache.sshd.server.channel.ChannelSession = INFO #
 CONSOLE appender not used by default
 log4j.appender.stdout=org.apache.log4j.ConsoleAppender
 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout

log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} |
 %-5.5p |
 %-16.16t | 

Re: JAX-RS Annotations and Apache Karaf 4.0.5

2016-06-30 Thread Artur Lojewski
OK,

thanks both of you for your response! I forgot to mention that I am using
the 'OSGi JAX-RS Connector' v5.3.1 from EclipseSource.

So one can use the config admin to set the 'root' path as follows:

/config:property-set -p com.eclipsesource.jaxrs.connector root /foo/

This works for root paths like '/abc', '/ab' and '/a' - but not '/' alone!
When I use '/' as root path value I cannot call http://localhost/abd/def.
This seems to be a bug in the implementation.

Moreover, configuring the 2nd config admin parameter 'publishDelay' results
in a ClassCastException...

So I guess I have to contact Holger Staudacher (OSGi JAX-RS Connector /
GitHub).

Thanks for your help!





Re: Different log level for different Karaf Bundles

2016-06-30 Thread James Carman
Are you perhaps looking for the "additivity" flag?

https://logging.apache.org/log4j/1.2/manual.html
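For instance, to give one logger its own level and appender and keep its
events out of the root appenders (the logger name and the bundle1file appender
are illustrative and would be defined elsewhere in the config):

log4j.logger.com.example.bundle1=DEBUG, bundle1file
log4j.additivity.com.example.bundle1=false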

On Thu, Jun 30, 2016 at 8:33 AM Bengt Rodehav  wrote:

> You can do this by using MDC combined with filters (I implemented that in
> Pax logging a few years back).
>
> E.g. if you use this root logger:
>
> log4j.rootLogger=INFO, stdout, info, error, bundle, context, osgi:*
>
> And you define the "bundle" log as follows:
>
> log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
> log4j.appender.bundle.key=bundle.name
> log4j.appender.bundle.default=karaf
> log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
> log4j.appender.bundle.appender.MaxFileSize=1MB
> log4j.appender.bundle.appender.MaxBackupIndex=2
> log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
> log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601} |
> %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
> log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name
> \\}.log
> log4j.appender.bundle.appender.append=true
> log4j.appender.bundle.threshold=INFO
>
> You will end up with a separate log file per bundle (named with the
> bundle's name). I use a custom variable (${logdir}) to specify where to
> create the log file but you can do as you wish. In this case these log
> files will be at INFO level.
>
> Sometimes I want TRACE logging on a specific bundle. I can then do as
> follows:
>
> log4j.rootLogger=TRACE, stdout, info, error, bundle, context, osgi:*,
> bundle_trace
>
> log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
> log4j.appender.bundle_trace.key=bundle.name
> log4j.appender.bundle_trace.default=karaf
> log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
> log4j.appender.bundle_trace.appender.MaxFileSize=10MB
> log4j.appender.bundle_trace.appender.MaxBackupIndex=2
> log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
> log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601}
> | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
> log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\\{
> bundle.name\\}.log
> log4j.appender.bundle_trace.appender.append=true
> log4j.appender.bundle_trace.threshold=TRACE
> log4j.appender.bundle_trace.filter.a=org.apache.log4j.filter.MDCMatchFilter
> log4j.appender.bundle_trace.filter.a.exactMatch=false
> log4j.appender.bundle_trace.filter.a.keyToMatch=bundle.name
>
> log4j.appender.bundle_trace.filter.a.valueToMatch=org.apache.aries.blueprint.core
> # DenyAllFilter should always be the last filter
> log4j.appender.bundle_trace.filter.z=org.apache.log4j.varia.DenyAllFilter
>
> In the above example I create a separate TRACE log for the bundle with the
> name "org.apache.aries.blueprint.core".
>
> It is also possible to configure custom logging for a particular camel
> context which we do in our integration platform based on Karaf and Camel.
>
> /Bengt
>
>
>
>
>
>
>
>
>
>
> 2016-06-30 13:59 GMT+02:00 Jean-Baptiste Onofré :
>
>> Then it's different sift appenders that you have to define.
>>
>> Generally speaking, you don't need sift for what you want: if your
>> bundles use different loggers, then, just create the logger category in the
>> pax-logging config.
>>
>> Regards
>> JB
>>
>> On 06/30/2016 01:56 PM, Debraj Manna wrote:
>>
>>>
>>> Yeah if I enable sifting appender let's say with a config  and add it to
>>> rootLogger
>>>
>>> log4j.appender.sift.threshold=DEBUG
>>>
>>>
>>> Then this will make the log level DEBUG for all bundles. What I am trying to ask
>>> is: let's say I have two bundles, bundle1 and bundle2, and I want bundle1's log
>>> level to be DEBUG and bundle2's log level to be ERROR.
>>>
>>>
>>> On Thu, Jun 30, 2016 at 2:12 PM, Jean-Baptiste Onofré >> > wrote:
>>>
>>> Hi,
>>>
>>> I don't see the sift appender enable for the root logger.
>>>
>>> You should have:
>>>
>>> log4j.rootLogger=DEBUG, async, sift, osgi:*
>>>
>>> Regards
>>> JB
>>>
>>> On 06/30/2016 08:23 AM, Debraj Manna wrote:
>>>
>>> In |Karaf 3.0.5| running under |Servicemix 6.1.0| my
>>> |org.ops4j.pax.logging.cfg| looks like below:-
>>>
>>> |# Root logger log4j.rootLogger=DEBUG, async, osgi:*
>>> log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer #
>>> To
>>> avoid flooding the log when using DEBUG level on an ssh
>>> connection and
>>> doing log:tail
>>> log4j.logger.org.apache.sshd.server.channel.ChannelSession =
>>> INFO #
>>> CONSOLE appender not used by default
>>> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
>>> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
>>> log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} |
>>> %-5.5p |
>>> %-16.16t | %-32.32c{1} | %X{bundle.id 
>>> } -

Re: Different log level for different Karaf Bundles

2016-06-30 Thread Bengt Rodehav
You can do this by using MDC combined with filters (I implemented that in
Pax logging a few years back).

E.g. if you use this root logger:

log4j.rootLogger=INFO, stdout, info, error, bundle, context, osgi:*

And you define the "bundle" log as follows:

log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.bundle.key=bundle.name
log4j.appender.bundle.default=karaf
log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
log4j.appender.bundle.appender.MaxFileSize=1MB
log4j.appender.bundle.appender.MaxBackupIndex=2
log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601} |
%-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name\\}.log
log4j.appender.bundle.appender.append=true
log4j.appender.bundle.threshold=INFO

You will end up with a separate log file per bundle (named with the
bundle's name). I use a custom variable (${logdir}) to specify where to
create the log file but you can do as you wish. In this case these log
files will be at INFO level.

Sometimes I want TRACE logging on a specific bundle. I can then do as
follows:

log4j.rootLogger=TRACE, stdout, info, error, bundle, context, osgi:*,
bundle_trace

log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.bundle_trace.key=bundle.name
log4j.appender.bundle_trace.default=karaf
log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
log4j.appender.bundle_trace.appender.MaxFileSize=10MB
log4j.appender.bundle_trace.appender.MaxBackupIndex=2
log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601} |
%-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\\{
bundle.name\\}.log
log4j.appender.bundle_trace.appender.append=true
log4j.appender.bundle_trace.threshold=TRACE
log4j.appender.bundle_trace.filter.a=org.apache.log4j.filter.MDCMatchFilter
log4j.appender.bundle_trace.filter.a.exactMatch=false
log4j.appender.bundle_trace.filter.a.keyToMatch=bundle.name
log4j.appender.bundle_trace.filter.a.valueToMatch=org.apache.aries.blueprint.core
# DenyAllFilter should always be the last filter
log4j.appender.bundle_trace.filter.z=org.apache.log4j.varia.DenyAllFilter

In the above example I create a separate TRACE log for the bundle with the
name "org.apache.aries.blueprint.core".

It is also possible to configure custom logging for a particular camel
context which we do in our integration platform based on Karaf and Camel.

/Bengt










2016-06-30 13:59 GMT+02:00 Jean-Baptiste Onofré :

> Then it's different sift appenders that you have to define.
>
> Generally speaking, you don't need sift for what you want: if your bundles
> use different loggers, then, just create the logger category in the
> pax-logging config.
>
> Regards
> JB
>
> On 06/30/2016 01:56 PM, Debraj Manna wrote:
>
>>
>> Yeah if I enable sifting appender let's say with a config  and add it to
>> rootLogger
>>
>> log4j.appender.sift.threshold=DEBUG
>>
>>
>> Then this will make the log level DEBUG for all bundles. What I am trying to ask
>> is: let's say I have two bundles, bundle1 and bundle2, and I want bundle1's log
>> level to be DEBUG and bundle2's log level to be ERROR.
>>
>>
>> On Thu, Jun 30, 2016 at 2:12 PM, Jean-Baptiste Onofré > > wrote:
>>
>> Hi,
>>
>> I don't see the sift appender enable for the root logger.
>>
>> You should have:
>>
>> log4j.rootLogger=DEBUG, async, sift, osgi:*
>>
>> Regards
>> JB
>>
>> On 06/30/2016 08:23 AM, Debraj Manna wrote:
>>
>> In |Karaf 3.0.5| running under |Servicemix 6.1.0| my
>> |org.ops4j.pax.logging.cfg| looks like below:-
>>
>> |# Root logger log4j.rootLogger=DEBUG, async, osgi:*
>> log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer #
>> To
>> avoid flooding the log when using DEBUG level on an ssh
>> connection and
>> doing log:tail
>> log4j.logger.org.apache.sshd.server.channel.ChannelSession = INFO
>> #
>> CONSOLE appender not used by default
>> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
>> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
>> log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} |
>> %-5.5p |
>> %-16.16t | %-32.32c{1} | %X{bundle.id 
>> } -
>> %X{bundle.name  } -
>> %X{bundle.version} | %X | %m%n #
>> File appender
>> log4j.appender.out=org.apache.log4j.RollingFileAppender
>> log4j.appender.out.layout=org.apache.log4j.PatternLayout
>> log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
>>  

Re: Log4j NTEventLogAppender in Karaf 4.0.5

2016-06-30 Thread Bengt Rodehav
I have a feeling that I need to put the NTEventLogAppender.amd64.dll in
another directory in Karaf 4.0.5 than in Karaf 2.4.1.

I have always put it in the directory %KARAF_HOME%/lib which works for
Karaf 2.4.1. Where should DLL's be put in Karaf 4.0.5?

/Bengt

2016-06-29 17:37 GMT+02:00 Bengt Rodehav :

> I'm trying to upgrade from Karaf 2.4.1 to 4.0.5 and I run into problems
> regarding NTEventLogAppender. I get the following on startup:
>
> 2016-06-29 17:16:05,354 | ERROR | 4j.pax.logging]) | configadmin
>| ?
> ? | [org.osgi.service.log.LogService,
> org.knopflerfish.service.log.LogService,
> org.ops4j.pax.logging.PaxLoggingService,
> org.osgi.service.cm.ManagedService, id=34,
> bundle=6/mvn:org.ops4j.pax.logging/pax-logging-service/1.8.5]: Unexpected
> problem updating configuration org.ops4j.pax.logging
> java.lang.UnsatisfiedLinkError: no NTEventLogAppender in java.library.path
> at
> java.lang.ClassLoader.loadLibrary(ClassLoader.java:1864)[:1.8.0_74]
> at java.lang.Runtime.loadLibrary0(Runtime.java:870)[:1.8.0_74]
> at java.lang.System.loadLibrary(System.java:1122)[:1.8.0_74]
> at
> org.apache.log4j.nt.NTEventLogAppender.(NTEventLogAppender.java:179)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)[:1.8.0_74]
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[:1.8.0_74]
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[:1.8.0_74]
> at
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)[:1.8.0_74]
> at java.lang.Class.newInstance(Class.java:442)[:1.8.0_74]
> at
> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:336)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:123)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.apache.log4j.PaxLoggingConfigurator.parseAppender(PaxLoggingConfigurator.java:97)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:639)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:504)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.apache.log4j.PaxLoggingConfigurator.doConfigure(PaxLoggingConfigurator.java:72)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.updated(PaxLoggingServiceImpl.java:214)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl$1ManagedPaxLoggingService.updated(PaxLoggingServiceImpl.java:362)[6:org.ops4j.pax.logging.pax-logging-service:1.8.5]
> at
> org.apache.felix.cm.impl.helper.ManagedServiceTracker.updated(ManagedServiceTracker.java:189)[7:org.apache.felix.configadmin:1.8.8]
> at
> org.apache.felix.cm.impl.helper.ManagedServiceTracker.updateService(ManagedServiceTracker.java:152)[7:org.apache.felix.configadmin:1.8.8]
> at
> org.apache.felix.cm.impl.helper.ManagedServiceTracker.provideConfiguration(ManagedServiceTracker.java:85)[7:org.apache.felix.configadmin:1.8.8]
> at
> org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.provide(ConfigurationManager.java:1444)[7:org.apache.felix.configadmin:1.8.8]
> at
> org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.run(ConfigurationManager.java:1400)[7:org.apache.felix.configadmin:1.8.8]
> at
> org.apache.felix.cm.impl.UpdateThread.run0(UpdateThread.java:143)[7:org.apache.felix.configadmin:1.8.8]
> at
> org.apache.felix.cm.impl.UpdateThread.run(UpdateThread.java:110)[7:org.apache.felix.configadmin:1.8.8]
> at java.lang.Thread.run(Thread.java:745)[:1.8.0_74]
>
> Like I did on Karaf 2.4.1, I have put the
> file NTEventLogAppender.amd64.dll in the "lib" directory under Karaf. It
> has the version 1.2.16.1.
>
> Does anyone know how to get the NTEventLogAppender to work with Karaf
> 4.0.5?
>
> /Bengt
>
>
>
>
>


Re: Different log level for different Karaf Bundles

2016-06-30 Thread Jean-Baptiste Onofré

Then it's different sift appenders that you have to define.

Generally speaking, you don't need sift for what you want: if your 
bundles use different loggers, just create the corresponding logger 
categories in the pax-logging config.
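
For example, something like this in etc/org.ops4j.pax.logging.cfg (a
minimal sketch; com.example.bundle1 and com.example.bundle2 are just
placeholders for whatever packages your bundles actually log under):

# per-category levels, independent of the root logger level
log4j.logger.com.example.bundle1=DEBUG
log4j.logger.com.example.bundle2=ERROR

With the root logger kept at INFO, the first category produces DEBUG
output while the second only logs at ERROR and above.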


Regards
JB

On 06/30/2016 01:56 PM, Debraj Manna wrote:


Yeah, if I enable the sifting appender, let's say with a config like the
one below, and add it to the rootLogger

log4j.appender.sift.threshold=DEBUG


Then this will make the log level DEBUG for all bundles. What I am trying
to ask is: let's say I have two bundles, bundle1 & bundle2, and I want
bundle1's log level to be DEBUG and bundle2's log level to be ERROR.


On Thu, Jun 30, 2016 at 2:12 PM, Jean-Baptiste Onofré wrote:

Hi,

I don't see the sift appender enable for the root logger.

You should have:

log4j.rootLogger=DEBUG, async, sift, osgi:*

Regards
JB

On 06/30/2016 08:23 AM, Debraj Manna wrote:

In |Karaf 3.0.5| running under |Servicemix 6.1.0| my
|org.ops4j.pax.logging.cfg| looks like below:-

|# Root logger log4j.rootLogger=DEBUG, async, osgi:*
log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer # To
avoid flooding the log when using DEBUG level on an ssh
connection and
doing log:tail
log4j.logger.org.apache.sshd.server.channel.ChannelSession = INFO #
CONSOLE appender not used by default
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
%-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %X | %m%n #
File appender
log4j.appender.out=org.apache.log4j.RollingFileAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
%-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %X | %m%n
log4j.appender.out.file=/tmp/servicemix.log
log4j.appender.out.append=true log4j.appender.out.maxFileSize=1024MB
log4j.appender.out.maxBackupIndex=10 # Sift appender
log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.sift.key=bundle.name
log4j.appender.sift.default=servicemix
log4j.appender.sift.appender=org.apache.log4j.FileAppender
log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} |
%-5.5p | %-16.16t | %-32.32c{1} | %X | %m%n
log4j.appender.sift.appender.file=/tmp/$\\{bundle.name\\}.log
log4j.appender.sift.appender.append=true #
Async appender log4j.appender.async=org.apache.log4j.AsyncAppender
log4j.appender.async.appenders=out|

|

Now this logger config is dumping Karaf's debug log as well
whereas my
intention is to |DEBUG| only a specific bundle.

Can some one let me know if it is possible to set different log
levels
for different bundles?

|

|
|


--
Jean-Baptiste Onofré
jbono...@apache.org 
http://blog.nanthrax.net
Talend - http://www.talend.com




--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: Different log level for different Karaf Bundles

2016-06-30 Thread Debraj Manna
Yeah, if I enable the sifting appender, let's say with a config like the
one below, and add it to the rootLogger

log4j.appender.sift.threshold=DEBUG


Then this will make the log level DEBUG for all bundles. What I am trying
to ask is: let's say I have two bundles, bundle1 & bundle2, and I want
bundle1's log level to be DEBUG and bundle2's log level to be ERROR.

On Thu, Jun 30, 2016 at 2:12 PM, Jean-Baptiste Onofré 
wrote:

> Hi,
>
> I don't see the sift appender enable for the root logger.
>
> You should have:
>
> log4j.rootLogger=DEBUG, async, sift, osgi:*
>
> Regards
> JB
>
> On 06/30/2016 08:23 AM, Debraj Manna wrote:
>
>> In |Karaf 3.0.5| running under |Servicemix 6.1.0| my
>> |org.ops4j.pax.logging.cfg| looks like below:-
>>
>> |# Root logger log4j.rootLogger=DEBUG, async, osgi:*
>> log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer # To
>> avoid flooding the log when using DEBUG level on an ssh connection and
>> doing log:tail
>> log4j.logger.org.apache.sshd.server.channel.ChannelSession = INFO #
>> CONSOLE appender not used by default
>> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
>> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
>> log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
>> %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %X | %m%n #
>> File appender log4j.appender.out=org.apache.log4j.RollingFileAppender
>> log4j.appender.out.layout=org.apache.log4j.PatternLayout
>> log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
>> %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %X | %m%n
>> log4j.appender.out.file=/tmp/servicemix.log
>> log4j.appender.out.append=true log4j.appender.out.maxFileSize=1024MB
>> log4j.appender.out.maxBackupIndex=10 # Sift appender
>> log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
>> log4j.appender.sift.key=bundle.name
>> log4j.appender.sift.default=servicemix
>> log4j.appender.sift.appender=org.apache.log4j.FileAppender
>> log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
>> log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} |
>> %-5.5p | %-16.16t | %-32.32c{1} | %X | %m%n
>> log4j.appender.sift.appender.file=/tmp/$\\{bundle.name\\}.log log4j.appender.sift.appender.append=true #
>> Async appender log4j.appender.async=org.apache.log4j.AsyncAppender
>> log4j.appender.async.appenders=out|
>>
>> |
>>
>> Now this logger config is dumping Karaf's debug log as well whereas my
>> intention is to |DEBUG| only a specific bundle.
>>
>> Can some one let me know if it is possible to set different log levels
>> for different bundles?
>>
>> |
>>
>> |
>> |
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: Different log level for different Karaf Bundles

2016-06-30 Thread Jean-Baptiste Onofré

Hi,

I don't see the sift appender enable for the root logger.

You should have:

log4j.rootLogger=DEBUG, async, sift, osgi:*

Regards
JB

On 06/30/2016 08:23 AM, Debraj Manna wrote:

In |Karaf 3.0.5| running under |Servicemix 6.1.0| my
|org.ops4j.pax.logging.cfg| looks like below:-

|# Root logger log4j.rootLogger=DEBUG, async, osgi:*
log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer # To
avoid flooding the log when using DEBUG level on an ssh connection and
doing log:tail
log4j.logger.org.apache.sshd.server.channel.ChannelSession = INFO #
CONSOLE appender not used by default
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
%-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %X | %m%n #
File appender log4j.appender.out=org.apache.log4j.RollingFileAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
%-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %X | %m%n
log4j.appender.out.file=/tmp/servicemix.log
log4j.appender.out.append=true log4j.appender.out.maxFileSize=1024MB
log4j.appender.out.maxBackupIndex=10 # Sift appender
log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.sift.key=bundle.name
log4j.appender.sift.default=servicemix
log4j.appender.sift.appender=org.apache.log4j.FileAppender
log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} |
%-5.5p | %-16.16t | %-32.32c{1} | %X | %m%n
log4j.appender.sift.appender.file=/tmp/$\\{bundle.name\\}.log log4j.appender.sift.appender.append=true #
Async appender log4j.appender.async=org.apache.log4j.AsyncAppender
log4j.appender.async.appenders=out|

|

Now this logger config is dumping Karaf's debug log as well whereas my
intention is to |DEBUG| only a specific bundle.

Can some one let me know if it is possible to set different log levels
for different bundles?

|

|
|



--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: JAX-RS Annotations and Apache Karaf 4.0.5

2016-06-30 Thread Jean-Baptiste Onofré

Hi Artur,

/services is the CXF context.

To change it, you have to change the CXF config in 
etc/org.apache.cxf.osgi.cfg:


org.apache.cxf.servlet.context=/services
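
For instance, to drop the /services prefix entirely (a sketch; whether a
bare "/" context is accepted can depend on the pax-web/CXF versions in
use), etc/org.apache.cxf.osgi.cfg would contain:

org.apache.cxf.servlet.context=/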

Regards
JB

On 06/30/2016 07:28 AM, Artur Lojewski wrote:

Hi,

I am implementing a REST service on Apache Karaf 4.0.5 with JAX-RS
annotations only, i.e. no web.xml. My implementation uses the
@ApplicationPath("/abc") and @Path("/def") annotations.

When I deploy my bundle into Karaf I can successfully access the service via

http://localhost/services/abc/def

However, I want

http://localhost/abc/def

It seems like my implementation is missing something. I assumed that
@ApplicationPath("/abc") alone would be sufficient, but it is not.

Can anybody give me a hint how this can be fixed?


Best Regards,

Artur





--
View this message in context: 
http://karaf.922171.n3.nabble.com/JAX-RS-Annotations-and-Apache-Karaf-4-0-5-tp4047001.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Apache Karaf feature prerequisite

2016-06-30 Thread Patrik Strömvall
I have the following pseudo-feature:

<features xmlns="http://karaf.apache.org/xmlns/features/v1.3.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://karaf.apache.org/xmlns/features/v1.3.0
                              http://karaf.apache.org/xmlns/features/v1.3.0">
    <feature name="C">...BundleC1...</feature>
    <feature name="A"><feature prerequisite="true">C</feature>...BundleA1...</feature>
    <feature name="B"><feature prerequisite="true">C</feature>...BundleB1...</feature>
</features>

C is independent. A depends on C. B depends on C.
In this example the bundle 'BundleB1' imports the wrong major version of
'BundleC1' and we get the "missing requirement" error (as expected).
However, if I log into the Karaf console and run 'feature:list' I will see
that C is Started, A is Uninstalled and B is Uninstalled.
I expect A to be Started since it only has dependencies on C. A will start
fine if I comment out the entire C feature or if I afterwards run
'feature:install A'.
If I put each of these three features in separate feature.xml files I get
the expected outcome of C+A as Started and B as Uninstalled.
What am I doing wrong?
Am I misunderstanding how the prerequisite attribute works? As a side note:
if I skip the prerequisite attribute altogether, no feature will get
installed whatsoever...


Different log level for different Karaf Bundles

2016-06-30 Thread Debraj Manna
In Karaf 3.0.5 running under Servicemix 6.1.0 my
org.ops4j.pax.logging.cfg looks
like below:-

# Root logger
log4j.rootLogger=DEBUG, async, osgi:*
log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer

# To avoid flooding the log when using DEBUG level on an ssh
connection and doing log:tail
log4j.logger.org.apache.sshd.server.channel.ChannelSession = INFO

# CONSOLE appender not used by default
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
%-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} -
%X{bundle.version} | %X | %m%n

# File appender
log4j.appender.out=org.apache.log4j.RollingFileAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p |
%-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} -
%X{bundle.version} | %X | %m%n
log4j.appender.out.file=/tmp/servicemix.log
log4j.appender.out.append=true
log4j.appender.out.maxFileSize=1024MB
log4j.appender.out.maxBackupIndex=10

# Sift appender
log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.sift.key=bundle.name
log4j.appender.sift.default=servicemix
log4j.appender.sift.appender=org.apache.log4j.FileAppender
log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} |
%-5.5p | %-16.16t | %-32.32c{1} | %X | %m%n
log4j.appender.sift.appender.file=/tmp/$\\{bundle.name\\}.log
log4j.appender.sift.appender.append=true

# Async appender
log4j.appender.async=org.apache.log4j.AsyncAppender
log4j.appender.async.appenders=out

Now this logger config is dumping Karaf's debug log as well whereas my
intention is to DEBUG only a specific bundle.

Can someone let me know if it is possible to set different log levels
for different bundles?