Re: JDK requirement

2010-08-07 Thread Guillaume Nodet
I don't think the requirement is on the JDK; I believe it's on the JRE.

On Thursday, August 5, 2010, S Brady step...@bitlev.com wrote:

 I am new to Karaf and OSGi in general, but I'm trying to understand why Karaf
 has a JDK requirement if it is a runtime container, instead of requiring just a JRE.

 Is there a way I can work around this requirement?  If so, how?  Or what
 assumption/context am I not getting?

 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/JDK-requirement-tp1027660p1027660.html
 Sent from the Karaf - User mailing list archive at Nabble.com.


-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Karaf, Camel and startup race conditions

2010-08-15 Thread Guillaume Nodet
The issue comes from Spring-DM, which has a big flaw in the sense that
it cannot provide any way to handle those dependencies.  We had the
same problem, and the best way to solve it is to use Aries Blueprint
and Camel's Blueprint support instead of Spring.
You could try playing with the bundle start level, but it won't solve
all the problems, as Spring does some things asynchronously wrt the
bundles starting, so you'll still have race conditions.  It might help
a bit though, so feel free to give it a try.   I think the easiest way
would be to configure fileinstall so that the bundles deployed have a
bundle start level greater than the default bundle start level.  I
can't look up the exact mechanism right now, but will try this evening
if no one gives it to you (I think Charles has experimented with
things like that, IIRC).
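
The fileinstall approach described above can be sketched as a config fragment. This is only a sketch under assumptions: the property names come from Apache Felix FileInstall's documentation, while the file name and the start-level values are illustrative, not taken from this thread:

```properties
# Hypothetical etc/org.apache.felix.fileinstall-deploy.cfg
# (configures the FileInstall instance watching the deploy/ folder)
felix.fileinstall.dir = ${karaf.base}/deploy
# Give hot-deployed bundles a start level above the default (60 in
# Karaf 2.x), so they only start after the feature-installed bundles.
felix.fileinstall.start.level = 80
```

Bundles dropped into deploy/ would then start only once the framework start level reaches 80, i.e. after Camel and its dependencies are up.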

On Mon, Aug 16, 2010 at 06:21, Thiago Souza tcostaso...@gmail.com wrote:

 Hello there,

       I've setup a production server based on karaf and camel. I've
 installed camel through the features mechanism and deployed several spring
 xml files that defines components and camel contexts instantiation.

       Everything is fine until the server is restarted. In this case I get
 all sort of random problem like type converters that can not be found,
 routes based on quartz throwing exceptions and other weird behavior. The
 only way to solve this problem is to undeploy all the xml files, restart the
 system and redeploy them in a controlled way.

       How can I solve this? Is there any way to start the xmls AFTER all
 the camel bundles and have them started in an ordered way?

 Regards,
 Thiago Souza
 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Karaf-Camel-and-startup-race-conditons-tp1165020p1165020.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Setting karaf.instances system property has no effect

2010-08-18 Thread Guillaume Nodet
I think it would be better if you raised the issue yourself.  Anyone
can create a JIRA issue, you just need to create an account on JIRA.
It seems you also have a patch ready, so if you could attach it, that
would be even better :-)

On Wed, Aug 18, 2010 at 16:27, odeak o...@ztec.ch wrote:

 Hi

 I'm trying to separate the static files from the dynamic ones in a karaf
 deployment but setting the karaf.instances system property is ignored
 (karaf 2.0.0).

 This is because in Main.java on line 191, validate=false when calling
 Utils.getKarafDirectory(...), so it always reverts to the default. Could you
 please raise an issue to fix this?

 Regards,
 Oliver

 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Setting-karaf-instances-system-property-has-no-effect-tp1206801p1206801.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Strange behaviour with SSH

2010-08-24 Thread Guillaume Nodet
Not sure what's wrong as the parameters seem correct:
  
http://svn.apache.org/repos/asf/karaf/tags/karaf-2.0.0/shell/wrapper/src/main/resources/org/apache/karaf/shell/wrapper/unix/karaf-wrapper.conf
You could try to turn on debug logging for JSW and see if the actual
parameters are correct.
The config file should be located in the etc/ folder after running the
wrapper:install command.
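
Turning on JSW debug logging, as suggested above, is done in the wrapper configuration file. A minimal sketch, assuming the standard Java Service Wrapper property names (check your generated etc/karaf-wrapper.conf):

```properties
# Additions to etc/karaf-wrapper.conf to debug the wrapper itself
# Print the fully resolved JVM command line and wrapper internals:
wrapper.debug=TRUE
# Raise the verbosity of the wrapper's console and log file output:
wrapper.console.loglevel=DEBUG
wrapper.logfile.loglevel=DEBUG
```

With these set, the wrapper log shows the exact classpath and JVM arguments actually used, which should reveal whether the JCE jars made it onto the classpath.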

On Tue, Aug 24, 2010 at 10:24, Achim Nierbeck achim.nierb...@ptv.de wrote:

 Sorry, forgot to tell you that I'm using the 2.0 version of the Karaf server.

 The jce.jar is included in the lib folder ${JAVA_HOME}/jre/lib of the jdk
 (was downloaded last week from the dl-server)

 Any idea how to configure JSW (the Java Service Wrapper) differently?



 Guillaume Nodet wrote:

 This problem is usually caused by the JCE libraries not being included
 in the classpath.
 On some systems, the jars for cryptography are located in
 ${JAVA_HOME}/jre/lib/endorsed or ${JAVA_HOME}/lib/endorsed.  Those
 folders are configured in the batch file, but maybe there's something
 wrong in the way the JSW is configured.  Does that happen using trunk
 / 2.0.0 too or only 1.6.0 ?

 On Tue, Aug 24, 2010 at 10:08, Achim Nierbeck achim.nierb...@ptv.de
 wrote:

 Hi,

 I do have a strange behavior when trying to connect to the Karaf server
 on a
 Linux machine with ssh.

 If I do run the Karaf as a shell application (starting through the Karaf
 shell script) I'm able to connect to it via SSH. If I run the Karaf as a
 service with the service wrapper I'm not able to connect to it, which is
 quite strange.

 Googling for this strange behavior showed me that there was some sort of
 issue with the SSH implementation back in 1.6.0, which should be fixed by
 now.

 I do get the following exception:


 [ INFO] 10:00:04,066 (NioProcessor-1) Session created...
 [ INFO] 10:00:04,086 (NioProcessor-1) Client version string:
 SSH-2.0-PuTTY_Release_0.60
 [ INFO] 10:00:04,086 (NioProcessor-1) Received SSH_MSG_KEXINIT
 [ WARN] 10:00:04,088 (NioProcessor-1) Exception caught
 java.lang.IllegalStateException: Unable to negociate key exchange for
 item 2
        at
 org.apache.sshd.common.session.AbstractSession.negociate(AbstractSession.java:886)
        at
 org.apache.sshd.server.session.ServerSession.handleMessage(ServerSession.java:151)
        at
 org.apache.sshd.common.session.AbstractSession.decode(AbstractSession.java:522)
        at
 org.apache.sshd.common.session.AbstractSession.messageReceived(AbstractSession.java:225)
        at
 org.apache.sshd.common.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:58)
        at
 org.apache.mina.core.filterchain.DefaultIoFilterChain$TailFilter.messageReceived(DefaultIoFilterChain.java:713)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:46)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:793)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:119)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:638)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:598)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:587)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:61)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:969)[31:org.apache.mina.core:2.0.0.RC1]
        at
 org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)[31:org.apache.mina.core:2.0.0.RC1]
        at
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)[:1.6.0_21]
        at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)[:1.6.0_21]
        at java.lang.Thread.run(Thread.java:619)[:1.6.0_21]
 [ INFO] 10:00:04,092 (NioProcessor-1) Closing session

 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Strange-behaviour-with-SSH-tp1306643p1306643.html
 Sent

Re: Karaf 2.0.0 Jersey 1.3.0

2010-08-26 Thread Guillaume Nodet
JB, you're right, but it still should not cause a resolution failure,
especially if a single version of this bundle is installed in the
framework.  Jérôme, I would try to switch to Equinox (in the
etc/config file) and restart to see what happens.  Also, Felix
Framework 3.0.2 has been released recently, which may fix the problem,
so it's worth a try too.  I'll upgrade to this latest version in trunk
asap anyway.
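
JB's workaround quoted below, rebuilding jersey-core as your own bundle, could look roughly like the following maven-bundle-plugin configuration. This is a sketch: the version and wildcards are assumptions, not tested against jersey-core 1.3:

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- Export the Jersey packages at the bundle's version... -->
      <Export-Package>com.sun.jersey.*;version=1.3</Export-Package>
      <!-- ...but never import them back from another bundle -->
      <Import-Package>!com.sun.jersey.*,*</Import-Package>
    </instructions>
  </configuration>
</plugin>
```

The `!com.sun.jersey.*` negation is what prevents the self-import/export cycle that triggered the unresolved constraint.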

On Thu, Aug 26, 2010 at 10:20, Jean-Baptiste Onofré j...@nanthrax.net wrote:
 Hi Jérôme,

 The com.sun.jersey.api package is provided by jersey-core jar.
 It's exported by the jersey-core bundle (according to the MANIFEST).

 You are right, the package shouldn't be imported and exported.

 The bundle package statement should look like:

 <Export-Package>
   com.sun.jersey*...
 </Export-Package>
 <Import-Package>
   !com.sun.jersey*,
   *
 </Import-Package>

 Like this, it can support several versions of the jersey-core bundle in the same
 OSGi framework and avoid this kind of issue.

 A possible workaround is to create your own jersey-core bundle using
 maven-bundle-plugin for example.

 Regards
 JB

 On 08/26/2010 10:10 AM, Jérôme Pochat wrote:

 Hi

 I tried to install the Jersey REST implementation into a fresh Karaf 2.0.0
 instance without any success. This was working fine in a previous version of
 Karaf. What is strange is that there is no problem installing it inside
 Felix 3.0.*.

 install jsr311-api-1.1.1.jar ==>  Success
 install jersey-core-1.3.jar ==>  Failed

 Error executing command: Unresolved constraint in bundle
 com.sun.jersey.jersey-core [34]: Unable to resolve 34.0: missing
 requirement
 [34.0] package; (package=com.sun.jersey.api.representation) - [34.0]
 package; (package=com.sun.jersey.api.representation)

 This package is imported and exported by the same bundle. Could this be the
 reason for the issue?

 Thanks in advance for help.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Please enlighten me

2010-08-27 Thread Guillaume Nodet
If a bundle explicitly imports the package, I don't think boot
delegation will be enough, as the resolver will need someone to export
the package nonetheless (afaik).
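
The distinction between org.osgi.framework.system.packages.extra and org.osgi.framework.bootdelegation discussed in this thread can be made concrete with a config fragment. A sketch of etc/custom.properties (the package names are only examples from this thread, and the existing default values per Karaf version should be preserved rather than replaced):

```properties
# Make the system bundle EXPORT this package, so a bundle that
# declares Import-Package for it can be wired by the resolver:
org.osgi.framework.system.packages.extra = com.sun.org.apache.xerces.internal.dom
# Make class loading for matching packages bypass the wiring entirely
# and fall through to the parent (boot) class loader:
org.osgi.framework.bootdelegation = com.sun.*,sun.*
```

So boot delegation helps a bundle that loads such classes without importing them, while packages.extra helps a bundle that explicitly imports them.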

On Fri, Aug 27, 2010 at 12:53, Achim Nierbeck achim.nierb...@ptv.de wrote:

 OK,

 this is exactly the way I expected it to be. But where am I wrong in
 expecting that the package

 com.sun.org.apache.xerces.internal.dom

 is already available through boot delegation?
 Why did I need to configure org.osgi.framework.system.packages.extra
 (or I could probably also have configured org.osgi.framework.system.packages)?
 Or is it just that I need to declare the system packages through either
 org.osgi.framework.system.packages or
 org.osgi.framework.system.packages.extra,
 and specify through
 org.osgi.framework.bootdelegation
 that certain packages are only available through boot delegation?
 Until now I thought that this was done implicitly through the bootdelegation
 parameter.




 Guillaume Nodet wrote:

 org.osgi.framework.system.packages.extra =
 Framework environment property identifying extra packages which the
 system bundle must export from the current execution environment.
 This property is useful for configuring extra system packages in
 addition to the system packages calculated by the framework.

 org.osgi.framework.bootdelegation =
 Framework environment property identifying packages for which the
 Framework must delegate class loading to the parent class loader of
 the bundle.

 Boot delegation does not require a bundle to import the package,
 whereas the extra property will only make the system bundle export the
 given packages.

 On Fri, Aug 27, 2010 at 11:25, Achim Nierbeck achim.nierb...@ptv.de
 wrote:

 How does the property

 org.osgi.framework.system.packages.extra

 work differently from

 org.osgi.framework.bootdelegation

 Because com.sun.* is set in the bootdelegation but the bundle
 Apache ServiceMix Bundles: saaj-impl-1.3.2 (1.3.2.1)
 wasn't able to get access to com.sun.org.apache.xerces.internal.dom

 when configuring the
 org.osgi.framework.system.packages.extra
 with this package (com.sun.org.apache.xerces.internal.dom)
 it was starting.

 As far as I understood the bootdelegation should already have done this,
 right?

 BTW, I'm using Karaf 2.0.0; I found the hint by accident on Jamie
 Goodyear's
 blog.

 Thanx, Achim
 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Please-enlighten-me-tp1367765p1367765.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com



 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Please-enlighten-me-tp1367765p1368720.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Karaf 2.0.0 Jersey 1.3.0

2010-09-03 Thread Guillaume Nodet
Yeah, Karaf is consistent, so the same packages are exported whether
you use Felix or Equinox (or at least they should be).
However, we have discussed not restricting the packages provided by the
JRE, so I think we need to get back to this discussion.
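
For context, the jre.properties mechanism discussed here is a plain properties file listing the packages the system bundle should export for each JRE level. An abridged, illustrative sketch (the real list in a Karaf 2.x distribution is much longer, and the exact entries vary by version):

```properties
# etc/jre.properties (abridged, illustrative)
# Standard packages exported by the system bundle on a Java 6 JRE.
# A package left out of this list (e.g. a StAX package, if you want
# the ServiceMix Specs bundle to provide it) is hidden from bundles.
jre-1.6 = javax.accessibility, \
 javax.activation;version="1.1", \
 javax.annotation;version="1.0", \
 javax.jws;version="2.0"
```

Whether Equinox honors this file in Karaf 2.0.0 is exactly the open question in this thread.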

On Fri, Sep 3, 2010 at 17:19, Jérôme Pochat jpoc...@sopragroup.com wrote:

 Hi again

 I'm back :)

 Since I switched to Equinox as a workaround, I now face a new issue with Java 6
 API that is also provided by ServiceMix bundles (i.e. StAX).

 Using Felix, some system packages are hidden via jre.properties. Using
 Equinox, all system packages seem to be exported. In some cases (such as
 mine :-p), this can create conflicts at runtime (java.lang.LinkageError).

 Is the jre.properties file read when using Equinox as well as Felix?

 Thanks in advance.
 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Karaf-2-0-0-Jersey-1-3-0-tp1347655p1412755.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Karaf 2.0.1-SNAPSHOT and webconsole, cannot login

2010-09-13 Thread Guillaume Nodet
Jean-Baptiste is working on the JAAS side to add new features (the
ability to have encrypted passwords for better security).  The problem
was a small regression caused by the ongoing work, that's all.

On Mon, Sep 13, 2010 at 08:56, Bengt Rodehav be...@rodehav.com wrote:
 Hi Guillaume,
 I wasn't sure that it was an issue. Was it a missing JAAS configuration?
 /Bengt

 2010/9/13 Guillaume Nodet gno...@gmail.com

 I've just fixed the issue. Thx for reporting!

 On Sun, Sep 12, 2010 at 22:18, Bengt Rodehav be...@rodehav.com wrote:
  I looked at the log a bit more carefully and found:
  javax.security.auth.login.LoginException: Inga inloggningsmoduler har
  konfigurerats för karaf
  The above has been translated to Swedish but in English it says that no
  login modules have been configured for karaf.
  I know I should probably know how to do this but if anyone can give me
  some
  assistance I would really appreciate it. I also think that it is a good
  idea
  to have an easy login configured from scratch when installing Karaf -
  it's a
   lot easier to customize an existing configuration than to create one
   from scratch.
  /Bengt
 
  2010/9/12 Bengt Rodehav be...@rodehav.com
 
  I've installed Karaf 2.0.1-SNAPSHOT and installed the Webconsole
  feature.
  However, I cannot login to the web console. I've always logged in (on
  Karaf
  1.6.0 and also Karaf 2.0.0) with user=karaf and pwd=karaf. If I enter
  those
  credentials the login dialog just pops up again, and again, and
  again...
  I've found someone with similar problems that recommended that a file
  with
  the name org.apache.karaf.webconsole.cfg should be placed in the etc
  directory. The contents should be:
   <config name="org.apache.karaf.webconsole">
     realm=karaf
   </config>
  I've tried that but the problem remains.
  Can anyone help me out here?
  /Bengt
 



 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Karaf - Certificate Keystore

2010-09-27 Thread Guillaume Nodet
I suppose you're talking about connecting through the SSH console, right?
In that case, the answer is no, because it can't be done atm.
The SSH layer does not configure any PublicKeyAuthenticator, so you
can't rely on SSH key-based authentication.
I do agree this is something we should do, it's just that I've never had the
time to work on it.
See KARAF-32, it's an old bug.

Another related thing is captured in KARAF-32, which is the use of agent
forwarding to be able to log into another karaf instance using the same
credentials.  Currently, you have to give them again, even if you're already
authenticated, because they are not kept in memory.
This would be much better.

I guess one possible problem is the use of the local console.  The user
isn't really authenticated and we have no way to know the password or public
key in that case.  Not sure how to handle that yet.


On Mon, Sep 27, 2010 at 17:21, Charles Moulliard cmoulli...@gmail.comwrote:

 Hi,

 Do we have an example or a test case showing how to configure karaf to
 use certificate + keystore to authenticate admin user instead of
 simple login mechanism provided by default ?

 Regards,

 Charles Moulliard

 Senior Solution Architect - Fuse Consultant

 Open Source Integration: http://fusesource.com
 Blog : http://cmoulliard.blogspot.com
 Twitter : http://twitter.com/cmoulliard
 Linkedin : http://www.linkedin.com/in/charlesmoulliard
 Skype: cmoulliard




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: features.xml - Option to dictate JDK levels

2010-10-12 Thread Guillaume Nodet
The nice thing I see with OBR is that the resolution is much closer to the
real environment, as OBR takes into account the already deployed bundles.
For example, if you try to install camel-blueprint on Karaf 2.1, the whole
environment is kinda messed up because you end up with two versions of the
Aries Blueprint implementation (which does not work very well).  Ideally,
the camel-blueprint feature would put a requirement on a blueprint extender
being present, and this requirement would be solved by the current
environment, so OBR would not try to install another version of blueprint.

In our case, for camel-jasypt, it's not very easy to model (though it's
possible), but I honestly think this is a very rare case.  The way to model
that would be to have the camel-jasypt bundle (or is it the jasypt bundle
itself?) have a requirement on a specific capability, let's say foo, which
happens to be provided by the icu4j bundle and also by the system bundle (or
the environment) if running on JDK 6.  Though, IIUC, it does not really hurt
to install icu4j even on JDK 6, so at worst there's an unused bundle in the
system.
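
The capability/requirement modelling sketched above would look roughly like the following in OBR repository metadata. Everything here is hypothetical: the capability name `foo` comes from the mail, and the resource names are made up for illustration:

```xml
<!-- Hypothetical entries in an OBR repository.xml -->
<resource symbolicname="com.ibm.icu" presentationname="ICU4J">
  <!-- icu4j advertises the capability -->
  <capability name="foo">
    <p n="foo" v="1.0"/>
  </capability>
</resource>
<resource symbolicname="org.apache.camel.camel-jasypt" presentationname="camel-jasypt">
  <!-- satisfied either by icu4j or by the system resource on JDK 6 -->
  <require name="foo" filter="(foo=*)" extend="false"
           multiple="false" optional="false">Requires capability foo</require>
</resource>
```

On JDK 6 the environment itself would publish the same capability, so the resolver would not pull in icu4j.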

On Tue, Oct 12, 2010 at 08:25, Jean-Baptiste Onofré j...@nanthrax.net wrote:

 Guillaume,

 Where should we put the bundle resolution logic in the OBR?
 The user should be able to define the bundle set, so maybe the easiest
 location is the feature descriptor.

 I know that the OBR supports properties on repo, but the user will have to
 tune the repo to add some properties for bundle provisioning.

 I wonder what the easiest way is for the user to define it.

 Regards
 JB


 On 10/12/2010 08:10 AM, Guillaume Nodet wrote:

 I'm not sure we should add too much of this in the features
 descriptors.   I think a better idea would be to start leveraging OBR
 to determine the best set of dependencies for a given set of bundles
 to install.   If needed we could also leverage the obr url handler to
 use a filter to actually select a bundle.

 On Tuesday, October 12, 2010, Jean-Baptiste Onofréj...@nanthrax.net
  wrote:

 Hi Claus,

 Up to now, AFAIK, it's not possible to define a feature with JDK specific
 bundles (the descriptor is static). You can add some JRE/JDK specific
 definition in etc/jre.properties but it's global to the kernel (not
 dedicated to a given feature).

 Anyway, I think it's interesting.

 We can extend the feature deployer to support this kind of conditions.

 I'm gonna raise a Jira task around this.

 Regards
 JB

 On 10/12/2010 06:16 AM, Claus Ibsen wrote:

 Hi

 I wonder if it's possible in the features.xml file to define a bundle
 as being qualified depending on the current JDK?

 For example if you run JDK 1.5 you want the bundle included. If you
 run JDK 1.6+ you do NOT.
 The option should most likely support a range similar to the OSGi
 versioning.

 Maybe something similar to this:
 <bundle jdk="[1.5,1.6)">mvn:xxx/yyy/2.2</bundle>

 An example would be many of the encryption frameworks, which require
 additional jars to run on JDK 1.5, whereas 1.6 provides the API and
 ciphers out of the box.
 And we could have a similar situation when JDK 1.7 comes out, where
 you may need additional JARs on 1.6 and not on 1.7.

 I could not find such information at
 http://karaf.apache.org/46-provisioning.html

 But it could be that the documentation is outdated.







-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: How to avoid that log4j classes are loaded twice

2010-10-21 Thread Guillaume Nodet
That's where the JEE fun begins.  I suppose you need to configure your JEE
web server to not make the log4j classes available to the war.  I don't
think there's a standard way of doing that though...
Is the behavior the same whether you use Felix or Equinox?

On Thu, Oct 21, 2010 at 13:50, Charles Moulliard cmoulli...@gmail.comwrote:

 Hi,

 When Karaf is deployed as a WAR in Jetty or Tomcat, log4j classes are
 loaded twice, and of course Karaf is not able to report traces in the
 servicemix.log file.

 log4j:ERROR A org.apache.log4j.ConsoleAppender object is not assignable
 to a org.apache.log4j.Appender variable.
 log4j:ERROR The class org.apache.log4j.Appender was loaded by
 log4j:ERROR [4.0] whereas object of type
 log4j:ERROR org.apache.log4j.ConsoleAppender was loaded by
 [contextloa...@servicemix Embedded Example].
 log4j:ERROR Could not instantiate appender named A1.
 log4j:ERROR A org.apache.log4j.ConsoleAppender object is not assignable
 to a org.apache.log4j.Appender variable.
 log4j:ERROR The class org.apache.log4j.Appender was loaded by
 log4j:ERROR [4.0] whereas object of type
 log4j:ERROR org.apache.log4j.ConsoleAppender was loaded by
 [contextloa...@servicemix Embedded Example].

 Does anybody have an idea how to avoid that?

 Regards,

 Charles




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: How about facilitating the use of JNDI (boot delegation) in Karaf?

2010-11-05 Thread Guillaume Nodet
The boot delegation property can be modified in etc/config.properties
or overridden in etc/custom.properties.
Can you give a bit more detail on how you use JNDI? I'm not sure I
understand why the package is used but not directly referenced.
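
The override described above can be sketched as a config fragment. The value shown is illustrative: append javax.naming to whatever your distribution's default boot delegation list is, rather than replacing it:

```properties
# etc/custom.properties -- overrides etc/config.properties
# Classes in these packages are loaded from the boot class loader even
# when a bundle declares no Import-Package for them:
org.osgi.framework.bootdelegation = com.sun.*,sun.*,javax.naming,javax.naming.*
```

Note the caveat from this thread: boot delegation covers reflective Class.forName-style loading, but a bundle that explicitly imports the package still needs an exporter.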

On Fri, Nov 5, 2010 at 10:35, ext2 x...@tongtech.com wrote:
 Hi:
        When I use JNDI in OSGi, the javax.naming package is required.
 But because the javax.naming package is not directly referenced in the
 Java source code, the generated bundle's Import-Package header doesn't
 contain the javax.naming package.
        Although Karaf's etc/jre.properties file can define
 javax.naming as exportable from the boot class loader, because we
 don't declare an import for the javax.naming package in the bundle, a
 ClassNotFoundException will still occur.
        So we want the javax.naming package to be usable even when
 there is no import declaration in the bundle.
        To support this, I can currently only define the
 org.osgi.framework.bootdelegation system property in Karaf's startup
 script.
        So I am wondering if Karaf could support a configuration file
 which allows us to configure org.osgi.framework.bootdelegation?  If
 so, it would be easy to control boot delegation.






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Features in Karaf / Pax Exam

2010-11-05 Thread Guillaume Nodet
I think pax-exam has its own features parser, so it may have lagged
behind what Karaf provides.
We really need to use a schema and version it, to be able to detect
such problems more easily.

On Fri, Nov 5, 2010 at 10:17, Andreas Gies andr...@wayofquality.de wrote:
 Hello,

 I am sorry if I am hitting the wrong list with my question, but it seemed a
 good starting point for me...

 We have started the development of a Karaf-based application and want to
 exploit the features support
 for packaging. In my features file I have the following:

 <features>
   <feature name="YConX::Whiteboard" version="${project.version}">

     <!-- This line works for deploying into karaf, but not in conjunction with pax-exam -->
     <!--<feature version="${project.version}">YConX::Equinox</feature>-->

     <!-- TODO: avoid repetition of the derby version -->
     <bundle>mvn:org.apache.derby/derby/10.6.2.1</bundle>
     <bundle>mvn:${project.groupId}/${project.groupId}.core/${project.version}</bundle>
     <!--<bundle>mvn:${project.groupId}/${project.groupId}.database/${project.version}</bundle>
     <bundle>mvn:${project.groupId}/${project.groupId}.dataproviderservice/${project.version}</bundle>
     <bundle>mvn:${project.groupId}/${project.groupId}.gui/${project.version}</bundle>
     <bundle>mvn:${project.groupId}/${project.groupId}.logger/${project.version}</bundle>-->
   </feature>
 </features>

 For my integration test, the first thing I want to test is that the packaging
 works OK. I have copied/modified
 the FeaturesTest found in Karaf itself. In my configuration I have:

 @Configuration
 public static Option[] configuration() throws Exception {
   return OptionUtils.combine(
     Helper.getDefaultOptions(
       systemProperty("org.ops4j.pax.logging.DefaultServiceLog.level").value("DEBUG")
     ),
     PaxRunnerOptions.scanFeatures(
       maven().groupId("de.yconx.osgi").artifactId("features").type("xml").classifier("features").version("0.0.1-SNAPSHOT"),
       "YConX::Equinox/0.0.1-SNAPSHOT"
     ),
     PaxRunnerOptions.scanFeatures(
       maven().groupId("de.yconx.whiteboard").artifactId("features").type("xml").classifier("features").version("0.0.1-SNAPSHOT"),
       "YConX::Whiteboard/0.0.1-SNAPSHOT"
     ),
     waitForFrameworkStartup(),
     equinox()
   );
 }

 The test succeeds if I omit the feature reference in my feature definition,
 but fails if I include it.
 The basic idea was, that the feature is my unit of deployment into the karaf
 container and that features
 kind of build on top of each other.

 So, the question is:

 Am I doing something out of the ordinary?
 Am I hitting a pax-exam limitation that is already known? (Happy to log a
 bug there...)
 Is there a syntax issue I am not spotting that would make it work in both
 environments?


 Thanks in advance
 Andreas







-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Change ka...@root prompt?

2010-11-05 Thread Guillaume Nodet
The default PROMPT is defined as:

public static final String DEFAULT_PROMPT =
\u001b[1m${user}\u001b...@${application} ;

To change this, you can set the PROMPT variable in
etc/shell.init.script to something that suits your needs better.
Either as a plain variable:
   PROMPT=\u001b[1m${user}\u001b...@${application} ;
Or as a closure
'#PROMPT' = { echo \u001b[1m${user}\u001b...@${application}ww  }

The branding does not currently enable changing the default prompt,
but maybe it could be enhanced.

On Fri, Nov 5, 2010 at 19:29, Kit Plummer kitplum...@gmail.com wrote:

 Is it possible to change the karaf part of the console prompt in
 configuration?

 Kit
 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Change-karaf-root-prompt-tp184p184.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Change ka...@root prompt?

2010-11-05 Thread Guillaume Nodet
You're right.  The correct value for karaf 2.1.0 would be:

PROMPT= '\u001b\\[1m${user}\u001b\\...@${application} ';

Though the value I provided will work on 2.2.0 (trunk for now).

On Fri, Nov 5, 2010 at 21:15, Mike Van mvangeert...@comcast.net wrote:



 Guillaume,



 Are you sure that's correct? I added:

 PROMPT=\u001b[1m${user}\u001b...@${application} ;

 to my shell.init.script file, and I received the following error in the 
 console:

 Error in initialization script: Eof found in the middle of a compound for 
 '][', begins at b...@${application}



 v/r,



 Mike Van


 - Original Message -
 From: Guillaume Nodet [via Karaf] 
 ml-node+1850211-615221409-228...@n3.nabble.com
 To: Mike Van mvangeert...@comcast.net
 Sent: Friday, November 5, 2010 3:07:07 PM
 Subject: Re: Change ka...@root prompt?

 The default PROMPT is defined as:

     public static final String DEFAULT_PROMPT =
 \u001b[1m${user}\u001b...@${application} ;

 To change this, you can set the PROMPT variable in
 etc/shell.init.script to something that suits your needs better.
 Either as a plain variable:
    PROMPT=\u001b[1m${user}\u001b...@${application} ;
 Or as a closure
     '#PROMPT' = { echo \u001b[1m${user}\u001b...@${application}ww  }

 The branding does not currently enable changing the default prompt,
 but maybe it could be enhanced.

 On Fri, Nov 5, 2010 at 19:29, Kit Plummer  [hidden email]  wrote:

 Is it possible to change the karaf part of the console prompt in
 configuration?

 Kit
 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Change-karaf-root-prompt-tp184p184.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com






 View message @ 
 http://karaf.922171.n3.nabble.com/Change-karaf-root-prompt-tp184p1850211.html

 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Change-karaf-root-prompt-tp184p1850561.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Bundle lost after shutdown

2010-11-08 Thread Guillaume Nodet
The problem happens because the features service does not remove the
blank characters before and after the bundle URLs, and it looks like
the Felix framework will not reload bundles with such invalid URLs.
I've just raised KARAF-268, which I'll try to fix later today.

On Mon, Nov 8, 2010 at 20:23, Jorge Riquelme to...@totex.cl wrote:
 Guillaume, I can reproduce the issue with karaf 2.1.0 and the attached
 files: a test feature file, plus custom.properties and jre.properties
 (stolen from smix :p).

 First, load the features.xml file and install the contentcompass feature:

 ka...@root features:addurl file:/home/totex/features.xml
 ka...@root features:install contentcompass
 ka...@root list
 START LEVEL 100 , List Threshold: 50
   ID   State         Blueprint      Level  Name
 [  31] [Active     ] [Created     ] [   60] Apache Karaf :: Shell
 ConfigAdmin Commands (2.1.0)
 [  32] [Active     ] [            ] [   60] Apache Aries Transaction
 Manager (0.2.0.incubating)
 [  33] [Active     ] [            ] [   60] Apache ServiceMix :: Specs
 :: Stax API 1.0 (1.6.0.SNAPSHOT)
 ...
 [  70] [Active  ] [            ] [   60] Apache CXF Bundle Jar (2.2.11)
 ...
 [  84] [Active     ] [            ] [   60] Clerezza Ext - Jena OSGi
 Bundle (0.6.0.incubating-SNAPSHOT)

 All bundles are active. Then, shutdown (ctrl+d) and restart:

 ka...@root ERROR: Error starting mvn:org.apache.cxf/cxf-bundle/2.2.11
 (org.osgi.framework.BundleException: Unresolved constraint in bundle
 org.apache.cxf.bundle [70]: Unable to resolve 70.0: missing
 requirement [70.0] package; (package=javax.transaction.xa))
 org.osgi.framework.BundleException: Unresolved constraint in bundle
 org.apache.cxf.bundle [70]: Unable to resolve 70.0: missing
 requirement [70.0] package; (package=javax.transaction.xa)
        at org.apache.felix.framework.Felix.resolveBundle(Felix.java:3409)
        at org.apache.felix.framework.Felix.startBundle(Felix.java:1709)
        at 
 org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1143)
        at 
 org.apache.felix.framework.StartLevelImpl.run(StartLevelImpl.java:264)
        at java.lang.Thread.run(Thread.java:636)

 ka...@root list
 START LEVEL 100 , List Threshold: 50
   ID   State         Blueprint      Level  Name
 [  31] [Active     ] [Created     ] [   60] Apache Karaf :: Shell
 ConfigAdmin Commands (2.1.0)
 [  33] [Active     ] [            ] [   60] Apache ServiceMix :: Specs
 :: Stax API 1.0 (1.6.0.SNAPSHOT)
 [  34] [Active     ] [            ] [   60] Stax2 API (3.0.2)
 ...
 [  70] [Installed  ] [            ] [   60] Apache CXF Bundle Jar (2.2.11)
 ...
 [  84] [Active     ] [            ] [   60] Clerezza Ext - Jena OSGi
 Bundle (0.6.0.incubating-SNAPSHOT)

 Bundle 32 is lost, and bundle 70 isn't running because of the missing dependency. 
 Then:

 ka...@root install -s
 mvn:org.apache.aries.transaction/org.apache.aries.transaction.manager/0.2-incubating
 Bundle ID: 85
 ka...@root restart 70

 After manually reinstalling the bundle, the problem doesn't happen
 anymore (everything loads fine after shutdown and restart).


 saludos

 2010/11/8 Guillaume Nodet gno...@gmail.com:
 Can you reproduce the problem easily? If so, could you please give
 the exact steps you use to reproduce the problem?

 On Mon, Nov 8, 2010 at 04:21, Jorge Riquelme to...@totex.cl wrote:
 Hi list, i'm having a problem with karaf 2.1.1-SNAPSHOT with a
 particular bundle
 (mvn:org.apache.aries.transaction/org.apache.aries.transaction.manager/0.2-incubating).
 I start from a fresh install of karaf and deploy my feature; all
 fine:

 ka...@root list
 START LEVEL 100 , List Threshold: 50
   ID   State         Blueprint      Spring    Level  Name
 ...
 [  43] [Active     ] [            ] [       ] [   60]
 spring-osgi-extender (1.2.0)
 [  44] [Active     ] [            ] [       ] [   60]
 spring-osgi-annotation (1.2.0)
 [  45] [Active     ] [            ] [       ] [   60] Apache Aries
 Transaction Manager (0.2.0.incubating)
 [  46] [Active     ] [            ] [       ] [   60] Apache
 ServiceMix :: Specs :: Stax API 1.0 (1.6.0.SNAPSHOT)
 [  47] [Active     ] [            ] [       ] [   60] Stax2 API (3.0.2)
 ...

 After, when I restart karaf, the bundle 45 is lost and I get several
 exceptions from the other dependent bundles (of aries tx):

 ka...@root ERROR: Error starting mvn:org.apache.cxf/cxf-bundle/2.2.11
 (org.osgi.framework.BundleException: Unresolved constraint in bundle
 org.apache.cxf.bundle [83]: Unable to resolve 83.0: missing
 requirement [83.0] package; (package=javax.transaction.xa))
 org.osgi.framework.BundleException: Unresolved constraint in bundle
 org.apache.cxf.bundle [83]: Unable to resolve 83.0: missing
 requirement [83.0] package; (package=javax.transaction.xa)
        at org.apache.felix.framework.Felix.resolveBundle(Felix.java:3409)
        at org.apache.felix.framework.Felix.startBundle(Felix.java:1709)
        at 
 org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1143

Re: Open Ports in Karaf

2010-11-09 Thread Guillaume Nodet
One of them must be the TCP port opened to stop Karaf, as in Tomcat
(i.e. a free random port is opened, bound to localhost only, and
written to the data/port file).
It can be configured using the karaf.shutdown.port property (the
default value of 0 means a random port).

Not sure about the other one though.
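For reference, if you need a fixed, firewallable port instead of a random one, the property can be set in etc/config.properties (a sketch; the port value shown is illustrative):

```properties
# etc/config.properties
# karaf.shutdown.port = 0 (the default) means a random free port,
# bound to localhost only and written to the data/port file
karaf.shutdown.port = 20001
```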

On Tue, Nov 9, 2010 at 17:56, Roshan A. Punnoose
rpunno...@proteuseng.com wrote:
 Hi,

 I am trying to figure out what ports Karaf opens up. I was able to get most 
 of them (ssh, jmx rmi, etc) except a few. Every time Karaf starts up (v. 
 2.1.0) it opens up two randomly assigned ports in the high numbers. For 
 example:

 3:tcp4       0      0  127.0.0.1.57165        *.*                    LISTEN
 4:tcp46      0      0  *.57164                *.*                    LISTEN

 If I turn off the DisableAttachMechanism flag so that jconsole cannot 
 connect to karaf, the bottom one is gone, but the top one remains:
 3:tcp4       0      0  127.0.0.1.57168        *.*                    LISTEN

 (Notice, they are different numbers because they are randomly assigned)

 Does anyone know why this port is opened, and how to close it?

 Roshan Punnoose
 rpunno...@proteuseng.com
 Proteus Technologies







-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Deploying plain spring-dm config file...

2010-11-11 Thread Guillaume Nodet
Just use spring:xxx where xxx is any valid url (using file, http or mvn) ...
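For example, a feature could wrap a plain spring-dm XML this way (the maven coordinates are hypothetical):

```xml
<feature name="my-feature" version="1.0">
  <!-- the spring: prefix turns the plain spring-dm XML into a bundle on the fly -->
  <bundle>spring:mvn:com.example/my-beans/1.0/xml</bundle>
</feature>
```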

On Thu, Nov 11, 2010 at 17:08, Brad Beck bb...@peoplenetonline.com wrote:
 Is there any support for deploying a plain spring-dm configuration file
 using osgi:install or as part of a feature xml? Appears the spring deployer
 currently supports dropping plain configuration files in the deploy
 directory. I’d like to be able to do this as part of a feature xml.



 -Brad





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Startup problems: javax.naming.NoInitialContextException: Unable to determine caller'sBundleContext

2010-11-30 Thread Guillaume Nodet
This exception comes from the aries jndi bundle it seems.
I've tested various combinations of karaf and aries jndi, but all seem
to have this behavior.
Could you please raise a JIRA issue?
I'm not sure yet, but I suspect the problem is in the aries jndi bundle.

On Tue, Nov 30, 2010 at 08:56, Bengt Rodehav be...@rodehav.com wrote:
 I use Karaf 2.1.2. On a clean startup (the data directory is empty)
 everything works OK. However, when I restart Karaf (without cleaning out the
 data directory) I consistently get the following exception:

 Exception in thread JMX Connector Thread
 [service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root]
 java.lang.RuntimeException: Could not start JMX connector server

        at
 org.apache.karaf.management.ConnectorServerFactory$1.run(ConnectorServerFactory.java:103)

 Caused by: java.io.IOException: Cannot bind to URL
 [rmi://localhost:1099/karaf-root]: javax.naming.NoInitialContextException:
 Unable to determine caller's BundleContext

        at
 javax.management.remote.rmi.RMIConnectorServer.newIOException(RMIConnectorServer.java:804)

        at
 javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:417)

        at
 org.apache.karaf.management.ConnectorServerFactory$1.run(ConnectorServerFactory.java:101)

 Caused by: javax.naming.NoInitialContextException: Unable to determine
 caller's BundleContext

        at
 org.apache.aries.jndi.OSGiInitialContextFactoryBuilder.getInitialContext(OSGiInitialContextFactoryBuilder.java:53)

        at
 javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)

        at
 javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)

        at
 javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:316)

        at javax.naming.InitialContext.bind(InitialContext.java:400)

        at
 javax.management.remote.rmi.RMIConnectorServer.bind(RMIConnectorServer.java:625)

        at
 javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:412)

        ... 1 more

 Does anyone have any idea what could be wrong? I'm thinking there may be
 timing errors. Note that the exception is only logged to the console - not
 to the log file. Perhaps it occurs before the logging bundle is installed.

 I haven't seen this in previous versions of Karaf (2.1.0 and 1.6.0) but I've
 also added and changed a lot of bundles in addition to upgrading to Karaf
 2.1.2. Thus I don't know for sure if this is a problem specific to Karaf
 2.1.2.

 /Bengt



-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Ather URL Handler not available.

2010-11-30 Thread Guillaume Nodet
Yes, those are harmless and can be safely ignored.

On Tue, Nov 30, 2010 at 11:58, Bengt Rodehav be...@rodehav.com wrote:
 Starting from Karaf 2.1.2 I get the following in my log file:

 2010-11-30 09:21:45,911 | INFO  | rint Extender: 2 |
 AetherBridgeConnection           | .internal.AetherBridgeConnection   61 |
 Ather URL Handler not available.

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 | core
             | ?                                   ? | ServiceEvent
 REGISTERED

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintEventDispatcher         | ntainer.BlueprintEventDispatcher  123 |
 Sending blueprint container event BlueprintEvent[type=CREATED] for bundle
 org.apache.karaf.admin.core

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintContainerImpl           | container.BlueprintContainerImpl  231 |
 Running blueprint container for bundle org.apache.karaf.admin.core in state
 Created

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintContainerImpl           | container.BlueprintContainerImpl  231 |
 Running blueprint container for bundle org.apache.karaf.deployer.blueprint
 in state Unknown

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintContainerImpl           | container.BlueprintContainerImpl  190 |
 Grace-period directive: false

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintEventDispatcher         | ntainer.BlueprintEventDispatcher  123 |
 Sending blueprint container event BlueprintEvent[type=CREATING] for bundle
 org.apache.karaf.deployer.blueprint

 2010-11-30 09:21:45,911 | INFO  | rint Extender: 2 |
 AetherBridgeConnection           | .internal.AetherBridgeConnection   66 |
 Using mvn fallback to resolve
 mvn:se.digia.connect/karaf/1.1.0-SNAPSHOT/xml/features

 I get lots of these, probably for all the bundles I have installed. Is this
 normal? What does it mean?
 /Bengt



-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Ather URL Handler not available.

2010-11-30 Thread Guillaume Nodet
That's in pax-url, see http://issues.ops4j.org/browse/PAXURL-90

2010/11/30 Łukasz Dywicki l...@code-house.org:
 What do you think about increasing the log level for this category by default?
 
 (Or reporting a proposal to decrease it in Aries?)

 Best regards,
 Lukasz


 From: bengt.rode...@gmail.com [mailto:bengt.rode...@gmail.com] On Behalf Of
 Bengt Rodehav
 Sent: Tuesday, November 30, 2010 1:38 PM
 To: user@karaf.apache.org
 Subject: Re: Ather URL Handler not available.

 OK - thanks,

 /Bengt
 2010/11/30 Guillaume Nodet gno...@gmail.com
 Yes, those are harmless and can be safely ignored.

 On Tue, Nov 30, 2010 at 11:58, Bengt Rodehav be...@rodehav.com wrote:
 Starting from Karaf 2.1.2 I get the following in my log file:

 2010-11-30 09:21:45,911 | INFO  | rint Extender: 2 |
 AetherBridgeConnection           | .internal.AetherBridgeConnection   61
 |
 Ather URL Handler not available.

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 | core
             | ?                                   ? | ServiceEvent
 REGISTERED

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintEventDispatcher         | ntainer.BlueprintEventDispatcher  123
 |
 Sending blueprint container event BlueprintEvent[type=CREATED] for bundle
 org.apache.karaf.admin.core

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintContainerImpl           | container.BlueprintContainerImpl  231
 |
 Running blueprint container for bundle org.apache.karaf.admin.core in
 state
 Created

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintContainerImpl           | container.BlueprintContainerImpl  231
 |
 Running blueprint container for bundle
 org.apache.karaf.deployer.blueprint
 in state Unknown

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintContainerImpl           | container.BlueprintContainerImpl  190
 |
 Grace-period directive: false

 2010-11-30 09:21:45,911 | DEBUG | rint Extender: 3 |
 BlueprintEventDispatcher         | ntainer.BlueprintEventDispatcher  123
 |
 Sending blueprint container event BlueprintEvent[type=CREATING] for
 bundle
 org.apache.karaf.deployer.blueprint

 2010-11-30 09:21:45,911 | INFO  | rint Extender: 2 |
 AetherBridgeConnection           | .internal.AetherBridgeConnection   66
 |
 Using mvn fallback to resolve
 mvn:se.digia.connect/karaf/1.1.0-SNAPSHOT/xml/features

 I get lots of these, probably for all the bundles I have installed. Is
 this
 normal? What does it mean?
 /Bengt


 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Placing properties files in the classpath

2010-12-09 Thread Guillaume Nodet
Right, using ConfigAdmin is the way to go in OSGi.  If you use
blueprint, for example, you just have to define a specific property
placeholder and it's really easy to use.
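A minimal blueprint sketch of that pattern (pid, bean, and property names are illustrative; the key is read from etc/myApp.cfg via ConfigAdmin):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0">
  <cm:property-placeholder persistent-id="myApp"/>
  <bean id="myService" class="com.example.MyService">
    <!-- ${timeout} is resolved from the 'myApp' configuration -->
    <property name="timeout" value="${timeout}"/>
  </bean>
</blueprint>
```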

On Thursday, December 9, 2010, Freeman Fang freeman.f...@gmail.com wrote:
 Hi,
 You can put properties in $KARAF_HOME/etc folder and reference it from all 
 bundles using ConfigAdmin service.
 Freeman
 On 2010-12-9, at 7:41 AM, Mike Van wrote:

 All,

 The application we are building has several properties files used by all of
 our bundles.  Some of these properties files are .xml, some are plain old
 .properties files.  Currently, I am including them in each bundle, something
 which makes their maintenance quite a pain.  We would like them available to
 all bundles, and to have them stored in a single place.

 So far I have tried placing them in packages which contain code, but the
 compiler only picks up the .class files.  I've also tried to create a
 package underneath resources named myApp.configurations.  This is picked
 up, but because it isn't exported, it is unavailable for wiring.

 How does one make a single properties file available to all bundles deployed
 in Karaf?

 v/r,

 Mike Van


 --
 Freeman Fang
 
 FuseSource: http://fusesource.com
 blog: http://freemanfang.blogspot.com
 twitter: http://twitter.com/freemanfang
 Apache Servicemix: http://servicemix.apache.org
 Apache Cxf: http://cxf.apache.org
 Apache Karaf: http://karaf.apache.org
 Apache Felix: http://felix.apache.org


-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Placing properties files in the classpath

2010-12-09 Thread Guillaume Nodet
If you use ConfigAdmin directly or indirectly, you need to specify a
configuration id to load the properties from.  A single configuration
can be used by multiple bundles at the same time, so it's just about
using the same id for multiple bundles.
If you name the file myApp.cfg, the id of the configuration will be 'myApp'.
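So a single file serves every bundle that references the same id; for instance (keys are hypothetical):

```properties
# etc/myApp.cfg  ->  configuration id 'myApp', shared by all bundles using it
brokerUrl = tcp://localhost:61616
queueName = myApp.incoming
```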

On Thu, Dec 9, 2010 at 22:52, Mike Van mvangeert...@comcast.net wrote:



 We were just talking about the differences between bundles we should use as 
 services, and bundles that simply need to be wired.  In my definition, all 
 cross-cutting concerns should be services consumed by their bundles.  We also 
 have been discussing whether or not the services should all be stateless (I 
 believe they should be).



 So, for the time being, if my bundles are all myApp.*, would a myApp.cfg file 
 placed in the etc directory be read by all bundles whose packages start with 
 myApp?




 - Original Message -
 From: Łukasz Dywicki [via Karaf] 
 ml-node+2060113-309240584-228...@n3.nabble.com
 To: Mike Van mvangeert...@comcast.net
 Sent: Thursday, December 9, 2010 4:06:15 PM
 Subject: RE: Placing properties files in the classpath

 No,
 These bundles may refer to the same persistent id (configuration file) without 
 problems.

 In fact, you may introduce a new bundle which produces the connection factory and 
 exports it as a service, to reduce the number of configuration dependencies.

 Best regards,
 Lukasz

 -Original Message-
 From: Mike Van [mailto: [hidden email] ]
 Sent: Thursday, December 09, 2010 10:03 PM
 To: [hidden email]
 Subject: Re: Placing properties files in the classpath




 Ok.



 If I have 4 bundles that all use JMS, and they are named:

 myApp.bundle1

 myApp.bundle2

 myApp.bundle3

 myApp.bundle4



 Would I need 4 configuration files in etc:

 myApp.bundle1.cfg

 myApp.bundle2.cfg

 myApp.bundle3.cfg

 myApp.bundle4.cfg



 ?


 - Original Message -
 From: Łukasz Dywicki [via Karaf]  [hidden email] 
 To: Mike Van  [hidden email] 
 Sent: Thursday, December 9, 2010 3:58:32 PM
 Subject: RE: Placing properties files in the classpath

 It depends on the Configuration Admin. Karaf uses the etc directory for these
 configurations - e.g. if your persistence id is set to com.mycompany, any
 changes in $KARAF_BASE/etc/com.mycompany.cfg will be visible to your
 components. It doesn't look at the classpath, it looks into the etc directory. That's
 better than the classpath because operators can make changes without JAR
 modification. Even better, your component can be
 notified about configuration changes.


 Best regards,
 Lukasz


 -Original Message-
 From: Mike Van [mailto: [hidden email] ]
 Sent: Thursday, December 09, 2010 9:42 PM
 To: [hidden email]
 Subject: RE: Placing properties files in the classpath


 In those cases, where does OSGi look to find the properties? And, what are
 the property file names?

 Mike Van


















-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Graceful shutdown of Windows service, revisited

2010-12-09 Thread Guillaume Nodet
 and
 issue the shutdown command before stopping the service but then it becomes
 too complicated for most people.
 Karaf-176 seems to not have solved the problem (for Windows anyway). Shall I
 reopen that ticket?
 /Bengt



-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: [maven-bundle-plugin] shading version issue

2010-12-14 Thread Guillaume Nodet
Not sure if that's a typo, but the Export-Package element must be
contained inside the instructions element for the maven plugin to
recognize it.
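Concretely, the configuration from the question should look like this (same illustrative packages and versions, now wrapped in instructions):

```xml
<configuration>
  <instructions>
    <Export-Package>
      bad.vendor.package1;version=0.1.2,
      bad.vendor.package2;version=3.2.3,
      bad.vendor.package3;version=4.3.4
    </Export-Package>
  </instructions>
</configuration>
```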

On Tue, Dec 14, 2010 at 22:16, Mike Van mvangeert...@comcast.net wrote:

 When using the maven-bundle-plugin to group non-bundled or improperly bundled
 packages into a new bundle, I noticed that the version number of the
 resulting packages is set to 0.0.0.  Is there any way to fix this?  Below
 is some code that should allow you to see the problem:

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <version>2.1.0</version>
        <extensions>true</extensions>
        <configuration>
          <Export-Package>
            bad.vendor.package1;version=0.1.2,
            bad.vendor.package2;version=3.2.3,
            bad.vendor.package3;version=4.3.4
          </Export-Package>
        </configuration>
      </plugin>
    </plugins>
    <dependencies>all the bad.vendor dependencies containing the above
  packages</dependencies>

 This results in a file where the packages are not versioned, so when the
 resultant bundle is deployed into OSGi, all of the packages are set to
 version 0.0.0.

 In cases where we are shading together .jar files that are no longer in
 development, or .jar files from vendors that simply won't play osgi well, we
 create a maven-project and create this bundle along with each versioned
 release of our product.  As our project grows, contracts, or changes, we
 need to add or remove things from this bundle.  Because the package versions
 are always set to 0.0.0, there is the possibility of ClassNotFound or
 constraint violations.

 It would be a good idea to have a way to change the behavior of this plugin
 to allow the user to set the resultant package versions, or to override the
 default behavior to use the ${pom.version} for each package instead of
 0.0.0.

 v/r,

 Mike Van (karafman)





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Web Console Security Question

2010-12-15 Thread Guillaume Nodet
We do have support for encrypted passwords for JAAS realms, see
  
http://karaf.apache.org/manual/2.1.99-SNAPSHOT/developers-guide/security-framework.html
It will be part of 2.2.0 targeted for January.
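As a rough sketch of what the realm configuration in that guide looks like (option names and values should be verified against the linked documentation for your Karaf version):

```xml
<jaas:module className="org.apache.karaf.jaas.modules.properties.PropertiesLoginModule"
             flags="required">
  users = ${karaf.base}/etc/users.properties
  encryption.algorithm = SHA-256
  encryption.encoding = hexadecimal
</jaas:module>
```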

On Wed, Dec 15, 2010 at 17:31, mohamadan mohama...@yahoo.com wrote:

  Thank you. I just registered for the mailing list.
 
  Any estimate of when a new version will support the custom JAAS login module?
 
  Also, I noticed that the passwords are in plain text by default, but there
  is a way to encrypt them. Does it support FIPS-140 encryption? If not, is
  it possible to use a different encryption other than the default one.






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: List Range...

2010-12-20 Thread Guillaume Nodet
Not afaik.   One possible way to work around this syntax limitation
would be to define a command range and do something like:
   each (range 1 10) { start $it }
Basically, you need a way to compute your set of values to iterate through.

On Mon, Dec 20, 2010 at 15:39, Brad Beck bb...@peoplenetonline.com wrote:
 Is there a way to specify a range of values (similar to perl) in the list 
 operator at the shell? e.g.

 each [1..10] { start $it }

 -Brad





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Managed Properties question

2010-12-20 Thread Guillaume Nodet
Or use a cm:property-placeholder in combination with the reload flag
to reload the app if the config changes; that's what I used inside
Karaf, see:
  
http://svn.apache.org/repos/asf/karaf/trunk/shell/ssh/src/main/resources/OSGI-INF/blueprint/shell-ssh.xml
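The relevant pattern in that file looks roughly like this (pid and property are illustrative; update-strategy="reload" restarts the blueprint container whenever the configuration changes):

```xml
<cm:property-placeholder persistent-id="org.apache.karaf.shell.ssh"
                         update-strategy="reload">
  <cm:default-properties>
    <cm:property name="sshPort" value="8101"/>
  </cm:default-properties>
</cm:property-placeholder>
```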

2010/12/20 Łukasz Dywicki l...@code-house.org:
 If you try to use a managed component, your configuration property name must
 match the field name, otherwise the container will not update your bean. Both
 spring-dm and aries blueprint work the same. In your example the property named
 integer has to use the placeholder ${integer}. If you would like to use
 different names you need to manage the changes in the bean (set the strategy to
 bean-managed and provide a callback method).

 Best regards,
 Lukasz

 -Original Message-
 From: Achim Nierbeck [mailto:bcanh...@googlemail.com]
 Sent: Monday, December 20, 2010 6:00 PM
 To: user@karaf.apache.org
 Subject: Re: Managed Properties question

 OK, another try

 <bean id="containerManaged" class="ContainerManagedBean">
   <osgix:managed-properties persistent-id="labX"
       update-strategy="container-managed"/>
   <property name="integer" value="23"/>
 </bean>

 this is the official example :)

 I think the property (named integer here) is optional. Usually all
 properties which can be read through getter and setter can be set by
 the configuration.

 So your problem is that you try to inject properties within another property

 <property name="hibernateProperties">
   <props>
     <prop key="hibernate.show_sql">${myAppDbShowSql}</prop>
     <prop key="hibernate.format_sql">${myAppDbFormatSql}</prop>
   </props>
 </property>

 All the properties you want to update need to be accessible via
 getters and setters.
 If you want to do this you need to make an extra bean which is
 configurable (with the same pid)
 and inject that one after it is initialized. You may want to make your
 standard bean dependent on the new bean.

 2010/12/20 karafman mvangeert...@comcast.net

 It still isn't working.

 Here's an excerpt of my .cfg file (none of the names in the file have any
 characters other than [a-z, A-Z]:
 myAppDbShowSql = false
 myAppDbFormatSql = false

 In the file where I get my service I have:
  <osgix:cm-properties id="myAppDatabaseProperties"
      persistent-id="myApp.data.access"/>
  <ctx:property-placeholder properties-ref="myAppDatabaseProperties"/>

 In the file where I am using the managed service I have (unnecessary bits
 removed):
  <bean id="myAppSessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
      p:dataSource-ref="myAppPoolDataSource">
    <osgix:managedProperties persistent-id="myApp.data.access"
        update-strategy="container-managed"/>
    <property name="mappingResources">
      <list>hibernate values</list>
    </property>
    <property name="hibernateProperties">
      <props>
        <prop key="hibernate.show_sql">${myAppDbShowSql}</prop>
        <prop key="hibernate.format_sql">${myAppDbFormatSql}</prop>
      </props>
    </property>
  </bean>

 When I change the values in my .cfg file, I can see those changes being
 populated by doing a config:list.

 However, I get the following error in my log:
 Configuration for myApp.data.access has already been used for service
 [org.osgi.service.cm.ManagedService, id=127, bundle=89] and will be also
 given to [org.osgi.service.cm.ManagedService, id=128, bundle=89]

 When I restart bundle 89, the properties are properly consumed.

 Anyone know what's going wrong?


 -
 Karafman
 Slayer of the JEE
 Pounder of the Perl Programmer






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: features.cfg issue

2010-12-22 Thread Guillaume Nodet
Just use file:./etc/myApp.features.cfg
The current directory is always ${karaf.base}.
But I think it should work anyway, as FileInstall does use system
properties to interpolate the values.
Actually, we use it in etc/org.ops4j.pax.url.mvn.cfg, so there must be
something wrong with your config.
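With the relative form, the line in etc/org.apache.karaf.features.cfg simply becomes:

```properties
# resolved against the current directory, which is always ${karaf.base}
featuresRepositories = file:./etc/myApp.features.cfg
```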

On Wed, Dec 22, 2010 at 16:41, karafman mvangeert...@comcast.net wrote:

 When adding custom features.xml documents to org.apache.karaf.features.cfg on
 the featuresRepositories line using the file:/// url, this works:
 featuresRepositories=file:///home/myArea/karaf-apache-2.0.0/etc/myApp.features.cfg

 But this does not:
 featuresRepositories=file://${karaf.base}/etc/myApp.features.cfg

 When running config:list, the following is seen:

 featuresRepositories
 file:///home/myArea/karaf-apache-2.0.0/etc/myApp.features.cfg

 The error in the log is:
 Caused by: java.net.URISyntaxException: Illegal character in authority at
 index 7: file://${karaf.base}/etc/myApp.features.cfg

 Karaf should resolve variables in the file url prior to attempting to get a
 file-handle, thus allowing users to specify a file url containing
 ${karaf.base} instead of hard-coding the file location.


 -
 Karafman
 Slayer of the JEE
 Pounder of the Perl Programmer





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Problem reusing JAAS login modules

2010-12-28 Thread Guillaume Nodet
It's certainly a missing import package.  When dropped into the deploy
folder, the deployer will automatically add the required import
packages.
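When building the bundle yourself, the equivalent fix is usually to import the login module's package explicitly, e.g. via the maven-bundle-plugin (a hedged sketch; verify the package name against your Karaf version):

```xml
<instructions>
  <Import-Package>
    org.apache.karaf.jaas.modules.properties,
    *
  </Import-Package>
</instructions>
```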

On Tuesday, December 28, 2010, Rafael Marins rafael.mar...@neociclo.com wrote:
 Hi,
 When using the PropertiesLoginModule from a blueprint file in my bundle
 located in the /OSGI-INF/blueprint/ folder, I've encountered a problem with a
 ClassNotFoundException. But I can get it working properly when deploying the
 blueprint xml into the /deploy/ folder.
 Check the logging stack trace here: http://pastebin.com/yFiKkpg3 (at line 57)
 From my blueprint file: accord-jaas-module.xml
   ...
   <jaas:config name="accord">
     <jaas:module className="org.apache.karaf.jaas.modules.properties.PropertiesLoginModule"
                  flags="required">
       users = $[karaf.base]/etc/accord-users.properties
       debug = true
     </jaas:module>
   </jaas:config>
   ...
 Any ideas on what must be done to solve this problem?
 --Rafael Marins






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Branded Karaf...

2010-12-28 Thread Guillaume Nodet
There's no cleanly defined way to do that.  For several months I've had
in mind enhancing the features maven plugin from Karaf to make this
much easier, i.e. provide a simple way to create custom distributions
on top of Karaf by overlaying a set of Karaf features + configuration
files + branding (which could include renaming the main scripts to
something other than karaf/karaf.bat) and letting the plugin do all
the work.
There's a JIRA for that, but nobody has had much time to work on it
unfortunately.
In the meantime, what ServiceMix does is use the maven assembly
plugin to add all the required files on top of the unpacked Karaf
distribution and rebuild the archives from that.
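A minimal assembly-descriptor sketch of that overlay approach (directories and format are illustrative):

```xml
<assembly>
  <id>bin</id>
  <formats>
    <format>tar.gz</format>
  </formats>
  <fileSets>
    <!-- the unpacked stock Karaf distribution -->
    <fileSet>
      <directory>target/dependencies/apache-karaf</directory>
      <outputDirectory>/</outputDirectory>
    </fileSet>
    <!-- custom branding, features and etc/ files layered on top -->
    <fileSet>
      <directory>src/main/distribution</directory>
      <outputDirectory>/</outputDirectory>
    </fileSet>
  </fileSets>
</assembly>
```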

On Tue, Dec 28, 2010 at 20:41, Kit Plummer kitplum...@gmail.com wrote:

 Hey Karafers.

 I'm building a framework on top of Karaf that currently only adds the
 branding library and a few bundles.  But it is likely that in the future I'll
 need to remove some things, and more.  Is there a best-practice strategy to
 manage the relationship between my framework and Karaf?  How does ServiceMix
 do it?

 TIA,
 Kit
 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Branded-Karaf-tp2158363p2158363.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Problems with Karaf hotdeploy

2011-01-06 Thread Guillaume Nodet
Can you give the output of osgi:headers for the easyb bundle once
installed manually?
FileInstall only installs well-formed jars and R4-compatible bundles.  This
means that the MANIFEST.MF
has to be the first or second entry in the jar (possibly after META-INF/).
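To illustrate that ordering constraint, here is a self-contained sketch (class and entry names are made up for the demo) that builds a tiny jar and reports where META-INF/MANIFEST.MF lands among the entries; JarOutputStream writes the manifest first, which is exactly the layout FileInstall expects:

```java
import java.io.*;
import java.util.jar.*;
import java.util.zip.*;

public class ManifestOrderCheck {

    // Returns the 0-based position of META-INF/MANIFEST.MF among the jar's
    // entries, or -1 if it is missing entirely.
    static int manifestIndex(File jar) throws IOException {
        try (ZipInputStream in = new ZipInputStream(new FileInputStream(jar))) {
            ZipEntry entry;
            int index = 0;
            while ((entry = in.getNextEntry()) != null) {
                if ("META-INF/MANIFEST.MF".equals(entry.getName())) {
                    return index;
                }
                index++;
            }
        }
        return -1;
    }

    public static void main(String[] args) throws IOException {
        File jar = File.createTempFile("demo", ".jar");
        jar.deleteOnExit();
        Manifest mf = new Manifest();
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        // JarOutputStream writes META-INF/MANIFEST.MF as the very first entry
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar), mf)) {
            out.putNextEntry(new ZipEntry("org/example/Demo.class"));
            out.closeEntry();
        }
        System.out.println("manifest entry index: " + manifestIndex(jar));
    }
}
```

A bundle whose manifest was repacked to the end of the archive (some build tools do this) would report a large index here and be skipped by FileInstall.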

On Thu, Jan 6, 2011 at 13:09, Jürgen Kindler jkind...@talend.com wrote:
 Hi,

 I’m experiencing a problem with deploying a bundle into Karaf 2.1.2 by
 copying it into the karaf_home/deploy folder.
 Basically I copy three bundles: commons-cli-1.2.jar, groovy-all-1.7.5.jar
 and easyb-0.9.8.jar

 I turned the default log level of karaf to debug, but I don’t see any hints
 about why it fails.
 In the logs I see that commons-cli-1.2.jar   groovy-all-1.7.5.jar are
 installed and started successfully:
 12:34:20,897 | DEBUG | Event Dispatcher | cli  |
 ?   ? | 32 - org.apache.commons.cli - 1.2 |
 BundleEvent INSTALLED
 12:34:20,899 | DEBUG | Event Dispatcher | cli  |
 ?   ? | 32 - org.apache.commons.cli - 1.2 |
 BundleEvent RESOLVED
 12:34:20,900 | DEBUG | raf-2.1.2/deploy | BlueprintExtender    |
 rint.container.BlueprintExtender  210 | 7 - org.apache.aries.blueprint -
 0.2.0.incubating | Scanning bundle org.apache.commons.cli for blueprint
 application
 12:34:20,900 | DEBUG | raf-2.1.2/deploy | BlueprintExtender    |
 rint.container.BlueprintExtender  276 | 7 - org.apache.aries.blueprint -
 0.2.0.incubating | No blueprint application found in bundle
 org.apache.commons.cli
 12:34:20,900 | DEBUG | Event Dispatcher | cli  |
 ?   ? | 32 - org.apache.commons.cli - 1.2 |
 BundleEvent STARTED
 12:34:31,155 | DEBUG | Event Dispatcher | groovy-all   |
 ?   ? | 33 - groovy-all - 1.7.5 |
 BundleEvent INSTALLED
 12:34:31,189 | DEBUG | Event Dispatcher | groovy-all   |
 ?   ? | 33 - groovy-all - 1.7.5 |
 BundleEvent RESOLVED
 12:34:31,208 | DEBUG | raf-2.1.2/deploy | BlueprintExtender    |
 rint.container.BlueprintExtender  210 | 7 - org.apache.aries.blueprint -
 0.2.0.incubating | Scanning bundle groovy-all for blueprint application
 12:34:31,214 | DEBUG | raf-2.1.2/deploy | BlueprintExtender    |
 rint.container.BlueprintExtender  276 | 7 - org.apache.aries.blueprint -
 0.2.0.incubating | No blueprint application found in bundle groovy-all
 12:34:31,214 | DEBUG | Event Dispatcher | groovy-all   |
 ?   ? | 33 - groovy-all - 1.7.5 |
 BundleEvent STARTED

 However, there is nothing for the third bundle easyb-0.9.8.jar

 Note that this really is a bundle and it is possible to explicitly install
 it using:
 osgi:install file:/tmp/jki/apache-karaf-2.1.2/deploy/easyb-0.9.8.jar
 Also starting it with its then assigned bundle id and using it works fine.

 Somehow it seems like it is ignored by the process that watches the
 karaf_home/deploy folder :-(

 Any ideas what the problem is or how I can figure out the root cause?

 Cheers
   Juergen
 --
 Jürgen Kindler




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: getting felix scr commands to show up in karaf?

2011-01-16 Thread Guillaume Nodet
The commands are exposed using plain OSGi, so it should still work without
a blueprint descriptor (those are just services in the OSGi registry in the
end).
I'll have a look at it, as there should be a way to get that working.

On Mon, Jan 17, 2011 at 08:01, Jean-Baptiste Onofré j...@nanthrax.net wrote:

 Hi David,

 The Felix Scr bundle doesn't contain a blueprint descriptor (in
 META-INF/OSGI-INF). Regarding the Import-Package, we can see that it uses
 org.apache.felix.shell as optional.
 Without a blueprint descriptor, commands won't appear automatically after
 installation of the bundle. You have to restart your karaf container.
 Could you try this? 1/ install the scr bundle, 2/ restart karaf
 without deleting the data directory

 Anyway, we can submit a patch to felix containing scr blueprint.

 Regards
 JB



 On 01/14/2011 07:37 PM, David Jencks wrote:

 Does anyone know how to get the felix scr commands to show up in the karaf
 console?  I'm trying scr and karaf from trunk and have also installed and
 started

  osgi:install -s mvn:org.apache.felix/org.apache.felix.shell/

   install
 file:///Users/david/.m2/repository/org/apache/felix/org.apache.felix.scr/1.6.1-SNAPSHOT/org.apache.felix.scr-1.6.1-SNAPSHOT.jar

 I expected to see something like

 scr:list

 in the list of commands I get fromtab  but nothing shows up.

 Thanks in advance for any help

 david jencks




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: getting felix scr commands to show up in karaf?

2011-01-17 Thread Guillaume Nodet
The reason is that the CommandsCompleter (which does completion of command
names) only takes into account commands implementing Function (see the
checkData() method), whereas gogo supports reflective commands defined as
methods, for example.
However, we need to be able to access the mapping between the command object
and its scope / name.
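As a rough illustration of what "reflective commands" means here (this is a toy resolver, not gogo's actual implementation; the ScrCommands class and command name are made up), a command name can be dispatched by looking up a public method on the command object and invoking it:

```java
import java.lang.reflect.Method;

// Toy illustration of reflective command dispatch: resolve a command name
// to a public method on a target object and invoke it.  Gogo does something
// conceptually similar, with much more machinery for coercion and scopes.
public class ReflectiveCommandDemo {

    static class ScrCommands {
        public String list() {
            return "No components registered";
        }
    }

    static Object invoke(Object target, String function, Object... args) {
        for (Method m : target.getClass().getMethods()) {
            if (m.getName().equals(function) && m.getParameterCount() == args.length) {
                try {
                    return m.invoke(target, args);
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        throw new IllegalArgumentException("Command not found: " + function);
    }

    public static void main(String[] args) {
        // scr:list -> dispatch "list" on the command object
        System.out.println(invoke(new ScrCommands(), "list"));
    }
}
```

This is why such commands can execute fine while a completer that only recognizes Function implementations never lists them.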

On Mon, Jan 17, 2011 at 09:25, Guillaume Nodet gno...@gmail.com wrote:

 Then it must be a refresh issue somehow.

 Actually, I've just made some tests and the commands are functional even if
 they don't appear in the tab completion.  That could be considered a Karaf
 bug.


 On Mon, Jan 17, 2011 at 08:48, Jean-Baptiste Onofré j...@nanthrax.netwrote:

 I saw that commands not exposed with Blueprint don't appear automatically
 after installation. I restarted Karaf to get it working.

 Regards
 JB


 On 01/17/2011 08:41 AM, Guillaume Nodet wrote:

  The commands are exposed using plain OSGi, so it should still work
  without a blueprint descriptor (those are just services in the OSGi
  registry in the end).
 I'll have a look at it as there should be a way to get that working.

 On Mon, Jan 17, 2011 at 08:01, Jean-Baptiste Onofré j...@nanthrax.net
 mailto:j...@nanthrax.net wrote:

Hi David,

The Felix Scr bundle doesn't contain blueprint descriptor (in
META-INF/OSGI-INF). Regarding the Import-Package, we can see that it
uses org.apache.felix.shell as optional.
Without blueprint descriptor, commands won't appear automatically
after installation of the bundle. You have to restart your karaf
container.
 Could you try this? 1/ install the scr bundle, 2/ restart karaf
without deleting the data directory

Anyway, we can submit a patch to felix containing scr blueprint.

Regards
JB



On 01/14/2011 07:37 PM, David Jencks wrote:

Does anyone know how to get the felix scr commands to show up in
the karaf console?  I'm trying scr and karaf from trunk and have
also installed and started

  osgi:install -s mvn:org.apache.felix/org.apache.felix.shell/

   install

  
 file:///Users/david/.m2/repository/org/apache/felix/org.apache.felix.scr/1.6.1-SNAPSHOT/org.apache.felix.scr-1.6.1-SNAPSHOT.jar

I expected to see something like

scr:list

in the list of commands I get fromtab  but nothing shows up.

Thanks in advance for any help

david jencks




 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com





 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Console extensions without blueprint

2011-01-23 Thread Guillaume Nodet
The namespace you use is wrong, it should be
http://karaf.apache.org/xmlns/shell/v1.0.0
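With the corrected namespace, a minimal blueprint descriptor would look roughly like this (the command name and action class are placeholders):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- note the karaf.apache.org namespace, not felix.apache.org -->
  <command-bundle xmlns="http://karaf.apache.org/xmlns/shell/v1.0.0">
    <command name="reef/list">
      <action class="org.example.ListCommand"/>
    </command>
  </command-bundle>

</blueprint>
```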

On Sun, Jan 23, 2011 at 19:18, Adam Crain acr...@greenenergycorp.com wrote:
 Thanks for the tips. I stripped everything down to the basic example in the
 manual, and it seems the problem I was having is some kind of blueprint
 issue. When I run the example console extension, my bundle starts in the
 GracePeriod and the log diplays:
 13:13:04,435 | WARN  | rint Extender: 3 | BlueprintContainerImpl           |
 container.BlueprintContainerImpl  252 | 7 - org.apache.aries.blueprint -
 0.2.0.incubating | Bundle reef.direct-shell is waiting for namespace
 handlers
 [((objectClass=org.apache.aries.blueprint.NamespaceHandler)(osgi.service.blueprint.namespace=http://felix.apache.org/karaf/xmlns/shell/v1.0.0))]

 ideas?
 -Adam
 On Fri, Jan 21, 2011 at 5:01 PM, Guillaume Nodet gno...@gmail.com wrote:

Basically, the shell will recognize services that have the
osgi.command.scope and osgi.command.function properties defined.
However, in order to leverage completion, you currently need to have
the service implement AbstractCommand.
 The easiest way would be to leverage (or adapt) the code in the
 export() method of the following class:

  http://svn.apache.org/repos/asf/karaf/trunk/shell/console/src/main/java/org/apache/felix/gogo/commands/basic/SimpleCommand.java

 On Fri, Jan 21, 2011 at 22:47, Adam Crain acr...@greenenergycorp.com
 wrote:
  How do I implement a console extension without blueprint?
  I created a BundleActivator and tried to publish the service
  OsgiCommandSupport guessing that would cause the system to pickup the
  new
  shell command, but I was wrong.
  thanks,
  Adam



 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Info about kar archive of Apache Karaf

2011-01-24 Thread Guillaume Nodet
Yes, kars are for features what ebas are for Aries applications.
Though David Jencks is currently doing a lot of work in this area to
have a nice maven integration for kars, so that building a karaf
distribution can be done in a very simple way by referencing
kars as maven dependencies, and kars will be created as a maven
packaging along with features.

On Mon, Jan 24, 2011 at 16:46, Charles Moulliard cmoulli...@gmail.com wrote:
 Is it similar to eba archive of Aries ?


 On Mon, Jan 24, 2011 at 3:02 PM, Adrian Trenaman
 adrian.trena...@googlemail.com wrote:
 The purpose of the Kar file is to facilitate easy packaging and
 deployment of Karaf features. A feature can have its own bundles *and*
 all its dependencies placed in a Kar; then, when the Kar archive file
 is dropped into a deploy directory, the bundles are extracted to the
 local drive in a pseudo-Maven directory structure, and any features
 files therein are automatically registered in the runtime. This makes
 deployment of Karaf solutions easier, particularly when on production
 machines where Maven resolution is not possible.
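As a sketch, the features file packaged inside a kar might look like this (feature name, bundle coordinates, and versions are illustrative):

```xml
<features name="my-app-features">
  <feature name="my-app" version="1.0.0">
    <!-- the kar carries these artifacts in a Maven-style layout on disk,
         so no remote repository access is needed at deploy time -->
    <bundle>mvn:org.example/my-app-api/1.0.0</bundle>
    <bundle>mvn:org.example/my-app-impl/1.0.0</bundle>
  </feature>
</features>
```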

 On 24/01/2011, Charles Moulliard cmoulli...@gmail.com wrote:
 Hi,

 What is the purpose of the archive kar that we can deploy now on Karaf
 ? Benefits ?

 Regards,

 Charles Moulliard

 Sr. Principal Solution Architect - FuseSource
 Apache Committer

 Blog : http://cmoulliard.blogspot.com
 Twitter : http://twitter.com/cmoulliard
 Linkedin : http://www.linkedin.com/in/charlesmoulliard
 Skype: cmoulliard






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Apache Karaf WAR + JSF

2011-02-11 Thread Guillaume Nodet
Also, make sure the jsf-api isn't deployed as an OSGi bundle, else
that one could be used and it would not be able to find the
implementation inside the jar.

On Fri, Feb 11, 2011 at 09:22, Charles Moulliard cmoulli...@gmail.com wrote:
 The WAR contains the spec and the implementation.

 aristo-1.0.0.jar
 barbecue-1.5-beta1.jar
 bcmail-jdk14-1.38.jar
 bcmail-jdk14-138.jar
 bcprov-jdk14-1.38.jar
 bcprov-jdk14-138.jar
 bctsp-jdk14-1.38.jar
 black-tie-1.0.0.jar
 blitzer-1.0.0.jar
 bluesky-1.0.0.jar
 casablanca-1.0.0.jar
 commons-fileupload-1.2.1.jar
 commons-io-1.4.jar
 commons-logging-1.1.1.jar
 cupertino-1.0.0.jar
 dark-hive-1.0.0.jar
 dot-luv-1.0.0.jar
 eggplant-1.0.0.jar
 excite-bike-1.0.0.jar
 facestrace-1.1.0.jar
 flick-1.0.0.jar
 hot-sneaks-1.0.0.jar
 humanity-1.0.0.jar
 itext-2.1.7.jar
 jcommon-1.0.0.jar
 jdom-1.0.jar
 jfreechart-1.0.0.jar

 --
 jsf-api-2.0.4-b09.jar
 jsf-impl-2.0.4-b09.jar
 --

 jstl-1.2.jar
 junit-3.8.jar
 le-frog-1.0.0.jar
 log4j-1.2.13.jar
 midnight-1.0.0.jar
 mint-choc-1.0.0.jar
 overcast-1.0.0.jar
 pepper-grinder-1.0.0.jar
 poi-3.2-FINAL.jar
 primefaces-3.0-SNAPSHOT.jar
 redmond-1.0.0.jar
 rocket-1.0.0.jar
 rome-1.0.jar
 smoothness-1.0.0.jar
 south-street-1.0.0.jar
 start-1.0.0.jar
 sunny-1.0.0.jar
 swanky-purse-1.0.0.jar
 trontastic-1.0.0.jar
 ui-darkness-1.0.0.jar
 ui-lightness-1.0.0.jar
 vader-1.0.0.jar


 On Fri, Feb 11, 2011 at 5:06 AM, David Jencks david_jen...@yahoo.com wrote:
 You need both the api jar and the implementation in your war.  Which jsf 
 implementation?

 You might consider trying the myfaces bundle deployed outside your web app 
 (removing the jsf jars from the web app). This works fine in Geronimo, but I'm 
 not sure at the moment if we do extra initialization to get around this 
 problem.


 thanks
 david jencks


 On Feb 10, 2011, at 5:33 AM, Charles Moulliard wrote:

 Hi,

 I have deployed a WAR project on Karaf which is JSF technology based
 but get this issue :

 14:05:19,408 | WARN  | FelixStartLevel  | war
    | .eclipse.jetty.util.log.Slf4jLog   50 | 1834 -
 org.eclipse.jetty.util - 7.2.2.v20101205 | unavailable
 java.lang.IllegalStateException: Application was not properly
 initialized at startup, could not find Factory:
 javax.faces.context.FacesContextFactory
       at 
 javax.faces.FactoryFinder$FactoryManager.getFactory(FactoryFinder.java:804)[1957:file__Users_charlesmoulliard_Apache_karaf_assembly_target_apache-karaf-2.1.99-SNAPSHOT_deploy_prime-showcase-1.0.0-SNAPSHOT.war:0]
       at 
 javax.faces.FactoryFinder.getFactory(FactoryFinder.java:306)[1957:file__Users_charlesmoulliard_Apache_karaf_assembly_target_apache-karaf-2.1.99-SNAPSHOT_deploy_prime-showcase-1.0.0-SNAPSHOT.war:0]
       at 
 javax.faces.webapp.FacesServlet.init(FacesServlet.java:166)[1957:file__Users_charlesmoulliard_Apache_karaf_assembly_target_apache-karaf-2.1.99-SNAPSHOT_deploy_prime-showcase-1.0.0-SNAPSHOT.war:0]
       at 
 org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:432)[1842:org.eclipse.jetty.servlet:7.2.2.v20101205]
       at 
 org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:260)[1842:org.eclipse.jetty.servlet:7.2.2.v20101205]
       at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55)[1834:org.eclipse.jetty.util:7.2.2.v20101205]

 The lib directory of the WAR contains well the jar of jsf api --
 jsf-api-2.0.4-b09

 Remark : the same WAR deployed on Tomcat works fine

 Is it something that you already experienced ?

 Regards,

 Charles Moulliard

 Sr. Principal Solution Architect - FuseSource
 Apache Committer

 Blog : http://cmoulliard.blogspot.com
 Twitter : http://twitter.com/cmoulliard
 Linkedin : http://www.linkedin.com/in/charlesmoulliard
 Skype: cmoulliard






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Problems with features startlevel

2011-02-12 Thread Guillaume Nodet
 discussions on this topic?
 /Bengt

 2011/2/12 Guillaume Nodet gno...@gmail.com

 On Sat, Feb 12, 2011 at 00:38, Bengt Rodehav be...@rodehav.com wrote:
  Ok, thanks. Is there any known way to avoid having to modify
  startup.properties
  then?

 Not really at startup.  I think after a restart, things are different
 as the cglib bundle is already installed and should be used by the
 framework.

  (Do you never sleep or are we in different time zones?)

 GMT +1, and not much / enough sleep unfortunately ;-)

  Den 12 feb 2011 00.15 skrev Guillaume Nodet gno...@gmail.com:
 



 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Passing negative integer as first command parameter...

2011-02-17 Thread Guillaume Nodet
Good question.  Not really sure actually.  Can you try
 range ' -1' 10
with a space between the first quote and the minus sign ?

On Thu, Feb 17, 2011 at 20:15, Brad Beck bb...@peoplenetonline.com wrote:
 How do I escape the first parameter to a command when it is a negative number 
 (e.g. range -1 10)?

 I'm currently getting the following error when I try the example above (Karaf 
 2.1.0)...

 Error executing command pfm:range undefined option -1

 -Brad Beck






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Race between Features and Spring/Blueprint XML

2011-03-09 Thread Guillaume Nodet
I've seen lots of problems when using spring-dm, some of them do not
have any clean solution *at all*, especially when using custom
namespace handlers such as camel.  That's why I've been advocating for
using blueprint instead, which works way better.

Camel support for blueprint is much better since 2.6 and 2.7 will
bring another set of improvements, so if that's an option, i'd switch
to blueprint.
Else, well, you can always try to put some Thread.sleep calls at very
finely tuned locations in order to make sure the camel namespace handler
is ready and that all camel components are available.
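For reference, a route moved from Spring-DM to Blueprint stays nearly identical; mainly the root element and camelContext namespace change. A minimal, hypothetical example:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- camel-blueprint (Camel 2.6+) provides the handler for this namespace -->
  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
      <from uri="file:inbox"/>
      <to uri="log:demo"/>
    </route>
  </camelContext>

</blueprint>
```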

On Wed, Mar 9, 2011 at 15:51, Michael Prieß
mailingliste...@googlemail.com wrote:
 Hi all,

 i like to deploy camel-routes and features inside the deploy directory.

  A deployment look like the following:

 - a feature.xml which contain bundles like Apache Camel, Spring with a start
 level definition.
 - and many xml camel-routes which contain configurations for my components.

 Now i have the problem that my camel-routes have the same start level like
 the features.

 Have anyone a good idea how to resolve the problem?

 Regards,

 Michael Priess






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Help me! about Karaf with spring-DM webApp

2011-03-10 Thread Guillaume Nodet
Could you reformat your code ? I think a lot has been lost.  Also,
this is more a question for the users list, so I'm answering there.
Last, but not least, which version of Karaf do you use ? If you don't
use 2.2.0 which has been released, maybe you could try that one ?

On Thu, Mar 10, 2011 at 09:55, stream stream1...@gmail.com wrote:

 there is a problem: a java.lang.NullPointerException always happens
 when I use spring-dm in my webApp project, unless I delete this code
 from my web.xml:

   <context-param>
     <param-name>contextClass</param-name>
     <param-value>org.springframework.osgi.web.context.support.OsgiBundleXmlWebApplicationContext</param-value>
   </context-param>

   <listener>
     <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
   </listener>

 Who can help me?
 I have spent three days...



 exception
 Could not start the servlet context for http context
 [org.ops4j.pax.web.extender.war.internal.WebAppWebContainerContext@cb5d35]
 java.lang.NullPointerException
 at
 org.ops4j.pax.web.service.jetty.internal.JettyServerWrapper.addContext(JettyServerWrapper.java:209)[86:org.ops4j.pax.web.pax-web-jetty:1.0.1]
        at
 org.ops4j.pax.web.service.jetty.internal.JettyServerWrapper.getOrCreateContext(JettyServerWrapper.java:112)[86:org.ops4j.pax.web.pax-web-jetty:1.0.1]
        at
 org.ops4j.pax.web.service.jetty.internal.JettyServerImpl.addServlet(JettyServerImpl.java:137)[86:org.ops4j.pax.web.pax-web-jetty:1.0.1]
        at
 org.ops4j.pax.web.service.jetty.internal.ServerControllerImpl$Started.addServlet(ServerControllerImpl.java:266)[86:org.ops4j.pax.web.pax-web-jetty:1.0.1]
        at
 org.ops4j.pax.web.service.jetty.internal.ServerControllerImpl.addServlet(ServerControllerImpl.java:107)[86:org.ops4j.pax.web.pax-web-jetty:1.0.1]
        at
 org.ops4j.pax.web.service.internal.HttpServiceStarted.registerResources(HttpServiceStarted.java:180)[85:org.ops4j.pax.web.pax-web-runtime:1.0.1]
        at
 org.ops4j.pax.web.service.internal.HttpServiceProxy.registerResources(HttpServiceProxy.java:66)[85:org.ops4j.pax.web.pax-web-runtime:1.0.1]
        at
 org.ops4j.pax.web.extender.war.internal.RegisterWebAppVisitorWC.visit(RegisterWebAppVisitorWC.java:138)[89:org.ops4j.pax.web.pax-web-extender-war:1.0.1]
        at
 org.ops4j.pax.web.extender.war.internal.model.WebApp.accept(WebApp.java:558)[89:org.ops4j.pax.web.pax-web-extender-war:1.0.1]
        at
 org.ops4j.pax.web.extender.war.internal.WebAppPublisher$HttpServiceListener.register(WebAppPublisher.java:170)[89:org.ops4j.pax.web.pax-web-extender-war:1.0.1]






 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Help-me-about-Karaf-with-spring-DM-webApp-tp2659667p2659667.html
 Sent from the Karaf - Dev mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: karaf shell extension with iPojo

2011-03-17 Thread Guillaume Nodet
You can't invoke the command?  Or does the command not appear in the
completion when you press tab?
The former sounds like a bug, but the latter is somewhat expected.

On Thu, Mar 17, 2011 at 13:48, Lindley Andrew andrew.lind...@ait.ac.at wrote:
 Dear all,



 I am using Karaf 2.1.0 and want to extend the shell with my own
 shell-commands.

 There’s a great tutorial showing how to do this with blueprint.

 http://karaf.apache.org/manual/2.1.99-SNAPSHOT/developers-guide/extending-console.html



 I was trying to do this with iPojo but did not succeed. That’s the sample I
 was trying to run.



 public interface SampleTUI {

   // the supported operations on the shell we're extending
   String FUNCTION_STR = "[list]";

   // will be used in the activator to define the namespace within the shell
   String SCOPE = "preserv";

   public void list();
 }

 @Component(name = "sample.addons.api.command.KarafSampleTUI")
 @Provides
 public class KarafSampleTUI implements SampleTUI {

   // the supported operations on the shell we're extending
   @ServiceProperty(name = "osgi.command.function", value = ServiceRegistryTUI.FUNCTION_STR)
   public String[] functions;

   // will be used in the activator to define the namespace within the shell
   @ServiceProperty(name = "osgi.command.scope", value = ServiceRegistryTUI.SCOPE)
   public String scope;

   // these fields are injected
   // @Requires
   // private MyUtils mu;

   @Descriptor("some sample description")
   public void list() {
     System.out.println("testing list");
   }
 }



 <ipojo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="org.apache.felix.ipojo
       http://felix.apache.org/ipojo/schemas/CURRENT/core.xsd
       org.apache.felix.ipojo.extender
       http://felix.apache.org/ipojo/schemas/CURRENT/extender-pattern.xsd"
   xmlns="org.apache.felix.ipojo">

   <instance component="sample.addons.api.command.KarafSampleTUI"/>

 </ipojo>



 It runs perfectly within Felix using gogo, but not in karaf (using Felix).
 Is this specific to gogo?

 I also had a look at apache\felix\ipojo\arch\ and
 apache\felix\ipojo\arch\gogo, but neither works in karaf either.



 Thanks for your support,

 Kr Andrew



-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: How does features:refreshUrl handles snapshot

2011-03-17 Thread Guillaume Nodet
I'll experiment to see if it is possible to detect whether a snapshot has changed.
Could you please raise a JIRA issue for that?

On Thu, Mar 17, 2011 at 17:27, Guillaume Nodet gno...@gmail.com wrote:
 First, refreshUrl only reloads the features descriptors.  It doesn't
 update the features.  We've added the dev:watch command in karaf 2.2.0
 which can automatically update bundles if newer snapshots are
 available in your local repo.   I guess the problem is to detect when
 snapshots have actually changed, but I suppose it could be done by
 looking at the maven metadata.
 Though it might be a bit more costly than checking the local file
 system as this would need several http requests for each snapshot, so
 a sufficient delay should be used between polls.
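 For reference, watching a snapshot bundle with that command looks like
 this (the coordinates are made up; check dev:watch --help for the exact
 options in your version):

```
karaf@root> dev:watch mvn:org.example/my-bundle/1.0-SNAPSHOT
```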

 On Thu, Mar 17, 2011 at 17:12, Dan Tran dant...@gmail.com wrote:
 Hello,

 I am able to install my snaphot features from my company maven
 repository.  When there is new snapshot on my repo,
 features:refreshUrl is able to identify the latest snapshot
 ..-features.xml ( via latest timestamp file ), however after that
 nothing happens.  I am expecting all my SNAPSHOT bundles belong to my
 features.xml to get downloaded and restart.

 is this a bug?

 -Dan




 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Followup on getting felix scr commands to show up in karaf

2011-03-19 Thread Guillaume Nodet
As you noticed, scr 1.6.0 uses the old felix shell, which is different
from gogo, so that won't work unless there's a bridge somewhere.
For scr trunk, I'm not sure why it doesn't work.  Can you first check if
the command is actually registered in the registry using the osgi:ls
karaf command?
If the command is registered but isn't available, that's a bug in the
Karaf console.  If the command isn't registered, I suspect an
exception is being thrown in the ScrCommand#register() method.

On Sat, Mar 19, 2011 at 14:56, Christoper Blunck ch...@thebluncks.com wrote:
 Hmmm...   I don't believe I'm trying to use a karaf feature on top of a gogo
 command.
 I'm just trying to run the scr:list command without using tab completion or
 anything.
 Steps to reproduce:
 1.) Download and untar:
  http://www.apache.org/dyn/closer.cgi/karaf/2.2.0/apache-karaf-2.2.0.tar.gz
 2.) Start up karaf by running cd bin; ./karaf
 3.) Download into the deploy directory:
  http://www.gossipcheck.com/mirrors/apache//felix/org.apache.felix.scr-1.6.0.jar
 4.) type scr:list or scr list at the prompt.  You'll get command not
 found.
 A couple of things I've noticed...
 Here's the source code for the bundle activator for SCR in version 1.6.0:
   http://svn.apache.org/repos/asf/felix/releases/org.apache.felix.scr-1.6.0/src/main/java/org/apache/felix/scr/impl/Activator.java
 Note that in the start() method there's a try-catch around the part where
 the ScrCommand is registered.  I added a debug statement to the catch
 section and found out that there was a NoClassDefFoundError
 on org.apache.felix.shell.Command  when the code was trying to register the
 ScrCommand.  I downloaded org.apache.felix.shell 1.4.2 and put it in my
 deploy directory.  That made the NoClassDefFoundError go away, but it still
 did not make the scr list command work.  I still got a command not
 found.
 Also ... It appears the trunk of SCR has been updated to work with gogo:
 http://svn.apache.org/repos/asf/felix/trunk/scr/src/main/java/org/apache/felix/scr/impl/Activator.java
 The start() method looks different and the comment suggests it now works
 with gogo.
 I pulled the trunk and compiled it.  I then tried to run karaf with
 org.apache.felix.scr-1.6.1-SNAPSHOT.  DS started but I still didn't have my
 scr list command.
 Any ideas?

 -c

 On Sat, Mar 19, 2011 at 3:59 AM, Guillaume Nodet gno...@gmail.com wrote:

 Let's be clear about that.  There's no reason why the scr or any gogo
 commands would not work in karaf.
 What does not work is karaf features on top of gogo such as completion
 (both command and parameter).
 If that's not the case, this should clearly be fixed asap.

 On Sat, Mar 19, 2011 at 02:33, Christoper Blunck ch...@thebluncks.com
 wrote:
  Hello all,
  Over in this thread the gogo'ness of SCR is discussed:
 
   http://karaf.922171.n3.nabble.com/getting-felix-scr-commands-to-show-up-in-karaf-td2257486.html
  I'm stuck in the same problem where my SCR bundle loads and all my
  services
  are properly satisfied and injected but the scr command itself is not
  available.
  Guillame you remarked:
  Actually, I've just made some tests and the commands are functional
  even
  if they don't appear in the tab completion
  I was hoping you could elaborate a little more about this.  Are you
  saying
  that you were able to get to the scr command in the karaf prompt?  Or
  were
  you simply remarking that scr worked wrt injection and that the commands
  just weren't functional?
  I recognize that this ticket is still out there:
   https://issues.apache.org/jira/browse/KARAF-397
  And I see the priority is MAJOR.  Question to the devs:  is this
  something
  you expect will be fixed soon?
  I want to go to Karaf 2.2.0 but the lack of a scr command is going to
  give
  my developers a lot of heartache
 
  Thanks for your time,
 
  -c
 
  --
  Christopher Blunck
  ch...@thebluncks.com
 
 



 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com



 --
 Christopher Blunck
 ch...@thebluncks.com





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Followup on getting felix scr commands to show up in karaf

2011-03-21 Thread Guillaume Nodet
That's the expected behavior.  Commands themselves work, but completion doesn't.

On Sun, Mar 20, 2011 at 02:35, Christoper Blunck ch...@thebluncks.com wrote:
 Hi Guillaume-
 Thanks for giving me some troubleshooting techniques.  I ran the osgi:ls
 command and here is a snippet of the output:
 Apache Felix Declarative Services (41) provides:
 
 org.osgi.service.cm.ConfigurationListener
 org.apache.felix.scr.ScrService
 org.osgi.service.cm.ManagedService
 org.apache.felix.scr.impl.ScrGogoCommand
 That last line suggests to me that the command itself has been registered.
 Interestingly enough scr list fails but scr:list succeeds:
 karaf@root scr list
 Command not found: scr
 karaf@root scr:list
 No components registered

 When I do a tab-tab at the karaf@root prompt I see a bunch of commands I
 can execute (which is expected).  However, scr:list is not in that list.
  Here are the commands starting with s that are available to me:
  set                              shell:cat
 shell:clear                      shell:each                       shell:echo

 shell:exec                       shell:grep                       shell:head

 shell:history                    shell:if                         shell:info

 shell:java                       shell:logout                     shell:more

 shell:new                        shell:printf
 shell:sleep
 shell:sort                       shell:tac                        shell:tail

 show-tree                        shutdown                         sleep

 sort                             ssh                              ssh:ssh

 ssh:sshd                         sshd                             start

 start-level                      stop

 I am really in over my head wrt gogo and the rest of the shell framework
 stuff so I'm not quite sure what to do next.
 Do you have some ideas of what is going on?
 I'd be happy to help you troubleshoot some more but I don't know how much
 more valuable I'd be at this point...
 Thanks again for your time - I do appreciate it.


 -c

 On Sat, Mar 19, 2011 at 3:50 PM, Guillaume Nodet gno...@gmail.com wrote:

  As you noticed, scr 1.6.0 uses the old felix shell, which is different
  from gogo, so that won't work unless there's a bridge somewhere.
  For scr trunk, I'm not sure why it doesn't work.  Can you first check if
  the command is actually registered in the registry using the osgi:ls
  karaf command?
  If the command is registered but isn't available, that's a bug in the
  Karaf console.  If the command isn't registered, I suspect an
  exception is being thrown in the ScrCommand#register() method.

 On Sat, Mar 19, 2011 at 14:56, Christoper Blunck ch...@thebluncks.com
 wrote:
  Hmmm...   I don't believe I'm trying to use a karaf feature on top of a
  gogo
  command.
  I'm just trying to run the scr:list command without using tab completion
  or
  anything.
  Steps to reproduce:
  1.) Download and untar:
 
   http://www.apache.org/dyn/closer.cgi/karaf/2.2.0/apache-karaf-2.2.0.tar.gz
  2.) Start up karaf by running cd bin; ./karaf
  3.) Download into the deploy directory:
 
   http://www.gossipcheck.com/mirrors/apache//felix/org.apache.felix.scr-1.6.0.jar
  4.) type scr:list or scr list at the prompt.  You'll get command not
  found.
  A couple of things I've noticed...
  Here's the source code for the bundle activator for SCR in version
  1.6.0:
 
    http://svn.apache.org/repos/asf/felix/releases/org.apache.felix.scr-1.6.0/src/main/java/org/apache/felix/scr/impl/Activator.java
  Note that in the start() method there's a try-catch around the part
  where
  the ScrCommand is registered.  I added a debug statement to the catch
  section and found out that there was a NoClassDefFoundError
  on org.apache.felix.shell.Command  when the code was trying to register
  the
  ScrCommand.  I downloaded org.apache.felix.shell 1.4.2 and put it in
  my deploy directory.  That made the NoClassDefFoundError go away but it
  still
  did not make the scr list command work.  I still got a command not
  found.
  Also ... It appears the trunk of SCR has been updated to work with gogo:
 
  http://svn.apache.org/repos/asf/felix/trunk/scr/src/main/java/org/apache/felix/scr/impl/Activator.java
  The start() method looks different and the comment suggests it now works
  with gogo.
  I pulled the trunk and compiled it.  I then tried to run karaf with
  org.apache.felix.scr-1.6.1-SNAPSHOT.  DS started but I still didn't have
  my
  scr list command.
  Any ideas?
 
  -c
 
  On Sat, Mar 19, 2011 at 3:59 AM, Guillaume Nodet gno...@gmail.com
  wrote:
 
  Let's be clear about that.  There's no reason why the scr or any gogo
  commands would not work in karaf.
  What does not work is karaf features on top of gogo such as completion
  (both command and parameter).
  If that's not the case, this should clearly be fixed asap.
 
  On Sat, Mar 19, 2011 at 02:33, Christoper Blunck ch...@thebluncks.com
  wrote:
   Hello all,
   Over

Re: Karaf 2.2.0 Startup with no internet Access

2011-03-30 Thread Guillaume Nodet
That's a bug imho; that file should either not be referenced or be included.
Could you please raise a JIRA issue for that?
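Until that is fixed, one possible workaround (an untested sketch) is to make
the mvn: URL resolvable offline by placing the downloaded features file into
the embedded system repository, which follows the standard Maven layout:

```
mkdir -p system/org/apache/karaf/assemblies/features/enterprise/2.2.0
cp enterprise-2.2.0-features.xml \
   system/org/apache/karaf/assemblies/features/enterprise/2.2.0/
```

Alternatively, removing the enterprise repository entry from the
featuresRepositories property in etc/org.apache.karaf.features.cfg should
avoid the resolution attempt at startup.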

On Wed, Mar 30, 2011 at 16:43, Hervé BARRAULT herve.barra...@gmail.com wrote:
 Hi,
 I have downloaded Karaf 2.2.0 and I am trying to start it. (In my case Karaf
 is installed on a clean machine without any Maven installation nor internet
 access.)

 It is impossible to start it because it seems that the features file is not
 in the delivery.

 16:04:43,527 | WARN  | rint Extender: 1 | FeaturesServiceImpl  |
 res.internal.FeaturesServiceImpl  911 | 23 - org.apache.karaf.features.core
 - 2.2.0 | Unable to add features repository
 mvn:org.apache.karaf.assemblies.features/enterprise/2.2.0/xml/features at
 startup
 java.lang.RuntimeException: URL
 [mvn:org.apache.karaf.assemblies.features/enterprise/2.2.0/xml/features]
 could not be resolved.
     at
 org.ops4j.pax.url.mvn.internal.Connection.getInputStream(Connection.java:195)
     at
 org.apache.karaf.features.internal.FeatureValidationUtil.validate(FeatureValidationUtil.java:49)
     at
 org.apache.karaf.features.internal.FeaturesServiceImpl.validateRepository(FeaturesServiceImpl.java:199)
     at
 org.apache.karaf.features.internal.FeaturesServiceImpl.internalAddRepository(FeaturesServiceImpl.java:210)
     at
 org.apache.karaf.features.internal.FeaturesServiceImpl.start(FeaturesServiceImpl.java:909)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
 Method)[:1.6.0_16]
     at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)[:1.6.0_16]
     at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)[:1.6.0_16]
     at java.lang.reflect.Method.invoke(Method.java:597)[:1.6.0_16]
     at
 org.apache.aries.blueprint.utils.ReflectionUtils.invoke(ReflectionUtils.java:226)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.container.BeanRecipe.invoke(BeanRecipe.java:824)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:636)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:724)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:64)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:219)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:147)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:640)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:331)[10:org.apache.aries.blueprint:0.3.0]
     at
 org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:227)[10:org.apache.aries.blueprint:0.3.0]
     at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)[:1.6.0_16]
     at
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)[:1.6.0_16]
     at java.util.concurrent.FutureTask.run(FutureTask.java:138)[:1.6.0_16]
     at
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)[:1.6.0_16]
     at
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)[:1.6.0_16]
     at
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)[:1.6.0_16]
     at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)[:1.6.0_16]
     at java.lang.Thread.run(Thread.java:619)[:1.6.0_16]

 I looked in the system directory and, under org\apache\karaf, I have only the
 following directories:
 admin, deployer, diagnostic, features, jaas, org.apache.karaf.management,
 shell (no assemblies).
 I have also searched for xml files and didn't find any.

 (I will do some tests with this one :
 http://repo2.maven.org/maven2/org/apache/karaf/assemblies/features/enterprise/2.2.0/enterprise-2.2.0-features.xml)

 Am I wrong?

 Thanks for answers
 Herve




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Using custom log4j appenders under Karaf 2.1.4

2011-04-05 Thread Guillaume Nodet
On Tue, Apr 5, 2011 at 23:43, mgardiner gardin...@familysearch.org wrote:
 Hi,

 Are there any examples available showing how to use custom log4j appenders
 under Karaf?  I see in the users guide the following note:

 If you plan to use your own appenders, you need to create an OSGi bundle
 and attach it as a fragment to the bundle with a symbolic name of
 org.ops4j.pax.logging.pax-logging-service. This way, the underlying logging
 system will be able to see and use your appenders.

 We have a custom remoting logging appender we wish to utilize with our
 project hosted in Karaf 2.1.4.

 I am assuming we turn our appenders jar into a bundle with the fragment host
 set to org.ops4j.pax.logging.pax-logging-service and deployed with our
 project.  Is that correct?

Yep, that's correct.
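For reference, the fragment's MANIFEST.MF would contain something like this
(the symbolic name and package are placeholders for your appender bundle):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.logging.appenders
Bundle-Version: 1.0.0
Fragment-Host: org.ops4j.pax.logging.pax-logging-service
Export-Package: com.example.logging.appenders
```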


 How do you recommend handling multiple log4j.xml files for each deployment
 environment, such as development, staging, and production?


Good question.  We don't usually use xml files for log4j but rather
the properties-based configuration.
Take a look at the etc/org.ops4j.pax.logging.cfg file, which is
actually a log4j config file.
That file will be used to configure log4j, so you should only touch
that file and configure it differently between your environments, I think.
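For example, since etc/org.ops4j.pax.logging.cfg uses plain log4j properties
syntax, an appender contributed by the fragment can be referenced directly; a
minimal sketch (the appender class and host property are hypothetical
placeholders, and "out" is the stock file appender already defined in the
shipped file):

```
# add the custom appender to the root logger alongside the stock ones
log4j.rootLogger=INFO, out, remote, osgi:*

# hypothetical custom remote appender provided by the fragment bundle
log4j.appender.remote=com.example.logging.RemoteAppender
log4j.appender.remote.host=logs.staging.example.com
log4j.appender.remote.layout=org.apache.log4j.PatternLayout
log4j.appender.remote.layout.ConversionPattern=%d | %-5.5p | %m%n
```

Keeping one copy of this file per environment and swapping it at deploy time
would be one way to vary the configuration between development, staging, and
production.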

 Thanks.

 -Mike-

 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Using-custom-log4j-appenders-under-Karaf-2-1-4-tp2781811p2781811.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: WAITING on?

2011-04-13 Thread Guillaume Nodet
log:debug | grep BlueprintEvent

It usually helps, but you may need to be at debug level.
We definitely need to provide a better way to find this information.
Feel free to raise a JIRA for that.
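In a Karaf 2.x session that could look like this (a sketch using the standard
log commands, raising the Blueprint logger to DEBUG first):

```
karaf@root> log:set DEBUG org.apache.aries.blueprint
karaf@root> log:display | grep BlueprintEvent
```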

On Wed, Apr 13, 2011 at 20:56, mikevan mvangeert...@comcast.net wrote:
 Is there a console command that will allow you to see what a bundle that is
 active and Waiting is waiting on?


 -
 Mike Van (aka karafman)
 Karaf Team (Contributor)
 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/WAITING-on-tp2817312p2817312.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Karaf shell problem with long lines

2011-04-20 Thread Guillaume Nodet
Yes, and I've fixed those issues on trunk and the 2.2.x branch, though
that's not released yet.

On Wed, Apr 20, 2011 at 23:56, Cristiano Gavião cvgav...@gmail.com wrote:
 Hi,

 I'm experiencing some weird behavior using the Karaf shell in Terminal
 on Mac OS X.

 When I tried to install a feature with a long line pasted inside one
 terminal window (with the size smaller than the copied line) I got a
 RuntimeException...

 See:

 I copied this from a text editor:

 features:addurl
 mvn:org.jbehave.osgi/jbehave-osgi-karaf-features/1.0-SNAPSHOT/xml/features

 and pasted it into the Karaf command shell... the text got truncated

 karaf@root features:addurl
 mvn:org.jbehave.osgi/jbehave-osgi-karaf-features/1.0-SNAPSHOT/xml/featur

 Could not add Feature Repository:
 java.lang.RuntimeException: URL
 [mvn:org.jbehave.osgi/jbehave-osgi-karaf-features/1.0-SNAPSHOT/xml/featur]
 could not be resolved.

 But if I enlarge the window size and try again, everything works ok... :-D

 Could this be a Terminal problem or a Karaf problem?

 cheers

 Cristiano






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com

Connect at CamelOne May 24-26
The Open Source Integration Conference
http://camelone.com/


Re: Installing custom security provider

2011-04-27 Thread Guillaume Nodet
If boot delegation is available for a given package, it will always be
used first.

On Wed, Apr 27, 2011 at 10:21, Zhemzhitsky Sergey
sergey_zhemzhit...@troika.ru wrote:
 Hi there,



 I’m trying to install a custom security provider using this
 http://felix.apache.org/site/65-deploying-security-providers.html link.
 Unfortunately this provider uses xmlsec library which must be placed near
 it.



 I have changed org.osgi.framework.bootdelegation like this:



 org.osgi.framework.bootdelegation=org.apache.karaf.jaas.boot,sun.*,com.sun.*,javax.transaction,javax.transaction.*,
 \

 org.customsec.*, \

 org.apache.xml.security.*



 And now I need all the bundles to delegate requests to
 org.apache.xml.security from the org.osgi.framework.bootdelegation even if
 they import packages from org.apache.xml.security with specific versions.



 Is it possible to achieve?



 Best Regards,

 Sergey Zhemzhitsky








-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com

Connect at CamelOne May 24-26
The Open Source Integration Conference
http://camelone.com/


Re: Continuous Delivery

2011-05-04 Thread Guillaume Nodet
One thing I've been working on recently is a deployment agent for
fabric (see 
http://gnodet.blogspot.com/2011/05/introducing-fusesource-fabric.html).
 We've used a slightly different mechanism as the agent is responsible
for all the deployments so that it can actually upgrade all the
bundles, including karaf bundles and even the osgi framework itself.
This way, the agent can actually upgrade features to newer versions.
The downside is that you need to go through the agent for all
deployments.

On Wed, May 4, 2011 at 17:42, bbolingbroke
bolingbrok...@familysearch.org wrote:
 If your goal is to have zero downtime, then when you upgrade to 1.1, would
 you follow these steps?

 You release 1.1 and deploy it to the maven repo

 features:addurl mvn:org.yourorg/yourproject-features/1.1/xml
 features:install yourapp/1.1
 features:uninstall yourapp/1.0
 features:removeurl mvn:org.yourorg/yourproject-features/1.0/xml


 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Continuous-Delivery-tp2899218p2899341.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com

Connect at CamelOne May 24-26
The Open Source Integration Conference
http://camelone.com/


Re: Show karaf prompt after bundle println

2011-05-14 Thread Guillaume Nodet
The prompt is displayed when the execution of a command is finished.
If you're still executing the command, the prompt won't be displayed,
but you can retrieve it from the shell's session variables and display
it yourself if you need to.

On Sat, May 14, 2011 at 14:32, Samuel Cox crankydi...@gmail.com wrote:
 I tried that and it didn't work.  I will say that my bean doing the
 output is written in Scala.  I'm using println(), which I think is
 equivalent to Java's System.out.println().

 On Sat, May 14, 2011 at 12:38 AM, Jean-Baptiste Onofré j...@nanthrax.net 
 wrote:
 Hi Samuel,

 It should be the case.

 The Karaf shell intercepts the stdout and stderr output streams.
 It means that after a System.out.println(), the Karaf prompt should be
 displayed.
 It's the case in the commands. For instance, osgi:list iterates over the
 bundles and simply displays the bundle attributes using System.out.println().

 Try to add a System.out.println() at the end of your bundle output, it
 should display the prompt just after your bundle output.

 Regards
 JB

 On 05/14/2011 03:10 AM, Samuel Cox wrote:

 Hi,

 Is it possible to get the prompt back after some bundle code does a
 System.out.println()?

 I have some beans that need to print stuff to the Karaf console.  I
 can't figure out how to get the prompt back without the user hitting
 enter.

 Many thanks.





-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com

Connect at CamelOne May 24-26
The Open Source Integration Conference
http://camelone.com/


Re: Show karaf prompt after bundle println

2011-05-17 Thread Guillaume Nodet
As I explained, the prompt only deals with commands.  If you print
anything on the console and want the prompt to be displayed again,
I think the only way would be to create a shell session and then
print the prompt, which can be retrieved by calling
Console#getPrompt(); see
http://svn.apache.org/viewvc/karaf/trunk/shell/console/src/main/java/org/apache/karaf/shell/console/jline/Console.java?revision=1096717&view=markup

I don't really see any other way for now.

Maybe another way would be for you to hack the etc/shell.init.script
and perform your diagnostics there? The script is executed between the
welcome lines and the first prompt...
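For instance, etc/shell.init.script is a plain Karaf shell script, so it
could run the diagnostics just before the first prompt appears; here
mycompany:diagnostics stands for a hypothetical custom command exported by
your bundle:

```
echo "Running startup diagnostics..."
mycompany:diagnostics
```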


On Tue, May 17, 2011 at 15:13, Samuel Cox crankydi...@gmail.com wrote:
 Shameless bump to give this one more chance;)  I've been reading the
 documentation on custom commands, but nothing is jumping out...

 Just to be clear on what I need.  I need to automatically run
 diagnostics after all my OSGi bundles have loaded.  The results need
 to be displayed to the user in Karaf's console.

 The running of the diagnostics is currently triggered by Spring
 bean loading (using an init-method).  In the past, it was in a bundle
 activator.  In neither case does the Karaf prompt show up unless the
 user hits the Enter key.

 It's not the end of the world if I can't get this to work, but it would be 
 nice.

 On Sat, May 14, 2011 at 8:10 AM, Samuel Cox crankydi...@gmail.com wrote:
 I'm not executing a command.  We do have our own commands, and they
 work fine.  This is just a bean in some bundle that displays some
 diagnostics on startup.  If there is a way to execute a command
 automatically when a bundle starts, I could print the diagnostics
 using that.  I couldn't find any info on doing that.

 Thanks for the help btw!

 On Sat, May 14, 2011 at 7:36 AM, Guillaume Nodet gno...@gmail.com wrote:
  The prompt is displayed when the execution of a command is finished.
  If you're still executing the command, the prompt won't be displayed,
  but you can retrieve it from the shell's session variables and display
  it yourself if you need to.

 On Sat, May 14, 2011 at 14:32, Samuel Cox crankydi...@gmail.com wrote:
 I tried that and it didn't work.  I will say that my bean doing the
 output is written in Scala.  I'm using println(), which I think is
 equivalent to Java's System.out.println().

 On Sat, May 14, 2011 at 12:38 AM, Jean-Baptiste Onofré j...@nanthrax.net 
 wrote:
 Hi Samuel,

 It should be the case.

  The Karaf shell intercepts the stdout and stderr output streams.
  It means that after a System.out.println(), the Karaf prompt should be
  displayed.
  It's the case in the commands. For instance, osgi:list iterates over the
  bundles and simply displays the bundle attributes using
  System.out.println().

 Try to add a System.out.println() at the end of your bundle output, it
 should display the prompt just after your bundle output.

 Regards
 JB

 On 05/14/2011 03:10 AM, Samuel Cox wrote:

 Hi,

 Is it possible to get the prompt back after some bundle code does a
 System.out.println()?

 I have some beans that need to print stuff to the Karaf console.  I
 can't figure out how to get the prompt back without the user hitting
 enter.

 Many thanks.





 --
 Cheers,
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com

 Connect at CamelOne May 24-26
 The Open Source Integration Conference
 http://camelone.com/






-- 
Cheers,
Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com

Connect at CamelOne May 24-26
The Open Source Integration Conference
http://camelone.com/


Re: Bundle dependencies

2011-06-10 Thread Guillaume Nodet
      <goal>add-features-to-repo</goal>
      </goals>
      <configuration>
      <descriptors>
      <descriptor>mvn:org.apache.karaf.assemblies.features/standard/2.2.1/xml/features</descriptor>
      <descriptor>mvn:com.myapps/features/${myapps.release}/xml/features</descriptor>
      </descriptors>
      <features>
      <feature>config</feature>
      <feature>ssh</feature>
      <feature>management</feature>
      <feature>jetty</feature>
      <feature>webconsole</feature>
      <feature>war</feature>
      <feature>spring</feature>
      <feature>spring-web</feature>
      <feature>spring-dm</feature>
      <feature>spring-dm-web</feature>
      <feature>my-custom-bundle</feature>
      </features>
      <includeMvnBasedDescriptors>true</includeMvnBasedDescriptors>
      <repository>target/features-repo</repository>
      <addTransitiveFeatures>true</addTransitiveFeatures>
      </configuration>
      </execution>
      </executions>
 </plugin>

 I tried to add the includeMvnBasedDescriptors and
 addTransitiveFeatures tags, but it doesn't seem to fix the problem.

 I also customized the org.ops4j.pax.url.mvn.cfg file, so that
 proxy support is enabled and the 'org.ops4j.pax.url.mvn.settings'
 and 'org.ops4j.pax.url.mvn.localRepository' properties are correctly
 set to my Maven settings and local repository (where my proxy is defined).

 My feature description looks like:

 <feature name="myFeature" version="${myapps.release}" resolver="(obr)">
   <bundle>mvn:org.springframework.ldap/org.springframework.ldap/1.3.0.RELEASE</bundle>
   <bundle>mvn:com.thoughtworks.xstream/com.springsource.com.thoughtworks.xstream/1.3.1</bundle>
   <bundle>mvn:com.myapps/myapps/${myapps.release}</bundle>
 </feature>











-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Running Karaf without PAX Logging

2011-07-17 Thread Guillaume Nodet
Most of Karaf only depends on the slf4j api.  The only dependencies on
pax-logging are related to the management of the logging system itself
(which is understandably tied to the logging system implementation).  If you
remove the shell log commands, you should have no problems.

On Sun, Jul 17, 2011 at 22:40, codeoncoffee codeoncof...@gmail.com wrote:

 Hello,

 I'm trying to run Karaf hosted within a larger application and just can't
 work my way around the class-loading issues with the embedded Log4J classes
 in the PAX Logging implementation.

 Let me take that back, I can work around them, but this whole OSGI layer
 is being implemented as a plugin to a larger application. So I have no
 control to let classes be added to the bootstrap classloader and import
 them
 into the OSGI instance.

 Earlier this year I tried removing PAX and using the simple Felix Log
 Service. I found several areas where Karaf was dependent on PAX logging
 being in place.

  So my question, I guess, is: is there any interest in having Karaf be
  agnostic to the Log Service implementation? Any idea how large a task this would be?
 I
 don't mind contributing back, but at this point I'm very close to moving
 back to raw Felix.



 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Running-Karaf-without-PAX-Logging-tp3177504p3177504.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Running Karaf without PAX Logging

2011-07-25 Thread Guillaume Nodet
The only real use case for not using pax-logging I've heard about is
because people wanted to use logback as a backend.   I think this
might be solved by having a pax-logging implementation based on
logback instead.

On Mon, Jul 25, 2011 at 04:01, Andreas Pieber anpie...@gmail.com wrote:
 Just for interest; any reports by now on this topic? This might be
 interesting for other users too and I think a page in the usermanual (How
 to use Karaf without pax-logging) might be a good idea.
 Kind regards,
 Andreas

 On Sun, Jul 17, 2011 at 23:01, codeoncoffee codeoncof...@gmail.com wrote:

 Excellent. I'll give it a try and report back with steps taken

 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Running-Karaf-without-PAX-Logging-tp3177504p3177529.html
 Sent from the Karaf - User mailing list archive at Nabble.com.





-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Coming to Grips with Blueprint

2011-09-12 Thread Guillaume Nodet
As said, those elements are custom elements made available by projects to
extend the standard blueprint element set with more specific ones.
Each project (camel, activemq, cxf, ...) should document its own namespaces.
Karaf provides a few also (jaas, shell).

On Sat, Sep 10, 2011 at 00:47, Geoffry Roberts geoffry.robe...@gmail.com wrote:

 All,

 I have read about blueprint. I have reviewed the schema for blueprint, I
 have the specification.  I did not find any mention of the use of elements
 like camelContext... or broker... or whatever.  Yet I see these elements
 being used in blueprint files with success.  My question is, Is there any
 documentation that catalogs all this?  What other elements are there?

 Thanks
 --
 Geoffry Roberts




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: EBA and Karaf - Can I install an EBA to KARAF

2011-09-28 Thread Guillaume Nodet
Aries already provides a deployer for eba files iirc.

On Thu, Sep 29, 2011 at 07:29, Jean-Baptiste Onofré j...@nanthrax.net wrote:

 Hi,

 the EBA is an artifact format coming from Apache Aries.

 Currently, we don't have a deployer (polling the deploy folder) for this
 artifact.
 Anyway, it would be good to provide one.
  I will raise a Jira for that.

 Regards
 JB


 On 09/29/2011 02:07 AM, Matt Madhavan wrote:

 Oh BTW,
  I copied the eba to the deploy folder as well! Nothing happens, and no
  log either.


 --
  View this message in context:
  http://karaf.922171.n3.nabble.com/EBA-and-Karaf-Can-I-install-an-EBA-to-KARAF-tp3377768p339.html
 Sent from the Karaf - User mailing list archive at Nabble.com.


 --
 Jean-Baptiste Onofré
 jbono...@apache.org
 http://blog.nanthrax.net
 Talend - http://www.talend.com




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: EBA and Karaf - Can I install an EBA to KARAF

2011-09-29 Thread Guillaume Nodet
Yeah, that would definitely make sense.
The bundle is the application-install one afaik:

http://repo1.maven.org/maven2/org/apache/aries/application/org.apache.aries.application.install/0.3/


On Thu, Sep 29, 2011 at 07:49, Jean-Baptiste Onofré j...@nanthrax.net wrote:

 Thanks for the update Guillaume.

 Anyway, regarding the application-without-isolation feature, I can't see
 any bundle looking like a deployer.
 Maybe we would have just to add the deployer bundle in the feature.

 I will dig around that.

 Regards
 JB


 On 09/29/2011 07:46 AM, Guillaume Nodet wrote:

 Aries already provides a deployer for eba files iirc.

 On Thu, Sep 29, 2011 at 07:29, Jean-Baptiste Onofré j...@nanthrax.net
 mailto:j...@nanthrax.net wrote:

Hi,

the EBA is an artifact format coming from Apache Aries.

Currently, we don't have a deployer (polling the deploy folder) for
this artifact.
Anyway, it would be good to provide one.
I will raise a Jira in that way.

Regards
JB


On 09/29/2011 02:07 AM, Matt Madhavan wrote:

Oh BTW,
I copied the the eba to the deploy folder as well! Noting
happens and no log
neither


--
View this message in context:
 http://karaf.922171.n3.nabble.com/EBA-and-Karaf-Can-I-install-an-EBA-to-KARAF-tp3377768p339.html
 
Sent from the Karaf - User mailing list archive at Nabble.com.


--
Jean-Baptiste Onofré
jbono...@apache.org mailto:jbono...@apache.org

http://blog.nanthrax.net
Talend - http://www.talend.com




 --
 
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com


 --
 Jean-Baptiste Onofré
 jbono...@apache.org
 http://blog.nanthrax.net
 Talend - http://www.talend.com




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: bundles command in 2.2.3

2011-09-30 Thread Guillaume Nodet
It's not really a command afaik.
bundles was resolved to calling getBundles on the bundle context.
You can now do $.context bundles if you want to access the list of
deployed bundles.

On Fri, Sep 30, 2011 at 10:03, Achim Nierbeck bcanh...@googlemail.com wrote:

 Could it be that this command is only available with equinox and not with
 felix?

 just something that crossed my mind with the possibility of verification :)

 2011/9/29 Brad Beck brad.b...@quantumretail.com:
  bundles returned the actual bundle objects which could then be
 interrogated, which is what I really want...
 
  On Sep 29, 2011, at 3:49 PM, Glen Mazza wrote:
 
  osgi:list or list should simply do.
  
  From: Brad Beck [brad.b...@quantumretail.com]
  Sent: Wednesday, September 28, 2011 5:30 PM
  To: user@karaf.apache.org
  Subject: bundles command in 2.2.3
 
  One used to be able to get a list of bundles at the console using
 bundles, the 2.2.3 manual even still references it.
 
  I can't seem to get it to work under 2.2.3. Was this an intentional
 change? If so, is there an alternative?
 
  Thanks,
  -Brad
 
 



 --
 --
 *Achim Nierbeck*


 Apache Karaf http://karaf.apache.org/ Committer  PMC
 OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/
 Committer  Project Lead
 blog http://notizblog.nierbeck.de/




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Add another Jetty Server / OSGI

2011-10-06 Thread Guillaume Nodet
I think pax-web uses factory configurations, so multiple configurations
would lead to multiple http servers.

On Thu, Oct 6, 2011 at 12:02, Jean-Baptiste Onofré j...@nanthrax.net wrote:

 OK, got it.

 Yes it's possible for the Jetty, but I'm not sure for the OSGi HTTP
 service.

 Let me check.

 Regards
 JB


 On 10/06/2011 11:56 AM, Charles Moulliard wrote:

 My idea is to have 2 separate Jetty Servers or one server with by
 example 2 connectors (8181, 8282) to be able to separate
 administration (webconsole, karaf console, ) from camel-cxf,
 camel-jetty that we use in applications and define different level of
 security, logging, 


 On Thu, Oct 6, 2011 at 11:49 AM, Jean-Baptiste Onofréj...@nanthrax.net
  wrote:

 Hi Charles,

 Do you really need another Jetty ?
 I think just a new Jetty connector is enough.

 You can bind a port number for the HTTP service and another port for CXF
 for
 instance.

 WDYT ?

 Regards
 JB

 On 10/06/2011 11:46 AM, Charles Moulliard wrote:


 Hi,

 Is it possible to add a new Jetty Server (different from the one
 provided by default when installing features http or webconsole on
 Karaf) and register it as HTTP OSGI Service to allow by example CXF WS
 to be registered within the Servlet Container of this HTTP Server ? Is
 it something that we can do in blueprint / spring DM or a bundle must
 be created for that (java + xml config files) ?

 Regards,

 Charles Moulliard

 Apache Committer

 Blog : http://cmoulliard.blogspot.com
 Twitter : http://twitter.com/cmoulliard
 Linkedin : 
 http://www.linkedin.com/in/**charlesmoulliardhttp://www.linkedin.com/in/charlesmoulliard
 Skype: cmoulliard


 --
 Jean-Baptiste Onofré
 jbono...@apache.org
 http://blog.nanthrax.net
 Talend - http://www.talend.com


 --
 Jean-Baptiste Onofré
 jbono...@apache.org
 http://blog.nanthrax.net
 Talend - http://www.talend.com




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Running Karaf without PAX Logging

2011-10-14 Thread Guillaume Nodet
The main problem I see with logback is that there's no properties-based
configuration for logback.
That's a real problem imho because we would not be able to leverage the OSGi
ConfigAdmin, which is a key point in order to have a clean interaction point
with all configurations.   If you are aware of anything that would look like
a properties-based configurator for logback, things may change.

That said, I still haven't heard any real reasons to switch to logback.
 The version of log4j that is embedded in pax-logging has been enhanced to
fix some over-synchronization problems (leading to performance issues) and
also to add the features that were needed (like a sifting appender, etc...).

What need do you have for logback ?
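
To make the ConfigAdmin argument concrete, here is a hedged sketch of how a
properties-based logging backend can plug into the OSGi ConfigAdmin service.
The ManagedService API and log4j's PropertyConfigurator are real; the class
name, the register() helper, and the "org.ops4j.pax.logging" PID wiring shown
here are illustrative, not pax-logging's actual implementation.

```java
import java.util.Dictionary;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Properties;

import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

public class LoggingConfigurator implements ManagedService {

    // Register this service so ConfigAdmin pushes us the configuration
    // stored under the given PID (in Karaf: etc/org.ops4j.pax.logging.cfg).
    public void register(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put(Constants.SERVICE_PID, "org.ops4j.pax.logging");
        context.registerService(ManagedService.class.getName(), this, props);
    }

    public void updated(Dictionary<String, ?> config) throws ConfigurationException {
        if (config == null) {
            return; // no configuration yet, keep defaults
        }
        // ConfigAdmin hands over a flat map of key/value pairs, which maps
        // directly onto log4j's PropertyConfigurator input. This is the step
        // that has no equivalent in logback's XML/groovy configurators.
        Properties log4jProps = new Properties();
        for (Enumeration<String> keys = config.keys(); keys.hasMoreElements();) {
            String key = keys.nextElement();
            log4jProps.put(key, String.valueOf(config.get(key)));
        }
        org.apache.log4j.PropertyConfigurator.configure(log4jProps);
    }
}
```

Because the storage behind ConfigAdmin is abstracted, the same key/value
pairs can come from a file, a database, or a remote provisioning system
without the logging bundle changing at all.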

On Fri, Oct 14, 2011 at 12:31, samspy sam.spyc...@gmail.com wrote:

 Is there any ongoing effort for a Logback backend to Karaf?

 I can only find rather old messages in this regard.

 Thanks,
 Sam

 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Running-Karaf-without-PAX-Logging-tp3177504p3421252.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Running Karaf without PAX Logging

2011-10-14 Thread Guillaume Nodet
Maybe we should include your appender by default in pax-logging ?  It sounds
quite useful to me.

On Fri, Oct 14, 2011 at 12:51, Achim Nierbeck bcanh...@googlemail.com wrote:

 Well, the only thing I would like to have would be the appender that
 already does the zipping of rolled-over log files.
 But this can be easily done with a custom appender, and I wrote a blog [1]
 about how to attach it to Karaf, so there's no
 real need to actually get it into pax-logging ;)

 regards, Achim

 [1] - http://nierbeck.de/cgi-bin/weblog_basic/index.php?p=201

 2011/10/14 Guillaume Nodet gno...@gmail.com

 The main problem I see with logback is that there's no properties-based
 configuration for logback.
 That's a real problem imho because we would not be able to leverage the
 OSGi ConfigAdmin, which is a key point in order to have a clean interaction
 point with all configurations.   If you are aware of anything that would
 look like a properties-based configurator for logback, things may change.

 That said, I still haven't heard any real reasons to switch to
 logback.  The version of log4j that is embedded in pax-logging has been
 enhanced to fix some over-synchronization problems (leading to performance
 issues) and also to add the features that were needed (like a sifting
 appender, etc...).

 What need do you have for logback ?


 On Fri, Oct 14, 2011 at 12:31, samspy sam.spyc...@gmail.com wrote:

 Is there any ongoing effort for a Logback backend to Karaf?

 I can only find rather old messages in this regard.

 Thanks,
 Sam

 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Running-Karaf-without-PAX-Logging-tp3177504p3421252.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




 --
 
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com




 --
 --
 *Achim Nierbeck*


 Apache Karaf http://karaf.apache.org/ Committer & PMC
 OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/ Committer &
 Project Lead
 blog http://notizblog.nierbeck.de/




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Running Karaf without PAX Logging

2011-10-14 Thread Guillaume Nodet
On Fri, Oct 14, 2011 at 17:21, ceki c...@qos.ch wrote:


 On Oct 14, 2011 3:47:54 am Guillaume Nodet wrote:
  The main problem I see with logback is that there's no properties
  based configuration for logback.
 
  That's a real problem imho because we would not be able to leverage
  the OSGi ConfigAdmin which is a key point in order to have a clean
  interaction point with all configurations.  If you are aware of
  anything that would look like a properties-based configurator for
  logback, things may change.

 Hello Guillaume,

 I am puzzled by your remark. Logback can be configured via
 configuration files in XML or groovy format. It can also be configured
 programmatically via an API (XML/groovy configuration invokes this API
 underneath). Logback can also be interrogated programmatically. For
 example, you can obtain the list of existing loggers and get (or
 set) the level of any logger.

 Why on earth would you care that logback does not support
 configuration files in properties format? What am I missing?


I do care a lot about logback being configured with properties, to be able to
leverage ConfigAdmin.  It should be *the* way to configure things in OSGi.
 That way, you can distribute the configuration remotely or store it in a DB
or by any other means without having to rewrite all the bundles to leverage
that.  That's the benefit of using a standard service.

You just said the configuration file needs to be xml or groovy, which is
different from a properties file.  For config admin, the input data needs to
be a map of key/value pairs.  I haven't said it was not possible with
logback, just that it does not exist, and I don't have the time and will to
start writing a new configuration mechanism for logback without having any
real need to switch to it.

But if you want to try that, it could be nice.  Though I still haven't heard
the reasons why you want logback instead of pax-logging.



 Best regards,
 --
 Ceki




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Running Karaf without PAX Logging

2011-10-14 Thread Guillaume Nodet
Well, if you really want to go that way, let's do it:
  - faster implementation: I explained we fixed most synchronization issues
which were performance bottlenecks, so I'm not really convinced by that
argument
  - extensive battery of tests: well, log4j is 10 years old and quite stable
;-)
  - logback speaks slf4j: in our case, that does not help, as we'd have to
use an intermediate api anyway because of the other constraints of
pax-logging
  - extensive documentation: maybe that's true, I haven't had many complaints
about that
  - configuration files in xml or groovy: we need key/value pairs, so we
can't leverage those
  - automatic reloading of config files: that's provided by config admin and
we would not use it
  - lilith: I think it can be used anyway
  - conditional processing: we can't use xml or groovy
  - filters: we support MDC in pax-logging, I'm sure we can implement that
if needed
  - sifting appender: it has been added in pax-logging
  - automatic compression of log files: Achim provided the code on github
and I think we can easily add it to pax-logging
  - stack traces: lol, this actually comes from James Strachan who provided
a patch to log4j and we have the same for osgi
  - automatic removal: that's a minor feature we could add too

In summary: most of those reasons may actually be true for log4j, but not
for pax-logging.  I don't have any problems with logback, it's just that I
don't want to spend too much time to integrate it into pax-logging.  You're
very welcome to do it if you want, as long as we keep the same feature set.
  If you really need something in the above list that has not been fixed
yet, you can either fix the problem in pax-logging / log4j, or enhance
pax-logging to change the backend, but I just think the costs are very
different.

To your other reply, you're right, I want to enforce ConfigAdmin.

On Fri, Oct 14, 2011 at 18:04, samspy sam.spyc...@gmail.com wrote:

 Hi Guillaume

 I assume you missed my

 http://karaf.922171.n3.nabble.com/Running-Karaf-without-PAX-Logging-tp3177504p3421434.html
 post, where I linked the published reasons for switching to logback as
 well as stating our own.

 Could you perhaps briefly explain why the backend logging configuration
 file
 needs to be read by non-backend karaf components?

 Thanks again,
 Sam


 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Running-Karaf-without-PAX-Logging-tp3177504p3422108.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Running Karaf without PAX Logging

2011-10-14 Thread Guillaume Nodet
On Fri, Oct 14, 2011 at 18:27, ceki c...@qos.ch wrote:

 On 14/10/2011 5:48 PM, Guillaume Nodet wrote:

 Thank you for your quick response.


  I do care a lot about logback being configured with properties, to be able
 to leverage ConfigAdmin.  It should be *the* way to configure things in
 OSGi.  That way, you can distribute the configuration remotely or store
 it in a DB or by any other means without having to rewrite all the
 bundles to leverage that.  That's the benefit of using a standard service.


 How does any of the above change for the properties format? For log4j, which
 supports the properties format, you still need to invoke PropertyConfigurator on
 the properties (or some URL containing the properties). It would be no
 different with logback, except that you would invoke a different
 configurator.



Yeah, I agree.  I just don't want to write that configurator.




  You just said the configuration file needs to be xml or groovy, which is
 different from a properties file.  For config admin, the input data
 needs to be a map of key/value pairs.  I haven't said it was not
 possible with logback, just that it does not exist, and I don't have the
 time and will to start writing a new configuration mechanism for logback
 without having any real need to switch to it.


  But if you want to try that, it could be nice.  Though I still haven't
 heard the reasons why you want logback instead of pax-logging.


 I am not a Karaf user, at least not yet. I am the founder of both log4j and
 logback projects although I now work mostly on logback. I am just trying to
 understand your use case for properties configuration. My apologies if the
 use case is obvious for Karaf users.


The use case is to provide the configuration through the standard OSGi
ConfigAdmin service.  The main benefit is that the data storage can be
abstracted.  We use files by default as the primary source, but Cellar can
push configuration changes between instances, Fuse Fabric stores them in
Zookeeper, I've also seen people using a JDBC storage for the configuration.
 All these things are mostly interesting when dealing with large deployments,
not for a single instance, I agree.



 Best regards,
 --
 Ceki




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Running Karaf without PAX Logging

2011-10-14 Thread Guillaume Nodet
Yeah, but you lose the ability to easily configure a single logger level or
such.  Fine-grained configuration is much easier imho, but you're right, it
would work too.
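
For reference, "fine-grained configuration" here means something like Karaf's
etc/org.ops4j.pax.logging.cfg, where each line is one ConfigAdmin key/value
pair and a single logger level can be overridden on its own. The root logger
line matches Karaf's default; the category below is just an example:

```properties
# Each line is one key/value pair handed to pax-logging via ConfigAdmin.
log4j.rootLogger=INFO, out, osgi:*
# Overriding a single logger level is a one-line change:
log4j.logger.org.apache.camel=DEBUG
```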

On Fri, Oct 14, 2011 at 19:28, David Jencks david_jen...@yahoo.com wrote:

 It might not fit too well with the felix/karaf idea of installing config
 admin pid configs through .cfg property files, but config admin has no
 problem dealing with a string property that is an entire xml document.  So
 I'd go for a handy way to initialize such config admin properties rather
 than a whole new logback configurator.  Maybe some kind of notation like

 ..file.key=filelocation

 which means the value for key is read from the filelocation.

 thanks
 david jencks

 On Oct 14, 2011, at 9:33 AM, Guillaume Nodet wrote:



 On Fri, Oct 14, 2011 at 18:27, ceki c...@qos.ch wrote:

 On 14/10/2011 5:48 PM, Guillaume Nodet wrote:

 Thank you for your quick response.


  I do care a lot about logback being configure with properties to be able
 to leverage ConfigAdmin.  It should be *the* way to configure things in
 OSGi.  That way, you can distribute the configuration remotely or store
 it in a DB or in any other mean without having to rewrite all the
 bundles to leverage that.  That's the benefit of using a standard
 service.


 How does any of the above change for properties format. For log4j which
 supports properties format, you still need to invoke PropertyConfigurator on
 the properties (or some URL containing the properties). It would be no
 different with logback, except that you would invoke a different
 configurator.



 Yeah, I agree.  I just don't want to write that configurator.




  You just said the configuration file needs to be xml or groovy, which is
 different from a properties file.  For config admin, the input data
 needs to be a map of key/value pairs.  I haven't said it was not
 possible with logback, just that it does not exist, and I don't have the
 time and will to start writing a new configuration mechanism for logback
 without having any real need to switch to it.


  But if you want to try that, it could be nice.  Though I still haven't
 heard the reasons why you want logback instead of pax-logging.


 I am not a Karaf user, at least not yet. I am the founder of both log4j
 and logback projects although I now work mostly on logback. I am just trying
 to understand your use case for properties configuration. My apologies if
 the use case is obvious for Karaf users.


 The use case is to provide the configuration through the standard OSGi
 ConfigAdmin service.  The main benefit is that the data storage can be
 abstracted.  We use files by default as the primary source, but Cellar can
 push configuration changes between instances, Fuse Fabric stores them in
 Zookeeper, I've also seen people using a JDBC storage for the configuration.
  All the things are mostly interesting when dealing with large deployments,
 not for a single instance, I agree.



 Best regards,
 --
 Ceki




 --
 
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 Open Source SOA
 http://fusesource.com





-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Simple way to log all bundle events

2011-10-24 Thread Guillaume Nodet
I think you have to.  The OSGi framework is not mandated to log bundle /
service events, and Felix Framework does not, so you could either use a
BundleListener or an EventHandler and log the events yourself.
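
A minimal sketch of the BundleListener approach mentioned above: the
BundleActivator/BundleListener APIs are the standard OSGi ones, while the
class name and the exact log format are illustrative.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleEvent;
import org.osgi.framework.BundleListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BundleEventLogger implements BundleActivator, BundleListener {

    private static final Logger LOG = LoggerFactory.getLogger(BundleEventLogger.class);

    public void start(BundleContext context) {
        // From now on, every bundle lifecycle event is delivered to bundleChanged().
        context.addBundleListener(this);
    }

    public void stop(BundleContext context) {
        context.removeBundleListener(this);
    }

    public void bundleChanged(BundleEvent event) {
        // getType() is a constant such as BundleEvent.STARTED or BundleEvent.STOPPED;
        // any bundle property (symbolic name, id, version, ...) can be logged here.
        LOG.info("Bundle {} [{}]: event type {}",
                event.getBundle().getSymbolicName(),
                event.getBundle().getBundleId(),
                event.getType());
    }
}
```

Deployed as a small bundle, this logs through pax-logging like any other
component, so the output ends up in the normal Karaf log.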

If you already use Camel, you could also look at
http://gnodet.blogspot.com/2010/09/two-karaf-related-camel-components.html
and use the event component to grab osgi events and log them using the log
component.


On Mon, Oct 24, 2011 at 10:20, Zhemzhitsky Sergey 
sergey_zhemzhit...@troika.ru wrote:

  Hi there, 


 I have to log all bundle events (started, starting, etc.) along with some
 bundle properties, so I'm wondering whether there is a simple way to do it
 without developing a custom BundleListener?


 Best Regards,

 Sergey

 ___



 The information contained in this message may be privileged and
 confidential and protected from disclosure. If you are not the original intended
 recipient, you are hereby notified that any review, retransmission,
 dissemination, or other use of, or taking of any action in reliance upon,
 this information is prohibited. If you have received this communication in
 error, please notify the sender immediately by replying to this message and
 delete it from your computer. Thank you for your cooperation. Troika Dialog,
 Russia.

 If you need assistance please contact our Contact Center (+7495) 258 0500 or
 go to
 www.troika.ru/eng/Contacts/system.wbp






-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Is it possible to authorize a local karaf client without providing login information?

2011-10-25 Thread Guillaume Nodet
Fwiw, we'd have to also enhance our ssh server integration to support
certificate based authentication.  The sshd code itself can support that,
but we'd have to implement an org.apache.sshd.server.PublickeyAuthenticator
and provide the needed means to manage the certificates in Karaf.
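
A hedged sketch of what such an enhancement could look like. The
org.apache.sshd.server.PublickeyAuthenticator interface and its signature are
the real sshd (0.x) API described above; the class name and the
loadAuthorizedKeys() helper are hypothetical placeholders for whatever key
store Karaf would manage.

```java
import java.security.PublicKey;
import java.util.Collection;
import java.util.Collections;

import org.apache.sshd.server.PublickeyAuthenticator;
import org.apache.sshd.server.session.ServerSession;

public class KarafPublickeyAuthenticator implements PublickeyAuthenticator {

    // Called by the sshd server for each public-key authentication attempt.
    public boolean authenticate(String username, PublicKey key, ServerSession session) {
        for (PublicKey authorized : loadAuthorizedKeys(username)) {
            if (authorized.equals(key)) {
                return true;
            }
        }
        return false;
    }

    // Hypothetical helper: in Karaf this could read keys from a managed store
    // (similar to etc/users.properties). Left empty in this sketch.
    private Collection<PublicKey> loadAuthorizedKeys(String username) {
        return Collections.emptyList();
    }
}
```

The remaining work Guillaume alludes to is exactly this helper: deciding
where the authorized certificates live and how they are administered.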

On Tue, Oct 25, 2011 at 10:50, Jean-Baptiste Onofré j...@nanthrax.net wrote:

 Correct, the release is not available, and no, it's not yet scheduled.

 Feel free to create the Jira (or I will do it if you want).

 Regards
 JB


 On 10/25/2011 10:45 AM, mst wrote:

 Hi JB,

 thanks for the quick response.
 I guess ...we can enhance the client... means that this feature is not
 available right now. Is it scheduled for a certain release? Should I
 create
 an issue?

 Best
 Markus


 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Is-it-possible-to-authorize-a-local-karaf-client-without-providing-login-information-tp3450816p3450846.html
 Sent from the Karaf - User mailing list archive at Nabble.com.


 --
 Jean-Baptiste Onofré
 jbono...@apache.org
 http://blog.nanthrax.net
 Talend - http://www.talend.com




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Aries 0.1 - Latest Karaf version I can use

2011-10-27 Thread Guillaume Nodet
You could try with Karaf 2.0.0, as 2.1.0 is using Aries 0.2.  Though I don't
really remember if 2.1 could work with Aries 0.1.
To try it, you could download a Karaf version, launch it and update the Aries
bundles with the 0.1 version to see if it still works.

On Thu, Oct 27, 2011 at 21:23, Matt Madhavan mattmadha...@gmail.com wrote:

 Hello,
 Due to my client's target OSGi environment (IBM WAS 7.1) I'm stuck with
 Aries 0.1.

 I'm now trying to create a target environment with KARAF and Aries 0.1
 bundles from IBM!

 Can someone let me know what's the latest Karaf version I can use that will
 work with Aries 0.1? I would also like to have Camel installed.

 Thanks in Advance!

 Matt Madhavan


 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Aries-0-1-Latest-Karaf-version-I-can-use-tp3458809p3458809.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Aries 0.1 - Latest Karaf version I can use

2011-10-27 Thread Guillaume Nodet
Camel is not really dependent on Karaf, so any version should work.
 However, camel-blueprint will have some requirements on Aries (so you may not
be able to use that part).  The commands that have been added recently will of
course not work either.

On Thu, Oct 27, 2011 at 22:32, Matt Madhavan mattmadha...@gmail.com wrote:

 Hi Guillaume,
 I could make it work with 2.0.0 but now Camel does not like Karaf 2.0.0.

 Any idea which version of Camel will work with Karaf 2.0.0?

 Man IBM!

 Thanks
 Matt


 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Aries-0-1-Latest-Karaf-version-I-can-use-tp3458809p3459051.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Aries and Spring Co-Existance in Karaf

2011-11-01 Thread Guillaume Nodet
You can use OSGi services for that.  OSGi services can be exported and
imported irrespective of the underlying technology used.
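
The point above can be sketched with the plain OSGi service registry API: the
consumer has no idea which DI framework (Blueprint, Spring DM, Guice, or
plain Java) created the service object. The class and method names here are
illustrative; only the BundleContext calls are the real API.

```java
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class ServiceBridgeExample {

    // Provider side: publish any object under an interface name. A Blueprint
    // <service> or Spring DM <osgi:service> element does the same thing.
    public static void publish(BundleContext context, Runnable service) {
        context.registerService(Runnable.class.getName(), service, null);
    }

    // Consumer side: look the service up by interface, unaware of how it was
    // created. A Blueprint <reference> element does the same thing.
    public static void consume(BundleContext context) {
        ServiceReference ref = context.getServiceReference(Runnable.class.getName());
        if (ref != null) {
            Runnable service = (Runnable) context.getService(ref);
            service.run();
            context.ungetService(ref);
        }
    }
}
```

This is why mixing Aries Blueprint bundles with a Spring DM bundle works:
they all meet at the service registry.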

On Tue, Nov 1, 2011 at 13:35, Raman Gupta rocketra...@gmail.com wrote:

 On 11/01/2011 06:05 AM, Ioannis Canellos wrote:
  Let's not confuse blueprint with spring. Blueprint is
  a declarative way to work with OSGi services and Spring is a framework
  for creating applications.
  I don't think that Aries has the same focus with Spring but with
 SpringDM.
 
  You can always use both, if you have to go with Spring.
 
  If I had to use Spring, I would use it only where its necessary and
  for managing services etc I would use Aries.
  Example:
  In Cellar 90% of the modules use Aries, but there is a single module
  that uses Spring/SpringDM. We don't have any problem with that.

 What would have been nice is if Blueprint provided a way, out of the
 box, to expose beans created by Spring or Guice to the Blueprint
 context. That way, one could use the DI framework of choice /
 annotations inside a bundle, while consistently using Blueprint as a
 microservice layer. I'm surprised the Blueprint spec developers didn't
 consider interop with existing DI frameworks as a first class spec
 item. I suppose such functionality could still be implemented as a
 Blueprint extension for each DI framework.

 Regards,
 Raman Gupta
 VIVO Systems
 http://vivosys.com




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: shell commands user roles

2011-11-02 Thread Guillaume Nodet
Not really: while that could be implemented for commands, the problem is
that the command line also allows introspection and scripting, and
authorization can't easily be done at that level, so the console would not
be totally secured anyway.

On Wed, Nov 2, 2011 at 16:25, rrsavage rrsav...@hotmail.com wrote:

 I'm new to Karaf and have a question about user access control within the
 (SSH) shell.  Is there a way to define more granular level of user access
 to
 see (list/autocomplete) and execute commands via the (SSH) shell?  For
 example, can certain commands be restricted to a configured set of user
 roles via the command's name or scope?

 Thanks, Robert


 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/shell-commands-user-roles-tp3474148p3474148.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: shell commands user roles

2011-11-02 Thread Guillaume Nodet
We'd have to keep anything we do on role-based access consistent with the
web console / jmx management layers.

On Wed, Nov 2, 2011 at 17:16, Achim Nierbeck bcanh...@googlemail.com wrote:

 Hi JB, Robert

 sounds like a reasonable User/Role feature for Karaf,
 would be interesting to see what roles we have here,
 a full accessible admin,
 a user level,
 do we need more distinct levels, like for example
 features, web, that correspond to the std. feature sets we have?

 @Robert could you open a Jira issue for that feature request :)

 regards, Achim


 2011/11/2 Jean-Baptiste Onofré j...@nanthrax.net

 Hi Robert,

 it's not possible for now but it's a good idea. We have something similar
 in Apache Kalumet (called AccessList).

 It's a good new feature for Karaf 3.0.

 Regards
 JB


 On 11/02/2011 04:58 PM, rrsavage wrote:

 Really what I'm after is a two-level access system.  An admin level that
 has full access to all commands, scripting, introspection, etc.  And a
 user level of access that perhaps only provides access to a limited number
 of commands.  Additionally, user-level access would disallow scripting and
 introspection capabilities.   Is this a reasonable approach and is it even
 possible?

 Thanks, Robert

 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/shell-commands-user-roles-tp3474148p3474241.html
 Sent from the Karaf - User mailing list archive at Nabble.com.


 --
 Jean-Baptiste Onofré
 jbono...@apache.org
 http://blog.nanthrax.net
 Talend - http://www.talend.com




 --
 --
 *Achim Nierbeck*


  Apache Karaf http://karaf.apache.org/ Committer & PMC
  OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/ Committer &
  Project Lead
 blog http://notizblog.nierbeck.de/




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Seamlessly switching from Felix and Equinox - data folder

2011-11-03 Thread Guillaume Nodet
That's not really possible because each framework stores its data in a
proprietary format.
It's not something I would advise doing anyway, unless you're still in
development mode, in which case you may want to take a look at pax-exam.

On Thu, Nov 3, 2011 at 22:15, Matt Madhavan mattmadha...@gmail.com wrote:

 Hello,
 My understanding is that when I switch OSGi runtimes the data folder becomes
 stale, and I have to delete the data folder content and start
 installing my features and bundles again.

 Can I preserve the state of my Karaf between OSGi runtime switches? Will
 save some time!

 Thanks
 Matt

 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Seamlessly-switching-from-Felix-and-Equinox-data-folder-tp3478439p3478439.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Fetching feature from a nexus repository

2011-11-15 Thread Guillaume Nodet
I think in some cases, the error message can be misleading.  Can you check
that the jar returned by your nexus is a real jar and not an error page?

On Tue, Nov 15, 2011 at 13:02, Michael Prieß mailingliste...@googlemail.com
 wrote:

 Hello,

 my Karaf installation is behind a firewall, so I have to use my own
 Maven repository (Nexus) in the same subnet to fetch the wrapper
 feature.

 So I changed the file org.ops4j.pax.url.mvn.cfg, uncommented the other
 repository and added my repository:

 org.ops4j.pax.url.mvn.repositories= \
http://myRepro/content/groups/public/

 If I now run feature:install wrapper I get the following error:

 Manifest not present in the first entry of the zip
 mvn:org.apache.karaf.shell/org.apache.karaf.shell.wrapper/2.2.4

 Does anyone have an idea how to fix this failure?

 Regards,

 Michael




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: OSGI Fragment and Logging

2011-11-16 Thread Guillaume Nodet
You don't need to import that package because your fragment will be
attached to the pax-logging-service bundle, which contains that class, so
it will be directly available.

On Wed, Nov 16, 2011 at 15:27, Hervé BARRAULT herve.barra...@gmail.com wrote:

 Hi Achim
 My problem is that the fragment can't be resolved, as the package
 org.apache.log4j.helpers can't be imported.
 I thought Pax Logging exported it but it does not. Adding all the
 log4j API again is not, I think, the best idea.

 Regards

 Hervé

 On 11/16/11, Achim Nierbeck bcanh...@googlemail.com wrote:
  Hi Hervé
 
  thanx for the hint.
  Did you also take a look at the configuring part?
  Because to get things going in Karaf/ServiceMix you need to make sure your
  fragment is
  available before the host bundle enters the resolved state.
 
  regards, Achim
 
  2011/11/16 Hervé BARRAULT herve.barra...@gmail.com
 
  Thanks for the answer.
   I have seen your blog but my case is more a problem of dependency
   management than of building a fragment itself.
  
   PS: be careful, in your blog, the closing and opening xml tags don't match:
   <Import-Package>!*</import>
   <Fragment-Host>org.ops4j.pax.logging.pax-logging-service</fragment>
 
  On 11/16/11, Achim Nierbeck bcanh...@googlemail.com wrote:
   Hi
  
   I wrote a blog about it [1]
  
   regards, Achim
  
   [1] - http://nierbeck.de/cgi-bin/weblog_basic/index.php?p=201
  
   2011/11/16 Hervé BARRAULT herve.barra...@gmail.com
  
   Hi,
  
   I am using ServiceMix 4.3.0-fuse-01-00 so Karaf
   
 
 http://fusesource.com/wiki/display/ProdInfo/Fuse+Karaf+v2.0.0+Release+Notes
  
   2.0.0-fuse-00-00
  
   I have seen the following documentation :
   http://karaf.apache.org/manual/2.2.2/users-guide/logging-system.html
  
   Especially, the Using your own appenders section.
  
   I have tried to do something like the documentation but i have an
   issue.
  
   For my application i need to use the CountingQuietWriter class which
 is
  in
   the org.apache.log4j.helpers package.
   But it seems that pax-logging-api does not export this package.
  
   What is the best way to import this package ?
  
   For information (i know it is an old version) but pax-logging-api
  [1.5.2]
   export log4j 1.2.15 packages (looking to the export-package)
   and pax-logging-service [1.5.2] embeds a log4j 1.2.16 (looking to the
  pom)
  
   Thanks For Answers
  
   Hervé
  
  
  
  
   --
   *Achim Nierbeck*
  
   Apache Karaf http://karaf.apache.org/ Committer  PMC
   OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/
 Committer
  
   Project Lead
   blog http://notizblog.nierbeck.de/
  
 
 
 
 
  --
  *Achim Nierbeck*
 
  Apache Karaf http://karaf.apache.org/ Committer  PMC
  OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/ Committer
 
  Project Lead
  blog http://notizblog.nierbeck.de/
 




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: OSGI Fragment and Logging

2011-11-16 Thread Guillaume Nodet
Try adding <Import-Package>!*</Import-Package> in your pom
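
In pom terms, combined with the Fragment-Host mentioned earlier in the
thread, that suggestion would look roughly like the sketch below. The plugin
coordinates are the standard maven-bundle-plugin ones; the instruction values
come from this thread:

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- Attach the fragment to the pax-logging backend bundle -->
      <Fragment-Host>org.ops4j.pax.logging.pax-logging-service</Fragment-Host>
      <!-- Import nothing: packages resolve through the host's class space -->
      <Import-Package>!*</Import-Package>
    </instructions>
  </configuration>
</plugin>
```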

On Wed, Nov 16, 2011 at 15:41, Hervé BARRAULT herve.barra...@gmail.comwrote:

 Hi Guillaume,
 I am using the maven-bundle-plugin (1.4.0), and I don't find a way to
 build a bundle without importing all the needed packages.

 Regards

 Hervé

 On 11/16/11, Guillaume Nodet gno...@gmail.com wrote:
  You don't need to import that package because your fragment will be
  associated to the pax-logging-service bundle which contain that class, so
  it will be directly available.
 
  On Wed, Nov 16, 2011 at 15:27, Hervé BARRAULT
  herve.barra...@gmail.comwrote:
 
  Hi Achim
  My problem is the fragment is not able to be resolved as the package
  org.apache.log4j.helpers can't be imported.
  I thought Pax Logging export it but it does not. Adding again all the
  log4j API is not i think the best idea.
 
  Regards
 
  Hervé
 
  On 11/16/11, Achim Nierbeck bcanh...@googlemail.com wrote:
   Hi Hervé
  
   thanx for the hint.
   Did you also take a look a the configuring part?
   Cause to get things going in Karaf/ServiceMix you need to make sure
 your
   fragment is
   available before the host bundle enters the resolved state.
  
   regards, Achim
  
   2011/11/16 Hervé BARRAULT herve.barra...@gmail.com
  
   Thanks for the answer.
   I have seen your blog but my case is more a problem of dependency
   management than building a fragment itself.
  
   PS: be careful, in your blog,  close and open xml tag don't match :
   import -Package!*/import
   fragment -Hostorg.ops4j.pax.logging.pax-logging-service/fragment
  
   On 11/16/11, Achim Nierbeck bcanh...@googlemail.com wrote:
Hi
   
I wrote a blog about it [1]
   
regards, Achim
   
[1] - http://nierbeck.de/cgi-bin/weblog_basic/index.php?p=201
   
2011/11/16 Hervé BARRAULT herve.barra...@gmail.com
   
Hi,
   
I am using using ServiceMix 4.3.0-fuse-01-00 so Karaf

  
 
 http://fusesource.com/wiki/display/ProdInfo/Fuse+Karaf+v2.0.0+Release+Notes
   
2.0.0-fuse-00-00
   
I have seen the following documentation :
   
 http://karaf.apache.org/manual/2.2.2/users-guide/logging-system.html
   
Especially, the Using your own appenders section.
   
I have tried to do something like the documentation but i have an
issue.
   
For my application i need to use the CountingQuietWriter class
 which
  is
   in
the org.apache.log4j.helpers package.
But it seems that pax-logging-api does not export this package.
   
What is the best way to import this package ?
   
For information (i know it is an old version) but pax-logging-api
   [1.5.2]
export log4j 1.2.15 packages (looking to the export-package)
and pax-logging-service [1.5.2] embeds a log4j 1.2.16 (looking to
the
   pom)
   
Thanks For Answers
   
Hervé
   
   
   
   
--
*Achim Nierbeck*
   
Apache Karaf http://karaf.apache.org/ Committer  PMC
OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/
  Committer
   
Project Lead
blog http://notizblog.nierbeck.de/
   
  
  
  
  
   --
   *Achim Nierbeck*
  
   Apache Karaf http://karaf.apache.org/ Committer  PMC
   OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/
 Committer
  
   Project Lead
   blog http://notizblog.nierbeck.de/
  
 
 
 
 
  --
  
  Guillaume Nodet
  
  Blog: http://gnodet.blogspot.com/
  
  Open Source SOA
  http://fusesource.com
 




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Bouncy Castle JCE with Felix

2011-12-01 Thread Guillaume Nodet
It definitely looks like a bug in Felix.  Please raise a JIRA for it.

On Thu, Dec 1, 2011 at 17:08, Caspar MacRae ear...@gmail.com wrote:

 Hi,

 I've an issue with Bouncy Castle JCE running with Felix (I found this while
 trying to extend our custom Karaf distro, it seems to be a problem with
 Karaf 2.2.0 (Felix 3.0.8) through to 2.2.4 (Felix 3.0.9) but doesn't happen
 when I use Equinox.


 Could not create framework: java.lang.ArrayIndexOutOfBoundsException: -1
 java.lang.ArrayIndexOutOfBoundsException: -1
     at java.util.ArrayList.get(ArrayList.java:324)
     at
 org.apache.felix.framework.BundleImpl.getCurrentModule(BundleImpl.java:1050)
     at
 org.apache.felix.framework.BundleImpl.getSymbolicName(BundleImpl.java:859)
     at org.apache.felix.framework.Felix.toString(Felix.java:1019)
     at org.apache.felix.framework.Logger.doLog(Logger.java:128)
     at org.apache.felix.framework.Logger._log(Logger.java:181)
     at org.apache.felix.framework.Logger.log(Logger.java:114)
     at
 org.apache.felix.framework.ExtensionManager.init(ExtensionManager.java:201)
     at org.apache.felix.framework.Felix.init(Felix.java:374)
     at
 org.apache.felix.framework.FrameworkFactory.newFramework(FrameworkFactory.java:28)
     at org.apache.karaf.main.Main.launch(Main.java:266)
     at org.apache.karaf.main.Main.main(Main.java:427)


 Steps to reproduce:

 wget
 http://www.apache.org/dyn/closer.cgi/karaf/2.2.4/apache-karaf-2.2.4.tar.gz

 tar -xvzf  apache-karaf-2.2.4.tar.gz

 cd apache-karaf-2.2.4/

 # Assuming you've got bcprov-jdk16-1.46.jar in your maven repo

 cp ~/.m2/repository/org/bouncycastle/bcprov-jdk16/1.46/bcprov-jdk16-1.46.jar
 ./lib/ext/

 nano etc/custom.properties    # Add the following:

 org.osgi.framework.system.packages.extra =  \
    org.bouncycastle.math.ec;version=1.46; \
    org.bouncycastle.jce.provider;version=1.46;
 org.apache.felix.karaf.security.providers =
 org.bouncycastle.jce.provider.BouncyCastleProvider
 org.osgi.framework.bootdelegation = org.bouncycastle.*;

 ./bin/karaf
 # It exits immediately with the stacktrace above


 Am I doing something incredibly stupid or should a bug be raised (with Karaf
 or Felix)?


 thanks,
 Caspar




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

Open Source SOA
http://fusesource.com


Re: Karaf - managing Camel routes startup behavior

2012-01-23 Thread Guillaume Nodet
You should listen to CamelContext objects being registered as OSGi services.
If they all have autostart = false, you could then manually control them.
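As a sketch of that idea (route ids and URIs below are made up, not from the thread), a Blueprint-based Camel context with automatic startup disabled would look like this:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- autoStartup="false": the routes are registered but not started -->
  <camelContext id="sampleContext" autoStartup="false"
                xmlns="http://camel.apache.org/schema/blueprint">
    <route id="sampleRoute">
      <from uri="file:inbox"/>
      <to uri="log:sample"/>
    </route>
  </camelContext>

</blueprint>
```

A controller bundle can then track CamelContext services in the OSGi registry and call startRoute("sampleRoute") on each context once its prerequisites are satisfied.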

On Mon, Jan 23, 2012 at 10:51, kmoens kris_mo...@yahoo.com wrote:
 Hi JB,

 thanks for the idea. Not sure this covers our needs, we want to control
 automatically the startup of deployed routes at both boottime of karaf and
 deploy time of a new bundle containing routes.

 In this case the bean you proposed should be listening on 'deploy events' of
 bundles having routes in spring xml files, how to achieve that?

 When I activate trace, I see the follwing messages logged:

 13:41:18,915 | INFO  | SpringOsgiExtenderThread-3 |
 OsgiBundleXmlApplicationContext | 47 - org.springframework.context -
 3.0.5.RELEASE | Refreshing
 OsgiBundleXmlApplicationContext(bundle=com.my.samples.Sample1,
 config=osgibundle:/META-INF/spring/*.xml): startup date [Wed Jan 18 13:41:18
 CET 2012]; root of context hierarchy
 13:41:18,915 | TRACE | SpringOsgiExtenderThread-3 |
 OsgiBundleResourcePatternResolver | 60 - org.springframework.osgi.io - 1.2.1
 | Found root resources for [osgibundle:/META-INF/spring/] :{URL
 [bundle://138.0:0/META-INF/spring/]}
 13:41:18,915 | TRACE | SpringOsgiExtenderThread-3 |
 OsgiBundleResourcePatternResolver | 60 - org.springframework.osgi.io - 1.2.1
 | Resolved location pattern [osgibundle:/META-INF/spring/*.xml] to resources
 [URL [bundle://138.0:0/META-INF/spring/Sample1.xml]]

 I was wondering if we could find a way to extend the spring deploy process
 somehow and add our condition.

 BRs,
 Kris


 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Karaf-managing-Camel-routes-startup-behavior-tp3669403p3681405.html
 Sent from the Karaf - User mailing list archive at Nabble.com.



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Logging using log4j filters

2012-01-30 Thread Guillaume Nodet
The filter support has been added in pax-logging.
Have a look at
   
https://github.com/ops4j/org.ops4j.pax.logging/blob/master/pax-logging-service/src/main/java/org/apache/log4j/PaxLoggingConfigurator.java

You may very well be right that the order isn't kept, which would
definitely be a bug.

On Mon, Jan 30, 2012 at 10:17, Bengt Rodehav be...@rodehav.com wrote:
 I have the following configuration in my org.ops4j.pax.logging.cfg:

 # Per bundle log at INFO level
 log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
 log4j.appender.bundle.key=bundle.name
 log4j.appender.bundle.default=karaf
 log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
 log4j.appender.bundle.appender.MaxFileSize=10MB
 log4j.appender.bundle.appender.MaxBackupIndex=2
 log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
 log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p
 | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
 log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name\\}.log
 log4j.appender.bundle.appender.append=true
 log4j.appender.bundle.threshold=INFO

 # TRACE level for specific bundle - should normally be disabled
 log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
 log4j.appender.bundle_trace.key=bundle.name
 log4j.appender.bundle_trace.default=karaf
 log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
 log4j.appender.bundle_trace.appender.MaxFileSize=20MB
 log4j.appender.bundle_trace.appender.MaxBackupIndex=1
 log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
 log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601} |
 %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
 log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\\{bundle.name\\}.log
 log4j.appender.bundle_trace.appender.append=true
 log4j.appender.bundle_trace.threshold=TRACE
 log4j.appender.bundle_trace.filter.accept=org.apache.log4j.varia.StringMatchFilter
 log4j.appender.bundle_trace.filter.accept.StringToMatch=bunde.name:org.apache.camel.camel-core
 log4j.appender.bundle_trace.filter.accept.AcceptOnMatch=false
 log4j.appender.bundle_trace.filter.deny=org.apache.log4j.varia.DenyAllFilter

 The intention is to have bundle specific logs at INFO level but have a
 separate TRACE log for a specific bundle. The latter is not enabled by
 default but only when debugging.

 The problem is that the DenyAllFilter seems to take precedence over the
 StringMatchFilter. I believe that when listed in the order I do, the bundle
 with the name org.apache.camel.camel-core should be logged at TRACE level
 but no other bundles. Could it be that the ordering of filters are not
 preserved? I think that native log4j only supports filters when using XML
 configuration and I assume that the Karaf filtering support has been added
 on top of log4j (or is it in Pax-logging)? Has the ordering of filters been
 taken into account?

 I've been testing this on Karaf 2.2.0 with Pax logging 1.6.0.

 /Bengt



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Logging using log4j filters

2012-01-30 Thread Guillaume Nodet
Actually, the filters support is built into log4j, but if there's
really a problem we can always fix it in pax-logging until the patch
is released in log4j.

On Mon, Jan 30, 2012 at 10:21, Guillaume Nodet gno...@gmail.com wrote:
 The filter support has been added in pax-logging.
 Have a look at
   
 https://github.com/ops4j/org.ops4j.pax.logging/blob/master/pax-logging-service/src/main/java/org/apache/log4j/PaxLoggingConfigurator.java

 You may very well be right that the order isn't kept, which would
 definitely be a bug.

 On Mon, Jan 30, 2012 at 10:17, Bengt Rodehav be...@rodehav.com wrote:
 I have the following configuration in my org.ops4j.pax.logging.cfg:

 # Per bundle log at INFO level
 log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
 log4j.appender.bundle.key=bundle.name
 log4j.appender.bundle.default=karaf
 log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
 log4j.appender.bundle.appender.MaxFileSize=10MB
 log4j.appender.bundle.appender.MaxBackupIndex=2
 log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
 log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p
 | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
 log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name\\}.log
 log4j.appender.bundle.appender.append=true
 log4j.appender.bundle.threshold=INFO

 # TRACE level for specific bundle - should normally be disabled
 log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
 log4j.appender.bundle_trace.key=bundle.name
 log4j.appender.bundle_trace.default=karaf
 log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
 log4j.appender.bundle_trace.appender.MaxFileSize=20MB
 log4j.appender.bundle_trace.appender.MaxBackupIndex=1
 log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
 log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601} |
 %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
 log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\\{bundle.name\\}.log
 log4j.appender.bundle_trace.appender.append=true
 log4j.appender.bundle_trace.threshold=TRACE
 log4j.appender.bundle_trace.filter.accept=org.apache.log4j.varia.StringMatchFilter
 log4j.appender.bundle_trace.filter.accept.StringToMatch=bunde.name:org.apache.camel.camel-core
 log4j.appender.bundle_trace.filter.accept.AcceptOnMatch=false
 log4j.appender.bundle_trace.filter.deny=org.apache.log4j.varia.DenyAllFilter

 The intention is to have bundle specific logs at INFO level but have a
 separate TRACE log for a specific bundle. The latter is not enabled by
 default but only when debugging.

 The problem is that the DenyAllFilter seems to take precedence over the
 StringMatchFilter. I believe that when listed in the order I do, the bundle
 with the name org.apache.camel.camel-core should be logged at TRACE level
 but no other bundles. Could it be that the ordering of filters are not
 preserved? I think that native log4j only supports filters when using XML
 configuration and I assume that the Karaf filtering support has been added
 on top of log4j (or is it in Pax-logging)? Has the ordering of filters been
 taken into account?

 I've been testing this on Karaf 2.2.0 with Pax logging 1.6.0.

 /Bengt



 --
 
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 FuseSource, Integration everywhere
 http://fusesource.com



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Logging using log4j filters

2012-01-30 Thread Guillaume Nodet
No, the support has been added in log4j:
   http://svn.apache.org/viewvc?view=revision&revision=821430

On Mon, Jan 30, 2012 at 10:30, Bengt Rodehav be...@rodehav.com wrote:
 Hello Guillaume,

 Doesn't the filter support in log4j require XML configuration (not
 properties file)? If so, then I assume that Pax-logging has added the
 possibility to use filters using a properties file configuration.

 /Bengt


 2012/1/30 Guillaume Nodet gno...@gmail.com

 Actually, the filters support is built into log4j, but if there's
 really a problem we can always fix it in pax-logging until the patch
 is released in log4j.

 On Mon, Jan 30, 2012 at 10:21, Guillaume Nodet gno...@gmail.com wrote:
  The filter support has been added in pax-logging.
  Have a look at
 
  https://github.com/ops4j/org.ops4j.pax.logging/blob/master/pax-logging-service/src/main/java/org/apache/log4j/PaxLoggingConfigurator.java
 
  You may very well be right that the order isn't kept, which would
  definitely be a bug.
 
  On Mon, Jan 30, 2012 at 10:17, Bengt Rodehav be...@rodehav.com wrote:
  I have the following configuration in my org.ops4j.pax.logging.cfg:
 
  # Per bundle log at INFO level
  log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
  log4j.appender.bundle.key=bundle.name
  log4j.appender.bundle.default=karaf
  log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
  log4j.appender.bundle.appender.MaxFileSize=10MB
  log4j.appender.bundle.appender.MaxBackupIndex=2
  log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
  log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601} |
  %-5.5p
  | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
 
  log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name\\}.log
  log4j.appender.bundle.appender.append=true
  log4j.appender.bundle.threshold=INFO
 
  # TRACE level for specific bundle - should normally be disabled
  log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
  log4j.appender.bundle_trace.key=bundle.name
  log4j.appender.bundle_trace.default=karaf
 
  log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
  log4j.appender.bundle_trace.appender.MaxFileSize=20MB
  log4j.appender.bundle_trace.appender.MaxBackupIndex=1
 
  log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
 
  log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601} 
  |
  %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
 
  log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\\{bundle.name\\}.log
  log4j.appender.bundle_trace.appender.append=true
  log4j.appender.bundle_trace.threshold=TRACE
 
  log4j.appender.bundle_trace.filter.accept=org.apache.log4j.varia.StringMatchFilter
 
  log4j.appender.bundle_trace.filter.accept.StringToMatch=bunde.name:org.apache.camel.camel-core
  log4j.appender.bundle_trace.filter.accept.AcceptOnMatch=false
 
  log4j.appender.bundle_trace.filter.deny=org.apache.log4j.varia.DenyAllFilter
 
  The intention is to have bundle specific logs at INFO level but have a
  separate TRACE log for a specific bundle. The latter is not enabled by
  default but only when debugging.
 
  The problem is that the DenyAllFilter seems to take precedence over the
  StringMatchFilter. I believe that when listed in the order I do, the
  bundle
  with the name org.apache.camel.camel-core should be logged at TRACE
  level
  but no other bundles. Could it be that the ordering of filters are not
  preserved? I think that native log4j only supports filters when using
  XML
  configuration and I assume that the Karaf filtering support has been
  added
  on top of log4j (or is it in Pax-logging)? Has the ordering of filters
  been
  taken into account?
 
  I've been testing this on Karaf 2.2.0 with Pax logging 1.6.0.
 
  /Bengt
 
 
 
  --
  
  Guillaume Nodet
  
  Blog: http://gnodet.blogspot.com/
  
  FuseSource, Integration everywhere
  http://fusesource.com



 --
 
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 FuseSource, Integration everywhere
 http://fusesource.com





-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Logging using log4j filters

2012-01-30 Thread Guillaume Nodet
Looking at the log4j code, it seems the filters are ordered using
their ids, so in your case "accept" and "deny".
So I think the order should be ok.  Can you try changing their names so
that the order would be reversed?
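For illustration, renaming the filters so the ids sort in the intended order might look like this (a sketch assuming log4j attaches filters in lexicographic id order, with the "bundle.name" spelling corrected from the original config):

```properties
# "f1" sorts before "f2", so the match filter should be consulted first
log4j.appender.bundle_trace.filter.f1=org.apache.log4j.varia.StringMatchFilter
log4j.appender.bundle_trace.filter.f1.StringToMatch=bundle.name:org.apache.camel.camel-core
log4j.appender.bundle_trace.filter.f1.AcceptOnMatch=true
log4j.appender.bundle_trace.filter.f2=org.apache.log4j.varia.DenyAllFilter
```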

On Mon, Jan 30, 2012 at 11:09, Bengt Rodehav be...@rodehav.com wrote:
 OK - I didn't know that.

 Do you think I should post a message on ops4j's mailing list about this?

 The reason I tried the Karaf mailing list first is that I believe this
 would be a pretty common (and useful) configuration. In my case, I will
 probably create logs per camel context and not per bundle, but I still need
 the possibility to configure more detailed logging for a specific MDC value.

 Have you tried something similar yourself?

 I actually posted a question on Stackoverflow about this as well:

 http://stackoverflow.com/questions/9049119/set-log-level-based-on-mdc-value-in-log4j

 No replies unfortunately. The filtering approach would be an alternative
 (although less elegant) way to accomplish what I wanted.

 /Bengt

 2012/1/30 Guillaume Nodet gno...@gmail.com

 No, the support has been added in log4j:
   http://svn.apache.org/viewvc?view=revision&revision=821430

 On Mon, Jan 30, 2012 at 10:30, Bengt Rodehav be...@rodehav.com wrote:
  Hello Guillaume,
 
  Doesn't the filter support in log4j require XML configuration (not
  properties file)? If so, then I assume that Pax-logging has added the
  possibility to use filters using a properties file configuration.
 
  /Bengt
 
 
  2012/1/30 Guillaume Nodet gno...@gmail.com
 
  Actually, the filters support is built into log4j, but if there's
  really a problem we can always fix it in pax-logging until the patch
  is released in log4j.
 
  On Mon, Jan 30, 2012 at 10:21, Guillaume Nodet gno...@gmail.com
  wrote:
   The filter support has been added in pax-logging.
   Have a look at
  
  
   https://github.com/ops4j/org.ops4j.pax.logging/blob/master/pax-logging-service/src/main/java/org/apache/log4j/PaxLoggingConfigurator.java
  
   You may very well be right that the order isn't kept, which would
   definitely be a bug.
  
   On Mon, Jan 30, 2012 at 10:17, Bengt Rodehav be...@rodehav.com
   wrote:
   I have the following configuration in my org.ops4j.pax.logging.cfg:
  
   # Per bundle log at INFO level
   log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
   log4j.appender.bundle.key=bundle.name
   log4j.appender.bundle.default=karaf
   log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
   log4j.appender.bundle.appender.MaxFileSize=10MB
   log4j.appender.bundle.appender.MaxBackupIndex=2
   log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
   log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601}
   |
   %-5.5p
   | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
  
  
   log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name\\}.log
   log4j.appender.bundle.appender.append=true
   log4j.appender.bundle.threshold=INFO
  
   # TRACE level for specific bundle - should normally be disabled
   log4j.appender.bundle_trace=org.apache.log4j.sift.MDCSiftingAppender
   log4j.appender.bundle_trace.key=bundle.name
   log4j.appender.bundle_trace.default=karaf
  
  
   log4j.appender.bundle_trace.appender=org.apache.log4j.RollingFileAppender
   log4j.appender.bundle_trace.appender.MaxFileSize=20MB
   log4j.appender.bundle_trace.appender.MaxBackupIndex=1
  
  
   log4j.appender.bundle_trace.appender.layout=org.apache.log4j.PatternLayout
  
  
   log4j.appender.bundle_trace.appender.layout.ConversionPattern=%d{ISO8601}
|
   %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
  
  
   log4j.appender.bundle_trace.appender.file=${logdir}/bundles/trace/$\\{bundle.name\\}.log
   log4j.appender.bundle_trace.appender.append=true
   log4j.appender.bundle_trace.threshold=TRACE
  
  
   log4j.appender.bundle_trace.filter.accept=org.apache.log4j.varia.StringMatchFilter
  
  
   log4j.appender.bundle_trace.filter.accept.StringToMatch=bunde.name:org.apache.camel.camel-core
   log4j.appender.bundle_trace.filter.accept.AcceptOnMatch=false
  
  
   log4j.appender.bundle_trace.filter.deny=org.apache.log4j.varia.DenyAllFilter
  
   The intention is to have bundle specific logs at INFO level but have
   a
   separate TRACE log for a specific bundle. The latter is not enabled
   by
   default but only when debugging.
  
   The problem is that the DenyAllFilter seems to take precedence over
   the
   StringMatchFilter. I believe that when listed in the order I do, the
   bundle
   with the name org.apache.camel.camel-core should be logged at
   TRACE
   level
   but no other bundles. Could it be that the ordering of filters are
   not
   preserved? I think that native log4j only supports filters when
   using
   XML
   configuration and I assume that the Karaf filtering support has been
   added
   on top of log4j (or is it in Pax-logging)? Has the ordering of
   filters
   been
   taken into account?
  
   I've been

Re: Logging using log4j filters

2012-01-30 Thread Guillaume Nodet
Have you tried matching on something simpler such as "camel"?
The StringMatchFilter simply looks for the string in the
rendered event, so maybe none of your events contains
"bunde.name:org.apache.camel.camel-core".

On Mon, Jan 30, 2012 at 11:52, Bengt Rodehav be...@rodehav.com wrote:
 I tried these four combinations:

 # 1
 log4j.appender.bundle_trace.filter.a=org.apache.log4j.varia.StringMatchFilter
 log4j.appender.bundle_trace.filter.a.StringToMatch=bunde.name:org.apache.camel.camel-core
 log4j.appender.bundle_trace.filter.a.AcceptOnMatch=true
 log4j.appender.bundle_trace.filter.b=org.apache.log4j.varia.DenyAllFilter

 # 2
 log4j.appender.bundle_trace.filter.b=org.apache.log4j.varia.DenyAllFilter
 log4j.appender.bundle_trace.filter.a=org.apache.log4j.varia.StringMatchFilter
 log4j.appender.bundle_trace.filter.a.StringToMatch=bunde.name:org.apache.camel.camel-core
 log4j.appender.bundle_trace.filter.a.AcceptOnMatch=true

 # 3
 log4j.appender.bundle_trace.filter.a=org.apache.log4j.varia.DenyAllFilter
 log4j.appender.bundle_trace.filter.b=org.apache.log4j.varia.StringMatchFilter
 log4j.appender.bundle_trace.filter.b.StringToMatch=bunde.name:org.apache.camel.camel-core
 log4j.appender.bundle_trace.filter.b.AcceptOnMatch=true

 # 4
 log4j.appender.bundle_trace.filter.b=org.apache.log4j.varia.StringMatchFilter
 log4j.appender.bundle_trace.filter.b.StringToMatch=bunde.name:org.apache.camel.camel-core
 log4j.appender.bundle_trace.filter.b.AcceptOnMatch=true
 log4j.appender.bundle_trace.filter.a=org.apache.log4j.varia.DenyAllFilter

 This would check if ordering of the configurations or filter naming would
 make a difference. Unfortunately none of the above work.

 But as soon as I comment out the DenyAllFilter, trace logfiles appear in the
 trace folder. So, either the DenyAllFilter prevents the StringMatchFilter
 from working or the StringMatchFilter never matches...

 /Bengt





 2012/1/30 Guillaume Nodet gno...@gmail.com

 Looking at the log4j code, it seems the filters are ordered using
 their ids, so in your case accept and deny.
 So I think the order should be ok.  Can you try changing their name so
 that the order would be reversed ?

 On Mon, Jan 30, 2012 at 11:09, Bengt Rodehav be...@rodehav.com wrote:
  OK - I didn't know that.
 
  Do you think I should post a message on ops4j's mailing list about this?
 
  The reason I tried the Karaf mailing list first is that I believe this
  would be a pretty common (and useful) configuration. In my case, I will
  probably create logs per camel context and not per bundle, but I still
  need the possibility to configure more detailed logging for a specific
  MDC value.
 
  Have you tried something similar yourself?
 
  I actually posted a question on Stackoverflow about this as well:
 
 
  http://stackoverflow.com/questions/9049119/set-log-level-based-on-mdc-value-in-log4j
 
   No replies unfortunately. The filtering approach would be an alternative
   (although less elegant) way to accomplish what I wanted.
 
  /Bengt
 
  2012/1/30 Guillaume Nodet gno...@gmail.com
 
  No, the support has been added in log4j:
    http://svn.apache.org/viewvc?view=revision&revision=821430
 
  On Mon, Jan 30, 2012 at 10:30, Bengt Rodehav be...@rodehav.com wrote:
   Hello Guillaume,
  
   Doesn't the filter support in log4j require XML configuration (not
   properties file)? If so, then I assume that Pax-logging has added the
   possibility to use filters using a properties file configuration.
  
   /Bengt
  
  
   2012/1/30 Guillaume Nodet gno...@gmail.com
  
   Actually, the filters support is built into log4j, but if there's
   really a problem we can always fix it in pax-logging until the patch
   is released in log4j.
  
   On Mon, Jan 30, 2012 at 10:21, Guillaume Nodet gno...@gmail.com
   wrote:
The filter support has been added in pax-logging.
Have a look at
   
   
   
https://github.com/ops4j/org.ops4j.pax.logging/blob/master/pax-logging-service/src/main/java/org/apache/log4j/PaxLoggingConfigurator.java
   
You may very well be right that the order isn't kept, which would
definitely be a bug.
   
On Mon, Jan 30, 2012 at 10:17, Bengt Rodehav be...@rodehav.com
wrote:
I have the following configuration in
my org.ops4j.pax.logging.cfg:
   
# Per bundle log at INFO level
log4j.appender.bundle=org.apache.log4j.sift.MDCSiftingAppender
log4j.appender.bundle.key=bundle.name
log4j.appender.bundle.default=karaf
   
log4j.appender.bundle.appender=org.apache.log4j.RollingFileAppender
log4j.appender.bundle.appender.MaxFileSize=10MB
log4j.appender.bundle.appender.MaxBackupIndex=2
   
log4j.appender.bundle.appender.layout=org.apache.log4j.PatternLayout
   
log4j.appender.bundle.appender.layout.ConversionPattern=%d{ISO8601}
|
%-5.5p
| %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
   
   
   
log4j.appender.bundle.appender.file=${logdir}/bundles/$\\{bundle.name\\}.log

Re: Apache Commons DBCP fragment

2012-01-30 Thread Guillaume Nodet
First question, why don't you use the existing bundles:
   
http://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.commons-dbcp/1.4_1/

For fragments, the imports and exports are added to the host, but I
don't think this includes dynamic imports.
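If DynamicImport-Package in a fragment is indeed ignored, one workaround sketch is to list the driver packages as ordinary imports, since a fragment's Import-Package entries are added to the host's wiring. The org.h2 package below is just a placeholder for whichever JDBC driver is actually deployed:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.apache.commons.dbcp.driver-fragment
Bundle-Version: 1.0.0
Fragment-Host: org.apache.commons.dbcp
Import-Package: org.h2;resolution:=optional
```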

On Mon, Jan 30, 2012 at 17:01, lbu lburgazz...@gmail.com wrote:
 Hi,
 I'm working on the OSGi-fication of a project and I have a little trouble
 with a fragment that does not seem to be taken into account by Karaf
 (2.2.5). The fragment is supposed to add DynamicImport-Package to
 org.apache.commons.dbcp so it can look up JDBC drivers.

 Here the META-INF/MANIFEST.MF of my fragment:

 Manifest-Version: 1.0
 Created-By: LB
 Bundle-Name: lb.fragment.apache.commons.dbcp
 Bundle-Vendor: LB
 Bundle-Version: 1.0.7
 Bundle-SymbolicName: org.apache.commons.dbcp.fragment
 Bundle-Description: Fragment attached to Apache Commons DBCP
 Fragment-Host: org.apache.commons.dbcp
 DynamicImport-Package: *

 What's wrong?

 Thx,
 Luca




 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Apache-Commons-DBCP-fragment-tp3700373p3700373.html
 Sent from the Karaf - User mailing list archive at Nabble.com.



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Problems with Karaf 2.2.5 integration tests

2012-02-24 Thread Guillaume Nodet
The point is that in Karaf 2.x, the command is named "osgi:list" and
not "bundle:list", which only exists in 3.x.
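So a test against 2.2.5 should issue commands from the 2.x scopes, for example:

```
karaf@root> osgi:list --help      # 2.x scope; "bundle:list" only exists in 3.x
karaf@root> features:list         # likewise "features:" rather than "feature:"
```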

On Fri, Feb 24, 2012 at 12:51,  lennart.jore...@teliasonera.com wrote:
 Hello all,

 I have been trying for some time to get integration tests running properly
 in Karaf 2.2.5.
 The test rig is copied from what little integration tests could be found on
 the net.

 My problem is that I always receive an exception claiming that the executed
 command does not exist.
 Is this a known problem?
 Can anyone point me to a full Karaf integration test for Karaf 2.2.5 that
 actually works without relying on Karaf 3.0.0 snapshot dependencies?

 2012-02-24 12:46:16,133 | WARN  | rint Extender: 2 | KarArtifactInstaller
           | eployer.kar.KarArtifactInstaller   71 | 15 -
 org.apache.karaf.deployer.kar - 2.2.5 | Unable to create directory for Karaf
 Archive timestamps. Results may vary...
          __ __                  ____
         / //_/____ __________ _/ __/
        / ,<  / __ `/ ___/ __ `/ /_
       / /| |/ /_/ / /  / /_/ / __/
      /_/ |_|\__,_/_/   \__,_/_/

   Apache Karaf (2.2.5)

 Hit 'tab' for a list of available commands
 and '[cmd] --help' for help on a specific command.
 Hit 'ctrl-d' or 'osgi:shutdown' to shutdown Karaf.

 karaf@root 2012-02-24 12:46:16,461 | WARN  | rint Extender: 2 |
 FeaturesServiceImpl              | res.internal.FeaturesServiceImpl  214 |
 16 - org.apache.karaf.features.core - 2.2.5 | Feature repository doesn't
 have a name. The name will be mandatory in the next Karaf version.
 2012-02-24 12:46:16,486 | WARN  | rint Extender: 2 | FeaturesServiceImpl
          | res.internal.FeaturesServiceImpl  214 | 16 -
 org.apache.karaf.features.core - 2.2.5 | Feature repository doesn't have a
 name. The name will be mandatory in the next Karaf version.
 2012-02-24 12:46:16,559 | WARN  | rint Extender: 2 | FeaturesServiceImpl
          | res.internal.FeaturesServiceImpl  214 | 16 -
 org.apache.karaf.features.core - 2.2.5 | Feature repository doesn't have a
 name. The name will be mandatory in the next Karaf version.

 Running command: bundle:list --help

 org.apache.felix.gogo.runtime.CommandNotFoundException: Command not found:
 bundle:list
 at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:471)
 at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:400)
 at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
 at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:183)
 at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:120)
 at
 org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:89)


 --
 // Bästa hälsningar,
 // [sw. Best regards,]
 //
 // Lennart Jörelid, Systems Architect
 // email: lennart.jore...@teliasonera.com
 // cell: +46 708 507 603
 // skype: jgurueurope





-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: karaf/osgi start dependencies

2012-04-25 Thread Guillaume Nodet
No, but depending on the configuration it may expose a JMS ConnectionFactory
which can be obtained from the OSGi registry.
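As a sketch (the client bean and class name are made up), a Blueprint reference will wait for that service, effectively ordering startup on broker availability:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- blocks (by default up to 5 minutes) until a matching service appears -->
  <reference id="connectionFactory"
             interface="javax.jms.ConnectionFactory"/>

  <!-- hypothetical bean that depends on the broker's connection factory -->
  <bean id="jmsClient" class="com.example.JmsClient">
    <argument ref="connectionFactory"/>
  </bean>

</blueprint>
```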

On Wed, Apr 25, 2012 at 17:30, Jean-Baptiste Onofré j...@nanthrax.net wrote:

 Agree Achim, however, I'm not sure that ActiveMQ register a service per
 queue or topic for instance (only for the broker).

 Regards
 JB


 On 04/25/2012 05:18 PM, Achim Nierbeck wrote:

  Nevertheless, you shouldn't rely on the start levels;
  I suggest that your application waits on services provided by the
  ActiveMQ broker.
  This is far safer and more OSGi-like ;)

 regards, Achim

  2012/4/25 Jean-Baptiste Onofré j...@nanthrax.net:

 Hi Jason,

 Using a feature, you can define the start-level of the bundles in the
 feature.

 Regards
 JB


 On 04/25/2012 05:12 PM, Jason wrote:


 Hi all,

 I have an application that uses Apache ActiveMQ in Karaf. I have a
 broker project and multiple other projects that should depend on the
 broker service to be started. How do I specify in the maven pom
 (maven-bundle-plugin) that the child projects should wait until the
 broker is started?

 Thanks,
 Jason



 --
 Jean-Baptiste Onofré
 jbono...@apache.org
 http://blog.nanthrax.net
 Talend - http://www.talend.com





 --
 Jean-Baptiste Onofré
 jbono...@apache.org
 http://blog.nanthrax.net
 Talend - http://www.talend.com




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: log4j

2012-05-03 Thread Guillaume Nodet
Custom appenders can be deployed by using fragments attached to the
pax-logging-service bundle fwiw.
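A minimal sketch of such a fragment, with made-up names: put the appender class in a bundle whose fragment host is pax-logging-service, so the log4j implementation inside pax-logging can load it:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.logging.appenders
Bundle-Version: 1.0.0
Fragment-Host: org.ops4j.pax.logging.pax-logging-service
```

The appender can then be referenced from etc/org.ops4j.pax.logging.cfg, e.g. log4j.appender.custom=com.example.logging.MyAppender (a refresh of the pax-logging-service bundle may be needed for the fragment to attach).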

On Thu, May 3, 2012 at 1:21 PM, Achim Nierbeck bcanh...@googlemail.com wrote:

 I'm not sure what you are trying to do, but log4j, as well as slf4j and a
 couple more logging frameworks, is supported by Karaf.
 We use Pax Logging for this; log4j is even the underlying
 implementation for logging.
 If you want to use your own appenders you need to take special care for
 this.

 Regards, Achim

 2012/5/3 maaruks maris.orbid...@gmail.com:
  Is it possible to use log4j in Karaf?
 
  I have log4j classes in my bundle but I can't access them:
 
 try {
     Class<?> aClass = Class.forName("org.apache.log4j.Appender");
 } catch (ClassNotFoundException e) {
     throw new RuntimeException(e);
 }
 
 
 
  java.lang.RuntimeException: java.lang.ClassNotFoundException:
  org.apache.log4j.Appender not found by
 org.ops4j.pax.logging.pax-logging-api
  [4]
  ...
  Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Appender
 not
  found by org.ops4j.pax.logging.pax-logging-api [4]
 at
 
 org.apache.felix.framework.ModuleImpl.findClassOrResourceByDelegation(ModuleImpl.java:787)
 at
 org.apache.felix.framework.ModuleImpl.access$400(ModuleImpl.java:71)
 at
 
 org.apache.felix.framework.ModuleImpl$ModuleClassLoader.loadClass(ModuleImpl.java:1768)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:247)[:1.6.0_26]
 at
 
 org.apache.felix.framework.ModuleImpl.getClassByDelegation(ModuleImpl.java:645)
 at
  org.apache.felix.framework.resolver.WireImpl.getClass(WireImpl.java:99)
 at
  org.apache.felix.framework.ModuleImpl.searchImports(ModuleImpl.java:1390)
 at
 
 org.apache.felix.framework.ModuleImpl.findClassOrResourceByDelegation(ModuleImpl.java:722)
 at
 org.apache.felix.framework.ModuleImpl.access$400(ModuleImpl.java:71)
 at
 
 org.apache.felix.framework.ModuleImpl$ModuleClassLoader.loadClass(ModuleImpl.java:1768)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:247)[:1.6.0_26]
 at java.lang.Class.forName0(Native Method)[:1.6.0_26]
 at java.lang.Class.forName(Class.java:169)[:1.6.0_26]
 
 
  --
  View this message in context:
 http://karaf.922171.n3.nabble.com/log4j-tp3958839.html
  Sent from the Karaf - User mailing list archive at Nabble.com.



 --

 Apache Karaf http://karaf.apache.org/ Committer  PMC
 OPS4J Pax Web http://wiki.ops4j.org/display/paxweb/Pax+Web/
 Committer  Project Lead
 OPS4J Pax Vaadin http://team.ops4j.org/wiki/display/PAXVAADIN/Home
 Commiter  Project Lead
 blog http://notizblog.nierbeck.de/




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Is there a way to query osgi framework status (e.g. started)?

2012-05-03 Thread Guillaume Nodet
There's no such thing as a 'started' state.  The OSGi framework is fully
asynchronous and things can even be done on behalf of other bundles.  The
only real way to know if 'something' is started is to have this thing
register an OSGi service when started and wait for this service to be
registered.  A lot of bundles do behave this way, so that inter-bundle
dependencies are expressed through services.  But in the end, 'started'
depends on what meaning you put behind that word, and that mostly depends
on what you deploy.
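
A hedged Blueprint sketch of that register-and-wait pattern (the interface and class names are invented): the producing bundle publishes a marker service from its context, and the consumer declares a mandatory reference, so the consumer's context only completes once the service is registered:

```xml
<!-- producing bundle: publish a service once the bean has started -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <bean id="engine" class="com.example.Engine" init-method="start"/>
  <service ref="engine" interface="com.example.EngineService"/>
</blueprint>

<!-- consuming bundle: block (up to the timeout) until the service exists -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <reference id="engine" interface="com.example.EngineService"
             availability="mandatory" timeout="30000"/>
  <bean id="worker" class="com.example.Worker">
    <argument ref="engine"/>
  </bean>
</blueprint>
```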

On Thu, May 3, 2012 at 5:28 PM, bobshort jer...@check-it.ca wrote:

 We have bundles that do some intensive processing. Right now they are
 starting as the framework is starting and slowing startup down
 considerably.
 The osgi container is running on a plug computer so resources are minimal.

 I want to start my processing only after the osgi container is fully
 started. I've implemented a framework listener to detect when the framework
 is started and then trigger my startup logic:

 public class EventListener implements FrameworkListener {

     @Override
     public void frameworkEvent(FrameworkEvent event) {
         if (event.getType() == FrameworkEvent.STARTED) {
             // Do startup logic here.
         }
     }
 }

 This works if my bundles are installed before the framework is started, but
 it obviously does not work for modules installed after the framework is
 started.

 Is there any way I can query the framework status from my bundle so I can
 detect if the container is already fully started when the bundle is
 installed? I'd like to do something like:

 public void onBundleStarted() {
     if (/* test if framework already running */) {
         // Do startup logic here.
     }
 }

 Is this possible?

 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/Is-there-a-way-to-query-osgi-framework-status-e-g-started-tp3959588.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: log4j

2012-05-04 Thread Guillaume Nodet
That's really the best thing to do in Karaf, as the log configuration is
handled in an OSGi way, so no code should try to directly configure the
logging framework.

On Fri, May 4, 2012 at 1:01 PM, maaruks maris.orbid...@gmail.com wrote:


 Achim Nierbeck wrote
 
  I'm not sure what you are trying to do, but log4j as slf4j and a
  couple more logging framworks are supported by Karaf.
 

 I am migrating some code to Karaf. It turns out Pax Logging doesn't
 allow access to all log4j classes.
 I removed the code that was trying to access Appender.  Problem solved.


 --
 View this message in context:
 http://karaf.922171.n3.nabble.com/log4j-tp3958839p3961902.html
 Sent from the Karaf - User mailing list archive at Nabble.com.




-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Tomcat on karaf

2012-05-23 Thread Guillaume Nodet
Jetty has a long history of being lightweight and easily embeddable,
with a good and responsive community; that's why it was chosen to be
used in pax-web, and pax-web was really the only available solution at
some point, hence we chose it for Karaf.
I don't really see any major problems deploying Gemini Web in Karaf,
though I haven't actually tested it.
The question is: why do you want to use Tomcat instead of Jetty? Jetty
is very mature too, though a bit less well known.

Doing the WAB support in OSGi is quite a lot of work and we have
people here maintaining pax-web, so I don't really see the point in
recreating a new container on top of Tomcat.  The most important thing
about Tomcat, imho, is that the container and its management are well
known, but if we started embedding it, you'd lose all of that anyway.

On Tue, May 22, 2012 at 10:11 PM, Romain Gilles romain.gil...@gmail.com wrote:
 Hi all,
 I would like to know if any of you run or have run a Tomcat web container as
 an Http Service and Web Application Service...
 Do I have to install Eclipse Gemini Web on Karaf to test it? I have had a
 look at pax web and the only SPI provider is Jetty. Is there a reason for that?
 Tomcat is an Apache project and the current OSGi integration is done by the
 Eclipse community, while Jetty is now an Eclipse project and is currently the
 default web container of Karaf (an Apache project...)?

 Romain.



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Bundle is not compatible with this blueprint extender

2012-06-19 Thread Guillaume Nodet
Such a problem is not really a problem ;-)
One possible reason is that you have multiple blueprint extenders
deployed (so multiple blueprint spec packages exported).
An extender will only manage the blueprint bundles that are compatible
with it (i.e. they import the exact same package as the extender).
The main reason is to allow multiple blueprint implementations to co-exist.
If you want to force the use of a given blueprint, one way is to make
sure your blueprint bundle imports a package which is specific to your
extender, such as the org.apache.aries.blueprint package, in addition to
org.osgi.service.blueprint, which should always be imported by
blueprint bundles.
The OSGi framework will then be forced to wire against the Aries API,
making sure the bundle will be extended only by Aries.
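
A hedged manifest excerpt showing the import described above (the version range is illustrative):

```
Import-Package: org.osgi.service.blueprint;version="[1.0.0,2.0.0)",
 org.apache.aries.blueprint
```

With org.apache.aries.blueprint among the imports, only the Aries extender can satisfy the wiring, so only it will manage the bundle.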

On Tue, Jun 19, 2012 at 6:24 PM, maaruks maris.orbid...@gmail.com wrote:
 I have a bundle that uses blueprint.   When I try to activate it
 BlueprintExtender prints this:

 Bundle ... is not compatible with this blueprint extender

 Why is my bundle not compatible ?     Because it uses spring 3 ?

 --
 View this message in context: 
 http://karaf.922171.n3.nabble.com/Bundle-is-not-compatible-with-this-blueprint-extender-tp4024898.html
 Sent from the Karaf - User mailing list archive at Nabble.com.



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Spring-DM web and pax web extender bundle processing order in 2.2.7

2012-06-25 Thread Guillaume Nodet
That's a quite big problem and I'm not really aware of any way to
control the extender execution order.
The reason is that each extender can work synchronously or
asynchronously and there's no coordination between them; there's not
even a way to specify such a thing.
I can only see two ways out:
  * enhance the extenders to better cooperate, but this would require
enhancing all the extenders and defining some metadata to control such
an order (given anyone can write an extender, this would be better if
we had a spec for that)
  * do the synchronization on your own, i.e. you can detect using a
spring bean when the spring app has been processed and from a servlet
when the web app stuff is kicked off, then make sure one is waiting
for the other.

The last option is the easiest one to achieve obviously, but will be
really tied to the extenders you're using and how they behave: if
both are started synchronously, you won't really have any way to do
any synchronization here, but iirc that's not the case here.

On Mon, Jun 25, 2012 at 6:21 AM, Raman Gupta rocketra...@gmail.com wrote:
 I just upgraded to Karaf 2.2.7 from Karaf 2.2.4 and noticed that now
 the Spring DM web extender and pax web extender's no longer run in the
 correct order.

 The Spring-DM extender needs to process the bundle *before* the PaxWeb
 extender, since until the app context is created by Spring-DM it is a
 non-functional web bundle. If Pax Web attempts to process it first,
 there is an error about the Spring context not existing when the
 servlet tries to initialize.

 A manual refresh of the bundle is required to fix the problem.

 One difference I can see between 2.2.4 and 2.2.7 is that the start
 level of the pax bundles is different. In Karaf 2.2.4, the pax bundles
 started at the default start level which was 60, but in 2.2.7 the war
 feature specifies they start at the same level as the Spring-DM
 bundles, which is 30. I don't know if that is the underlying problem
 though.

 Is there a way to control the order of the extender execution? If not,
 what is the best work-around?

 Regards,
 Raman



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Installing Hadoop bundle in Apache Felix Gogo shell

2012-06-25 Thread Guillaume Nodet
Deploying Hadoop in OSGi is not a simple task, and whether you use Gogo
or not isn't really the problem.
Gogo just enables you to install bundles using a command instead of
using the OSGi API but, as I said, the problem isn't really there.

So Hadoop isn't OSGi-ready at all.  I worked on it a few weeks ago, so
you can try to build Fuse Fabric locally
  https://github.com/fusesource/fuse/tree/master/fabric/fabric-hadoop
and try to deploy the hadoop feature (or all the bundles listed manually)
  
https://github.com/fusesource/fuse/blob/master/fabric/fuse-fabric/src/main/resources/fabric-features.xml#L368

On Mon, Jun 25, 2012 at 10:25 AM, somya singhal
28somyasing...@gmail.com wrote:
 Hello

 I have recently learned to successfully deploy bundles with the help of
 the Apache Felix Gogo command shell. Now I am trying to install a Hadoop bundle
 the same way. I have tried searching web
 pages for clear steps for installing it, but I am unable to find any. Can
 anyone please help me with the correct steps for installing the Hadoop
 bundle with the help of the Apache Felix Gogo command shell?


 With Regards

 Somya Singhal



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Spring-DM web and pax web extender bundle processing order in 2.2.7

2012-06-25 Thread Guillaume Nodet
I think Peter's solution is a nice one, as Spring (and Blueprint) has
the ability to wait until all services are satisfied before actually
creating the full Spring app.
That would only work if you need it that way though (i.e. the Spring app
started after the web app).
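
A hedged Spring-DM sketch of that approach (the service interface name is invented): a mandatory osgi:reference makes context creation wait, up to the timeout, for the service to be registered:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/osgi
         http://www.springframework.org/schema/osgi/spring-osgi.xsd">

  <!-- context creation blocks here until the service shows up -->
  <osgi:reference id="webReady" interface="com.example.WebReadyService"
                  cardinality="1..1" timeout="30000"/>
</beans>
```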

On Mon, Jun 25, 2012 at 4:14 PM, Raman Gupta rocketra...@gmail.com wrote:
 Thanks Guillaume and Achim... I will investigate the manual
 synchronization route. Does Pax-web initialize each servlet
 synchronously? If so, I can create a dummy servlet that initializes
 before the main servlet that can wait for the initialization of the
 Spring context.

 I also found this post by Peter Kriens responding to a similar query
 from someone on the Felix mailing list, and suggesting a similar solution:

 http://mail-archives.apache.org/mod_mbox/felix-users/201004.mbox/%3cf5b8b3e9-5602-4caf-9d4f-904fb947b...@aqute.biz%3E

 In Kriens's solution, rather than manually detecting when a Spring bean
 has been initialized, I believe he is suggesting exposing the servlet
 itself as a service from the Spring context. When this service is
 available, the dummy servlet being initialized by the web extender
 sees it and proceeds (I guess this would depend on Pax-Web
 processing the dummy servlet first and synchronously).

 If that approach works, that would be some functionality that Pax-Web
 could use to manage this issue without requiring changes in other
 extenders. A manifest header could control whether Pax-Web waits for
 some service to be available before proceeding with the servlet
 initialization. This would skip the need to manually create the dummy
 servlet.

 Regards,
 Raman


 On 06/25/2012 04:12 AM, Achim Nierbeck wrote:
 Hi

 I just can second Guillaume here; there is only one more thing that
 crosses my mind.
 Pax Web needs to support injecting OSGi services into the servlets,
 see also [1].
 But as usual this kind of stuff is needed much earlier than it is implemented :)

 regards,

 [1] - http://team.ops4j.org/browse/PAXWEB-367

 2012/6/25 Guillaume Nodet gno...@gmail.com:
 That's a quite big problem and I'm not really aware of any way to
 control the extender execution order.
 The reason is that each extender can work synchronously or
 asynchronously and there's no coordination between them, there's not
 even a way to specify such a thing.
 I can only see two ways out:
  * enhance the extenders to better cooperate, but this would require
 enhancing all the extenders and defining some metadata to control such
 an order (given anyone can write an extender, this would be better if
 we had a spec for that)
  * do the synchronization on your own, i.e. you can detect using a
 spring bean when the spring app will be processed and from a servlet
 when the web app stuff will be kicked, then make sure one is waiting
 for the other.

 The last option is the easiest one to achieve obviously, but will be
 really tied to the extenders your using and how they behave, as if
 both are started synchronously, you won't really have any way to do
 some synchronization here, but iirc that's not the case here.

 On Mon, Jun 25, 2012 at 6:21 AM, Raman Gupta rocketra...@gmail.com wrote:
 I just upgraded to Karaf 2.2.7 from Karaf 2.2.4 and noticed that now
 the Spring DM web extender and pax web extender's no longer run in the
 correct order.

 The Spring-DM extender needs to process the bundle *before* the PaxWeb
 extender, since until the app context is created by Spring-DM it is a
 non-functional web bundle. If Pax Web attempts to process it first,
 there is an error about the Spring context not existing when the
 servlet tries to initialize.

 A manual refresh of the bundle is required to fix the problem.

 One difference I can see between 2.2.4 and 2.2.7 is that the start
 level of the pax bundles is different. In Karaf 2.2.4, the pax bundles
 started at the default start level which was 60, but in 2.2.7 the war
 feature specifies they start at the same level as the Spring-DM
 bundles, which is 30. I don't know if that is the underlying problem
 though.

 Is there a way to control the order of the extender execution? If not,
 what is the best work-around?

 Regards,
 Raman



 --
 
 Guillaume Nodet
 
 Blog: http://gnodet.blogspot.com/
 
 FuseSource, Integration everywhere
 http://fusesource.com






-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Eventadmin as deploy seems to restart framework

2012-06-26 Thread Guillaume Nodet
That's because eventadmin is an optional dependency of a lot of
bundles, including Aries Blueprint.  FileInstall does a refresh on the
bundles, which causes the blueprint extender to be re-wired, causing
all blueprint applications to be restarted.  If that were not done,
events wouldn't be sent by blueprint and other bundles.
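
One hedged way to avoid the deploy-folder refresh altogether (assuming a stock Karaf 2.2.x configuration) is to pull eventadmin in as a boot feature instead, so it is resolved before the blueprint bundles first wire up:

```
# etc/org.apache.karaf.features.cfg (excerpt)
featuresBoot=config,ssh,management,eventadmin
```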

On Mon, Jun 25, 2012 at 7:29 PM, John Hawksley
john_hawks...@intergral.com wrote:
 Hi folks,

 I seem to have a strange issue with Karaf 2.2.7.  I know eventadmin can be
 installed as a feature (or as a boot feature), but if I deploy
 org.apache.felix.eventadmin-1.2.14.jar into the deploy folder, I get a
 strange restart behaviour.

 Karaf seems to:

 - Come up to the point I get a console prompt
 - Immediately stop all bundles
 - Immediately restart all bundles, and I get a new console banner and
 prompt.

 The log at level INFO seems to show, for instance Service MBeans being
 registered, then deregistered, then registered again.

 This may be a known restriction or caveat, but I couldn't find any other
 cases of this even after googling.

 I'm using a freshly-untarred Karaf 2.2.7 on OSX with Java 7 (1.7.0_04), and
 the only thing in the deploy directory is eventadmin, (Felix EA 1.2.14).

 If anyone can shed any light on this I'd be grateful; we're trying to come
 up with a bundle set which doesn't rely on features, so we'd like to deploy
 eventadmin too.

 If this seems to be unknown, naturally I'll open a bug for it.

 Many thanks everyone,
 -John



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Spring-DM web and pax web extender bundle processing order in 2.2.7

2012-06-26 Thread Guillaume Nodet
If you use a spring web app, do you really need to use spring-dm in addition ?
Why don't you just use the spring app ?

On Tue, Jun 26, 2012 at 1:23 AM, Raman Gupta rocketra...@gmail.com wrote:
 On 06/25/2012 03:21 AM, Guillaume Nodet wrote:
 That's a quite big problem and I'm not really aware of any way to
 control the extender execution order.
 The reason is that each extender can work synchronously or
 asynchronously and there's no coordination between them, there's not
 even a way to specify such a thing.
 I can only see two ways out:
   * enhance the extenders to better cooperate, but this would require
 enhancing all the extenders and defining some metadata to control such
 an order (given anyone can write an extender, this would be better if
 we had a spec for that)
   * do the synchronization on your own, i.e. you can detect using a
 spring bean when the spring app will be processed and from a servlet
 when the web app stuff will be kicked, then make sure one is waiting
 for the other.

 The last option is the easiest one to achieve obviously, but will be
 really tied to the extenders your using and how they behave, as if
 both are started synchronously, you won't really have any way to do
 some synchronization here, but iirc that's not the case here.

 I'm trying to do the second option (manual synchronization) as
 described here.

 It isn't working as Pax Web itself attempts to initialize the Spring
 context due to the Spring ContextLoaderListener that is defined in
 web.xml, which then breaks the manual synchronization.

 Here is the stack for the Pax Web initialization of the context:

 http://pastebin.com/raw.php?i=RTunRTkF

 From the stack, the general path to the bean being initialized is:

 swissbox BundleWatcher -> WebXmlObserver -> WebAppPublisher -> Pax Web
 Jetty -> ContextLoaderListener -> ExtenderOrderingBean.init()

 Then when the servlet is loaded by Pax Web, ExtenderOrderingBean
 thinks the context is good since it was created by Pax Web, and the
 stack which started this whole mess is again produced:

 http://pastebin.com/raw.php?i=PJvMqWRV

 I actually don't see where the Spring-DM extender loads the bean at all...

 So still looking for solutions... IMO, the behavior of Karaf 2.2.4 was
 perfect in this situation, and required no such hackish work-arounds.

 Regards,
 Raman



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Spring-DM web and pax web extender bundle processing order in 2.2.7

2012-06-26 Thread Guillaume Nodet
If you want to use spring-dm and spring-mvc, you want to look at
  http://static.springsource.org/osgi/docs/1.1.0-m2/reference/html/web.html
There's a special application context class designed for this, it seems:
  
org.springframework.osgi.web.context.support.OsgiBundleXmlWebApplicationContext
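
For reference, that context class is usually selected in the WAB's web.xml through the standard Spring contextClass parameter (a hedged excerpt):

```xml
<context-param>
  <param-name>contextClass</param-name>
  <param-value>
    org.springframework.osgi.web.context.support.OsgiBundleXmlWebApplicationContext
  </param-value>
</context-param>
<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
```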

On Tue, Jun 26, 2012 at 4:42 PM, Raman Gupta rocketra...@gmail.com wrote:
 On 06/26/2012 07:17 AM, Guillaume Nodet wrote:
 If you use a spring web app, do you really need to use spring-dm in addition 
 ?
 Why don't you just use the spring app ?

 I'm not sure what you mean... Spring-DM is what ties in the OSGi
 microservices into the web bundle's Spring context, right? How am I
 supposed to do that without using Spring-DM or Gemini Blueprint, or
 doing it manually using the OSGi api's?

 On Tue, Jun 26, 2012 at 1:23 AM, Raman Gupta rocketra...@gmail.com wrote:
 On 06/25/2012 03:21 AM, Guillaume Nodet wrote:
 That's a quite big problem and I'm not really aware of any way to
 control the extender execution order.
 The reason is that each extender can work synchronously or
 asynchronously and there's no coordination between them, there's not
 even a way to specify such a thing.
 I can only see two ways out:
   * enhance the extenders to better cooperate, but this would require
 enhancing all the extenders and defining some metadata to control such
 an order (given anyone can write an extender, this would be better if
 we had a spec for that)
   * do the synchronization on your own, i.e. you can detect using a
 spring bean when the spring app will be processed and from a servlet
 when the web app stuff will be kicked, then make sure one is waiting
 for the other.

 The last option is the easiest one to achieve obviously, but will be
 really tied to the extenders your using and how they behave, as if
 both are started synchronously, you won't really have any way to do
 some synchronization here, but iirc that's not the case here.

 I'm trying to do the second option (manual synchronization) as
 described here.

 It isn't working as Pax Web itself attempts to initialize the Spring
 context due to the Spring ContextLoaderListener that is defined in
 web.xml, which then breaks the manual synchronization.

 Here is the stack for the Pax Web initialization of the context:

 http://pastebin.com/raw.php?i=RTunRTkF

 From the stack, the general path to the bean being initialized is:

 swissbox BundleWatcher - WebXmlObserver - WebAppPublisher - Pax Web
 Jetty - ContextLoaderListener - ExtenderOrderingBean.init()

 Then when the servlet is loaded by Pax Web, ExtenderOrderingBean
 thinks the context is good since it was created by Pax Web, and the
 stack which started this whole mess is again produced:

 http://pastebin.com/raw.php?i=PJvMqWRV

 I actually don't see where the Spring-DM extender loads the bean at all...

 So still looking for solutions... IMO, the behavior of Karaf 2.2.4 was
 perfect in this situation, and required no such hackish work-arounds.


 Regards,
 Raman



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com


Re: Bundles for deployment of hadoop service in osgi container

2012-06-27 Thread Guillaume Nodet
Have you tried the links I gave you ?

On Wed, Jun 27, 2012 at 10:58 AM, somya singhal
28somyasing...@gmail.com wrote:
 Hello

 I have recently come across a link:

 http://search-hadoop.com/m/7TZE59pm6vsubj=Re+PROPOSAL+Hadoop+OSGi+compliant+and+Apache+Karaf+features

 I have been trying to install a Hadoop bundle in OSGi for quite a few days, but I am
 unable to do so. Can anyone please tell me, as it is written in the
 link, where I can get the Hadoop modules (common, annotations, hdfs,
 mapreduce, etc.)?

 Somya Singhal
 Btech(4th year,csi)
 IIT ROORKEE



-- 

Guillaume Nodet

Blog: http://gnodet.blogspot.com/

FuseSource, Integration everywhere
http://fusesource.com

