Karaf 4.2.3 uses glassfish jaxb 2.3.2 which is java 9, is this a problem?

2019-03-25 Thread tom
We've tried to upgrade to karaf 4.2.3/4 and have hit a problem.

I don't believe we explicitly use org.glassfish.jaxb ourselves, but karaf has a 
dependency on it. Karaf 4.2.3 updated this dependency to 2.3.2, and that's a 
java 9 module.

So with a fresh karaf 4.2.4, if you do:
  bundle:install -s 'wrap:mvn:org.glassfish.jaxb/txw2/2.3.2'  
you get:
  java.lang.ArrayIndexOutOfBoundsException: 19
at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:576)
at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:494)
at aQute.bnd.osgi.Clazz.parseClassFileWithCollector(Clazz.java:483)
at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:473)
at aQute.bnd.osgi.Analyzer.analyzeJar(Analyzer.java:2177)
at aQute.bnd.osgi.Analyzer.analyzeBundleClasspath(Analyzer.java:2083)
at aQute.bnd.osgi.Analyzer.analyze(Analyzer.java:138)
at aQute.bnd.osgi.Analyzer.calcManifest(Analyzer.java:616)
at org.ops4j.pax.swissbox.bnd.BndUtils.createBundle(BndUtils.java:161)
at 
org.ops4j.pax.url.wrap.internal.Connection.getInputStream(Connection.java:83)

This seems to be the symptom of "incorrect class version".
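
A quick way to double check is to look at the class file major version of
whatever is inside the jar (52 = Java 8, 53 = Java 9; module-info.class is
always 53 or later). A throwaway sketch, not karaf code; extract a .class
file from the jar first:

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class ClassVersion {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                int magic = in.readInt();            // 0xCAFEBABE
                int minor = in.readUnsignedShort();
                int major = in.readUnsignedShort();  // 52 = Java 8, 53 = Java 9
                System.out.printf("magic=%08x major=%d minor=%d%n", magic, major, minor);
            }
        }
    }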

We run on java 8 at the moment, but karaf is intended to support java 8 as far 
as I understand it.

So is this an issue with karaf, or is there a dependency we need to update? 
Thanks.


Re: Waiting for a command to be available

2019-03-20 Thread tom
> No way to use etc/shell.init.script ?

I've understood this comment now.
The best method seems to be to use the pattern of:

set EXTRA_JAVA_OPTS=-Dkaraf.shell.init.script=<script file> -Dkaraf.delay.console=true
bin\karaf

(or equivalent)

Then ensure that the script file ends with "shutdown -f".

This seems to work as long as none of the commands in the script returns an 
error. The error handling seems to surround the complete initialisation script 
execution, so if any exception is thrown nothing further in the script is 
executed. Starting the script with "shutdown -f 1" seems a sensible precaution 
to ensure that karaf doesn't hang forever in these circumstances.
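
As a sketch, the init script then looks something like this ("feature:install
my-feature" is a placeholder for the real setup commands; the leading
"shutdown -f 1" is the safety net described above):

    # force a shutdown in 1 minute if anything below throws
    shutdown -f 1

    # placeholder setup commands
    feature:install my-feature

    # normal exit once everything has run
    shutdown -f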

Anyway, this seems like a better pattern than the

  bin\start & bin\client -f <file> & bin\stop

pattern.


Re: Waiting for a command to be available

2019-03-20 Thread tom
> Not sure what issue you have fixed (perhaps you could expand?) but the 
> problem I have still remains.
Note also that the client appears to forget the password set on the command 
line if it has to retry (JIRA raised):

> client -u admin -p admin -r 30 
> Logging in as admin
> retrying (attempt 1) ...
> retrying (attempt 2) ...
> Password:


Re: Waiting for a command to be available

2019-03-20 Thread tom
> Did you try with 4.2.4 (as I did the fix on the client thread in this
> version) ?

Not sure what issue you have fixed (perhaps you could expand?) but the problem 
I have still remains.

The problem I have is the number of arguments that are passed from the BAT file 
to java:
ARGS=%2 %3 %4 %5 %6 %7 %8 %9
or 
ARGS=%1 %2 %3 %4 %5 %6 %7 %8 %9

I have 10 arguments (-u <user> -p <password> -d <delay> -r <attempts> -f <file>).

Therefore in both cases one or more arguments will be dropped before they get 
to java. Most confusing.
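
A workaround sketch (not the shipped client.bat, just the usual batch idiom
for gathering an arbitrary number of arguments with SHIFT):

    set ARGS=
    :collect
    if "%~1"=="" goto run
    set ARGS=%ARGS% %1
    shift
    goto collect
    :run
    rem ... pass %ARGS% on to java as the existing script does ...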


Re: Waiting for a command to be available

2019-03-20 Thread tom
> Using "start && client -f commands.txt && stop" a) seems unreliable and b) 
> would seem to need a sleep anyway as otherwise the client tries to connect 
> while karaf is still starting.

Note that trying the "-d" and "-r" options on bin/client along with -f seems to 
result in either the commands file or the retry attempts being ignored, 
depending on which way round you put the options (karaf 4.2.2).

I'll have a read through the source and see if I can find out why.


Re: Waiting for a command to be available

2019-03-20 Thread tom
> FYI, I fixed an issue in client, now you can inject directly a script.

Do you mean there's a way of using the "karaf" command and running a script? 
With the karaf.delay.console=true setting you can't pipe a script into the 
karaf command, as it misinterprets the end of line as "Press Enter to open the 
shell now...".
Using "start && client -f commands.txt && stop" a) seems unreliable and b) 
would seem to need a sleep anyway as otherwise the client tries to connect 
while karaf is still starting.


Re: Waiting for a command to be available

2019-03-20 Thread tom
> By the way, you can also add a delay to have shell available. You can do
> this by adding karaf.delay.console=true in etc/config.properties.

That looks promising.
Can I set that other than in etc/config.properties? Can I set it in 
custom.properties for example (no), or somewhere similar?

I don't want to have to copy the complete config.properties file and ship a 
copy of it; that feels fragile.
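
Passing it as a system property at launch would presumably also work; something 
like this (untested):

    set EXTRA_JAVA_OPTS=-Dkaraf.delay.console=true
    bin\karaf
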
Thanks.


Re: Waiting for a command to be available

2019-03-20 Thread tom
> No way to use etc/shell.init.script ?
Sorry, not sure I understand how that would help. That's a way I could get 
shell commands run automatically, I'm assuming, and that's part of my problem: 
the main problem is establishing whether the commands are available to run.

> FYI, I fixed an issue in client, now you can inject directly a script.

By what means? The client appears to support a -f option that pipes commands in 
from a file. Is that what you are referring to?

As I say, the main issue I have is waiting for the commands I need to be 
available, rather than actually how to invoke the commands.


Waiting for a command to be available

2019-03-20 Thread tom
I'm wanting to automate some setup of a karaf-based product. I want to create a 
docker image that is pre-configured for internal testing.

In order to do this I need to run some karaf shell commands.
What I was naively hoping to do is something like:

/opt/karaf/bin/start && /opt/karaf/bin/client -u admin -p admin -f commands.txt 
&& /opt/karaf/bin/stop

or perhaps:

/opt/karaf/bin/karaf < commands.txt

However my problem is that the shell comes up before the commands I need are 
available to run. 

Any suggestions on how to deal with this?
- I can't find any documentation on shell variables that might give me a return 
code I could check and loop on with a sleep. This would probably work, but 
feels crude.
- A "does this command exist" command might be useful if I can then loop on its 
result. I can probably create such a command and put it in a bundle that I 
specify with a low start order to ensure it's available.
- Crudely sleeping for a long period of time would work (most of the time) but 
is inefficient. 
- Fiddle with the start level of the shell bundle so that it comes up last (I'm 
not even totally sure that would work -- since component activation happens 
asynchronously I suspect that things from earlier bundles are still happening 
while later bundles are being started).

Or is what I'm trying to do just not very sensible? Even if I created REST API 
endpoints to do it I'd have essentially the same issue. I could just write the 
logic in an external program of some kind, java, shell, whatever (repeatedly 
"ping" an endpoint until it doesn't return 404, then I know the endpoints are 
available). I'm guessing JMX would have essentially the same problem, in that I 
would have to start karaf then loop until the JMX beans become available.
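
Concretely, the external-program version would be something like this (a 
sketch; "my:command" stands for a command from one of our bundles, and it 
assumes bin/client's exit code reflects the failure, which would need 
verifying):

    /opt/karaf/bin/start
    # keep retrying until the command can actually be run
    until /opt/karaf/bin/client -u admin -p admin 'my:command --help' > /dev/null 2>&1; do
        sleep 5
    done
    /opt/karaf/bin/client -u admin -p admin -f commands.txt
    /opt/karaf/bin/stop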

Note that I found what appears to be some old karaf documentation 
(https://svn.apache.org/repos/asf/karaf/site/production/manual/latest-3.0.x/scripting.html)
 that includes a script for waiting for a command to become available. Perfect! 
However that a) doesn't work and b) would appear to be based on standard OSGi 
gogo shell commands by the looks of it, rather than karaf shell commands, which 
aren't registered as services.

Thanks.


Re: Replacing or blacklisting felix fileinstall

2019-03-15 Thread tom
Just as an update, this is what I've ended up doing, at least for a PoC.
I struggled for a while to try and completely remove fileinstall. I'm sure it's 
probably possible, but in the end it was easier to "move it out of the way".
In custom.properties, I set:
* felix.fileinstall.enableConfigSave to false, to stop it trying to write 
config files back.
* felix.fileinstall.dir to a bogus directory, to stop it trying to read config 
in. This works OK, as long as you also set:
* felix.fileinstall.disableNio2 to true, to make it use "simple" recursive 
directory scanning. Without this it tries to create a nio2 watcher service, 
which fails due to the bogus directory. That failure is caught and the code 
falls back to the non-nio2 version, so setting this just avoids that happening.
* felix.fileinstall.noInitialDelay to false, to avoid it doing an initial scan, 
which again would fail due to the bogus directory.
* felix.fileinstall.poll to a very large number, to avoid it doing a subsequent 
scan.

I then created my own startup bundle and set the start level to the same value 
as was used for fileinstall. In this bundle I did my own initial read of the 
etc directory and loaded the configuration properties into ConfigurationAdmin. 
This is enough to get karaf to start.
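
Roughly, the initial read is just this (a stripped-down sketch; no error 
handling, and .config files and factory PIDs are ignored):

    import java.io.File;
    import java.io.FileInputStream;
    import java.util.Hashtable;
    import java.util.Properties;
    import org.osgi.service.cm.Configuration;
    import org.osgi.service.cm.ConfigurationAdmin;

    public class EtcLoader {
        void load(ConfigurationAdmin configAdmin, File etcDir) throws Exception {
            for (File cfg : etcDir.listFiles((dir, name) -> name.endsWith(".cfg"))) {
                String pid = cfg.getName().substring(0, cfg.getName().length() - 4);
                Properties props = new Properties();
                try (FileInputStream in = new FileInputStream(cfg)) {
                    props.load(in);
                }
                Hashtable<String, Object> dict = new Hashtable<>();
                props.stringPropertyNames().forEach(key -> dict.put(key, props.getProperty(key)));
                // null location: don't bind the configuration to a particular bundle
                Configuration configuration = configAdmin.getConfiguration(pid, null);
                configuration.update(dict);
            }
        }
    }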

I then created my own implementation of the felix configadmin 
PersistenceManager interface and registered it as an OSGi service with the 
"name" property. I then set the felix.cm.pm system option in custom.properties 
to that name.
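
The registration itself is small. A sketch (MyPersistenceManager is the custom 
implementation of load/store/delete/exists/getDictionaries, and the "name" 
value has to match whatever felix.cm.pm is set to):

    import java.util.Hashtable;
    import org.apache.felix.cm.PersistenceManager;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class Activator implements BundleActivator {
        @Override
        public void start(BundleContext context) {
            Hashtable<String, Object> props = new Hashtable<>();
            // matched against the felix.cm.pm system property
            props.put("name", "myPersistenceManager");
            context.registerService(PersistenceManager.class,
                    new MyPersistenceManager(), props);
        }

        @Override
        public void stop(BundleContext context) {
            // the framework unregisters the service for us
        }
    }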

This gave me a working basis for experimentation. Configuration properties are 
loaded initially, and then modifications are saved. You may be able to get away 
without the initial load of the cfg files and rely totally on the 
PersistenceManager, but I haven't been totally successful with that. You'd 
think it ought to work. As it is you get a slightly curious situation where you 
load the config files in and the PersistenceManager gets asked to save them 
again.

The early start level causes some problems: you can't use declarative services, 
and depending on other bundles such as json or yaml libraries is slightly 
problematic, but otherwise this works fine. Assuming you can get over the early 
start level I don't see why writing to a database wouldn't work, but I've just 
been interested in reading and writing files in a different way.


Re: Replacing or blacklisting felix fileinstall

2019-03-13 Thread tom
> You have to create a custom framework features that you use at
> startupFeature instead of the "standard" karaf feature.

OK, I see. I've been reluctant to do this in the past as I then have to track 
changes made to the "normal" feature across versions. But it sounds like the 
simplest solution here.


Re: Replacing or blacklisting felix fileinstall

2019-03-13 Thread tom
> I'm working on a Karaf Configuration persistence layer to "abstract"
> fileinstall, (especially to deal with encryption, AWS keys, etc). It
> will help you to implement your own backend.

That will be awesome. Any targeted release date?

Thanks.


Re: Replacing or blacklisting felix fileinstall

2019-03-13 Thread tom
> You should remove fileinstall from etc/startup.properties to remove it.

So how would I do that from a maven build? We're using the karaf maven plugin, 
so the startup.properties file gets generated automagically. I'm guessing I'd 
have to either put a version of it in my filtered_resources/etc or 
unfiltered_resources/etc directory and hope that overwrites it, or put a 
post-processing step in to remove the line?
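
For reference, the generated line in question looks roughly like this (the 
version here is a guess):

    mvn\:org.apache.felix/org.apache.felix.fileinstall/3.6.4 = 11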

Anyway, that gives me a method of experimentation, which is good. I'll worry 
about how to do it properly once I've got my preferred solution working.


Re: Replacing or blacklisting felix fileinstall

2019-03-13 Thread tom
> I've implemented a SQL mechanism for persisting configurations. I started
> by trying to implement a custom persistence mechanism for Felix CM. This
> didn't work (see
> http://karaf.922171.n3.nabble.com/Custom-PersistenceManager-configurations-not-instantiating-components-td4052786.html#a4052799
> ).
> 
> What I ended up doing was having a component which just interacted with
> Configuration Admin (creating configurations at startup; updating the
> database when modifications occur; deleting configurations at shutdown).
> File install is still running - it creates files when my component creates
> configurations, and updates & deletes them as necessary.
> 
> The only downside I've found is the factory configurations get a new PID
> every time Karaf starts (as you can't specify the pid for a new factory
> configuration - though I understand this is possible in new versions of
> Config Admin).


That's the kind of thing I want to do. I have had success with a custom 
PersistenceManager before, but fileinstall got in the way, and there was a 
change in behaviour at some point that affected things (this is going back a 
year or so).
So my first aim is to get fileinstall out of the way and put a simpler 
component in that will just statically load the config from the .cfg files in 
etc. Then I'll play with trying to save any dynamic changes.

Thanks. Good to know that someone's had at least some kind of success with this 
sort of thing.


Replacing or blacklisting felix fileinstall

2019-03-13 Thread tom
I'm trying to experiment with an alternative way of loading up configuration. 
So my goal is to disable felix fileinstall and provide an alternative 
implementation of the org.apache.felix.cm.PersistenceManager interface.

However, so far I'm having great difficulty either blacklisting fileinstall or 
replacing it. 

In etc/org.apache.karaf.features.xml I have tried blacklisting
mvn:org.apache.felix/org.apache.felix.fileinstall.

However I can't get this to have any effect. I have traced through with a 
debugger, and the LocationPattern.matches method is returning true, and it 
appears to be doing the right thing at that level, but the bundle still starts. 
I can't see any log messages that might be relevant.
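
For reference, the kind of entry I mean is of roughly this shape (element names 
as I understand the 4.2 features-processing format, so treat the exact tags as 
unverified):

    <featuresProcessing xmlns="http://karaf.apache.org/xmlns/features-processing/v1.0.0">
        <blacklistedBundles>
            <bundle>mvn:org.apache.felix/org.apache.felix.fileinstall</bundle>
        </blacklistedBundles>
    </featuresProcessing>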

I also tried using bundle replacements in that file. I had slight success -- 
with a replacement entry in place I can compile my own version of fileinstall 
3.6.5 and see that it's being loaded in.
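
Again for reference, the replacement is of roughly this shape (element and 
attribute names as I understand them, and the replacement coordinates here are 
made up):

    <bundleReplacements>
        <bundle originalUri="mvn:org.apache.felix/org.apache.felix.fileinstall"
                replacement="mvn:com.example/my-fileinstall/3.6.5"
                mode="maven"/>
    </bundleReplacements>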

I say "can"; I could. I did that this morning, though I seem unable to 
reproduce it now: I'm getting both versions of the bundle. Anyway, I struggled 
to change the group and artifact to something else (I wasn't changing any code, 
I was just changing the pom to change the maven coordinates it was building to 
and then trying to reference that in the replacement url). But as soon as I did 
that it went back to loading the original.

Anyone got a recipe for providing an alternative implementation of fileinstall?

Thanks.


Re: Karaf freezes up under MS-Windows

2018-11-29 Thread tom leung
This is based on the latest Apache Karaf v4.1.7, running on Windows 7.

It still freezes up on the Windows side after entering the command "list -t 0",
but the same symptom never happens on the Linux side.

So the bug is really related to MS-Windows.

Could you please file this symptom as a bug?

Best Rgds,

Tom Leung




On Mon, Aug 13, 2018 at 1:03 PM Jean-Baptiste Onofré 
wrote:

> OK, let me try to reproduce (I think I still have a VM with Windows 7).
>
> Thanks,
> Regards
> JB
>
> On 13/08/2018 06:48, tom leung wrote:
> > No response no matter what I type including "CTRL-C and hit ENTER"
> >
> > I have type the following command
> >
> > karaf@root()>
> > karaf@root()> bundle:list -t 0
> > START LEVEL 100 , List Threshold: 0
> >
> > Same issue, still freeze up after typing "CTRL-C" and hit ENTER
> >
> >
> >
> >
> >
> > On Mon, Aug 13, 2018 at 12:44 PM, Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> >
> > Hi Tom,
> >
> > By freeze, you mean that you can type any command anymore in the
> > console ?
> >
> > Did you try CTRL-C or typing ENTER after the command ?
> > Does the same happen with bundle:list -t 0 command ?
> >
> > Nothing special in the karaf.log ?
> >
> > Regards
> > JB
> >
> > On 13/08/2018 06:38, tom leung wrote:
> > > I find the same issues happened in V4.1.6 anf v4.2.0
> > >
> > > At MS-Windows 64-bit Windows 7.0 professional version
> > >
> > >
> > > I install a new fresh copy of Karaf v4.1.6
> > >
> > > After it shows the following logo
> > >
> > > C:\software\apache-karaf-4.1.6\bin>karaf
> > > __ __  
> > >/ //_/ __ _/ __/
> > >   / ,<  / __ `/ ___/ __ `/ /_
> > >  / /| |/ /_/ / /  / /_/ / __/
> > > /_/ |_|\__,_/_/   \__,_/_/
> > >
> > >   Apache Karaf (4.1.6)
> > >
> > > Hit '' for a list of available commands
> > > and '[cmd] --help' for help on a specific command.
> > > Hit '' or type 'system:shutdown' or 'logout' to shutdown
> > Karaf.
> > >
> > > karaf@root()>
> > >
> > > karaf@root()> list
> > > START LEVEL 100 , List Threshold: 50
> > >
> > >
> > > I type the following command, Karaf seems to be freezed up withut
> any
> > > response.
> > >
> > > I need to kill Karaf process manually and restart karaf Karaf
> again.
> > >
> > > Same issue if I type the above command again.
> > >
> > > The same issue also happens in Karaf v4.2.0
> > >
> > > Is it a bug?
> > >
> > > Best Rgds,
> > >
> > > Tom Leung
> > >
> > >
> > >
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
> >
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: Aries JAX-RS Whiteboard

2018-11-24 Thread tom
> as I'm creating the Aries JAXRS feature for Karaf, I'm currently using
> the one from Aries.
That's the one I used.


Re: Aries JAX-RS Whiteboard

2018-11-24 Thread tom
> Did you notice that the JAX-RS Whiteboard provides a ClientBuilder
> (prototype scoped) service?
> 
> e.g.
> @Reference
> ClientBuilder clientBuilder;
> 

Ah, no, I hadn't. I had read the words, but obviously not understood the 
significance.
Thanks for the pointer.


Re: Aries JAX-RS Whiteboard

2018-11-24 Thread tom


> The SSE from JAX-RS 2.1 definitely works (client and server side) with the 
> Aries implementation, so hopefully that will give you everything that you 
> need. 

I have it all working now. I've had to make one or two changes though, as a 
result of the change from jersey to cxf.

Generally, the implementation was pretty easy, it certainly works to use Aries 
JAX-RS Whiteboard within Karaf 4.1.2. Once I worked out what the required 
dependency bundles were, it was OK. Anecdotally the requests seem faster than 
using jersey as well, though I haven't done any testing on that.

I battled with an issue for a while because I had two bundles providing the 
jaxrs API. I had the original one, plus I also had the 
org.apache.aries.javax.jax.rs-api one (required as it adds a required OSGi 
contract specification). That caused me some issues with bundles sometimes 
working and sometimes reporting "exposed to package via two dependency chain" 
issues, and huge startup times and memory use while it figured it out. That 
took me a while to iron out. 

An issue I failed to resolve was that we had some use of jaxrs http client. I 
never did manage to get it to use the CXF client implementation. The Aries 
JAX-RS whiteboard bundles the required parts of CXF within it, but I don't 
think they are accessible to use. I tried including the relevant parts of CXF, 
but couldn't get it all to work. It seemed to be a bundle initialisation order 
issue, in that the geronimo osgi locator component was being used to find the 
JAX-RS http client classes before it had been initialised. Maybe if I'd just 
included the complete CXF bundle it would all have worked OK, but that seemed 
overkill when all I wanted was the client, and when the Aries JAX-RS whiteboard 
implementation includes its own copy of CXF as well. Since we only had one 
class using it I just substituted it for a non-jax-rs http client and the 
problem went away.

I encountered an issue with Aries JAX-RS Whiteboard that I will raise on 
github. It doesn't like "void" resource method results. You get:
  java.lang.NullPointerException
at org.apache.aries.jax.rs.whiteboard.internal.cxf.PromiseAwareJAXRSInvoker.checkFutureResponse(PromiseAwareJAXRSInvoker.java:40)

This caused me a bit of rework to get round, and to be sure it was actually the 
problem.
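
For anyone hitting the same thing, one way round it is to return an explicit 
Response instead of void; a sketch with invented names:

    import javax.ws.rs.DELETE;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.core.Response;

    @Path("/things")
    public class ThingResource {
        // a void method here triggered the NPE; an explicit 204 avoids it
        @DELETE
        @Path("{id}")
        public Response delete(@PathParam("id") String id) {
            // ... do the actual deletion ...
            return Response.noContent().build();
        }
    }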

I also encountered a difference in behaviour between cxf and jersey. I had a 
resource component with a path of "/a", and another with a path of "/a/b". In 
CXF the second of these didn't seem to get matched. Instead I had to add a 
subresource locator method on the first to match "b" and return the second 
resource component. No big deal, and I don't actually know what the spec says 
is valid. I'm assuming that this is in CXF rather than the whiteboard.
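
In code terms that change looks something like this (a sketch with invented 
names, not the actual resources):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    @Path("/a")
    public class ARootResource {
        // the second resource is no longer registered on its own
        private final BResource b = new BResource();

        @GET
        public String rootOfA() {
            return "a";
        }

        // subresource locator: CXF then matches /a/b via this method
        @Path("b")
        public BResource bSubResource() {
            return b;
        }
    }

    class BResource {
        @GET
        public String get() {
            return "b";
        }
    }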

Apart from all of that, it worked fine. 
Now to see whether it all actually solves the reliability issues we were having 
with our own homebrew whiteboard.

Thanks for the assistance.


Re: Aries JAX-RS Whiteboard

2018-11-22 Thread tom
> Honestly, it sounds like you’re about 30 minutes away from having the Aries 
> JAX-RS Whiteboard working...

OK, I now understand your earlier reference to the servicemix annotation bundle.
I had to pick up the org.apache.felix.http.servlet-api-1.1.2.jar to get the 
JavaServlet contract version 3.1.

I've now got karaf starting cleanly, and it's obviously doing *something*. I 
suspect if I created a simple example it would be working, but obviously I was 
naive and greedy and went straight for converting my entire app. I mean, what 
could go wrong?

I say it's doing something, in that I can request an api and I get an error 
such as:
java.lang.ClassNotFoundException: 
org.glassfish.jersey.internal.RuntimeDelegateImpl not found by javax.ws.rs-api

but the important thing is that what's in the stack trace is my resource class. 
So it's registered the endpoint and routed it correctly; it's just that I've 
still got some references to jersey. I'll have to clean all that out and it'll 
probably be more successful.

Thanks.


Re: Aries JAX-RS Whiteboard

2018-11-22 Thread tom
> that won't work out of the box as Karaf 4.2.x is still R6.
> 
> It will work with Karaf 4.3.x that will be R7.
> 
> In the mean time, I'm creating a very simply rest whiteboard pattern for
> CXF.
> It doesn't use all the JAXRS whiteboard spec, but just works fine for
> most of the use cases.

OK, thanks. That's appreciated.
We're only doing simple things, registering resources and extensions (message 
body writer, request/response filters etc), but we only have one application. 
It's what I would have thought was pretty normal stuff to be honest.
We do use the SSE (server-sent events) implementation from jersey at the 
moment, and also multipart, but it looks like CXF supports those, so I'm sure 
that'll be possible ;-)

I'll hold off for now then. 
What's the timescale for Karaf 4.3? 

Thanks.


Re: Aries JAX-RS Whiteboard

2018-11-22 Thread tom
> You should then be able to get away with relatively few bundles. The JAX-RS 
> Whiteboard API, OSGi Promises + function, the Aries wrapping of the JAX-RS 
> API and the Aries JAX-RS Whiteboard implementation should be enough. This is 
> by far preferable to using CXF directly, where you don’t have proper resource 
> isolation, nor do you have a nice way to apply extensions (e.g. JSON support, 
> CORS headers, etc).

So I've added those bundles (promise, function and the aries jaxrs spec bundle 
for the JavaJAXRS capability), but the problem I now have is that it's missing 
the JavaAnnotation capability, version 1.3.0. I suspect I have something 
providing an earlier version of that, but at the moment my OSGi fu hasn't 
yielded the answer.

Good to know though that I'm potentially on the right track.


Aries JAX-RS Whiteboard

2018-11-22 Thread tom
For largely historical reasons we have ended up with a setup where we use the 
standard karaf HTTP whiteboard service, and then run jersey on top of that with 
our own homebrew whiteboard service to register JAXRS endpoints.

I'm looking to replace this with a better solution, presumably based around the 
OSGi JAXRS whiteboard spec and aries-jax-rs-whiteboard 
(https://github.com/apache/aries-jax-rs-whiteboard), since that now exists, 
which it didn't when we started out.

Is aries-jax-rs-whiteboard compatible with karaf, does anyone know? Or does it 
depend on things that aren't provided, or rely on things from later OSGi specs 
that karaf doesn't support? I'm finding I'm having to add in a bunch of 
bundles, and I'm wondering whether ultimately it's a dead end.

Am I better off doing it another way? Karaf comes with CXF doesn't it? My 
preference is to use the official OSGi whiteboard, but if that's going to be 
too hard right now I'm not against doing it a CXF specific way. The only 
example I can find so far though looks something like this:

@Component(service=TaskServiceRest.class,
           property={"service.exported.interfaces=*",
                     "service.exported.configs=org.apache.cxf.rs",
                     "org.apache.cxf.rs.address=/tasklistRest"})

Which seems, well, more complex than necessary in comparison to 
@Component(service=TaskServiceRest.class)
@JaxrsResource
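
For comparison, the R7 whiteboard version is just a declarative services 
component with the whiteboard property; a sketch, assuming the 
org.osgi.service.jaxrs.whiteboard.propertytypes package from the R7 spec:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.jaxrs.whiteboard.propertytypes.JaxrsResource;

    @Component(service = TaskServiceRest.class)
    @JaxrsResource
    @Path("/tasklist")
    public class TaskServiceRest {
        @GET
        public String list() {
            return "[]";
        }
    }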


What's the "best" route right now? It has to be declarative-services based and 
use the whiteboard pattern.

Thanks.


Karaf freezes up under MS-Windows

2018-08-12 Thread tom leung
I find the same issue happens in v4.1.6 and v4.2.0.

This is on MS-Windows, 64-bit Windows 7.0 Professional.


I installed a fresh copy of Karaf v4.1.6.

After it shows the following logo:

C:\software\apache-karaf-4.1.6\bin>karaf
        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (4.1.6)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.

karaf@root()>

karaf@root()> list
START LEVEL 100 , List Threshold: 50


When I type that command, Karaf seems to freeze up without any response.

I need to kill the Karaf process manually and restart Karaf again.

Same issue if I type the above command again.

The same issue also happens in Karaf v4.2.0

Is it a bug?

Best Rgds,

Tom Leung


Re: client.bat does not work in Karaf 4.1.5

2018-03-05 Thread tom
There was an incompatibility I believe in 4.1.3, where a dependency introduced 
a breaking change in a point release that was picked up by karaf. 4.1.5 has 
similar problems, though I don't know what the exact cause is there yet. We 
jumped to 4.1.5 and then back down to 4.1.4 as that one worked. In 4.1.3/4.1.5, 
external SSH clients work, just not client.bat.
Vanilla 4.1.5 download, start karaf.bat, start client.bat, can't login. 

> On 05 March 2018 at 13:54 t...@quarendon.net wrote:
> 
> 
> We've seen the same thing. 4.1.3 had a problem, 4.1.5 has a different 
> problem, 4.1.4 is the sweetspot at the moment. I can't recall the details, 
> I'll dig them out.
> 
> > On 05 March 2018 at 13:02 Jean-Baptiste Onofré  wrote:
> > 
> > 
> > Hi Nicolas,
> > 
> > the big issue is just Windows ;)
> > 
> > Without kidding, it seems that the ParsedLine line is null in the Console 
> > reader.
> > 
> > It could be related to your terminal. What's your Windows version ?
> > 
> > Can you try to cleanup etc/shell.init.script (especially around the color
> > settings) ?
> > Does it work with Karaf 4.1.4 ?
> > 
> > Thanks !
> > Regards
> > JB
> > 
> > On 03/05/2018 01:56 PM, DUTERTRY Nicolas wrote:
> > > Hi,
> > > 
> > >  
> > > 
> > > I have just downloaded Karaf 4.1.5 and it seems that, under Windows, it 
> > > is not
> > > possible to obtain an interactive SSH session anymore with the script 
> > > “client.bat”.
> > > 
> > > The login is successful but when I press any key the SSH session ends.
> > > 
> > > I have this error in logs :
> > > 
> > >  
> > > 
> > > 2018-03-05T13:35:39,496 | ERROR | Karaf ssh console user karaf |
> > > ShellUtil| 43 - org.apache.karaf.shell.core - 
> > > 4.1.5 |
> > > Exception caught while executing command
> > > 
> > > java.lang.NullPointerException: null
> > > 
> > >   at
> > > org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:348)
> > > [43:org.apache.karaf.shell.core:4.1.5]
> > > 
> > >   at java.lang.Thread.run(Thread.java:748) [?:?]
> > > 
> > > 2018-03-05T13:35:40,011 | WARN  | sshd-SshServer[50ae478a]-nio2-thread-3 |
> > > ServerSessionImpl| 48 - org.apache.sshd.core - 1.6.0 |
> > > exceptionCaught(ServerSessionImpl[karaf@/127.0.0.1:65321])[state=Opened]
> > > IOException: The specified network name is no longer available.
> > > 
> > >  
> > > 
> > > Do you have any idea what the issue is ?
> > > 
> > > Regards,
> > > 
> > > --
> > > 
> > > Nicolas Dutertry
> > > 
> > > Sopra HR Software - http://www.soprahr.com/
> > > 
> > >  
> > > 
> > 
> > -- 
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com


Re: client.bat does not work in Karaf 4.1.5

2018-03-05 Thread tom
We've seen the same thing. 4.1.3 had a problem, 4.1.5 has a different problem, 
4.1.4 is the sweetspot at the moment. I can't recall the details, I'll dig them 
out.

> On 05 March 2018 at 13:02 Jean-Baptiste Onofré  wrote:
> 
> 
> Hi Nicolas,
> 
> the big issue is just Windows ;)
> 
> Without kidding, it seems that the ParsedLine line is null in the Console 
> reader.
> 
> It could be related to your terminal. What's your Windows version ?
> 
> Can you try to cleanup etc/shell.init.script (especially around the color
> settings) ?
> Does it work with Karaf 4.1.4 ?
> 
> Thanks !
> Regards
> JB
> 
> On 03/05/2018 01:56 PM, DUTERTRY Nicolas wrote:
> > Hi,
> > 
> >  
> > 
> > I have just downloaded Karaf 4.1.5 and it seems that, under Windows, it is 
> > not
> > possible to obtain an interactive SSH session anymore with the script 
> > “client.bat”.
> > 
> > The login is successful but when I press any key the SSH session ends.
> > 
> > I have this error in logs :
> > 
> >  
> > 
> > 2018-03-05T13:35:39,496 | ERROR | Karaf ssh console user karaf |
> > ShellUtil| 43 - org.apache.karaf.shell.core - 4.1.5 
> > |
> > Exception caught while executing command
> > 
> > java.lang.NullPointerException: null
> > 
> >   at
> > org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:348)
> > [43:org.apache.karaf.shell.core:4.1.5]
> > 
> >   at java.lang.Thread.run(Thread.java:748) [?:?]
> > 
> > 2018-03-05T13:35:40,011 | WARN  | sshd-SshServer[50ae478a]-nio2-thread-3 |
> > ServerSessionImpl| 48 - org.apache.sshd.core - 1.6.0 |
> > exceptionCaught(ServerSessionImpl[karaf@/127.0.0.1:65321])[state=Opened]
> > IOException: The specified network name is no longer available.
> > 
> >  
> > 
> > Do you have any idea what the issue is ?
> > 
> > Regards,
> > 
> > --
> > 
> > Nicolas Dutertry
> > 
> > Sopra HR Software - http://www.soprahr.com/
> > 
> >  
> > 
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Re: Using java.util.logging

2017-10-12 Thread tom
> can you check if JUL packages really comes from Pax Logging ? 

Err, how can they? Perhaps I'm misunderstanding. 
Surely the java.util.logging packages are provided by the JRE?

I DO have one log message that DOES come out. Very odd.
If I step through the code in the debugger, I appear to have two code paths. 
In one, my calls to java.util.logging.Logger.info end up in 
org.ops4j.pax.logging.log4j2.internal.JdkHandler.

Later a call to java.util.logging.Logger.info ends up in 
org.ops4j.pax.logging.service.internal.JdkHandler

These log calls end up being passed through to log4j, rather than log4j2. log4j 
isn't set up at all, only log4j2 is set up, so these messages don't come out.

So for some reason, we have the "log4j" pax logging bundle ("OPS4J Pax Logging 
- Service", "org.ops4j.pax.logging.pax-logging-service") AS WELL as the log4j2 
logging bundle ("OPS4J Pax Logging - Log4j v2", 
"org.ops4j.pax.logging.pax-logging-log4j2"). Not sure where that's come from, 
but that doesn't appear in the standard karaf distro. I don't believe we 
explicitly list that anywhere as a requirement so at the moment I'm not sure 
how I determine what has caused that to be included by the distribution build 
process.


Re: Using java.util.logging

2017-10-12 Thread tom
> Yet when I try, I can't.

OK, so that's frustrating.
If I try a simple "hello world" bundle that just logs a message, run in a clean 
karaf, it DOES come out in the karaf.log file.
Run it in our application though and it doesn't.
So some bundle is interfering with the logging configuration in some way.
That's going to be so easy to diagnose :-)

Why is it that logging is so complicated, and that there are about a hundred 
logging frameworks? About half the projects on github must be dedicated to 
logging.


Using java.util.logging

2017-10-12 Thread tom
Documentation indicates I can do this. Pax logging indicates I can do this.
Yet when I try, I can't.

I have some code that uses java.util.logging. If I log at level SEVERE, it 
comes out on the console, and only the console, and doesn't come out in the 
karaf.log file. Anything less than that and it doesn't come out anywhere. This 
is the default java.util.logging behaviour isn't it?
I'm starting karaf using the karaf.bat file. As a double check I've printed out 
the value of the java.util.logging.config.file system property and it does 
indeed point to etc\java.util.logging.properties.

So I'm somewhat confused. Is there any additional configuration I need to do to 
make jdk logging come out somewhere useful?

I'm using a 4.1.3 snapshot of karaf on Windows. 
Thanks.


Re: Providing alternative config mechanism than felix.fileinstall/Preserving config changes on re-install

2017-10-06 Thread tom

> You can implement your own PersistenceManager (ConfigAdmin service).

OK, I'm actually super confused now (not hard).
felix configadmin appears to have logic in it that persists configurations to 
and from files. It's unclear in the karaf environment where the 
FilePersistenceManager is attempting to read, and more importantly write, 
changes to/from. I can't see any evidence of it writing files anywhere, but the 
logic would appear to be to fall back to writing to the current directory if 
there isn't any explicit configuration (which there doesn't appear to be that I 
can find).

Given the presence of felix configadmin and FilePersistenceManager, I can only 
assume that the reason that fileinstall is *actually* the thing that is used in 
karaf for persistence of configuration is the polling behaviour, allowing you 
to change the files and have it pick the changes up without having to restart. 

Say I implemented my own felix configadmin PersistenceManager. I'd still need 
to get that activated very early, which is something I've not yet understood 
how to do. 
Any suggestions for how to get a bundle that's activated super early, same as 
configadmin/fileinstall?


Re: Providing alternative config mechanism than felix.fileinstall/Preserving config changes on re-install

2017-10-06 Thread tom
I can see KARAF-418, but that's pretty old, and sounds like it was considered 
unnecessary? Is there anything else I can't find?

I don't necessarily want to store things in a database, I just want different 
behaviour to normal, to provide my own implementation of something that listens 
to config changes and injects configuration on startup. And I can write that 
bit, what I can't do is substitute it in at a central enough level to replace 
fileinstall.

I've made a little progress. I manually edited the "startup.properties" file 
and put my bundle in there at level 11. It got activated. So what I don't 
currently understand is a) where that file comes from (it's clearly generated 
as part of building my karaf distribution, it's not in source control) and b) 
what specifying the start-level in the feature.xml file does (since it doesn't 
appear to specify the start level :-)).
My problem now appears to be that I'd written my code using declarative 
services, and I think I need to go back to old fashioned bundle activators and 
service trackers in order to reduce the dependencies and make the code work in 
the "simple" environment I encounter down at that start level.
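
A minimal activator-plus-tracker skeleton for that (sketch only; the config 
loading itself is elided):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.cm.ConfigurationAdmin;
    import org.osgi.util.tracker.ServiceTracker;

    public class EarlyConfigActivator implements BundleActivator {
        private ServiceTracker<ConfigurationAdmin, ConfigurationAdmin> tracker;

        @Override
        public void start(BundleContext context) {
            tracker = new ServiceTracker<ConfigurationAdmin, ConfigurationAdmin>(
                    context, ConfigurationAdmin.class, null) {
                @Override
                public ConfigurationAdmin addingService(ServiceReference<ConfigurationAdmin> ref) {
                    ConfigurationAdmin configAdmin = super.addingService(ref);
                    // read etc/*.cfg here and push each one into configAdmin
                    return configAdmin;
                }
            };
            tracker.open();
        }

        @Override
        public void stop(BundleContext context) {
            tracker.close();
        }
    }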

There was also a comprehension question of why the ConfigRepository was 
attempting to write the config files directly, rather than just calling 
Configuration.update. Surely one thing or the other (calling update I assume is 
preferable), but not both?

Thanks.

> On 06 October 2017 at 11:40 Jean-Baptiste Onofré  wrote:
> 
> 
> Hi
> 
> I guess you want to use an alternative backend to the filesystem (a database 
> for instance).
> 
> In that case we have a Jira about that and you can provide your own 
> persistence backend.
> 
> Regards
> JB
> 
> On Oct 6, 2017, at 12:30, t...@quarendon.net wrote:
> >I'm trying to establish some alternative configuration behaviour than
> >what felix-fileinstall gives me.
> >I have written a very simple component that reads configuration files
> >in from /etc and updates config admin with the information, much like
> >fileinstall does. I can run this and it appears to work, however I
> >still have the existing mechanism in that I'd like to remove.
> >
> >So I naively did the following:
> >   set the start-level of my bundle to be 11, same as fileinstall
> >set felix.fileinstall.enableConfigSave to false in
> >etc/custom.properties
> >   set felix.fileinstall.dir to empty
> >
> >Karaf fails to start.
> >
> >So my suspicion is that apache fileinstall is more centrally required
> >than I'd hoped. Looking at the karaf code there are certainly a few
> >places where it assumes a configuration contains a
> >felix.fileinstall.filename property that names the file where the
> >configuration is stored, and seems to directly read and write those
> >files. This appears to mean that I wouldn't be able to substitute my
> >own configuration storage backend, which is a shame (I'm actually
> >confused what org.apache.karaf.config.core.ConfigRepository is actually
> >doing here -- why does is write directly to the file, rather than just
> >letting fileinstall do it, especially as it only seems to allow for
> >".cfg" and not ".config" files). There may be other reasons why karaf
> >won't start though.
> >
> >Is it likely that I would substitute felix.fileinstall in this way?
> >
> >
> >What I was actually trying to solve was what to do when a user
> >uninstalls and reinstalls our karaf-based product, and attempting to
> >preserve any configuration changes. What I had hoped to do was store
> >any actually modified configuration properties in separate files (just
> >the actual properties that were different from default or from the
> >originals in the etc/*.cfg files), so that the original etc/*.cfg files
> >would be replaced without difficulty, and the changed configuration
> >changes would then be applied.
> >
> >So alternative question: How else can I achieve the same thing without
> >making the users manually merge the configuration changes?
> >
> >Thanks.


Providing alternative config mechanism than felix.fileinstall/Preserving config changes on re-install

2017-10-06 Thread tom
I'm trying to establish some alternative configuration behaviour than what 
felix-fileinstall gives me.
I have written a very simple component that reads configuration files in from 
/etc and updates config admin with the information, much like fileinstall does. 
I can run this and it appears to work, however I still have the existing 
mechanism in that I'd like to remove.

So I naively did the following:
   set the start-level of my bundle to be 11, same as fileinstall
   set felix.fileinstall.enableConfigSave to false in etc/custom.properties
   set felix.fileinstall.dir to empty

Karaf fails to start.

So my suspicion is that apache fileinstall is more centrally required than I'd 
hoped. Looking at the karaf code there are certainly a few places where it 
assumes a configuration contains a felix.fileinstall.filename property that 
names the file where the configuration is stored, and seems to directly read 
and write those files. This appears to mean that I wouldn't be able to 
substitute my own configuration storage backend, which is a shame (I'm actually 
confused what org.apache.karaf.config.core.ConfigRepository is actually doing 
here -- why does it write directly to the file, rather than just letting 
fileinstall do it, especially as it only seems to allow for ".cfg" and not 
".config" files). There may be other reasons why karaf won't start though.

Is it likely that I would substitute felix.fileinstall in this way?


What I was actually trying to solve was what to do when a user uninstalls and 
reinstalls our karaf-based product, and attempting to preserve any 
configuration changes. What I had hoped to do was store any actually modified 
configuration properties in separate files (just the actual properties that 
were different from default or from the originals in the etc/*.cfg files), so 
that the original etc/*.cfg files would be replaced without difficulty, and the 
changed configuration changes would then be applied.

So alternative question: How else can I achieve the same thing without making 
the users manually merge the configuration changes? 

Thanks.


Config admin doesn't correctly store strings containing backslashes

2017-09-29 Thread tom
Really? I find this somewhat hard to believe, but I'm fairly sure this is the 
case.

Anyway. I'm using config admin to update a configuration from within a karaf 
command. One of the properties is a string that contains a filename. I'm on 
Windows. Filename therefore contains backslashes.
I type in 
c:\work\tomq.keytab

I have double checked with the debugger that what my Hashtable contains is a 
string with backslashes in, it's not that the backslashes are being removed by 
the shell command input reader. 

I call "update" on the configuration. My component correctly sees the filename 
with backslashes. The properties file gets written with something like:

  file = "c:\\work\\tomq.keytab"

So so far so good. However, if I restart karaf, it seems to go wrong. 
So with karaf stopped, the file still contains the same value.
I start karaf. My component is then activated with a string that has the 
backslashes reinterpreted as escapes. The component sees:
c:work  omq.keytab
The properties file is also rewritten by something so that the backslashes have 
been reinterpreted in, well a way I don't understand. It now contains:
file =  "c:work\tomq.keytab"

I don't know who is responsible here, whether this is all felix fileinstall, or 
karaf. 
I'm using karaf 4.1.2.
If it makes a difference, my component is written using declarative services 
and has something along the lines of:

@interface Config { String file(); }
@Activate public void activate(Config config) {}

I've double checked this a few times as I feel I must be seeing things, because 
surely this should work?
Can anyone confirm my madness/sanity?

Thanks.


Re: Creating a karaf feature containing a karaf shell command breaks karaf

2017-09-07 Thread tom
So although moving eventadmin out of startupFeatures makes the command work, 
it seems to break a bunch of other things, so doesn't seem wise after all. 
E.g. I get things like the following, which only seems to happen if I've moved 
that line:

Bundle org.ops4j.pax.web.pax-web-extender-whiteboard [115] Error starting 
mvn:org.ops4j.pax.web/pax-web-extender-whiteboard/6.0.6 
(org.osgi.framework.BundleException: Activator start error in bundle 
org.ops4j.pax.web.pax-web-extender-whiteboard [115].)
org.osgi.framework.BundleException: Activator start error in bundle 
org.ops4j.pax.web.pax-web-extender-whiteboard [115].
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2289) 
[?:?]
at org.apache.felix.framework.Felix.startBundle(Felix.java:2145) [?:?]
at 
org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1372) [?:?]
at 
org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308)
 [?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
Caused by: java.lang.IllegalStateException: HttpService must be implementing 
Pax-Web WebContainer!
at 
org.ops4j.pax.web.extender.whiteboard.internal.ExtendedHttpServiceRuntime.serviceChanged(ExtendedHttpServiceRuntime.java:110)
 ~[?:?]
at 
org.ops4j.pax.web.extender.whiteboard.internal.ExtendedHttpServiceRuntime.serviceChanged(ExtendedHttpServiceRuntime.java:44)
 ~[?:?]
at 
org.ops4j.pax.web.extender.whiteboard.internal.util.tracker.ReplaceableService.bind(ReplaceableService.java:86)
 ~[?:?]
at 
org.ops4j.pax.web.extender.whiteboard.internal.util.tracker.ReplaceableService$Customizer.addingService(ReplaceableService.java:105)
 ~[?:?]
at 
org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:941)
 ~[?:?]
at 
org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:870)
 ~[?:?]
at 
org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256) 
~[?:?]
at 
org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:183) 
~[?:?]
at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:318) 
~[?:?]
at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:261) 
~[?:?]
at 
org.ops4j.pax.web.extender.whiteboard.internal.util.tracker.ReplaceableService.start(ReplaceableService.java:72)
 ~[?:?]
at 
org.ops4j.pax.web.extender.whiteboard.internal.ExtendedHttpServiceRuntime.start(ExtendedHttpServiceRuntime.java:153)
 ~[?:?]
at 
org.ops4j.pax.web.extender.whiteboard.internal.Activator.start(Activator.java:65)
 ~[?:?]
at 
org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
 ~[?:?]
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2239) 
~[?:?]
... 4 more


Re: Creating a karaf feature containing a karaf shell command breaks karaf

2017-09-07 Thread tom
> Thanks, I'm checking out and I will take a look (and eventually submit a PR 
> ;)).
OK, thanks.


Re: Creating a karaf feature containing a karaf shell command breaks karaf

2017-09-07 Thread tom
There's a complete example here:
https://github.com/tomq42/karaf-command-feature

Thanks.

> On 07 September 2017 at 09:15 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> 
> 
> Hi Tom,
> 
> can you share the pom.xml you use to create your custom distro ?
> 
> It looks like some resources are missing in the distro.
> 
> Regards
> JB
> 
> On 09/07/2017 10:06 AM, t...@quarendon.net wrote:
> > Clearly this can't be true, since karaf ships with features containing 
> > bundles containing commands, but I can't get it to work.
> > 
> > I've created a simple karaf shell command, following the tutorial in the 
> > documentation. As per the example, it just says "hello world".
> > If I build that as an isolated bundle, install it into karaf and run it, it 
> > works, as per the documentation.
> > 
> > What I'm trying to do though is build that into a feature to include into a 
> > custom karaf distribution.
> > I can build the feature, and I can manually install the feature into a 
> > karaf distribution, and it works OK, I can run the resulting command.
> > I can build the karaf distribution containing the feature OK, but when I 
> > then run the resulting karaf then fails to initialise properly.
> > 
> > 
> > In the log file, I get:
> > Adding features:
> > Changes to perform:
> > Region: root
> > Bundles to install:
> > ...
> > mvn:org.apache.karaf.shell/org.apache.karaf.shell.console/4.1.2
> > null <-- This is clearly bad
> > mvn:org.apache.karaf.shell/org.apache.karaf.shell.ssh/4.1.2
> > ...
> > 
> > Installing bundles:
> > ...
> > null <-- This is the same null as before and causes the problem below.
> > Error installing boot features
> > java.lang.IllegalStateException: Resource has no uri
> > at 
> > org.apache.karaf.features.internal.service.Deployer.getBundleInputStream(Deployer.java:1460)
> >  [10:org.apache.karaf.features.core:4.1.2]
> > at 
> > org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:766)
> >  [10:org.apache.karaf.features.core:4.1.2]
> > at 
> > org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1233)
> >  [10:org.apache.karaf.features.core:4.1.2]
> > at 
> > org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$0(FeaturesServiceImpl.java:1132)
> >  [10:org.apache.karaf.features.core:4.1.2]
> > at 
> > org.apache.karaf.features.internal.service.FeaturesServiceImpl$$Lambda$15/951619949.call(Unknown
> >  Source) [10:org.apache.karaf.features.core:4.1.2]
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
> > at 
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >  [?:?]
> > at 
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >  [?:?]
> > 
> > The console then goes funny, and you get:
> > Error in initialization script: etc\shell.init.script: String index out of 
> > range: 0
> > which I'm assuming is a knockon issue.
> > 
> > I've got no idea where this "null" bundle is coming from, but it's clearly 
> > causing the karaf initialisation to go wrong.
> > 
> > I have created an example project at 
> > https://github.com/tomq42/karaf-command-feature which I hope shows the 
> > problem.
> > Just build with maven, then run the 
> > karaf-distro\target\assembly\bin\karaf(.bat) command. You should see the 
> > error above on the console, and the error in the log.
> > 
> > Any insight would be very welcome.
> > Thanks.
> > 
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Creating a karaf feature containing a karaf shell command breaks karaf

2017-09-07 Thread tom
Clearly this can't be true, since karaf ships with features containing bundles 
containing commands, but I can't get it to work.

I've created a simple karaf shell command, following the tutorial in the 
documentation. As per the example, it just says "hello world".
If I build that as an isolated bundle, install it into karaf and run it, it 
works, as per the documentation.

What I'm trying to do though is build that into a feature to include into a 
custom karaf distribution. 
I can build the feature, and I can manually install the feature into a karaf 
distribution, and it works OK, I can run the resulting command. 
I can build the karaf distribution containing the feature OK, but when I then 
run the resulting karaf then fails to initialise properly.


In the log file, I get:
Adding features: 
Changes to perform:
Region: root
Bundles to install:
...
mvn:org.apache.karaf.shell/org.apache.karaf.shell.console/4.1.2
null <-- This is clearly bad
mvn:org.apache.karaf.shell/org.apache.karaf.shell.ssh/4.1.2
...

Installing bundles:
...
null <-- This is the same null as before and causes the problem below.
Error installing boot features
java.lang.IllegalStateException: Resource has no uri
at 
org.apache.karaf.features.internal.service.Deployer.getBundleInputStream(Deployer.java:1460)
 [10:org.apache.karaf.features.core:4.1.2]
at 
org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:766) 
[10:org.apache.karaf.features.core:4.1.2]
at 
org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1233)
 [10:org.apache.karaf.features.core:4.1.2]
at 
org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$0(FeaturesServiceImpl.java:1132)
 [10:org.apache.karaf.features.core:4.1.2]
at 
org.apache.karaf.features.internal.service.FeaturesServiceImpl$$Lambda$15/951619949.call(Unknown
 Source) [10:org.apache.karaf.features.core:4.1.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:?]

The console then goes funny, and you get: 
Error in initialization script: etc\shell.init.script: String index out of 
range: 0
which I'm assuming is a knock-on issue.

I've got no idea where this "null" bundle is coming from, but it's clearly 
causing the karaf initialisation to go wrong.

I have created an example project at 
https://github.com/tomq42/karaf-command-feature which I hope shows the problem.
Just build with maven, then run the 
karaf-distro\target\assembly\bin\karaf(.bat) command. You should see the error 
above on the console, and the error in the log.

Any insight would be very welcome.
Thanks.


Re: Developing for karaf

2017-09-05 Thread tom
OK, thanks.

I'm really actually quite passionate about this whole area. The OSGi community 
is too small to fragment, and there's an excellent tool in the shape of 
bndtools (gradle based), but it doesn't "deploy" to anything. It's just super 
frustrating that there's no clear integration path between that and karaf 
(maven based). There would be an awesome environment if that could be nailed, 
so that it operated a bit like developing j2ee webapps for a container like 
tomcat, which seems roughly analogous (integration of running karaf within 
eclipse itself, automatic deployment of the bundles into the container and so 
forth, along with the ease of build and dependency configurations that bndtools 
has).

Anyway. I'll experiment with a "pure karaf" environment and see where it gets 
me. You've confirmed my basic understanding of the current situation anyhow.

> On 05 September 2017 at 13:07 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> 
> 
> Hi Tom,
> 
> You can also create your own Karaf custom distro.
> 
> We are also discussing about Karaf Boot to simplify the bootstrapping/ramp up 
> on 
> Karaf. Short term, we are working on an improved dev guide.
> 
> Back on your question:
> 1. From a dev perspective, you can have a running karaf instance, you just do 
> mvn install on your bundle, and thanks to bundle:watch, it's automatically 
> updated in Karaf (not need to perform any command).
> 2. For debugging, you are right: just start karaf in debug mode (bin/karaf 
> debug), it binds port 5005 by default, and then, connect your IDE remotely.
> 
> Regards
> JB
> 
> On 09/05/2017 01:48 PM, t...@quarendon.net wrote:
> > I'm trying to get an idea of how people go about developing for karaf, from 
> > a practical point of view.
> > 
> > So karaf is maven focused. There are archetypes for creating command, 
> > general bundles and so on.
> > I can then use maven to generate some eclipse project files that allow me 
> > to write and compile my code within eclipse. I guess if I need extra 
> > dependencies, I have to edit my pom and hopefully eclipse picks this up 
> > (never really done serious maven development, so I don't know how this 
> > process really works).
> > 
> > When I want to try something out, I have to perform a maven build, start up 
> > a copy of karaf, install the bundle (or bundles) into it, then try out my 
> > new code? All from the command line?
> > What about debugging? You start karaf with the "debug" option and then 
> > remotely connect eclipse to the karaf instance so that you can then place 
> > breakpoints and step through the code if necessary? Does it just magically 
> > find all the source code?
> > 
> > Just trying to get a picture of what the expected workflow is and whether 
> > I'm missing anything. We're used to doing things in bndtools where you've 
> > got eclipse tooling for everything, so I'm just trying to do a mental reset 
> > really on what a "karaf/maven centric" development environment and process 
> > would look like. (I'm aware of the "Integrate Apache Karaf and Bnd 
> > toolchain" article, but we've had limited success with it beyond simple 
> > "hello world" examples. Maybe we just need more perseverance).
> > 
> > Thanks.
> > 
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Developing for karaf

2017-09-05 Thread tom
I'm trying to get an idea of how people go about developing for karaf, from a 
practical point of view.

So karaf is maven focused. There are archetypes for creating command, general 
bundles and so on.
I can then use maven to generate some eclipse project files that allow me to 
write and compile my code within eclipse. I guess if I need extra dependencies, 
I have to edit my pom and hopefully eclipse picks this up (never really done 
serious maven development, so I don't know how this process really works).

When I want to try something out, I have to perform a maven build, start up a 
copy of karaf, install the bundle (or bundles) into it, then try out my new 
code? All from the command line?
What about debugging? You start karaf with the "debug" option and then remotely 
connect eclipse to the karaf instance so that you can then place breakpoints 
and step through the code if necessary? Does it just magically find all the 
source code?

Just trying to get a picture of what the expected workflow is and whether I'm 
missing anything. We're used to doing things in bndtools where you've got 
eclipse tooling for everything, so I'm just trying to do a mental reset really 
on what a "karaf/maven centric" development environment and process would look 
like. (I'm aware of the "Integrate Apache Karaf and Bnd toolchain" article, but 
we've had limited success with it beyond simple "hello world" examples. Maybe 
we just need more perseverance).

Thanks.


Re: Potential security issue with default karaf console access control lists?

2017-08-31 Thread tom
OK, JIRA created.

I realise it's worse. I can *write* to files with "tac", so I can rewrite the 
users.properties file if I want to and create new users with admin privileges. 
Cool.
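
Something along these lines in etc/org.apache.karaf.command.acl.shell.cfg is what
I have in mind (a sketch, not the shipped defaults):

    # etc/org.apache.karaf.command.acl.shell.cfg (sketch)
    cat = admin
    tac = admin
    exec = admin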



> On 31 August 2017 at 13:04 Jean-Baptiste Onofré  wrote:
> 
> 
> I agree: as we did for vi/edit command, we should limit cat to admin role.
> 
> Can you create a Jira about that ?
> 
> Thanks !
> Regards
> JB
> 
> On 08/31/2017 01:01 PM, t...@quarendon.net wrote:
> > Any user that can log on to the karaf console appears to be able to run the 
> > "shell:cat" command (among others), and hence view any file that the 
> > operating system user that's running the karaf process can see. Whilst 
> > there is access control on a few of the shell scope commands, it seems that 
> > the default access control allows any user to run things with no explicit 
> > access control.
> > 
> > This *feels* like a security issue to me.
> > 
> > I'd like to be able to restrict access to the shell completely, but from 
> > experiment and looking at the code it appears that anyone who has some kind 
> > of "role" assigned to them (either directly, or as a member of a group) 
> > appears to be able to connect to the karaf console, and hence can 
> > potentially navigate the visible filesystem.  This doesn't feel very 
> > desirable.
> > 
> > It seems a shame that I can no longer restrict access to the console using 
> > the "sshRole" configuration property (still referenced in the 
> > documentation), but it seems that was removed when the role based access 
> > control was introduced.
> > 
> > Other than physically restricting access to the SSH port, are there other 
> > ways I can restrict access to the console? Or do I need to develop my own 
> > access control list for the shell scope, and accept that all users can 
> > potentially access the console?
> > 
> > Thanks.
> > 
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Re: Console role based access control and command completion

2017-08-31 Thread tom
Hmm, OK.
There's a comment somewhere that implies that someone had at least at some 
point tried doing that or thought that was what happened.

It leads to *slightly* odd behaviour, of being told that a command exists, but 
then being told, "oh wait, no it doesn't". 

Thanks anyway.

> On 31 August 2017 at 13:02 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> 
> 
> Hi Tom,
> 
> We don't use the ACL in the completers, only on the action step. That's why 
> you 
> can complete but not execute.
> 
> Regards
> JB
> 
> On 08/31/2017 12:35 PM, t...@quarendon.net wrote:
> > If I'm logged on to the console as user, the list of commands I can execute 
> > is controlled by access control lists.
> > So, if I'm logged on as a user who has only got the "viewer" role, then I 
> > can't shut karaf down, the system:shutdown command requires the "admin" 
> > role.
> > 
> > Great.
> > 
> > However, I still appear to be able to get command completion that 
> > system:shutdown is a command, but when I try and invoke it I get "Command 
> > not found: system:shutdown", which seems confusing.
> > 
> > Is this intentional? I saw a comment in the code somewhere (lost it now) 
> > that made me think that the intention was that only commands I can actually 
> > invoke are then put in the completion list, and indeed that would seem like 
> > reasonable behaviour.
> > 
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Re: Default access control list for commands in the console

2017-08-31 Thread tom
Well that was a supplemental question actually.
Within a single access control list for a particular scope, I can presumably 
provide a catch all "*" entry at the bottom of that file, so that I can define 
a default access control list for all commands *within that scope*, but can I 
provide one that applies to *all* scopes? The access control list file is 
selected based on the scope of the command, so it doesn't seem obvious how I 
would define an access control list that applied to multiple scopes.
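
To be concrete, the per-scope catch-all I'm thinking of would be something like
this at the bottom of a scope's ACL file (a sketch; the scope and role names are
made up, and it assumes the wildcard/regex matching JB describes below):

    # etc/org.apache.karaf.command.acl.myscope.cfg (sketch)
    mycommand = manager, admin
    * = admin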

> On 31 August 2017 at 13:03 Jean-Baptiste Onofré  wrote:
> 
> 
> You can use regex (like *) to match on all "other" commands.
> 
> Regards
> JB
> 
> On 08/31/2017 12:38 PM, t...@quarendon.net wrote:
> > I just wanted to check that in the absence of an explicit access control 
> > list for commands in the console, the default is to be to allow anyone 
> > access.
> > Can that be altered at all? Is there a way of providing a default access 
> > control list at all? Or do I have to make sure I provide one for each 
> > command scope that I create?
> > 
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Potential security issue with default karaf console access control lists?

2017-08-31 Thread tom
Any user that can log on to the karaf console appears to be able to run the 
"shell:cat" command (among others), and hence view any file that the operating 
system user that's running the karaf process can see. Whilst there is access 
control on a few of the shell scope commands, it seems that the default access 
control allows any user to run things with no explicit access control.

This *feels* like a security issue to me. 

I'd like to be able to restrict access to the shell completely, but from 
experiment and looking at the code it appears that anyone who has some kind of 
"role" assigned to them (either directly, or as a member of a group) appears to 
be able to connect to the karaf console, and hence can potentially navigate the 
visible filesystem.  This doesn't feel very desirable. 

It seems a shame that I can no longer restrict access to the console using the 
"sshRole" configuration property (still referenced in the documentation), but 
it seems that was removed when the role based access control was introduced.

Other than physically restricting access to the SSH port, are there other ways 
I can restrict access to the console? Or do I need to develop my own access 
control list for the shell scope, and accept that all users can potentially 
access the console?

Thanks.


Default access control list for commands in the console

2017-08-31 Thread tom
I just wanted to check that in the absence of an explicit access control list 
for commands in the console, the default is to be to allow anyone access.
Can that be altered at all? Is there a way of providing a default access 
control list at all? Or do I have to make sure I provide one for each command 
scope that I create?


Karaf security framework and access to OSGi services

2017-08-29 Thread tom
The Karaf user guide section 4.1 says:

 The Apache Karaf security framework is used internally to control the access 
to:
 the OSGi services (described in the developer guide)

However the developer guide doesn't say anything that I can see about what that 
means. 5.15 in the Karaf user guide talks about how to set things up, the 
different login modules etc. It doesn't say anything further that implies that 
somehow you can apply access control to OSGi services. 

Am I just misinterpreting the initial statement?

Thanks.


Re: Console branding doesn't affect "client" console?

2017-08-26 Thread Tom Quarendon
Oh, OK. 
Any reason why there are two files? The implication is that you want to have 
different branding depending on whether you are running the server in your 
terminal or connecting as a client, which to me seems unlikely? 

This isn't mentioned at all in the documentation as far as I can see? 
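
So presumably the workaround is to duplicate the branding into a second file,
something like this (a sketch, assuming the same keys as etc/branding.properties):

    # etc/branding-ssh.properties (sketch)
    welcome = My Product shell (based on Apache Karaf)
    prompt = my-product>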

Sent from my iPhone

> On 26 Aug 2017, at 6:54 am, Jean-Baptiste Onofré  wrote:
> 
> Hi
> 
> You have a branding-ssh dedicated for ssh/client.
> 
> Regards
> JB
>> On Aug 25, 2017, at 15:24, t...@quarendon.net wrote:
>> I'm running this on Windows, I don't know whether the same is true on Linux.
>> I'm using Karaf 4.1.2.
>> 
>> If I run the karaf.bat file, I see my branding from my 
>> etc/branding.properties file appear.
>> 
>> If I then run the client.bat file (or a standard SSH client), I see the 
>> normal Karaf logo. 
>> 
>> The effect is that when I've installed karaf as a service, and so only use 
>> the "client.bat" script, I don't get a branded console, which seems to 
>> defeat the idea.
>> 
>> Can I get a branded console in this environment?
>> Thanks.


Console branding doesn't affect "client" console?

2017-08-25 Thread tom
I'm running this on Windows, I don't know whether the same is true on Linux.
I'm using Karaf 4.1.2.

If I run the karaf.bat file, I see my branding from my etc/branding.properties 
file appear.

If I then run the client.bat file (or a standard SSH client), I see the normal 
Karaf logo. 

The effect is that when I've installed karaf as a service, and so only use the 
"client.bat" script, I don't get a branded console, which seems to defeat the 
idea.

Can I get a branded console in this environment?
Thanks.


Re: Writing commands for karaf shell.

2017-07-21 Thread tom
> If you look at Karaf >= 4.1.x, a bunch of commands are not coming from
> Karaf anymore, but from Gogo or JLine.  I moved them when working on the
> gogo / jline3 integration.  The main point that was blocking imho is that
> they did not have completion support.  With the new fully scripted
> completion system from gogo-jline, gogo commands can have full completion,
> so I don't see any blocking points anymore.  It's just about tracking
> commands and registering them in the karaf shell.

I'm sorry, but I don't really understand what you're saying. You're talking 
about impediments to making changes to Karaf? Or how I go about writing 
commands? 
Sorry, just not following.

Fundamentally, should commands that I write using apache felix gogo command 
features such as the Parameter and Descriptor annotations, and the 
CommandSession interface work? Or if I want to do something other than a simple 
"hello world", do I need to work out how to use the karaf shell from within 
bndtools so that I can write commands using the Karaf command framework?

Thanks.
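
For comparison, my understanding is that the Karaf-native shape of such a command
is roughly the following (a sketch against the Karaf 4 command API; the package
and names are made up):

    package com.example.commands; // hypothetical package

    import org.apache.karaf.shell.api.action.Action;
    import org.apache.karaf.shell.api.action.Argument;
    import org.apache.karaf.shell.api.action.Command;
    import org.apache.karaf.shell.api.action.lifecycle.Service;

    // Picked up and registered by the Karaf shell when the bundle starts.
    @Command(scope = "example", name = "hello", description = "Says hello")
    @Service
    public class HelloCommand implements Action {

        // Optional positional argument with a default value.
        @Argument(index = 0, name = "name", description = "Who to greet", required = false)
        String name = "world";

        @Override
        public Object execute() throws Exception {
            System.out.println("Hello " + name);
            return null;
        }
    }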


Re: Writing commands for karaf shell.

2017-07-21 Thread tom
Yes, but what's the actual situation from a standards point of view?
Is a shell defined by a standard at all? OSGi enroute seems to require a gogo 
shell and appears to rely on felix gogo shell command framework.
Is it just that Karaf happens to ship a shell that happens to be based on the 
felix gogo shell (or perhaps not, but stack traces seem to suggest so), but 
that basically if I want to implement a shell command I have to implement it 
differently for each shell type?

That seems a poor situation and leaves me with having to implement one command 
implementation to be used in the development environment and one that is used 
in the karaf deployment.

Originally I thought that Karaf was the "enterprise version of felix". This 
doesn't seem to be the case?

There *could* be a really powerful environment and ecosystem here, if it was 
all a *little* bit less fragmented :-)

> On 21 July 2017 at 11:01 Jean-Baptiste Onofré  wrote:
> 
> 
> From a karaf perspective, the standard is to use karaf commands/annotations. 
> Gogo commands support is just for compatibility (as the features are 
> limited). When gogo commands will improve and provide the same the same 
> features, it could change.
> 
> Others may have different standpoint but there's mine ;)
> 
> Regards
> JB
> 
> On Jul 21, 2017, 11:54, at 11:54, t...@quarendon.net wrote:
> >There was a thread recently related to this, but I have a different
> >question.
> >
> >I'm confused about the shell situation, and what is "standard" and what
> >is not.
> >
> >I have naively written some commands for the felix gogo shell. We
> >develop using bndtools (obviously) and use the OSGI enRoute templates,
> >and that's all set up to use the apache felix gogo shell implementation
> >and related bundles.
> >The commands I have written make use of what I *thought* were standard
> >features such as org.apache.felix.service.command.Descriptor and
> >org.apache.felix.service.command.Parameter parameter annotations and
> >the org.apache.felix.service.command.CommandSession interface (though
> >the fact that they are in the felix namespace did confuse me). The
> >commands are registered using the osgi.enroute.debug.api.Debug
> >constants.
> >
> >If I try and run these inside Karaf it clearly understands that the
> >command is there, so the registration is working, it's just it
> >obviously doesn't understand the Parameter annotations and
>CommandSession interface, so can't call it ("cannot coerce ..."
> >error).
> >
> >From the examples, Karaf has its own, non standard way of writing
> >command extensions, it doesn't seem to use the
> >osgi.enroute.debug.api.Debug constants.
> >
> >So can someone clarify the situation?
> >Is *anything* actually standard at all? I can't find any reference in
> >the OSGi specs to a shell (but then I'm probably not looking for the
> >right thing), so perhaps not.
> >Does karaf use the apache felix gogo shell implementation (I thought it
> >did)?
> >If so, should it be able to understand things like the Parameter
> >annotation, and CommandSession?
> >
> >Or should this all work, but it's just that I'm missing some vital
> >bundle in my installation?
> >
> >Thanks.


Writing commands for karaf shell.

2017-07-21 Thread tom
There was a thread recently related to this, but I have a different question.

I'm confused about the shell situation, and what is "standard" and what is not.

I have naively written some commands for the felix gogo shell. We develop using 
bndtools (obviously) and use the OSGI enRoute templates, and that's all set up 
to use the apache felix gogo shell implementation and related bundles. 
The commands I have written make use of what I *thought* were standard features 
such as org.apache.felix.service.command.Descriptor and 
org.apache.felix.service.command.Parameter parameter annotations and the 
org.apache.felix.service.command.CommandSession interface (though the fact that 
they are in the felix namespace did confuse me). The commands are registered 
using the osgi.enroute.debug.api.Debug constants.

If I try and run these inside Karaf it clearly understands that the command is 
there, so the registration is working, it's just it obviously doesn't 
understand the Parameter annotations and CommandSession interface, so can't 
call it ("cannot coerce ..." error).

From the examples, Karaf has its own, non-standard way of writing command 
extensions; it doesn't seem to use the osgi.enroute.debug.api.Debug constants.

So can someone clarify the situation?
Is *anything* actually standard at all? I can't find any reference in the OSGi 
specs to a shell (but then I'm probably not looking for the right thing), so 
perhaps not. 
Does karaf use the apache felix gogo shell implementation (I thought it did)?
If so, should it be able to understand things like the Parameter annotation, 
and CommandSession?

Or should this all work, but it's just that I'm missing some vital bundle in my 
installation?

Thanks.


Using maven bundle plugin to compile and bundle code not in src/main/java

2017-06-22 Thread tom
Forgive my maven ignorance, but I'd like to retrofit some source with a maven 
build that uses the felix maven bundle plugin; it's just that my source isn't in 
src/main/java.
Can I do that?

If so, can someone provide a simple example?
It's not clear to me how the source even gets compiled. I've used the 
karaf-bundle-archetype maven archetype, but nowhere is there anything that says 
"go compile this code", so I have no handle on what to modify. I'm vaguely 
guessing I need another "plugin" section, but I'm shooting wildly in the dark 
there.

Thanks.
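
My current guess is that it needs something like this in the pom (a sketch; the
source path is just an example):

    <build>
        <sourceDirectory>${project.basedir}/src</sourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
            </plugin>
        </plugins>
    </build>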


Re: Karaf maven plugin/wrap/slf4j problem

2017-06-14 Thread tom
> I propose to share the code and chat directly (on hangout/skype/private 
> e-mail).
Any help you can give me would be greatly appreciated.
I have many issues that I'm trying to resolve, as you can tell.


Re: Why can I not satisfy "(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"

2017-06-14 Thread tom
Oddly I can no longer reproduce the issue.

I thought it might have gone away because I'd added:
   <feature>scr</feature>
into my feature in the features.xml file. However, if I take that out again it 
still works. Odd.

If I encounter it again, I'll reduce to a simple test case.

> On 14 June 2017 at 09:45 Jean-Baptiste Onofré  wrote:
> 
> 
> That would be great.
> 
> In the mean time, I'm testing with the karaf-samples I prepared for the dev 
> guide update.
> 
> Regards
> JB
> 
> On 06/14/2017 10:24 AM, t...@quarendon.net wrote:
> >> Can you share your sample project ?
> > 
> > I will try to put together a simple standalone test case. In fact I'll do 
> > that for something else as well that I can't get past.
> > 
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Karaf maven plugin/wrap/slf4j problem

2017-06-14 Thread tom
I have put together a simple example of the problem I've been encountering 
while attempting to create a custom karaf distribution.

If you attempt to include a bundle such as 
org.apache.felix:org.apache.felix.http.servlet-api in a feature, you get this 
build error:

missing requirement [org.ops4j.pax.url.wrap/2.5.2] osgi.wiring.package; 
filter:="(&(osgi.wiring.package=org.slf4j)(version>=1.6.0)(!(version>=2.0.0)))"

org.apache.felix.http.servlet-api has a "compile" dependency on 
org.apache.tomcat/tomcat-servlet-api/8.0.9 and this is interpreted by the karaf 
maven plugin as a dependency. Whether it should do that or not I don't know. It 
doesn't seem like it should, but that's not the issue. Having made that 
interpretation, it then adds a dependency on 
wrap:mvn:org.apache.tomcat/tomcat-servlet-api/8.0.9, since it isn't a proper 
OSGi bundle. The build then fails with the above error.

I don't understand how to resolve this issue. 

If I remove this and build karaf, in the result, I can see that wrap starts, 
and satisfies the requirement from the pax-logging bundle:
  Imported Packages
  org.slf4j,version=1.7.13 from org.ops4j.pax.logging.pax-logging-api (6)
  org.slf4j,version=1.7.7 from org.ops4j.pax.logging.pax-logging-api (6)
  org.slf4j,version=1.7.1 from org.ops4j.pax.logging.pax-logging-api (6)
  org.slf4j,version=1.6.6 from org.ops4j.pax.logging.pax-logging-api (6)
  org.slf4j,version=1.5.11 from org.ops4j.pax.logging.pax-logging-api (6)
  org.slf4j,version=1.4.3 from org.ops4j.pax.logging.pax-logging-api (6)

So I don't get why it can't be resolved during the build. Is there some 
dependency I need to add to my feature in the features.xml file? It doesn't 
feel like I should, as my feature doesn't really depend on "wrap", and I 
shouldn't be concerned with what it itself then depends on?

It feels like the dependency on tomcat-servlet-api is being added in error, but 
I could live with that if I could get the result to compile.

See code at https://github.com/tomq42/karaf-tests-1

Thanks.
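
For reference, the shape of the feature involved is essentially this (a sketch;
the servlet-api version here is illustrative):

    <features name="test-features" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
        <feature name="test-feature" version="1.0.0">
            <bundle>mvn:org.apache.felix/org.apache.felix.http.servlet-api/1.1.2</bundle>
        </feature>
    </features>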


Re: Why can I not satisfy "(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"

2017-06-14 Thread tom
> Can you share your sample project ?

I will try to put together a simple standalone test case. In fact I'll do that 
for something else as well that I can't get past.


Why can I not satisfy "(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"

2017-06-14 Thread tom
I'm trying to build a custom karaf distribution using the maven karaf-assembly 
packaging type.

My latest issue is that the build fails with 

missing requirement osgi.extender; 
filter:="(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"

I interpret this as meaning the bundle uses DS and therefore I need 
apache.felix.scr.

If I don't add my bundle in, the build works, and when I start up the resulting 
karaf, feature:list shows that scr is installed and started. "bundle:headers 
mvn:org.apache.felix/org.apache.felix.scr/2.0.10" shows:

Provide-Capability =

osgi.extender;uses:=org.osgi.service.component;osgi.extender=osgi.component;version:Version=1.3

So I appear to have that capability don't I? 

So why does the build apparently fail in that way?

My POM has:

    <dependencies>
        <dependency>
            <groupId>org.apache.karaf.features</groupId>
            <artifactId>framework</artifactId>
            <version>${karafVersion}</version>
            <type>kar</type>
        </dependency>
        <dependency>
            <groupId>org.apache.karaf.features</groupId>
            <artifactId>standard</artifactId>
            <version>${karafVersion}</version>
            <classifier>features</classifier>
            <type>xml</type>
            <scope>compile</scope>
        </dependency>
    </dependencies>

and then:

    <plugin>
        <groupId>org.apache.karaf.tooling</groupId>
        <artifactId>karaf-maven-plugin</artifactId>
        <version>4.1.1</version>
        <extensions>true</extensions>
        <configuration>
            <installedFeatures>
                <feature>wrapper</feature>
            </installedFeatures>
            <startupFeatures>
                <feature>eventadmin</feature>
            </startupFeatures>
            <bootFeatures>
                <feature>standard</feature>
                <feature>webconsole</feature>
                <feature>http-whiteboard</feature>
                <feature>scr</feature>
                <feature>prereqs</feature>
            </bootFeatures>
            <javase>1.8</javase>
        </configuration>
    </plugin>


Re: org.ops4j.pax.url.wrap/2.5.2 missing org.slf4j package?

2017-06-13 Thread tom
> the easiest would be to actually remove that "new" requirement for those
> components.
> A fix for this is on it's way for Pax Web, so you'll have something that'll
> work for you.

Sorry Achim, you'll need to spell that out. What is it that you'd fix? Create a 
pax-web bundle that contains and exports the javax.servlet packages?

Thanks in advance of the fix though.


org.ops4j.pax.url.wrap/2.5.2 missing org.slf4j package?

2017-06-13 Thread tom

I'm trying to build a custom karaf distribution with the "karaf-assembly" maven 
packaging. 
I am making slow progress :-)

My latest issue comes about through trying to resolve the lack of a 
javax.servlet package requirement. 
So I naively included the org.apache.felix:org.apache.felix.http.servlet-api 
bundle as a dependency of the feature I'm including in my assembly. This is 
what we normally use in development in bndtools, but we normally use apache 
felix HTTP rather than pax-web generally.

As soon as I do that I get an odd build failure:
Unable to resolve org.ops4j.pax.url.wrap/2.5.2: missing requirement 
[org.ops4j.pax.url.wrap/2.5.2] osgi.wiring.package; 
filter:="(&(osgi.wiring.package=org.slf4j)(version>=1.6.0)(!(version>=2.0.0)))"]

If I remove org.apache.felix.http.servlet-api it builds, and I can run the 
resulting karaf. My karaf shell fu isn't that great, but feature:list shows me 
that "wrap" is "started", and
  bundle:info mvn:org.ops4j.pax.url/pax-url-wrap/2.5.2/jar/uber
shows me that it imports the org.slf4j package. I don't know how to find out 
where it resolves that from within the shell, but the web console shows me:

Imported Packages
org.slf4j,version=1.7.25 from org.ops4j.pax.logging.pax-logging-api (5)
org.slf4j,version=1.6.6 from org.ops4j.pax.logging.pax-logging-api (5)
org.slf4j,version=1.5.11 from org.ops4j.pax.logging.pax-logging-api (5)
org.slf4j,version=1.4.3 from org.ops4j.pax.logging.pax-logging-api (5)

So I don't understand why I'm suddenly getting a resolution failure for slf4j 
from pax.url.wrap.

I also don't understand why including 
org.apache.felix:org.apache.felix.http.servlet-api suddenly causes this, 
nothing in the manifest for it would appear to indicate a need for it. All it 
is is a simple bundle that contains and exports the javax.servlet api.

To sidestep the question, how *should* I resolve the requirement for the 
javax.servlet package? There doesn't seem to be a pax-web bundle that I can 
find that provides the javax.servlet package.

Thanks (again).


Re: Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1

2017-06-13 Thread tom
> On 13 June 2017 at 16:50 Jean-Baptiste Onofré  wrote:
> 
> 
> Yeah, you have to add the snapshot repository in your pom.xml:
OK, yes, that's better.
Thanks.


Now all I need to do is work out why I can't apparently satisfy 
   (&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"
despite Karaf including SCR. 

Hmm. Karaf seems to have SCR 2.0.10. I don't know why I need <2, it's just what 
bndtools builds in as a requirement. Ah, confusion between API version and 
software version. 
from Karaf console, bundle:headers on felix.scr:
   
osgi.extender;uses:=org.osgi.service.component;osgi.extender=osgi.component;version:Version=1.3
which is presumably what the requirement is referencing, so that ought to be 
satisfied.

So no idea why that requirement can't be satisfied.
Anyway, that's for tomorrow.

Thanks.


Re: Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1

2017-06-13 Thread tom
Windows, yes.

> On 13 June 2017 at 15:51 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> 
> 
> By the way, are you on Windows ?
> 
> Regards
> JB
> 
> On 06/13/2017 04:44 PM, t...@quarendon.net wrote:
> > 
> >> On 13 June 2017 at 15:02 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> >>
> >>
> >> Hi Tom
> >>
> >> It has been fixed and will be included in 4.1.2.
> >>
> > Ah, good. 4.1 seems unusable as far as I can see otherwise.
> > 
> > I've been looking around, but I can't find where I might get a snapshot 
> > build from, what repository I would need to point at?
> > 
> > Sorry for all the questions. I have more!
> > Thanks.
> > 
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Re: Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1

2017-06-13 Thread tom

> On 13 June 2017 at 15:50 Jean-Baptiste Onofré  wrote:
> 
> 
> Do you have a chance to test with 4.1.2-SNAPSHOT ?

That's what I'm wanting to do; I just can't work out where to get it. I naively 
put 
4.1.2-SNAPSHOT
as my karaf version in the pom, but it doesn't find it, as I presumably need to 
point at a snapshot repository somewhere. That was really my question: what URL 
should I use for the snapshot repository? 

Thanks.


Re: Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1

2017-06-13 Thread tom

> On 13 June 2017 at 15:02 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> 
> 
> Hi Tom
> 
> It has been fixed and will be included in 4.1.2.
> 
Ah, good. 4.1 seems unusable as far as I can see otherwise.

I've been looking around, but I can't find where I might get a snapshot build 
from, what repository I would need to point at?

Sorry for all the questions. I have more!
Thanks.


Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1

2017-06-13 Thread tom
Start up Karaf with the "bin/karaf.bat" shell script.

At the console type
help bundle:info
You get:
gogo: NullPointerException: "in" is null!

If I run this from the official 4.1.1 install, it looks like this is trying to 
"more" the help contents or something. I get a colon, and if you press q it 
goes back to the prompt. You get no help output though. If I do the same on 
4.0.6, I get paginated help output, so something has changed there.

Run this from a "custom assembly" consisting of the "standard" feature, and I 
get:


2017-06-13T14:33:11,173 | ERROR | Karaf local console user karaf | ShellUtil
| 55 - org.apache.karaf.shell.core - 4.1.1 | Exception 
caught while executing command
java.lang.NumberFormatException: For input string: "43B"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) 
~[?:?]
at java.lang.Integer.parseInt(Integer.java:580) [?:?]
at java.lang.Integer.<init>(Integer.java:867) [?:?]
at 
org.fusesource.jansi.AnsiOutputStream.write(AnsiOutputStream.java:122) 
[86:org.fusesource.jansi:1.14.0]
at java.io.FilterOutputStream.write(FilterOutputStream.java:125) [?:?]
at 
java.nio.channels.Channels$WritableByteChannelImpl.write(Channels.java:458) 
[?:?]
at org.apache.felix.gogo.runtime.Pipe$MultiChannel.write(Pipe.java:644) 
[55:org.apache.karaf.shell.core:4.1.1]
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) [?:?]
at java.nio.channels.Channels.writeFully(Channels.java:101) [?:?]
at java.nio.channels.Channels.access$000(Channels.java:61) [?:?]
at java.nio.channels.Channels$1.write(Channels.java:174) [?:?]
at java.io.PrintStream.write(PrintStream.java:480) [?:?]
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) [?:?]
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291) 
[?:?]
at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104) [?:?]
at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185) 
[?:?]
at java.io.PrintStream.write(PrintStream.java:527) [?:?]
at java.io.PrintStream.print(PrintStream.java:669) [?:?]
at java.io.PrintStream.println(PrintStream.java:806) [?:?]
at org.apache.felix.gogo.jline.Posix._main(Posix.java:128) 
[55:org.apache.karaf.shell.core:4.1.1]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:?]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:497) ~[?:?]
at org.apache.felix.gogo.runtime.Reflective.invoke(Reflective.java:136) 
[55:org.apache.karaf.shell.core:4.1.1]
at 
org.apache.karaf.shell.impl.console.SessionFactoryImpl$ShellCommand.lambda$wrap$0(SessionFactoryImpl.java:195)
 [55:org.apache.karaf.shell.core:4.1.1]
at 
org.apache.karaf.shell.impl.console.SessionFactoryImpl$ShellCommand$$Lambda$37/1313854807.execute(Unknown
 Source) [55:org.apache.karaf.shell.core:4.1.1]
at 
org.apache.karaf.shell.impl.console.SessionFactoryImpl$ShellCommand.execute(SessionFactoryImpl.java:241)
 [55:org.apache.karaf.shell.core:4.1.1]
at 
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:68)
 [55:org.apache.karaf.shell.core:4.1.1]
at 
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:86)
 [55:org.apache.karaf.shell.core:4.1.1]
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:560) 
[55:org.apache.karaf.shell.core:4.1.1]
at 
org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:486) 
[55:org.apache.karaf.shell.core:4.1.1]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:375) 
[55:org.apache.karaf.shell.core:4.1.1]
at org.apache.felix.gogo.runtime.Pipe.doCall(Pipe.java:417) 
[55:org.apache.karaf.shell.core:4.1.1]
at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:229) 
[55:org.apache.karaf.shell.core:4.1.1]
at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59) 
[55:org.apache.karaf.shell.core:4.1.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
2017-06-13T14:33:11,177 | ERROR | Karaf local console user karaf | ShellUtil
| 55 - org.apache.karaf.shell.core - 4.1.1 | Exception 
caught while executing command
java.lang.NumberFormatException: For input string: "43BF"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) 
~[?:?]

Re: Exception starting karaf custom distro "minimal example" from docs

2017-06-13 Thread tom
> On 13 June 2017 at 12:32 Jean-Baptiste Onofré  wrote:
> 
> 
> I don't understand why you have the pax-web feature.
> 
> Do you have it defined in bootFeatures ? Or do you install it by hand ?
> 
> I can confirm that I don't have pax-web feature on my custom distro.

So I created a completely fresh directory, deleted the contents of my .m2 maven 
repository cache, created a pom.xml file with your example contents in, and ran 
"mvn clean install".

I ran target/assembly/bin/karaf.bat.
feature:list then lists pax-http as "started".

Interestingly though I have no exceptions in the log file.

Well that's interesting. 
Worryingly, deleting my .m2 directory contents seems to have altered the 
behaviour. Going back to my original example, it now works closer to how I 
would expect. 
I admit my knowledge of Maven is low, but it's a cache isn't it? Re-downloading 
the dependencies ought not to have changed anything? That worries me.

The good news is that this seems to have fixed things. I no longer have odd 
exceptions in the log.

Confidence in Karaf has increased, confidence in maven lowered!
Thanks for the help.


Re: Exception starting karaf custom distro "minimal example" from docs

2017-06-13 Thread tom

> On 13 June 2017 at 09:08 Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> 
> 
> Hi Tom,
> 
> Here's a pom.xml to do what you want:

As far as I can see that's the same as the "minimal example" in the docs. 
If I use your example, I still get errors:

2017-06-13T09:46:01,692 | ERROR | FelixDispatchQueue | 
pax-web-extender-whiteboard  | 118 - 
org.ops4j.pax.web.pax-web-extender-whiteboard - 6.0.3 | FrameworkEvent ERROR - 
org.ops4j.pax.web.pax-web-extender-whiteboard
org.osgi.framework.BundleException: Activator start error in bundle 
org.ops4j.pax.web.pax-web-extender-whiteboard [118].
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2288) 
[?:?]
at org.apache.felix.framework.Felix.startBundle(Felix.java:2144) [?:?]
at 
org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1371) [?:?]
at 
org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308)
 [?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
Caused by: java.lang.ClassCastException: 
org.apache.felix.httplite.osgi.HttpServiceImpl cannot be cast to 
org.osgi.service.http.HttpService

and
2017-06-13T09:50:23,076 | ERROR | paxweb-config-2-thread-1 | Felix  
  |  -  -  | Bundle org.apache.felix.inventory [36] 
EventDispatcher: Error during dispatch. (java.lang.ClassCastException: 
org.apache.felix.httplite.osgi.HttpServiceImpl cannot be cast to 
org.osgi.service.http.HttpService)

and 

2017-06-13T09:50:23,076 | ERROR | FelixDispatchQueue | inventory
| 36 - org.apache.felix.inventory - 1.0.4 | FrameworkEvent ERROR - 
org.apache.felix.inventory
java.lang.ClassCastException: org.apache.felix.httplite.osgi.HttpServiceImpl 
cannot be cast to org.osgi.service.http.HttpService

etc.

The "javax.servlet two chains" issue has gone. However, it comes back if I 
specify "enterprise" instead of "standard" in my "bootFeatures" (while 
investigating I found an "org.apache.karaf.features.cfg" file that specified 
some things I had inherited from another example -- I've deleted that now).


This clearly indicates to be that there are just some incompatible bundles 
there. org.ops4j.pax.web.pax-web-extender-whiteboard is expecting 
org.apache.felix.httplite.osgi.HttpServiceImpl to be a 
org.osgi.service.http.HttpService when it's not, or at least not the same 
org.osgi.service.http.HttpService class.


Closing Karaf down and then starting it up again though, it still just hangs 
with no output of any kind, no logging.


Re: missing requirement osgi.contract=JavaServlet

2017-06-12 Thread tom
OK, I've "solved" this by creating an additional bundle that simply has the 
required:

Provide-Capability: osgi.contract;osgi.contract=JavaServlet;version:Vers
 ion="3.1";uses:="javax.servlet,javax.servlet.http,javax.servlet.descrip
 tor,javax.servlet.annotation"

line in the MANIFEST.

I say "solved"; the karaf assembly now at least builds. I have yet to 
determine how successfully it actually runs.

However, this seems like a gross hack to me. Shouldn't the pax-web http-api 
bundle provide this capability?

Thanks.


RE: missing requirement osgi.contract=JavaServlet

2017-06-12 Thread tom
With regard to the "wrap/0.0.0" error, running Maven with -X gives me:

Caused by: 
org.apache.karaf.features.internal.service.Deployer$CircularPrerequisiteException:
 [wrap/0.0.0]
at 
org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:266)
at org.apache.karaf.profile.assembly.Builder.resolve(Builder.java:1429)
at 
org.apache.karaf.profile.assembly.Builder.startupStage(Builder.java:1183)
at 
org.apache.karaf.profile.assembly.Builder.doGenerateAssembly(Builder.java:659)
at 
org.apache.karaf.profile.assembly.Builder.generateAssembly(Builder.java:441)
at 
org.apache.karaf.tooling.AssemblyMojo.doExecute(AssemblyMojo.java:506)
at org.apache.karaf.tooling.AssemblyMojo.execute(AssemblyMojo.java:262)
... 22 more


Suggesting a circular dependency issue somewhere, though quite where, who 
knows. There are one or two references to "pax-url-wrap" in the -X output, but 
that's all there is that mentions "wrap" at any point. 

Don't know whether that helps?


missing requirement osgi.contract=JavaServlet

2017-06-12 Thread tom
I'm trying to use the Karaf maven plugin to build a custom Karaf distribution 
(so "karaf-assembly" packaging type)

I'm stuck on the following error though:

Failed to execute goal 
org.apache.karaf.tooling:karaf-maven-plugin:4.1.1:assembly (default-assembly) 
on project karaf-distro: Unable to build assembly: Unable to resolve root: 
missing requirement [root] osgi.identity; osgi.identity=core-features; 
type=karaf.feature; version=1.0.0.SNAPSHOT; 
filter:="(&(osgi.identity=core-features)(type=karaf.feature)(version>=1.0.0.SNAPSHOT))"
 [caused by: Unable to resolve core-features/1.0.0.SNAPSHOT: missing 
requirement [core-features/1.0.0.SNAPSHOT] osgi.identity; 
osgi.identity=mybundle; type=osgi.bundle; 
version="[1.0.0.201706120848,1.0.0.201706120848]"; resolution:=mandatory 
[caused by: Unable to resolve mybundle/1.0.0.201706120848: missing requirement 
[mybundle/1.0.0.201706120848] osgi.contract; osgi.contract=JavaServlet; 
filter:="(&(osgi.contract=JavaServlet)(version=3.1.0))"]]



I have the org.apache.karaf.features:enterprise feature as a dependency (along 
with framework, standard, spring), so I believe that it should have the 3.1.0 
servlet API, so I don't think it's an issue of requiring 3.1.0 when something 
earlier is installed.

In development we use felix HTTP (we use bndtools, and that's just what it 
uses), and the requirement is satisfied by the Apache Felix Servlet API bundle. 
But Karaf uses pax-web instead, and nothing seems to provide that capability as 
far as I can tell (looking at output of bundle:headers in the console).

Naively adding a maven dependency on org.apache.felix.http.servlet-api gives me 
a different, earlier, error:

Unable to build assembly: [wrap/0.0.0]

My bundle is built using bndtools; I'm afraid I don't know at the moment how 
that manifest requirement comes about, as I haven't managed to follow the whole 
chain through yet.

So my question is, within Karaf, where do I get this dependency satisfied? 
Can I easily just substitute felix HTTP in place of pax-web on the basis that 
that's what we use in production? If so how? Naively adding felix HTTP as 
dependencies in my pom.xml just gives this "wrap/0.0.0" error, which means 
nothing to me.

Thanks.


Setting config properties with JMX bean happens asynchronously?

2017-01-09 Thread tom
For test purposes we have a small java program that automatically deploys our
built bundles into a Karaf container as part of the build and test process.
However, the deployment process is unreliable.

Basically what we do is connect to the main Karaf container and create a new
instance. Then connect to that instance, and set the mvn url search repositories
to the repository into which we deploy the build artifacts and then do a feature
install. We do this all through the JMX interface.

The feature install often fails though, claiming:

Error resolving artifact : Could not transfer artifact 
from/to apache (http://repository.apache.org/content/groups/snapshots-group/)

What's odd about this message is that we've just changed the mvn url search path
(org.ops4j.pax.url.mvn.repositories property in the org.ops4j.pax.url.mvn
configuration pid) so that it doesn't even include that repository. So why is it
looking there?

Looking at the implementation of the config JMX bean it just calls the OSGi
method Configuration.update. The actual config update though happens
asynchronously on another thread (so says the documentation for
Configuration.update).
It is possible to install a configuration change listener in OSGi that can
listen for completion of a configuration change to a certain pid, though I don't
know how you tell it was YOUR change, rather than a change that someone else has
made, but it's possible that would improve the situation.

This makes life very awkward, and you would appear to have to rely on a fragile
"sleep" in the client to give the configuration changes time to propagate before
you continue, or having made the config changes, stop the instance and restart
it again (I *think* that would work, I *think* that the configuration changes
are persisted by Karaf synchronously).

Other than making the config changes in the parent instance so that they are
inherited by the child instance, I'm not aware of any other way to make the
configuration changes in this kind of scenario.

Is this a known issue? Or is this as designed?

Thanks.
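
For completeness, the listener approach mentioned above would look something like
this inside an OSGi bundle (a sketch; note it still cannot tell whose update it
saw, which is the core problem):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    import org.osgi.service.cm.ConfigurationEvent;
    import org.osgi.service.cm.ConfigurationListener;

    // Register via bundleContext.registerService(ConfigurationListener.class, waiter, null)
    public class PidUpdateWaiter implements ConfigurationListener {

        private final String pid;
        private final CountDownLatch updated = new CountDownLatch(1);

        public PidUpdateWaiter(String pid) {
            this.pid = pid;
        }

        @Override
        public void configurationEvent(ConfigurationEvent event) {
            // Fires for *any* update to the pid, not necessarily the one we made.
            if (event.getType() == ConfigurationEvent.CM_UPDATED
                    && pid.equals(event.getPid())) {
                updated.countDown();
            }
        }

        // Block until some update to the pid has been processed, or time out.
        public boolean awaitUpdate(long timeout, TimeUnit unit) throws InterruptedException {
            return updated.await(timeout, unit);
        }
    }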


Create and manage an instance using Jolokia

2016-09-21 Thread tom
How do I create and then manage a new instance using jolokia?

I can execute the commands on my root instance to create and start a new
instance fine. On create I specify that I want "jolokia" as a feature.

Fundamentally, in order to manage the second instance I have to connect directly
to it don't I? This is fine with the RMI, as I get the opportunity to set the
ports on instance creation. Despite the instance name being in the Mbean object
names (the "name=[instance]" bit), I can't connect to the root instance and
control the second instance simply by setting "name=second" (doesn't work
anyway).

So I'm going to need to set the HTTP port on the second instance to something
other than default, otherwise I won't be able to connect, the second instance
will attempt to bind to 8181, and then fail.

I seem to have two ways I might be able to influence the new instance. 
The first is that I have the opportunity to pass in startup options when the
instance is started. Can I influence the port that way? These are presumably
java system options, but I'm not aware you can set OSGi configuration from
system options can you? Or indeed that the HTTP service will pick up a specific
system option?

The second opportunity seems to be that on creating the instance, I can pass in
feature URLs and features to install. Presumably I could pass in the URL of a
feature XML file that contains a feature that just has configuration settings in
it. But in order to do this, I have to have created that feature XML file and
have it in a location that is addressable by a URL from the machine on which
Karaf is running; I can't just pass the information in easily.

Are there any other ways of achieving this that I've missed?

Thanks for any input.
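
To illustrate the second option, the feature file I'd have to host somewhere
URL-addressable would be something like this (a sketch; the PID is pax-web's and
the port value is arbitrary):

    <features name="second-instance-config" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
        <feature name="alt-http-port" version="1.0.0">
            <config name="org.ops4j.pax.web">
                org.osgi.service.http.port = 8282
            </config>
        </feature>
    </features>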


Re: Deploying an application from jenkins for test

2016-09-21 Thread tom
Achim, thanks.

I'm trying to use mvn, as it does seem the best option. If I can get it to do
what I want though.
What I'm trying to work out is how to ensure that I get exactly the bundle
that's just been published, and nothing gets cached.

I'm publishing SNAPSHOT builds to our artifactory repository.
With the default configuration, I don't get the latest snapshot. So I:

Install the feature.
Update the code.
Rebuild.
Publish.
Uninstall, then reinstall the feature.

I get the same bundle again.
Now I don't know anything about how maven works, but there seems to be an "update
policy" that by default is "daily". The implication of this is that if I ran
this cycle over two days I might get what I want.

The PAX URL code seems to have a "globalUpdatePolicy" configuration. I set that
to "always", and now I get a complaint from aether about not being able to
resolve the artifact:

Could not find artifact
simple-osgi:simle.osgi.command:jar:1.0.0-20160921.072458-1 in
artifactory-snapshot(repository URL)
at
shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444)[7:org.ops4j.pax.url.mvn:2.4.7]
at
shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)[7:org.ops4j.pax.url.mvn:2.4.7]
at
shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223)[7:org.ops4j.pax.url.mvn:2.4.7]
at
shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294)[7:org.ops4j.pax.url.mvn:2.4.7]
at
org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:650)[7:org.ops4j.pax.url.mvn:2.4.7]
at
org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:598)[7:org.ops4j.pax.url.mvn:2.4.7]
at
org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:576)[7:org.ops4j.pax.url.mvn:2.4.7]
at
org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:550)[7:org.ops4j.pax.url.mvn:2.4.7]
at
org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:34)[8:org.apache.karaf.features.core:4.0.6]



The one in the artifactory is now 073344 instead.

So I don't understand what's going on here. Setting "globalUpdatePolicy" seems
to have said "ignore the cached artefacts and go get them again from the
remote", but what it doesn't seem to have said is "check for the latest SNAPSHOT
version", so it's still trying to download the version that it worked out last
time. 


As I say, my knowledge of Maven is minimal at best.
There must be a way of doing this though, isn't there? I mean, convince it
somehow to just install whatever is latest in our artifactory repository? I get
the caching thing when you're dealing with release versions, but in this
environment, snapshots ought to not be cached, and attempting to get version
"1.0.0-SNAPSHOT" ought to just go and figure out what the latest version is and
download it (you can cache once you've resolved the SNAPSHOT into its actual
unique version, but not before).

Any ideas?

Thanks.
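
For reference, the bits I've been fiddling with live in etc/org.ops4j.pax.url.mvn.cfg
and look roughly like this (a sketch; the repository URL is a placeholder):

    # etc/org.ops4j.pax.url.mvn.cfg (sketch)
    org.ops4j.pax.url.mvn.globalUpdatePolicy = always
    org.ops4j.pax.url.mvn.repositories = \
        https://artifactory.example.com/libs-snapshot@id=artifactory-snapshots@snapshots@noreleases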


Resolution problem with osgi.enroute.dto.api

2016-09-20 Thread tom
I'm tracking down a rather odd problem trying to deploy a bundle into Karaf. The
issue appears to be with the osgi.enroute.dto.api package.

I'm getting this resolution error from Karaf:

missing requirement: osgi.wiring.package;
filter:="(&(osgi.wiring.package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=2.0.0)))"

So I don't have anything that provides the osgi.enroute.dto.api package.

I have the Karaf obr feature installed, and it's set up to point to 
https://raw.githubusercontent.com/osgi/osgi.enroute/v1.0.0/cnf/distro/index.xml

So I *think* it ought to just find whatever it needs by just looking in that
OBR. Nice.

Looking at the runbundles list in the bndrun file I have, it shows
"osgi.enroute.dto.bndlib.provider". At a guess this is where bndtools has
resolved that capability. Indeed, if I look at the MANIFEST for that jar, I see

Export-Package: osgi.enroute.dto.api;version="1.0.0"

If I use Karaf to list the information about that JAR though, it shows:
Requires:package:(&(package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=1.1.0)))
Karaf is just showing the information in the
https://raw.githubusercontent.com/osgi/osgi.enroute/v1.0.0/cnf/distro/index.xml
file here.

So, on the one hand, the osgi.enroute.dto.bndlib.provider.jar file says it
exports the package, but on the other hand the OBR index says that it has a
requirement on that package.

On the face of it, this seems like a contradiction. Indeed if I install that
bundle into Karaf, it complains of:
Unsatisfied Requirements:
[osgi.enroute.dto.bndlib.provider [60](R 60.0)] osgi.wiring.package;
(&(osgi.wiring.package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=1.1.0)))

Equally if I create a very simple bundle that just has:
   @Reference private osgi.enroute.dto.api.DTOs dtos;

I don't seem to be able to deploy it within Karaf, bundle:diag shows:

Unsatisfied Requirements:
[simple.dtousage [63](R 63.0)] osgi.wiring.package;
(&(osgi.wiring.package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=2.0.0)))
[simple.dtousage [63](R 63.0)] osgi.service;
(objectClass=osgi.enroute.dto.api.DTOs)


Is this an issue with the bundle? Or the bnd index? Or with karaf? 
Or am I misinterpreting? 

Thanks.


Re: Creating features.xml files

2016-09-20 Thread tom

> On 20 September 2016 at 12:52 Benson Margulies  wrote:
> 
> 
> I build all my features with the karaf-maven-plugin.
> 

I don't use Maven; I use eclipse and bndtools, hence gradle as my build
environment. Since I didn't have to do anything at all to get the gradle command
line build set up (it was just generated for me), I'm reluctant to start manually
setting up a maven build environment. Ideally I want to just generate a feature
xml file out of the bndtools environment somehow.


How do I resolve resolution problems in Karaf

2016-09-20 Thread tom
I'm really struggling to get my bundles installed in Karaf, so I'd appreciate
some hints on how to diagnose some issues. I'm trying to do a feature:install of
a features.xml file I've written to install my bundles. 
My latest is:

missing requirement osgi.wiring.package;
filter:="(&(osgi.wiring.package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=2.0.0)))"
[caused by: Unable to resolve osgi.enroute.base.api [62](R 62.0): missing
requirement [osgi.enroute.base.api [62](R 62.0)] osgi.unresolvable;
(&(must.not.resolve=*)(!(must.not.resolve=*)))]]]

My interpretation of this is that I've got conflicting versions of something. I
have no idea what, nor how to figure out what the cause is.

Up to now I've always just been using bndtools in eclipse (and the bundles I'm
installing all work fine there), my first experience of Karaf was yesterday, so
beyond what I've read in the docs, I know nothing about what useful commands
there might be to help me diagnose. I don't even know how I would list what I've
currently got installed that might satisfy osgi.enroute.dto.api or
osgi.enroute.base.api.

Any hints would be much appreciated.
This seems to be extraordinarily more complicated than "resolve" in bndtools, or
am I being naive?

Thanks.
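
The sort of thing I'm after is presumably along these lines (guessing at the
Karaf 4.x commands; the package name is the one from the error above):

    karaf@root()> bundle:list                      # what is installed
    karaf@root()> bundle:diag                      # why bundles are unresolved
    karaf@root()> bundle:capabilities <id>         # what a given bundle provides
    karaf@root()> bundle:requirements <id>         # what a given bundle requires
    karaf@root()> package:exports | grep osgi.enroute.dto.api   # who exports a package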


OSGi Log Service?

2016-09-20 Thread tom
According to the documentation, the OSGi Log service is supported.

I'm trying to install a bundle that uses it, and I get:

missing requirement [bundleid/1.0.0.201609201024] osgi.service;
filter:="(objectClass=org.osgi.service.log.LogService)"; effective:=active]]

My use is with DS, so:

@Reference
private LogService log;


To me that says that there isn't an implementation of the LogService object that
it can find to resolve. 

Am I misinterpreting?

(Karaf 4.0.6).

Thanks.


Re: Deploying an application from jenkins for test

2016-09-20 Thread tom
> you can use Pax Exam[1] for that and start Karaf embedded, this will give
> you a clean state for every run.

Initially I want to do it for testing, but I then want to deploy a "snapshot"
and "release" version of the actual application for general internal demo use. I
currently already have an integration test suite, but what it doesn't test is
the deployment into karaf, so I want to deploy it for testing in a way that is
as close to the real thing as possible.

> The other way would be to have a Karaf with Jolokia running and deploy via
> JMX, I once created a sample for that [2].

As I say, my question was really one of what the transport is for the bundles.
One way or the other I invoke the equivalent of "feature:add-repo", then
"feature:install"; my question is what the URLs are, where karaf actually
pulls stuff from. To use mvn repositories I need to find a way of turning off the
caching, which so far I've failed to do. Pushing the files on to the remote
machine using SCP and then addressing them with "file:///" URLs seems another
way, but it seems over complicated to push them. Addressing the artifacts in
jenkins directly is another way, but it seems like I'm going to have to write
something bespoke to create a feature repository with the correct URLs in from
some kind of template (substituting the correct build number in all of the
URLs).

Just hoping there might be a known best way to do this kind of thing, save me
figuring it out!

Thanks.


Creating features.xml files

2016-09-20 Thread tom
Up until now I've been developing code using bndtools in eclipse, writing bndrun
files, resolving them, and running my application that way. The resolution step
resolves all the requirements from the set of OBR repositories. All very easy
(well, it is now, once I got over the learning curve :-) )

I'm now trying to deploy those same applications into karaf, and so wanting to
write feature repository XML files to do the same thing. So naively I seem to want
to create a feature XML file that is the same as my resolved bndrun file. Is
there an easy way to do this? Is there an easy way to write feature repository
files, or do I have to maintain those separately? It also feels that the way I
have to write the files depends on how I'm deploying. So if I want to deploy
from a maven repository, my bundle references need to be in one form, but if I'm
deploying directly from disk, my bundle references have to be in another form. 

Is there an easy way to manage the feature xml files?

Thanks.
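
To illustrate what I mean by the two forms (a sketch; the names and paths are
made up):

    <feature name="my-app" version="1.0.0">
        <!-- deploying from a maven repository -->
        <bundle>mvn:com.example/my.bundle/1.0.0</bundle>
        <!-- deploying directly from disk -->
        <bundle>file:///opt/bundles/my.bundle-1.0.0.jar</bundle>
    </feature>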


Re: Karaf, gogo shell?

2016-09-20 Thread tom
I started using Karaf yesterday. Version 4.0.6.

Not aware of shell-compat. What does it do? I can't see it mentioned in any
documentation. 

Naively tried installing it. Doesn't seem to change my ability to run a gogo
shell command that I can see?


> On 19 September 2016 at 17:08 Benson Margulies  wrote:
> 
> 
> What version of karaf?
> 
> Did you install the shell-compat feature, which is required for gogo commands?
> 
> 
> On Mon, Sep 19, 2016 at 12:06 PM,   wrote:
> > I'm trying to get started with Karaf, and am having a few issues.
> >
> > I have created a simple OSGi enroute project using bndtools in eclipse. I
> > have
> > created a feature.xml file for it and have installed it in karaf. So far so
> > good.
> >
> > The default project that bndtools generates includes a gogo command. It's
> > just a
> > "hello world".
> >
> > When I run this within eclipse under the normal OSGi environment there, I
> > can
> > run my command from the gogo shell.
> >
> > Naively I tried doing the same from within the karaf shell. No joy.
> >
> > I feel I'm back to square one in terms of diagnostics. Tools I used to rely
> > on
> > like "lb", "inspect capabilities" and so on don't exist as far as I can
> > tell, so
> > I can't really tell what might be going wrong.
> > "bundle:diag" shows no unsatisfied requirements though.
> >
> > Should I expect such a command to work?
> >
> > Thanks.


Karaf, gogo shell?

2016-09-19 Thread tom
I'm trying to get started with Karaf, and am having a few issues.

I have created a simple OSGi enroute project using bndtools in eclipse. I have
created a feature.xml file for it and have installed it in karaf. So far so
good. 

The default project that bndtools generates includes a gogo command. It's just a
"hello world".

When I run this within eclipse under the normal OSGi environment there, I can
run my command from the gogo shell. 

Naively I tried doing the same from within the karaf shell. No joy.

I feel I'm back to square one in terms of diagnostics. Tools I used to rely on
like "lb", "inspect capabilities" and so on don't exist as far as I can tell, so
I can't really tell what might be going wrong. 
"bundle:diag" shows no unsatisfied requirements though.

Should I expect such a command to work?

Thanks.


Re: Strange behaviour after installing compendium bundle

2016-09-07 Thread tom leung
After removing the compendium bundle, installing any feature no longer makes
Karaf 'restart' and show the Karaf logo again.


Thanks for your help.








2016-09-07 20:51 GMT+08:00 Jean-Baptiste Onofré <j...@nanthrax.net>:

> Hi Tom,
>
> you don't have to install compendium bundle: it comes with feature when
> required.
>
> Moreover, you should use cmpn 6.0.0 (and not compendium 5.0.0).
>
> Regards
> JB
>
>
> On 09/07/2016 02:37 PM, tom leung wrote:
>
>> After installing the following bundle
>>
>> install mvn:org.osgi/org.osgi.compendium/5.0.0
>>
>>
>> Every time I install or uninstall a feature, Karaf seems to restart
>> something, and show the Karaf logo again as below:
>>
>>
>> karaf@root()>
>> karaf@root()>
>> karaf@root()> install mvn:org.osgi/org.osgi.compendium/5.0.0
>> Bundle ID: 52
>> karaf@root()> feature:install spring/3.1.4.RELEASE
>>
>>         __ __                  ____
>>        / //_/____ __________ _/ __/
>>       / ,<  / __ `/ ___/ __ `/ /_
>>      / /| |/ /_/ / /  / /_/ / __/
>>     /_/ |_|\__,_/_/   \__,_/_/
>>
>>   Apache Karaf (4.0.6)
>>
>> Hit '<tab>' for a list of available commands
>> and '[cmd] --help' for help on a specific command.
>> Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.
>>
>>
>> karaf@root()> feature:uninstall spring/3.1.4.RELEASE
>>
>>         __ __                  ____
>>        / //_/____ __________ _/ __/
>>       / ,<  / __ `/ ___/ __ `/ /_
>>      / /| |/ /_/ / /  / /_/ / __/
>>     /_/ |_|\__,_/_/   \__,_/_/
>>
>>   Apache Karaf (4.0.6)
>>
>> Hit '<tab>' for a list of available commands
>> and '[cmd] --help' for help on a specific command.
>> Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.
>>
>>
>> Sometimes, this 'restart' symptom makes some bundles for an installed
>> feature stay in status 'Installed' as below:
>>
>>
>> 50 | Active   |  20 | 5.0.4 | ASM all classes with debug info
>> 51 | Active   |   5 | 2.4.7 | OPS4J Pax Url - wrap:
>> 61 | Active   |  30 | 1.0.0.6   | Apache ServiceMix :: Bundles ::
>> aopalliance
>> 62 | Installed |  30 | 3.1.4.RELEASE | Spring AOP
>> 63 | Active   |  30 | 3.1.4.RELEASE | Spring ASM
>> 64 | Active   |  30 | 3.1.4.RELEASE | Spring Beans
>> 65 | Active   |  30 | 3.1.4.RELEASE | Spring Context
>> 66 | Active   |  30 | 3.1.4.RELEASE | Spring Context Support
>> 67 | Installed |  30 | 3.1.4.RELEASE | Spring Core
>> 68 | Installed |  30 | 3.1.4.RELEASE | Spring Expression Language
>>
>> I need to start the bundles manually.
>>
>>
>>
>>
>> After uninstalling the compendium bundle, installing any feature won't
>> create the symptom.
>>
>>
>> karaf@root()> uninstall mvn:org.osgi/org.osgi.compendium/5.0.0
>> karaf@root()>
>> karaf@root()> feature:install spring/3.1.4.RELEASE
>> karaf@root()>
>>
>>
>> Any way to solve this symptom?
>>
>>
>>
>> Best Rgds,
>>
>> Tom
>>
>>
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Strange behaviour after installing compendium bundle

2016-09-07 Thread tom leung
After installing the following bundle

install mvn:org.osgi/org.osgi.compendium/5.0.0


Every time I install or uninstall a feature, Karaf seems to restart
something, and show the Karaf logo again as below:


karaf@root()>
karaf@root()>
karaf@root()> install mvn:org.osgi/org.osgi.compendium/5.0.0
Bundle ID: 52
karaf@root()> feature:install spring/3.1.4.RELEASE

        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (4.0.6)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.


karaf@root()> feature:uninstall spring/3.1.4.RELEASE

        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (4.0.6)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.


Sometimes, this 'restart' symptom makes some bundles for an installed
feature stay in status 'Installed' as below:


50 | Active   |  20 | 5.0.4 | ASM all classes with debug info
51 | Active   |   5 | 2.4.7 | OPS4J Pax Url - wrap:
61 | Active   |  30 | 1.0.0.6   | Apache ServiceMix :: Bundles ::
aopalliance
62 | Installed |  30 | 3.1.4.RELEASE | Spring AOP
63 | Active   |  30 | 3.1.4.RELEASE | Spring ASM
64 | Active   |  30 | 3.1.4.RELEASE | Spring Beans
65 | Active   |  30 | 3.1.4.RELEASE | Spring Context
66 | Active   |  30 | 3.1.4.RELEASE | Spring Context Support
67 | Installed |  30 | 3.1.4.RELEASE | Spring Core
68 | Installed |  30 | 3.1.4.RELEASE | Spring Expression Language

I need to start the bundles manually.




After uninstalling the compendium bundle, installing any feature won't
create the symptom.


karaf@root()> uninstall mvn:org.osgi/org.osgi.compendium/5.0.0
karaf@root()>
karaf@root()> feature:install spring/3.1.4.RELEASE
karaf@root()>


Any way to solve this symptom?



Best Rgds,

Tom


Re: subscribe

2016-09-07 Thread Tom Barber
JB... didn't you just reply to the mailing list they aren't subscribed to?
;)

On Wed, Sep 7, 2016 at 8:44 AM, Jean-Baptiste Onofré 
wrote:

> Just send an e-mail to user-subscr...@karaf.apache.org
>
> Regards
> JB
>
>
> On 09/07/2016 09:00 AM, Saishrinivas Polishetti wrote:
>
>> I want to subscribe to the mailing list
>>
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: [ANN] Apache Karaf Cellar 4.0.1 has been released

2016-07-17 Thread Tom Barber
Good work folks! Looking forward to checking it out.


On Sun, Jul 17, 2016 at 9:24 PM, Jean-Baptiste Onofré 
wrote:

> The Karaf team is pleased to announce Cellar 4.0.1 release.
>
> Apache Karaf Cellar 4.0.1 is a maintenance release on the Cellar 4.x series,
> including bug fixes and a couple of new features.
>
> Take a look on the website for details:
>
> http://karaf.apache.org/download.html#cellar-401
>
> Enjoy !
> The Karaf Team
>


Re: How do You use spring in OSGi?

2016-04-22 Thread Tom Barber
Hey Tomasz

As opposed to answering the question at the end of your email, I'll answer
your subject, as the two are subtly different ;)

I migrated a bunch of webapp stuff to Karaf and I spent ages trying to
figure out the Spring stuff, then I suddenly realised: don't use it! :)

If I chat to people migrating webapps these days I try and steer them away
from Spring. Don't get me wrong, in a non-OSGi environment I use Spring all
the time, but in OSGi, CDI and Blueprint have been more than enough for me
and the projects I've worked on.

Especially since Spring stopped supporting its OSGi work it seems to me to be
getting progressively worse. So my own opinion is that even if in the short term
there is a gain from the synergy and crossover between the two apps, at the end
of the day Spring mostly provides wiring, which you can get better support for
out of Blueprint.
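
For anyone weighing the same move, basic Spring-style wiring in Blueprint is
just an XML file under OSGI-INF/blueprint; a minimal sketch (bean, class and
interface names are made up):

  <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <!-- plain bean wiring, roughly what <bean> does in Spring -->
    <bean id="greetingService" class="com.example.internal.GreetingServiceImpl">
      <property name="prefix" value="Hello"/>
    </bean>

    <!-- publish the bean into the OSGi service registry -->
    <service ref="greetingService" interface="com.example.api.GreetingService"/>

    <!-- consume a service published by another bundle -->
    <reference id="dataSource" interface="javax.sql.DataSource"/>

  </blueprint>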

My 2 cents.

Tom

On Fri, Apr 22, 2016 at 2:56 PM, Tomasz Wozniak <tomasz.wozn...@s3group.com>
wrote:

> Hi
>
> We've selected OSGi to modularize our system; that part is obvious. Our
> technological background is JEE with the Spring framework. We were satisfied
> with Spring, but JEE servers are hard to reconfigure, so we put Karaf on the
> table and felt some confusion about the programming model. On the forum there
> were some discussions recommending Spring DM or Blueprint, but Karaf also
> provides Spring framework bundles, which in our opinion don't work in OSGi
> out of the box. We met two major problems:
>
> 1. ClassLoaders: you can set a class loader on the Spring application
> context, but some classes in Spring use org.springframework.util.ClassUtil,
> which returns the ContextClassLoader, and every method is static so there is
> no option to override it.
>
> 2. We weren't able to load Spring schema definitions from the Spring bundles
> (e.g. tx, mvc, context).
>
>
>
> So the question is whether there is some OSGi library which solves those
> problems, or whether the only option is to use Blueprint?
>
>
>
> Moreover, OSGi is new to us and we would like to have a cheap fallback to
> technology which we know (Spring + JEE servers), so we decided to write our
> own library which will allow us to use the newest Spring version in OSGi in
> a transparent manner.
>
> I would like to share the idea behind that library to get opinions from more
> advanced OSGi users; maybe there is a similar solution on the market which
> we've missed.
>
>
>
> We looked at Spring DM, Blueprint and Peaberry; they are designed around
> the same idea: OSGi services are connected based on information given in
> annotations, and this approach ties the code to the OSGi environment. We
> decided instead to operate on connected ServiceFactories which produce the
> service graph the user requires. Such an approach removes OSGi-specific
> annotations from the code; effectively we have two layers, a ServiceFactory
> layer which knows it operates in an OSGi environment, and a service layer
> which is OSGi free.
>
>
>
> In Spring terms the ServiceFactory is a BeanFactory, or an ApplicationContext
> (which extends BeanFactory), so we are putting those into the OSGi service
> registry. Spring application contexts are connected by a parent bean factory
> which searches the OSGi service registry for the required application
> contexts of a given bundle, based on some additional OSGi headers. With such
> a configuration a bundle's application context produces Spring beans/services
> for the implementation which comes from that bundle, or delegates creation to
> the wired application contexts via the parent bean factory.
>
>
>
>
>
> In short, how it works:
>
> 1. Each bundle with an implementation has its own Spring context published in
> the OSGi service registry.
>
> 2. Spring contexts are wired via the parent bean factory; for that purpose we
> introduced two types of bundle:
>
> a. one which contains the bean interfaces, called the API bundle;
>
> b. a second type which contains the implementation and therefore contains a
> Spring context. That bundle states in its OSGi headers which API it
> implements and which APIs it uses. Based on that information the Spring bean
> factories are connected into a dependency graph at the factory level.
>
> 3. The recommended scope for Spring beans is session, request or prototype;
> thanks to that we can have hot deployment of implementations. We can install
> a new implementation bundle and its bean factory will be used by new user
> sessions, while old user sessions keep using the previously installed
> implementation. With singleton scope we are unable to use a newly uploaded
> implementation, as singleton beans are wired during the first request.
>
> 4. An ongoing user session is unchanged by deployment of a new
> implementation bundle; it uses the old on

Re: Working WebSocket example?

2016-04-04 Thread Tom Barber
A while ago Achim and I tried websockets with CXF; it failed miserably, so
I also swapped to Pax Web. That's clearly not to say it doesn't work, but there
were a bunch of "oh we're so close, but it's not working" moments.
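
For reference, the Pax Web sample linked in the reply below is plain JSR-356;
an endpoint is roughly this shape (class name and path are made up):

  import javax.websocket.OnMessage;
  import javax.websocket.OnOpen;
  import javax.websocket.Session;
  import javax.websocket.server.ServerEndpoint;

  // Hypothetical endpoint: echoes text frames back to the client.
  @ServerEndpoint("/echo")
  public class EchoEndpoint {

      @OnOpen
      public void onOpen(Session session) {
          System.out.println("WebSocket opened: " + session.getId());
      }

      @OnMessage
      public String onMessage(String message) {
          // Returning a String sends it back over the same session.
          return "echo: " + message;
      }
  }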

On Mon, Apr 4, 2016 at 11:13 PM, Nick Baker  wrote:

> Thanks Achim,
>
> This is interesting. I should have mentioned that we're using the CXF
> features from Karaf 4.0.3 and Blueprint XML. We're unfortunately not able
> to leverage WAB capabilities.
>
> -Nick
>
> From: Achim Nierbeck 
> Reply-To: "user@karaf.apache.org" 
> Date: Monday, April 4, 2016 at 3:47 PM
> To: "user@karaf.apache.org" 
> Subject: Re: Working WebSocket example?
>
> Hi,
>
> might want to take a look at this one [1].
> It should work right away with K4 and Pax-Web 4.2.x
>
> regards, Achim
>
> [1] -
> https://github.com/ops4j/org.ops4j.pax.web/tree/pax-web-4.2.x/samples/websocket-jsr356
>
>
> 2016-04-04 21:40 GMT+02:00 Nick Baker :
>
>> Hey All,
>>
>> Does anyone have a working project using WebSockets? I was trying a quick
>> prototype repurposing the CXF example but haven't had any luck:
>>
>>
>> https://github.com/apache/cxf/blob/master/distribution/src/main/release/samples/jax_rs/websocket/src/main/java/demo/jaxrs/server/CustomerService.java
>>
>>
>> I keep receiving a 200 from the server, whereas 101 should be returned:
>>
>> [Error] WebSocket connection to
>> 'ws://localhost:8181/cxf/sockets/infinitymachine/clock' failed: Unexpected
>> response code: 200
>>
>> Thanks,
>> Nick
>>
>
>
>
> --
>
> Apache Member
> Apache Karaf  Committer & PMC
> OPS4J Pax Web  Committer &
> Project Lead
> blog 
> Co-Author of Apache Karaf Cookbook 
>
> Software Architect / Project Manager / Scrum Master
>
>


Re: Embedding Karaf in a WAR (Tomcat)

2016-03-23 Thread Tom Barber
Hi Martin

I'm not sure quite how they bootstrap it, but Pentaho BI Server 6.x runs
Tomcat & Karaf as a single service.

Tom

On Wed, Mar 23, 2016 at 11:53 AM, mjelen <martin.je...@isb-ag.de> wrote:

> Dear Karaf developers and fellow users,
>
> due to customer requirements, we have to deliver all our software as web
> archives that are deployable on Tomcat. I'm hoping for this requirement to
> change in the future and I've been reasoning with our customer for over a
> year now about it, but at the moment they won't budge.
>
> We're currently developing a couple of web applications using Karaf and for
> production deployment, we have built a reasonably generic Felix WAR that
> starts the OSGi Container and includes the Felix File Install to read and
> start our application bundles from a custom directory. This approach has
> been working within Tomcat for over a year, but it has several drawbacks.
> Those that I'm aware of are:
>   - The development and production environment are different from each
> other, introducing a new bug source (I can live with the necessary
> difference between embedded Jetty and bridged Tomcat, but I don't want more
> than that).
>   - In production, we lose a lot of Karaf's features (such as the console,
> "feature"s, wrappers, KARs) and have to fall back on the basic File Install
> (no start levels etc.) and have to package the applications differently for
> development/production.
>   - We have to maintain our custom Felix WAR distribution.
>
> To solve these problems, I would ideally have a generic Karaf launcher
> packaged as a WAR with the path to a Karaf home directory a parameter. That
> way I could simply decide whether Karaf gets started by the shell script or
> from my web application WAR. However, I can see several hurdles on the way
> and would be interested to hear if anyone has successfully done this
> before.
>
> Things I'm unsure of right now:
>   - The default Karaf launcher (.bat/.sh scripts) uses the "endorsed
> libraries" mechanism of the JRE to override even classes like
> java.lang.Exception. Even if that works with current Tomcat versions (I
> haven't tried that yet), it seems fragile for the future in the embedded
> scenario and I'm not happy about changing Tomcat's libraries to that
> extent.
>   - Will I have to include any/many libraries in Tomcat's classloader (e.g.
> in my WAR's WEB-INF/lib and modified framework.properties)? I already had
> to
> do that for my Felix WAR with Geronimo JTA-spec and it works fine at the
> moment, but again it makes me nervous regarding future enhancements.
>
> Any help will be appreciated, whether you have a few pointers for a
> solution, some code to share or even a horror story about how it can't be
> done :-).
>
> Kind regards
>
> --
> Martin Jelen
> ISB AG
>
>
>
> --
> View this message in context:
> http://karaf.922171.n3.nabble.com/Embedding-Karaf-in-a-WAR-Tomcat-tp4045931.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>


Re: [ANN] New Karaf website online

2016-02-04 Thread Tom Barber
Haha, very true!

I suspect that word isn't required, and instead of a dual polymorphic
container, what the author tried to say was "Karaf provides polymorphic
container and application bootstrapping paradigms to the Enterprise."

Of course, I still don't know what a single polymorphic container is
either, but I clearly use one! ;)

On Thu, Feb 4, 2016 at 10:15 PM, Raman Gupta  wrote:

> Congrats!
>
> Now, if someone could only explain to me what a "dual polymorphic
> container" is! :-)
>
> Regards,
> Raman
>
>
> On 02/04/2016 10:31 AM, Jean-Baptiste Onofré wrote:
> > Hi all,
> >
> > as you may have seen that the new Karaf website is now online.
> >
> > Don't hesitate to create Jira (with website component) if you see some
> > broken links and rendering issue.
> >
> > Thanks !
> > Regards
> > JB
>


Re: [ANN] New Karaf website online

2016-02-04 Thread Tom Barber
Looks great, the only issue I saw on my phone was that the slider changed so
quickly I couldn't read the slide.

Tom
On 4 Feb 2016 3:47 pm, "Jean-Baptiste Onofré" <j...@nanthrax.net> wrote:

> Thanks for the update Scott, let's see to improve this ;)
>
> Regards
> JB
>
> On 02/04/2016 04:45 PM, Leschke, Scott wrote:
>
>> Looks real good. My only comment would be that the navigation header and
>> possibly the footer, should probably be fixed on the page. That's a nit
>> though.
>>
>> -Original Message-
>> From: Jean-Baptiste Onofré [mailto:j...@nanthrax.net]
>> Sent: Thursday, February 04, 2016 9:32 AM
>> To: user <user@karaf.apache.org>; Karaf Dev <d...@karaf.apache.org>
>> Subject: [ANN] New Karaf website online
>>
>> Hi all,
>>
>> as you may have seen that the new Karaf website is now online.
>>
>> Don't hesitate to create Jira (with website component) if you see some
>> broken links and rendering issue.
>>
>> Thanks !
>> Regards
>> JB
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: How to solve bundle state GracePeriod

2016-01-24 Thread Tom Barber
I had something similar, although I can't see your logs so I can't say if it's
100% the same, but mixing JPA 1.x headers in blueprint with JPA 2.x as the
installed feature will cause you that problem.

Tom

On Sun, Jan 24, 2016 at 7:11 PM, simon <simon-pob...@outlook.com> wrote:

> Hello, I am trying to use Karaf with OpenJPA and Derby (Embedded). However
> my
> module state is always "GracePeriod". The sample I am trying to deploy is
> available here:
>
>
> https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter7/chapter7-recipe4
>
> I build the example with maven. I then open Karaf (please note I tried
> these
> steps with both apache-karaf-3.0.5 and apache-karaf-4.0.4). I then install
> the following features as explained in the book "Apache Karaf Cookbook":
>
> feature:install jpa
> feature:install openjpa/2.2.2
> feature:install transaction
> install -s mvn:org.apache.derby/derbyclient/10.8.1.2
> feature:install jndi
>
> (Note: for Karaf 4.0.4 I removed the versions so the latest is installed, but
> when I try to install transaction I get java.lang.ClassNotFoundException:
> javax.transaction.SystemException not found by org.apache.openjpa, so I
> reverted to using 3.0.5!)
>
> Here I check and everything is nice and state is "Active". I then follow
> the
> last step, to deploy the compiled code:
>
> install -s mvn:com.packt/jpa-only/1.0.0-SNAPSHOT
>
> I check the state and "jpa-only" bundle is in state "GracePeriod"
>
> When I look in the logs I see the following error:
>
> [error]
> JPA-Only Demo Bundle starting...
> 2016-01-24 19:30:03,662 | INFO  | l for user karaf | BlueprintContainerImpl
> | 15 - org.apache.aries.blueprint.core - 1.4.4 | Bundle
> jpa-only/1.0.0.SNAPSHOT is waiting for dependencies
> [(&(&(!(org.apache.aries.jpa.proxy.factory=*))(osgi.unit.name
> =recipe))(objectClass=javax.persistence.EntityManagerFactory))]
> 2016-01-24 19:30:03,678 | WARN  | l for user karaf | container
> | 70 - org.apache.aries.jpa.container - 1.0.2 | There are no providers
> available.
> Bundle ID: 95
> [/error]
>
> I have searched all over the Internet to find a solution but nothing seems
> to work. I found the following which suggest that the is related to Aries
> JPA update.
> https://issues.apache.org/jira/browse/KARAF-3244
>
> However if I try "feature:install jpa/1.0.1" as suggested I get "Can't
> install feature jpa/1.0.1".
>
> I found this:
>
> http://www.liquid-reality.de/display/liquid/2012/01/13/Apache+Karaf+Tutorial+Part+6+-+Database+Access
>
> This does not really suite me because it makes use of Hibernate not
> OpenJPA.
> However I tried it all the same.
>
> On a clean install of 3.0.5 I run the following:
>
> feature:install jdbc
> feature:repo-add
> mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.5.0/xml/features
> feature:install transaction jndi pax-jdbc-h2 pax-jdbc-config
> pax-jdbc-pool-dbcp2 jpa/2.1.0 hibernate/4.3.6.Final
> install -s mvn:net.lr.tutorial.karaf.db/db-examplejpa/1.0-SNAPSHOT
>
> This works. However on 4.0.4 I get the "GracePeriod" message as well.
>
> I wonder if anyone has a working Karaf + OpenJPA + Derby (Embedded)
> example?
> That would really help me out.
>
> I would like to use latest version of all but I read that Blueprint XML is
> no longer supported.
>
>
>
> --
> View this message in context:
> http://karaf.922171.n3.nabble.com/How-to-solve-bundle-state-GracePeriod-tp4044982.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>


Re: How to solve bundle state GracePeriod

2016-01-24 Thread Tom Barber
When I say blueprint, I also mean the persistence.xml, depending on how you're
bootstrapping it; for 2.x I had to switch to this:
https://github.com/OSBI/meteorite-core/blob/master/model/src/main/resources/META-INF/persistence.xml#L3
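
The relevant part is usually the schema version declared in the persistence.xml
header; a JPA 2.0 header looks roughly like this (the unit name and datasource
filter are placeholders):

  <persistence xmlns="http://java.sun.com/xml/ns/persistence"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                                   http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
               version="2.0">
    <persistence-unit name="examplePU" transaction-type="JTA">
      <jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/example)</jta-data-source>
    </persistence-unit>
  </persistence>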

On Sun, Jan 24, 2016 at 7:53 PM, simon <simon-pob...@outlook.com> wrote:

> Hi Tom, thanks for your reply. Ok, downloaded sample aries libraries and
> moved the jpa libraries and transaction to deploy (instead of using
> feature:install) and it marks as "Active".
>
> So am I understanding this right; the problem is the headers in blueprint
> are JPA1 but the bundles installed are JPA2? or I am not on the right road?
>
>
>
> --
> View this message in context:
> http://karaf.922171.n3.nabble.com/How-to-solve-bundle-state-GracePeriod-tp4044982p4044985.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>


Re: Datasource not found

2015-12-28 Thread Tom Barber
Okay

It appears either a) upgrading to JPA 2 or b) switching from blueprint xml
to blueprint annotations for JPA has fixed the issue as the test suite has
now run a few times without freaking out.

Tom

On Mon, Dec 21, 2015 at 2:53 PM, Tom Barber <tom.bar...@meteorite.bi> wrote:

> So I have:
>
> https://github.com/OSBI/meteorite-core/blob/master/karaf/pom.xml#L176
>
> in my custom bundle, and then when you start:
>
> feature:install meteorite-core-features
>
> and all my bundles go to active.
>
> In my itests:
>
>
> https://github.com/OSBI/meteorite-core/blob/master/meteorite-core-itests/src/test/java/bi/meteorite/util/ITestBootstrap.java#L112
>
> install all my bundles then:
>
>
> https://github.com/OSBI/meteorite-core/blob/master/meteorite-core-itests/src/test/java/bi/meteorite/core/security/TestSecurity.java#L72
>
> As far as I can see I'm literally trying to block up everything, yet still
> the data sources don't get resolved, some of the time, but some times they
> do.
>
> The filter timeouts look correct, am I missing anything else I can try?
>
> Thanks
>
> Tom
>
> On Mon, Dec 21, 2015 at 1:44 PM, Achim Nierbeck <bcanh...@googlemail.com>
> wrote:
>
>> if you want to have your datasource injected into your test, you can also
>> add a timeout on that @Filter annotation.
>> That might help already, especially since your datasource is managed
>> service.
>>
>> regards, Achim
>>
>>
>> 2015-12-21 14:29 GMT+01:00 Tom Barber <tom.bar...@meteorite.bi>:
>>
>>> Okay I give up, sometimes pax exam runs okay, sometimes it doesn't, and
>>> I can't work out how to get it to wait for the datasource to become
>>> available.
>>>
>>> I've tried:
>>>
>>>   <reference interface="javax.sql.DataSource" filter="(osgi.jndi.service.name=jdbc/userlist)"
>>> availability="mandatory"/>
>>> in blueprint
>>>
>>> and
>>>
>>>   @Inject
>>>   @Filter("(osgi.jdbc.driver.class=org.h2.Driver)")
>>>   private DataSourceFactory dsf;
>>>
>>> in the test suite and neither consistently run the test suite.
>>>
>>>
>>>
>>> On Mon, Dec 21, 2015 at 12:07 PM, Tom Barber <tom.bar...@meteorite.bi>
>>> wrote:
>>>
>>>> Hmm yeah, although running my Karaf container in pax exam, reverts back
>>>> to throwing the same error, even though running it manually doesn't.
>>>>
>>>> On Sun, Dec 20, 2015 at 9:42 PM, Jean-Baptiste Onofré <j...@nanthrax.net>
>>>> wrote:
>>>>
>>>>> OK, it's what I thought. That's why I added the pax-jdbc feature
>>>>> dependency in jdbc now.
>>>>>
>>>>> Regards
>>>>> JB
>>>>>
>>>>> On 12/20/2015 09:29 PM, Tom Barber wrote:
>>>>>
>>>>>> Looks like I was missing pax-jdbc, although I did have 3 other pax
>>>>>> jdbc
>>>>>> related features. *facepalm*
>>>>>>
>>>>>> On Sun, Dec 20, 2015 at 5:37 PM, Jean-Baptiste Onofré <
>>>>>> j...@nanthrax.net
>>>>>> <mailto:j...@nanthrax.net>> wrote:
>>>>>>
>>>>>> Can you check if pax-jdbc and pax-jdbc-config feature are
>>>>>> installed
>>>>>> (boot features in your case I guess) ?
>>>>>>
>>>>>> Regards
>>>>>> JB
>>>>>>
>>>>>> On 12/20/2015 06:00 PM, Tom Barber wrote:
>>>>>>
>>>>>> Okay so I tried something.
>>>>>>
>>>>>> If I unzip my distro and start it up and run jdbc:ds-list the
>>>>>> data
>>>>>> source is listed. If I then install my persistence bundle it
>>>>>> finds it.
>>>>>>
>>>>>> If I unzip my distro and start up my persistence bundle, it
>>>>>> doesn't
>>>>>> detect the datasource.
>>>>>>
>>>>>> Am I missing some bootstrap?
>>>>>>
>>>>>> Tom
>>>>>>
>>>>>>
>>>>>> On Sun, Dec 20, 2015 at 3:52 PM, Jean-Baptiste Onofré
>>>>>> <j...@nanthrax.net <mailto:j...@nanthrax.net>
>>>>>> <mailto:j...@nanthrax.net <

Re: Datasource not found

2015-12-21 Thread Tom Barber
Okay I give up, sometimes pax exam runs okay, sometimes it doesn't, and I
can't work out how to get it to wait for the datasource to become available.

I've tried:

  <reference interface="javax.sql.DataSource" filter="(osgi.jndi.service.name=jdbc/userlist)" availability="mandatory"/>
in blueprint

and

  @Inject
  @Filter("(osgi.jdbc.driver.class=org.h2.Driver)")
  private DataSourceFactory dsf;

in the test suite and neither consistently run the test suite.



On Mon, Dec 21, 2015 at 12:07 PM, Tom Barber <tom.bar...@meteorite.bi>
wrote:

> Hmm yeah, although running my Karaf container in pax exam, reverts back to
> throwing the same error, even though running it manually doesn't.
>
> On Sun, Dec 20, 2015 at 9:42 PM, Jean-Baptiste Onofré <j...@nanthrax.net>
> wrote:
>
>> OK, it's what I thought. That's why I added the pax-jdbc feature
>> dependency in jdbc now.
>>
>> Regards
>> JB
>>
>> On 12/20/2015 09:29 PM, Tom Barber wrote:
>>
>>> Looks like I was missing pax-jdbc, although I did have 3 other pax jdbc
>>> related features. *facepalm*
>>>
>>> On Sun, Dec 20, 2015 at 5:37 PM, Jean-Baptiste Onofré <j...@nanthrax.net
>>> <mailto:j...@nanthrax.net>> wrote:
>>>
>>> Can you check if pax-jdbc and pax-jdbc-config feature are installed
>>> (boot features in your case I guess) ?
>>>
>>> Regards
>>> JB
>>>
>>> On 12/20/2015 06:00 PM, Tom Barber wrote:
>>>
>>> Okay so I tried something.
>>>
>>> If I unzip my distro and start it up and run jdbc:ds-list the
>>> data
>>>     source is listed. If I then install my persistence bundle it
>>> finds it.
>>>
>>> If I unzip my distro and start up my persistence bundle, it
>>> doesn't
>>> detect the datasource.
>>>
>>> Am I missing some bootstrap?
>>>
>>> Tom
>>>
>>>
>>> On Sun, Dec 20, 2015 at 3:52 PM, Jean-Baptiste Onofré
>>> <j...@nanthrax.net <mailto:j...@nanthrax.net>
>>> <mailto:j...@nanthrax.net <mailto:j...@nanthrax.net>>> wrote:
>>>
>>>  Hi Tomn
>>>
>>>  what did you define in the cfg file ?
>>>  Anything special in the log ?
>>>
>>>  I guess you use Karaf 4.0.2. Did you install the pax-jdbc
>>> feature:
>>>
>>>  feature:install pax-jdbc
>>>
>>>  ?
>>>
>>>  I fixed that in next Karaf version: now the jdbc feature
>>> installs
>>>  pax-jdbc (it wasn't the case before).
>>>
>>>  Regards
>>>  JB
>>>
>>>  On 12/20/2015 11:21 AM, Tom Barber wrote:
>>>
>>>  Hello folks
>>>
>>>  i have a datasource define in
>>> etc/org.ops4j.datasource-users.cfg
>>>
>>>  When I install my feature I have a persistence bundle
>>> that
>>>  starts but I get:
>>>
>>>  apache.aries.jpa.container - 1.0.2 | The DataSource
>>>
>>> osgi:service/javax.sql.DataSource/(osgi.jndi.service.name
>>> <http://osgi.jndi.service.name>
>>>  <http://osgi.jndi.service.name>
>>>  <http://osgi.jndi.service.name>=userlist) required by
>>> bundle
>>>  bi.meteorite.persistence/1.0.0.SNAPSHOT could not be
>>> found.
>>>
>>>  And my bundle hangs in grace period.
>>>
>>>  If I then stop and start karaf all my bundles start.
>>>
>>>  So how do I get it to find the datasource before trying
>>> to start my
>>>  bundle with blueprint?
>>>
>>>  Thanks
>>>
>>>  Tom
>>>
>>>
>>>  --
>>>  Jean-Baptiste Onofré
>>> jbono...@apache.org <mailto:jbono...@apache.org>
>>> <mailto:jbono...@apache.org <mailto:jbono...@apache.org>>
>>> http://blog.nanthrax.net
>>>  Talend - http://www.talend.com
>>>
>>>
>>>
>>> --
>>> Jean-Baptiste Onofré
>>> jbono...@apache.org <mailto:jbono...@apache.org>
>>> http://blog.nanthrax.net
>>> Talend - http://www.talend.com
>>>
>>>
>>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>
>


Re: Datasource not found

2015-12-21 Thread Tom Barber
Hmm yeah, although running my Karaf container in pax exam reverts back to
throwing the same error, even though running it manually doesn't.

On Sun, Dec 20, 2015 at 9:42 PM, Jean-Baptiste Onofré <j...@nanthrax.net>
wrote:

> OK, it's what I thought. That's why I added the pax-jdbc feature
> dependency in jdbc now.
>
> Regards
> JB
>
> On 12/20/2015 09:29 PM, Tom Barber wrote:
>
>> Looks like I was missing pax-jdbc, although I did have 3 other pax jdbc
>> related features. *facepalm*
>>
>> On Sun, Dec 20, 2015 at 5:37 PM, Jean-Baptiste Onofré <j...@nanthrax.net
>> <mailto:j...@nanthrax.net>> wrote:
>>
>> Can you check if pax-jdbc and pax-jdbc-config feature are installed
>> (boot features in your case I guess) ?
>>
>> Regards
>> JB
>>
>> On 12/20/2015 06:00 PM, Tom Barber wrote:
>>
>> Okay so I tried something.
>>
>> If I unzip my distro and start it up and run jdbc:ds-list the data
>> source is listed. If I then install my persistence bundle it
>> finds it.
>>
>>         If I unzip my distro and start up my persistence bundle, it
>> doesn't
>> detect the datasource.
>>
>> Am I missing some bootstrap?
>>
>> Tom
>>
>>
>> On Sun, Dec 20, 2015 at 3:52 PM, Jean-Baptiste Onofré
>> <j...@nanthrax.net <mailto:j...@nanthrax.net>
>> <mailto:j...@nanthrax.net <mailto:j...@nanthrax.net>>> wrote:
>>
>>  Hi Tomn
>>
>>  what did you define in the cfg file ?
>>  Anything special in the log ?
>>
>>  I guess you use Karaf 4.0.2. Did you install the pax-jdbc
>> feature:
>>
>>  feature:install pax-jdbc
>>
>>  ?
>>
>>  I fixed that in next Karaf version: now the jdbc feature
>> installs
>>  pax-jdbc (it wasn't the case before).
>>
>>  Regards
>>  JB
>>
>>  On 12/20/2015 11:21 AM, Tom Barber wrote:
>>
>>  Hello folks
>>
>>  i have a datasource define in
>> etc/org.ops4j.datasource-users.cfg
>>
>>  When I install my feature I have a persistence bundle
>> that
>>  starts but I get:
>>
>>  apache.aries.jpa.container - 1.0.2 | The DataSource
>>
>> osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=userlist) required by
>> bundle
>>  bi.meteorite.persistence/1.0.0.SNAPSHOT could not be
>> found.
>>
>>  And my bundle hangs in grace period.
>>
>>  If I then stop and start karaf all my bundles start.
>>
>>  So how do I get it to find the datasource before trying
>> to start my
>>  bundle with blueprint?
>>
>>  Thanks
>>
>>  Tom
>>
>>
>>  --
>>  Jean-Baptiste Onofré
>> jbono...@apache.org <mailto:jbono...@apache.org>
>> <mailto:jbono...@apache.org <mailto:jbono...@apache.org>>
>> http://blog.nanthrax.net
>>  Talend - http://www.talend.com
>>
>>
>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org <mailto:jbono...@apache.org>
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: Datasource not found

2015-12-21 Thread Tom Barber
So I have:

https://github.com/OSBI/meteorite-core/blob/master/karaf/pom.xml#L176

in my custom bundle, and then when you start:

feature:install meteorite-core-features

and all my bundles go to active.

In my itests:

https://github.com/OSBI/meteorite-core/blob/master/meteorite-core-itests/src/test/java/bi/meteorite/util/ITestBootstrap.java#L112

install all my bundles then:

https://github.com/OSBI/meteorite-core/blob/master/meteorite-core-itests/src/test/java/bi/meteorite/core/security/TestSecurity.java#L72

As far as I can see I'm literally trying to block up everything, yet still
the data sources don't get resolved some of the time, but sometimes they
do.

The filter timeouts look correct, am I missing anything else I can try?

Thanks

Tom
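
(For anyone following along: the injection Achim refers to below looks roughly
like this with pax-exam; the filter string and timeout value are illustrative.

  import javax.inject.Inject;
  import javax.sql.DataSource;
  import org.ops4j.pax.exam.util.Filter;

  // Waits for a matching service before the test method runs.
  @Inject
  @Filter(value = "(osgi.jndi.service.name=jdbc/userlist)", timeout = 30000)
  private DataSource dataSource;

The timeout only makes the test wait for the service to appear; it doesn't
change how the datasource itself gets created.)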

On Mon, Dec 21, 2015 at 1:44 PM, Achim Nierbeck <bcanh...@googlemail.com>
wrote:

> if you want to have your datasource injected into your test, you can also
> add a timeout on that @Filter annotation.
> That might help already, especially since your datasource is managed
> service.
>
> regards, Achim
>
>
> 2015-12-21 14:29 GMT+01:00 Tom Barber <tom.bar...@meteorite.bi>:
>
>> Okay I give up, sometimes pax exam runs okay, sometimes it doesn't, and I
>> can't work out how to get it to wait for the datasource to become available.
>>
>> I've tried:
>>
>>   <reference interface="javax.sql.DataSource" filter="(osgi.jndi.service.name=jdbc/userlist)"
>> availability="mandatory"/>
>> in blueprint
>>
>> and
>>
>>   @Inject
>>   @Filter("(osgi.jdbc.driver.class=org.h2.Driver)")
>>   private DataSourceFactory dsf;
>>
>> in the test suite and neither consistently run the test suite.
>>
>>
>>
>> On Mon, Dec 21, 2015 at 12:07 PM, Tom Barber <tom.bar...@meteorite.bi>
>> wrote:
>>
>>> Hmm yeah, although running my Karaf container in pax exam, reverts back
>>> to throwing the same error, even though running it manually doesn't.
>>>
>>> On Sun, Dec 20, 2015 at 9:42 PM, Jean-Baptiste Onofré <j...@nanthrax.net>
>>> wrote:
>>>
>>>> OK, it's what I thought. That's why I added the pax-jdbc feature
>>>> dependency in jdbc now.
>>>>
>>>> Regards
>>>> JB
>>>>
>>>> On 12/20/2015 09:29 PM, Tom Barber wrote:
>>>>
>>>>> Looks like I was missing pax-jdbc, although I did have 3 other pax jdbc
>>>>> related features. *facepalm*
>>>>>
>>>>> On Sun, Dec 20, 2015 at 5:37 PM, Jean-Baptiste Onofré <j...@nanthrax.net
>>>>> <mailto:j...@nanthrax.net>> wrote:
>>>>>
>>>>> Can you check if pax-jdbc and pax-jdbc-config feature are installed
>>>>> (boot features in your case I guess) ?
>>>>>
>>>>> Regards
>>>>> JB
>>>>>
>>>>> On 12/20/2015 06:00 PM, Tom Barber wrote:
>>>>>
>>>>> Okay so I tried something.
>>>>>
>>>>> If I unzip my distro and start it up and run jdbc:ds-list the
>>>>> data
>>>>> source is listed. If I then install my persistence bundle it
>>>>> finds it.
>>>>>
>>>>> If I unzip my distro and start up my persistence bundle, it
>>>>> doesn't
>>>>> detect the datasource.
>>>>>
>>>>> Am I missing some bootstrap?
>>>>>
>>>>> Tom
>>>>>
>>>>>
>>>>> On Sun, Dec 20, 2015 at 3:52 PM, Jean-Baptiste Onofré
>>>>> <j...@nanthrax.net <mailto:j...@nanthrax.net>
>>>>> <mailto:j...@nanthrax.net <mailto:j...@nanthrax.net>>> wrote:
>>>>>
>>>>>  Hi Tomn
>>>>>
>>>>>  what did you define in the cfg file ?
>>>>>  Anything special in the log ?
>>>>>
>>>>>  I guess you use Karaf 4.0.2. Did you install the pax-jdbc
>>>>> feature:
>>>>>
>>>>>  feature:install pax-jdbc
>>>>>
>>>>>  ?
>>>>>
>>>>>  I fixed that in next Karaf version: now the jdbc feature
>>>>> installs
>>>>>  pax-jdbc (it wasn't the case before).
>>>>>
>>>>>  Regards
>>>>>      

Datasource not found

2015-12-20 Thread Tom Barber
Hello folks

I have a datasource defined in etc/org.ops4j.datasource-users.cfg

When I install my feature I have a persistence bundle that starts but I get:

apache.aries.jpa.container - 1.0.2 | The DataSource
osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=userlist)
required by bundle bi.meteorite.persistence/1.0.0.SNAPSHOT could not be
found.

And my bundle hangs in grace period.

If I then stop and start karaf all my bundles start.

So how do I get it to find the datasource before trying to start my bundle
with blueprint?

Thanks

Tom
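
For reference, with pax-jdbc-config the etc/org.ops4j.datasource-users.cfg file
is typically along these lines (values are made up; check the pax-jdbc
documentation for the exact keys supported by your version):

  osgi.jdbc.driver.name = H2
  url = jdbc:h2:mem:userlist
  user = sa
  password =
  dataSourceName = userlist

pax-jdbc-config then publishes a javax.sql.DataSource service carrying these
properties into the service registry.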


Re: Datasource not found

2015-12-20 Thread Tom Barber
4.0.2 and whatever the latest pax jdbc version is.
On 20 Dec 2015 12:20 pm, "Christian Schneider" <ch...@die-schneider.net>
wrote:

> What versions of karaf and pax-jdbc do you use?
>
> 2015-12-20 11:21 GMT+01:00 Tom Barber <tom.bar...@meteorite.bi>:
>
>> Hello folks
>>
>> i have a datasource define in etc/org.ops4j.datasource-users.cfg
>>
>> When I install my feature I have a persistence bundle that starts but I
>> get:
>>
>> apache.aries.jpa.container - 1.0.2 | The DataSource
>> osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=userlist)
>> required by bundle bi.meteorite.persistence/1.0.0.SNAPSHOT could not be
>> found.
>>
>> And my bundle hangs in grace period.
>>
>> If I then stop and start karaf all my bundles start.
>>
>> So how do I get it to find the datasource before trying to start my
>> bundle with blueprint?
>>
>> Thanks
>>
>> Tom
>>
>
>
>
> --
> --
> Christian Schneider
> http://www.liquid-reality.de
> <https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e46=http%3a%2f%2fwww.liquid-reality.de>
>
> Open Source Architect
> http://www.talend.com
> <https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e46=http%3a%2f%2fwww.talend.com>
>


Re: Datasource not found

2015-12-20 Thread Tom Barber
Okay so I tried something.

If I unzip my distro and start it up and run jdbc:ds-list the data source
is listed. If I then install my persistence bundle it finds it.

If I unzip my distro and start up my persistence bundle, it doesn't detect
the datasource.

Am I missing some bootstrap?

Tom


On Sun, Dec 20, 2015 at 3:52 PM, Jean-Baptiste Onofré <j...@nanthrax.net>
wrote:

> Hi Tomn
>
> what did you define in the cfg file ?
> Anything special in the log ?
>
> I guess you use Karaf 4.0.2. Did you install the pax-jdbc feature:
>
> feature:install pax-jdbc
>
> ?
>
> I fixed that in next Karaf version: now the jdbc feature installs pax-jdbc
> (it wasn't the case before).
>
> Regards
> JB
>
> On 12/20/2015 11:21 AM, Tom Barber wrote:
>
>> Hello folks
>>
>> i have a datasource define in etc/org.ops4j.datasource-users.cfg
>>
>> When I install my feature I have a persistence bundle that starts but I
>> get:
>>
>> apache.aries.jpa.container - 1.0.2 | The DataSource
>> osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=userlist) required by bundle
>> bi.meteorite.persistence/1.0.0.SNAPSHOT could not be found.
>>
>> And my bundle hangs in grace period.
>>
>> If I then stop and start karaf all my bundles start.
>>
>> So how do I get it to find the datasource before trying to start my
>> bundle with blueprint?
>>
>> Thanks
>>
>> Tom
>>
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: Datasource not found

2015-12-20 Thread Tom Barber
Looks like I was missing pax-jdbc, although I did have 3 other pax jdbc
related features. *facepalm*

On Sun, Dec 20, 2015 at 5:37 PM, Jean-Baptiste Onofré <j...@nanthrax.net>
wrote:

> Can you check if pax-jdbc and pax-jdbc-config feature are installed (boot
> features in your case I guess) ?
>
> Regards
> JB
>
> On 12/20/2015 06:00 PM, Tom Barber wrote:
>
>> Okay so I tried something.
>>
>> If I unzip my distro and start it up and run jdbc:ds-list the data
>> source is listed. If I then install my persistence bundle it finds it.
>>
>> If I unzip my distro and start up my persistence bundle, it doesn't
>> detect the datasource.
>>
>> Am I missing some bootstrap?
>>
>> Tom
>>
>>
>> On Sun, Dec 20, 2015 at 3:52 PM, Jean-Baptiste Onofré <j...@nanthrax.net
>> <mailto:j...@nanthrax.net>> wrote:
>>
>> Hi Tomn
>>
>> what did you define in the cfg file ?
>> Anything special in the log ?
>>
>> I guess you use Karaf 4.0.2. Did you install the pax-jdbc feature:
>>
>> feature:install pax-jdbc
>>
>> ?
>>
>> I fixed that in next Karaf version: now the jdbc feature installs
>> pax-jdbc (it wasn't the case before).
>>
>> Regards
>> JB
>>
>> On 12/20/2015 11:21 AM, Tom Barber wrote:
>>
>> Hello folks
>>
>> i have a datasource define in etc/org.ops4j.datasource-users.cfg
>>
>> When I install my feature I have a persistence bundle that
>> starts but I get:
>>
>> apache.aries.jpa.container - 1.0.2 | The DataSource
>> osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=userlist) required by bundle
>>     bi.meteorite.persistence/1.0.0.SNAPSHOT could not be found.
>>
>> And my bundle hangs in grace period.
>>
>> If I then stop and start karaf all my bundles start.
>>
>> So how do I get it to find the datasource before trying to start
>> my
>> bundle with blueprint?
>>
>> Thanks
>>
>> Tom
>>
>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org <mailto:jbono...@apache.org>
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>


Re: Updating Snapshot bundles from installedFeatures

2015-12-17 Thread Tom Barber
Scratch that, found the global update policy thing on jira.

Tom
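
For reference, the setting in question is presumably pax-url-mvn's update
policy, configured in etc/org.ops4j.pax.url.mvn.cfg; something like the line
below tells the mvn: URL handler to re-check remote repositories for changed
SNAPSHOT artifacts instead of always reusing what it already has (treat the
exact property name and value as a sketch and check the pax-url-aether
documentation for your version):

  org.ops4j.pax.url.mvn.globalUpdatePolicy = always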

On Thu, Dec 17, 2015 at 4:39 PM, Tom Barber <tom.bar...@meteorite.bi> wrote:

> Hello folks,
>
> I have a custom distro with an installedFeature (will later be a
> bootfeature), so it ships with the custom distro.
>
> Currently its 1.0-SNAPSHOT, and it exists in distro/system.
>
> After we push a build up to our build server its deployed to our maven
> repo, how do I get karaf to fetch the updated snapshot, because it already
> exists in distro/system it just uses the local one unless I manually delete
> it. Is there an alternative?
>
> Thanks
>
> Tom
>


Updating Snapshot bundles from installedFeatures

2015-12-17 Thread Tom Barber
Hello folks,

I have a custom distro with an installedFeature (will later be a
bootfeature), so it ships with the custom distro.

Currently it's 1.0-SNAPSHOT, and it exists in distro/system.

After we push a build up to our build server it's deployed to our maven
repo. How do I get karaf to fetch the updated snapshot? Because it already
exists in distro/system it just uses the local one unless I manually delete
it. Is there an alternative?

Thanks

Tom

