[VOTE] Apache Karaf Cellar 4.0.3 release

2016-10-19 Thread Jean-Baptiste Onofré

Hi all,

I submit Apache Karaf Cellar 4.0.3 release to your vote.

Release Notes:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12311140&version=12338324

Staging Repository:
https://repository.apache.org/content/repositories/orgapachekaraf-1076/

Git Tag:
cellar-4.0.3

Please vote to approve this release:

[ ] +1 Approve the release
[ ] -1 Don't approve the release (please provide specific comments)

This vote will be open for at least 72 hours.

Thanks,
Regards
JB
--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread David Daniel
This custom resource repository is what I would love to see for bndtools
and karaf integration.  I think bndtools has made a huge leap with its new
maven integration in the bnd pom repository.  I would love to see a feature
repository built on top of that. Ideally, when I run karaf in pax exam, I
could include it as a repo, so my unit tests could share the same repo as
the IDE I debug in.

On Wed, Oct 19, 2016 at 11:17 AM, Christian Schneider <
ch...@die-schneider.net> wrote:

> On 19.10.2016 16:57, Guillaume Nodet wrote:
>
>>
>>
>> Something like:
>>mvn:...
>>mvn:...
>>...
>>
>> That sounds like a feature to me ;-)
>>
> The advantage would be that you can use IDE tooling for poms like M2E. So
> you get search and completion for the artifacts. You get nice introspection
> for the transitive dependencies.
> I think one thing we are missing in karaf feature files is IDE support.
>
>
>> Anyway, a custom osgi resource repository can be easily created for an
>> external file by inheriting
>> org.apache.karaf.features.internal.repository.BaseRepository.  Just add
>> your new repository into org.apache.karaf.features.internal.region.
>> getRepository  and it should work.
>>
> Sounds interesting. I will play with that.
>
>>
>> Again, in the simple cases, if this list of bundles can be generated, it
>> means that the feature itself can be generated, so I'm not sure where the
>> benefit is...
>>
> I have not used the generated features a lot. One thing that might be
> problematic is that a feature file can contain several
> features while a pom can only contain one list of bundles. That is why I
> would treat the list of bundles from the pom rather as a
> backing repository than as a feature.
>
> Again I think the main benefit would come from tooling. Bndtools allows you
> to search in the index and just drag bundles into the requirements.
> Such a mechanism would also be great for features. Unfortunately we do not
> have a lot of good Eclipse RCP or IntelliJ plugin developers at Apache.
> So I guess the dream of having decent IDE tooling is quite far away :-(
>
>
> Christian
>
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Christian Schneider

On 19.10.2016 16:57, Guillaume Nodet wrote:



Something like:
   mvn:...
   mvn:...
   ...

That sounds like a feature to me ;-)
The advantage would be that you can use IDE tooling for poms like M2E. 
So you get search and completion for the artifacts. You get nice 
introspection for the transitive dependencies.

I think one thing we are missing in karaf feature files is IDE support.



Anyway, a custom osgi resource repository can be easily created for an
external file by inheriting
org.apache.karaf.features.internal.repository.BaseRepository.  Just add
your new repository into org.apache.karaf.features.internal.region.
getRepository  and it should work.

Sounds interesting. I will play with that.


Again, in the simple cases, if this list of bundles can be generated, it
means that the feature itself can be generated, so I'm not sure where the
benefit is...
I have not used the generated features a lot. One thing that might be 
problematic is that a feature file can contain several
features while a pom can only contain one list of bundles. That is why I 
would treat the list of bundles from the pom rather as a

backing repository than as a feature.

Again I think the main benefit would come from tooling. Bndtools allows
you to search in the index and just drag bundles into the requirements.
Such a mechanism would also be great for features. Unfortunately we do
not have a lot of good Eclipse RCP or IntelliJ plugin developers at Apache.

So I guess the dream of having decent IDE tooling is quite far away :-(

Christian


--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Guillaume Nodet
2016-10-19 16:13 GMT+02:00 Christian Schneider :

> On 19.10.2016 15:53, Guillaume Nodet wrote:
>
>> 2016-10-19 15:28 GMT+02:00 Christian Schneider :
>>
>> On 19.10.2016 15:22, Guillaume Nodet wrote:
>>>
>>> I disagree.

 All the problems come when you start using maven transitive dependencies
 in
 real projects and hit lots of dependencies which are not OSGi bundles or
 not OSGi ready.  Think about simple examples like spring, or all the
 bundles that we do re-wrap in servicemix.
 I think this idea is nice in theory, but it just does not work in real
 life.

 The idea is to not simply use the transitive dependencies of an existing
>>> project.
>>> Instead you create a pom where you tune the dependencies using excludes
>>> so that only the bundles you really want remain.
>>> I agree that just using any pom will produce bad results.
>>>
>>
>> That sounds like years ago when we migrated from maven 1 to maven 2.
>> Maven 1 did not use transitive dependencies, so suddenly a lot of unwanted
>> artifacts were included in the build and we had to use exclusions and
>> such.  The main difference is that it was quite certain that everyone
>> would migrate from maven 1 to maven 2, so it was a transition.
>> However, I doubt everyone will ever support OSGi, so that's a state, not a
>> transition.
>> I think that's the difference.
>>
>
>> In order to push users in a direction, we need to make sure it scales to
>> real projects.  Nice tools for beginners can become a pain when you
>> realize you need to change to something else because they are too
>> limited.  Maven metadata will never be able to support the metadata that
>> a karaf feature can carry...
>>
> I agree it is quite some work. The good thing is that such repository poms
> can be used by karaf and bndtools.
> I already started providing such poms in Aries RSA and CXF DOSGi. I was
> able to build some nice demos for CXF DOSGi using these
> poms. So I think it can work and at least scale to support CXF.
>
> I think it will not work as well with camel as the nature of camel is to
> integrate many things. So for camel writing features by hand might be the
> better option.
> On the other hand I found that writing a pom for the dependencies you need
> from camel in an actual project is not that difficult. I did this for the
> OSGi chat example
> where I used camel-irc.
>
> I agree with the problem that maven is not the only build system. So
> people who build with gradle will not like the idea of using a pom as a
> repo.
> Maybe we can supply a simple maven plugin that just outputs a list of the
> bundles with maven coordinates. That would also remove the need to fully
> resolve pom files.
> For gradle we could then simply have a gradle plugin that does the same.
>
> So the effort in karaf would only be to create an OBR index on the fly
> from a simple text file with mvn urls.


Something like:
  mvn:...
  mvn:...
  ...

That sounds like a feature to me ;-)
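For illustration, such a list is essentially what a hand-written feature descriptor already expresses. A minimal sketch (all names, versions and coordinates below are made-up placeholders):

```xml
<features name="example-repo" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
  <feature name="example-feature" version="1.0.0">
    <bundle>mvn:org.example/bundle-a/1.0.0</bundle>
    <bundle>mvn:org.example/bundle-b/1.0.0</bundle>
  </feature>
</features>
```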

Anyway, a custom osgi resource repository can be easily created for an
external file by inheriting
org.apache.karaf.features.internal.repository.BaseRepository.  Just add
your new repository into org.apache.karaf.features.internal.region.
getRepository  and it should work.

Again, in the simple cases, if this list of bundles can be generated, it
means that the feature itself can be generated, so I'm not sure where the
benefit is...


>
>
> Christian
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Christian Schneider

On 19.10.2016 15:53, Guillaume Nodet wrote:

2016-10-19 15:28 GMT+02:00 Christian Schneider :


On 19.10.2016 15:22, Guillaume Nodet wrote:


I disagree.

All the problems come when you start using maven transitive dependencies
in
real projects and hit lots of dependencies which are not OSGi bundles or
not OSGi ready.  Think about simple examples like spring, or all the
bundles that we do re-wrap in servicemix.
I think this idea is nice in theory, but it just does not work in real
life.


The idea is to not simply use the transitive dependencies of an existing
project.
Instead you create a pom where you tune the dependencies using excludes
so that only the bundles you really want remain.
I agree that just using any pom will produce bad results.


That sounds like years ago when we migrated from maven 1 to maven 2.  Maven
1 did not use transitive dependencies, so suddenly a lot of unwanted
artifacts were included in the build and we had to use exclusions and
such.  The main difference is that it was quite certain that everyone would
migrate from maven 1 to maven 2, so it was a transition.
However, I doubt everyone will ever support OSGi, so that's a state, not a
transition.
I think that's the difference.



In order to push users in a direction, we need to make sure it scales to
real projects.  Nice tools for beginners can become a pain when you realize
you need to change to something else because they are too limited.  Maven
metadata will never be able to support the metadata that a karaf feature
can carry...
I agree it is quite some work. The good thing is that such repository 
poms can be used by karaf and bndtools.
I already started providing such poms in Aries RSA and CXF DOSGi. I was 
able to build some nice demos for CXF DOSGi using these

poms. So I think it can work and at least scale to support CXF.

I think it will not work as well with camel as the nature of camel is to 
integrate many things. So for camel writing features by hand might be 
the better option.
On the other hand I found that writing a pom for the dependencies you
need from camel in an actual project is not that difficult. I did this
for the OSGi chat example where I used camel-irc.

I agree with the problem that maven is not the only build system. So 
people who build with gradle will not like the idea of using a pom as a 
repo.
Maybe we can supply a simple maven plugin that just outputs a list of 
the bundles with maven coordinates. That would also remove the need to 
fully resolve pom files.

For gradle we could then simply have a gradle plugin that does the same.

So the effort in karaf would only be to create an OBR index on the fly 
from a simple text file with mvn urls.
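As a rough, self-contained sketch of that last idea (class and method names are invented; this shows only the parsing of the plain-text file, not the OBR index generation):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: turn a plain-text list of mvn: URLs (one per line)
// into Maven coordinate arrays, as a first step towards building an
// OBR-style index on the fly. Names here are illustrative only.
public class MvnUrlList {

    // mvn:groupId/artifactId/version[/type[/classifier]]
    public static List<String[]> parse(String text) {
        List<String[]> coords = new ArrayList<>();
        for (String line : text.split("\n")) {
            line = line.trim();
            // skip blank lines, comments and anything that is not a mvn: URL
            if (line.isEmpty() || line.startsWith("#") || !line.startsWith("mvn:")) {
                continue;
            }
            coords.add(line.substring("mvn:".length()).split("/"));
        }
        return coords;
    }

    public static void main(String[] args) {
        String file = "# demo repository\n"
                + "mvn:org.example/bundle-a/1.0.0\n"
                + "mvn:org.example/bundle-b/2.1.0/jar\n";
        for (String[] c : parse(file)) {
            System.out.println(String.join(":", c));
        }
    }
}
```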


Christian

--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Guillaume Nodet
2016-10-19 15:28 GMT+02:00 Christian Schneider :

> On 19.10.2016 15:22, Guillaume Nodet wrote:
>
>> I disagree.
>>
>> All the problems come when you start using maven transitive dependencies
>> in
>> real projects and hit lots of dependencies which are not OSGi bundles or
>> not OSGi ready.  Think about simple examples like spring, or all the
>> bundles that we do re-wrap in servicemix.
>> I think this idea is nice in theory, but it just does not work in real
>> life.
>>
> The idea is to not simply use the transitive dependencies of an existing
> project.
> Instead you create a pom where you tune the dependencies using excludes
> so that only the bundles you really want remain.
> I agree that just using any pom will produce bad results.


That sounds like years ago when we migrated from maven 1 to maven 2.  Maven
1 did not use transitive dependencies, so suddenly a lot of unwanted
artifacts were included in the build and we had to use exclusions and
such.  The main difference is that it was quite certain that everyone would
migrate from maven 1 to maven 2, so it was a transition.
However, I doubt everyone will ever support OSGi, so that's a state, not a
transition.
I think that's the difference.


>
>
>> Also, in the past years, several attempts have been made at using pure
>> maven metadata to do provisioning; they have always failed to my
>> knowledge, so I'd really want to avoid going down that road again.
>>
> The idea is not to use the maven metadata. It should still work like an
> OBR index but the index could be generated on the fly.
>
>>
>> Keep in mind that Karaf always creates OBR repositories on the fly, based
>> on the bundles listed in the features.
>>
> So the main change would be to move the definition of the bundles from the
> feature file to a pom. Eventually this could be the pom where the feature
> file resides.


That could have been an interesting idea when people were working on
Tycho.  That's not the case anymore, and maven is not the only build tool
people use (think gradle, ivy, etc., though I think it's still the most
popular).

I'm really not convinced that it will really help.  For simple projects,
the feature can already be generated automatically, and we may be able to
slightly improve it.  For more complicated cases, projects like cxf, camel,
karaf, pax web, etc., I'm not sure it will be usable.

In order to push users in a direction, we need to make sure it scales to
real projects.  Nice tools for beginners can become a pain when you realize
you need to change to something else because they are too limited.  Maven
metadata will never be able to support the metadata that a karaf feature
can carry...


>
>
> Christian
>
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Guillaume Nodet
I disagree.

All the problems come when you start using maven transitive dependencies in
real projects and hit lots of dependencies which are not OSGi bundles or
not OSGi ready.  Think about simple examples like spring, or all the
bundles that we do re-wrap in servicemix.
I think this idea is nice in theory, but it just does not work in real life.

Also, in the past years, several attempts have been made at using pure maven
metadata to do provisioning; they have always failed to my knowledge, so I'd
really want to avoid going down that road again.

Keep in mind that Karaf always creates OBR repositories on the fly, based on
the bundles listed in the features.

If your maven metadata is clean, you can simply use the generate goal and
you're good to go.  If it's not clean enough, because it contains non-OSGi
dependencies, or because you need additional information, keep your
features hand-written.

As for pure resource repositories, they can already be used and referenced
from feature files using the  elements, so we can
already experiment a bit.


2016-10-19 15:04 GMT+02:00 Christian Schneider :

> I agree. Currently the feature files are too low level. Basically you have
> to list all bundles.
> We need something like bndtools resolution.
>
> So I think a feature should be backed by an index. I have had good
> experiences with pom-based indexes. This is a pom that depends on bundles
> (including transitive deps).
> The goal is to provide a full list of bundles as a basis of the feature
> file.
> See this as an example:
> https://github.com/apache/aries-rsa/blob/master/repository/pom.xml.
> In the pom above an OBR index is created by maven. Eventually this could
> be skipped and karaf could create such an index on the fly as the maven
> index plugin is not working very well for me.
>
> Each feature would then only need to list the top level bundles and other
> requirements.
>
> The karaf maven plugin could then either validate the features against the
> repository (this works if karaf can handle such index-backed feature
> files), or, if karaf cannot do this, create the full list of bundles per
> feature and write "traditional" feature files. The additional resolved
> bundles would then have dependency=true.
>
> Christian
>
>
> On 13.10.2016 11:19, Milen Dyankov wrote:
>
>> I have 2 things to say to that
>> - I agree with all the pain points you've identified (experienced them
>> myself)
>> - I'd prefer to fix things instead of claim them useless due to
>> malfunctioning
>>
>> Perhaps a middle ground would be a good starting point? Something like how
>> bndrun resolution works. I mean:
>>   - developer says - this is what I care to run (perhaps a prototype
>> feature
>> or something ...)
>>   - feature-generate-descriptor takes it from there and fills in the gaps
>>   - developer can change/fix things by tweaking the prototype if not happy
>> with what feature-generate-descriptor did
>>
>> This is just my first thought and I'm pretty sure reality is not that
>> simple. Just wanted to vote against removing it and suggest to start
>> looking for better solution instead.
>>
>> Best,
>> Milen
>>
>>
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Christian Schneider

On 19.10.2016 15:22, Guillaume Nodet wrote:

I disagree.

All the problems come when you start using maven transitive dependencies in
real projects and hit lots of dependencies which are not OSGi bundles or
not OSGi ready.  Think about simple examples like spring, or all the
bundles that we do re-wrap in servicemix.
I think this idea is nice in theory, but it just does not work in real life.
The idea is to not simply use the transitive dependencies of an existing 
project.
Instead you create a pom where you tune the dependencies using excludes 
so that only the bundles you really want remain.

I agree that just using any pom will produce bad results.
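A minimal sketch of such a tuned repository pom (all coordinates and the exclusion below are invented for illustration):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example</groupId>
  <artifactId>example-repository</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <dependencies>
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>some-osgi-bundle</artifactId>
      <version>1.0.0</version>
      <exclusions>
        <!-- exclude transitive dependencies that are not OSGi ready -->
        <exclusion>
          <groupId>commons-logging</groupId>
          <artifactId>commons-logging</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
</project>
```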


Also, in the past years, several attempts have been made at using pure maven
metadata to do provisioning; they have always failed to my knowledge, so I'd
really want to avoid going down that road again.
The idea is not to use the maven metadata. It should still work like an 
OBR index but the index could be generated on the fly.


Keep in mind that Karaf always creates OBR repositories on the fly, based on
the bundles listed in the features.
So the main change would be to move the definition of the bundles from 
the feature file to a pom. Eventually this could be the pom where the 
feature file resides.


Christian

--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Richard Nicholson
Christian - sounds like a good idea ;) 

Paremus have always pursued this sort of approach. See
https://docs.paremus.com/pages/viewpage.action?pageId=8060997 for a simple
example.

Cheers




> On 19 Oct 2016, at 14:04, Christian Schneider  wrote:
> 
> I agree. Currently the feature files are too low level. Basically you have to 
> list all bundles.
> We need something like bndtools resolution.
> 
> So I think a feature should be backed by an index. I have had good 
> experiences with pom-based indexes. This is a pom that depends on bundles 
> (including transitive deps).
> The goal is to provide a full list of bundles as a basis of the feature file.
> See this as an example 
> https://github.com/apache/aries-rsa/blob/master/repository/pom.xml.
> In the pom above an OBR index is created by maven. Eventually this could be 
> skipped and karaf could create such an index on the fly as the maven index 
> plugin is not working very well for me.
> 
> Each feature would then only need to list the top level bundles and other 
> requirements.
> 
> The karaf maven plugin could then either validate the features against the 
> repository (this works if karaf can handle such index-backed feature files), 
> or, if karaf cannot do this, create the full list of bundles per feature and 
> write "traditional" feature files. The additional resolved bundles would 
> then have dependency=true.
> 
> Christian
> 
> On 13.10.2016 11:19, Milen Dyankov wrote:
>> I have 2 things to say to that
>> - I agree with all the pain points you've identified (experienced them
>> myself)
>> - I'd prefer to fix things instead of claim them useless due to
>> malfunctioning
>> 
>> Perhaps a middle ground would be a good starting point? Something like how
>> bndrun resolution works. I mean:
>>  - developer says - this is what I care to run (perhaps a prototype feature
>> or something ...)
>>  - feature-generate-descriptor takes it from there and fills in the gaps
>>  - developer can change/fix things by tweaking the prototype if not happy
>> with what feature-generate-descriptor did
>> 
>> This is just my first thought and I'm pretty sure reality is not that
>> simple. Just wanted to vote against removing it and suggest to start
>> looking for better solution instead.
>> 
>> Best,
>> Milen
>> 
> 
> 
> -- 
> Christian Schneider
> http://www.liquid-reality.de
> 
> Open Source Architect
> http://www.talend.com
> 



Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Christian Schneider
I agree. Currently the feature files are too low level. Basically you 
have to list all bundles.

We need something like bndtools resolution.

So I think a feature should be backed by an index. I have had good 
experiences with pom-based indexes. This is a pom that depends on 
bundles (including transitive deps).
The goal is to provide a full list of bundles as a basis of the feature 
file.
See this as an example 
https://github.com/apache/aries-rsa/blob/master/repository/pom.xml.
In the pom above an OBR index is created by maven. Eventually this could 
be skipped and karaf could create such an index on the fly as the maven 
index plugin is not working very well for me.


Each feature would then only need to list the top level bundles and 
other requirements.


The karaf maven plugin could then either validate the features against
the repository (this works if karaf can handle such index-backed feature
files), or, if karaf cannot do this, create the full list of bundles per
feature and write "traditional" feature files. The additional resolved
bundles would then have dependency=true.
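A generated "traditional" feature file with such resolved bundles might then look roughly like this (coordinates are placeholders):

```xml
<features name="generated-repo" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
  <feature name="example-feature" version="1.0.0">
    <!-- top-level bundle listed by the author -->
    <bundle>mvn:org.example/bundle-a/1.0.0</bundle>
    <!-- transitive bundle filled in by the resolver -->
    <bundle dependency="true">mvn:org.example/bundle-a-dep/1.0.0</bundle>
  </feature>
</features>
```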

Christian

On 13.10.2016 11:19, Milen Dyankov wrote:

I have 2 things to say to that
- I agree with all the pain points you've identified (experienced them
myself)
- I'd prefer to fix things instead of claim them useless due to
malfunctioning

Perhaps a middle ground would be a good starting point? Something like how
bndrun resolution works. I mean:
  - developer says - this is what I care to run (perhaps a prototype feature
or something ...)
  - feature-generate-descriptor takes it from there and fills in the gaps
  - developer can change/fix things by tweaking the prototype if not happy
with what feature-generate-descriptor did

This is just my first thought and I'm pretty sure reality is not that
simple. Just wanted to vote against removing it and suggest to start
looking for better solution instead.

Best,
Milen




--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: [DISCUSS] Gogo commands completion data

2016-10-19 Thread Guillaume Nodet
I really think the scripts are more flexible and powerful than command
metadata.
But I agree having to ship them is a pain, so I hope we can find a better
way in the future.  Your proposal of a known location in the bundle makes
sense.

2016-10-19 13:51 GMT+02:00 Christian Schneider :

> The command metadata service API of course would need to live at felix
> gogo. So the plain gogo shell could also make use of it.
> We could then provide an extender at karaf that creates these services out
> of the karaf specific commands.
>
> The problem I see with the scripts is how to add them to the karaf shell
> in a modular way. After all you want to just install the bundle with the
> command.
> You do not want to additionally have to install a shell script at some
> other place. That is where the magic location for a script file could help
> in the short run.
>
> Christian
>
> On 19.10.2016 13:43, Guillaume Nodet wrote:
>
>> I thought about that, but then, the bundles would be tied to Karaf while
>> the goal is to support commands from projects which are not related to
>> karaf.  Else, we could just force them to use our own command api if they
>> want to support Karaf ;-)
>>
>> Also remember that scripting offers very nice additional features : it's
>> dynamic.  A static completion system would not be able to offer the same
>> level, so you'd have to offer an api, and then... well, it's not
>> independent anymore, so you've lost the benefit.
>>
>> The fact that the scripts are external is not necessarily a problem.
>> That's the way your unix shell completion works btw ;-)
>>
>>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread James Carman
I am +1 on removing it, especially if nobody wants to maintain it.  I tried
to use it at one point and it just never really worked well.  Hand-crafted
feature files are always the best option, IMHO.  It might be nice to have
a Maven archetype or something that would generate a "features module" from
scratch, to give folks a starting point.

On Thu, Oct 13, 2016 at 5:08 AM Guillaume Nodet  wrote:

> The feature packaging is a nice thing, as it allows automatic attachment of
> the feature file.
> However, it always uses the feature-generate-descriptor, which produces a
> lot of weird results.
> Afaik, the feature packaging is not much used by the projects I've seen,
> such as pax projects, camel, cxf, and even karaf itself (including
> decanter, cellar, karaf container...).
>
> I think part of the problem comes from the feature descriptor generation,
> which is difficult to control.  I have always found it much easier to
> simply write the feature manually.
> Moreover, the generation process rewrites the xml entirely, so any xml
> comments or license headers are lost.
>
> Overall, I'm not sure that it really makes our users' lives easier.
>
> So I'd like to propose to get rid of the feature-generate-descriptor from
> inside the feature packaging and replace it with the verify goal to
> validate the hand-written features instead.
>
> Thoughts ?
>
> --
> 
> Guillaume Nodet
> 
> Red Hat, Open Source Integration
>
> Email: gno...@redhat.com
> Web: http://fusesource.com
> Blog: http://gnodet.blogspot.com/
>


Re: [DISCUSS] Feature package, feature generation and validation

2016-10-19 Thread Guillaume Nodet
So I'll go ahead with the following:
  * change the verify goal to be a default goal for the feature packaging
  * change the feature-generate-descriptor to be a non default goal for the
feature packaging

The benefit is that people who want to hand-write features will be able to
use the feature packaging, while those who want to use the generator will
have to add this goal to their build.
I've raised KARAF-4787 for that.
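For projects with hand-written features, wiring the verify goal in explicitly might look something like the sketch below (plugin version and configuration parameters are indicative only, not taken from KARAF-4787):

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>4.0.7</version>
  <executions>
    <execution>
      <id>verify-features</id>
      <goals>
        <goal>verify</goal>
      </goals>
      <configuration>
        <descriptors>
          <descriptor>file:${project.build.directory}/feature/feature.xml</descriptor>
        </descriptors>
      </configuration>
    </execution>
  </executions>
</plugin>
```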


2016-10-13 11:07 GMT+02:00 Guillaume Nodet :

> The feature packaging is a nice thing, as it allows automatic attachment
> of the feature file.
> However, it always uses the feature-generate-descriptor, which produces a
> lot of weird results.
> Afaik, the feature packaging is not much used by the projects I've seen,
> such as pax projects, camel, cxf, and even karaf itself (including
> decanter, cellar, karaf container...).
>
> I think part of the problem comes from the feature descriptor generation,
> which is difficult to control.  I have always found it much easier to
> simply write the feature manually.
> Moreover, the generation process rewrites the xml entirely, so any xml
> comments or license headers are lost.
>
> Overall, I'm not sure that it really makes our users' lives easier.
>
> So I'd like to propose to get rid of the feature-generate-descriptor from
> inside the feature packaging and replace it with the verify goal to
> validate the hand-written features instead.
>
> Thoughts ?
>
> --
> 
> Guillaume Nodet
> 
> Red Hat, Open Source Integration
>
> Email: gno...@redhat.com
> Web: http://fusesource.com
> Blog: http://gnodet.blogspot.com/
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: [DISCUSS] Gogo commands completion data

2016-10-19 Thread Christian Schneider
The command metadata service API of course would need to live at felix 
gogo. So the plain gogo shell could also make use of it.
We could then provide an extender at karaf that creates these services 
out of the karaf specific commands.


The problem I see with the scripts is how to add them to the karaf shell 
in a modular way. After all you want to just install the bundle with the 
command.
You do not want to additionally have to install a shell script at some 
other place. That is where the magic location for a script file could 
help in the short run.


Christian

On 19.10.2016 13:43, Guillaume Nodet wrote:

I thought about that, but then, the bundles would be tied to Karaf while
the goal is to support commands from projects which are not related to
karaf.  Else, we could just force them to use our own command api if they
want to support Karaf ;-)

Also remember that scripting offers very nice additional features : it's
dynamic.  A static completion system would not be able to offer the same
level, so you'd have to offer an api, and then... well, it's not
independent anymore, so you've lost the benefit.

The fact that the scripts are external is not necessarily a problem.
That's the way your unix shell completion works btw ;-)



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: [DISCUSS] Gogo commands completion data

2016-10-19 Thread Christian Schneider
The command metadata service API of course would need to live at felix 
gogo. So the plain gogo shell could also make use of it.
We could then provide an extender at karaf that creates these services 
out of the karaf specific commands.


The problem I see with the scripts is how to add them to the karaf shell 
in a modular way. After all you want to just install the bundle with the 
command.
You do not want to additionally have to install a shell script at some 
other place. That is where the magic location for a script file could 
help for the short run.


Christian

On 19.10.2016 13:43, Guillaume Nodet wrote:

I thought about that, but then, the bundles would be tied to Karaf while
the goal is to support commands from projects which are not related to
karaf.  Else, we could just force them to use our own command api if they
want to support Karaf ;-)

Also remember that scripting offers a very nice additional feature: it's
dynamic.  A static completion system would not be able to offer the same
level, so you'd have to offer an api, and then... well, it's not
independent anymore, so you've lost the benefit.

The fact that the scripts are external is not necessarily a problem.
That's the way your unix shell completion works btw ;-)



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: [DISCUSS] Gogo commands completion data

2016-10-19 Thread Guillaume Nodet
I thought about that, but then, the bundles would be tied to Karaf while
the goal is to support commands from projects which are not related to
karaf.  Else, we could just force them to use our own command api if they
want to support Karaf ;-)

Also remember that scripting offers a very nice additional feature: it's
dynamic.  A static completion system would not be able to offer the same
level, so you'd have to offer an api, and then... well, it's not
independent anymore, so you've lost the benefit.

The fact that the scripts are external is not necessarily a problem.
That's the way your unix shell completion works btw ;-)

2016-10-19 13:14 GMT+02:00 Christian Schneider :

> Not sure if the script approach is good for the long run.
>
> What we could do with it though is to provide a script in a magic location
> in each bundle.
> The shell could then dynamically add and remove the scripts as the bundles
> are started / stopped.
> Does that make sense ?
>
> For the long run I think we could have a service that provides this
> information. Kind of a command metadata service.
> A bundle could either offer such a service manually or it could be created
> by an extender from annotation data.
> Such a solution could then also cover the plain karaf commands.
>
> Christian
>
> On 12.10.2016 17:57, Guillaume Nodet wrote:
>
>> I'm working on trying to nicely integrate gogo commands.
>> The new gogo-jline bundle has a very nice way to allow external
>> configuration for command completion. For example, one needs to execute the
>> script at https://gist.github.com/gnodet/18de68d57fc959efb7f9e4766415ff5e
>> to add full completion to the Karaf shell once you have the scr bundle
>> installed (it always provides gogo commands).  Other examples are
>> available at
>> https://github.com/apache/felix/blob/trunk/gogo/jline/src/main/resources/gosh_profile
>>
>> The question is : how to provide such a script.
>> One possibility would be to have a dedicated folder such as etc/scripts/
>> where all scripts would be loaded when a session is started. We could then
>> reference those files in features so that they are copied when features
>> are
>> installed.
>> This would allow leveraging the  feature xml element.
>>
>> Do you guys have better ideas ?
>>
>>
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: [DISCUSS] Gogo commands completion data

2016-10-19 Thread Christian Schneider

Not sure if the script approach is good for the long run.

What we could do with it though is to provide a script in a magic 
location in each bundle.
The shell could then dynamically add and remove the scripts as the 
bundles are started / stopped.

Does that make sense ?
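The bookkeeping behind that idea can be modeled in a few lines of plain Java (a toy stand-in for what a real BundleTracker-based extender would do; the magic-location path named in the comments is hypothetical):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model: scripts found at a magic location inside each bundle are
// added to the shell when the bundle starts and removed when it stops.
public class ScriptExtenderDemo {

    // bundle id -> completion scripts contributed by that bundle
    static final Map<Long, List<String>> activeScripts = new HashMap<>();

    static void bundleStarted(long bundleId, List<String> scriptsInBundle) {
        // A real extender would scan the bundle's entries here,
        // e.g. under a hypothetical OSGI-INF/shell/ location.
        activeScripts.put(bundleId, scriptsInBundle);
    }

    static void bundleStopped(long bundleId) {
        activeScripts.remove(bundleId);
    }

    static int scriptCount() {
        return activeScripts.values().stream().mapToInt(List::size).sum();
    }

    public static void main(String[] args) {
        bundleStarted(1L, Arrays.asList("scr.completion.script"));
        System.out.println(scriptCount()); // prints 1
        bundleStopped(1L);
        System.out.println(scriptCount()); // prints 0
    }
}
```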

For the long run I think we could have a service that provides this 
information. Kind of a command metadata service.
A bundle could either offer such a service manually or it could be 
created by an extender from annotation data.

Such a solution could then also cover the plain karaf commands.

Christian

On 12.10.2016 17:57, Guillaume Nodet wrote:

I'm working on trying to nicely integrate gogo commands.
The new gogo-jline bundle has a very nice way to allow external
configuration for command completion. For example, one needs to execute the
script at https://gist.github.com/gnodet/18de68d57fc959efb7f9e4766415ff5e
to add full completion to the Karaf shell once you have the scr bundle
installed (it always provides gogo commands).  Other examples are available
at
https://github.com/apache/felix/blob/trunk/gogo/jline/src/main/resources/gosh_profile

The question is : how to provide such a script.
One possibility would be to have a dedicated folder such as etc/scripts/
where all scripts would be loaded when a session is started. We could then
reference those files in features so that they are copied when features are
installed.
This would allow leveraging the  feature xml element.

Do you guys have better ideas ?
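For illustration, the etc/scripts/ idea above could lean on the existing <configfile> element of Karaf feature descriptors, which copies a resource to a target path at install time - a minimal sketch with hypothetical artifact coordinates:

```xml
<feature name="scr-completion" version="1.0.0">
  <!-- hypothetical coordinates: copies the completion script on install -->
  <configfile finalname="etc/scripts/scr.completion.script">
    mvn:org.example/scr-completion/1.0.0/script
  </configfile>
</feature>
```

Whether such files should also be removed again on feature uninstall would be part of the design.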




--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: [DISCUSS] Gogo commands completion data

2016-10-19 Thread Jean-Baptiste Onofré

Awesome.

I will take a look. Let me know for the scr commands, I can tackle that.

Regards
JB

On 10/19/2016 11:30 AM, Guillaume Nodet wrote:

I have committed my changes.
I still need to replace the karaf scr commands  with the native ones and
write the completion data for it.

2016-10-12 17:57 GMT+02:00 Guillaume Nodet :


I'm working on trying to nicely integrate gogo commands.
The new gogo-jline bundle has a very nice way to allow external
configuration for command completion. For example, one needs to execute the
script at https://gist.github.com/gnodet/18de68d57fc959efb7f9e4766415ff5e
to add full completion to the Karaf shell once you have the scr bundle
installed (it always provides gogo commands).  Other examples are available
at https://github.com/apache/felix/blob/trunk/gogo/jline/src/main/resources/gosh_profile

The question is : how to provide such a script.
One possibility would be to have a dedicated folder such as etc/scripts/
where all scripts would be loaded when a session is started. We could then
reference those files in features so that they are copied when features are
installed.
This would allow leveraging the  feature xml element.

Do you guys have better ideas ?

--

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/







--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: [DISCUSS] Gogo commands completion data

2016-10-19 Thread Guillaume Nodet
I have committed my changes.
I still need to replace the karaf scr commands  with the native ones and
write the completion data for it.

2016-10-12 17:57 GMT+02:00 Guillaume Nodet :

> I'm working on trying to nicely integrate gogo commands.
> The new gogo-jline bundle has a very nice way to allow external
> configuration for command completion. For example, one needs to execute the
> script at https://gist.github.com/gnodet/18de68d57fc959efb7f9e4766415ff5e
> to add full completion to the Karaf shell once you have the scr bundle
> installed (it always provides gogo commands).  Other examples are available
> at https://github.com/apache/felix/blob/trunk/gogo/jline/src/main/resources/gosh_profile
>
> The question is : how to provide such a script.
> One possibility would be to have a dedicated folder such as etc/scripts/
> where all scripts would be loaded when a session is started. We could then
> reference those files in features so that they are copied when features are
> installed.
> This would allow leveraging the  feature xml element.
>
> Do you guys have better ideas ?
>
> --
> 
> Guillaume Nodet
> 
> Red Hat, Open Source Integration
>
> Email: gno...@redhat.com
> Web: http://fusesource.com
> Blog: http://gnodet.blogspot.com/
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Karaf feature service and AbstractRetryableDownloadTask

2016-10-19 Thread Grzegorz Grzybek
Hello

Recently I worked a bit on pax-url-aether and, among other fixes (like
timeout configuration or improved error reporting), I added a 2nd variant of
the org.ops4j.pax.url.mvn.MavenResolver.resolve() methods - an exception can
be passed as an additional argument which should be treated as a _hint_.
This _hint_ may be used to tell the resolver that it's not the first time
we're trying to resolve a Maven artifact, and the passed exception _may_ be
used to optimize the 2nd and further resolutions.

Currently pax-url-aether 2.5.0 does some checks - is it (or is it caused
by) an AetherException? Is the exception related to the currently resolved
artifact? If these conditions match, an optimization is applied to narrow
the list of remote repositories being queried (again).

Before pax-url-aether 2.5.0, remote repositories were normally taken from
the org.ops4j.pax.url.mvn.repositories property (possibly adjusted with
repos from the active profiles of the current Maven settings.xml). But if
the AetherException says that some of the repositories returned a simple
ArtifactNotFoundException, there's a good chance that the artifact won't
appear in such a repository in the next few seconds. So such a repository
is discarded.
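The narrowing step can be illustrated with a small standalone model (the real logic lives in pax-url-aether's resolver; all class and method names below are made up for the sketch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustration of the repository-narrowing optimization: a failure from a
// previous resolution attempt is passed back as a hint, and repositories
// that already answered "artifact not found" are skipped on the retry.
public class HintedResolverDemo {

    // Simplified stand-in for a per-repository ArtifactNotFoundException.
    static class NotFoundHint extends Exception {
        final Set<String> reposWithoutArtifact;
        NotFoundHint(Set<String> repos) { this.reposWithoutArtifact = repos; }
    }

    static List<String> reposToQuery(List<String> configured, Exception previous) {
        if (!(previous instanceof NotFoundHint)) {
            return configured; // no usable hint: query everything again
        }
        Set<String> skip = ((NotFoundHint) previous).reposWithoutArtifact;
        List<String> narrowed = new ArrayList<>();
        for (String repo : configured) {
            if (!skip.contains(repo)) {
                narrowed.add(repo);
            }
        }
        return narrowed;
    }

    public static void main(String[] args) {
        List<String> configured = Arrays.asList("central", "company-releases");
        Exception hint = new NotFoundHint(new HashSet<>(Arrays.asList("central")));
        System.out.println(reposToQuery(configured, hint)); // prints [company-releases]
    }
}
```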

Without going into details, here's a @Test that presents this behavior:
https://github.com/ops4j/org.ops4j.pax.url/blob/url-2.5.0/pax-url-aether/src/test/java/org/ops4j/pax/url/mvn/internal/AetherResolutionWithHintsTest.java#L83.

OK. Back to Karaf. Karaf 4 uses AbstractRetryableDownloadTask and a
Maven-related subclass. In this line, when an exception occurs, the download
is retried - without checking if there's a chance of success. The retry
mechanism was designed with OSGi dynamics in mind and the assumption that
if something doesn't work now, it may work in a few seconds.

But if the download fails because we're getting "connection refused" or "no
route to host", I don't think it's wise to repeat the download process.

I created JIRA issue https://issues.apache.org/jira/browse/KARAF-4773 and
attached a PR, where a download attempt is retried based on some criteria.
And MavenDownloadTask (Karaf) asks MavenResolver (pax-url-aether 2.5.0)
whether the exception is actually retryable.

pax-url-aether's AetherBasedResolver assumes that errors like "socket
timeout" are retryable and gives an abstract hint called "RetryChance.LOW".
Karaf's AbstractRetryableDownloadTask then knows that it may try repeating
the download a few times, but not the default "9" times.
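The resulting policy might be sketched like this (the RetryChance name comes from the mail; the attempt budgets below are purely illustrative numbers, not the actual Karaf or pax-url-aether values):

```java
import java.util.function.IntPredicate;

// Sketch of a retry budget derived from how retryable the failure looks,
// instead of a fixed attempt count for every kind of error.
public class RetryPolicyDemo {

    enum RetryChance { NEVER, LOW, HIGH }

    // Illustrative mapping from chance to attempt budget.
    static int maxAttempts(RetryChance chance) {
        switch (chance) {
            case NEVER: return 1;  // e.g. connection refused: don't retry
            case LOW:   return 3;  // e.g. socket timeout: a few retries
            default:    return 9;  // transient/unknown: full retry budget
        }
    }

    // Runs the task up to the budget; returns the number of attempts made.
    static int download(RetryChance chance, IntPredicate succeedsOnAttempt) {
        int budget = maxAttempts(chance);
        for (int attempt = 1; attempt <= budget; attempt++) {
            if (succeedsOnAttempt.test(attempt)) {
                return attempt;
            }
        }
        return budget;
    }

    public static void main(String[] args) {
        System.out.println(download(RetryChance.NEVER, a -> false)); // prints 1
        System.out.println(download(RetryChance.LOW, a -> a == 2));  // prints 2
    }
}
```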

There are tests that show the optimized network access, and the new timeout
configuration from pax-url-aether 2.5.0 ensures that timeouts are really
enforced. Thus Maven handling in Karaf 4 should actually improve.

Please add comments if needed or if I missed something in my design.

regards
Grzegorz Grzybek